ANN FISHER, Ph.D., senior research associate, Department of Agricultural Economics and Rural Sociology, Pennsylvania State University, University Park, Pennsylvania.
Evaluation can reveal whether risk communication is effective. But defining effectiveness requires careful specification of what the initiator wants to accomplish with a risk communication activity. Techniques for measuring effectiveness differ when the goal is to (1) raise awareness, (2) support informed decisions, or (3) change behavior. Good measures of effectiveness can be used early to revise materials, delivery channels, and strategies; midstream in the process to ensure that target audiences are receiving materials; and later to improve similar risk communication activities by learning from previous mistakes. Dr. Fisher elaborated on these ideas to illustrate why the benefits are worth the effort of evaluating risk communication, and to suggest how evaluation can be useful even with limited resources.
Evaluation during the communications planning stage requires data for effective program design. Essential data include health, environmental, and lifestyle needs and concerns; risk management needs and concerns; and how to meet those needs and concerns. Which data are appropriate depends on the nature of the risk communication problem.
Dr. Fisher mentioned some of the issues that affect evaluation, including individual versus community decisions. Did the process work? Were people satisfied with it? What if they made informed choices that conflict with the public health perspective? A natural tension exists between the risk communicator's goal of protecting public health and the public's right to make informed choices that may not be consistent with that goal.
To illustrate this point, Dr. Fisher used the issue of radon testing and abatement. The general risk characteristics of radon have kept it a relatively obscure public health issue. Several of radon's risk characteristics make it unlikely that the public will pay much attention: this colorless, odorless gas provides no perceptual cues; it occurs naturally, so there is no villain to blame; people view their radon exposure experience as benign; because their home is a haven, they have difficulty believing it could be a threat; the health effects are delayed and occur singly, without drama; and because lung cancer is identified as having multiple causes, it is impossible to prove that radon caused an individual's cancer. Radon risk reduction is a clear example of a health information program that may have substantial success from a marketing perspective, but fail from a public health perspective. Dr. Fisher referred the workshop participants to a report, Evaluation and Effective Risk Communication Workshop Proceedings, which provides a more detailed description of evaluation issues. An edited version of the article is in Appendix 3.
Research shows that evaluation can lead to more effective risk communication. Formative evaluation involves pretesting and pilot testing before full production of a risk communication activity, to provide feedback on, and allow modification of, basic concepts, materials design, and delivery channels. While a risk communication program is being implemented, process evaluation identifies and measures its strengths and weaknesses. In addition to providing accountability, process evaluation allows midstream modifications to ensure that intended messages are reaching the target audiences.
Outcome and impact evaluation identify what the audience actually received, what they learned, and whether they changed the way they feel, think, or behave. Outcome and impact evaluation can answer the crucial question: Did the risk communication program achieve its goals?
Given the potential for risk communication evaluation to document and improve agencies' effectiveness in protecting public health, why is it used so seldom? And what practical steps would make it easier to conduct and benefit from risk communication evaluations?
The stakeholders for a risk communication activity include government agencies, corporations, unions, media, scientists, public interest groups, diverse communities, and individual citizens. Because these stakeholders (and the units within them) have varying and often conflicting needs, interests, and perspectives, an agency should choose and be explicit about its evaluation perspective. This perspective will be conditioned by the agency's goals. Lofty goals, such as raising the public's (1) awareness and ability to make informed decisions, (2) willingness to seek help or information, (3) willingness to take action or change behavioral patterns, or (4) ability to participate effectively in the decision process, might have to accommodate practical goals such as personal and organizational survival, damage control, overcoming opposition to decisions, achieving informed consent, or enhancing public participation. For example, there can be divergence between what we want people to do and what we are willing to say; public relations staffs tend to stress the agency's image rather than emphasizing a risk communication program message about how people can reduce their own risk.
Once an agency has agreed on its risk communication goals, which of these goals to assess, and what success measures will be used, the resources issue rears its ugly head. Managers often believe that evaluation (of any kind, not just for risk communication) is prohibitively expensive. Their inclination is to produce and distribute more materials, without recognizing that the response to an additional "dose" of the same risk communication program usually is less than linear. They worry that evaluation has the potential to show failure, potentially overwhelming its insights for improving current and future risk communication efforts. This is reinforced by the limited success of past risk communication efforts. Managers of mandatory programs with (assumed) high compliance have yet to learn that the need to compete with other messages for the audience's attention means that a marketing perspective, where increases of a few percentage points in market share are a big success, is more appropriate than the traditional public health perspective.
Overcoming these barriers to evaluation is feasible, especially with early planning. Managers will have an incentive to fund evaluations if they understand that the results can become part of their overall strategy for doing a better job.
Evaluation can show which risk communication activities are most effective, rather than simply justify what was done. Evaluation can confirm that changes are not needed because the risk communication program is accomplishing its objectives, measure long-term changes in target audience awareness and behavior, and contribute to the evolution of agency attitudes and expectations (e.g., agency recognition that risk communication impacts are likely to be modest because society already has moderated risks). Documenting and sharing information on successes will increase the agency's credibility.
Managers and staff should expect evaluation of risk communication, know that changes can be made, and know that funds have been allocated for evaluation. The evaluator is more likely to get the go-ahead if he or she understands how "the big picture" differs for the manager and the public relations staff. This enables the evaluator to be insightful and visible by determining what questions need answering and concentrating presentations on data and answers.
There is no such thing as a free evaluation, but quick-and-easy methods often yield answers that are good enough for deciding the next step. The evaluation should be tailored to the scope of the risk communication effort, as indicated by the level of resources used for communicating and the seriousness of the risk (if the communication fails).
The bottom line comes from applying what is learned in an evaluation to improve current and future risk communication efforts.
Internal and external sharing of what is learned in an evaluation helps in improving similar risk communication activities and in building the agency's credibility.