APPENDIX 3

Evaluation and Effective Risk Communication Workshop Proceedings

Editors: Ann Fisher, Maria Pavlova, and Vincent Covello (Source: Interagency Task Force on Environmental Cancer and Heart and Lung Disease, Committee on Public Education and Communication, January 1991, pp. xvi-xvii)

Evaluation in the context of risk communication is defined as any purposeful effort to determine the effectiveness of risk communication programs. According to this definition, evaluation encompasses a wide range of activities, from diagnosing risk communication problems to measuring and analyzing program effects and outcomes.

Why Evaluate?

One fundamental question dominated initial workshop discussions: Why is it important to evaluate risk communication programs? In response to this question, participants agreed that evaluation is critical to effective risk communication; without evaluation, there is no way to determine whether risk communication activities are achieving or have achieved their objectives.

Evaluation should be an integral part of the risk communication process. When carried out at each stage of program development, evaluation provides information critical to program effectiveness. For example, evaluation provides essential planning information and program direction, and it can help demonstrate program accomplishments. Most fundamentally, evaluation can signal the need for timely modifications.

When viewed in this way, evaluation has much to offer organizations that have risk communication responsibilities. During the planning and preproduction phase, evaluation can provide data critical to effective program design, including information about health, environment, and lifestyle needs and concerns; information about risk management needs and concerns; and information about how to meet those needs and concerns. Through surveys, questionnaires, focus groups, and other research tools, evaluation can be used to (1) identify stakeholders and other relevant audiences, (2) assess audience opinions or reactions, (3) find out what people see as important problems, (4) find out what issues and events people are aware of, and (5) find out how people react to different sources of information. Pretesting and pilot testing can be used to (1) forecast the effectiveness and feasibility of alternative risk communication activities, (2) determine the kinds of information needed by target audiences to understand risk communication material, (3) examine how people process and interpret risk communication information, and (4) obtain feedback on draft materials. Estimates of the effectiveness of alternative risk communication activities can be combined with information about their costs to determine which risk communication strategy will be most cost effective.
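
To illustrate how such a cost-effectiveness comparison might be carried out, consider the following minimal sketch in Python. All strategy names and figures are hypothetical, and "effectiveness" is reduced here to the number of people reached; in practice, effectiveness estimates would come from pretesting or pilot studies and costs from program budgets.

    # Illustrative sketch: ranking alternative risk communication strategies
    # by cost-effectiveness. All figures below are hypothetical.
    strategies = {
        "community meetings": {"people_reached": 2_000, "cost": 10_000.0},
        "direct mail": {"people_reached": 15_000, "cost": 30_000.0},
        "radio spots": {"people_reached": 50_000, "cost": 75_000.0},
    }

    def cost_per_person(s):
        """Dollars spent per person reached (lower is more cost effective)."""
        return s["cost"] / s["people_reached"]

    # Rank the alternatives from most to least cost effective.
    for name, data in sorted(strategies.items(), key=lambda kv: cost_per_person(kv[1])):
        print(f"{name}: ${cost_per_person(data):.2f} per person reached")

In a real evaluation, reach alone would be too crude a measure of effectiveness; estimates of comprehension, acceptance, or behavior change per dollar could be substituted within the same framework.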

Once the risk communication program is operational, evaluation can be used to address questions of accountability and performance. For example, evaluation studies can determine whether the risk communication program is reaching the intended audience, provide feedback on the performance of risk communicators, identify program strengths, suggest ways these strengths can be used to communicate more effectively, and determine whether the program is being implemented appropriately (e.g., what material was produced, how much was produced, how long it took, what it cost, and what audiences received the material).

Once the risk communication program has been implemented, evaluation can provide information on program impact and outcome. For example, evaluation can determine which members of the audience actually received the information, what they learned, and whether changes occurred in the way they feel, think, or behave. The results can be used to answer the most important question: Did the program achieve its goals?

One major reason for evaluating risk communication activities is the general lack of resources for development of comprehensive risk communication strategies and programs. Few organizations have the resources needed to launch state-of-the-art risk communication programs that address multiple audiences through multiple channels. As a result, managers need to be able to choose messages and channels that use their limited resources most effectively.

Problems and Difficulties

These advantages raise a second question: If evaluation is so valuable, why are so few risk communication activities formally evaluated? The answer appears to lie in a variety of problems and difficulties that affect the conduct of evaluations. These problems and difficulties stem from conflicts and disagreements about values, goals, resources, and the usefulness of evaluation. Each is discussed briefly below.

Values. Many difficulties in evaluation arise from its nature as a normative, value-laden undertaking that carries important policy, ethical, and practical implications.

The value-laden nature of evaluation derives in part from the many stakeholders interested in the conduct and effectiveness of any given risk communication activity or program. These stakeholders include government agencies, corporations and industry groups, unions, the media, scientists, professional organizations, public interest groups, and individual citizens. Each of these groups has varying and often conflicting needs, interests, and perspectives. Evaluators are often asked to respond to the needs and concerns of each of these constituencies; however, different audiences have different goals and need different types of information, and different risk communication activities require different types of evaluation studies. As a result, one difficulty in any evaluation study is determining the perspective from which the evaluation will be conducted. Once a perspective has been chosen, several implications follow for reporting evaluation findings, including the evaluator's responsibility to be explicit about the chosen perspective and to acknowledge the existence of other perspectives. Several practical implications also follow, including limits on the relevance and role that evaluation can play in affecting risk communication programs, and an increased likelihood that evaluation results will be criticized, even by the sponsors of the evaluation.

Goals. A second problem affecting evaluation is the difficulty of identifying risk communication goals. What goals are appropriate? For example, should the primary goal of risk communication be to help people become aware of an issue, make more informed decisions, take action, seek information, seek help, protect themselves, change their behavior, or participate more effectively in the decision-making process? For some, the goal of risk communication is narrowly defined as personal or organizational survival and damage control; for others, it is to overcome opposition to decisions. For still others, it is to achieve informed consent, enhanced public participation, constructive dialogue, and citizen empowerment.

Meaningful evaluation is possible only when the program's goals, intended audience, and expected effects can clearly be specified. However, for many risk communication programs, such specification is extremely difficult and sometimes impossible. In many cases, evaluators and those who commission the evaluation are not able to agree on what the goals of the risk communication program should be, let alone which goals should be assessed or how to measure success (e.g., through measures of knowledge, attitudes, and perceptions; measures of message awareness, comprehension, and acceptance; measures of information demand; or measures of behavioral intentions or actual behavior).

One practical requirement for evaluation is thinking through communication goals at the beginning. Program and evaluation activities should be based on a set of clear risk communication goals. Even the most basic risk communication activity, such as responding to a telephone inquiry from a concerned citizen, should have a specific goal. Without clear communication goals—be they informational, organizational, legally mandated, or process goals—it is impossible to know if the interaction and exchange have been successful.

Once risk communication goals have been determined, they should play a key role in the planning and implementation process. At each stage of the program, activities should be evaluated in light of these goals. If warranted, program goals should be reviewed and changed as the program develops.

Resources. Effective risk communication requires a determined effort to ascertain whether the program is working as intended. Ideally, this should be done while there is still time to change direction. Feedback is essential to ensure that the communication effort is achieving its goals; if done early enough, it can save time by identifying places where midcourse corrections may be effective.

In practice, however, evaluation is often neglected in favor of more urgent tasks, especially if evaluation has not been planned and budgeted in advance. In most cases, the amount of money spent on evaluation represents an extremely small percentage of the total amount spent on the risk communication effort.

There are several reasons for the reluctance of managers to evaluate. One reason is that many program managers believe that evaluation is prohibitively expensive and that only a few organizations have the resources and skills to carry out evaluation. Another reason is the tendency for program managers to exhaust all available resources producing and distributing more risk communication materials in the hope of increasing effectiveness by reaching more people, rather than conducting evaluation studies that ask whether the message has reached the target audience and whether the target audience has received and internalized the message. There also is an understandable reluctance on the part of many program managers to support research that has the potential for revealing that the time, resources, and effort they have invested in a risk communication activity or program have not produced the desired results. Program managers may not want to be told that their programs have shortcomings because this fact may have implications for career advancement, for intraorganizational decisions about the allocation of resources, and for program survival. Whenever an evaluation is conducted, there is a chance that it will reveal serious shortcomings. Thus, not performing evaluations avoids the potential for evidence of failure. On the other hand, if program managers are convinced that evaluation can demonstrate success, according to what they judge to be appropriate measures, then evaluation may be viewed very differently: it becomes a tool to justify promotions, bonuses, or increases in financial resources and staff.

Another factor that may affect the decision to evaluate is the limited success of previous risk communication programs aimed at changing risk-related attitudes and behaviors. These planned risk communication activities make up only a small share of the many factors that impinge on people's perceptions and behavior. Most evaluation studies conducted to date suggest that even when the message is clearly communicated and appears to be in the audience's best interest, effects are modest; goals and expectations for such programs should therefore be realistic. For example, a successful risk communication program might change the behavior of only a small percentage of the population. Agencies that have a public health mandate may view a small percentage change as insignificant even if the number of individuals affected is large; however, given the competition for public attention and the complexity of behavioral change, risk communication endeavors are better compared with marketing efforts. A marketing effort that produced an increase of a few percentage points in market share would be judged a big success. Beyond this lack of understanding of what level of impact should be considered a success, program managers may prefer formative and process evaluation over outcome and impact evaluation because the former affords opportunities to make changes in response to findings.
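
A back-of-the-envelope calculation, sketched in Python with hypothetical figures, shows why a change of only a few percentage points can nevertheless be large in absolute terms:

    # Illustrative arithmetic with hypothetical figures: a "small" percentage
    # change in behavior can still affect a large number of people.
    population = 10_000_000  # assumed size of the target audience
    change_rate = 0.02       # assumed 2% of the audience changes behavior

    people_affected = int(population * change_rate)
    print(f"A {change_rate:.0%} change affects {people_affected:,} people")
    # Prints: A 2% change affects 200,000 people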

All of these factors suggest that increased attention needs to be given to understanding organizational and other barriers to evaluating risk communication activities. Equally important is the need to develop strategies to overcome these barriers. First among these strategies is planning evaluation early in the program planning stage so that it can be integrated into the effort from the beginning. Evaluation is less likely to be resisted when it is built into each stage of the risk communication process, when funds for evaluation have been set aside in the risk communication budget from the start, and when changes implied by evaluative data can actually be made.

Second, greater attention needs to be given to the use of informal, quick, and simple evaluation methods, many of which can produce extremely valuable planning and program information. When more rigorous, systematic evaluations are required, they ideally should be carried out by parties other than those who control and conduct the risk communication activity or program.

Third, greater attention needs to be given to developing incentives for program managers to fund evaluations for the purpose of better understanding which risk communication activities are most effective and not solely for justifying what has been done.

Fourth, program managers should be encouraged to develop well-articulated evaluation plans with clear goals and clear explanations of what the evaluation is designed to achieve.

Finally, program managers should be encouraged to document and share risk communication successes, including cases in which community feedback was solicited and used to enhance the risk communication activity or program.

Usefulness. A common criticism of many evaluations is that the results are seldom used. Implicit in this criticism is the notion that use means direct and immediate changes in risk communication policies and programs; however, there are several different types of use, and not all of them are immediately apparent. For example, results may be used to confirm that changes in the risk communication program are not needed. In some cases, evaluation may indicate directions for risk communication that are inappropriate or not feasible. Even when there is no immediate discernible use of the information derived from an evaluation, results may accumulate over time and be absorbed slowly, eventually leading to changes in risk communication concepts, perspectives, and programs.

In assessing the usefulness of evaluation research, an important consideration is that the forces and events impinging on risk communication programs are often more powerful than the results derived from evaluation studies. The environment in which risk communication programs are developed seldom permits swift and unilateral changes; new information may actually slow down the change process because it may make decisions more complicated.

Summary Recommendations

Several recommendations can be derived from these observations and from those found in the papers included in this volume. The recommendations fall into two groups: short term and long term.

Consistent with the goals of the workshop, most of these recommendations are oriented toward policymakers in public sector agencies that have risk communication responsibilities; however, the recommendations apply equally well to risk communication efforts in private sector organizations, such as public interest groups and industrial corporations.

Short-Term Recommendations

Long-Term Recommendations

Agencies and organizations should support development of guidebooks and manuals for practitioners on how to apply evaluation techniques. Guidebooks and manuals should include information on how to tailor an evaluation program to the scope and importance of a risk communication activity, as well as how to recognize the limitations of alternative methods. They should also include case studies demonstrating the value and importance of evaluation research in risk communication.
