DEVELOP PLANNING AND EVALUATION SKILLS
Developing an Evaluation Plan
Learn the four main steps to developing an evaluation plan, from clarifying objectives and goals to setting up a timeline for evaluation activities.
Why should you have an evaluation plan?
When should you develop an evaluation plan?
What are the different types of stakeholders and what are their interests in your evaluation?
How do you develop an evaluation plan?
What sort of products should you expect to get out of the evaluation?
What sort of standards should you follow?
Why should you have an evaluation plan?
After many late nights of hard work, more planning meetings than you care to remember, and many pots of coffee, your initiative has finally gotten off the ground. Congratulations! You have every reason to be proud of yourself, and you should probably take a bit of a breather to avoid burnout. Don't rest on your laurels too long, though--your next step is to monitor the initiative's progress. If your initiative is working perfectly in every way, you deserve the satisfaction of knowing that. If adjustments need to be made to guarantee success, you want to know about them so you can jump right in and keep your hard work from going to waste. And in the worst-case scenario, you'll want to know if the initiative is an utter failure so you can figure out the best way to cut your losses. For all these reasons, evaluation is extremely important.
There's so much information on evaluation out there that it's easy for community groups to fall into the trap of just buying an evaluation handbook and following it to the letter. At first glance this might seem like the best way to go--evaluation is a huge topic, and it can be pretty intimidating. Unfortunately, if you resort to the "cookbook" approach to evaluation, you might find that you collect a lot of data, analyze it, and then just file it away, never to be seen or used again.
Instead, take a little time to think about what exactly you really want to know about the initiative. Your evaluation system should address simple questions that are important to your community, your staff, and (last but never least!) your funding partners. Try to think about financial and practical considerations when asking yourself what sort of questions you want answered. The best way to ensure that you have the most productive evaluation possible is to come up with an evaluation plan.
Here are a few reasons why you should develop an evaluation plan:
It guides you through each step of the process of evaluation
It helps you decide what sort of information you and your stakeholders really need
It keeps you from wasting time gathering information that isn't needed
It helps you identify the best possible methods and strategies for getting the needed information
It helps you come up with a reasonable and realistic timeline for evaluation
Most importantly, it will help you improve your initiative!
When should you develop an evaluation plan?
As soon as possible! The best time to develop your plan is before you implement the initiative. After that, you can do it anytime, but the earlier you develop it and begin to implement it, the better off your initiative will be, and the stronger your outcomes will be at the end.
Remember, evaluation is more than just finding out if you did your job. It is important to use evaluation data to improve the initiative along the way.
What are the different types of stakeholders and what are their interests in your evaluation?
We'd all like to think that everyone is as interested in our initiative or project as we are, but unfortunately that isn't the case. For community health groups, there are basically three groups of people who might be identified as stakeholders (those who are interested, involved, and invested in the project or initiative in some way): community groups, grantmakers/funders, and university-based researchers. Take some time to make a list of your project or initiative's stakeholders, as well as which category they fall into.
What are the types of stakeholders?
Community groups: Hey, that's you! Perhaps this is the most obvious category of stakeholders, because it includes the staff and/or volunteers involved in your initiative or project. It also includes the people directly affected by it--your targets and agents of change.
Grantmakers and funders: Don't forget the folks that pay the bills! Most grantmakers and funders want to know how their money's being spent, so you'll find that they often have specific requirements about things they want you to evaluate. Check out all your current funders to see what kind of information they want you to be gathering. Better yet, find out what sort of information you'll need to have for any future grants you're considering applying for. It can't hurt!
University-based researchers: This includes researchers and evaluators that your coalition or initiative may choose to bring in as consultants or full partners. Such researchers might be specialists in public health promotion, epidemiologists, behavioral scientists, specialists in evaluation, or experts from some other academic field. Of course, not all community groups will work with university-based researchers on their projects, but if you choose to do so, they should have their own concerns, ideas, and questions for the evaluation. If you can't quite understand why you'd include these folks in your evaluation process, try thinking of them as auto mechanics--if you want them to help you make your car run better, you will of course include them in the diagnostic process. If you went to a mechanic and started giving orders about how to fix your car without letting them check it out first, they'd probably get pretty annoyed with you. Same thing with your researchers and evaluators: it's important to include them in the evaluation development process if you really want them to help improve your initiative.
Each type of stakeholder will have a different perspective on your organization as well as what they want to learn from the evaluation. Every group is unique, and you may find that there are other sorts of stakeholders to consider with your own organization. Take some time to brainstorm about who your stakeholders are before you begin making your evaluation plan.
What do they want to know about the evaluation?
While some information from the evaluation will be of use to all three groups of stakeholders, some will be needed by only one or two of the groups. Grantmakers and funders, for example, will usually want to know how many people were reached and served by the initiative, as well as whether the initiative had the community-level impact it intended to have. Community groups may want to use evaluation results to guide decisions about their programs and where they are putting their efforts. University-based researchers will most likely be interested in determining whether any improvements in community health can truly be attributed to your programs or initiatives; they may also want to study the overall structure of your group or initiative to identify the conditions under which success may be reached.
What decisions do they need to make, and how would they use the data to inform those decisions?
You and your stakeholders will probably be making decisions that affect your program or initiative based on the results of your evaluation, so you need to consider what those decisions will be. Your evaluation should yield honest and accurate information for you and your stakeholders; you'll need to be careful not to structure it in a way that exaggerates your success--and just as careful not to structure it in a way that downplays your success!
Consider what sort of decisions you and your stakeholders will be making. Community groups will probably want to use the evaluation results to help them find ways to modify and improve the program or initiative. Grantmakers and funders will most likely be making decisions about how much funding to give you in the future, or even whether to continue funding your program at all (or any related programs). They may also think about whether to attach any requirements to that funding (e.g., a grantmaker tells you that your program's funding may be decreased unless you show an increase in services in a given area). University-based researchers will need to decide how they can best assist with plan development and data reporting.
You'll also want to consider how you and your stakeholders plan to balance costs and benefits. Evaluation should take up about 10--15% of your total budget. That may sound like a lot, but remember that evaluation is an essential tool for improving your initiative (a quick worked sketch of this rule of thumb follows the list below). When considering how to balance costs and benefits, ask yourself the following questions:
What do you need to know?
What is required by the community?
What is required by your funders?
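To make the 10--15% rule of thumb concrete, here is a minimal sketch in Python. The $200,000 total budget is purely hypothetical--substitute your own figure.

```python
# A minimal sketch of the 10--15% evaluation budget rule of thumb
# discussed above. The budget figure below is hypothetical.

def evaluation_budget_range(total_budget: float) -> tuple[float, float]:
    """Return the low (10%) and high (15%) ends of a typical evaluation budget."""
    return total_budget * 0.10, total_budget * 0.15

# Hypothetical example: a $200,000 initiative.
low, high = evaluation_budget_range(200_000)
print(f"Plan to set aside roughly ${low:,.0f} to ${high:,.0f} for evaluation.")
```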
How do you develop an evaluation plan?
There are four main steps to developing an evaluation plan:
Clarifying program objectives and goals
Developing evaluation questions
Developing evaluation methods
Setting up a timeline for evaluation activities
Clarifying program objectives and goals
The first step is to clarify the objectives and goals of your initiative. What are the main things you want to accomplish, and how have you set out to accomplish them? Clarifying these will help you identify which major program components should be evaluated. One way to do this is to make a table of program components and elements, as in the sketch below.
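Here is a minimal sketch of such a table, written in Python for illustration. The component and element names are hypothetical examples, not prescriptions--substitute your own initiative's activities.

```python
# A minimal sketch of a program components-and-elements table.
# All component and element names are hypothetical examples.

program_components = {
    "Peer education": ["Recruit and train volunteers", "Hold weekly sessions"],
    "Media campaign": ["Produce radio spots", "Distribute flyers at events"],
    "Policy advocacy": ["Meet with city council", "Draft ordinance language"],
}

for component, elements in program_components.items():
    print(component)
    for element in elements:
        print(f"  - {element}")
```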
Developing evaluation questions
For our purposes, there are four main categories of evaluation questions. Let's look at some examples of possible questions and suggested methods to answer those questions. Later on, we'll tell you a bit more about what these methods are and how they work.
Planning and implementation issues: How well was the program or initiative planned out, and how well was that plan put into practice?
Possible questions: Who participates? Is there diversity among participants? Why do participants enter and leave your programs? Is the program generating a variety of services and alternative activities? Do those most in need of help receive services? Are community members satisfied that the program meets local needs?
Possible methods to answer those questions: monitoring system that tracks actions and accomplishments related to bringing about the mission of the initiative, member survey of satisfaction with goals, member survey of satisfaction with outcomes.
Assessing attainment of objectives: How well has the program or initiative met its stated objectives?
Possible questions: How many people participate? How many hours are participants involved?
Possible methods to answer those questions: monitoring system (see above), member survey of satisfaction with outcomes, goal attainment scaling.
Impact on participants: How much and what kind of a difference has the program or initiative made for its targets of change?
Possible questions: How has behavior changed as a result of participation in the program? Are participants satisfied with the experience? Were there any negative results from participation in the program?
Possible methods to answer those questions: member survey of satisfaction with goals, member survey of satisfaction with outcomes, behavioral surveys, interviews with key participants.
Impact on the community: How much and what kind of a difference has the program or initiative made on the community as a whole?
Possible questions: What resulted from the program? Were there any negative results from the program? Do the benefits of the program outweigh the costs?
Possible methods to answer those questions: behavioral surveys, interviews with key informants, community-level indicators.
Developing evaluation methods
Once you've come up with the questions you want to answer in your evaluation, the next step is to decide which methods will best address those questions. Here is a brief overview of some common evaluation methods and what they work best for.
Monitoring and feedback system
This method of evaluation has three main elements (a minimal sketch follows the list):
Process measures: these tell you about what you did to implement your initiative;
Outcome measures: these tell you about what the results were; and
Observational system: this is whatever you do to keep track of the initiative while it's happening.
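Here is a minimal sketch of what such a log might look like in Python. Each entry records a date, a short description, and whether it is a process measure (what you did) or an outcome measure (what resulted). All example entries are hypothetical.

```python
# A minimal sketch of a monitoring and feedback log.
# All example entries are hypothetical.

from collections import Counter

event_log = [
    {"date": "2024-02-01", "event": "Held volunteer training", "type": "process"},
    {"date": "2024-03-15", "event": "New crosswalk installed", "type": "outcome"},
    {"date": "2024-04-02", "event": "Distributed 500 flyers", "type": "process"},
]

# Tally process vs. outcome measures for periodic feedback reports.
counts = Counter(entry["type"] for entry in event_log)
print(f"Process measures logged: {counts['process']}")
print(f"Outcome measures logged: {counts['outcome']}")
```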
Member surveys about the initiative
When Ed Koch was mayor of New York City, his trademark call of "How am I doing?" was known all over the country. It might seem like an overly simple approach, but sometimes the best thing you can do to find out if you're doing a good job is to ask your members. This is best done through member surveys. There are three kinds of member surveys you're most likely to need at some point (a brief scoring sketch follows the list):
Member survey of goals: done before the initiative begins - how do your members think you're going to do?
Member survey of process: done during the initiative - how are you doing so far?
Member survey of outcomes: done after the initiative is finished - how did you do?
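As a minimal sketch of how survey results might be summarized, here is one way to average member ratings in Python, assuming each member rates each item on a 1-5 satisfaction scale. The items and ratings are hypothetical.

```python
# A minimal sketch of scoring a member survey of outcomes.
# Items and ratings are hypothetical; each rating is on a 1-5 scale.

from statistics import mean

responses = {
    "Program met local needs": [4, 5, 3, 4],
    "Services reached those most in need": [3, 4, 4, 2],
}

for item, ratings in responses.items():
    print(f"{item}: average {mean(ratings):.1f} out of 5")
```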
Goal attainment report
If you want to know whether your proposed community changes were truly accomplished--and we assume you do--your best bet may be to do a goal attainment report. Have your staff record the date each time a community change mentioned in your action plan takes place. Later on, someone compiles this information (e.g., "Of our five goals, three were accomplished by the end of 1997.").
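Here is a minimal sketch of that compilation step in Python, assuming staff record the date each proposed change takes place (None if it hasn't happened yet). The goals and dates are hypothetical.

```python
# A minimal sketch of compiling a goal attainment report.
# Goals and dates are hypothetical; None means not yet accomplished.

from datetime import date

goals = {
    "Adopt smoke-free workplace policy": date(1997, 5, 12),
    "Open teen drop-in center": date(1997, 11, 3),
    "Expand bus service to the clinic": date(1997, 8, 20),
    "Pass curfew ordinance": None,
    "Launch mentoring program": None,
}

accomplished = sum(1 for d in goals.values() if d is not None and d.year <= 1997)
print(f"Of our {len(goals)} goals, {accomplished} were accomplished by the end of 1997.")
```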
Behavioral surveys
Behavioral surveys help you find out what sort of risk behaviors people are taking part in and to what extent they're doing so. For example, if your coalition is working on an initiative to reduce car accidents in your area, one risk behavior to survey would be drunk driving.
Interviews with key participants
Key participants - leaders in your community, people on your staff, etc. - have insights that you can really make use of. Interviewing them to get their viewpoints on critical points in the history of your initiative can help you learn more about the quality of your initiative, identify factors that affected the success or failure of certain events, provide you with a history of your initiative, and give you insight which you can use in planning and renewal efforts.
Community-level indicators of impact
These are tried-and-true markers that help you assess the ultimate outcome of your initiative. For substance abuse coalitions, for example, the U.S. Center for Substance Abuse Prevention (CSAP) and the Regional Drug Initiative in Oregon recommend several proven indicators (e.g., single-vehicle nighttime car crashes, emergency transports related to alcohol) that help coalitions figure out the extent of substance abuse in their communities. Studying community-level indicators helps you provide solid evidence of the effectiveness of your initiative and determine how successful key components have been.
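As a minimal sketch, here is one way to track a single indicator over time in Python and report the change since the baseline year. The crash counts below are hypothetical.

```python
# A minimal sketch of tracking one community-level indicator over
# time--here, hypothetical annual counts of single-vehicle nighttime
# crashes--and reporting the change since the baseline year.

crashes_by_year = {1994: 48, 1995: 41, 1996: 37, 1997: 30}

baseline_year, latest_year = min(crashes_by_year), max(crashes_by_year)
change = (crashes_by_year[latest_year] - crashes_by_year[baseline_year]) \
    / crashes_by_year[baseline_year] * 100
print(f"Crashes changed {change:+.0f}% between {baseline_year} and {latest_year}.")
```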
Setting up a timeline for evaluation activities
When does evaluation need to begin?
Right now! Or at least at the beginning of the initiative! Evaluation isn't something you should wait to think about until after everything else has been done. To get an accurate, clear picture of what your group has been doing and how well you've been doing it, it's important to start paying attention to evaluation from the very start. If you're already part of the way into your initiative, however, don't scrap the idea of evaluation altogether--even if you start late, you can still gather information that could prove very useful to you in improving your initiative.
Outline questions for each stage of development of the initiative
We suggest completing a table listing:
Key evaluation questions (the four categories listed above, with more specific questions within each category)
Type of evaluation measures to be used to answer them (i.e., what kind of data you will need to answer the question)
Type of data collection (i.e., what evaluation methods you will use to collect this data)
Experimental design (a way of ruling out threats to the validity--i.e., believability--of your data, such as comparing the information you collect to that of a similar group that is not doing things exactly the way you are)
With this table, you can get a good overview of what you'll have to do in order to get the information you need. A filled-in sketch of such a table follows.
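Here is a minimal sketch of the planning table in Python, with one hypothetical row per evaluation question. Each row pairs a question with its measures, data collection method, and experimental design.

```python
# A minimal sketch of the evaluation planning table described above,
# with one hypothetical row per evaluation question.

plan = [
    {
        "question": "Do those most in need receive services?",
        "measures": "Participant counts by neighborhood and income",
        "collection": "Monitoring system intake records",
        "design": "Compare service rates with a similar, non-participating area",
    },
    {
        "question": "Has drunk driving declined?",
        "measures": "Self-reported driving after drinking",
        "collection": "Behavioral survey, before and after the program",
        "design": "Pre/post comparison with a similar community",
    },
]

for row in plan:
    for column, value in row.items():
        print(f"{column:>10}: {value}")
    print()
```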
When do feedback and reports need to be provided?
Whenever you feel it's appropriate. Of course, you will provide feedback and reports at the end of the evaluation, but you should also provide periodic feedback and reports throughout the duration of the project or initiative. In particular, since you should provide feedback and reports at meetings of your steering committee or overall coalition, find out ahead of time how often they'd like updates. Funding partners will want to know how the evaluation is going as well.
When should evaluation end?
Shortly after the end of the project - usually when the final report is due. Don't wait too long after the project has been completed to finish up your evaluation - it's best to do this while everything is still fresh in your mind and you can still get access to any information you might need.
What sort of products should you expect to get out of the evaluation?
The main product you'll want to come up with is a report that you can share with everyone involved. What should this report include?
Effects expected by stakeholders: Find out what key people want to know. Be sure to address any information that you know they're going to want to hear about!
Differences in the behaviors of key individuals: Find out how your coalition's efforts have changed the behaviors of your targets and agents of change. Have any of your strategies caused people to cut down on risky behaviors, or increase behaviors that protect them from risk? Are key people in the community cooperating with your efforts?
Differences in conditions in the community: Find out what has changed. Is the public aware of your coalition or group's efforts? Do they support you? What steps are they taking to help you achieve your goals? Have your efforts caused any changes in local laws or practices?
You'll probably also include specific tools (e.g., brief reports summarizing data), annual reports, quarterly or monthly reports from the monitoring system, and anything else that is mutually agreed upon between the organization and the evaluation team.
What sort of standards should you follow?
Now that you've decided you're going to do an evaluation and have begun working on your plan, you've probably also had some questions about how to ensure that the evaluation will be as fair, accurate, and effective as possible. After all, evaluation is a big task, so you want to get it right. What standards should you use to make sure you do the best possible evaluation? In 1994, the Joint Committee on Standards for Educational Evaluation issued a list of program evaluation standards that are widely used to guide evaluations of educational and public health programs. The standards the committee outlined cover utility, feasibility, propriety, and accuracy. Consider using evaluation standards to make sure you do the best evaluation possible for your initiative.
Contributor
Chris Hampton
Online Resources
The Action Catalogue is an online decision support tool intended to help researchers, policy-makers, and others who want to conduct inclusive research find the method best suited to their specific project needs.
CDC Evaluation Resources provides an extensive list of resources for evaluation, as well as links to key professional associations and key journals.
Developing an Evaluation Plan offers a sample evaluation plan provided by the U.S. Department of Housing and Urban Development.
Developing an Effective Evaluation Plan is a workbook provided by the CDC. In addition to ample information on designing an evaluation plan, this book also provides worksheets as a step-by-step guide.
Evaluating Your Community-Based Program is a handbook designed by the American Academy of Pediatrics and includes extensive material on a variety of topics related to evaluation.
GAO Designing Evaluations is a handbook provided by the U.S. Government Accountability Office. It contains information about evaluation designs, approaches, and standards.
The Magenta Book - Guidance for Evaluation provides an in-depth look at evaluation. Part A is designed for policy makers. It sets out what evaluation is, and what the benefits of good evaluation are. It explains in simple terms the requirements for good evaluation, and some straightforward steps that policy makers can take to make a good evaluation of their intervention more feasible. Part B is more technical, and is aimed at analysts and interested policy makers. It discusses in more detail the key steps to follow when planning and undertaking an evaluation and how to answer evaluation research questions using different evaluation research designs. It also discusses approaches to the interpretation and assimilation of evaluation evidence.