I was recently asked, “What is your view of program evaluation? How does it differ from research?” At the risk of over-simplifying these two approaches to knowing, I would say that the purpose of evaluation is to improve programs, services, and organizations and to be accountable for results. Research uses many of the same methods as evaluation but has a different purpose. The purpose of research is to advance our knowledge and generalize findings to a wide population, beyond a specific program, service, or organization. For example, if we want to know if a particular medical procedure is effective, we would do research to correlate that procedure with intended outcomes in a range of settings and with a range of people. If we want to know how well that procedure is being implemented in hospital X and what can be done to make the procedure more effective in that hospital, then we would do an evaluation study. Similar methods of data collection; different purposes.
Even more important is how the data is used. I believe that the primary purpose of evaluation should be organizational learning. Evaluation of a program, service, or organization should result in stakeholders learning how to achieve success and how to sustain that success over time. Evaluation should help them become smarter about what they are doing for clients, customers, and partners.
Following are the key evaluation principles I’ve distilled from my 30 years of evaluation work[1]:
Involve staff (internal customers of evaluation) and other key stakeholders in deciding what to measure and how to measure it. Ask them what they want to know and why. Ask them for their thoughts about data collection methods. Ask them to help pilot these methods to determine whether they will produce the data that is needed.
Choose the method of measurement only after deciding what to measure. The tendency is to use a survey to measure just about everything. But other methods (e.g., interviews, focus groups, observation) can be more useful depending on what you want to know. The appropriateness of each of these methods depends on the kind of data needed, the sources of that data, the circumstances for collecting the data, and how the data will be used. For example, you probably shouldn’t send a written questionnaire to users of a literacy program (they may not be able to read it), and you probably shouldn’t convene a focus group of users of a drug treatment program (their confidentiality is at stake).
Collect data that are credible to stakeholders. Managers might accept the accuracy of staff interviews and focus groups, whereas board members might listen only to output (e.g., number of people served) and financial data (e.g., cost per person served). One group wants to hear numbers; another group wants to hear stories. Know your audience so that you can collect data and report findings that key stakeholders will find credible.
Collect and report data that are useful. The fact that the average rating of the agency is 4.5 on a 5-point scale, or that the cost of the adult day care program has gone down 10% in the past year, is not particularly useful for organizational learning. The distribution of ratings across different client groups, and why clients rated the agency the way they did, would be more useful to know. So would the reasons for the 10% drop in day care costs and the implications for improving the quality and future of the program. Consider what data will help stakeholders make critical decisions.
Report findings in a manner that makes stakeholders receptive to the information. Much of this comes down to the format in which the information is reported. You will want all of the various stakeholders to understand your findings and be able to act on the implications. Keep it simple, relate it to the goals that matter to the particular audience, and recommend what should be done about the results. Do not report only numbers; explain what the numbers mean to the organization. Tell the story that explains the statistics. If board members are influenced by statistics, then give them charts and graphs. If board members are influenced by client stories, then tell stories. Fit the method to the evaluation questions and the learning styles of stakeholders, not the other way around.
Measure the process as well as the outcome. Continuous improvement is achieved by regular assessment of where people are in the process of achieving their goals. This starts with understanding the key elements that make up the process. Then adjustments to the process can be made, especially as you learn more about staff and client needs and the organization becomes clearer about its goals. Do not wait a year. Give yourself the opportunity for short-term adjustments that will have long-term impact.
Provide just-in-time and just-enough information. Give staff the information they need, when and where they need it. Learning is maximized when people are not overwhelmed with new information, when they can relate that information to their work, and when they can apply it immediately to a problem on the job.
Measure to improve the process, not to blame or punish. Our tendency is to feel threatened by anything that might reveal a lack of personal competency. When we feel threatened, we become less cooperative and less willing to improve performance. Do everything that you can to assure participants that the measures are not being used to make judgments about individuals. Follow through on this promise. Use the data only to help individuals learn, make changes in the organization, and plan for additional activities that will make a difference in everyone’s performance. Do not use the data to chastise or penalize. As Peter Drucker said, “The question isn’t, Do you make mistakes? It’s, Do you learn from them?”