Nonprofit organizations often regard program evaluation as an expensive, onerous, academic exercise that produces unreadable reports with charts and graphs and standard deviations that are of no use to the practitioners providing services.

Organizations that engage an external evaluator, often because a grant agreement requires them to, treat their evaluation consultant differently than they would any other vendor. When they hire someone to provide professional development for staff, they direct that person to conduct the training they want. When they hire a web developer, they are very clear with that tech person about what they want for their website. When they hire someone to wash the windows or cut the grass, they tell them what to do, plain and simple.

However, when they hire an evaluation consultant, they often cower in fear and give that person carte blanche to shape the process however he or she wants, all because this outside "expert" throws around fancy language about statistics and sample sizes and scientific rigor. As a result, these organizations often get a report full of statistics, sample sizes, and scientific rigor that is of no use to them whatsoever.

Organizations should take charge of the evaluation process, not simply submit to it, so that it produces MEANINGFUL, actionable recommendations in a manner that is FEASIBLE in terms of the resources and infrastructure necessary to get it done.

Imagine a different scenario. When an organization hires an external evaluator, they pull together a group of staff (and perhaps even some clients or community members) to serve as an evaluation committee that directs the work of the evaluator. They make it clear to the evaluator what they want from the process and what would be meaningful for the organization. They must first decide, and then convey to the evaluator, the purpose of the evaluation. Program improvement? Replication of successful approaches at new sites? A report to the community? The answer has very direct ramifications for the evaluation process.

Then, the organization should be explicit about the questions they want the evaluation to answer. Which of our array of interventions have the most significant impact on high school graduation? Which professional development approaches lead to the most sustained changes in practice? Do our after-school enrichment activities improve academic performance in the students' regular schools? Organizations should decide what they want to learn and shape the evaluation process to achieve those ends.

Further, the organization should tell the evaluator what data gathering they want done for the process: statistical data gathered in the course of project implementation, focus groups of clients, surveys of staff, whatever. The organization should also have a chance to review data gathering instruments and protocols, and to make recommendations, again driven by what will be most valuable and most meaningful to the organization, its programs, its clients, and its community.

Now here's where the feasibility issue comes in. Every program evaluation represents a series of choices for the organization. They have to choose what they want to learn about the program in the context of what data already exists or what they can reasonably collect or assemble. They need to look at the budget for the evaluator's contract and make choices about what data gathering and analysis is most important and will fit within that budget. Not every program evaluation is a Ph.D. dissertation. It is instead a targeted look at program implementation and impact, shaped by the strategic needs of the organization AND the resources that can be made available for the process. (Some attention must be paid to what's required by the funder, of course.)

Much of this is, in practice, a discussion, and sometimes a negotiation, between the organization and the evaluator. A qualified evaluator should bring significant knowledge and experience about methodology and about the credibility of using particular types of data to generate findings and recommendations. And the evaluator, as the author of the final report, has to be able to stand behind the methodology and the findings.

But at the end of the day, the organization should direct the process and select an evaluator who is comfortable with this collaborative approach, so that the process is maximally beneficial and meaningful to the organization, as well as feasible within the available resources. This is evaluation done FOR the organization, not something done TO it.