Indiana University Bloomington

Indiana Prevention Resource Center (IPRC)

Evaluation: What’s In It for Your Program?

Program evaluation is defined as “the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development.”

Program Evaluation can accomplish several goals, including:

  • Monitoring progress towards your program’s goals
  • Determining whether your program is achieving its outcomes
  • Identifying opportunities for quality improvement
  • Helping to maintain effective programs and avoid waste
  • Enabling comparisons among different groups
  • Justifying the need for further funding and support (sustainability)

In the standard definition, “systematic” is a key word. Too often, programs are simply monitored informally as they progress, which can leave more questions than answers at the end.

By contrast, data gathered during a properly performed evaluation empower managers and staff to create the best possible programs, to learn from mistakes and make modifications as needed, and to gauge their progress. Evaluation data also show how well the program succeeded in reaching its short-term, intermediate, and long-term outcomes. A carefully designed evaluation helps you assess the effectiveness and impact of a particular program, intervention, or strategy in producing the changes you seek, and strengthens your case for further funding.

Once the components of the program description have been identified, a visual aid such as a chart is often used to summarize the relationships among any or all of the components. Such a chart, often called a logic model, can support both strategic planning and program evaluation. While there are other ways to depict these relationships, logic models are among the most widely accepted and commonly used tools.

An effective evaluation should contain five key elements:

  • Recommendations – suggested actions to pursue once the evaluation is complete.
  • Preparation – the steps needed to get ready to use the evaluation findings.
  • Feedback – a record of the communication among everyone involved in the evaluation.
  • Dissemination – the process of communicating evaluation procedures and lessons learned to relevant audiences in a timely, unbiased, and consistent manner.
  • Follow-up – support provided after the evaluation results are delivered.

The ultimate benefit of a well-designed evaluation is the accountability it brings to a program, which in turn strengthens the program’s effectiveness. If you need help evaluating a current or upcoming program, contact the IPRC.

By Peter Poletti,   5/2/2011