Publication Date: January 2010

Conducting program evaluations is an integral part of operating and managing a program because it helps to examine whether you are meeting the needs of your client base and achieving the overall goals of your program.

Organizations that receive funding from either government or foundations are often required to design and conduct a process and/or outcome evaluation of their programs. The plan can include the following:

  • A comprehensive needs assessment.
  • Clearly stated goals and objectives that are linked to program activities.
  • Clearly defined performance measures.
  • Information on all of the resources that will be devoted to fulfilling this requirement.
  • A plan to address the requirements related to data confidentiality and the protection of human subjects who may be involved in a needs assessment and/or program evaluation.

This guide will help you

  • Develop an evaluation plan for collecting data on performance measures.
  • Establish measurable goals and objectives.
  • Design and conduct the program evaluation to continuously assess your program’s progress in achieving its established goals and objectives.
  • Identify measures to reflect the impact of your program’s activities.
  • Use the results to refine and improve services.

About This Series

OVC's series of four technical assistance guides provides tools for victim service providers and allied professionals, like you, who want or need to conduct program evaluations or needs assessments. The guides will help you make critical decisions and ensure that you make the best use of your funds to promote the goals and purposes of your program. These four guides were originally developed for OVC and the grantees who received funding to serve victims of human trafficking. The guides have since been adapted for use by other grantees and organizations that provide programs for victims of any type of crime.

Guide to Performance Measurement and Program Evaluation
Prepare goals and objectives, identify performance measures and program outcomes, identify evaluation questions, create a program planning or logic model, select evaluation design, decide on data collection methods, analyze and present data, use evaluation data
Guide to Conducting a Needs Assessment
Design your needs assessment, determine data collection methods, analyze and present data, use the needs assessment results for planning
Guide to Hiring a Local Evaluator
Find a local evaluator, determine questions to ask and what to look for, decide on what to include in job descriptions, find out how to work effectively with evaluators
Guide to Protecting Human Subjects
Read about the law related to protecting human subjects, learn about key issues you need to address when conducting research that involves human subjects, learn about Institutional Review Boards and how to involve them in your research

About This Guide


This guide addresses two basic questions:

  • What are performance measurement and program evaluation? This section defines performance measurement and program evaluation, explains the relationship between the two, and describes how they work together along a continuum. The section also introduces important terms such as "input," "output," and "outcome" and reviews the types of evaluations you may conduct and their benefits.
  • What are the basic steps? This section describes a step-by-step approach for conducting a program evaluation, referring to other guides in this series when appropriate. Additionally, this section includes specific instructions on how to use the results of such an evaluation to plan for, refine, and sustain your program.

The guide includes a resources section with references, resources, and a glossary and also includes appendixes containing sample tools, templates, and instruments to assist you with your evaluation.

How To Use the Guide

You may or may not need to use this entire guide. If you simply want to know what performance measurement or program evaluation is, read the section on key concepts and the distinctions between these two activities. If you are trying to develop performance measures, conduct program evaluation efforts, or refine program activities, review the basics of program evaluation. If you are seeking guidance on specific formats for your evaluation plan or data collection instruments, review the appendixes. Whatever your need, there is a section to address it.

Other guides are available in this series to assist you with needs assessments and program evaluation (see About This Series).

U.S. Department of Justice
Office of Justice Programs
810 Seventh Street NW.
Washington, DC 20531

Eric H. Holder, Jr.
Attorney General

Laurie O. Robinson
Assistant Attorney General

Joye E. Frost
Acting Director, Office for Victims of Crime

Office of Justice Programs

Office for Victims of Crime

NCJ 228961

What Are Performance Measurement and Program Evaluation?

Before you can begin developing an evaluation plan, it is important to understand performance measurement and program evaluation and how their results can be used to meet your needs. This section provides an overview of performance measurement and program evaluation, the relationship between them, their advantages, and how to use them to meet your program goals. Definitions are provided, as are the types and benefits of each.


Performance measurement and program evaluation share similarities but serve different purposes. Performance measurement provides the data you will use to measure your program’s results; program evaluation is the process of obtaining, analyzing, interpreting, and reporting on these data to describe how well your program is working.

Performance measurement is the ongoing monitoring and reporting of program accomplishments and progress toward preestablished goals. For many programs, requirements can be met through performance measurement, which includes collecting data on the level and type of activities (inputs) and the direct products and services delivered by the program (outputs).

Program evaluation is a systematic process of obtaining information to be used to assess and improve a program. In general, organizations use program evaluations to distinguish successful program efforts from ineffective program activities and services and to revise existing programs to achieve successful results. Conducting evaluations is an integral part of operating and managing a program because it helps to determine whether you are meeting the needs of your client base. The type and application of program evaluation methods depend on the mission and goals of the program.

Both mechanisms support resource allocation and policy decisions aimed at improving service delivery and program effectiveness. While performance measures can tell you only what is occurring in your program, program evaluation provides you with an overall assessment of whether your program is working and can help identify adjustments that may improve your program results. Performance measurement data can be used to detect problems early in the process so that you can correct them before it is too late. Program evaluation data are often used when results or outcomes are not readily observable and performance measures are not sufficient to demonstrate a program’s results.

Program evaluation and performance measurement complement each other in—

  • Developing or improving measures of program performance.
  • Helping to understand the relationship between activities and results.
  • Generating data on program results not regularly collected or available through other means.
  • Ensuring the quality of regularly collected performance data.

Types and Benefits

Performance Measurement

Performance measurement assesses a program’s progress toward its stated goals. The data collected measure specific outputs and outcomes a program is designed to achieve. Outputs are the result of the activities (inputs) that go into a program, while outcomes are the final results of or changes resulting from an activity. Exhibit 1 reviews inputs, outputs, and outcomes/impacts.

Exhibit 1
Types of Performance Measures

Input: Resources used to produce outputs and outcomes.

Output: The direct, immediate result of an activity; the products and services that result from an activity.

Outcome: The intended initial, intermediate, and final result of an activity; the desired change in behavior, attitude, knowledge, skills, and conditions at the individual, agency, system, or community level.

Collecting data on your program’s inputs, outputs, and outcomes/impacts will help you answer a few key questions:

  • What are we doing with our resources (inputs)?
  • What services are we providing? Are we reaching our target audience (outputs)?
  • Are program activities achieving the desired objectives (outcomes)?
  • What long-term effects are these efforts having in achieving our goals (impacts)?

Moreover, the results of your program evaluation can help justify continued funding from your funding sources. Specific performance measures may include the following:

  • Number and type of services provided to victims.
  • Number and type of service providers available to victims.
  • Number of service professionals who received training.
  • Changes in policy and practice in the community response to victims.
  • Increase in the number of collaborative partners in the designated region.
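If your program keeps service records electronically, measures like the first three above can be tallied with a short script. Below is a minimal sketch in Python; the record fields ("victim_id", "service_type") are hypothetical and stand in for whatever format your own case records use.

```python
# Hypothetical example: tallying simple output measures from service records.
# The field names and sample data are illustrative, not a prescribed format.
from collections import Counter

service_records = [
    {"victim_id": 1, "service_type": "shelter"},
    {"victim_id": 1, "service_type": "counseling"},
    {"victim_id": 2, "service_type": "shelter"},
    {"victim_id": 3, "service_type": "legal advocacy"},
]

# Output measure: number and type of services provided to victims.
services_by_type = Counter(r["service_type"] for r in service_records)

# Output measure: number of distinct victims served.
victims_served = len({r["victim_id"] for r in service_records})

print(services_by_type)  # counts per service type
print(victims_served)    # 3 distinct victims in this sample
```

Even a simple tally like this gives you a repeatable way to report the same measure consistently across reporting periods.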

Program Evaluation

Although various types of program evaluations exist, the type of evaluation you conduct depends on the questions you want to answer. Process, outcome, and impact evaluations are three types of evaluations you may be required to conduct:

  • Process evaluations assess the extent to which the program is functioning as planned.
  • Outcome evaluations examine overall program effects. This type of evaluation focuses on objectives and provides useful information about program results.
  • Impact evaluations focus on long-term program goals and issues of causality.

Each type of evaluation has specific benefits that will assist you in successfully meeting your program goals and objectives along a continuum. Overall, evaluations help—

  • Improve program management.
  • Inform program planning and implementation.
  • Allocate resources strategically.
  • Win program support by demonstrating results.
  • Improve accountability to stakeholders.
  • Improve program effectiveness by focusing on results.
  • Demonstrate that program activities contribute to achieving agency or governmentwide goals.

What Are the Basic Steps?

Measuring a program’s performance and conducting an evaluation of it involve a great deal of planning and organization. One of the first steps to ensuring that your evaluation is conducted logically and methodically is for you to develop an evaluation plan, which provides the framework for the evaluation. If developed well, the plan will guide you step-by-step through the process of planning and implementing an evaluation and using its results to improve your program.

It is important to start with a plan so that you can review what you did against what you originally planned to do. The plan will help you focus on what will be evaluated (e.g., a program or aspect of a program), what you want to know (e.g., how effective the program or services are), how you will know the answer to your question when you see it (i.e., evidence), and when to collect the relevant data (i.e., timing). The plan also will help you determine the best methods for collecting, analyzing, and interpreting the data you collect, and reporting the results of your evaluation.

This section reviews the information you should include in your evaluation plan. A sample template for developing an evaluation plan is provided in appendix A (PDF 22.7 KB). Another helpful document for ensuring that you are making the right decisions as you implement your plan is the Evaluation and Performance Measurement Checklist in appendix B (PDF 57.2 KB). We encourage you to use this checklist throughout the process.

Assess Needs

For any program to be effective, it must fulfill the needs of its constituencies. A needs assessment can pinpoint reasons for gaps in performance or identify new and future performance needs (Gupta, 1999). Needs assessment information can be gathered using various methods (e.g., surveys, interviews, focus groups) and can be used to—

  • Determine the needs of the target population.
  • Gather information on previous services that did or did not meet the needs of this population.
  • Identify gaps in services.

For further information and tools on planning and conducting your needs assessment, refer to the Guide to Conducting a Needs Assessment in this series.

Define Goals and Objectives

Tips To Remember!

  • Be careful not to define goals so narrowly that they read as outcomes.
  • Beware of stating activities as goals or objectives.
  • Be sure not to write compound goals and objectives.
  • Be realistic.

Make sure that you clearly understand your program’s goals and objectives before measuring performance or evaluating your program. The most important reason to prepare measurable goals and objectives is to ensure that you do not undermine your intended results.

  • A goal is a measurable statement of the desired long-term, global impact of the program. Goals generally address change. For example, “Our program will make comprehensive services available to 50 victims of human trafficking.”
  • An objective is a specific, measurable statement of the desired immediate or direct outcomes of the program that support the accomplishment of a goal. For example, “Our program will provide shelter to 50 victims of human trafficking over the next 12 months.”

Both goals and objectives should be tangible and measurable. Use the exercise in exhibit 2 to help develop a program goal or adopt one that you feel accurately captures your mission.

Exhibit 2
Preparing Measurable Goals and Objectives

The ABCDEs of writing measurable goals and objectives capture the who, what, by when, to what degree, and as-measured-by information for your program goals and objectives.

Audience—Who?
The population/target audience for whom the desired outcome is intended.

Behavior—What is to happen?
A clear statement of the behavior change/results expected.

Condition—By when?
The conditions under which measurements will be made. This may refer to the timeframe and/or implementation of a specific intervention.

Degree—By how much?
The quantification, or level, of results expected. This often involves measuring change in comparison to an identified baseline.

Evidence—As measured by?
The definition of the method of measuring the expected change. The degree of change (set forth above) will be measured using a specific instrument or criterion.

Using the method above, here are steps for developing goals and objectives:

To develop measurable goals

Step 1: Identify the long-term, global outcome(s) you want to achieve.
Step 2: Identify each of the elements (A, B, C, D, E).
Step 3: Formulate the goal statement using each of the necessary elements.

To develop measurable objectives

Step 1: Identify the short-term, more immediate outcome(s) you want to achieve.
Step 2: Identify each of the elements (A, B, C, D, E).
Step 3: Formulate the objective statement using each of the necessary elements.
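The three steps above amount to collecting the five ABCDE elements and joining them into one statement. A short sketch in Python makes the assembly concrete; the element values are hypothetical examples, not prescribed wording.

```python
# Illustrative sketch: assembling a measurable objective from the ABCDE
# elements (Audience, Behavior, Condition, Degree, Evidence).
# All element values below are hypothetical.
elements = {
    "audience":  "Law enforcement officers in the designated region",      # A: who?
    "behavior":  "will demonstrate improved identification of trafficking victims",  # B: what?
    "condition": "within 12 months of completing training",                # C: by when?
    "degree":    "with a 25-percent increase over baseline",               # D: by how much?
    "evidence":  "as measured by pre- and post-training assessments",      # E: measured by?
}

# Completeness check: every element must be filled in before the
# statement is finalized.
missing = [name for name, value in elements.items() if not value.strip()]
assert not missing, f"Missing elements: {missing}"

order = ("audience", "behavior", "condition", "degree", "evidence")
objective = " ".join(elements[k] for k in order) + "."
print(objective)
```

The completeness check mirrors the point of the exercise: if any of the five elements is blank, the goal or objective is not yet measurable.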

Identify Outcomes

Program evaluation uses outcomes, which are often measures of performance, for determining the degree of program change. Outcomes reflect the changes experienced by a program’s participants and progress toward the program’s goals. Outcomes also describe the consequences of your program activities or intervention. Measures are data that can be used to determine whether program objectives have been achieved. A measure is a specific (quantitative) piece of information (i.e., data are numeric and consist of frequency counts, percentages, or other statistics) that provides evidence of your outcomes and helps you assess your program’s progress toward its stated goals. Measures simply demonstrate what is occurring, not what caused the occurrence (Burt et al., 1997).

Measuring outcomes is fundamental to program evaluation. Essentially, it helps answer the questions, “Did the intervention work? Is your program employing the right activities to meet its program requirements, client needs, and program goals?” Exhibit 3 presents an example of how to think about the relationship between outcomes and measures.

Exhibit 3
Example of Outcomes and Measures

Outcome (What change are you measuring?): Increased understanding of the needs of victims.

Measure (What specific piece of data shows the change made by your program?): Number of appropriate referrals.

The rest of this section is divided into two parts:

  • The Four Rs
  • Developing and Generating Outcomes and Measures

The Four Rs

In identifying outcomes and measures, keep in mind the four Rs:

  • Relevance: Are the expected outcomes relevant to the program? (This is the “Does it make sense?” test.)
  • Reality: Can you measure what you want to know? Can you get the data you need?
  • Reliability: Are the data accurate? Are the data of high quality?
  • Resources: Do you have the staff, the money, and the time to gather the data?

Developing and Generating Outcomes and Measures

Measures of outcomes represent the extent to which a program is effective in producing its intended outcomes and achieving desired results. To illustrate a broad view of program effectiveness, measures of outcomes are usually expressed as immediate or short-term, intermediate, and long-term outcomes. Immediate or short-term outcomes are the changes (e.g., knowledge, attitudes, behaviors) that occur early in the delivery of program services or interventions. Intermediate outcomes are results that emerge after immediate outcomes, but before long-term outcomes. Long-term outcomes are the overall intended changes or results you are trying to achieve. Please keep this information in mind as you develop and generate your own outcomes and measures.

Here are the general steps for developing outcomes and measures:

  • Identify desired outcomes. First, look to your program’s mission and goals. Ask yourself, “What activities are we doing? Why are we doing these particular activities?” The answer to the “Why” is usually an outcome. For instance, if one overall goal is to increase law enforcement’s ability to make appropriate referrals for trafficking victims, what are the benefits to your target population?
  • Choose desired outcomes and prioritize them. Specify a target goal on which to base your outcomes. For example, training law enforcement to better identify trafficking victims may be one way of meeting your goal; prioritizing outcomes helps you decide how to achieve it.
  • Identify what information you need to measure the outcomes. For example, one way of measuring whether law enforcement’s understanding of the needs of trafficking victims has increased (outcome) is to look at the number of appropriate service referrals law enforcement makes for trafficking victims after you have trained the officers (measure).
  • Decide how that information can be efficiently and realistically gathered. For example, think about what documentation (e.g., grant applications, monthly reports) or personnel can help you support your outcomes.
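To make the third step above concrete, here is a small sketch of how the measure "number of appropriate referrals after training" could be derived from referral records. The record format, dates, and training date are all hypothetical.

```python
# Illustrative: deriving the measure "number of appropriate referrals made
# after training" from hypothetical referral records.
from datetime import date

referrals = [
    {"date": date(2009, 3, 10), "appropriate": True},
    {"date": date(2009, 9, 2),  "appropriate": True},
    {"date": date(2009, 9, 15), "appropriate": False},
    {"date": date(2009, 10, 1), "appropriate": True},
]
training_date = date(2009, 6, 1)  # hypothetical date officers were trained

# Keep only referrals made after the training, then count the
# appropriate ones -- that count is the measure.
after = [r for r in referrals if r["date"] >= training_date]
appropriate_after = sum(r["appropriate"] for r in after)
print(appropriate_after)  # 2 appropriate referrals after training
```

Note that who judges a referral "appropriate" (and by what criterion) is itself part of the Evidence element of your objective and should be defined before data collection begins.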

Tips To Remember!

Be sure that you don’t confuse outcomes and outputs:

  • Outcomes are the changes that result from the program or its activities (e.g., graduating from a shelter to transitional housing).
  • Outputs are units of services (e.g., the number of people who went through the program).

Formulate Research Questions

Research questions define the issues the evaluation will investigate and are worded so that they can be answered using research methods. These questions emerge from the goals and objectives of the evaluation and determine its design.

Here are some of the questions you may consider for your evaluation:

  • What obstacles does the community face in providing services to victims?
  • Is there a viable network of services to respond adequately and appropriately to the needs of victims?
  • Has the number of victims being identified and served increased? If so, what is the increase?
  • What additional or enhanced services have been provided?
  • Have previously unserved victims received services?
  • What approaches have been successful in overcoming obstacles to establish or enhance services for victims?
  • How were these approaches developed and implemented?
  • How do you plan to sustain your victim service program over time?

To ensure that you are asking the right research questions—

  • Verify the focus of your evaluation. Why are you evaluating your program? For example, your focus may be to ensure that your program’s activities are meeting the needs of its target population: crime victims.
  • Gear the questions to the decisions you want to make. For example, you may tailor your questions toward finding ways to fill gaps in meeting your program’s goals or improving its services.
  • Make the questions specific. For example, you may want to know how many staff are needed to provide basic counseling services.

Develop a Program Planning Model

A planning model is a graphic representation that clearly identifies the logical relationships between program conditions, inputs, activities, outcomes, and impacts. Planning models are beneficial because they—

  • Identify program goals, objectives, activities, desired outcomes, and impacts.
  • Clarify assumptions about and relationships between program efforts and expected results.
  • Help specify what to measure through evaluation.
  • Guide assessment of underlying assumptions.
  • Support the development of effective proposals.

Discussed below are—

  • Knowing the Elements of a Planning Model
  • Creating and Using a Program Planning Model

Knowing the Elements of a Planning Model

Exhibit 4 shows the elements of a planning model, which visually displays day-to-day program activities and how they link to the outcomes the program is working to achieve. The process of creating planning models allows you to think systematically about what outcomes you hope to achieve and how you plan to achieve them. Exhibit 4 also illustrates the conditions that your program will address through the services you provide, the resources available for carrying out program activities as well as the activities or services your program provides, intended or unintended outcomes, and the impacts of your program. Each step of the planning model provides a point of reference against which you can compare your program’s progress toward achieving its desired outcomes and impact.

Exhibit 4: Elements of a Planning Model

A planning model can guide all types of evaluations and help you specify the performance measures to be collected. For process evaluations, the planning model identifies your expectations of how the program should work so that you can see whether your program has derailed or is on track. For outcome/impact evaluations, the planning model displays how and for whom certain services are expected to create change.

Creating and Using a Program Planning Model

As displayed above, planning models are constructed left to right and can include arrows that show temporal sequence. You can make a planning model for your program by following the steps in exhibit 5.

Exhibit 5
Steps For Creating a Program Planning Model
Create a table with seven columns and follow the steps below:

Step 1: List all the background factors or people you think may influence the relationship between your program activities and goals in column 1. For example, lack of knowledge about the rights and needs of trafficking victims is a background factor.

Step 2: List program inputs in column 2. For example, the amount of funds that will go toward increasing the knowledge of the rights and needs of trafficking victims is an input.

Step 3: List program activities in column 3. For example, training law enforcement officers on the rights and needs of trafficking victims is an activity.

Step 4: List program outputs in column 4. For example, the number of law enforcement officers trained on the rights and needs of trafficking victims is an output.

Steps 5 and 6: List all outcomes occurring during or after your program activities that could affect how or whether you accomplish your program goals in columns 5 and 6. For example, a better understanding of human trafficking is an immediate outcome and a better understanding of how to identify trafficking victims is an intermediate outcome.

Step 7: List program impacts in column 7. Listing program goals helps you see the overall effects of your inputs as well as the changes resulting from them. For example, one goal might be to increase law enforcement’s ability to refer trafficking victims to appropriate services. Start with short-term goals because it is often difficult to measure and document long-term goals.

One row of a sample planning model is shown in exhibit 6. A sample planning model template is presented in appendix C (PDF 19.7 KB).

Exhibit 6
Sample Program Planning Model

Step 1 (Conditions): Lack of knowledge about the rights and needs of trafficking victims.
Step 2 (Inputs): Training funds.
Step 3 (Activities): Law enforcement training.
Step 4 (Outputs): Twelve law enforcement officers trained.
Step 5 (Immediate Outcomes): Better understanding of human trafficking.
Step 6 (Intermediate Outcomes): Better understanding of how to identify victims of trafficking.
Step 7 (Impacts): Increased ability to refer victims to appropriate services.

After you have identified the information for each element of your planning model, you should be able to explain how each element leads to the next. These connections show how you expect change to occur as a result of your program. (If I do step 3, I can expect steps 4 and 5 to happen, which will cause step 6 to occur.)
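A planning-model row like the sample in exhibit 6 can also be sketched as a simple ordered data structure, which makes the assumed left-to-right chain explicit. The entries below are taken from the sample row; the representation itself is just one illustrative option.

```python
# A planning-model row expressed as an ordered sequence, mirroring the
# seven columns of the sample model. Entries come from the sample row.
planning_row = [
    ("Conditions", "Lack of knowledge about the rights and needs of trafficking victims"),
    ("Inputs", "Training funds"),
    ("Activities", "Law enforcement training"),
    ("Outputs", "Twelve law enforcement officers trained"),
    ("Immediate outcomes", "Better understanding of human trafficking"),
    ("Intermediate outcomes", "Better understanding of how to identify victims of trafficking"),
    ("Impacts", "Increased ability to refer victims to appropriate services"),
]

# Print each element with an arrow to the next, making the assumed
# causal chain from conditions through impacts explicit.
for i, (label, entry) in enumerate(planning_row):
    arrow = " ->" if i < len(planning_row) - 1 else ""
    print(f"{label}: {entry}{arrow}")
```

Reading the printed chain aloud is a quick test of your model: if you cannot explain why one element should lead to the next, that link needs rethinking before you evaluate against it.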

There is more than one way to build a planning model. Your methods will depend on the needs of your program and its participants. Overall, consider the following:

  • What services and activities does your program provide?
  • Who are its participants? Whom will it influence?
  • How will your program activities lead to the expected outcomes?

Tips To Remember!

  • The planning model is only a tool. There is no single right way to develop a planning model.
  • Creating the planning model should be an interactive, dynamic process.
  • Keep the model to one page, if possible.
  • Identify and then organize the elements of your planning model:
    • Use boxes.
    • Group sequentially.
    • Add arrows to show relationships.
    • Consider the element of time.
    • Start at any point along the model—try beginning with outcomes.

Develop an Evaluation Design

An evaluation design simply describes the type of evaluation you are going to conduct. The type of evaluation you use will direct you to the data collection methods and sources that will help you answer the questions posed. As mentioned earlier in this guide, evaluations are designed to answer different questions. Process evaluations can help answer the overall question, “What is my program doing?” Outcome/impact evaluations can help answer the overall questions, “Is my program achieving its goals and objectives?” or “Is my program effecting change?” Exhibit 7 reviews the specific questions that can be answered by each evaluation type, methods that can be used to collect data, and sources of information.

Exhibit 7
Types of Evaluations

Process Evaluation

Questions answered:

  • What is the program or intervention?
  • Is the program or intervention being implemented as intended?
  • Is it reaching the target population?
  • What works and what doesn’t work and for whom?
  • How much does it cost?

Data collection methods:

  • Document review
  • Observation
  • Interviews

Information sources:

  • Program documents (e.g., award documentation, proposals, forms)
  • Meeting minutes
  • Marketing materials
  • Curricula/training materials
  • Program participants
  • Program staff

Outcome/Impact Evaluation

Questions answered:

  • Is the program achieving its objectives?
  • Is the program achieving its goals?
  • Is the program achieving its intended outcomes?
  • Is it effective?
  • Is it achieving its long-term impacts?
  • Can we attribute change to the program?

Data collection methods:

  • Interviews
  • Observation
  • Focus groups
  • Surveys and questionnaires
  • Document review

Information sources:

  • Official records (e.g., benefit letters, documents of certification)
  • Program participants
  • Program staff
  • Program documents

Developing an evaluation design involves two steps:

  • Selecting the design.
  • Selecting a data collection method.


Evaluation Design

Various evaluation designs are available, each requiring different levels of experience, resources, and time to execute. Consider the examples of evaluation designs discussed below for your program.

Pre-Post Designs. This design involves assessing participants both before and after the program activity or service (intervention), allowing you to measure change over time. For example, if you are training law enforcement officers as part of your program, you could apply the pre-test/post-test design in the following ways:

  • Using the simplest design, officers would be assessed after completing the training. This is a post-test design. A drawback to this approach is that there is no objective indication of the amount of change in participants because there is no measure of what their attitudes or knowledge levels were before the program or intervention took place.
  • Measuring change in participants requires assessing them both before and after the intervention in a pre-test/post-test design. This involves assessing the same participants in the same manner both before and after training to ensure that the results of each test are comparable.
  • To assess both the amount of change and how long that change lasts, you can administer a pre-test/post-test/post-test design. This requires assessing participants before, after, and then again 1, 3, or 6 months after the intervention. This allows you to compare both the amount of change between the start and end of the program intervention as well as the change that occurs over time after the intervention. As with the previous design, you must assess the same people in the same manner all three times. This design is the most feasible for assessing change over time and will provide you with data that allow you to track your target population (e.g., clients, service providers, law enforcement, the community at large) over time.

The benefit of the pre-post design is that it is relatively easy to implement. The drawback is that you cannot say conclusively that differences after the intervention are due to your program’s efforts. Consider the previous example of training law enforcement officers. These same officers may have received training through another agency during the intervention period that caused the change. To determine whether your training caused the change, you would need to also assess the knowledge of law enforcement officers who did not take the training at the same points in time. This type of comparison design, however, may not be feasible; the time and resources available for evaluating your program may not be sufficient for you to use comparison groups. You may want to consult with a local evaluator to discuss these and other possible designs for evaluating your program. Exhibit 8 clarifies different options of the pre-post design.

Exhibit 8
Summary of Pre-Post Design Options
Design | Characteristics | Advantages | Disadvantages | Resources Required
Post-test | Measures program participants after the intervention | Requires access only to one group | No valid baseline measure for comparison; cannot assess change | Low
Pre-test/Post-test | Measures program participants before and after intervention | Provides a baseline measure; requires access only to one group | Cannot prove causality | Moderate
Pre-test/Post-test/Post-test | Measures program participants before and twice after the intervention | Enables you to determine if your program has sustained effects | Cannot prove causality; may be difficult to follow up with participants | Moderate
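To illustrate what a pre-test/post-test/post-test design yields, the sketch below computes the average change for invented knowledge-test scores (0–100) from five hypothetical training participants measured at all three points in time:

```python
# Sketch: summarizing a pre-test/post-test/post-test design.
# All scores are hypothetical knowledge-test results (0-100) for the
# same five participants, measured at each of the three points in time.
pre = [40, 55, 50, 45, 60]        # before the intervention
post = [70, 80, 75, 65, 85]       # immediately after the intervention
followup = [65, 75, 70, 60, 80]   # 6 months later

def mean(scores):
    return sum(scores) / len(scores)

gain = mean(post) - mean(pre)            # change during the intervention
sustained = mean(followup) - mean(pre)   # change still present at follow-up

print(f"Average gain at post-test:  {gain:.1f} points")
print(f"Average gain at follow-up:  {sustained:.1f} points")
```

Note that, as the exhibit cautions, this arithmetic shows change but cannot by itself prove that your program caused it.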

Mixed Methods Evaluation Design. A mixed methods design integrates process and outcome designs. This approach can increase the chances of accurately describing the processes and assessing the outcomes of your program. It requires using a mixture of data collection methods, such as case studies and surveys, to ensure that the intervention was implemented properly and to identify its immediate and intermediate outcomes. Mixed methods are strongly recommended for large-scale evaluations.

Data Collection Methods

After you have selected the evaluation design, you will need to select appropriate data collection methods. The methods you choose will depend on the type of evaluation you choose to conduct, the questions to be addressed, and the specific data you need to answer your evaluation questions. Before you consider selecting data collection methods, you should first—

  • Review existing data. Take a look at the data you routinely collect and decide whether to use them in this evaluation.
  • Define the data you need to collect. Figure out which data you still need to collect. Make a list of topics you need to know more about and develop a list of the data you will collect. Finalize the list based on the importance of the information and its ease of collection.

This section begins with a description of qualitative and quantitative approaches and ends with an overview of the methods you can use for collecting data. The most important thing to remember is to select the method that will allow you to collect data that you can use to answer your evaluation questions.

Qualitative Methods. Qualitative methods capture data that are difficult to measure, count, or express in numerical terms. Various qualitative methods can be used to collect data, three of which are described below.

  • Observation involves gathering information about how a program operates. Data can be collected on the setting, activities, and participants. You can conduct observations directly or indirectly in a structured or unstructured manner. Direct observation entails onsite visits during which you collect data about program processes by witnessing and taking notes on program operations. Indirect observation takes place when you discreetly observe program activity without the knowledge of program staff. You will need to develop a protocol for observations that details the start and end date of the visit, staff who will be interviewed (if direct), and program activities to be observed.
  • Interviews involve asking people to describe or explain particular program issues or practices. You can conduct interviews by telephone or in person. Interviews allow you to gather information on unobserved program attributes. For example, through interviewing program staff, you may find that their opinions of program operations do not mirror those of the program’s management. Depending on the type of interview you are conducting, you may or may not need a guide. For example, informational, conversational interviews are the least structured and do not require structured guides; fixed-response interviews are the least flexible and require the interviewer to follow a structured guide exactly as written. Again, the interview may include a combination of open-ended and closed-ended questions, depending on the type of interview.
  • Focus groups involve group discussions guided by an evaluator acting as a facilitator using a set of structured questions. The goals of the discussion may vary, but this method is designed to explore a particular topic in depth. The discussion group is small, the conversation is fluid, and the setting is nonthreatening. Focus group participants are not required to complete an instrument, but notes are taken by the interviewer/facilitator or a second person during the discussion period. The primary purpose for using focus groups is to obtain data and insights that can only be found through group interaction.

Tips To Remember!

    • Choose opening questions that are designed to break the ice.
    • Use transition questions to get the data you need.
    • Be sure to get key questions answered before you finish.
    • Be sure to include ending questions that summarize the discussion and gather any missing information.

Sample observation, interview, and focus group guides are available in appendixes D (PDF 74.6 KB), E (PDF 19.8 KB), and F (PDF 63.6 KB).

Quantitative Methods. Quantitative methods capture data that can be counted, measured, compared, or expressed in numerical terms. Various quantitative methods can be used to collect data, two of which are described below.

  • Document review involves collecting and reviewing existing written material about the program. Documents may include program records or materials such as proposals, annual or monthly reports, budgets, organizational charts, memorandums, policies and procedures, operations handbooks, and training materials. Reviewing program documents can provide an idea of how the program works without interrupting program staff or activities.
  • Questionnaires and surveys involve collecting data directly from individuals. This approach allows you to gather data directly from the source. Through self-administered or face-to-face surveys, questionnaires, checklists, or telephone or mail surveys, you can find out exactly how your program is making an impact. To administer a survey, however, you must develop a protocol that includes a sampling plan and data collection instruments. The sampling plan describes who will be included in the study and the criteria by which they will be selected to participate.

    Questionnaires and surveys are written instruments that include a number of closed- and open-ended questions. You can design your instrument to collect information that will help you measure a particular factor. For example, you can design your survey to measure changes in knowledge, attitude, skills, or behavior. Remember that when you are developing your questionnaire or survey, questions should be—

    • Well-constructed, easily understood, unambiguous, and objective.
    • Short, simple, and specific.
    • Grouped logically.
    • Devoid of vague qualifiers, abstract terms, and jargon.

A sample document review guide and survey instrument are available in appendixes G (PDF 25.2 KB) and H (PDF 56.8 KB).

Overview of Data Collection Methods. After you choose a data collection method, you will need to develop protocols for it. Overall, the data collection tools you use or develop should contain instructions that are well-written, clear, and easy to understand. The instrument should appear consistent and well-formatted to make it easy to locate certain sections for reference and analysis. Appendix I, an “Instrument Development Checklist” (PDF 70.9 KB), will guide you as you develop data collection instruments.

Be sure to provide an overview of the evaluation plan, review the data collection instruments, and allow time for staff to practice using the instruments before administering them. Each of the data collection methods described above is summarized in exhibit 9.

Exhibit 9
Overview of Data Collection Methods
Method | Type | Overall Purpose | Advantages | Challenges
Observation | Qualitative | To gather information first-hand about how a program actually works | Can see program in operation; requires small amount of time to complete | Requires much training; expertise needed to devise coding scheme; can influence participants
Interview | Qualitative | To explore participant perceptions, impressions, or experiences and to learn more about their answers | Can gather indepth, detailed information | Takes much time; analysis can be lengthy; requires good interview or conversation skills; formal analysis methods can be difficult to learn
Focus Group | Qualitative | To explore a particular topic in depth, get participant reactions, understand program issues and challenges | Can quickly get information about participant likes and dislikes | Can be difficult to manage; requires good interview or conversation skills; data can be difficult to analyze
Document Review | Quantitative | To unobtrusively get an impression of how a program operates | Objective; least obtrusive; little expertise needed | Access to data may be tricky; data can be difficult to interpret; may require a lot of time; data may be incomplete
Questionnaire and Self-Administered Survey | Quantitative | To gather data quickly and easily in a nonthreatening way | Anonymous; easy to compare and analyze; can administer to several people; requires little expertise to gather data but some expertise needed to administer; can get lots of data in a moderate timeframe | Impersonal; subjective; results are easily biased
In-Person Survey | Quantitative | To gather data quickly and easily in a nonthreatening way | Can clarify responses | Requires more time to conduct than self-administered survey; need some expertise to gather and use

Tips To Remember!

  • Ask only necessary demographic questions.
  • Make sure you ask all of the important questions.
  • Consider the setting in which the survey is administered or disseminated.
  • Assure your respondents of their anonymity and privacy.
If you are required to collect personal or demographic data from potential respondents, it is important that you (1) gain their consent, (2) explain how the information will be used and reported, and (3) explain how the information will be stored and maintained. Clearly explain the terms of confidentiality and any legal or agencywide procedures that govern the collection of demographic data. For more information on informed consent, ethics, and confidentiality, consult the Guide to Protecting Human Subjects in this series.

Conduct the Evaluation

The type of evaluation you plan to conduct will determine what you need to measure and subsequently the data you need to collect. Start your evaluation by—

  • Reviewing your evaluation plan.
  • Developing data collection protocols.
  • Training program staff.

After you have developed your evaluation design and selected your method of data collection, the next step is to conduct the evaluation by collecting, analyzing, and interpreting the data.

Collect Data

To ensure that you collect the appropriate data, start with a data collection plan. This plan should outline your evaluation questions, the data you need to answer those questions, the data sources you expect to use (e.g., program staff, participants), your data collection methods (e.g., observation, interview) and instruments (e.g., surveys, interview guide), and your schedule (e.g., site visit schedule or survey administration dates). The plan also may include how you will enter, track, and store or secure data. The plan allows you to compare what you planned to do in your program with what you actually did and to monitor your overall progress.
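One simple way to keep such a plan organized is as a set of structured records, one per evaluation question. Everything in this sketch (the questions, sources, and schedules) is illustrative only:

```python
# Sketch: a data collection plan kept as structured records.
# Every field value below is hypothetical and for illustration only.
plan = [
    {
        "evaluation_question": "Did officers' knowledge of trafficking laws improve?",
        "data_needed": "Pre/post knowledge-test scores",
        "source": "Training participants",
        "method": "Self-administered survey",
        "schedule": "First and final day of training",
    },
    {
        "evaluation_question": "Was the program implemented as designed?",
        "data_needed": "Session attendance; activities delivered",
        "source": "Program staff",
        "method": "Document review",
        "schedule": "Monthly",
    },
]

# A quick completeness check: every question must have a method and a source.
for row in plan:
    assert row["method"] and row["source"], "incomplete plan entry"

for row in plan:
    print(f"- {row['evaluation_question']} -> {row['method']} ({row['schedule']})")
```

Keeping the plan in this form makes it easy to compare what you planned against what you actually collected as the evaluation proceeds.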

Many types of data will be collected from various sources and will be driven by the key evaluation questions. For process evaluations, you may collect data on aspects of your program that relate to program activities such as—

  • Program interventions.
  • Characteristics of clients and program staff.
  • Resources.
  • Organizational structures.
  • Standard operating procedures.
  • Perceptions of program staff and participants about the program’s effectiveness and efficiency.
  • Internal and external factors influencing the attainment of program goals.

For outcome/impact evaluations, you may collect data that measure intermediate outcomes, which may include the number of—

  • Services provided to precertified victims.
  • Law enforcement or social service professionals who received training.
  • Collaborative partners within a region.

To get the best quality data, administer your instrument in the same way each time. When interviewing, for example, ask the questions in the same order and in the same manner. Other important suggestions for properly entering, tracking, and securing your data are listed below.

Data Entry. You will need to create a system for entering data. Your data entry system will allow you to see, at a glance, how many people have responded, when they responded, and which data a respondent did not provide. Before entering data from hardcopies (i.e., paper documents), be sure to make a copy of each form. You can make edits or comments on the copies. If the data are in electronic form, be sure to make a master backup copy on your hard drive or keep separate copies of your database on jumpdrives or CD–ROMs. Exhibit 10 shows a sample data entry sheet.

Exhibit 10
Sample Data Entry Sheet
Title of Instrument | Respondent Identification Number | Completion Date | Q. Have you assisted a trafficking victim in the past 6 months? (1 = Yes; 2 = No; 3 = Unsure) | Q. Have you received training on trafficking victims’ rights and laws? (1 = Yes; 2 = No; 3 = Unsure)
Law Enforcement Training Feedback Survey | 061704 | 05/05/09 | 1 | 1
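A data entry sheet like exhibit 10 can be kept as a plain CSV file that is easy to check for missing responses. This is a minimal sketch: the second respondent ID and all answers are invented, and it writes to an in-memory buffer rather than a real file.

```python
import csv
import io

# Sketch: a data entry sheet as CSV. Respondent IDs and answers are
# hypothetical; answer codes follow exhibit 10 (1 = Yes; 2 = No; 3 = Unsure).
fieldnames = ["respondent_id", "completion_date",
              "assisted_victim", "received_training"]
rows = [
    {"respondent_id": "061704", "completion_date": "05/05/09",
     "assisted_victim": "1", "received_training": "1"},
    {"respondent_id": "061705", "completion_date": "05/06/09",
     "assisted_victim": "2", "received_training": ""},  # answer not provided
]

# Replace io.StringIO() with open("entries.csv", "w", newline="") for a real file.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)

# At a glance: how many responded, and which answers are missing.
missing = [(r["respondent_id"], field)
           for r in rows for field in fieldnames if r[field] == ""]
print(f"{len(rows)} responses entered; {len(missing)} missing value(s): {missing}")
```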

Data Tracking. It also is important to track the data you collect. There are several ways to do this. You can create a chart that includes the title of the instrument, the source of the data (e.g., document review, program participants), how the instrument was delivered, and the start and end dates of the data collection. It also might be a good idea to use computer software, such as Excel or Corel Quattro Pro. Exhibit 11 provides a sample data tracking sheet.

Exhibit 11
Sample Data Tracking Sheet
Title of Instrument | Source of Data | How Administered? | Data Collection Period | Number of Instruments Administered
Client Satisfaction Survey | Program participants | Self-administered | May 1–30, 2009 | 25
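In spreadsheet software such as Excel, exhibit 11 translates directly into rows and columns; the same tracking sheet can also be maintained with a short script. In this sketch the first entry mirrors exhibit 11, while the second entry (and its count) is hypothetical:

```python
# Sketch: a data tracking sheet. The first entry mirrors exhibit 11;
# the second entry is hypothetical and for illustration only.
tracking = [
    {"instrument": "Client Satisfaction Survey", "source": "Program participants",
     "administered": "Self-administered", "period": "May 1-30, 2009", "count": 25},
    {"instrument": "Law Enforcement Training Feedback Survey",
     "source": "Training participants",
     "administered": "In person", "period": "May 5, 2009", "count": 40},
]

# Summarize collection activity across all instruments.
total = sum(row["count"] for row in tracking)
for row in tracking:
    print(f"{row['instrument']}: {row['count']} administered ({row['period']})")
print(f"Total instruments administered: {total}")
```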

Data Storage and Security. You will also want to think about where and how you will store and secure the data you collect. Be sure to store hardcopy forms in a place safe from damage or loss. For electronic data, be sure to back up hard drives or keep separate copies of your database on an external drive or CD–ROM.

Analyze and Interpret Data

Analyzing the data you have collected should begin with a review of your research questions. This will help you organize your data and focus your analysis. Here are a few tips for analyzing and interpreting qualitative and quantitative data.

Qualitative Data. Qualitative data are typically obtained from open-ended questions, the answers to which are not limited by a set of choices or a scale. For example, qualitative data include answers to questions such as “What experiences have you had working with victims?” or “How can services to victims be improved in your area?”—but only if the study participant is not restricted by a preselected set of answers. You would typically ask these types of questions during interviews, focus groups, or as open-ended questions on a survey instrument. They yield responses that explain in detail the participant’s position, knowledge, or feelings about an issue. Analyze qualitative data to look for trends or patterns in the responses. These trends or patterns are the general statements that you can make about what you have learned about your community.

Below are some basic steps for analyzing qualitative data:

  • Review all of the data.
  • Organize and label responses into similar categories or themes. For example, all comments or concerns can be labeled “Suggestions,” and program activities can be labeled “Program Activities.”
  • Try to identify patterns or associations among the data (e.g., all of the law enforcement officers who attended training have less than 1 year of experience).
  • Look for similarities and differences among your respondents. This review will allow themes to emerge from the data and provide a basis for your coding scheme.
  • Develop a coding scheme based on the data collected. For example, if you find that all of the law enforcement officers trained have less than 1 year of experience on the job, then you can code that response, “Less 1,” so that when you come to that response in subsequent reviews, you can easily categorize and code it.
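The organizing and coding steps above can be sketched programmatically. Here, hypothetical responses are matched against a simple keyword-based coding scheme; in practice, your scheme and its themes would emerge from reviewing your own data rather than being fixed in advance:

```python
# Sketch: a keyword-based coding scheme for qualitative responses.
# The responses, themes, and keywords below are all hypothetical.
coding_scheme = {
    "Suggestions": ["suggest", "recommend", "should"],
    "Program Activities": ["training", "session", "workshop"],
    "Experience": ["year", "experience"],
}

responses = [
    "I suggest more role-playing during the training.",
    "The workshop sessions were well organized.",
    "I have less than 1 year of experience on the job.",
]

def code_response(text):
    """Return the sorted list of theme codes whose keywords appear in the text."""
    text = text.lower()
    return sorted(theme for theme, keywords in coding_scheme.items()
                  if any(keyword in text for keyword in keywords))

for response in responses:
    print(code_response(response), "-", response)
```

A response can receive more than one code, which is exactly what you want when looking for patterns or associations across themes.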

Quantitative Data. These data are collected in surveys or through other means in the form of numbers and are usually presented as totals, percentages, and rates. For example, quantitative data include answers to questions such as “How many hours do you spend looking for service providers for your clients?” or “How many victims have you served this year?” You would typically ask these closed-ended questions in a survey, on which the participant circles a preset answer choice or provides a numeric response. Use quantitative data to generate averages or percentages across the responses. These averages or percentages tell you what proportion of your respondents feel a certain way or have a certain level of knowledge about an issue.
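Those averages and percentages take only a few lines to compute. The sketch below tallies hypothetical answers (1 = Yes; 2 = No; 3 = Unsure) to a closed-ended question like the one in exhibit 10:

```python
from collections import Counter

# Sketch: percentages across closed-ended survey responses.
# The answer list is hypothetical; codes follow exhibit 10.
answers = [1, 1, 2, 1, 3, 2, 1, 1]  # "Have you assisted a trafficking victim?"
labels = {1: "Yes", 2: "No", 3: "Unsure"}

counts = Counter(answers)
for code, label in labels.items():
    pct = 100 * counts[code] / len(answers)
    print(f"{label}: {counts[code]} of {len(answers)} ({pct:.1f}%)")
```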

When embarking on the data analysis process, keep in mind the following questions:

  • What do the raw data tell you?
  • Are the results low, average, or high?
  • Are there any red flags or extreme values?
  • What can you infer from the data?

Tips To Remember!

  • Copy or back up your data before you begin your analysis.
  • Keep track of what you have or have not analyzed.
  • Use computer software to organize, enter, track, and secure your data.

Depending on your skills as a qualitative or quantitative data analyst, you may want to hire a local evaluator or consultant. Some questions to ask when considering whether you need outside help include the following:

  • Do you have enough experience analyzing qualitative and quantitative data to make sense of the data collected?
  • Do you have sufficient time to thoroughly analyze the data?
  • Do you have the funds to hire an evaluator?
  • Are you able to use the data to answer the research questions in the most effective way?

The more questions that you, and those in your initiative, answer “no” to, the more advantageous it might be to hire a local evaluator or consultant, if you have the funds, to assist you with the data analysis. The Guide to Hiring a Local Evaluator, included within this series, can help with finding an evaluator.

Present and Report Results

In summarizing evaluation results, remember the purpose of your evaluation and the audience for your report. The report will include an interpretation of the results of your evaluation and will serve several purposes:

  • Demonstrate accountability and attract resources.
  • Educate stakeholders and the public about your program’s value.
  • Gain support for your program.
  • Guide decisionmaking.

Tips on creating the report and considering your audience follow.

The Report

The evaluation report is a comprehensive document that describes the results of implementing your evaluation plan. In general, the type and structure of your report will depend on your audience, but every evaluation report has several integral parts:

  • Title Page—A single page that includes the name of your program, the name of the evaluator or company (if applicable), and the date the report is prepared.
  • Table of Contents—A list of topics and their page locations in the report.
  • Executive Summary—A very brief overview of the purpose of the evaluation, evaluation questions, and procedures that highlights your findings and recommendations.
  • Introduction—A description of the background, purpose, and contents of the report. This section sets the stage for the report by providing a description of your program and the type of evaluation conducted, the target audience, goals of the evaluation, and the questions addressed.
  • Methodology—A description of the evaluation plan, which includes a description of the evaluation design, data collection strategies and instruments, and analysis methods.
  • Findings and Results—A summary of your analysis and an interpretation of your findings. This section should provide extensive details about the results of the evaluation (e.g., accomplishments, program strengths and weaknesses, participant reactions or knowledge and skill gains, effectiveness in bringing about changes, outcomes, impacts). The section should also document the limitations of the evaluation (e.g., cautions about how to use the findings). Generously use tables, charts, and graphs in addition to text to illustrate the results.
  • Conclusions and Recommendations—A summary of the implications of the findings, which includes how the findings will be used, strengths and weaknesses revealed, and decisions that must be made as a result of the evaluation.
  • Appendixes—Documents that support aspects of the report and further illustrate its findings or that describe the evaluation overall. For example, you will want to include the data collection instruments, bibliography of resources consulted, and diagrams to further explain how you implemented the evaluation.

Overall, the evaluation report is your chance to document the results of your program activities. A sample report outline is provided in appendix J (PDF 72.1 KB).

Tips To Remember!

  • Communicate clearly and effectively. Be succinct.
  • Avoid making sweeping generalizations.
  • Note the limitations of your data and conclusions.
  • Cross-check your data and sources, but refrain from suppressing unfavorable results.

The Audience

To ensure that you get the right message out, you must think about your audience and its specific information needs. Make sure that your conclusions are relevant to your audience. Consider these questions:

  • Who will be reading these reports?
  • What do you most hope to convey?
  • What do you hope they learn from your reports?
  • How can they duplicate what you have done to achieve similar results?

There are potentially many audiences you may want to target. These include program staff, community stakeholders, collaborators or external partners, policymakers, and the media. Your staff may use this information internally to improve program function, effectiveness, and efficiency. Community stakeholders and external collaborators may want to implement an evaluation similar to yours. Finally, depending on the results, policymakers and even the media also may be interested in your findings. Policymakers may be interested in the success or overall nature of the evaluation in terms of making strategic decisions about your program and other programs like yours, while your findings may also increase the visibility of your program in the media. Therefore, what you highlight for each audience will differ greatly. The bottom line: Consider your audience.

Use Evaluation Results

You can use the results of your evaluation in ongoing program planning, program refinement, or program sustainability.

Program Planning

You can use the results of evaluations and performance measurement to make internal decisions about the planning and management of your program. This ongoing activity can aid decisionmaking because it helps uncover concrete evidence of the effectiveness of your program. For example, if you are providing a service that is rarely used or has shown little impact, the results will help you decide whether to continue providing the service. Your findings also will help guide you in day-to-day operational decisions that support program activities. Overall, your findings will help keep you informed so that you can think strategically about what modifications (e.g., revamping, enhancing, changing) are necessary to improve your program’s activities (e.g., fill gaps).

Program Refinement

You can use evaluation results and performance measures to refine your program. Similar to ongoing planning, you will learn what works and what doesn’t work so that you can fine-tune your program’s activities and services. To keep your program operating effectively and efficiently, you must continuously monitor the activities and services you provide. Using the information in this guide will help you understand how all of these factors, independently or together, influence your program and how you can use the information resulting from the evaluation to refine your efforts so that you continue to see results.

Program Sustainability

Sustainability refers to a program’s capacity for continued success within a continuously changing environment. Evaluations are important tools for determining sustainability because the results inform you about the health of your program. Performance measures and evaluation results can be used to—

  • Demonstrate the effectiveness of your program.
  • Justify and maintain your program or specific program activities.
  • Expand ongoing programs or specific activities.
  • Obtain additional funding to support outcome activities.
  • Plan or implement a new program.
  • Determine the needs of your target population.
  • Increase the chances of reaching and effectively serving your target population.

Appropriate planning, ongoing monitoring, and periodic refinements are all integral to ensuring sustainability. Employing the program evaluation tools provided in this guide will help you find ways of measuring your performance so that you can remedy challenges before they become overwhelming and plan for shifts in client, community, or program needs.

Apply the Basics

Applying what you have learned requires preparation and planning. Here is a five-step plan for preparing to evaluate your program:

Step 1: Clarify the focus of your evaluation. What is your purpose? Who is your audience?

Step 2: Assess your resources. Who will conduct the evaluation? What is your timeline for conducting this evaluation? What is your budget?

Step 3: Form an evaluation team, if possible. This team should include the independent evaluator you have selected, if appropriate, and/or program staff to brainstorm, provide rationale and guidance, and act as quality assurance overseers to ensure that you are conducting an efficient and effective evaluation. (Refer to the Guide to Hiring a Local Evaluator in this series for help in deciding who will conduct the evaluation.)

Step 4: Develop an evaluation plan. This plan serves as your roadmap for conducting the evaluation. Developing your plan involves reviewing program objectives, defining needs and questions, choosing an approach, and selecting indicators. This plan lists the concrete steps you will take during the evaluation process. See the sample evaluation plan template in appendix A (PDF 22.7 KB).

Step 5: Prepare an evaluation checklist. Although this information is included in your evaluation plan, the additional checklist serves as a quality assurance tool. An example of such a checklist is available in appendix B (PDF 57.2 KB). Use this checklist at each stage of your evaluation to ensure that you are making the right decisions as you implement your plan.


  • Conduct a needs assessment.
  • Define your goals and objectives.
  • Identify outcomes.
  • Formulate research questions.
  • Develop a program planning model.
  • Develop an evaluation design to include data collection methods and instruments.
  • Conduct the evaluation to include collecting and analyzing data.
  • Present and report your results.
  • Use evaluation results for overall program planning, refinement, or sustainability.

It is important to monitor your progress frequently and to identify any changes that are needed in your evaluation plan. Keep in mind that your plan is a roadmap to your evaluation, but it should not be viewed as set in stone. If something is not working, you need to correct it along the way to ensure a successful evaluation—one that produces the results you need to inform your future plans for services and demonstrate program effectiveness.


By conducting performance measurement and program evaluation, you will be able to determine the extent to which you have built on existing community resources and are meeting the unique needs of your target population. As this guide has illustrated, developing performance measures and conducting program evaluations are necessary to determine the outcomes of your program activities.

The essential elements of measuring program performance and conducting evaluations include proper planning, identifying outcomes and measures, conducting needs assessments, developing appropriate questions, defining goals and objectives, and choosing the right evaluation design. Each step of the process builds on the previous step, so the instruments you design and the data you collect, analyze, and report all contribute to a successful program evaluation. The results can assist you with ongoing program planning, program refinement, and, most important, program sustainability.



References
Atkinson, A., M. Deaton, R. Travis, and T. Wessel. 1999. Program Planning and Evaluation Handbook: A Guide for Safe and Drug-Free Schools and Communities Act Programs. Harrisonburg, VA: James Madison University, Office of Substance Abuse Research, Virginia Effective Practices Project.

Burt, M.R., A.V. Harrell, L.C. Newmark, L.Y. Aron, and L.K. Jacobs. 1997. Evaluation Guidebook for Projects Funded by S.T.O.P. Formula Grants Under the Violence Against Women Act. Washington, DC: Urban Institute.

Caliber Associates. 2003. Law Related Education (LRE) Toolkit. Tipping the Scales Toward Quality Programming for Youth. Washington, DC: U.S. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention.

Gupta, K. 1999. A Practical Guide to Needs Assessment. San Francisco, CA: Jossey-Bass Pfeiffer.

KRA Corporation. 1997. A Guide to Evaluating Crime Control of Programs in Public Housing. Washington, DC: U.S. Department of Housing and Urban Development, Office of Policy Development and Research.

Maxfield, M. 2001. Guide to Frugal Evaluation for Criminal Justice. Newark, NJ: Rutgers University, School of Criminal Justice.

McNamara, C. 1998. Nuts-and-Bolts Guide to Nonprofit Program Design, Marketing and Evaluation. Minneapolis, MN: Authenticity Consulting, LLC.

Office for Victims of Crime. 2002. Services for Trafficking Victims Discretionary Grant Application Kit. Washington, DC: U.S. Department of Justice, Office of Justice Programs, Office for Victims of Crime.

Office for Victims of Crime. 2004. Services for Trafficking Victims Discretionary Grant Application Kit. Fiscal Year 2004. Washington, DC: U.S. Department of Justice, Office of Justice Programs, Office for Victims of Crime.

Rossi, P.H., H.E. Freeman, and M.W. Lipsey. 1999. Evaluation: A Systematic Approach. 6th edition. Thousand Oaks, CA: Sage Publications, Inc.

Salasin, S.E. 1981. Evaluating Victim Services. Sage Research Progress Series in Evaluation (Volume 7). Beverly Hills, CA: Sage Publications.

Scriven, M. 1991. Evaluation Thesaurus. 4th edition. Newbury Park, CA: Sage Publications.

Trochim, W.M. 2006. Research Methods Knowledge Base. (version current as of October 20, 2006)

Wholey, J.S., H.P. Hatry, and K.E. Newcomer, eds. 2004. Handbook of Practical Program Evaluation. 2d edition. San Francisco, CA: Jossey-Bass.


Additional Resources
American Evaluation Association

The American Evaluation Association is an international professional association of evaluators devoted to applying and exploring program evaluation, personnel evaluation, technology evaluation, and many other forms of evaluation.

The Evaluation Center, Western Michigan University

The Evaluation Center advances the theory, practice, and use of evaluation. The center's principal activities are research, development, dissemination, service, instruction, and national and international leadership in evaluation.

Grants and Funding Web Page, Office for Victims of Crime

This web page links you to current funding opportunities, formula grant funds, and other funding resources.


Glossary
Activities are the products and services your program provides to help solve the conditions identified.

Conditions are the problems or needs the program is designed to address.

Contextual factors are background or existing factors that define and influence the context in which the program takes place.

Data are pieces of specific information collected as part of an evaluation.

Data analysis is the process of applying systematic methods or statistical techniques to compare, describe, explain, or summarize data.
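To make this concrete, a simple descriptive analysis might summarize participant ratings collected during an evaluation. The sketch below uses hypothetical 1-to-5 satisfaction ratings (not data from this guide) and Python's standard library:

```python
"""Illustrative sketch: descriptive analysis of evaluation data.
The ratings below are hypothetical examples, not from the guide."""
from statistics import mean, median

# Hypothetical 1-5 client satisfaction ratings from program participants
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

summary = {
    "n": len(ratings),                                        # number of responses
    "mean": mean(ratings),                                    # average rating
    "median": median(ratings),                                # middle rating
    "pct_satisfied": 100 * sum(r >= 4 for r in ratings) / len(ratings),
}
print(summary)
```

A real evaluation would apply the same idea (compare, describe, summarize) to whatever quantitative measures the evaluation plan specifies.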

Evaluation design details the type of evaluation you are going to conduct.

Evaluation plan provides the framework for conducting the evaluation.

Evaluator is an individual trained and experienced in designing and conducting evaluations.

Focus group is a small-group discussion, guided by a trained facilitator, used to gather data on a designated topic.

Goals are measurable statements of the desired long-term, global impact of the program. Goals generally address change.

Immediate outcomes are the changes in program participants that occur early in the delivery of program services or interventions.

Impact evaluations focus on long-term program results and issues of causality.

Impacts are the desired long-term effects of the program.

Indicators are measures that demonstrate the extent to which goals have been achieved.

Inputs are the resources that your program uses in its activities to produce outputs and outcomes.

Intermediate outcomes are program results that emerge after immediate outcomes, but before long-term outcomes.

Measures are specific (quantitative) pieces of information that provide evidence of outcomes.

Mixed methods design involves integrating process and impact/outcome designs.

Needs assessment is a process for pinpointing reasons for gaps in performance or a method for identifying new and future performance needs.

Objectives are specific, measurable statements of the desired immediate or direct outcomes of the program that support the accomplishment of a goal.

Outcomes are the intended initial, intermediate, and final results of an activity.

Outcome evaluations examine overall program effects. This type of evaluation focuses on goals and objectives and provides useful information about program results.

Outputs are the direct/immediate results of an activity.

Performance measurement is the ongoing monitoring and reporting of program accomplishments and progress toward preestablished goals.

Planning model is a graphic representation that clearly identifies the logical relationships between program conditions, inputs, activities, outputs, outcomes, and impacts.

Post-test involves assessing participants after they complete a program or intervention.

Pre-test assesses participants’ knowledge, skills, attitudes, or behavior before they participate in a program or receive intervention.

Pre-test/Post-test involves assessing participants both before and after the program or intervention.

Pre-test/Post-test/Post-test assesses participants before the program, immediately after it, and again at a later followup, measuring both the amount of change in participants and how long that change lasts.
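As a sketch of how a pre-test/post-test/post-test comparison works, the example below computes immediate and retained gains from hypothetical knowledge scores (all numbers are invented for illustration; a real evaluation would pair each participant's scores and use an appropriate statistical test):

```python
"""Illustrative sketch of a pre-test/post-test/post-test comparison.
All scores are hypothetical."""
from statistics import mean

# Hypothetical knowledge scores for the same five participants at three points
pre = [50, 55, 60, 45, 52]       # before the program
post = [70, 72, 78, 66, 69]      # immediately after the program
followup = [68, 70, 74, 63, 67]  # at a later followup, e.g. six months out

immediate_gain = mean(post) - mean(pre)     # amount of change produced
retained_gain = mean(followup) - mean(pre)  # how much of that change lasted

print(f"Immediate gain: {immediate_gain:.1f}")
print(f"Retained gain:  {retained_gain:.1f}")
```

The second post-test is what lets the evaluation distinguish a lasting effect from one that fades after the program ends.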

Process evaluations assess the extent to which the program is functioning as planned.

Program evaluation is a systematic process of obtaining credible information to be used to assess and improve a program.

Qualitative data are a record of thoughts, observations, opinions, or words. They are difficult to measure, count, or express in numerical terms.

Quantitative data are numeric information such as a rating of an opinion on a scale, usually from 1 to 5. These data can be counted, measured, compared, and expressed in numerical terms.

Research questions are developed by the evaluator to define the issues the evaluation will investigate. These questions are worded so that they can be answered using research methods.

