Publication Date: January 2010

What Are the Basic Steps?

Develop an Evaluation Design

An evaluation design simply describes the type of evaluation you are going to conduct. The type of evaluation you use will direct you to the data collection methods and sources that will help you answer the questions posed. As mentioned earlier in this guide, evaluations are designed to answer different questions. Process evaluations can help answer the overall question, “What is my program doing?” Outcome/impact evaluations can help answer the overall questions, “Is my program achieving its goals and objectives?” or “Is my program effecting change?” Exhibit 7 reviews the specific questions that can be answered by each evaluation type, methods that can be used to collect data, and sources of information.

Exhibit 7
Types of Evaluations

Process
  Questions Answered:
    • What is the program or intervention?
    • Is the program or intervention being implemented as intended?
    • Is it reaching the target population?
    • What works and what doesn’t work and for whom?
    • How much does it cost?
  Data Collection Methods:
    • Document review
    • Observation
    • Interviews
  Information Sources:
    • Program documents (e.g., award documentation, proposals, forms)
    • Meeting minutes
    • Marketing materials
    • Curricula/training materials
    • Program participants
    • Program staff

Outcome/Impact
  Questions Answered:
    • Is the program achieving its objectives?
    • Is the program achieving its goals?
    • Is the program achieving its intended outcomes?
    • Is it effective?
    • Is it achieving its long-term impacts?
    • Can we attribute change to the program?
  Data Collection Methods:
    • Interviews
    • Observation
    • Focus groups
    • Surveys and questionnaires
    • Document review
  Information Sources:
    • Official records (e.g., benefit letters, documents of certification)
    • Program participants
    • Program staff
    • Program documents

Developing an evaluation design involves two steps: selecting the design and selecting the data collection methods.

Design

Various evaluation designs are available, each requiring different levels of experience, resources, and time to execute. Consider whether the example designs discussed below would suit your program.

Pre-Post Designs. This design involves assessing participants both before and after the program activity or service (intervention), allowing you to measure change over time. For example, if you are training law enforcement officers as part of your program, you could apply the pre-test/post-test design in the following ways:

  • Using the simplest design, officers would be assessed after completing the training. This is a post-test design. A drawback to this approach is that there is no objective indication of the amount of change in participants because there is no measure of what their attitudes or knowledge levels were before the program or intervention took place.
  • Measuring change in participants requires assessing them both before and after the intervention in a pre-test/post-test design. This involves assessing the same participants in the same manner both before and after training to ensure that the results of each test are comparable.
  • To assess both the amount of change and how long that change lasts, you can use a pre-test/post-test/post-test design. This requires assessing participants before the intervention, immediately after it, and then again 1, 3, or 6 months later. This allows you to measure both the change between the start and end of the program intervention and the change that occurs over time after the intervention. As with the previous designs, you must assess the same people in the same manner all three times. This design is the most feasible for assessing change over time and will provide you with data that allow you to track your target population (e.g., clients, service providers, law enforcement, the community at large) over time.

The benefit of the pre-post design is that it is relatively easy to implement. The drawback is that you cannot say conclusively that differences after the intervention are due to your program’s efforts. Consider the previous example of training law enforcement officers. These same officers may have received training through another agency during the intervention period that caused the change. To determine whether your training caused the change, you would also need to assess the knowledge of law enforcement officers who did not take the training at the same points in time. This type of comparison design, however, may not be feasible; the time and resources available for evaluating your program may not be sufficient for you to use comparison groups. You may want to consult with a local evaluator to discuss these and other possible designs for evaluating your program. Exhibit 8 summarizes the pre-post design options.
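To make the pre-post arithmetic concrete, here is a minimal sketch in Python of scoring a pre-test/post-test/post-test design. The scores, the 0-100 scale, and the 3-month follow-up interval are illustrative assumptions, not data from this guide; the key requirement from the text is that the same officers are assessed in the same manner each time.

```python
# A minimal sketch of scoring a pre-test/post-test/post-test design.
# Assumes each officer's knowledge test is scored 0-100 and that the
# same instrument was administered all three times (scores are made up).
from statistics import mean

pre = [52, 61, 48, 70, 55]        # before the training
post = [78, 85, 70, 88, 74]       # immediately after the training
followup = [72, 80, 66, 85, 70]   # 3 months after the training

# Mean change from before to immediately after the intervention.
immediate_change = mean(b - a for a, b in zip(pre, post))

# Mean change still present at the 3-month follow-up.
sustained_change = mean(c - a for a, c in zip(pre, followup))

print(f"Mean immediate change: {immediate_change:+.1f} points")
print(f"Mean change at 3-month follow-up: {sustained_change:+.1f} points")
```

Note that, as the text cautions, neither number proves the training caused the change; that claim would require a comparison group assessed at the same points in time.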

Exhibit 8
Summary of Pre-Post Design Options

Post-test
  Characteristics: Measures program participants after the intervention
  Advantages: Requires access only to one group
  Disadvantages: No valid baseline measure for comparison; cannot assess change
  Required Expertise: Low

Pre-test/Post-test
  Characteristics: Measures program participants before and after the intervention
  Advantages: Provides a baseline measure; requires access only to one group
  Disadvantages: Cannot prove causality
  Required Expertise: Moderate

Pre-test/Post-test/Post-test
  Characteristics: Measures program participants before and twice after the intervention
  Advantages: Enables you to determine if your program has sustained effects
  Disadvantages: Cannot prove causality; may be difficult to follow up with participants
  Required Expertise: Moderate

Mixed Methods Evaluation Design. A mixed methods design integrates process and outcome designs. This approach can increase the chances of accurately describing the processes and assessing the outcomes of your program. It requires using a mixture of data collection methods, such as case studies and surveys, to ensure that the intervention was implemented properly and to identify its immediate and intermediate outcomes. Mixed methods are strongly recommended for large-scale evaluations.

Data Collection Method

After you have selected the evaluation design, you will need to select appropriate data collection methods. The methods you choose will depend on the type of evaluation you choose to conduct, the questions to be addressed, and the specific data you need to answer your evaluation questions. Before you consider selecting data collection methods, you should first—

  • Review existing data. Take a look at the data you routinely collect and decide whether to use them in this evaluation.
  • Define the data you need to collect. Figure out which data you still need to collect. Make a list of topics you need to know more about and develop a list of the data you will collect. Finalize the list based on the importance of the information and its ease of collection.

This section begins with a description of qualitative and quantitative approaches and ends with an overview of the methods you can use for collecting data. The most important thing to remember is to select the method that will allow you to collect data that you can use to answer your evaluation questions.

Qualitative Methods. Qualitative methods capture data that are difficult to measure, count, or express in numerical terms. Various qualitative methods can be used to collect data, three of which are described below.

  • Observation involves gathering information about how a program operates. Data can be collected on the setting, activities, and participants. You can conduct observations directly or indirectly, in a structured or unstructured manner. Direct observation entails onsite visits during which you collect data about program processes by witnessing and taking notes on program operations. Indirect observation takes place when you discreetly observe program activity without the knowledge of program staff. You will need to develop a protocol for observations that details the start and end dates of the visit, the staff who will be interviewed (if direct), and the program activities to be observed; a minimal sketch of such a protocol follows this list.
  • Interviews involve asking people to describe or explain particular program issues or practices. You can conduct interviews by telephone or in person. Interviews allow you to gather information on unobserved program attributes. For example, through interviewing program staff, you may find that their opinions of program operations do not mirror those of the program’s management. Depending on the type of interview you are conducting, you may or may not need a guide. For example, informal, conversational interviews are the least structured and do not require structured guides; fixed-response interviews are the least flexible and require the interviewer to follow a structured guide exactly as written. Again, the interview may include a combination of open-ended and closed-ended questions, depending on the type of interview.
  • Focus groups involve group discussions guided by an evaluator acting as a facilitator using a set of structured questions. The goals of the discussion may vary, but this method is designed to explore a particular topic in depth. The discussion group is small, the conversation is fluid, and the setting is nonthreatening. Focus group participants are not required to complete an instrument, but notes are taken by the interviewer/facilitator or a second person during the discussion period. The primary purpose for using focus groups is to obtain data and insights that can only be found through group interaction.

Tips To Remember!

  • Choose opening questions that are designed to break the ice.
  • Use transition questions to get the data you need.
  • Be sure to get key questions answered before you finish.
  • Be sure to include ending questions that summarize the discussion and gather any missing information.
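To make the observation protocol concrete, here is a minimal sketch of one captured as structured data in Python. The fields mirror the ones named above (visit dates, staff to interview, activities to observe); the site name, dates, and entries are illustrative assumptions, not data from this guide.

```python
# A minimal sketch of an observation protocol as structured data.
# Field names follow the elements this guide says a protocol should
# detail; all values below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ObservationProtocol:
    site: str
    start_date: str                # first day of the onsite visit
    end_date: str                  # last day of the onsite visit
    direct: bool                   # True for direct (announced) observation
    staff_to_interview: list[str] = field(default_factory=list)
    activities_to_observe: list[str] = field(default_factory=list)

protocol = ObservationProtocol(
    site="Hypothetical victim services center",
    start_date="2010-03-01",
    end_date="2010-03-03",
    direct=True,
    staff_to_interview=["program director", "intake coordinator"],
    activities_to_observe=["intake sessions", "staff meeting"],
)
print(protocol)
```

Writing the protocol down as a record like this makes it easier to apply consistently across sites and visits.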

Sample observation, interview, and focus group guides are available in appendixes D (PDF 74.6 KB), E (PDF 19.8 KB), and F (PDF 63.6 KB).

Quantitative Methods. Quantitative methods capture data that can be counted, measured, compared, or expressed in numerical terms. Various quantitative methods can be used to collect data, two of which are described below.

  • Document review involves collecting and reviewing existing written material about the program. Documents may include program records or materials such as proposals, annual or monthly reports, budgets, organizational charts, memorandums, policies and procedures, operations handbooks, and training materials. Reviewing program documents can provide an idea of how the program works without interrupting program staff or activities.
  • Questionnaires and surveys involve collecting data directly from individuals. Through self-administered, face-to-face, telephone, or mail surveys, questionnaires, and checklists, you can find out exactly how your program is making an impact. To administer a survey, however, you must develop a protocol that includes a sampling plan and data collection instruments. The sampling plan describes who will be included in the study and the criteria by which they will be selected to participate; a minimal sketch of a sampling plan and response tally follows this list.

    Questionnaires and surveys are written instruments that include a number of closed- and open-ended questions. You can design your instrument to collect information that will help you measure a particular factor. For example, you can design your survey to measure changes in knowledge, attitude, skills, or behavior. Remember that when you are developing your questionnaire or survey, questions should be—

    • Well-constructed, easily understood, unambiguous, and objective.
    • Short, simple, and specific.
    • Grouped logically.
    • Devoid of vague qualifiers, abstract terms, and jargon.
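As a concrete illustration of the sampling plan and the closed-ended questions just described, the Python sketch below draws a simple random sample from a hypothetical participant roster and tallies responses to one Likert-scale item. The roster, sample size, item, and responses are all illustrative assumptions, not requirements from this guide.

```python
# A minimal sketch of two pieces of a survey protocol: drawing a simple
# random sample from a participant roster, then tallying one
# closed-ended Likert item.
import random
from collections import Counter

# Hypothetical roster of 200 program participants eligible for the survey.
roster = [f"participant_{n:03d}" for n in range(1, 201)]

# Sampling plan: a simple random sample of 50 participants.
random.seed(42)  # fixed seed so the draw can be documented and repeated
sample = random.sample(roster, k=50)

# Hypothetical responses to one closed-ended item (in practice these
# would come from the completed instruments).
scale = ["strongly agree", "agree", "neutral", "disagree", "strongly disagree"]
responses = random.choices(scale, k=len(sample))

# Tally and report the responses.
for answer, count in Counter(responses).most_common():
    print(f"{answer:>17}: {count:3d} ({count / len(responses):.1%})")
```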

A sample document review guide and survey instrument are available in appendixes G (PDF 25.2 KB) and H (PDF 56.8 KB).

Overview of Data Collection Methods. After you choose a data collection method, you will need to develop protocols for it. Overall, the data collection tools you use or develop should contain instructions that are well written, clear, and easy to understand. The instrument should appear consistent and well formatted to make it easy to locate certain sections for reference and analysis. Appendix I, an “Instrument Development Checklist” (PDF 70.9 KB), will guide you as you develop data collection instruments.

Be sure to provide an overview of the evaluation plan, review the data collection instruments, and allow time for staff to practice using the instruments before administering them. Each of the data collection methods described above is summarized in exhibit 9.

Exhibit 9
Overview of Data Collection Methods

Observation (qualitative)
  Overall Purpose: To gather information firsthand about how a program actually works
  Advantages: Can see the program in operation; requires a small amount of time to complete
  Challenges: Requires much training; expertise needed to devise a coding scheme; can influence participants

Interview (qualitative)
  Overall Purpose: To explore participant perceptions, impressions, or experiences and to learn more about their answers
  Advantages: Can gather in-depth, detailed information
  Challenges: Takes much time; analysis can be lengthy; requires good interview or conversation skills; formal analysis methods can be difficult to learn

Focus Group (qualitative)
  Overall Purpose: To explore a particular topic in depth, get participant reactions, and understand program issues and challenges
  Advantages: Can quickly get information about participant likes and dislikes
  Challenges: Can be difficult to manage; requires good interview or conversation skills; data can be difficult to analyze

Document Review (quantitative)
  Overall Purpose: To unobtrusively get an impression of how a program operates
  Advantages: Objective; least obtrusive; little expertise needed
  Challenges: Access to data may be tricky; data can be difficult to interpret; may require a lot of time; data may be incomplete

Questionnaire and Self-Administered Survey (quantitative)
  Overall Purpose: To gather data quickly and easily in a nonthreatening way
  Advantages: Anonymous; easy to compare and analyze; can administer to several people; requires little expertise to gather data, but some expertise is needed to administer; can get lots of data in a moderate timeframe
  Challenges: Impersonal; subjective; results are easily biased

In-Person Survey (quantitative)
  Overall Purpose: To gather data quickly and easily in a nonthreatening way
  Advantages: Can clarify responses
  Challenges: Requires more time to conduct than a self-administered survey; requires some expertise to gather and use the data

Tips To Remember!

  • Ask only necessary demographic questions.
  • Make sure you ask all of the important questions.
  • Consider the setting in which the survey is administered or disseminated.
  • Assure your respondents of their anonymity and privacy.

If you are required to collect personal or demographic data from potential respondents, it is important that you (1) gain their consent, (2) explain how the information will be used and reported, and (3) explain how the information will be stored and maintained. Clearly explain the terms of confidentiality and any legal or agencywide procedures that govern the collection of demographic data. For more information on informed consent, ethics, and confidentiality, consult the Guide to Protecting Human Subjects in this series.
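If you store demographic or personal data electronically, one common safeguard is to separate identities from responses. Below is a minimal sketch, in Python, of replacing each respondent's name with a salted one-way hash before storage; the salt, field names, and record are illustrative assumptions, and your agency's own confidentiality procedures take precedence.

```python
# A minimal sketch of separating identities from stored responses by
# deriving a non-reversible respondent ID. All names and values below
# are hypothetical; follow your agency's confidentiality procedures.
import hashlib

SALT = "replace-with-a-secret-value-kept-separately"

def respondent_id(name: str) -> str:
    """Derive a stable, non-reversible ID from a respondent's name."""
    return hashlib.sha256((SALT + name).encode("utf-8")).hexdigest()[:12]

record = {"name": "Jane Doe", "response": "agree"}

# Store only the hashed ID alongside the substantive answers.
stored = {"respondent_id": respondent_id(record["name"]),
          "response": record["response"]}
print(stored)
```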