About This Series
Publication Date: January 2010

References

Atkinson, A., M. Deaton, R. Travis, and T. Wessel. 1999. Program Planning and Evaluation Handbook: A Guide for Safe and Drug-Free Schools and Communities Act Programs. Harrisonburg, VA: James Madison University, Office of Substance Abuse Research, Virginia Effective Practices Project.

Burt, M.R., A.V. Harrell, L.C. Newmark, L.Y. Aron, and L.K. Jacobs. 1997. Evaluation Guidebook for Projects Funded by S.T.O.P. Formula Grants Under the Violence Against Women Act. Washington, DC: Urban Institute.

Caliber Associates. 2003. Law Related Education (LRE) Toolkit: Tipping the Scales Toward Quality Programming for Youth. Washington, DC: U.S. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention.

Gupta, K. 1999. A Practical Guide to Needs Assessment. San Francisco, CA: Jossey-Bass Pfeiffer.

KRA Corporation. 1997. A Guide to Evaluating Crime Control Programs in Public Housing. Washington, DC: U.S. Department of Housing and Urban Development, Office of Policy Development and Research.

Maxfield, M. 2001. Guide to Frugal Evaluation for Criminal Justice. Newark, NJ: Rutgers University, School of Criminal Justice. www.ncjrs.org/pdffiles1/nij/187350.pdf

McNamara, C. 1998. Nuts-and-Bolts Guide to Nonprofit Program Design, Marketing and Evaluation. Minneapolis, MN: Authenticity Consulting, LLC.

Office for Victims of Crime. 2002. Services for Trafficking Victims Discretionary Grant Application Kit. Washington, DC: U.S. Department of Justice, Office of Justice Programs, Office for Victims of Crime.

Office for Victims of Crime. 2004. Services for Trafficking Victims Discretionary Grant Application Kit. Fiscal Year 2004. Washington, DC: U.S. Department of Justice, Office of Justice Programs, Office for Victims of Crime.

Rossi, P.H., H.E. Freeman, and M.W. Lipsey. 1999. Evaluation: A Systematic Approach. 6th edition. Thousand Oaks, CA: Sage Publications.

Salasin, S.E. 1981. Evaluating Victim Services. Sage Research Progress Series in Evaluation (Volume 7). Beverly Hills, CA: Sage Publications.

Scriven, M. 1991. Evaluation Thesaurus. 4th edition. Newbury Park, CA: Sage Publications.

Trochim, W.M. 2006. Research Methods Knowledge Base. http://www.socialresearchmethods.net/kb (version current as of October 20, 2006)

Wholey, J.S., H.P. Hatry, and K.E. Newcomer, eds. 2004. Handbook of Practical Program Evaluation. 2nd edition. San Francisco, CA: Jossey-Bass.

Resources

American Evaluation Association

The American Evaluation Association is an international professional association of evaluators devoted to applying and exploring program evaluation, personnel evaluation, technology evaluation, and many other forms of evaluation.

Center for Program Evaluation and Performance Measurement, Bureau of Justice Assistance

This center maintains a user-friendly online tool for evaluation and performance measurement. The tool is designed to help state and local criminal justice planners, practitioners, State Administrative Agencies, researchers, and evaluators conduct evaluations and develop performance measures that address the effectiveness and efficiency of their projects.

The Evaluation Center, Western Michigan University

The Evaluation Center advances the theory, practice, and use of evaluation. The center's principal activities are research, development, dissemination, service, instruction, and national and international leadership in evaluation.

Grants and Funding Web Page, Office for Victims of Crime

This web page links you to current funding opportunities, formula grant funds, and other funding resources.

Glossary

Activities are the products and services your program provides to help address the conditions identified.

Conditions are the problems or needs the program is designed to address.

Contextual factors are background or existing factors that define and influence the context in which the program takes place.

Data are pieces of specific information collected as part of an evaluation.

Data analysis is the process of applying systematic methods or statistical techniques to compare, describe, explain, or summarize data.
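
For illustration only, here is a minimal sketch of one common analysis technique, descriptive statistics, written in Python. The survey ratings and variable names are invented for this example, not drawn from the guide:

    # Summarize hypothetical satisfaction ratings on a 1-to-5 scale,
    # using only Python's standard library.
    import statistics

    ratings = [4, 5, 3, 4, 5, 2, 4, 4, 5, 3]  # invented survey responses

    print("Responses:", len(ratings))
    print("Mean rating:", statistics.mean(ratings))
    print("Median rating:", statistics.median(ratings))
    print("Std. deviation:", round(statistics.stdev(ratings), 2))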

Evaluation design details the type of evaluation you are going to conduct.

Evaluation plan provides the framework for conducting the evaluation.

Evaluator is an individual trained and experienced in designing and conducting evaluations.

Focus group is a small-group discussion, guided by a trained facilitator, used to gather data on a designated topic.

Goals are measurable statements of the desired long-term, global impact of the program. Goals generally address change.

Immediate outcomes are the changes in program participants that occur early in the delivery of program services or interventions.

Impact evaluations focus on long-term program results and issues of causality.

Impacts are the desired long-term effects of the program.

Indicators are measures that demonstrate the extent to which goals have been achieved.

Inputs are the resources that your program uses in its activities to produce outputs and outcomes.

Intermediate outcomes are program results that emerge after immediate outcomes, but before long-term outcomes.

Measures are specific (quantitative) pieces of information that provide evidence of outcomes.

Mixed methods design involves integrating process and impact/outcome designs.

Needs assessment is a process for pinpointing reasons for gaps in performance or a method for identifying new and future performance needs.

Objectives are specific, measurable statements of the desired immediate or direct outcomes of the program that support the accomplishment of a goal.

Outcomes are the intended initial, intermediate, and final results of an activity.

Outcome evaluations examine overall program effects. This type of evaluation focuses on goals and objectives and provides useful information about program results.

Outputs are the direct/immediate results of an activity.

Performance measurement is the ongoing monitoring and reporting of program accomplishments and progress toward preestablished goals.

Planning model is a graphic representation that clearly identifies the logical relationships between program conditions, inputs, activities, outputs, outcomes, and impacts.

Post-test involves assessing participants after they complete a program or intervention.

Pre-test assesses participants’ knowledge, skills, attitudes, or behavior before they participate in a program or receive an intervention.

Pre-test/Post-test involves assessing participants both before and after the program or intervention.

Pre-test/Post-test/Post-test involves assessing participants before the program, immediately after it, and again some time later; it measures both the amount of change in participants and how long that change lasts.
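
As a rough illustration (a hypothetical sketch in Python; the scores, timing, and names are invented), the two quantities this design yields can be computed like this:

    # Compare hypothetical knowledge-test scores (0-100) for the same
    # participants before a program, immediately after it, and at a
    # six-month followup.
    import statistics

    pre = [52, 60, 48, 55, 63]
    post = [78, 85, 70, 80, 88]      # immediately after the program
    followup = [74, 82, 65, 77, 84]  # six months later

    gain = statistics.mean(post) - statistics.mean(pre)
    retained = statistics.mean(followup) - statistics.mean(pre)

    print(f"Average immediate gain: {gain:.1f} points")
    print(f"Average gain retained at followup: {retained:.1f} points")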

Process evaluations assess the extent to which the program is functioning as planned.

Program evaluation is a systematic process of obtaining credible information to be used to assess and improve a program.

Qualitative data are a record of thoughts, observations, opinions, or words. They are difficult to measure, count, or express in numerical terms.

Quantitative data are numeric information such as a rating of an opinion on a scale, usually from 1 to 5. These data can be counted, measured, compared, and expressed in numerical terms.

Research questions are developed by the evaluator to define the issues the evaluation will investigate. These questions are worded so that they can be answered using research methods.