
1. What are the benefits of the Annual Evaluation Plan?

2. Who should be involved in the Annual Evaluation Plan process?

3. What is a significant evaluation?

4. What should a good evaluation plan include?

5. Where can I find additional resources on evaluation plans?

An Annual Evaluation Plan describes specific evaluations that your agency will undertake. What is an Annual Evaluation Plan, and how can it benefit your agency?

Under the Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act), Chief Financial Officers (CFO) Act agencies are required to develop and publicly share an Annual Evaluation Plan. Non-CFO Act agencies and agency subcomponents are encouraged, but not required, to create them.

OMB M-19-23 Reference

What is the Annual Evaluation Plan?

The Annual Evaluation Plan describes the evaluation activities the agency plans to conduct in the fiscal year following the year in which the performance plans are submitted.

The Annual Evaluation Plan should include “significant” evaluations related to the learning agenda and any other “significant” evaluations, such as those required by statute.

Your agency’s Annual Evaluation Plan:

Has required components. The plan must include specific information about the “significant” evaluations that your agency intends to conduct in a given fiscal year. “Significant” evaluations can address learning agenda priorities or can be important to the agency for other reasons (e.g., statutorily mandated evaluations). Your agency can determine its own criteria for deeming an evaluation “significant.” See Section 3: Defining Significant Evaluations for tips on developing criteria.

Your agency may also include other types of evidence-building activities on the plan as long as these activities are clearly identified as such. However, your agency should include this information only if presenting it in the plan is useful for implementing your learning activities.

Explains how your agency’s evaluations connect to the learning agenda. An Annual Evaluation Plan shows how your agency will address learning agenda and other priority questions through evaluations and describes the methods that each evaluation will use. It is not a “laundry list” of evidence-building activities.

Must be updated each year. An Annual Evaluation Plan describes the evaluations that your agency will undertake in the fiscal year following publication of the plan. It will be published each February concurrent with the Annual Performance Plan.

The Annual Evaluation Plan Overview, which can be downloaded from the tab to the right, provides a quick summary of the plan’s required components and the optional components that agencies can choose to include.

Benefits of an Annual Evaluation Plan

These plans can be useful even if your agency already has an active evaluation program and shares information publicly about its studies. Specifically, developing the plan can help your agency to:

Promote understanding, transparency, and accountability.

An Annual Evaluation Plan is an opportunity to tell a coherent and compelling story of what your agency will evaluate, how your agency’s evaluations relate to each other, and how the evaluations will help to answer important questions. For this reason, a well-written plan can inform agency staff and build trust with external stakeholders.

Example action: The Evaluation Officer (EO) and your agency’s communications office could promote the importance of the Annual Evaluation Plan to agency staff and the public.

Reinforce a shared commitment to and capacity for evidence building in furthering the agency’s mission.

Evaluation planning brings together agency staff, with input from external stakeholders, to prioritize and plan evaluation activities for the coming fiscal year. The development process should engage stakeholders from within and across your agency and, when feasible and appropriate, include the perspectives of external stakeholders. This process can also build capacity across your agency for using evaluations for ongoing improvement.

Example action: The EO and related staff could involve diverse sub-agency representation in a working group or core team to brainstorm specific opportunities to build evidence in the coming year. To build capacity among agency staff, the EO could offer “Evaluation 101” brown bags to introduce important evaluation concepts and practices.

Document what you have learned and plan activities to address remaining learning agenda priority questions.

Updating the Annual Evaluation Plan each year allows your agency and its stakeholders to reflect on what they have learned from completed evaluations and assess progress towards using evaluations to answer learning agenda and other priority questions. As part of this annual process, your agency can identify remaining learning agenda priorities that require evaluation and plan new studies as needed.

Example action: Throughout the year, the EO and related staff could schedule in-person or web-based presentations for evaluators to share what has been learned from “significant” evaluations and to discuss the implications of findings.

Continue reading the toolkit for more tips, templates, and examples on how to complete your agency’s Annual Evaluation Plan.

Evaluation planning is most effective when it is inclusive and efficient. How can your agency develop a planning process that suits its unique needs and capacity?

Evaluation Officers (EOs) oversee the development of the Annual Evaluation Plan, but others at your agency can offer important insights. When developing a plan for whom to engage in the process, consider which stakeholders will:

  • Participate in a core team (optional) that helps to develop the Annual Evaluation Plan
  • Make final decisions on the plan
  • Help to identify “significant” evaluations and inform content for the plan
  • Receive updates on Annual Evaluation Plan developments

OMB M-19-23 Reference

Creating an Annual Evaluation Plan: Management and Leadership of the Process

The agency’s designated Evaluation Officer [EO] shall lead, coordinate, develop and implement the Annual Evaluation Plan. The EO will play a leading role in the development and implementation of the Annual Evaluation Plan at the agency level and also support efforts to develop plans at the sub-agency, operational division, or bureau level.

Identifying Key Stakeholders

Core Team: Agencies are not required to create a core team to develop the Annual Evaluation Plan. If your agency opted to create a core team to lead learning agenda development, members of this group could also contribute to creating and updating the Annual Evaluation Plan (see Section 3: Engaging Stakeholders in the Learning Agenda Toolkit for more information about forming a core team). Some agencies might find that a core team composed of individuals from a range of offices can increase staff buy-in and diversify perspectives. Other agencies might find it more efficient to develop the plan within the agency’s evaluation office, consulting with relevant stakeholders for information at key points during the process.

As creating and updating the Annual Evaluation Plan requires technical expertise in evaluation design and methods, the individuals responsible for developing content should be knowledgeable about data and evaluation. Key partners might include your agency’s Chief Data Officer (CDO), who can advise on data access, and the Statistical Official (SO), who can offer insight into issues such as data quality and sound statistical methods. Agencies with decentralized evaluation activities might consider involving representatives from sub-agencies to enhance understanding of the full scope of ongoing and planned evaluations throughout the agency. Other partners could include the Performance Improvement Officer and related staff, and individuals from your agency’s internal research or analytics offices.

Leadership: Agency leaders are critical partners in the Annual Evaluation Plan development process. As leadership provides the final signoff on the plan, their buy-in is necessary to carry out evaluations and promote the use of findings within the agency. To engage leaders early on, the EO could facilitate an initial conversation to explain the Annual Evaluation Plan development process, discuss leaders’ priorities for or concerns about the process, and strategize about resources to support evaluations. The EO and related staff can establish recurring communications with leadership to ensure access to the resources needed to conduct evaluations.

Stakeholders who can inform content: Beyond the core team and leadership, consider the range of internal and external stakeholders who can offer useful perspectives or insights. OMB M-19-23 dictates that program staff should be consulted if they will be responsible for supporting a significant evaluation or using findings. Program staff can be helpful in informing what questions the agency should be asking, which evaluations are “significant,” and how to use evaluation findings.

The box below describes an approach used by the Small Business Administration to involve program offices in evaluation planning.

OMB M-19-23 Reference

Who supports the development of an Annual Evaluation Plan?

Agencies should consult with internal and external stakeholders as they develop and implement their initial Annual Evaluation Plan and those that will follow. … Internal consultation should, at minimum, include those offices and staff that have a role in either undertaking evaluations or using their results.

How the Small Business Administration Engages Program Staff in Evaluation Planning

The Office of Program Performance, Analysis and Evaluation works with program offices throughout the year to discuss evaluation possibilities and offer technical assistance. The annual call for proposals grows from discussions with program managers who want to identify questions that, if answered, could improve their program’s performance.

  • Call for Evaluation Proposals: Each winter, program offices are invited to submit program evaluation proposals for awards. The template requires program managers to develop 3-6 questions that relate to the operations of the program or its outcomes, and explain how the evaluation will support recommendations that could improve program processes or enhance service delivery.
  • Proposal Review: The lead program evaluator will convene a team to consider the proposals, and the team will make recommendations to senior leadership about which proposals to support in that evaluation cycle.
  • Proposal Selection: SBA funds four to five new evaluations each year that are managed by one of SBA’s lead program evaluators and a team of independent contractors. SBA aims to complete evaluations within 12 to 15 months.
  • Impact: The evaluations have helped SBA program managers identify and improve actions for program operations and outcomes, support efficiency gains, and enhance service delivery. The program evaluation call for proposals supports the current year’s Enterprise Learning Agenda and SBA’s progress toward its strategic goals and objectives.

Agencies may choose to consult external stakeholders while developing an Annual Evaluation Plan due to their content expertise, familiarity with relevant communities, or ability to help identify useful evaluation questions for the broader field.

Stakeholders who should be kept apprised: Other internal or external stakeholders may need to stay informed about evaluations selected for the Annual Evaluation Plan. For example, it might make sense to inform a key community or advocacy group that a grant program intended to meet their needs will be included on the agency’s plan. More broadly, agencies should consider how to share the plan with agency staff and the public to enhance transparency and foster a shared purpose.

Activity 1 of the Annual Evaluation Plan Workbook provides one approach for mapping the landscape of stakeholders who could be involved in the Annual Evaluation Plan development process. Activity 2 of the Annual Evaluation Plan Workbook is intended to help you determine the role each stakeholder may take on.

When to Engage Stakeholders

Agencies may differ in the extent to which they involve stakeholders outside of the core team to help shape the Annual Evaluation Plan, but stakeholders are likely to enhance the process at two key points:

Identifying “significant” evaluations. The Annual Evaluation Plan includes only evaluations that your agency determines to be “significant” according to the criteria that it develops. For more about developing criteria for “significant” evaluations, please refer to Section 3: Defining Significant Evaluations.

One approach to involving stakeholders in the development of criteria for “significant” evaluations is to engage them in small-group brainstorming and prioritization activities. Activity 3 of the Annual Evaluation Plan Workbook provides a sample meeting agenda for such activities. Alternatively, your agency might decide to solicit feedback from stakeholders on possible criteria developed by the core team or the EO. Providing an opportunity for staff outside of the evaluation office to weigh in on criteria could enhance their understanding of the Annual Evaluation Plan purpose and process.

Developing content for the Annual Evaluation Plan. Key stakeholders, such as your agency’s program staff, can assist evaluation staff in clarifying specific evaluation questions, identifying opportunities for rigorous evaluation, and designing strategies for targeted engagement of key external stakeholders. These activities can generate basic information about a planned evaluation that can be included in the Annual Evaluation Plan. OMB M-19-23 acknowledges that “forecasting evaluation activities in advance may be challenging” and encourages agencies to provide the level of detail that is feasible.

Your agency’s Annual Evaluation Plan should describe evaluations that the agency considers “significant.” How can your agency decide which evaluations are “significant”?

Each agency must develop its own criteria for “significant” evaluations, describe these criteria in the Annual Evaluation Plan, and apply these criteria consistently to determine which evaluations to include in the plan. OMB M-19-23 provides several potential criteria for agencies to consider (see box on the right).

Identifying an evaluation as “significant” communicates its special importance to your agency and elevates it as a priority for resource allocation and stakeholder awareness. The designation of some evaluations as “significant” does not imply that others are unimportant. For many agencies, especially those with active evaluation programs, the Annual Evaluation Plan may not include all of their ongoing and planned evaluations.

OMB M-19-23 Reference

What is a “significant” evaluation?

The significance of an evaluation study should take into consideration factors such as:

  • The importance of a program or funding stream to the agency mission;
  • The size of the program in terms of funding or people served; and
  • The extent to which the study will fill an important knowledge gap regarding the program, population(s) served, or the issue(s) that the program was designed to address

Agencies should clearly state their criteria for designating evaluations as “significant” in their plans.

Developing a Definition for “Significant”

There are various approaches to developing your agency’s definition of “significant.” One potential approach is to consider the questions below and develop criteria for what makes an evaluation “significant.”

Does the evaluation help to answer a priority question in the learning agenda? “Significant” evaluations do not have to address learning agenda priorities, but your agency might want to highlight evaluations that will provide evidence to answer these important questions.

Does the evaluation focus on an agency strategic priority? For example, agency leadership may have funded a new, high-priority program to address an emerging issue.

Does the evaluation address important Administration priorities or a congressional mandate? Agencies may be mandated to conduct evaluations as a result of a reauthorization or because a new program has generated public interest. Congress or the Administration may also simply express interest in evaluating specific topics. These evaluations are not required to be listed on the Annual Evaluation Plan, but agencies can decide to factor this into their selection criteria.

Does the evaluation focus on one of your agency’s largest or highest-profile programs or initiatives? Programs in which your agency has invested significant resources or programs that are highly visible or innovative may merit stronger consideration.

Will your agency need to make an especially consequential decision about a program, policy, or regulation? Consider whether this evaluation could enable your agency to bring evidence to this decision.

Will this evaluation bring new information to a program or policy that currently has very little evidence? Consider the time that has elapsed since the program was last evaluated.

Will this evaluation pioneer an important new approach to evaluation? For example, the evaluation could have broader implications for the field of evaluation (e.g., a new way of merging or analyzing agency administrative data).

Will the evaluation help to significantly advance knowledge in the field? This might be informed with input from external researchers and partners.

Stakeholder engagement might be beneficial when defining what “significant” means to your agency. The Evaluation Officer (EO) could seek feedback from agency leadership, staff from programs or offices, and/or a subset of stakeholders from sub-agencies and bureaus. See Section 2: Engaging Stakeholders for more ideas about engaging stakeholders in the development of the agency’s definition of “significant”.

A rubric can be a useful tool to facilitate conversations when selecting evaluations for the Annual Evaluation Plan based on your agency’s criteria. For an example rubric, see Worksheet #1 of the Annual Evaluation Plan Workbook.

Sample Classification of “Significant” and Non-“Significant” Evaluations

The following example illustrates how an agency might choose to define “significant” evaluations and apply that definition to determine which of its evaluations to include on its Annual Evaluation Plan.

The fictitious Department of Rural Affairs considers an evaluation “significant” if it meets any of these criteria:

  • Criteria A: Addresses question(s) on the agency’s learning agenda
  • Criteria B: Critical to the agency’s mission
  • Criteria C: Targets a program or initiative that involves significant agency resources
  • Criteria D: Congressionally mandated

The Department determined that two evaluations met its definition of “significant” according to these criteria. One evaluation—though important to the agency—did not meet the agency’s definition of “significant.” This was a judgment by agency staff who did not believe that this evaluation needed to be highlighted in the Annual Evaluation Plan.

Note: This example is illustrative only. Other agencies could make different but equally defensible decisions about the criteria and how to apply them to evaluations.

Agency: The Department of Rural Affairs

Agency Mission: Improve quality of life in rural areas

“Significant” Evaluations

Evaluation #1
Description: An evaluation of the impact of offering a new strategy for supporting families with loved ones battling opioid addiction. This was a major investment aimed at strengthening families.
Rationale:
  • Addresses questions on the agency’s learning agenda (Criteria A)
  • Critical to the agency’s mission (Criteria B)
  • Involves significant agency resources (Criteria C)

Evaluation #2
Description: A congressionally mandated implementation evaluation of the agency’s rural downtown revitalization program.
Rationale:
  • Critical to the agency’s mission (Criteria B)
  • Congressionally mandated (Criteria D)

Non-“Significant” Evaluations

Evaluation #3
Description: A study to understand why fewer small businesses than expected participate in a rural broadband access initiative.
Rationale: The agency commissioned the evaluation because it wants to improve the program, but the evaluation:
  • Does not address learning agenda priorities
  • Is not mission-critical
  • Does not involve significant agency resources
  • Is not congressionally mandated
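
To make the “meets any criterion” logic above concrete, here is a minimal, purely illustrative sketch in Python. It is not part of OMB guidance or any agency’s actual process; the criteria labels and evaluation entries are hypothetical and simply mirror the fictitious Department of Rural Affairs example.

```python
# Illustrative sketch only: an evaluation is deemed "significant" if it
# meets ANY of the agency's criteria, as in the fictitious example above.

CRITERIA = {
    "A": "Addresses question(s) on the agency's learning agenda",
    "B": "Critical to the agency's mission",
    "C": "Involves significant agency resources",
    "D": "Congressionally mandated",
}

# Hypothetical candidate evaluations and the criteria each one meets.
evaluations = {
    "Evaluation #1 (opioid family-support strategy)": {"A", "B", "C"},
    "Evaluation #2 (downtown revitalization, mandated)": {"B", "D"},
    "Evaluation #3 (rural broadband participation)": set(),
}

def is_significant(criteria_met: set[str]) -> bool:
    """Significant if the evaluation meets any agency criterion."""
    return bool(criteria_met & CRITERIA.keys())

for name, met in evaluations.items():
    label = "significant" if is_significant(met) else "non-significant"
    reasons = ", ".join(CRITERIA[c] for c in sorted(met)) or "no criteria met"
    print(f"{name}: {label} ({reasons})")
```

In practice, applying the rubric is a deliberative judgment by agency staff; a simple check like this only makes the agreed-upon criteria explicit so they can be applied consistently across candidate evaluations.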

Your agency’s Annual Evaluation Plan will describe “significant” evaluations. What specific information could your agency include about each evaluation?

Though the Office of Management and Budget (OMB) requires certain components to be included in your agency’s Annual Evaluation Plan, you have the flexibility to include additional information about the evaluations that may offer more detail or insight into future plans. The table below summarizes the components that agencies must include, as specified by OMB M-19-23, as well as optional elements that could provide more context about how the planned evaluations will help address learning agenda priority questions.

Required Components
  • Questions to be answered by each “significant” evaluation or phase of an existing study
  • Information needed for each “significant” evaluation, including whether new information will be collected or existing information will be acquired
  • Methods to be used for each “significant” evaluation, including the evaluation design (e.g., experiment or quasi-experiment, pre-post design, implementation study)
  • Anticipated challenges related to “significant” evaluations
  • Plan for how agencies will disseminate and use results from each “significant” evaluation

Optional Components
  • Descriptions of the purpose, goals, and objectives of a program being evaluated
  • Program logic models
  • Other analytic considerations, such as planned subgroups of interest
  • Mitigation strategies to address challenges
  • Who will conduct or is conducting the evaluation
  • Background on what is already known about the topic

Writing the Plan

You have developed a learning agenda, identified current and recent evaluations, possibly developed new evaluations, and determined which evaluations are “significant.” Now, it is time to bring it all together into a written Annual Evaluation Plan. For a sample evaluation description, please download the handout at the top of this page.

Consider using the checklist below to ensure each of your evaluation descriptions contains all required components; a small illustrative sketch of such a completeness check appears after the checklist.

Does your description of each “significant” evaluation contain these required components?

  • Clear statement of the questions to be answered
  • Information that will be needed to conduct the evaluation
  • Method(s) to be used
  • Potential challenges to conducting the evaluation
  • Plan for sharing evaluation findings with relevant stakeholders

Has your agency chosen to include any of these optional components?

  • Rationale for why your agency considers this evaluation “significant”
  • Description of the program, policy, or initiative being studied
  • Background on what is already known, to demonstrate how the evaluation will build upon existing knowledge
  • Description of how your agency anticipates it will use the findings
  • Relevant evaluation logistics, such as who will conduct the study
  • Potential strategies for mitigating the anticipated challenges
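
As a purely illustrative aid (not part of OMB guidance), the sketch below shows one way to check a draft evaluation description against the five required components before finalizing the plan. The field names are hypothetical shorthand for the components listed above.

```python
# Illustrative sketch only: flag required components that a draft
# evaluation description leaves missing or blank.

REQUIRED_COMPONENTS = [
    "questions to be answered",
    "information needed",
    "methods to be used",
    "anticipated challenges",
    "dissemination plan",
]

def missing_components(description: dict[str, str]) -> list[str]:
    """Return the required components that are absent or left blank."""
    return [c for c in REQUIRED_COMPONENTS if not description.get(c, "").strip()]

# Hypothetical draft entry for one "significant" evaluation.
draft = {
    "questions to be answered": "Does the program increase participation?",
    "information needed": "Existing administrative program data",
    "methods to be used": "Quasi-experimental comparison group design",
    "anticipated challenges": "",  # still blank in this draft
    "dissemination plan": "Briefings for program staff; public summary report",
}

print("Missing:", missing_components(draft))
# -> Missing: ['anticipated challenges']
```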

Required Components of the Plan

Clearly state the questions to be answered. The Annual Evaluation Plan should include questions that will be addressed by the agency’s “significant” evaluations. For help crafting a strong evaluation question, see the Evaluation Planning Quick Tips guide, which can be downloaded on the Annual Evaluation Plan Toolkit homepage.

Describe the information needed to conduct the evaluation. Your agency will need to describe what data are available and how they might be used to answer your evaluation questions.

Administrative data, such as program data or outcomes data that your agency already tracks, are a low-cost alternative to gathering new data. Keep in mind that using administrative data might mean that your agency will need to make specific plans for accessing and using them, such as obtaining clearances for contractors or cleaning data for external use.

If administrative data are not sufficient for addressing an evaluation question, new data might be collected by administering surveys or assessments or conducting interviews or focus groups. If new data must be collected, it is important that your timeline account for any necessary Paperwork Reduction Act (PRA) or research approvals from jurisdictions or other entities from which you seek to collect data.

For further considerations about data availability and examples of strong practices from federal agencies, please see the Evaluation Planning Quick Tips guide.

Sample Questions to Consider when Documenting Required Information
  • Are agency administrative data available to answer the question? If so, have the data been used for research before? Are they well-organized and well-documented?
  • Will data from other federal agencies be needed? Will this require merging data across agencies?
  • Can data that already exist at the State or local level be used to support the evaluation?
  • Will new information collections be needed? How complex will these collections be?
  • Does the timeline account for any data-sharing agreements or necessary information collection approvals, such as PRA clearances?

Describe the methods to be used. A strong evaluation plan uses methods that are well-matched to the evaluation questions. OMB M-19-23 notes that agencies should “use the most rigorous methods possible that align to identified questions.” This statement implies that questions should drive the choice of methods, not the other way around. For more information on methods, refer to the Evaluation Planning Quick Tips guide.

Sample Questions to Consider when Selecting Methods
  • Which methods are most appropriate to address the evaluation questions posed?
  • Is there an opportunity to use the most rigorous methods to answer the questions?

Note potential challenges. Most evaluations include challenges that warrant mitigation strategies. For each “significant” evaluation on the Annual Evaluation Plan, your agency must describe any challenges that you anticipate encountering in carrying out the evaluation. OMB M-19-23 does not require your agency to identify mitigation strategies for the challenges you describe in your plan, but if you have identified such strategies, you could include them for additional context.

Sample Questions to Consider when Documenting Challenges
  • What funding is or might become available to support the evaluation and dissemination of findings?
  • Will the evaluation produce results in time to inform decisions?
  • How complicated is it to access data? Will there be challenges with cleaning and organizing the data?
  • Is the necessary data available?
  • For longitudinal studies, is tracking participants over time a concern?
  • Are stakeholders enthusiastic about the evaluation? How can the agency obtain their support and buy-in?

Describe a plan for sharing evaluation findings with relevant stakeholders. The Annual Evaluation Plan should describe how your agency intends to disseminate results. This includes considering how to frame evaluation results for different audiences. There are three considerations that may inform the development of your dissemination plans:

  • Timing: For evaluation results to meaningfully inform ongoing agency work and the learning agenda, agencies may take into account whether there are any important upcoming decisions that could be informed by the evaluation.
  • Relevance: Consider who needs to be informed of evaluation results: to whom is the evaluation relevant, and how might they use the information?
  • Method: Different audiences might need different strategies to alert them to a completed evaluation and help them to understand the findings.

Use Worksheet #2 of the Annual Evaluation Plan Workbook to brainstorm dissemination approaches. For additional information about appropriate formats and channels of communication, see the Evaluation Planning Quick Tips guide.

Worksheet #3 can help you summarize evaluations along some of the key dimensions highlighted in this section. The following toolkit sections describe the considerations for each dimension and suggest activities for developing the content of the Annual Evaluation Plan.

Optional Components of the Plan

While not required, additional components that could be included in summaries of significant evaluations are described below. This list of optional components is not exhaustive. Agencies may decide to include additional optional components if they are helpful in describing significant evaluations.

Provide a rationale for why your agency considers this evaluation “significant.” See Section 3: Defining Significant Evaluations for guidance on developing criteria for “significant” and creating a process for using the criteria to assess evaluations. Including an explanation as to why a study is “significant” provides insights into your agency’s evaluation priorities.

Describe the program, policy, or initiative being studied. A description of the program, policy, or initiative could provide helpful context for your agency’s evaluations. OMB M-19-23 suggests that agencies include program logic models in their Annual Evaluation Plan to visually depict how a program being evaluated is expected to function.

Provide background on what is already known. Your agency could choose to show how its “significant” evaluations build on previous studies conducted internally or by others. If an evaluation is part of a multi-year evaluation portfolio that aims to answer a complex question, the description could explain how the current evaluation builds on what your agency has learned from completed studies.

Describe how your agency anticipates using the findings. This information can help explain the practical value of an evaluation to stakeholders. If a “significant” evaluation is part of a set of related studies, the description could explain how your agency will use findings from the evaluation and the other studies to answer a question.

Describe relevant evaluation logistics. Evaluation logistics include questions such as: Who will conduct the study? How will your agency fund it? Does the evaluation have key phases that are important to communicate? You may choose to discuss logistics in an evaluation description if there is something noteworthy about your agency’s plans. For example, if your agency is conducting an evaluation through an ongoing research partnership with an academic institution, you might want to share this information to highlight your agency’s resourcefulness in addressing learning agenda priorities.

This section contains additional resources to learn more about evaluation plans. While this list is not exhaustive, it provides insights from experts in the public and private sectors. Please keep in mind Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act) requirements and OMB M-19-23 guidance when reviewing tips or examples.

The Office of Management and Budget (OMB) operates an Evidence and Evaluation Community on MAX. This page contains many of the resources below as well as updates, workgroups, and initiatives. The community can be viewed with your MAX account.

Agency Evaluation Plans

Note: These example evaluation plans were created prior to the passage of the Evidence Act; therefore they can be informative but are not fully compliant with the requirements of the Evidence Act and OMB M-19-23.

  1. Department of Labor, Chief Evaluation Office
  2. Department of Education, Institute of Education Sciences
  3. Millennium Challenge Corporation, Monitoring & Evaluation Plans

Conducting Evaluation in Federal Agencies

  1. Pew-MacArthur Results First, Targeted Evaluations Can Help Policymakers Set Priorities

    This article walks through considerations for government entities conducting impact evaluations, including different approaches to enhancing evaluation capacity.

  2. Government Accountability Office, Experienced Agencies Follow a Similar Model for Prioritizing Research

    This article discusses how some federal agencies have planned evaluation agendas in the context of preparing spending plans for the coming year.

  3. Government Accountability Office, Strategies to Facilitate Agencies’ Use of Evaluation in Program Management and Policy Making

    This report focuses on the use of evaluation findings and offers suggestions from agencies about how to facilitate evaluation influence.

  4. Coalition for Evidence-Based Policy, Rigorous Program Evaluations on a Budget

    This article provides illustrative examples of agencies and organizations that have conducted low-cost randomized controlled trials.

  5. Economic Report of the President, Evaluation as a Tool for Improving Federal Programs

    This chapter reflects on the implementation and use of impact evaluations in federal programs, with examples and lessons learned.

Comprehensive Evaluation Toolkits and Guides

  1. USAID Learning Lab, Collaborating, Learning, and Adapting (CLA) Toolkit

    This toolkit contains guidance and tools on the Collaborating, Learning, and Adapting framework. It is designed for USAID staff members, but many of the tools and guides are broadly applicable.

  2. Corporation for National & Community Service, Evaluation Core Curriculum Courses

    This site is designed to help users increase capacity for evaluation programs and interventions, with topics related to planning, managing, and reporting on evaluations.

  3. Centers for Disease Control and Prevention, Evaluation Resources

    This site contains resources for conducting evaluations, including evaluation frameworks, tools and workbooks, and a self-study guide.

  4. Institute of Education Sciences, Logic Models for Program Design, Implementation, and Evaluation: Workshop Toolkit

    This guide is designed to educate practitioners on the purpose of a logic model and how to use one to support program evaluation.

  5. Government Accountability Office, Performance Measurement and Evaluation

    This brief document provides an overview of different types of evaluation and discusses the difference between performance measurement and evaluation.

  6. Government Accountability Office, Designing Evaluations: 2012 Revision

    This guide introduces key issues about planning evaluation studies of federal programs, describes the different types of evaluations, and outlines the process for designing them.

Evaluation Methods

  1. J-PAL North America, Evaluation Toolkit

    This toolkit walks users through the process of preparing to launch a randomized controlled trial.

  2. ACF Office of Planning, Research, and Evaluation, Using Behavioral Insights to Increase Participation in Social Services Programs: A Case Study

    This case study helps readers to understand how to apply behavioral insights to real-world challenges.

  3. J-PAL, Six Rules of Thumb for Understanding Statistical Power

    This article gives a brief primer on considerations related to statistical power.

  4. Laura and John Arnold Foundation, Key Items to Get Right When Conducting Randomized Controlled Trials of Social Programs

    This checklist covers items that are critical to the success of a randomized controlled trial.

  5. J-PAL, Real-World Challenges to Randomization and Their Solutions

    This report provides an overview of common challenges related to randomized controlled trials and is geared toward policymakers with a general understanding of such studies.