Because results from OES evaluations impact the lives of millions of Americans, the quality of our work is of paramount importance.

OES Evaluation Policy & Process

The OES Evaluation Policy (PDF) lays out the principles that guide our work. Our evaluation projects follow six steps to produce results that are relevant and reliable.

Project Process Diagram
  1. Partner with Federal Agencies to target priority outcomes
  2. Translate behavioral insights into concrete recommendations
  3. Embed evaluations
  4. Analyze results using existing administrative data
  5. Ensure our work meets evaluation best practices
  6. Measure impact and generate evidence to continuously improve

Learn more about our project process here

Statistical Analysis Resources

We have produced a series of methods papers for our own team’s use in designing randomized evaluations and conducting statistical analysis. Take a look if you would like to know more about our methods. If you find these useful in your own evaluation work, or if you have questions or would like to request additional resources, please let us know.

Reporting Statistical Results in Text and in Graphs

This guidance paper describes OES’s preferred methods for reporting statistical results from a randomized evaluation. It explains how to report a regression coefficient that estimates the effect of a treatment or intervention, as well as how to produce the graphs that OES includes in its project abstracts. Code for generating graphs, both in R and in Stata, is included.
Reporting Statistical Results in Text and in Graphs (PDF)
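
As a minimal illustration of the kind of reporting the guide covers, the R sketch below estimates a treatment effect with OLS and reports the point estimate alongside a 95 percent confidence interval. The data and variable names (outcome, treated) are made up for the example and are not taken from the guide, whose own R and Stata code appears in the PDF.

    # Minimal sketch: estimate a treatment effect and report it with a 95% CI.
    # Data and variable names are illustrative, not taken from the guide.
    set.seed(42)
    df <- data.frame(
      treated = rbinom(500, 1, 0.5),
      outcome = rnorm(500)
    )

    fit <- lm(outcome ~ treated, data = df)
    est <- coef(summary(fit))["treated", "Estimate"]
    ci  <- confint(fit)["treated", ]

    # Report the coefficient and confidence interval in text
    cat(sprintf("Estimated effect: %.3f (95%% CI: %.3f to %.3f)\n",
                est, ci[1], ci[2]))

    # Simple plot of group means to accompany the reported estimate
    means <- tapply(df$outcome, df$treated, mean)
    barplot(means, names.arg = c("Control", "Treatment"), ylab = "Mean outcome")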

Blocking in Randomized Evaluations

Whenever possible, we incorporate background information about individuals (or other units) into an evaluation through block randomization. This helps make our estimates of the effects of a program or intervention as precise as possible. This guidance paper describes OES’s approach to block randomization.
Blocking in Randomized Evaluations (PDF)
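
As a rough sketch of the idea (not the specific procedure in the guide), the base R example below randomizes treatment within blocks defined by an illustrative site variable, so that each site contributes a balanced share of treated and control units.

    # Minimal sketch of block (stratified) randomization in base R.
    # The blocking variable (site) and sample size are illustrative.
    set.seed(123)
    df <- data.frame(
      id   = 1:200,
      site = sample(c("A", "B", "C", "D"), 200, replace = TRUE)
    )

    # Within each block, randomly assign roughly half of the units to treatment
    assign_within_block <- function(n) {
      n_treat <- floor(n / 2)
      sample(c(rep(1, n_treat), rep(0, n - n_treat)))
    }
    df$treated <- ave(df$id, df$site,
                      FUN = function(x) assign_within_block(length(x)))

    # Inspect the treatment/control split within each block
    table(df$site, df$treated)

In practice, a dedicated package such as randomizr can handle unequal block sizes and multiple treatment arms; the guide describes OES's own approach.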

Calculating Standard Errors Guide

OES often analyzes the results of a randomized evaluation by estimating a statistical model — typically an ordinary least squares (OLS) regression — where one of the parameters represents the effect of an intervention. In order to decide whether a result is statistically significant, we must estimate the standard error for this parameter. This guide describes our preferred method for doing this. In particular, it explains the reasons for using so-called HC2 standard errors — and how to calculate them in R and Stata.
Calculating Standard Errors Guide (PDF)
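
The sketch below shows one common way to compute HC2 standard errors in R, using the sandwich and lmtest packages; the data are simulated, and the guide itself documents OES's preferred workflow in both R and Stata.

    # Minimal sketch of HC2 (heteroskedasticity-robust) standard errors in R.
    # Data and variable names are illustrative.
    library(sandwich)
    library(lmtest)

    set.seed(1)
    df <- data.frame(
      treated = rbinom(1000, 1, 0.5),
      outcome = rnorm(1000)
    )
    fit <- lm(outcome ~ treated, data = df)

    # Replace the classical variance estimate with the HC2 estimator
    coeftest(fit, vcov = vcovHC(fit, type = "HC2"))

The estimatr package's lm_robust() function, which reports HC2 standard errors by default, is another common route.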

Multiple Comparison Adjustment Guide

When evaluators run multiple statistical tests — for example, looking at multiple possible outcomes of a program or intervention, or testing multiple versions of an intervention — they run the risk of getting a “false positive” result unless they account for these multiple tests in some way. There are various approaches to this, and OES’s preferred approach is described in this guide.
Multiple Comparison Adjustment Guide (PDF)
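
As a simple illustration of what such an adjustment looks like in practice, the base R sketch below applies two widely used corrections to a set of made-up p-values; which adjustment OES prefers, and when, is what the guide itself covers.

    # Minimal sketch of adjusting p-values from several outcome tests.
    # The raw p-values are made up for illustration.
    p_raw <- c(0.004, 0.019, 0.032, 0.410)

    # Family-wise error rate control (Holm)
    p.adjust(p_raw, method = "holm")

    # False discovery rate control (Benjamini-Hochberg)
    p.adjust(p_raw, method = "BH")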

Guidance on Using Multinomial Tests for Differences in Distribution

Some descriptive and causal research questions at OES involve comparing how two samples are distributed across multiple categories of a policy outcome or behavior. Alternatively, we may wish to compare how a benefit was distributed among a sample of beneficiaries relative to a well-defined target or eligible population. In these cases, we may use a multinomial statistical test, such as a chi-squared test, to draw inferences about whether (1) two sub-samples were drawn from the same population or (2) the sample of beneficiaries of a program reflects the population of eligible individuals.
Using Multinomial Tests for Differences in Distribution (PDF)
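
The base R sketch below illustrates both uses with made-up counts: a test of whether two sub-samples share the same distribution across categories, and a goodness-of-fit test comparing beneficiaries to a known eligible population.

    # Minimal sketch of chi-squared tests for differences in distribution.
    # Counts, categories, and population proportions are illustrative.

    # (1) Were two sub-samples drawn from the same population?
    tab <- rbind(sample_1 = c(45, 80, 25),
                 sample_2 = c(60, 70, 20))
    chisq.test(tab)

    # (2) Do program beneficiaries reflect the eligible population?
    beneficiary_counts <- c(cat_A = 120, cat_B = 300, cat_C = 80)
    eligible_props     <- c(0.30, 0.55, 0.15)
    chisq.test(x = beneficiary_counts, p = eligible_props)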

Evaluation Resources

Effect Size and Evaluation: The Basics

An impact evaluation aims to detect and measure the effect of a program or policy on a priority outcome. To plan for an evaluation, we need to decide how large or small an effect we want to be able to detect. This important decision will influence all aspects of evaluation planning, including budget, operations, duration, and sample. This resource explains what effect sizes are and their importance in designing an evaluation.
Effect Size Guide (PDF)
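
As a quick illustration of how the target effect size drives sample size (and vice versa), the base R sketch below uses the built-in power calculators with illustrative numbers; the guide discusses how to choose these inputs for a real evaluation.

    # Minimal sketch of power calculations in base R. All values are illustrative.

    # Sample size per group needed to detect a 2-percentage-point increase
    # (from 20% to 22%) with 80% power at the 5% significance level
    power.prop.test(p1 = 0.20, p2 = 0.22, power = 0.80, sig.level = 0.05)

    # Smallest standardized effect detectable with 1,000 units per group
    power.t.test(n = 1000, power = 0.80, sig.level = 0.05, sd = 1)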

Evidence Reviews to Support Evidence-Based Policymaking

The Foundations for Evidence-Based Policymaking Act of 2018 (the Evidence Act) directs Federal agencies to develop evidence to support policymaking. A crucial component of developing evidence is understanding what evidence already exists. This helps ensure that key lessons are incorporated into new and existing programming, and that resources for evidence-building activities are targeted toward areas with the largest evidence gaps. This resource introduces a framework for conducting a review of existing evidence and provides additional resources for those seeking to conduct more systematic reviews.
Evidence Reviews Guide (PDF)

Preregistration as a Tool for Strengthening Federal Evaluation

In order to ensure that evaluation findings are reliable and that statistical results are well-founded, it is essential that evaluators commit to specific design choices and analytic methods in advance. By making these details publicly available (a practice known as preregistration), we promote transparency and reduce the risk of inadvertently tailoring methods to obtain certain results or of selectively reporting positive results. This guidance paper describes the importance and benefits of preregistration and addresses concerns that Federal evaluators might have.
Preregistration Guide (PDF)

How to Use Unexpected and Null Results

Recent research shows that null results in Federal evaluations are more common than we think and occur for a variety of reasons. When agencies share both expected and unexpected results, we can learn which programs work and what effect sizes are realistic, and we can improve Federal evaluations. This post dispels misconceptions about null results and highlights the different uses of, and lessons from, such results.
Unexpected Results Guide (PDF)

Observational Causal Evaluations with Quasi-Experimental Designs

The goal of this document is to provide helpful resources for OES team members engaged in observational, usually retrospective, causal projects. In particular, it is intended to support and augment conversations with agency partners, especially those unfamiliar with designs for such projects. This piece is not intended to provide guidance to be followed during analysis; rather, it outlines OES's perspective on observational causal studies. We expect that agency partners will work closely with OES team members on the details of their particular designs.
Resources for Quasi-Experimental Designs (PDF)
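
For readers who want a concrete picture of what one such design looks like, the sketch below estimates a simple difference-in-differences regression on simulated data. This is only an illustration of one common quasi-experimental approach, not a design or estimator that the document prescribes.

    # Minimal sketch of a difference-in-differences regression, one common
    # quasi-experimental design. Data and variable names are simulated and
    # illustrative; the document above does not prescribe this estimator.
    set.seed(7)
    df <- expand.grid(unit = 1:200, period = 0:1)
    df$treated_group <- as.integer(df$unit <= 100)   # units exposed to the policy
    df$post          <- df$period                    # 1 after the policy change
    df$outcome       <- 0.5 * df$treated_group + 0.3 * df$post +
                        0.4 * df$treated_group * df$post + rnorm(nrow(df))

    # The interaction term is the difference-in-differences estimate
    did_fit <- lm(outcome ~ treated_group * post, data = df)
    summary(did_fit)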


Equity Evaluation Series

This memo series guides OES's commitment to equity in our evaluation process and our efforts toward understanding and reducing barriers to equitable access to Federal programs. The memos are internal guidance documents for OES team members, covering topics that range from defining equity in quantitative evaluations to methodological guidance on choosing control variables in regression analyses. Their goal is to improve the consistency and quality with which equity is addressed in OES evaluations, and to provide training resources for OES researchers in this area.
Defining Equity in Federal Government Evaluations (PDF)
Matching an Evaluation Method to Your Equity Question (PDF)
Choosing Controls in Regression Analyses Involving Equity (PDF)
Guidance on Using Multinomial Tests for Differences in Distribution (PDF)