Reflections from 100 OES collaborations
On November 16, 2021, we held OES@100, an event celebrating the completion of 100 collaborations across government, involving a total sample size of over 44 million individuals. Distinguished speakers discussed insights gained from the evaluation portfolio to date and future priorities for evidence-building.
“The federal government is learning extremely rich and impactful lessons from these 100 studies — and we’ve only scratched the surface,” GSA Administrator Robin Carnahan said in a GSA press release. “Great outcomes start with good data, and the Office of Evaluation Sciences is a prime example of GSA’s commitment to using data and technology to improve service delivery governmentwide. OES is gathering the evidence we need to make recommendations that improve how the government delivers for the American people.”
In the first session of the event, our team reflected on a few lessons learned from both our completed evaluations and the many evaluations that did not launch. Here are a few takeaways:
1. A program change and evaluation approach don’t have to be ideal to be valuable.
In government, a variety of barriers and challenges typically make it difficult to field the ideal project. Nonetheless, what is feasible to field can still yield valuable insights: building evidence about a promising program change pays off even when the ideal evaluation approach is out of reach.
2. Coupling randomized evaluations with administrative data lets us learn quickly, rigorously, and at low cost.
Leveraging administrative data to measure impacts makes it possible to answer program improvement questions quickly, confidently, and cost-effectively. Using data routinely collected by federal agencies, OES was able to analyze outcomes for over 44 million individuals on a relatively small budget and in a limited time period.
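To make the approach concrete, here is a minimal sketch of what this kind of analysis can look like. The file names and column names (person_id, treated, outcome) are hypothetical and not drawn from any specific evaluation: random assignment records are linked to outcomes an agency already collects, and the effect is estimated with a simple regression.

```python
# Minimal sketch (hypothetical files and column names): estimating the effect of a
# randomized program change using outcomes already recorded in administrative data.
import pandas as pd
import statsmodels.formula.api as smf

# Random assignment records kept by the program office.
assignment = pd.read_csv("assignment.csv")           # columns: person_id, treated (0/1)

# Outcomes routinely collected by the agency -- no new data collection needed.
admin_outcomes = pd.read_csv("admin_outcomes.csv")   # columns: person_id, outcome

# Link assignment to outcomes on a shared identifier.
df = assignment.merge(admin_outcomes, on="person_id", how="left")

# Simple difference-in-means estimate of the intervention's effect,
# with a heteroskedasticity-robust standard error.
model = smf.ols("outcome ~ treated", data=df).fit(cov_type="HC2")
print(model.summary().tables[1])
```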
3. There’s more to sample size than counting the number of participants in a program.
Having worked frequently with administrative datasets, we have found that projected sample sizes do not always provide a good indication of how big the analyzable sample will ultimately be; there is a lot more to sample size than merely counting the number of participants or people reached by an intervention. Anticipating the many ways that sample sizes grow and shrink is instrumental to a successful evaluation.
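As a back-of-the-envelope illustration, the sketch below uses made-up rates (record match rate, eligibility, outcome coverage) to show how a projected sample can shrink before analysis, and how that shrinkage changes the smallest effect an evaluation can reliably detect.

```python
# Illustrative sketch with made-up rates: how a projected sample shrinks before analysis,
# and what that shrinkage does to the minimum detectable effect of a two-arm evaluation.
from statsmodels.stats.power import TTestIndPower

projected = 100_000          # participants the program expects to reach
match_rate = 0.80            # share that can be linked to administrative records
eligibility_rate = 0.90      # share that meets the evaluation's inclusion criteria
outcome_coverage = 0.85      # share with the outcome observed in the data window

analyzable = projected * match_rate * eligibility_rate * outcome_coverage
print(f"Analyzable sample: {analyzable:,.0f} of {projected:,} projected")

# Minimum detectable effect (standardized), two equal arms, 80% power, alpha = 0.05.
power = TTestIndPower()
for n in (projected, analyzable):
    mde = power.solve_power(nobs1=n / 2, alpha=0.05, power=0.8, ratio=1.0)
    print(f"n = {n:>9,.0f} -> minimum detectable effect ~ {mde:.3f} standard deviations")
```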
4. Being transparent about implementation — as well as results — has enormous value.
Since our earliest evaluations, we have published a record of all of our collaborations and evaluation results. Then in 2018, we began pre-registering every project by posting our analysis plans on our website. Finally, we now deliver transparency in implementation by publishing an intervention pack and record of implementation for many evaluations.
As agencies build their evaluation capacity, we hope our lessons learned and resources can be of use.