How does OES work?
Our team is based at the General Services Administration (GSA), making us uniquely situated to deploy technical support to agencies across the Federal government, provide an independent yet intragovernmental perspective, and develop standards for high-quality evaluations in the government environment. The Office of Evaluation Sciences' (OES) mission and operating model involve engaging a wide variety of agencies to answer high-priority questions and build evidence where it is lacking.
OES works directly with agencies to design, launch, and analyze rigorous rapid-cycle evaluations. As an interdisciplinary team, OES brings deep knowledge of a broad research literature, and thus is able to apply the most relevant insights to agency challenges. OES enhances the capacity of Federal teams by developing iterative collaborations over multiple years to fill evidence gaps. We are agile and creative as we find ways to rigorously evaluate what is working (and what is not) to enable government to better serve the public.
OES team members follow three guiding principles:
- Humbly collaborate: Our single objective is to enable agencies to better achieve their missions. Our partners, who are civil servants with years of experience delivering programs across the government, are the experts on how their programs work and often have the best ideas for how to improve them. OES team members can bring a new perspective and knowledge of relevant literature to program design and evaluation, but only in collaboration with those on the ground and on the front lines of public service. Sustainable change is possible when the people within partner agencies drive the process and participate in the design and implementation of an evaluation.
- Bring the highest level of rigor and research integrity: Our team aims to tackle our nation’s toughest challenges in the notoriously complex government environment while at the same time employing research methods that are rigorous, reliable, and reproducible. This includes using randomized evaluations whenever possible, pre-registering analyses, writing and releasing open-source code to teach others how to follow our example, providing general guidelines for the decisions we make during analysis and design, and making our processes and methods publicly available for feedback.
- Act transparently and openly: We keep a public record of all evaluations fielded, publicize our findings (including null findings and those that run counter to our own prior expectations and goals), and, when possible, submit our work for peer-reviewed publication in collaboration with university academics. If others in the Federal government, as well as state and local entities, can see our processes, plans, and results, they can use these resources to conduct their own tests of program changes, building on the work already done.
In addition to making our own research as rigorous, reliable, and reproducible as possible, we are committed to making our research tools available for others to apply or adapt to their own work. Below are some of these tools, which are continuously under development as we refine our methods and learn from our own projects.
Find these useful? Have suggestions for improvement? We look forward to hearing from you!
Resources coming soon:
- OES Project Cycle
- OES Standard Operating Procedure for Design and Analysis
- OES Guide to Research Integrity
- OES Project Review Document
- OES Project Design Document
- OES Project Analysis Document
- OES Abstract Template
- Minimum Detectable Effect Calculator
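The Minimum Detectable Effect Calculator above is not yet published, so as a rough illustration only, here is a minimal sketch of the standard two-arm MDE formula it would presumably implement (the function name, defaults, and interface are our own assumptions, not the OES tool):

```python
from statistics import NormalDist

def mde(n_per_arm: int, sd: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Minimum detectable effect for a two-arm randomized evaluation with equal arms.

    Standard formula: MDE = (z_{1 - alpha/2} + z_{power}) * sd * sqrt(2 / n_per_arm),
    i.e. the smallest true difference in means detectable with the given
    significance level and statistical power.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for a two-sided 5% test
    z_power = z.inv_cdf(power)          # ~0.84 for 80% power
    return (z_alpha + z_power) * sd * (2 / n_per_arm) ** 0.5

# Example: with 1,000 units per arm and an outcome with standard deviation 1,
# the detectable effect is roughly 0.125 standard deviations.
print(round(mde(1000, 1.0), 3))
```

As expected, the MDE shrinks as sample size grows and scales linearly with the outcome's standard deviation, which is why power calculations like this are typically run before an evaluation is fielded.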