Make the Government Work Better through Testing and Learning

Join OES and numerous agency partners to learn how the Federal government uses rapid cycle testing and leverages administrative data to improve outcomes. GSA Administrator Emily Murphy will open the event by highlighting government-wide progress in using data as a strategic asset. In the three brief sessions that follow, distinguished agency partners and academic collaborators will present case studies and lessons learned from rapid cycle testing, building a portfolio of evidence, and transparent evaluation.

Note: If you are an OMB or other agency employee and Google Forms does not work on your network, email oes@gsa.gov to register.

Following this event, the Behavioral Science and Policy Association (BSPA) has asked leading experts to facilitate a series of workshops for Federal employees on Behavioral Science 101. Stay tuned for more information on this training.

Make the Government Work Better through Rapid Cycle Testing

In this segment, partners from the Department of Health and Human Services (HHS), the Social Security Administration (SSA), and the Department of Housing and Urban Development (HUD) will share new results and lessons learned from conducting repeated experimental tests. These leaders will address priority topics such as reducing overprescribing of certain medications, generating energy cost savings, and making government communications more effective.

Speakers include:

Make the Government Work Better through a Portfolio of Evidence

This segment will highlight findings from tests with HHS, the Department of Veterans Affairs (VA), the Louisiana Department of Health, and the Centers for Medicare and Medicaid Services (CMS) on low-cost, scalable program changes to improve vaccination rates among populations including Veterans, children and adolescents, and adults 65 and over.

Speakers include:

Make the Government Work Better through Transparent Evaluation

In this final portion, speakers will discuss why transparency about evaluation methods matters not only as a matter of public accountability but also as a tool for strengthening Federal evaluation. The session will focus, in particular, on pre-registration of study designs and analysis plans. Speakers will discuss why the results of pre-registered evaluations are generally more reliable for policy and program design; address some challenges, both real and perceived, to pre-registration in the government context; and present examples of recent evaluations that benefited from pre-registration.