Targeting incentives increased the response rate but did not decrease nonresponse bias

What is the agency priority?

The American Housing Survey (AHS) is a biennial, longitudinal survey of housing units designed by the U.S. Department of Housing and Urban Development (HUD) and administered by the U.S. Census Bureau. As with many statistical surveys, the AHS has experienced declining response rates, raising concerns about nonresponse bias: divergence between survey estimates and their true values that arises when respondents differ from nonrespondents. This evaluation asks: can targeted cash incentives both increase response rates and decrease nonresponse bias?

What did we evaluate?

We evaluated the effectiveness of targeting incentives to potential respondents estimated to have the highest risk of nonresponse during the 2021 wave of the AHS. Unconditional financial incentives ($2, $5, or $10, delivered in cash in a letter) were either allocated at random or “targeted”, meaning provided only to those in the 30% with the highest estimated risk of nonresponse. We focus on the comparison of targeted versus randomly allocated incentives, rather than incentives versus no incentives, because the goals of the evaluation were to increase response rates and decrease nonresponse bias while minimizing survey costs.

We first estimated 86,000 potential respondents’ risk of nonresponse in 2021. We then paired potential respondents with similar estimated risks of nonresponse and assigned each member of the pair to one of two groups:

1. a “targeted” group (n = 43,000) of potential respondents who received incentives if and only if they fell into the 30% with the highest estimated risk of nonresponse;
2. a “non-targeted” group (n = 43,000) of potential respondents, each of whom had a 30% probability of receiving an incentive irrespective of their estimated nonresponse risk.

Potential respondents receiving an incentive via either method were randomly assigned an amount of $2, $5, or $10, delivered in cash in the letter sent to all units in the survey.
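The pairing and assignment procedure described above can be sketched in Python. This is an illustrative reconstruction, not the study's actual code: the function names, the risk inputs, and the pairing-by-sorted-risk detail are assumptions about how such a design could be implemented.

```python
import random

def pair_and_assign(risks, seed=0):
    """Pair units with similar estimated nonresponse risks, then randomly
    assign one member of each pair to 'targeted' and the other to
    'non-targeted'. `risks` maps a unit id to its estimated probability
    of nonresponse (hypothetical input)."""
    rng = random.Random(seed)
    # Sort units by estimated risk so adjacent units form similar pairs.
    ordered = sorted(risks, key=risks.get)
    assignment = {}
    for i in range(0, len(ordered) - 1, 2):
        a, b = ordered[i], ordered[i + 1]
        if rng.random() < 0.5:      # coin flip within each pair
            a, b = b, a
        assignment[a] = "targeted"
        assignment[b] = "non-targeted"
    return assignment

def gets_incentive(group, risk_rank_pct, rng):
    """Targeted group: incentive iff in the top 30% of estimated risk.
    Non-targeted group: incentive with 30% probability, regardless of risk."""
    if group == "targeted":
        return risk_rank_pct >= 0.70
    return rng.random() < 0.30
```

Because assignment is randomized within risk-matched pairs, the two groups have the same distribution of estimated risk, so outcome differences can be attributed to the allocation rule rather than to who was at risk.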

How did the evaluation work?

Potential respondents were randomly assigned to the “targeted” group (n = 43,000) or the “non-targeted” group (n = 43,000). Among those receiving incentives, the incentive amount ($2, $5, or $10) was also randomly assigned. The response rate outcome was measured using data from the 2021 fielding of the AHS. To estimate changes in nonresponse bias, we tested whether the targeted and non-targeted respondents differed across ten key demographic and housing characteristics.
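One simple way to run the kind of balance check described above is to compare each characteristic's share among targeted versus non-targeted respondents with a two-proportion z-test. The sketch below is an illustration of that generic test, not the study's own analysis code, and it assumes each characteristic arrives as a list of 0/1 indicators.

```python
from math import sqrt
from statistics import mean

def balance_test(x_targeted, x_nontargeted):
    """Compare the share of respondents with a given characteristic across
    the two arms. A large |z| means respondents in the two arms differ on
    this characteristic, i.e. targeting shifted who responds.
    Inputs: lists of 0/1 indicators, one per respondent (hypothetical)."""
    p1, p2 = mean(x_targeted), mean(x_nontargeted)
    n1, n2 = len(x_targeted), len(x_nontargeted)
    pooled = (sum(x_targeted) + sum(x_nontargeted)) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se if se > 0 else 0.0
    return p1 - p2, z
```

A finding of "no measurable change in nonresponse bias" corresponds to none of the ten characteristics showing a statistically significant difference between arms.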

What did we learn?

We find that targeted incentives increased response rates but did not measurably improve or worsen nonresponse bias. The response rate in the targeted group was 0.7 percentage points higher than the 67.2% response rate in the group that received incentives completely at random. Holding the total incentive budget constant, simply targeting incentives to the 30% of potential respondents with the highest predicted risk of nonresponse induced roughly 300 additional people to respond compared to allocating incentives completely at random.
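As a quick arithmetic check (a back-of-the-envelope sketch, not the study's code), the roughly 300 additional respondents follows from applying the 0.7 percentage-point lift to the targeted group of about 43,000 potential respondents:

```python
# Apply the reported 0.7 percentage-point lift to the targeted group.
group_size = 43_000            # potential respondents in the targeted group
lift = 0.7 / 100               # 0.7 percentage points as a proportion
extra_responses = group_size * lift
print(round(extra_responses))  # 301, consistent with the ~300 reported
```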

Verify the upload date of our analysis plans on GitHub: main evaluation and preparatory descriptive study.

Year

2023

Status

Complete

Project Type

Impact evaluation of program change

Agency

Census, Housing and Urban Development

Domain

Government Efficiency

Resources

View Analysis Plan (PDF) View Abstract