A collaboration between OES and CDC's Demand for Immunization Team

Guiding evaluations of global vaccine demand and uptake

What was the challenge?

The Demand for Immunization Team (DIT) in the U.S. Centers for Disease Control and Prevention (CDC) Global Immunization Division (GID) seeks to increase and sustain demand for and uptake of vaccination services across the life course while strengthening immunization systems to meet community needs. The team’s primary objectives are to:

  • Build, test, and use quantitative and qualitative tools to assess behavioral and social drivers of vaccine demand and uptake
  • Use principles of human-centered design to encourage use of social data for action
  • Develop, implement, and evaluate demand and uptake interventions
  • Build capacity of government and partner staff working to address demand challenges in low- and middle-income countries

Evaluating demand interventions can be challenging for several reasons. Projects in the DIT portfolio occur in complex implementation environments where data quality can be highly variable due to ongoing conflict or other health systems challenges, and these environments often require rapid emergency response, which shapes how evaluations of these activities can be conducted. Nevertheless, robust evaluations of demand interventions can help build the evidence base on vaccine demand and uptake, inform the funding strategy (e.g., for improved cost-effectiveness), and shape stakeholder commitment to specific interventions.

What did we do?

We partnered with the DIT to develop a practical guide that describes the tasks a team member or evaluation partner could carry out when conducting an impact evaluation of vaccine demand interventions. In addition to outlining the pros, cons, and prerequisites of different causal and correlational (i.e., non-causal) designs, the guide describes steps and questions that users can ask of program implementers as they consider an evaluation, organized around the topics outlined below.

Before planning an intervention:

  • What is the current status of an immunization program and how does it address vaccine demand and uptake? Are there approaches that can be varied or tailored to specific concerns and demographic populations?
  • How have you assessed factors contributing to low uptake of vaccines in the current program (e.g., using existing frameworks such as the Behavioral and Social Drivers (BeSD) of vaccination framework)?
  • How are the implementation and adaptation of existing evidence-based interventions supported for a new context, target population, etc.?

Once an intervention is identified:

  • How does the proposed intervention influence drivers of vaccination?
  • What upstream (demand, intent, uptake) and downstream (health benefits or cost savings) outcomes is the proposed intervention expected to achieve? Can these changes be measured directly via existing data sources, or do new data need to be collected?
  • Is there buy-in on evaluation design and rationale from relevant stakeholders?

Ingredients for an evaluation:

  • What are the relevant questions on data access, collection, and sharing?
  • Are there differences between the current immunization program and proposed intervention in terms of access, timing of access, encouragement, location, etc.?
  • What are the relevant individual-level (behavioral) and system-level outcomes that can be monitored for this intervention?
  • What are the pros/cons of different analytical approaches (including both causal and non-causal study designs)?

What did we learn?

The guide offers key tips for thinking about and conducting impact evaluations in this space:

  • Be transparent about the pros and cons of causal and non-causal designs (such as pre/post designs or multiple regression methods), and be aware that they assess intervention or program impact in different ways, especially in terms of rigor and statistical bias. Often, the choice of study design, and the feasibility of an impact evaluation at all, will be shaped by the implementation environment.
  • Recognize which data sources can/cannot be used and for what purposes (e.g., the role of registry data for vaccine uptake may differ from that of survey measures for behavioral outcomes).
  • Distinguish the roles of process indicators, which measure what was done (e.g., to assess fidelity of implementation), and qualitative research, which can explain why an intervention did or did not work, from impact evaluation, which assesses whether the intervention worked and the magnitude of its effect.
  • There is potential to rigorously assess impact using randomization techniques if there are randomly occurring differences in how the current and proposed programs are set up in terms of access, timing, encouragement, location, etc. For example, if more people are offered access to the intervention than can be served, or if certain areas are covered and others are not, there are automatically two groups of individuals whose outcomes can be compared (see the sketch after this list).
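To make the contrast between these designs concrete, below is a minimal, hypothetical sketch in Python. The data are simulated and the scenario (an oversubscription lottery, a background trend in uptake, an 8-point intervention effect) is an illustrative assumption, not taken from the guide. It shows how a naive pre/post comparison can mix a background trend with the intervention effect, while a randomized comparison of offered versus not-offered groups isolates the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 2,000 caregivers; baseline vaccine uptake around 50%.
n = 2000
baseline = rng.binomial(1, 0.50, size=n)

# Suppose uptake would have risen by 5 percentage points anyway (a secular trend),
# and the intervention adds a further 8 points for those randomly offered it
# because more people sought access than could be served (an oversubscription lottery).
offered = rng.binomial(1, 0.5, size=n)
followup = rng.binomial(1, 0.55 + 0.08 * offered, size=n)

# Naive pre/post estimate among those offered: the trend and the true effect are mixed.
pre_post = followup[offered == 1].mean() - baseline[offered == 1].mean()

# Randomized difference-in-means: offered vs. not offered at follow-up,
# so the secular trend cancels out and only the intervention effect remains.
diff_in_means = followup[offered == 1].mean() - followup[offered == 0].mean()

# Simple two-sample (unpooled) standard error for the difference in means.
se = np.sqrt(
    followup[offered == 1].var(ddof=1) / (offered == 1).sum()
    + followup[offered == 0].var(ddof=1) / (offered == 0).sum()
)

print(f"Naive pre/post estimate:        {pre_post:.3f}")
print(f"Randomized difference-in-means: {diff_in_means:.3f} (SE {se:.3f})")
```

In this simulated example the pre/post estimate overstates the effect by roughly the size of the background trend, while the randomized comparison recovers it. In real settings the same logic applies, but which design is feasible depends on the implementation environment, as the guide emphasizes.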

The guide was designed to be used by DIT and other stakeholders in several ways: in evaluating existing vaccine demand and uptake interventions with partners; in scaling up initiatives that show promise in improving vaccine demand and uptake; in developing and implementing new interventions to increase vaccine demand and uptake; and in shaping the learning agenda with other programmatic and research stakeholders to generate evidence of impact for demand interventions. Ultimately, program implementers can also use impact estimates derived from evaluations alongside empirical cost data collected by other GID teams to estimate the cost-effectiveness and sustainability of vaccine demand interventions.
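As a purely illustrative calculation (all figures and names below are hypothetical assumptions, not drawn from the guide or from GID cost data), an impact estimate and cost data might be combined into a simple cost-effectiveness ratio along these lines:

```python
# Hypothetical cost-effectiveness sketch; every number here is illustrative.
program_cost = 150_000          # total intervention cost in USD (assumed)
effect = 0.08                   # estimated increase in uptake: 8 percentage points (assumed)
population_reached = 25_000     # individuals offered the intervention (assumed)

additional_vaccinations = effect * population_reached
cost_per_additional_vaccination = program_cost / additional_vaccinations

print(f"Additional vaccinations attributable to the intervention: {additional_vaccinations:.0f}")
print(f"Cost per additional vaccination: ${cost_per_additional_vaccination:.2f}")
```

Actual cost-effectiveness analyses would, of course, account for uncertainty in the impact estimate, the full set of program costs, and longer-term outcomes.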

Year

2023

Agency

Health and Human Services

Domain

Global Health