How CFA Uses the Expert Elicitation Process to Inform Our Work

At a glance

For some CFA products, we use expert elicitation to help fill gaps in knowledge. Expert elicitation is a structured process that convenes subject-matter experts, in disease topics and in cross-cutting disciplines, to discuss questions we have insufficient data to answer and to solicit input on the likelihood of various outcomes or scenarios. Experts provide feedback on the questions of interest and discuss the collective results and any areas of disagreement. The elicitation's final results are then reported for consideration in CFA analysis and modeling.


Overview: What is expert elicitation?

Expert elicitation is a process that begins by convening subject-matter experts, in disease topics as well as cross-cutting disciplines, to discuss questions we have insufficient data to answer and to solicit input on the likelihood of various outcomes or scenarios.

Experts each provide their feedback and judgments on the questions at hand, such as expected trends related to specific disease(s) and key drivers for deviations from these expectations. Then, experts convene to discuss their individual and collective responses. The final results of the elicitation are compiled and used as inputs into CDC products.

Evidence

Protocols for conducting expert elicitation have been studied and used across various disciplines. One method, the IDEA protocol, is an evidence-based, detailed process for conducting an expert elicitation workshop; its name comes from its four core steps: Investigate, Discuss, Estimate, and Aggregate. Another evidence-based structured elicitation protocol is the Delphi method, which seeks areas of consensus among experts by conducting sequential surveys and sharing group results with participants so they can adjust their answers if desired. This process can also incorporate participation from stakeholders such as patients and community members.
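
The Delphi method's core loop (survey, share group results, let experts revise) can be made concrete with a small sketch. The Python snippet below is purely illustrative: the function name, the choice of median and interquartile range as the shared summary, and all numbers are assumptions, not part of any specific protocol.

```python
from statistics import quantiles

def delphi_round_summary(estimates):
    """Summarize one survey round so group results can be shared
    back with participants, who may then revise their answers."""
    q1, med, q3 = quantiles(estimates, n=4)  # quartile cut points
    return {"median": med, "iqr": (q1, q3)}

# Hypothetical round-1 estimates for some quantity of interest
round_1 = [1.2, 1.5, 1.3, 2.0, 1.4]
print(delphi_round_summary(round_1))

# After seeing the group summary, experts submit revised estimates
round_2 = [1.3, 1.45, 1.35, 1.6, 1.4]
print(delphi_round_summary(round_2))
```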

Studies have found that structured expert elicitation, which draws on principles from several disciplines (including decision theory, mathematics, policy, health science, education, and psychology), can yield higher-quality judgments from subject-matter experts across many scientific fields, particularly when those judgments inform decision-making under uncertainty.

Process

Before experts are convened, the elicitation protocol is designed, and study materials are drafted.

The expert elicitation process then officially begins with recruiting subject-matter experts and sharing relevant data and other project-related information with them.

Then, experts each provide their feedback and judgments on the questions at hand, such as expected trends related to specific disease(s) and key drivers for deviations from these expectations. This feedback may be collected through questionnaires, one-on-one interviews, or other methods. Experts then convene, often via a virtual or in-person workshop, to discuss their individual and collective responses. Alternatively, group results can be shared without an additional convening or discussion. The final results of the elicitation are then compiled and used as inputs into CDC modeling and products.
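
As a rough illustration of the compilation step, the sketch below pools hypothetical expert probabilities for three scenarios using an equal-weight average (a simple linear opinion pool). Every name and number here is invented; actual aggregation methods may differ.

```python
# Hypothetical per-expert probabilities for three scenarios
expert_probs = {
    "expert_a": {"low": 0.2, "medium": 0.5, "high": 0.3},
    "expert_b": {"low": 0.1, "medium": 0.6, "high": 0.3},
    "expert_c": {"low": 0.3, "medium": 0.4, "high": 0.3},
}

scenarios = ["low", "medium", "high"]
# Equal-weight linear pool: average each scenario's probability across experts
pooled = {
    s: round(sum(p[s] for p in expert_probs.values()) / len(expert_probs), 3)
    for s in scenarios
}
print(pooled)  # {'low': 0.2, 'medium': 0.5, 'high': 0.3}
```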

Results

The final report from the expert elicitation may include:

  • A quantitative range related to a key unknown metric, such as the relative transmissibility of a new COVID-19 variant or the number of COVID-19 hospitalizations that will occur next year
  • A range of different scenarios for a disease-related metric or parameter in question, including associated data visualizations
  • Key drivers that affect the likelihood of each scenario or metric
  • Any areas of disagreement among the experts involved in the elicitation, and the reasoning behind the differing opinions
  • Any additional quantitative estimates to inform analysis or modeling, including aggregate estimates of the likelihood of different scenarios or of specific parameter values, along with summary confidence levels (see the sketch after this list)
  • An assessment of confidence in the estimates or scenarios provided
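
To show what an aggregate quantitative estimate might look like mechanically, here is a minimal sketch that combines hypothetical per-expert (low, best, high) estimates by taking the median of each component. The experts, the values, and the median rule are all assumptions for illustration only.

```python
from statistics import median

# Hypothetical (low, best, high) estimates for an unknown parameter,
# e.g., the relative transmissibility of a new variant
elicited = {
    "expert_a": (1.1, 1.4, 1.9),
    "expert_b": (1.2, 1.5, 2.1),
    "expert_c": (1.0, 1.3, 1.8),
}

lows, bests, highs = zip(*elicited.values())
aggregate = {"low": median(lows), "best": median(bests), "high": median(highs)}
print(aggregate)  # {'low': 1.1, 'best': 1.4, 'high': 1.9}
```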

Confidence Levels

In many cases, expert elicitation involves experts assigning confidence levels to their answers to specific questions, as a way to estimate individual uncertainty. These individual confidence levels are then summarized into a summary confidence level for each question. These differ from the overarching confidence levels in our overall assessments, which incorporate the entire body of evidence, including expert judgment.
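
A minimal sketch of how individual ordinal confidence levels might be rolled up into one summary level per question, using the five-level scale described below. Taking the median rank as the summary rule is an assumption for illustration, not a documented method.

```python
# Five-level scale, ordered from least to most confident
SCALE = ["very low", "low", "medium", "high", "very high"]

def summary_confidence(levels):
    """Return the median-rank level (upper middle for even counts)."""
    ranks = sorted(SCALE.index(level) for level in levels)
    return SCALE[ranks[len(ranks) // 2]]

print(summary_confidence(["low", "medium", "medium", "high", "very high"]))
# -> 'medium'
```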

For example, in a recent expert elicitation exercise we conducted for our Respiratory Disease Outlook, we asked experts to assign a confidence level to each of their answers, according to the following scale:

  • Very low: The expert has very low confidence in their estimate (the estimate is highly uncertain), and they would put very wide error bars around it. The estimate is the number the expert comes up with when asked directly, but the expert would not be surprised if they updated their estimate significantly after more time to think and after receiving more information.
  • Low: The expert has low confidence in their estimate (their estimate is fairly uncertain), and they would put somewhat wide error bars around it. The expert thinks their estimate is reasonable, but the expert would only be a little surprised if they updated their estimate significantly after more time to think and after receiving more information.
  • Medium: The expert has medium confidence in their estimate (their estimate is moderately uncertain), and they would put moderate error bars around it. The expert is unsure about the exact value, but they would be somewhat surprised if they updated their estimate significantly after more time to think and after receiving more information.
  • High: The expert has high confidence in their estimate (their estimate is fairly certain), and they would put somewhat narrow error bars around it. They would be surprised if they updated their estimate significantly after more time to think and after receiving more information.
  • Very high: The expert has very high confidence in their estimate (their estimate is highly certain), and they would put very narrow error bars on their estimate. The expert would be very surprised if they updated their estimate significantly after more time to think and after receiving more information.
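
The scale above ties confidence to error-bar width only qualitatively. As a purely illustrative encoding, one could map each level to a relative error bar as in the sketch below; the numeric half-widths are invented and were not part of the exercise.

```python
# Invented relative half-widths; the actual exercise assigned no numbers
REL_HALF_WIDTH = {
    "very low": 1.00,   # very wide error bars
    "low": 0.50,        # somewhat wide
    "medium": 0.25,     # moderate
    "high": 0.10,       # somewhat narrow
    "very high": 0.05,  # very narrow
}

def error_bar(estimate, confidence):
    """Return an illustrative (lower, upper) interval around an estimate."""
    half_width = estimate * REL_HALF_WIDTH[confidence]
    return (estimate - half_width, estimate + half_width)

print(error_bar(100, "medium"))  # (75.0, 125.0)
```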