Elicitation of Expert Knowledge and Probabilities for Controlling Subjectivity in Risk-Based Decision Making

Feature
Article
Pharmaceutical Technology, November 2023
Volume 47
Issue 11
Pages: 30-46

More attention should be given to how expert opinions and judgments are elicited for reducing uncertainty in quality risk management and risk-based decision making.

This paper updates “Expert Judgments in Quality Risk Management: Where Quality Risk Management can ‘Go Wrong’” (1), in which several authors writing on the first 10 years of the International Council for Harmonisation’s (ICH’s) Q9 discussed controlling “subjectivity” in quality risk management (QRM) as a persistent challenge to ICH Q9 implementation. Indeed, “[t]he robustness and validity of a risk management program depend in part on recognition and control of the subjectivity that might be introduced by the many expert-based judgments for decisions that occur during the development and use of risk tools” (1). The revised guideline, ICH Q9 (R1), provides a new section of specific comments on subjectivity, “Managing and Minimizing Subjectivity (5.3),” adjacent to an expanded discussion about the meaning of formality in QRM (2). This paper discusses some of the approaches for controlling subjectivity using elicitation of expert knowledge and probabilities.

Editor's Note

This article was peer-reviewed by a member of Pharmaceutical Technology’s Editorial Advisory Board.

Submitted: June 12, 2023

Accepted: July 5, 2023

The general means for controlling subjectivity in QRM and for risk-based decision making (RBDM) include:

  • raising awareness about subjectivity through training on probability and expert judgments of subjective probabilities, heuristics and biases, and the possible impact of subjectivity on RBDM
  • using formal calibration of individual experts (e.g., Cooke’s classical method or other frameworks) to elicit subjective probabilities for seed questions, elicit subjective probabilities for the de novo situation, and apply formal scoring rules to weight experts’ subjective judgments
  • using subjective probability elicitations in group settings with facilitation, such as the Delphi technique, value-focused thinking, and multi-criteria decision making (MCDM) processes.

The first approach for controlling subjectivity was discussed in the prior paper on ICH Q9 (1). This paper examines the last two approaches, while recognizing that this list is certainly not an exhaustive summary of the many decades of research on, and application of, methods for controlling subjectivity in risk analysis and decision-making under uncertainty.

RBDM often relies on experts for qualitative knowledge about likely sources, predictive factors (parameters), appropriate models, and approaches for controlling identified risks. For complex and significant risks to the patient, RBDM also calls upon experts “to provide quantitative probabilistic assessments of key uncertainties in an analysis when empirical data are unavailable, incomplete, uninformative, or conflicting” (3). This general view of expert elicitation is discussed in the following paragraphs, especially in the context that “[f]ormality in quality risk management is not a binary concept (i.e., formal/informal); varying degrees of formality may be applied […] when making risk-based decisions” (2, §5.1). In other words, there is no distinct boundary between when qualitative or quantitative elicitations of expert knowledge and probabilities will suffice for a given risk-based decision.

Elicitation of expert knowledge and probabilities

Expert elicitation methods are used as a means of controlling subjectivity and thereby help reduce uncertainty arising in risk management decision-making. For this paper, “expert elicitation” (EE) is used as a general term for processes ranging from a relatively informal and qualitative elicitation of the structure of a risk problem to mathematically formal expert elicitation of risk factors (e.g., variables and parameters), their possible values, and the uncertainties about those estimated values. The latter, known as structured expert judgment (SEJ), is nearly a discipline in itself, attributed to Roger Cooke and a long list of his colleagues and research collaborators (3–6). SEJ processes apply scientifically justified methods for eliciting experts’ quantitative knowledge to fill critical information gaps in risk assessment and risk control evaluations.

Elicitation of expert knowledge. The first concept in this use of EE is the qualitative (and possibly informal) elicitation of expert knowledge to help identify decision-making objectives, values, model structure, key risk factors/parameters, and appropriate QRM tools and controls. Here, the SEJ literature is relatively silent on identifying or defining suitable elicitation processes (7). Therefore, the author relies on the volumes of scholarly work about eliciting complex decision-making hierarchies or models for risk-based decisions that are available in the MCDM literature (8–14). The benefits of multi-criteria decision analysis (MCDA) for controlling subjectivity in QRM risk modeling and decision-making were previously reported (15). The distinction between a model structure for a quantitative risk analysis and that of MCDA is not trivial: the former seeks a testable mathematical formulation of a risk question, while the latter models performance against objectives (goals) measured over the possible alternatives (e.g., risk projects). Either approach can inform risk-based decision making. Table I gives the typical steps in MCDM (which includes MCDA and other decision tools) and indicates the two major types of expert elicitation used to develop risk models that inform RBDM.

Table I. Typical application of structured expert judgement (SEJ) tools in risk-based and multi-criteria decision making (MCDM).

Defining and structuring of a complex risk-based decision begins by eliciting a clear definition of the risk in question and the principal factors contributing to the risk (Table I). The hierarchical view of a complex multi-objective or multi-criteria decision problem is sometimes referred to as an “objectives hierarchy,” acknowledging that the purpose of a decision is to achieve one or more specific objectives (12,16,17). In contrast, the purpose in most classical expert elicitations is to have expert knowledge stand in for data, risk parameters, and the uncertainty in the parameters. The elicited data distributions and parameters are then used to calculate risk. Although the theoretical underpinnings differ, a decision objective to “reduce risk to the patient” can lead to results similar to the predictive endpoint of a quantitative, expert-built risk model. Most importantly, both processes inform decision-making about the need to control risk.

Figure 1 shows a small section of an elicited objectives hierarchy model that might occur in prioritizing pharmaceutical manufacturing sites for good manufacturing practice (GMP) surveillance inspection. In this view, the model includes inherent product and process factors, among others, that experts identify as contributing to overall risk. The attributes of these factors are measurable parameters that combine (usually linearly) to estimate a defined risk value or score “upwards” in the hierarchy. A relevant example of qualitative expert knowledge elicitation is found in the development of FDA’s original “GMP Site Selection Model” for GMP surveillance inspections. The model development began in 2003 using qualitative expert elicitation of the key parameters that might predict risk, based on methods in the MCDM literature (18). The important attributes that experts believed contribute to a “risk score” for the purpose of GMP surveillance inspections were elicited through focus-group and survey methods involving approximately 80 pharmaceutical manufacturing and compliance experts. Since 2005, the FDA site-selection model has undergone many iterations, continuous improvement, and peer review; however, many of the original risk factors and attributes are retained (19).
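
The linear “roll-up” of attribute scores in such a hierarchy is simple to demonstrate. The following Python sketch combines a handful of site attributes into a single risk score using expert-elicited weights; all factor names, scores, and weights are hypothetical illustrations, not those of FDA’s actual site-selection model.

```python
# Minimal sketch of a linearly aggregated objectives hierarchy (see Figure 1).
# All attribute names, scores, and weights are hypothetical.

# Leaf attributes: normalized scores in [0, 1], elicited or measured per site.
site_attributes = {
    "process_complexity": 0.7,
    "product_sterility_burden": 0.9,
    "inspection_history": 0.4,
    "recall_history": 0.2,
}

# Expert-elicited weights; they sum to 1 so the aggregate score stays on [0, 1].
weights = {
    "process_complexity": 0.25,
    "product_sterility_burden": 0.35,
    "inspection_history": 0.25,
    "recall_history": 0.15,
}

def risk_score(attributes, weights):
    """Linear-additive roll-up of attribute scores 'upwards' in the hierarchy."""
    return sum(weights[k] * attributes[k] for k in weights)

print(f"Site risk score: {risk_score(site_attributes, weights):.2f}")
```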

Figure 1. Hypothetical and partial elicited hierarchy for a good manufacturing practice (GMP) surveillance model. Elicitation of expert knowledge about the factors important in predicting risk can be organized hierarchically as an aid to risk model communication. The hierarchies range from a handful of criteria to a complex hierarchy of, for example, 60 criteria and attributes that the author elicited from experts in adverse drug experience reporting. (Figure courtesy of the author)

Expert elicitation for structured expert judgments. A formal quantitative elicitation of expert judgments of probabilities, such as described in Cooke’s classical method for SEJ (20), is the second concept within expert elicitation as defined herein. Rigorous SEJ is used in high-impact RBDM where eliciting continuous risk estimates and the cumulative distribution—a presentation of uncertainty in the risk estimate—may be crucial. The steps for eliciting experts for SEJ are shown in Table II. Figure 2 illustrates, using theoretical probability densities, the notion that the expert is elicited to obtain a justifiable estimate of the likely value and uncertainty of a risk parameter for use in making risk estimates. Figure 3 illustrates the simplest form of eliciting subjective probabilities, which is a common starting point for novices in the practice. This method is sometimes used to initialize parameters for Monte Carlo risk models, but SEJ is much more complicated, as outlined in Table II. The process steps given in Table II mask the very complicated work of eliciting judgments in interviews of one expert at a time before aggregating the experts’ results for modeling. Unfortunately, one-on-one elicitation is notoriously time consuming and challenging for experts and facilitators alike.
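
The three-point method of Figure 3 is easy to sketch. Assuming a triangular distribution, the following Python code turns an expert’s lowest, most likely, and highest estimates into Monte Carlo samples and propagates them through a toy risk model; the parameter values and the model itself are invented for illustration.

```python
# Minimal sketch of the three-point elicitation in Figure 3: the expert's
# lowest, most likely, and highest estimates define a triangular distribution
# that seeds a Monte Carlo risk model. All numbers are hypothetical.
import random

low, mode, high = 0.001, 0.005, 0.02   # elicited per-batch defect rate

# Draw Monte Carlo samples of the elicited parameter.
samples = [random.triangular(low, high, mode) for _ in range(100_000)]

# Propagate through a toy risk model: expected defective batches per year.
batches_per_year = 250
losses = sorted(rate * batches_per_year for rate in samples)
print(f"median: {losses[len(losses) // 2]:.2f} defective batches/yr")
print(f"95th percentile: {losses[int(0.95 * len(losses))]:.2f} defective batches/yr")
```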

Table II. Structured expert judgment. After Cooke (4), Kurowicka and Cooke (5).

Cooke’s classical model relies significantly on calibrating experts by asking them to estimate values and probability distributions for approximately 10 quantities that are reasonably knowable in their discipline. The results of estimating probabilities in this so-called “seed questions” exercise are used, under proper statistical scoring rules, to calculate two scores for each expert. The first is a calibration score that roughly measures accuracy using statistical distance measures from the known value of the parameter. The second is an information score measuring the match between the expert’s probability distribution and the known parameter distribution (21). Either placing value estimates in probability bins or placing probabilities on binned values can be used (Figure 4). For example, a sterile manufacturing expert might be asked for the probability of sterility failures, given a defined sterile process, and subsequently to make reasoned judgments about the uncertainty around those values. Reasonable estimates are likely to exist in the pharmaceutical engineering and manufacturing domains. The expert’s accuracy and precision can then be used as a measure of the expert’s calibration. The mathematics to calculate the expert’s performance can be daunting and is beyond the scope of this discussion.
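
Even so, a simplified sketch conveys the flavor of the scoring. The Python code below computes a calibration score of the kind used in the classical model, assuming each expert supplies 5th, 50th, and 95th percentiles for the seed variables; the intrinsic range, the information score, and other refinements of Cooke’s full method are omitted, and the seed data are invented.

```python
# Hedged sketch of a classical-model-style calibration score from seed
# questions. Each assessment is the expert's (5th, 50th, 95th) percentiles
# for one seed variable; realizations are the later-revealed true values.
import math
from scipy.stats import chi2

P_THEORY = [0.05, 0.45, 0.45, 0.05]   # theoretical inter-quantile bin probabilities

def bin_index(quantiles, realization):
    """Which inter-quantile bin the realized value falls into."""
    q5, q50, q95 = quantiles
    if realization <= q5:
        return 0
    if realization <= q50:
        return 1
    if realization <= q95:
        return 2
    return 3

def calibration_score(assessments, realizations):
    """P-value-style score: high when empirical bin rates match theory."""
    n = len(realizations)
    counts = [0, 0, 0, 0]
    for q, x in zip(assessments, realizations):
        counts[bin_index(q, x)] += 1
    s = [c / n for c in counts]
    # Relative entropy I(s, p); empty bins contribute zero.
    i_sp = sum(si * math.log(si / pi) for si, pi in zip(s, P_THEORY) if si > 0)
    # Chi-square approximation: 2n*I(s, p) ~ chi2 with (bins - 1) dof.
    return 1.0 - chi2.cdf(2 * n * i_sp, df=len(P_THEORY) - 1)

# Example: one expert's identical percentiles across 10 seed questions.
assessments = [(2.0, 5.0, 9.0)] * 10
realizations = [1.5, 3.0, 4.0, 4.5, 5.5, 6.0, 7.0, 8.0, 8.5, 9.5]
print(f"calibration score: {calibration_score(assessments, realizations):.3f}")
```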

Figure 2. A key purpose of expert elicitation (EE). EE/structured expert judgement (SEJ) is sometimes used to obtain estimates of a probability distribution (here shown as probability density functions (PDFs) for convenience of presentation). The EE/SEJ process also generates information about a variable’s expected uncertainty according to the expert. (Figure courtesy of the author)

Aggregating expert judgments. Finally, the use of multiple experts in an SEJ, or in qualitative hierarchy or structure building, is generally preferable to relying on a single expert. The methods for quantitatively aggregating expert judgments have been the topic of much research in risk analysis, operations research and management, and statistics. In general, the many proposed aggregation methods fall within only two general classes: mathematical and behavioral (Figure 5). Both classes may include one-on-one elicitations of experts prior to aggregation of the results. But often, such as in failure mode and effects analysis (FMEA) exercises, groupwise estimates might be “socialized” and replace the aggregation of individual risk and uncertainty estimates.
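
A linear opinion pool is perhaps the simplest member of the mathematical class. In the sketch below, each expert’s elicited density for a risk parameter is mixed using weights that, in a classical-method application, would come from performance scoring; the distributions and weights shown are illustrative, not drawn from any real elicitation.

```python
# Hedged sketch of mathematical aggregation (Figure 5): a linear opinion
# pool over two experts' elicited densities. All values are illustrative.
import numpy as np

x = np.linspace(0.0, 0.1, 501)   # grid of possible parameter values

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Each expert's elicited density, plus a weight that could come from
# calibration/information scoring (weights must sum to 1).
expert_pdfs = [normal_pdf(x, 0.02, 0.005), normal_pdf(x, 0.03, 0.010)]
weights = [0.7, 0.3]

pooled = sum(w * pdf for w, pdf in zip(weights, expert_pdfs))
pooled_mean = np.trapz(x * pooled, x)   # decision-maker's pooled mean
print(f"pooled mean estimate: {pooled_mean:.4f}")
```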

Figure 3. The simplest notion of eliciting a probability distribution. The expert is asked for the lowest, highest, and most likely (probable) estimates of the risk event or parameters needed for a risk assessment. Given an assumption about the shape of the distribution (e.g., normal, triangular, …), a crude distribution of parameters can be inferred from the elicited values. (Figure courtesy of the author)

Although the math and statistics for SEJ and for aggregating EE results are beyond the scope of this report, SEJ has been scientifically validated on many occasions. Key to its validation is the availability of results from tens of thousands of expert elicitations (6,21) and dozens of risk assessments in areas as diverse as nuclear reactor safety, global climate change, and microbial contamination in the food chain (4,5). To date, the author is unaware of a formal SEJ for GMP manufacturing risks, for the reasons discussed below. Nevertheless, similar but more direct elicitations of value and uncertainty judgments, along the lines of the Sheffield Elicitation Framework (SHELF) (22,23), have been used by the author and colleagues.

Figure 4. Elicitation of probabilities by two methods. One approach is to have the expert assign value(s) of the parameter to percentile bins of an assumed distribution of the parameter. The alternative method is to assign percentiles to binned values of the parameter. (Figure courtesy of the author)

When is expert elicitation needed?

Expert elicitation, as defined by the author, is always necessary for QRM. Even the design of apparently straightforward risk tools, such as risk matrices and FMEAs, begins by convening domain and risk assessment/management experts, possibly with project managers, process engineers, scientists, and managers. Expert elicitation of the key elements of simple models as the basis of, for example, new FMEAs, implementation plans, and schedules for periodic reviews can benefit greatly from the structured thinking that is characteristic of MCDM (11,24), such as value-focused thinking (12–14,25). In this author’s experience, QRM teams who have taken the time and care for structured thinking with qualitative expert elicitation produce more sustainable risk tools than teams that were more casual about the elicitation process.

Figure 5. General approaches to aggregating expert judgements. The two major processes for aggregating experts' judgments are mathematical and behavioral. The former uses statistical, Bayesian, or AI methods to aggregate independently elicited judgments from the experts. Behavioral methods are “socialized” processes where adjustments to individuals’ judgments can occur in real time. Adapted from the author’s work (15). (Figure courtesy of the author)

Significant RBDMs. Most of the significant decisions risk managers make involve a complex landscape of known and unknown risks, competing risk management resource needs, and large gaps in the information and data for risk assessments. There is, therefore, sometimes a compelling argument for more rigorous QRM methods that might benefit from expert elicitations as defined in the SHELF and Cooke classical-method procedures. Fortunately, the need for a completely rigorous SEJ continues to be somewhat rare. Pharmaceutical industry risk events or public health crises that could have benefited from more formal risk modeling include the international heparin contamination event (26), glass particulates in injectable drugs, and the ongoing manufacturing and consumer supply chain issues.

Principles for expert elicitation add value. Training of QRM teams and risk managers, discussed in the 2015 review of QRM (1), was mentioned at the outset of this paper as an important way to control subjectivity. In addition to the topics mentioned previously, the principles for subjectivity control delineated in the MCDM literature and textbooks can extend the training of risk team members beyond a basic “heuristics and biases” level. This information is also valuable for teaching best practices in structured thinking for intermediate QRM risk assessments and risk control evaluations. Moreover, having teams ready to solve complex QRM problems using the structured thinking behind expert elicitation contributes significantly to the overall risk maturity of an organization.

Cooke’s principles for structured expert judgments were developed around his quantitative methodology; however, the principles are broadly applicable to group RBDM. These principles for the SEJ process discussed in Cooke (4) are:

  1. Scrutability/Accountability, “all data, including experts’ names and assessments, and all processing tools should be open to peer review and results must be reproducible by competent reviewers.”
  2. Empirical Control, “quantitative expert assessments should be subject to empirical quality controls.”
  3. Neutrality, “the method for combining and evaluating expert opinion should encourage experts to state their true opinions and must not bias results.”
  4. Fairness, “experts should not be prejudged, prior to processing the results of their assessments.”

Clearly, Cooke’s principles extend easily to good science practice and might be applied throughout QRM in RBDM activities that call for eliciting expert knowledge as input.

Limitations to expert elicitation in practice

Important limitations to using formal expert elicitation in practice include resources, adequate expertise among staff, lack of supporting information, rejection of the methodology by experts, and misuse or misunderstanding of the utility of formal expert elicitation. The state-of-the-art implementations of expert elicitation across disciplines are those engaging many experts, facilitators, analysts, and project managers. Given the rarity of major QRM risk problems, it is unlikely that companies and governments will routinely assemble or maintain these resource-intensive teams and processes. In fact, regulatory oversight by legislative bodies is often the impetus for agencies to engage in resource-intensive risk assessments for risk policy decision-making.

Anecdotal reports from QRM practitioners often mention the lack of adequate expertise in QRM with respect to the more arcane RBDM tools and frameworks. Having adequate on-site expertise in expert elicitation for RBDM is likely a widespread limitation. Perhaps the publicly accessible frameworks and the primary literature cited here and elsewhere will help motivate QRM practitioners to expand their knowledge base in the area. Many of the popular QRM tools (e.g., risk matrices, FMEAs, corrective and preventive actions [CAPAs]) benefit greatly from having identified and well-controlled sources of subjectivity, based on training and principles for expert elicitation.

The processes outlined in expert elicitation and MCDM are subject to misuse and misunderstanding. For example, it is easy to be dazzled by the analytical and computational power now supporting the practice and then to apply sophisticated analyses to every risk question in QRM. This approach is fraught, both for its overuse of limited resources and for its potential for risk miscommunication. Sophisticated quantitative results from SEJs, Monte Carlo analyses, or even overly complex risk dashboards can unintentionally communicate more knowledge about a risk problem than can be supported by the data, information, and expert probability judgments. Simply stated, such endeavors violate the ICH Q9 and Q9(R1) principle that “[t]he level of effort, formality and documentation of the quality risk management process should be commensurate with the level of risk” (2).

Finally, risk and decision-making facilitators often recall working with highly credentialed and well-trained experts who adamantly reject the notion of eliciting probabilities and, particularly, their personal subjective probabilities. To these experts, the structure of the elicitation process is not necessarily the problem. Rather, they feel that the use of subjectivism runs counter to basic science training emphasizing objectivity. Moreover, experts can feel as though their personal expertise and judgment are in question. With the most resistant experts, overcoming these objections requires discussion and possibly even managerial intervention stressing the importance of the overall risk management goal.

Resources for expert elicitation

One of the enduring challenges in QRM is the paucity of QRM-specific procedures or regulatory guidance about specific risk tools. The problem is greatest for the most complex risk problems requiring a formal risk analysis. Procedures are particularly lacking for quality risks to the patient with greater potential severity of harm, as these problems require more formal risk management. For example, writing a single “QRM cookbook” for a risk issue such as the international heparin crisis of 2008 (26), which involved many dozens of experts, would be a major task. Nevertheless, elements of qualitative expert elicitation were evident as that risk assessment and its risk controls moved forward.

QRM facilitators and other interested parties will find abundant supporting literature for both the (qualitative) structural knowledge elicitation and the quantitative SEJ. In the first case, helpful works in the field of MCDM include those by Keeney (12,14,25), Belton, and others (11,24,27–29), in addition to the engineering approaches of Haimes (17,30), Ayyub (31), and Saaty (32). Quantitative SEJ was discussed above through works attributable to Cooke (4,33,34), with similar frameworks available from Hemming et al. (35), Gosling (22), van der Sluijs (36), and Burgman (37). Importantly, some of these authors make software available for the quantitative aggregation of experts’ subjective probabilities. Finally, there are a few sources for expert elicitation, particularly its more qualitative aspects, from regulatory authorities (38–41).

In the author’s experience, the choice of a framework for expert elicitation and MCDM depends on the composition of the decision-making or risk management group. Some groups gravitate to more prescriptive, stepwise RBDM methods while others are comfortable in an open brainstorming format. As the rich literature on the subject shows, many opportunities exist for building hybrid frameworks. Thus, it is likely that the relatively new practice of QRM will continue to see one-off frameworks for RBDM until best practices evolve and are committed to guidelines. This predicament is not a negative; rather, it both enables creativity in solving problems and follows the ICH Q9 principle of using “the right tool for the job!”

Final thoughts

The frameworks for expert elicitation of knowledge and subjective probabilities contribute significantly to controlling subjectivity in QRM. Whether following Cooke’s classical method for SEJ or using MCDM theory to guide the elicitation of knowledge about the hierarchical structure of a risk problem, QRM practitioners will find the support of a rich knowledge base in the peer-reviewed literature. The literature from either of these practice disciplines provides thorough guidance on controlling intrusions of subjectivity in RBDM.

There is no simple “binary decision” (e.g., ICH Q9(R1) § 5.1) for knowing when either qualitative expert elicitation of knowledge or quantitative structured expert elicitation is necessary. For example, there might be significant uncertainties in judging the base frequencies of identified risks that are of low impact. The significant uncertainties beg for structured judgments; however, the likely low impacts might argue against the resource-intensive structured expert judgement. In this case, extra precision is not warranted.

References

  1. Claycamp, G. Expert Judgments in Quality Risk Management: Where Quality Risk Management Can “Go Wrong.” In Quality Risk Management in the GMP Environment–Ten Years Since the Finalisation of ICH Q9, Anniversary Special Edition; Waldron, K.; O’Donnell, K., Eds.; 2015; pp. 51–55.
  2. FDA. Q9(R1) Quality Risk Management. Guidance for Industry. (ICH, 2023) (accessed May 31, 2023).
  3. French, S.; Hanea, A. M.; Bedford, T.; Nane, G. F. Introduction and Overview of Structured Expert Judgement. In Expert Judgement in Risk and Decision Analysis; Hanea, A. M.; Nane, G. F.; Bedford, T.; French, S., Eds.; International Series in Operations Research & Management Science, Vol. 293; Springer: Cham, 2021; pp. 1–16.
  4. Cooke, R. M. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press, 1991.
  5. Kurowicka, D; Cooke, R. Uncertainty Analysis with High Dimensional Dependence Modelling. John Wiley & Sons, Ltd, 2006; pp.1–284.
  6. Cooke, R. M.; Goossens, L. H. J. TU Delft Expert Judgment Data Base. Reliability Eng Syst Safety. 2008, 93, 657–674.
  7. Hanea, A. M.; Hemming, V.; Nane, G. F. Uncertainty Quantification with Experts: Present Status and Research Needs. Risk Analysis. 2022, 42 (2), 254–263.
  8. Franco, L. A.; Montibeller, G. Problem Structuring for Multicriteria Decision Analysis Interventions. In: Wiley Encyclopedia of Operations Research and Management Science; Cochran, J.J., Ed.; John Wiley & Sons, Inc., 2010; pp. 1–14.
  9. Nutt, D.J.; King, L.A.; Phillips, L.D. Drug Harms in the UK: A Multicriteria Decision Analysis. Lancet. 2010, 376, 1558–1565.
  10. Sussex, J.; Rollet, P.; Garau, M.; Schmitt, C.; Kent, A.; Hutchings, A. A Pilot Study of Multicriteria Decision Analysis for Valuing Orphan Medicines. Value in Health 2013, 16, 1163–1169.
  11. Belton, V.; Stewart, T. J. Multiple Criteria Decision Analysis: An Integrated Approach; Kluwer Academic Publishers: Boston, 2002; pp. 1–372.
  12. Keeney, R. L. Value-Focused Thinking. Cambridge University Press, 1992.
  13. Keeney, R. L. Making Better Decision Makers. Decision Analysis 2004, 1 (4), 193–204.
  14. Keeney, R. L. Value-focused Brainstorming. Decision Analysis 2012, 9 (4), 303–313.
  15. Claycamp, G. Controlling Subjectivity in Risk Decision-making: The Benefits of Multi-criteria Decision Analysis. In An Audience with International Regulators in the Manufacture of Medicines: Quality Risk Management (QRM) and Knowledge Management (KM); Greene, A.; O’Donnell, K.; Calnan, N., Eds.; DIT Academic Press, 2018; pp. 15–30.
  16. Bond, S. D.; Carlson, K. A.; Keeney, R. L. Generating Objectives: Can Decision Makers Articulate What They Want? Manage Sci. 2008, 54 (1).
  17. Haimes, Y. Y.; Kaplan, S.; Lambert, J. H. Risk Filtering, Ranking, and Management Framework using Hierarchical Holographic Modeling. Risk Analysis 2002, 22 (2), 383–397.
  18. Tran, N. L.; Hasselbalch, B.; Morgan, K.; Claycamp, G. Elicitation of Expert Knowledge about Risks Associated with Pharmaceutical Manufacturing Processes. Pharm Eng. 2005, 25 (4), 24–38.
  19. FDA Office of Pharmaceutical Quality. MAPP 5041.1 Understanding CDER’s Risk-Based Site Selection Model (Silver Spring, Md., September 2018).
  20. Colson, A. R.; Cooke, R. M. Expert Elicitation: Using the Classical Model to Validate Experts’ Judgments. Rev Environ Econ Policy. 2018, 12 (1), 113–132.
  21. Hanea, A. M.; Nane, G. F. An In-Depth Perspective on the Classical Model. In International Series in Operations Research and Management Science; 2021; pp. 225–256.
  22. Gosling, J. P. SHELF: The Sheffield Elicitation Framework. In Elicitation, The Science and Art of Structuring Judgement; Dias, L. C.; Morton, A.; Quigley, J., Eds.; Springer, 2018; pp. 61–93.
  23. Oakley, J. E.; O’Hagan, A. “SHELF: The Sheffield Elicitation Framework” (version 3.0), School of Mathematics and Statistics, University of Sheffield. 2016 (accessed Jan. 5, 2023).
  24. Marttunen, M.; Lienert, J.; Belton, V. Structuring Problems for Multi-Criteria Decision Analysis in Practice: A Literature Review of Method Combinations. Eur J Oper Res. 2017, 263 (1).
  25. Keeney, R. L. Foundations for Group Decision Analysis. Decision Analysis 2013, 10, 103–120.
  26. GAO. Response to Heparin Contamination Helped Protect Public Health; Controls That Were Needed for Working with External Entities Were Recently Added. Washington, DC; October 2010 (accessed June 9, 2023).
  27. von Winterfeldt, D.; Edwards, W. Decision Analysis and Behavioral Research. Cambridge University Press, 1986.
  28. Otway, H.; von Winterfeldt, D. Expert Judgment in Risk Analysis and Management: Process, Context, and Pitfalls. Risk Anal. 1992, 12 (1), 83–93.
  29. von Winterfeldt, D.; Fasolo, B. Structuring Decision Problems: A Case Study and Reflections for Practitioners. Eur J Oper Res. 2009, 199, 857–866.
  30. Haimes, Y. Y.; Kaplan, S.; Lambert, J. H. Risk Filtering, Ranking, and Management Framework Using Hierarchical Holographic Modeling. Risk Analysis 2002, 22 (2), 383–397.
  31. Ayyub, B. M. Elicitation of Expert Opinions for Uncertainty and Risks. CRC Press, 2002.
  32. Saaty, T. L.; Peniwati, K. Group Decision Making: Drawing Out and Reconciling Differences; RWS Publications, 2013.
  33. Cooke, R. M. The Aggregation of Expert Judgment: Do Good Things Come to Those Who Weight? Risk Analysis. 2015, 35 (1), 12–15. DOI: 10.1111/risa.12353
  34. van der Fels-Klerx, H. J.; Cooke, R. M.; Nauta, M. J.; Goossens, L. H.; Havelaar, A. H. A Structured Expert Judgment Study for a Model of Campylobacter Transmission During Broiler-Chicken Processing. Risk Anal. 2005, 25 (1), 109–124.
  35. Hemming, V.; Burgman, M. A.; Hanea, A. M.; McBride, M. F. A Practical Guide to Structured Expert Elicitation Using the IDEA Protocol. Methods Ecol Evol. 2018, 9, 169–180.
  36. van der Sluijs, J. P.; Craye, M.; Funtowicz, S.; et al. Combining Quantitative and Qualitative Measures of Uncertainty in Model-Based Environmental Assessment: The NUSAP System. Risk Analysis 2005, 25 (2), 481–492. DOI: 10.1111/j.1539-6924.2005.00604.x
  37. Burgman, M. A. Trusting Judgements: How to Get the Best Out of Experts. Cambridge University Press, 2016.
  38. EFSA. Guidance on Expert Knowledge Elicitation in Food and Feed Safety Risk Assessment. EFSA Journal 2014, 12 (6).
  39. MacGillivray, B. H. Characterising Bias in Regulatory Risk and Decision Analysis: An Analysis of Heuristics Applied in Health Technology Appraisal, Chemicals Regulation, and Climate Change Governance. Environ Int. 2017, 105, 20–33.
  40. Expert Group on Quantifying Uncertainties in Practice. Quantifying Uncertainties in Practice. In Good Practice Guidance and Uncertainty Management in National Greenhouse Gas Inventories; Odingo, R., Ed.; Intergovernmental Panel on Climate Change, 2001; pp. 6.1–6.34.
  41. Department for Communities and Local Government (UK). Multi-Criteria Analysis: A Manual; 2009.

About the author

H. Gregg Claycamp, MS, PhD, is a Risk and Decision Analysis Consultant, hgclaycamp@gmail.com.

Article details

Pharmaceutical Technology
Vol. 47, No. 11
November 2023
Pages: 30-46

Citation

When referring to this article, please cite it as Claycamp, H. G. Elicitation of Expert Knowledge and Probabilities for Controlling Subjectivity in Risk-Based Decision Making. Pharmaceutical Technology 2023 47 (11).
