PREreview of "Prior knowledge elicitation: The past, present, and future"

DOI: 10.5281/zenodo.5770931
License: CC BY 4.0

# General Comments

Overall, a great and needed review. Much has been written in the past, but perhaps not for this audience. The recent book @hanea2021ExpertJudgementRisk is one of the best reviews of the existing literature on the subject.

It seems like there are two papers combined in one:

- a systematic review with a hypercube and

- a proposal for a model-free elicitation framework based on @hartmann2020FlexiblePriorElicitation and other examples.

Chapter 4 provides a very good overview of your perspective on EKE and should be up front. There's no shame in stating that you intend to look at prior elicitation from the point of view of the Bayesian workflow, contextualizing expert elicitation in the overall landscape of Bayesian inference.

Predictive elicitation is a recurring theme in many of the sections, but it is not clearly separated from parametric elicitation. A lot of elicitation literature has been written with a view to eliciting the "next observation", i.e. without a statistical model in mind and with no aim to arrive at a distribution of parameters. This makes a significant difference for interpreting and applying the lessons from that literature to the elicitation of priors. It would be good to make a clear distinction between *elicitation of parameters* using questions on the observable scale [@hartmann2020FlexiblePriorElicitation; @perepolkin2021HybridElicitationIndirect] and *predictive elicitation*, prevalent, for example, in the decision analysis literature. Predictive elicitation may be necessary as a complement to parametric elicitation (for validating the prior by comparing the implied prior predictive to the respective elicited distribution), but it is fundamentally a different animal, because there's no statistical model involved [@akbarov2009ProbabilityElicitationPredictive].

# Specific comments by chapter

## Chapter 1. Introduction

The Introduction is too long (too detailed) and repeats too much of what will be said again in the following chapters without providing a higher level of abstraction or summary. Consider trimming it down and making it more high-level. Perhaps merging it with Chapter 4 could be a good idea.

## Chapter 2. Prior elicitation

The opening paragraph of Section 2.1 would benefit from introducing the term *statistical model* as the context for the prior [@gelman2017PriorCanOften]. It is addressed quite well in Section 4.2, so if Chapter 4 is moved up front, it would provide the necessary vocabulary.

Section 2.1, para 2. Here it might be relevant to mention posterior passing as an example of a hyper-informed prior [@brand2017CumulativeScienceBayesian; @beppu2009IteratedLearningCultural], where the posterior distribution from one study is taken directly as the prior distribution for the next study (i.e. the problem of sampling from posterior samples).
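
As a minimal sketch of what I mean by posterior passing (Python; the study, the parameter, and all numbers are hypothetical), one could summarize posterior draws from one study with a parametric density and reuse it as the prior of the next:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical posterior draws for a treatment effect from study 1
# (in practice these would come from an MCMC fit).
posterior_draws_study1 = rng.normal(loc=0.8, scale=0.3, size=4000)

# "Posterior passing": summarize the draws with a parametric density
# and reuse it as the prior for study 2.
mu_prior, sd_prior = stats.norm.fit(posterior_draws_study1)
prior_study2 = stats.norm(mu_prior, sd_prior)

print(f"Prior for study 2: Normal({mu_prior:.2f}, {sd_prior:.2f})")
```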

Section 2.1, para 3. The distinction between eliciting and specifying a prior is unclear. If the prior is to measure the degree of knowledge (or ignorance), then specification of an uninformative prior is also an act of elicitation. Otherwise, you need to draw a line between informative and uninformative priors (every prior is in some sense informative, even the uniform prior).

Section 2.1 last paragraph. The role of the facilitator (*analyst*) is to enforce a structured process to minimize biases [@quigley2021CharacteristicsProcessSubjective], provide necessary review and critical appraisal, document the process (for reproducibility), motivate the elicitation, set the frame and guide the process through carefully selected unambiguous questions [@spetzler1975ProbabilityEncodingDecision; @morgan2014UseAbuseExpert]. Predictive elicitation may be model-agnostic, as you talk about in Section 3.2 [@akbarov2009ProbabilityElicitationPredictive], but parametric elicitation is always done in the context of a (statistical) model.

Alignment around the model is worthy of its own section. First of all, as argued by Richard McElreath (see Figure 1.2 in @mcelreath2020StatisticalRethinkingBayesian) Bayesian inference tasks always have some hypotheses linked to (one or multiple) process/causal models mapped to (one or multiple) statistical models.

$$\fbox{Hypothesis} \longrightarrow \fbox{Process model / Causal model} \longrightarrow \fbox{Statistical model}$$

In prior elicitation, experts come with their own process/causal models which, hopefully, coincide with the causal model of the facilitator. Alignment around this model is highly desirable; otherwise, interpretation of observations will differ between the analyst and the expert. When elicitation is performed in the parameter space, the task is even more complex, because the analyst and the expert must also agree on the statistical model, which maps the causal process to the data-generative mechanism. Given that the prior can only be understood in the context of the likelihood, the form of the likelihood is often dictated by the data available at hand.

Another question to discuss: who "owns" the model? In most cases it will be the analyst who is ultimately responsible for the choice of the statistical model and for ensuring that it maps well to the process model in the head of the expert. This can be further complicated when the analyst (model builder) and the facilitator (interviewer) are not the same person. In parametric elicitation (elicitation in parameter space), the responsibility of the facilitator is also to introduce and communicate the model, and to make sure that the expert understands the model exactly the same way as the modeler (analyst) does, in order to provide judgement on the prior. The criterion for validating that the prior is elicited correctly is that the quantiles of the predictive elicitation match the respective quantiles of the prior predictive from the data-generative model.
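
A minimal sketch of that validation criterion, assuming a toy Normal data-generating model and invented elicited numbers, could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Elicited parametric prior for the mean of a Normal(mu, sigma=1) model (hypothetical numbers)
mu_draws = rng.normal(loc=10.0, scale=2.0, size=100_000)

# Prior predictive: push prior draws through the data-generating model
y_pred = rng.normal(loc=mu_draws, scale=1.0)

# Directly elicited predictive quantiles for the next observation (hypothetical)
elicited = {0.25: 8.4, 0.5: 10.0, 0.75: 11.6}

for p, q_elicited in elicited.items():
    q_implied = np.quantile(y_pred, p)
    print(f"p={p:.2f}: implied {q_implied:6.2f} vs elicited {q_elicited:6.2f}")
```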

Section 2.2 para 2 (also in the Introduction, page 3 para 1) lists obstacles to wider adoption of prior elicitation. I believe there are also the following reasons:

- **Philosophical:** Scientists are driven by the desire to stay "objective" (ref the value-free ideal) and let the data speak for itself; therefore, non-informative priors are widely popular. Also, researchers cannot (or don't want to) rationalize subjective priors, so it is easier for them to adopt non-informative or "recommended" weakly-informative priors.

- **Educational:** Lack of understanding of the role of non-informative priors and/or the principles of structured expert knowledge elicitation. This might be related to the category of *practical* in your classification, but I think that misunderstandings (especially related to the role of priors) are worth highlighting in their own category. The particular aspects of elicitation related to "elicitation of the expert's causal model" and alignment around the statistical model (which are critical for parametric elicitation) should also be mentioned separately.

I disagree with the name of the category *Societal*. It must have something to do with pragmatism, not with societal norms. From the perspective of perfectly functioning markets, the low adoption of prior elicitation might signal that the "return on investment" for the effort it requires is low. This return should be looked at in the VOI (value of information) framework. The baseline for comparison should be the "non-informative" prior. Indeed, the analyst has two plausible alternatives: collect more data or refine the prior. Collecting more data can be evaluated using the Value of Sample Information (VoSI) [@schlaifer1961AppliedStatisticalDecision; @runge2011WhichUncertaintyUsing] and refining a prior using the Value of Refined Prior Information (VoRPI? the term is mine, but feel free to propose yours). In any case, the value of information (of any kind) can be evaluated only in the context of the (decision-maker-specific) value function and in the context of a particular decision (i.e. given the set of alternatives). Since every decision is made in highly specialized circumstances, it would be nearly impossible to generalize it to a statement like "prior elicitation is always a good thing to do".
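
To illustrate the VOI perspective, here is a toy sketch of what I mean by VoRPI: the decision, payoffs, and priors are entirely hypothetical, and a proper analysis would be decision-maker specific.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Toy decision: "act" pays 100*theta - 40, "wait" pays 0, where theta is a
# success probability. Priors and payoffs are entirely hypothetical.
def expected_payoff(action, theta_draws):
    return np.mean(100 * theta_draws - 40) if action == "act" else 0.0

vague_prior = stats.beta(1, 1)        # baseline "non-informative" prior
refined_prior = stats.beta(12, 28)    # elicited, refined prior (reference belief)

theta_vague = vague_prior.rvs(200_000, random_state=rng)
theta_refined = refined_prior.rvs(200_000, random_state=rng)

# Actions chosen under each prior; consequences evaluated under the refined belief.
act_vague = max(("act", "wait"), key=lambda a: expected_payoff(a, theta_vague))
act_refined = max(("act", "wait"), key=lambda a: expected_payoff(a, theta_refined))
vorpi = expected_payoff(act_refined, theta_refined) - expected_payoff(act_vague, theta_refined)

print(f"Chosen action: vague prior -> {act_vague}, refined prior -> {act_refined}")
print(f"Toy 'VoRPI' for this decision: {vorpi:.1f}")
```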

The elephant in the room is of course **big data**. In the presence of abundant data, any prior will be overwhelmed by the likelihood, so it does not matter whether the prior is elicited or specified. In fact, a strong, opinionated prior may hinder learning, so analysts opt to be on the cautious side and adopt non-informative priors. Even in the "small data" scenario, the easier option is usually to "get more data", given that data storage and acquisition are cheap.
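
A quick conjugate Normal-Normal sketch (illustrative numbers only) shows how quickly any prior, elicited or not, is overwhelmed once the data is abundant:

```python
import numpy as np

# Conjugate Normal-Normal update (known data sd = 1): illustrates how
# abundant data overwhelms any prior. Numbers are illustrative.
def posterior(prior_mean, prior_sd, ybar, n, data_sd=1.0):
    prec = 1 / prior_sd**2 + n / data_sd**2
    mean = (prior_mean / prior_sd**2 + n * ybar / data_sd**2) / prec
    return mean, np.sqrt(1 / prec)

ybar = 3.0
for n in (10, 100_000):
    for prior in ((0.0, 10.0), (5.0, 0.5)):   # vague vs opinionated prior
        m, s = posterior(*prior, ybar, n)
        print(f"n={n:>6}, prior N{prior}: posterior mean {m:.3f}, sd {s:.4f}")
```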

Another concern is that whenever the data is small, the analyst might feel that specifying an informed prior will degrade the scientific value of his research, because the prior will not be overwhelmed by the likelihood and his incremental contribution to the posterior will be negligible. It is much easier to leave the priors wide and then claim that some learning happened due to the data the analyst collected and the model he proposed. When one builds "on the shoulders of giants", the contribution of the builder may not be visible.

Section 2.2 paras 2 and 3 (after the three bullets) on page 6 (both starting with "By") repeat the Introduction without much additional substance. The penultimate sentence in para 3 of page 6, starting with "To put it briefly", could be moved to the technical reasons in the paragraph above. The second paragraph on page 7 starting with "Following the above" (still in Section 2.2) may refer to "pragmatic" reasons and should discuss the VOI perspective on evaluating prior elicitation.

I think that the last paragraph of Section 2.2 weakens the argument a little bit by calling the logic "circular". The value of prior elicitation has not been studied because adoption of Bayesian methods has been low and because data is abundant. If and when the need appears, elicitation will be studied and new tools will be developed. This is likely to happen in the wake of the causal revolution [@pearl2018BookWhyNew; @mcelreath2020StatisticalRethinkingBayesian]. Lastly, I want to mention that the state of research in predictive elicitation [@morgan2014UseAbuseExpert; @quigley2021CharacteristicsProcessSubjective; @spetzler1975ProbabilityEncodingDecision], where no attempt is made to deduce the prior (or even to explicitly describe the model), is very telling. There are no agreed approaches to how the simplest of elicitations should be done, so it is no wonder that research on the more complex (prior) elicitation is lagging behind. We are generally pretty bad at expressing our knowledge probabilistically, let alone documenting it and comparing it over time.

Section 2.3 Hypercube.

A good set of dimensions, although I am not sure about its completeness (whether it captures all dimensions) or even comprehensiveness (whether it covers the most relevant dimensions). For example, you cite @bernardo1994BayesianTheory discussing the "operational interpretation of the model parameters". I think interpretability of parameters is a very relevant (potential) dimension. Perhaps it is included in D2 The model family. Not all distributions are equally suitable as priors. Some of them, although flexible, may be unelicitable due to the completely uninterpretable nature of their parameters (unless one chooses the supra-Bayesian approach, which is worth discussing as an advantage). In a recent [talk at StanCon 2020](https://www.youtube.com/watch?v=_wfZSvasLFk), Ben Goodrich argued that most parametric distributions are unintuitive and therefore not very useful for reasoning about priors. He pitched quantile-parameterized distributions (QPDs) as a viable alternative for defining priors. We mention them briefly in @perepolkin2021TenetsIndirectInference, since most of them are quantile distributions. The prior distribution is then defined indirectly through the quantile function transformation within the likelihood (see Section 3.2 in @perepolkin2021TenetsIndirectInference).
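
As a minimal sketch of a quantile-parameterized prior: a three-term metalog fitted to three hypothetical quantile-probability pairs (see @keelin2016MetalogDistributions for the basis functions).

```python
import numpy as np

# Three elicited quantile-probability pairs (hypothetical): p -> Q(p)
p = np.array([0.10, 0.50, 0.90])
q = np.array([12.0, 20.0, 35.0])

# Three-term metalog basis (Keelin 2016): Q(p) = a1 + a2*logit(p) + a3*(p - 0.5)*logit(p)
logit = np.log(p / (1 - p))
X = np.column_stack([np.ones_like(p), logit, (p - 0.5) * logit])
a = np.linalg.solve(X, q)            # exact fit: three coefficients, three pairs

def Q(pp):
    """Metalog quantile function implied by the elicited pairs."""
    l = np.log(pp / (1 - pp))
    return a[0] + a[1] * l + a[2] * (pp - 0.5) * l

print(np.round(Q(p), 2))             # reproduces the elicited quantiles exactly
print(round(float(Q(0.99)), 2))      # extrapolated 99th percentile of the prior
```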

The Elicitation Space dimension (D3) is relevant, but I wonder if it is confounded by some other unidentified category. It might also be confused with parametric vs predictive elicitation (which is definitely not what you mean). I would argue that elicitation of the prior predictive distribution should always be done *in addition to* the parametric elicitation, which, in the case of a quantile-parameterized likelihood [@perepolkin2021HybridElicitationIndirect] or the supra-Bayesian method [@hartmann2020FlexiblePriorElicitation], is also performed in the observable space. Therefore, some form of elicitation of observables is inevitable. The real distinction is whether it is possible to *parameterize* a Bayesian model without any elicitation in the unobserved (latent) space. One can argue that the properties of the hypothetical sample elicited in @perepolkin2021HybridElicitationIndirect are also not observed, but the Dirichlet (or Connor-Mosimann) distribution parameter vector(s) are never elicited directly from the user. The user is never forced to reason about the values of abstract unobserved parameters. I think *predictive elicitation* needs to be clearly carved out of the scope of the paper, because predictive elicitation does not lead to a distribution of parameters. It can accompany parametric elicitation, but it does not replace it. No amount of predictive elicitation is able to define parameters. These distinctions should be made clearer in the discussion of the D3 dimension of your hypercube.

The dimension D4 Elicitation model (para 3 on page 9) discusses *overfitting* as a situation where the analyst elicits "more summaries than needed" for fitting the parametric distribution. I think the term is used incorrectly: if the fitted function is not going to go through every elicited quantile-probability pair (QPP), then we are witnessing *underfitting*. I understand that this is a cited reference, but there's no reason to perpetuate the confusion. *Overfitting* cannot be desirable, because imprecision (or internal contradiction) in the expert judgement is merely evidence of cognitive noise [@kahneman2021NoiseFlawHuman] and a lack of rational judgement (where *rational* is defined as judgement "immune to introspection" [@al-najjar2009AmbiguityAversionLiterature]). The role of the facilitator (analyst) is to surface the inconsistencies in judgement and help the expert correct them. Fitting the prior distribution to the cognitive noise does no one a favor. One may argue that the advantage of supra-Bayesian pooling is that it is self-correcting through MCMC and Bayesian updating, but, as will be clear below, I question the impact of the "elicitation prior": how it is formed and what it means for the elicitation posterior (which becomes the elicited prior in the model).
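
A small sketch of why I would call this situation *underfitting*: a two-parameter Normal prior fitted by least squares to five (hypothetical, slightly inconsistent) quantile-probability pairs cannot honor all of them.

```python
import numpy as np
from scipy import stats, optimize

# Five elicited quantile-probability pairs (hypothetical and, as real expert
# judgements tend to be, slightly inconsistent with any 2-parameter family).
p = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
q = np.array([2.0, 6.0, 9.0, 11.0, 19.0])

# Least-squares fit of a Normal prior (2 parameters) to 5 summaries: the fitted
# quantile function cannot pass through every point, i.e. it underfits them.
def loss(params):
    mu, log_sd = params
    return np.sum((stats.norm.ppf(p, loc=mu, scale=np.exp(log_sd)) - q) ** 2)

res = optimize.minimize(loss, x0=np.array([9.0, 1.0]))
mu, sd = res.x[0], np.exp(res.x[1])
print("fitted:", np.round(stats.norm.ppf(p, mu, sd), 2), " elicited:", q)
```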

Regarding fitting vs the supra-Bayesian method. Although some fitting is performed in @perepolkin2021HybridElicitationIndirect (as part of the @elfadaly2013ElicitingDirichletConnor method of fitting marginal conditional betas to the elicited quartiles), the simplices generated by the QDirichlet prior are honored exactly, as they parameterize the metalog likelihood. It is definitely not supra-Bayesian, because expert judgement is not taken as input to be updated and there's no "elicitation prior" involved; furthermore, the metalog likelihood is the only likelihood in the model. Perhaps the correct classification would be "fitting (D4), observation space (D3)". I understand that "elicitation likelihood" in the last sentence of para 2 on page 9 references the supra-Bayesian elicitation model.

When you explain the supra-Bayesian approach, it would be good to coin a good name for the "elicitation prior" (which is the analyst's prior belief about the hyperparameters of the expert's "model prior"). Perhaps a Bayesian model diagram would be useful. It might be confusing to some that there exists this meta-prior, which is introduced by the analyst and serves as a tool to *infer* the parametric prior of the expert. It would be good to have some standard terminology around these definitions. Your paper could take the initiative and introduce a vocabulary as a text-box insert into the paper. Examples, as always, are welcome.
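
To illustrate what such a vocabulary would need to name, here is a deliberately minimal sketch of the hierarchy (not the method of the reviewed paper): the analyst's "elicitation prior" over a hyperparameter of the expert's "model prior" is updated using the expert's stated quantiles as data, and the resulting "elicitation posterior" becomes the elicited model prior. All numbers are invented.

```python
import numpy as np
from scipy import stats

# The expert's "model prior" is assumed Normal(mu, 2) with unknown location mu.
# The analyst's "elicitation prior" is a belief about mu itself.
elicitation_prior = stats.norm(0.0, 10.0)
model_prior_sd = 2.0

# Expert's stated quantiles of the model prior, treated as noisy observations
# of the quantiles implied by mu (elicitation noise sd = 0.5).
p_levels = np.array([0.25, 0.50, 0.75])
stated_q = np.array([3.1, 4.0, 5.4])
noise_sd = 0.5

# Grid "elicitation posterior" over mu: analyst's prior x likelihood of the statements.
mu_grid = np.linspace(-20, 20, 4001)
implied_q = mu_grid[:, None] + model_prior_sd * stats.norm.ppf(p_levels)[None, :]
log_post = (elicitation_prior.logpdf(mu_grid)
            + stats.norm.logpdf(stated_q[None, :], loc=implied_q, scale=noise_sd).sum(axis=1))
weights = np.exp(log_post - log_post.max())
mu_hat = np.sum(mu_grid * weights) / np.sum(weights)

print(f"Elicitation posterior mean of mu: {mu_hat:.2f} "
      f"-> elicited model prior ~ Normal({mu_hat:.2f}, {model_prior_sd})")
```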

In dimension D5 Computation (para 3 on page 10), you again mention *predictive* elicitation and, as discussed earlier, I think you should explicitly carve out predictive elicitation as *not* leading to elicited priors (as it is model-agnostic). Unless of course you mean validation of elicited priors through comparing to elicited quantiles from prior predictive distribution. In such a case, the reference to validation step should be made explicit.

Dimension D6 (para 4 on page 10) I would call "elicitation protocol". Although it is often confused with aggregation, there is value in studying the differences in recommended sequences of steps to elicit judgement from a single expert, regardless of whether it will later be aggregated with other experts or not. In other words, it might be useful to try and separate elicitation from aggregation, as these are often confounded and confused in the EKE literature [@quigley2021CharacteristicsProcessSubjective; @morgan2014UseAbuseExpert].

Regarding *one-shot* vs *iterative* vs *interactive* elicitation. Any robust (reproducible) elicitation framework must end with validation, so some interaction is inevitable. If *iterative* means "repeated" (i.e. same type of judgement is elicited every week/month), then care should be taken to model the change in expert judgement due to new knowledge and evolution of beliefs. Otherwise every elicitation ought to be to some extent *interactive*.

Dimension D7 Capability of the expert (para 2 on page 11). You highlight an important aspect: the expert's requisite statistical expertise for comprehending the model and therefore expressing judgement on parameters. This is an important argument for observable-space elicitation, as good comprehension of the model should not be assumed regardless of the education level of the expert.

## Chapter 3. Current main lines of research

In the first paragraph on page 12 there is a mention of $133 + 1000 + 182 = 1315$ results. Those numbers are never explained in the article. Perhaps it would be useful to know what they correspond to.

Section 3.1 Elicitation in parameter space

In the first paragraph you refer to eliciting information on the observable space $p(y)$ combined with a suitable algorithm to deduce $p(\theta)$. Perhaps it is worth contrasting this approach with predictive elicitation, which stops at eliciting $\int p(y|\theta)p(\theta)\,d\theta$, the *prior predictive distribution*, making no effort to deconstruct it into the prior and the likelihood.

In the Univariate prior subsection on page 13 para 3 you discuss probability encoding methods used by the decision analysis community. Those are explicitly predictive; they are never used to elicit parameters. Even though the principles look useful for parametric elicitation, one should not assume that they are the same. Emphasizing the predictive vs parametric nature of elicitation would do a great service to the reader in understanding the difference in traditions and interpretations of the phrase "expert knowledge".

The terminology used in this paragraph (*variable interval* and *fixed interval*) is unfortunate. It does very little to aid intuition about what the method represents, as it is unclear which interval the method name refers to. On the other hand, the @spetzler1975ProbabilityEncodingDecision terminology is quite transparent: the V-method refers to elicitation of *values* (corresponding to analyst-defined cumulative probabilities), while the P-method elicits *probabilities* (corresponding to analyst-proposed quantile values). The process you describe in the last paragraph on page 13 (by Oakley) seems equally applicable to any elicitation using the fitting (D4) approach. The order of elicited quantiles matters, and @morgan2014UseAbuseExpert, for example, would discourage starting with the median (to avoid anchoring). By the way, symmetric quantiles, like 0.25, 0.5, 0.75, are referred to as SPTs (symmetric percentile triplets) in the decision analysis literature [@hadlock2017QuantileparameterizedMethodsQuantifying; @keelin2016MetalogDistributions].

Subsection on Multivariate prior, penultimate line on page 14. When you use the term "subjective independence", would "conditional independence" be a better term?

Page 15, second paragraph introduces Gaussian copulas. I guess it would equally apply to other types of copulas. Given their wide recognition and popularity, vine copulas should be given more attention [@czado2019AnalyzingDependentData; @kurowicka2011DependenceModelingVine].
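
As a minimal sketch of building a dependent joint prior from elicited marginals and a Gaussian copula (the marginals and correlation below are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Joint prior for two dependent parameters built from a Gaussian copula:
# marginals and correlation are hypothetical elicited quantities.
rho = 0.6
corr = np.array([[1.0, rho], [rho, 1.0]])

z = rng.multivariate_normal(mean=[0, 0], cov=corr, size=10_000)
u = stats.norm.cdf(z)                                 # copula sample on the unit square

theta1 = stats.gamma(a=2.0, scale=1.5).ppf(u[:, 0])   # elicited marginal for theta1
theta2 = stats.beta(a=3.0, b=7.0).ppf(u[:, 1])        # elicited marginal for theta2

print("Rank correlation of the joint prior draws:",
      round(stats.spearmanr(theta1, theta2)[0], 2))
```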

Subsection on Beta and Dirichlet prior, page 16, second para, sentence starting with "So EFS can be classified as a hybrid prior elicitation method". It is unclear what "hybrid" refers to. What features are combined, and why is it referred to as "hybrid"?

Page 16, penultimate line. References to multivariate copulas and vines should be moved to the subsection on Multivariate priors (pages 14-15).

Subsection on scoring rules, page 17 para 3. Risk appetite is described well in @howard2014FoundationsDecisionAnalysis (including elicitation of risk appetite for monetary value function). Also, there's an interesting work on reducing bias in the elicited probabilistic estimates by @welsh2018MoreorlessElicitationMOLE.

Section 3.2 Elicitation in observable space, penultimate line on page 18 cites @akbarov2009ProbabilityElicitationPredictive. It is a very valuable work and you should elaborate a little more on the method he proposed and not only cite the theoretical section of his PhD thesis. His proposed method could be reviewed before introducing @hartmann2020FlexiblePriorElicitation work in last paragraph on page 20.

Page 19 para 3, sentence starting with "The goal of the elicitation is to construct priors for the model hyperparameters". Since this sentence appears in the paragraph which starts with a reference to predictive elicitation, perhaps it would be good to make it clear that here you are talking about *prior elicitation*, not *predictive elicitation* (also, hyperparameters or parameters?).

Page 20 para 2 mentioning the "Fredholm integral equation of the first kind" needs a citation.

When talking about @hartmann2020FlexiblePriorElicitation in the penultimate sentence on page 20 you say "expert's assessment task consists of assessing (prior predictive) probabilities...". Although this is not wrong, it might be confusing, because you are not talking about depths (CDF values) corresponding to prior predictive quantiles; you are instead referring to other probability measures (which correspond to forward differences in cumulative probabilities). It would be nice to avoid calling these prior *predictive probabilities*, for the sake of clarity.
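
A minimal numerical illustration of the distinction I mean (the partition and probabilities below are invented):

```python
import numpy as np

# Probabilities attached to intervals of the observable space (forward
# differences of the prior predictive CDF) vs. the CDF values themselves.
edges = np.array([0.0, 5.0, 10.0, 20.0, 50.0])       # partition of the observable space
interval_probs = np.array([0.10, 0.35, 0.40, 0.15])  # what the expert is asked to assess

cdf_at_edges = np.concatenate([[0.0], np.cumsum(interval_probs)])
for e, c in zip(edges, cdf_at_edges):
    print(f"F({e:>4.1f}) = {c:.2f}")
```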

Section 3.3 on Nonparametric priors, last paragraph on page 21. When you talk about a "satisfying prior distribution", it is unclear how this satisfaction is supposed to be measured.

Section 3.3 on Nonparametric priors, page 22, second paragraph. The discussion of Schlaifer's proposal to draw the prior density directly might benefit from a discussion of quantile-parameterized distributions [@keelin2011QuantileParameterizedDistributions; @hadlock2017QuantileparameterizedMethodsQuantifying], which, I believe, made his dream come true: the Myerson distribution [@myerson2005ProbabilityModelsEconomic], Johnson QPDs [@hadlock2017JohnsonQuantileParameterizedDistributions; @hadlock2019GeneralizedJohnsonQuantileParameterized], and the SPT-metalog [@keelin2016MetalogDistributions].

The last paragraph of Section 3.5 on Active elicitation, page 25 (para 2), admits that many methods mentioned earlier include elements of active elicitation, including that "the expert was shown data during the elicitation". This perhaps suggests that the dimension is somewhat artificial and that active elicitation is really a desirable feature of any elicitation protocol (discussed above).

Section 3.7 on Heuristics and Biases should probably mention @morgan2014UseAbuseExpert and @welsh2018MoreorlessElicitationMOLE.

## Chapter 4. Where should we be going

In para 2 of chapter 4 on page 26, the sentence starting "The literature is aware of many cognitive biases.." is a good place to cite @spetzler1975ProbabilityEncodingDecision, describing the structured approach to elicitation developed by the Stanford Research Institute [@quigley2021CharacteristicsProcessSubjective].

Section 4.1 Bayesian treatment, para 2. I am concerned that the analyst's "elicitation prior" might influence the expert's "model prior", especially if the number of elicited points is low (see my similar comments above).

In the list describing the two parties to the elicitation event, the *analyst* and the *expert*, I think you should explicitly mention that the analyst brings his "elicitation prior", which he updates by eliciting input from the expert. And for the expert, it should be said that his judgements regarding the prior are used as data to update the analyst's prior.

First paragraph on page 28. When you say that in 4.1 the $\mathcal{D}_y$ and $\mathcal{D}_\Theta$ are conditionally independent, I think you should elaborate on what it implies about the type of the distribution being elicited.

Page 29, bullet point on QPDs with indirect Bayesian inference. @perepolkin2021HybridElicitationIndirect also show elicitation of an informed QDirichlet prior for the metalog likelihood. The QDirichlet prior is assembled from elicited marginal (conditional) betas using the @elfadaly2013ElicitingDirichletConnor method. The "uniform Dirichlet" is complemented with a vector of quantile values, which can be elicited through predictive elicitation. Hence the term "hybrid elicitation": it combines the features of predictive elicitation (by operating solely in the observable space) with parametric elicitation (because the result is the prior for the parameters of an indirect QPD likelihood; see @perepolkin2021TenetsIndirectInference for details).

Section 4.4 page 33 first para. Sentence starting with "A promising empirical approach". It would be interesting to hear your opinion on what could constitute such a "ground truth" against which the elicited prior can be compared.

Section 4.4 page 33 para 3. This paragraph discusses the value of information gained by specifying an informed prior. It should be evaluated from the perspective of the decision(s) at hand and against the baseline of a non-informative prior and opportunities to collect more data (see my comments above).

Section 4.5 first paragraph, last sentence. I believe Kay's plots are called *quantile dotplots*.
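
For readers unfamiliar with them, a minimal sketch of a quantile dotplot (the distribution shown is purely illustrative):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Minimal quantile dotplot: represent a distribution by 20 equally likely
# quantiles and stack them as dots within bins (illustrative prior).
dist = stats.lognorm(s=0.5, scale=10)
quantiles = dist.ppf((np.arange(20) + 0.5) / 20)

bins = np.round(quantiles / 2) * 2            # bin dots to a 2-unit grid
x, counts = np.unique(bins, return_counts=True)
for xi, c in zip(x, counts):
    plt.scatter([xi] * c, np.arange(1, c + 1), s=200, color="steelblue")
plt.yticks([]); plt.xlabel("quantity of interest")
plt.title("Quantile dotplot (20 quantiles)")
plt.show()
```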

## Chapter 5 Conclusion

The conclusions are relatively well articulated, but perhaps a little loosely connected to the text of the article. Item 2 requires an extensive review of existing software solutions, including the home-made elicitation templates in Excel spreadsheets that practitioners use today to facilitate EKE sessions. Item 3: efficiency in elicitation was not discussed in detail in this paper, so more detail is needed to define what is supposed to be measured. Item 4: examples can be obtained by reworking the many predictive elicitation examples produced by classical elicitation methods [@hanea2021ExpertJudgementRiska].

# General small comments

Care should be taken with the citations. Many of the citations are supposed to be cited in text using the \LaTeX{} command `\citet`, but are instead cited with `\citep` and end up in parentheses. The instances are too numerous to list. I have a marked-up text with notes in the margins which I am happy to provide, if required. It would be helpful if line numbering were used; then I could just provide the line numbers here.

# References

Akbarov, A. 2009. “Probability Elicitation: Predictive Approach.” PhD thesis, University of Salford. http://usir.salford.ac.uk/id/eprint/26502/?template=banner.

Al-Najjar, Nabil I., and Jonathan Weinstein. 2009. “The Ambiguity Aversion Literature: A Critical Assessment.” Economics & Philosophy 25 (3): 249–84. https://doi.org/10.1017/S026626710999023X.

Beppu, Aaron, and Thomas Griffiths. 2009. “Iterated Learning and the Cultural Ratchet.” In Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 31.

Bernardo, Jos M., and Adrian F. M. Smith, eds. 1994. Bayesian Theory. Wiley Series in Probability and Statistics. Hoboken, NJ, USA: John Wiley & Sons, Inc. https://doi.org/10.1002/9780470316870.

Brand, Charlotte Olivia, James Ounsley, Daniel van der Post, and Tom Morgan. 2017. “Cumulative Science via Bayesian Posterior Passing, an Introduction.” March. https://doi.org/10.31235/osf.io/67jh7.

Czado, Claudia. 2019. Analyzing Dependent Data with Vine Copulas. New York, NY: Springer Berlin Heidelberg.

Elfadaly, Fadlalla G., and Paul H. Garthwaite. 2013. “Eliciting Dirichlet and Connor–Mosimann Prior Distributions for Multinomial Models.” TEST 22 (4): 628–46. https://doi.org/10.1007/s11749-013-0336-4.

Gelman, Andrew, Daniel Simpson, and Michael Betancourt. 2017. “The Prior Can Often Only Be Understood in the Context of the Likelihood.” Entropy 19 (10): 555. https://doi.org/gfgwdd.

Hadlock, Christopher C., and J. Eric Bickel. 2017. “Johnson Quantile-Parameterized Distributions.” Decision Analysis 14 (1): 35–64. https://doi.org/f936ww.

———. 2019. “The Generalized Johnson Quantile-Parameterized Distribution System.” Decision Analysis 16 (1): 67–85. https://doi.org/ghhdxr.

Hadlock, Christopher Campbell. 2017. “Quantile-Parameterized Methods for Quantifying Uncertainty in Decision Analysis.” Thesis, Austin, TX: University of Texas. https://doi.org/10.15781/T2F18SX41.

Hanea, Anca M., Gabriela F. Nane, Tim Bedford, and Simon French, eds. 2021a. Expert Judgement in Risk and Decision Analysis. Vol. 293. International Series in Operations Research & Management Science. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-46474-5.

Hanea, Anca M, Gabriela F Nane, Tim Bedford, and Simon French. 2021b. Expert Judgement in Risk and Decision Analysis. http://public.eblib.com/choice/PublicFullRecord.aspx?p=6484598.

Hartmann, Marcelo, Georgi Agiashvili, Paul Bürkner, and Arto Klami. 2020. “Flexible Prior Elicitation via the Prior Predictive Distribution.” February 23, 2020. http://arxiv.org/abs/2002.09868.

Howard, Ronald A., and Ali E. Abbas. 2014. Foundations of Decision Analysis. Boston: Pearson.

Kahneman, Daniel, Olivier Sibony, and Cass R. Sunstein. 2021. Noise: A Flaw in Human Judgment. Little, Brown.

Keelin, Thomas W. 2016. “The Metalog Distributions.” Decision Analysis 13 (4): 243–77. https://doi.org/f9n7nt.

Keelin, Thomas W., and Bradford W. Powley. 2011. “Quantile-Parameterized Distributions.” Decision Analysis 8 (3): 206–19. https://doi.org/10.1287/deca.1110.0213.

Kurowicka, Dorota, and Harry Joe, eds. 2011. Dependence Modeling: Vine Copula Handbook. Singapore: World Scientific.

McElreath, Richard. 2020. Statistical Rethinking: A Bayesian Course with Examples in R and Stan. 2nd ed. New York, NY: Chapman and Hall/CRC. https://doi.org/10.1201/9780429029608.

Morgan, M. Granger. 2014. “Use (and Abuse) of Expert Elicitation in Support of Decision Making for Public Policy.” Proceedings of the National Academy of Sciences 111 (20): 7176–84. https://doi.org/f53vcd.

Myerson, Roger B. 2005. Probability Models for Economic Decisions. Duxbury Applied Series. Belmont, CA: Thomson/Brooks/Cole.

Pearl, Judea, and Dana Mackenzie. 2018. The Book of Why: The New Science of Cause and Effect. New York: Basic Books.

Perepolkin, Dmytro, Benjamin Goodrich, and Ullrika Sahlin. 2021a. “The Tenets of Indirect Inference in Bayesian Models.” Preprint. Open Science Framework. https://doi.org/10.31219/osf.io/enzgs.

———. 2021b. “Hybrid Elicitation and Indirect Bayesian Inference with Quantile-Parametrized Likelihood,” September. https://doi.org/10.31219/osf.io/paby6.

Quigley, John, and Lesley Walls. 2021. “Characteristics of a Process for Subjective Probability Elicitation.” In Expert Judgement in Risk and Decision Analysis, edited by Anca M. Hanea, Gabriela F. Nane, Tim Bedford, and Simon French, 287–318. International Series in Operations Research & Management Science. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-46474-5_13.

Runge, Michael C., Sarah J. Converse, and James E. Lyons. 2011. “Which Uncertainty? Using Expert Elicitation and Expected Value of Information to Design an Adaptive Program.” Biological Conservation, Adaptive management for biodiversity conservation in an uncertain world, 144 (4): 1214–23. https://doi.org/10.1016/j.biocon.2010.12.020.

Schlaifer, Robert, and Howard Raiffa. 1961. Applied Statistical Decision Theory.

Spetzler, Carl S., and Carl-Axel S. Staël Von Holstein. 1975. “Probability Encoding in Decision Analysis.” Management Science 22 (3): 340–58. https://doi.org/fftvhj.

Welsh, Matthew B., and Steve H. Begg. 2018. “More-or-Less Elicitation (MOLE): Reducing Bias in Range Estimation and Forecasting.” EURO Journal on Decision Processes 6 (1): 171–212. https://doi.org/ghr8gr.
