Statistics Seminars: Robust versus Constrained-Non-Informative Bayesian Priors in Common-Cause Failure Modelling with Zero Counts
6 February 2012 14:00 in CM221
In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model incorporates precise mean values for the alpha-factors, but is otherwise quite diffuse. However, although this maximum entropy prior is in principle non-informative about everything but the mean, its tail turns out to be far too light to properly cope with zero counts, which are very typical in multi-component failure data. In fact, zero counts make the inferences strongly sensitive to the tail of the prior, and hence it seems that no single prior can do the job.
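The zero-count sensitivity mentioned above can be illustrated with a standard conjugate Dirichlet update: when a category has count zero, its posterior mean is driven entirely by the prior. The sketch below is a minimal illustration, not the Kelly-Atwood prior itself; the prior parameters and failure counts are invented for the example.

```python
def dirichlet_posterior_mean(alpha_prior, counts):
    """Posterior mean of each alpha-factor under a Dirichlet(alpha_prior)
    prior after observing `counts` (conjugate update):
    E[alpha_k | data] = (a_k + n_k) / (a_0 + N)."""
    total = sum(alpha_prior) + sum(counts)
    return [(a + n) / total for a, n in zip(alpha_prior, counts)]

# Two hypothetical precise priors with identical means (0.80, 0.15, 0.05)
# but different concentrations; the observed counts have a zero in the
# third category.
counts = [10, 2, 0]
weak   = dirichlet_posterior_mean([0.8, 0.15, 0.05], counts)  # a_0 = 1
strong = dirichlet_posterior_mean([8.0, 1.50, 0.50], counts)  # a_0 = 10

# For the zero-count category the data contribute nothing, so the two
# posterior means differ purely through the choice of prior.
print(weak[2], strong[2])
```

Both priors encode the same mean belief, yet the posterior mean for the unobserved category differs by a factor driven only by the prior concentration, which is the sensitivity the abstract describes.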
To address these concerns, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this robust Bayesian approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable.
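In Walley's imprecise Dirichlet model, the prior expectation of each alpha-factor ranges over the open unit interval, so the posterior expectation for category k lies between n_k/(N+s) and (n_k+s)/(N+s), where s is the learning parameter. The following sketch computes these bounds for illustrative counts (the counts and s values are assumptions for the example):

```python
def idm_posterior_bounds(counts, s):
    """Lower and upper posterior expectations for each alpha-factor under
    Walley's imprecise Dirichlet model. With prior expectations t_k
    ranging over (0, 1), the posterior expectation
    (n_k + s * t_k) / (N + s) ranges over (n_k/(N+s), (n_k+s)/(N+s))."""
    n_total = sum(counts)
    return [(n_k / (n_total + s), (n_k + s) / (n_total + s))
            for n_k in counts]

# Hypothetical failure counts for a three-component system;
# note the zero count in the last category.
counts = [10, 2, 0]
for s in (1, 10):  # the range of learning parameters discussed in the talk
    print(s, idm_posterior_bounds(counts, s))
```

Note how a larger s widens the posterior bounds, especially for the zero-count category: the model learns more slowly from the data and so remains more cautious, which is the trade-off the elicitation of s addresses.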
(Joint work with Dana Kelly and Gero Walter.)
Contact email@example.com for more information