Statistics Seminars: Calibration of p-values via the Dirichlet process
21 November 2011 14:00 in CM221
In testing a simple statistical hypothesis, the P-value is defined as the probability, computed under the null hypothesis, of observing a result at least as extreme as the one actually observed. It is commonly described as quantifying the amount of evidence against the null hypothesis, although this interpretation has been disputed by Berger and Sellke. Non-statisticians frequently misinterpret the P-value as the (posterior) probability of the null hypothesis, and even those with statistical training sometimes confuse it with the probability of a type I error. Indeed, although P-values are ubiquitous in applied statistics, they play a role in neither Bayesian nor Neymanian theories of inference.
In 2001, Sellke, Bayarri, and Berger proposed a calibration of P-values whereby they could be given a Bayesian interpretation. In the case of multiple P-values, Efron proposed in 2005 the local false discovery rate as the (estimated) posterior probability of the null hypothesis given the P-value.
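The Sellke, Bayarri, and Berger calibration has a closed form: for p < 1/e, the quantity -e p log(p) is a lower bound on the Bayes factor in favour of the null hypothesis, which converts a P-value into a lower bound on the posterior probability of the null. A minimal sketch of that published formula (the choice of an even prior probability of 1/2 is an illustrative assumption, not part of the calibration itself):

```python
import math

def sbb_bayes_factor_bound(p):
    """Sellke-Bayarri-Berger (2001) lower bound on the Bayes factor
    in favour of the null hypothesis; valid for 0 < p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("calibration requires 0 < p < 1/e")
    return -math.e * p * math.log(p)

def calibrated_posterior_null(p, prior_null=0.5):
    """Lower bound on the posterior probability of the null hypothesis,
    obtained by combining the Bayes-factor bound with the prior odds
    prior_null / (1 - prior_null). The default even prior is only an
    illustrative assumption."""
    bf = sbb_bayes_factor_bound(p)
    prior_odds = prior_null / (1.0 - prior_null)
    posterior_odds = prior_odds * bf
    return posterior_odds / (1.0 + posterior_odds)
```

For example, a P-value of 0.05 calibrates to a posterior null probability of at least about 0.29 under even prior odds, illustrating the gap between a P-value and the posterior probability it is often mistaken for.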
When one has a large number of P-values from related hypotheses, their empirical distribution can be used to make inferences about the proportion of true null hypotheses. Under certain regularity conditions, the posterior probability of each null hypothesis can be calculated as a ratio of slopes: the slope of the distribution function of the P-value under the null hypothesis relative to the slope of its actual distribution function. To obtain a smooth estimate of this distribution, the P-values can be modelled as arising from normally distributed test statistics in which the location parameter itself has an underlying distribution, consisting of an atom at zero mixed with a distribution of alternatives. The prior of this distribution of alternatives is modelled as a Dirichlet process. The posterior mean of the distribution of alternatives is then used to calibrate the P-values as posterior probabilities.
Contact firstname.lastname@example.org for more information.