Colleagues are very welcome at the following events hosted by CHESS and K4U (ERC project) this academic year.
Workshop: Philosophy of Statistics
Prof David Hendry (University of Oxford)
Robust Model Selection (joint work with Jennifer L. Castle and Jurgen A. Doornik)
Complete and correct a priori specifications of models for observational data never exist, so model selection is unavoidable. The target of selection needs to be the process generating the data for the variables under analysis, while retaining the objective of the study, often a theory-based formulation. Successful selection requires robustness against many potential problems jointly, including outliers and shifts; omitted variables; incorrect distributional shape; non-stationarity; mis-specified dynamics; and non-linearity, as well as inappropriate exogeneity assumptions, while seeking parsimonious final representations that retain the relevant information, are well specified, encompass alternative models, and evaluate the validity of the study. Our approach to doing so inevitably leads to more candidate variables than observations, handled by iteratively switching between contracting and expanding multi-path searches, here programmed in Autometrics. We re-analyse the 'local instability' in the 'robust' method of least median squares shown by Hettmansperger and Sheather (1992) using indicator saturation (IS) to explain their findings, and apply IS to discriminate between measurement errors and outliers, as well as between outliers and large observations arising from non-linear responses (illustrated by artificial data). We also illustrate the approach by empirical models of wage-age relationships for the USA, and inflation for the UK (both tackling outliers and non-linearities that can distort other estimation methods), as well as by impacts of volcanic eruptions on Northern-hemisphere temperature reconstructions (using designed shift functions).
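For readers unfamiliar with indicator saturation: the full Autometrics multi-path search described above is far richer, but the core idea of impulse-indicator saturation (IS) can be sketched in a toy split-half form. The sketch below is a simplified illustration under assumed Gaussian data, not the authors' implementation: an impulse dummy is added for every observation in two half-sample blocks, dummies with large t-ratios are retained, and the union is re-checked in a final regression.

```python
import numpy as np

def ols_tstats(y, X):
    """OLS coefficients and their t-ratios."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    s2 = (resid @ resid) / dof
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, beta / se

def split_half_iis(y, t_crit=2.58):
    """Toy impulse-indicator saturation: saturate each half-sample
    block with impulse dummies, keep dummies with |t| > t_crit,
    then re-estimate with the union of retained dummies."""
    n = len(y)
    ones = np.ones((n, 1))
    candidates = []
    for block in (range(0, n // 2), range(n // 2, n)):
        block = list(block)
        dummies = np.eye(n)[:, block]       # one impulse per observation in the block
        _, t = ols_tstats(y, np.hstack([ones, dummies]))
        candidates += [i for i, ti in zip(block, t[1:]) if abs(ti) > t_crit]
    if not candidates:
        return []
    # final check: regression with intercept plus all retained dummies
    _, t = ols_tstats(y, np.hstack([ones, np.eye(n)[:, candidates]]))
    return [i for i, ti in zip(candidates, t[1:]) if abs(ti) > t_crit]
```

On artificial data with a single contaminated observation (e.g. `y = 5 + noise` with `y[10] += 8`), the retained-dummy set picks out index 10, flagging the outlier without specifying its location in advance; this is the mechanism the abstract uses to separate outliers from measurement errors and genuine non-linear responses.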
Prof Deborah Mayo (Virginia Tech)
How to 'Keep Calm and Carry On' in Today's 3D* Statistics Wars: 7 Responses for Severe Testers
*Philosophical, Social & Political
Intermingled in today’s statistical controversies are some long-standing, but unresolved, disagreements on the nature and principles of statistical methods and the roles for probability in statistical inference. These have important philosophical dimensions that must be recognized to effectively carry out, as well as appraise, statistical research in today’s social contexts. To combat the dangers of unthinking bandwagon effects, practitioners and consumers should be in a position to critically evaluate the ramifications of proposed statistical "reforms," as well as respond to often-rehearsed objections to statistical significance tests. I distill these complex philosophical issues into 7 simple responses to key challenges.
Deborah G. Mayo is Professor Emerita in the Department of Philosophy at Virginia Tech and is a visiting professor at the London School of Economics and Political Science, Centre for the Philosophy of Natural and Social Science. She is the author of Error and the Growth of Experimental Knowledge (Chicago, 1996), which won the 1998 Lakatos Prize, awarded to the most outstanding contribution to the philosophy of science during the previous six years. She co-edited Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science (CUP, 2010) with Aris Spanos, and has published widely in the philosophy of science, statistics, and experimental inference. Her most recent book is Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (CUP, 2018). She will co-direct (with Aris Spanos) a Summer Seminar on Philosophy of Statistics at Virginia Tech, with 15 participating philosophy and social science faculty and postdocs, July 28-August 11, 2019. A link (from her blog) to the entire first chapter (Excursion 1 Tour I) is here: https://errorstatistics.com/2018/09/08/excursion-1-tour-i-beyond-probabilism-and-performance-severity-requirement/
Prof Jan Sprenger (University of Turin)
Degree of Corroboration: An Antidote to the Replication Crisis
Shortcomings of prevalent statistical methods, in particular Null Hypothesis Significance Testing (NHST), are often cited as causes of the replication crisis in various scientific disciplines. In this paper, I identify how a particular feature of NHST contributes to the replication crisis: the impossibility of quantifying support for the null hypothesis. I argue that popular alternatives to NHST, such as confidence intervals and Bayesian inference, also fail to address the problem in a fully satisfactory way. In this talk, I elaborate on the concept of corroboration of the null hypothesis in order to fill this gap. I explicate degree of corroboration using a parsimonious set of adequacy criteria, and I show how corroboration-based hypothesis testing improves statistical inference and mitigates the replication crisis.
All welcome and refreshments provided – please contact the Centre Administrator at firstname.lastname@example.org to confirm attendance
Department of Philosophy
50 Old Elvet
DH1 3HN, UK
Tel: 0191 334 6552
Fax: 0191 334 6551