
Department of Mathematical Sciences

Seminar Archives

On this page you can find information about seminars in this and previous academic years, where available on the database.

Statistics Seminars: Prediction with Expert Advice and Game-Theoretic Supermartingales

Presented by Alexey Chernov, CLRC and Dept Computer Science, Royal Holloway, University of London

22 March 2010 14:15 in CM221

We consider online sequence prediction, a standard machine learning task in which Learner repeatedly predicts the next outcome in a sequence of trials. The quality of each prediction is assessed by a loss function (a scoring rule), and Learner's performance is measured by the accumulated loss. The statistical learning framework assumes some statistical properties of the trials in order to construct a good strategy for Learner. Prediction with Expert Advice is an alternative framework, free of statistical assumptions. Instead, it assumes that there is a pool of predictors, called Experts, whose predictions Learner can observe; Learner must then predict (almost) as well as the best Expert. A number of approaches have been suggested for constructing Learner's strategy in this framework. The talk is devoted to recent developments concerning one of these approaches, Defensive Forecasting.
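To make the Prediction with Expert Advice setting concrete, here is a minimal sketch of a standard exponentially weighted mixture forecaster (not the Defensive Forecasting construction discussed in the talk). The two static Experts and the outcome-generating rule are hypothetical choices for illustration; for log loss with learning rate 1 the mixture is a Bayes mixture, so Learner's regret against the best Expert is at most the log of the number of Experts.

```python
import math
import random

def mixture_forecaster(expert_probs, losses, eta=1.0):
    """Exponentially weighted average of the Experts' probability forecasts.

    expert_probs: each Expert's probability that the next outcome is 1.
    losses: each Expert's accumulated log loss so far.
    """
    weights = [math.exp(-eta * L) for L in losses]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, expert_probs)) / total

def log_loss(p, y):
    # Log loss (a proper scoring rule) of forecast p on outcome y in {0, 1}.
    return -math.log(p if y == 1 else 1.0 - p)

random.seed(0)
experts = [0.3, 0.7]              # two static Experts (hypothetical)
losses = [0.0] * len(experts)     # Experts' accumulated losses
learner_loss = 0.0
for _ in range(1000):
    y = 1 if random.random() < 0.6 else 0
    p = mixture_forecaster(experts, losses)
    learner_loss += log_loss(p, y)
    for i, q in enumerate(experts):
        losses[i] += log_loss(q, y)

# Bayes-mixture regret bound: Learner's loss exceeds the best Expert's
# loss by at most ln(number of Experts).
assert learner_loss <= min(losses) + math.log(len(experts)) + 1e-9
```

The guarantee holds for every outcome sequence, not just the random one simulated here, which is the sense in which the framework is free of statistical assumptions.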

Defensive Forecasting originates from the ideas of the game-theoretic approach to probability due to Vovk and Shafer and of asymptotic calibration due to Foster and Vohra. Informally speaking, we choose some desirable statistical property (a test) and, for a given sequence of outcomes (of arbitrary nature), construct a probability measure such that the property holds for these outcomes under this measure. The measure is constructed in an online fashion: at each step, the distribution for the next trial is constructed, and only after that is the outcome revealed. The statistical property (the test) is represented by a continuous supermartingale that grows to infinity on non-complying sequences. At each step, we select a measure such that the supermartingale does not grow, whatever the outcome. Such a measure (called a neutral measure) exists by a variant of the Minimax Theorem.
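A toy illustration of the neutral-measure idea, under a much simpler test than those used in the talk: take the test statistic to be the cumulative forecast bias, which would drift to infinity on sequences where the forecasts are systematically too low or too high. Choosing the forecast so that neither outcome can increase the statistic keeps it bounded against any outcome sequence.

```python
import random

def neutral_forecast(bias):
    """Choose p so the test statistic cannot grow for either outcome.

    The test statistic is the cumulative bias sum(y_i - p_i); it plays
    the role of the supermartingale in this toy example.
    """
    if bias > 0:      # forecasts have been too low: predict 1, so y - p <= 0
        return 1.0
    if bias < 0:      # forecasts have been too high: predict 0, so y - p >= 0
        return 0.0
    return 0.5        # neutral point: either outcome moves bias by only 0.5

random.seed(1)
bias = 0.0
for _ in range(10_000):
    p = neutral_forecast(bias)
    y = random.choice([0, 1])   # the outcomes may follow any rule here
    bias += y - p

# The bias stays bounded regardless of how the outcomes are chosen.
assert abs(bias) <= 1.0
```

The talk's construction is far more general (continuous supermartingales encoding rich tests, with the neutral measure supplied by the Minimax Theorem), but the mechanism is the same: forecast so that the test cannot gain, then reveal the outcome.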

For a range of loss functions, we can achieve optimal regret (the difference between Learner's loss and the best Expert's loss) with the help of Defensive Forecasting. To this end, we construct a certain supermartingale and a mapping from the measure generated by Defensive Forecasting to Learner's prediction. It turns out that the loss function combined with this mapping becomes a proper scoring rule.
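For a numerical reminder of what "proper scoring rule" means (this is the standard definition, not the specific mapping constructed in the talk): a scoring rule is proper when reporting the true probability minimizes the forecaster's expected loss. The Brier score is a classic example, checked here on a grid.

```python
def expected_brier(p, q):
    """Expected Brier loss of forecast q when the true probability of 1 is p."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

# Propriety: for every true probability p, reporting q = p minimizes the
# expected loss, so the forecaster has no incentive to misreport.
grid = [i / 100 for i in range(101)]
for p in [0.1, 0.37, 0.5, 0.9]:
    best_q = min(grid, key=lambda q: expected_brier(p, q))
    assert abs(best_q - p) < 0.01
```

The derivative of the expected loss in q is 2(q - p), so the minimum is exactly at q = p.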

The talk describes results by Volodya Vovk and his collaborators and is partially based on the paper "Supermartingales in Prediction with Expert Advice" (ALT 2008, LNCS vol. 5254, pp. 199-213).
