MATH2647 Probability II
A Markov chain is a process in which the next state depends only on the current state and not on the past. Such processes were first introduced by A. Markov in the early twentieth century in an attempt to extend the validity of the law of large numbers to dependent variables. As an interesting application, Markov used such processes to describe the distribution of letters in Pushkin's verse novel Eugene Onegin. Since then, the theory of Markov chains has become one of the most important areas of contemporary probability theory, providing the foundation for understanding, explaining and predicting phenomena in diverse areas including biology, chemistry, physics, economics and finance, to name just a few. Markov chains are also used in computer science and statistics (Markov chain Monte Carlo is one of the most popular simulation methods) as well as in many everyday applications (e.g. Google ranks webpages using a particular Markov chain).
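The Markov property above can be illustrated with a short simulation sketch (not part of the course materials): a hypothetical two-state chain whose transition probabilities are chosen arbitrarily. Note that the next state is sampled using only the current state, never the earlier history.

```python
import random

# Hypothetical two-state chain on {"A", "B"} with assumed (arbitrary)
# transition probabilities, purely to illustrate the Markov property.
P = {"A": {"A": 0.9, "B": 0.1},
     "B": {"A": 0.5, "B": 0.5}}

def step(state, rng):
    """Sample the next state given only the current one."""
    u = rng.random()
    cumulative = 0.0
    for nxt, p in P[state].items():
        cumulative += p
        if u < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off

def simulate(start, n, seed=0):
    """Run the chain for n steps from the given start state."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path

print(simulate("A", 10))
```

Because each call to `step` looks only at `path[-1]`, the simulated trajectory depends on the past solely through the current state, which is exactly the Markov property.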
In addition to Markov chains, we will discuss other important topics, including generating functions and the convergence of sequences of random variables.
Outline of Course
- Infinite collections of events, Borel-Cantelli lemmas.
- Sequences of random variables: modes of convergence, limit theorems.
- Generating functions and their applications.
- Markov chains: Markov property, classification of states, hitting probabilities, stationary distribution.
- Topics in discrete probability: further examples, if time permits.
For details of prerequisites, corequisites, excluded combinations, teaching methods, and assessment details, please see the Faculty Handbook.
Please see the Library Catalogue for the MATH2647 reading list.