This course covers the fundamentals of a Bayesian (i.e., probabilistic) approach to machine learning and information processing systems. The Bayesian approach allows for a unified and consistent treatment of many model-based machine learning techniques. We focus on linear Gaussian systems and discuss many useful models and applications, including common regression and classification methods, Gaussian mixture models, hidden Markov models, and Kalman filters. We discuss important algorithms for parameter estimation in these models, including the Expectation-Maximization (EM) algorithm and Variational Bayes (VB). The Bayesian method also provides tools for comparing the performance of different information processing systems by estimating the "Bayesian evidence" for each model, and we discuss several methods for approximating this evidence. Next, we discuss intelligent agents that learn purposeful behavior from interactions with their environment. Such agents are used in applications such as self-driving cars and the interactive design of virtual and augmented realities. In this course, we also relate synthetic Bayesian intelligent agents to natural intelligent agents such as the brain. You will be challenged to code new Bayesian machine learning algorithms yourself and apply them to practical information processing problems.
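As a one-formula preview of the model comparison theme above: the Bayesian evidence of a model $m$ with parameters $\theta$ for observed data $D$ is the marginal likelihood

$$
p(D \mid m) = \int p(D \mid \theta, m)\, p(\theta \mid m)\, \mathrm{d}\theta \,,
$$

i.e., the probability that model $m$ as a whole assigns to the data after its parameters have been integrated out. Competing models can then be ranked by their evidence (see, e.g., Bishop, 2006).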
- Watch this page for news about the course
In principle, you can download all needed materials from this site.
Please download the following books/resources:
- Christopher M. Bishop (2006), Pattern Recognition and Machine Learning. You can also buy a hardcopy, e.g. at bol.com.
- Ariel Caticha (2012), Entropic Inference and the Foundations of Physics
The Fundamentals and Advanced Materials lectures (see below) are taught by Bert de Vries. The Probabilistic Programming minicourse is taught by Wouter Kouw and the What is Life? lecture is presented by Magnus Koudahl.
- 0 - Course Outline and Administrative Issues
- 1 - Machine Learning Overview
- 2 - Probability Theory Review
- 3 - Bayesian Machine Learning
- 4 - Continuous Data and the Gaussian Distribution
- 5 - Discrete Data and the Multinomial Distribution
- 6 - Regression
- 7 - Generative Classification
- 8 - Discriminative Classification
- 9 - Latent Variable Models and Variational Bayes
- 10 - Factor Graphs
- 11 - Dynamic Models
- 12 - Intelligent Agents and Active Inference
Minicourse Probabilistic Programming
- 13 - PP Introduction
- 14 - Linear Regression & Classification (a minimal sketch follows this list)
- 15 - Gaussian Mixture Model
- 16 - Hidden Markov Model
- 17 - Kalman Filtering
Each year there will be two written exam opportunities. Check the official TUE course site for exam dates.
In preparation for the exam, we recommend that you work through the following exercises (updates with solutions will be posted; see the News section):
Please feel free to consult the following matrix and Gaussian cheat sheets (by Sam Roweis) when working on the exercises.
Note, however, that you cannot bring notes or books to the exam. All needed formulas are supplied on the exam sheet.
Prerequisites for this course: mathematical maturity equivalent to an undergraduate engineering program. Scientific programming skills (e.g., in Python, MATLAB, or Julia) are helpful.
You are advised to bring the lecture notes (in soft- or hardcopy) with you to class, so that you can add your personal comments.