
In the 2018/19 academic year, this class is taught in semester B (3rd quarter) and starts on 4-Feb-2019.
News
- 26-Mar-2019 We added an extra old exam (from April 2017, with solutions) to aid your exam preparation.
- 12-Mar-2019 The PDF handout for part-2 has been updated.
- 5-Feb-2019 The PDF bundle for part-1 has been updated.
Materials
In principle, you can download all needed materials from this site. We strongly recommend that you acquire the following textbook: Pattern Recognition and Machine Learning (Springer, 2006) by Christopher M. Bishop. You can also download this book for free in PDF format here. Try to get the book before classes start.
Part 1: Linear Gaussian Models and the EM Algorithm
Instructor: Prof.dr.ir. Bert de Vries
We present a unified probabilistic modeling approach to a large set of algorithms based on Linear Gaussian Models, including models for regression and classification problems, Gaussian mixture models, Kalman filters, hidden Markov models and various latent component analysis models. Furthermore, we derive the Expectation Maximization (EM) algorithm for maximum likelihood estimation problems and present factor graphs as a unifying framework for efficient realization of probabilistic inference algorithms. In part 1, the emphasis will be on parameter estimation for a given model specification. You can view the lecture notes through the links below:
- 0 - Introduction
- 1 - Machine Learning Overview
- 2 - Probability Theory Review
- 3 - Bayesian Machine Learning
- 4 - Working with Gaussians
- 5 - Density Estimation
- 6 - Linear Regression
- 7 - Generative Classification
- 8 - Discriminative Classification
- 9 - Clustering with Gaussian Mixture Models
- 10 - The EM Algorithm
- 11 - Continuous Latent Variable Models - PCA and FA
- 12 - Dynamic Latent Variable Models
- 13 - Factor Graphs and Message Passing Algorithms
- The source files for these lecture notes are accessible on GitHub. If you catch an error or have a specific update request, please file a GitHub issue.
- Here is a PDF bundle of all lectures for part-1. The lecture notes may change a bit during the course, e.g., to incorporate comments by students. A final PDF version will be posted after the last lecture.
- Code examples in the lecture notes are written in the Julia language, which is syntactically similar to MATLAB. To run the code examples directly in the browser, open the lecture notes as Jupyter notebooks. We recommend the cloud-based JuliaBox service for running Jupyter notebooks; please see these instructions (scroll down to the README) if you want to run the lecture notes in JuliaBox. A small illustrative snippet of the kind of Julia code you will encounter follows below.
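For a flavor of the kind of Julia code used in the notebooks, here is a minimal sketch of the EM algorithm for a two-component 1-D Gaussian mixture. This snippet is for illustration only and is not taken from the lecture notes; the data and parameter names are made up, and it uses only plain Julia (no extra packages).

```julia
# Illustrative sketch (not from the lecture notes): EM for a two-component
# 1-D Gaussian mixture, written in plain Julia.

# Gaussian probability density with mean μ and variance σ²
gausspdf(x, μ, σ²) = exp(-(x - μ)^2 / (2σ²)) / sqrt(2π * σ²)

function em_gmm(x; iters = 50)
    w, μ, σ² = 0.5, [-1.0, 1.0], [1.0, 1.0]   # initial guesses
    for _ in 1:iters
        # E-step: responsibility of component 1 for each data point
        p1 = w .* gausspdf.(x, μ[1], σ²[1])
        p2 = (1 - w) .* gausspdf.(x, μ[2], σ²[2])
        γ  = p1 ./ (p1 .+ p2)

        # M-step: re-estimate mixing weight, means, and variances
        N1 = sum(γ)
        N2 = length(x) - N1
        w     = N1 / length(x)
        μ[1]  = sum(γ .* x) / N1
        μ[2]  = sum((1 .- γ) .* x) / N2
        σ²[1] = sum(γ .* (x .- μ[1]) .^ 2) / N1
        σ²[2] = sum((1 .- γ) .* (x .- μ[2]) .^ 2) / N2
    end
    return w, μ, σ²
end

# Synthetic data drawn from two Gaussians, then fit with EM
x = vcat(randn(200) .- 2.0, 0.5 .* randn(200) .+ 3.0)
w, μ, σ² = em_gmm(x)
println("mixing weight ≈ ", round(w, digits=2))
println("means ≈ ", round.(μ, digits=2), ", variances ≈ ", round.(σ², digits=2))
```

Wrapping the updates in a function keeps the E-step and M-step readable and avoids Julia's global-scope pitfalls when the snippet is run as a script.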
Part 2: Model Complexity Control and the MDL Principle
Instructor: Dr.ir. Tjalling J. Tjalkens
In part 2, the discussion on probabilistic modeling extends to the model specification itself. Specifically, the notion of Stochastic Complexity will be developed and the Minimum Description Length (MDL) principle will be used to select appropriate models; a small illustrative Bayesian Information Criterion (BIC) computation is shown after the list below. The lessons are structured as follows:
- Part 2A: The Bayesian Information Criterion
- Part 2B: Bayesian model estimation and Context-tree model selection
- Part 2C: Descriptive complexity
- Click here to view or download the lecture notes for part-2.
- An extended version of the part-2 handouts is in preparation but only half-finished. You can download this UNFINISHED work as well.
- Background on information theory.
- Markov structures and summary of essential content.
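To make the model-selection theme of part 2A a bit more concrete, below is a small, generic illustration of the Bayesian Information Criterion. It is not taken from the part-2 handouts, and the log-likelihood values are made up for the sake of the example.

```julia
# Illustrative sketch only (not from the part-2 handouts).
# BIC = k * log(n) - 2 * log(L̂), where k is the number of free parameters,
# n the number of observations, and L̂ the maximized likelihood.
# The model with the lowest BIC is preferred.

bic(loglik, k, n) = k * log(n) - 2 * loglik

# Hypothetical comparison on n = 400 observations (log-likelihoods made up):
println("2-component mixture: BIC = ", bic(-812.3, 5, 400))
println("3-component mixture: BIC = ", bic(-809.9, 8, 400))
```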
Exam Preparation
- Each year there will be two written exam opportunities. Check the official TU/e course site for exam dates.
- In preparation for the exam, we recommend that you work through the following exercises and old exams:
- Please feel free to consult the following matrix and Gaussian cheat sheets (by Sam Roweis) while working the exercises.
- Note, however, that you cannot bring notes or books to the exam. All needed formulas are supplied on the exam sheet.
Video
The 2007 class meetings were recorded and can be viewed if you have a valid TU/e account. Note, however, that the current class differs a bit from the 2007 edition. Please talk to us before you plan to follow the class from video only.
Miscellany
- Prerequisites: mathematical maturity equivalent to an undergraduate engineering program. Some MATLAB programming skills are helpful.
- You are advised to bring the lecture notes (in soft- or hardcopy) to class so that you can add your personal comments.
- Some related resources on the net with lots of relevant content:
- CS281: Advanced Machine Learning by Prof. Ryan Adams, at Harvard University.