Adaptive Information Processing (5SSB0)

In this course we present an introduction to the design of adaptive information processing systems, based on fundamental concepts of probability theory. The course extends coursework on adaptive signal processing and can also be taken as an introduction to **machine learning** and **data science**. Typical application areas include pattern recognition, medical signal analysis, speech and language processing, image processing, bioinformatics and robotics.

In the 2018/19 academic year, this class is taught in semester B (3rd quarter) and starts on 4-Feb-2019.

News

Materials

In principle, you can download all needed materials from this site. We strongly recommend that you acquire the following textbook: Pattern Recognition and Machine Learning (Springer, 2006) by Christopher M. Bishop. You can also download this book for free in PDF format here. Try to get the book before classes start.

Part 1: Linear Gaussian Models and the EM Algorithm

Instructor: Prof.dr.ir. Bert de Vries

We present a unified probabilistic modeling approach to a large set of algorithms based on Linear Gaussian Models, including models for regression and classification problems, Gaussian mixture models, Kalman filters, hidden Markov models and various latent component analysis models. Furthermore, we derive the Expectation-Maximization (EM) algorithm for maximum likelihood estimation problems and present factor graphs as a unifying framework for the efficient realization of probabilistic inference algorithms. In Part 1, the emphasis is on parameter estimation for a given model specification. You can view the lecture notes through the links below:


  • The source files for these lecture notes are available on GitHub. If you catch an error or have a specific update request, please file a GitHub issue.

  • Here is a PDF bundle of all classes for Part 1. The lecture notes may change a bit during the course, e.g., to incorporate comments from students. A final PDF version will be posted after the last lecture.

  • Code examples in the lecture notes are written in the Julia language, which is syntactically similar to MATLAB. To run the code examples directly in the browser, you need to open the lecture notes files in a Jupyter notebook; we recommend the cloud-based JuliaBox service for this. Please see these instructions (scroll down to the README) if you want to run the lecture notes in JuliaBox. A small stand-alone example of the kind of Julia code used in Part 1 is sketched below this list.
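To give a concrete impression of such a code example, below is a minimal, self-contained sketch (written for this page, not taken from the lecture notes) of the EM algorithm for a two-component, one-dimensional Gaussian mixture. The synthetic data, the function name em_gmm and the initial values are all hypothetical; the sketch only assumes that the Distributions package is installed.

```julia
# Minimal sketch (not from the lecture notes): EM for a two-component,
# one-dimensional Gaussian mixture, to give a flavor of the Julia code in Part 1.
using Distributions, Random

# EM for a 2-component univariate Gaussian mixture; returns (weight, means, stdevs)
function em_gmm(x; iters = 50)
    w, μ, σ = 0.5, [-1.0, 1.0], [1.0, 1.0]   # hypothetical initial guesses
    N = length(x)
    for _ in 1:iters
        # E-step: responsibility of component 1 for every data point
        p1 = w .* pdf.(Normal(μ[1], σ[1]), x)
        p2 = (1 - w) .* pdf.(Normal(μ[2], σ[2]), x)
        γ = p1 ./ (p1 .+ p2)

        # M-step: re-estimate weight, means and standard deviations
        N1 = sum(γ)
        w = N1 / N
        μ[1] = sum(γ .* x) / N1
        μ[2] = sum((1 .- γ) .* x) / (N - N1)
        σ[1] = sqrt(sum(γ .* (x .- μ[1]).^2) / N1)
        σ[2] = sqrt(sum((1 .- γ) .* (x .- μ[2]).^2) / (N - N1))
    end
    return w, μ, σ
end

# Synthetic data from two Gaussians (hypothetical example, not course data)
Random.seed!(1)
x = vcat(rand(Normal(-2.0, 1.0), 200), rand(Normal(3.0, 0.5), 100))
println(em_gmm(x))
```

The E-step computes the responsibilities and the M-step re-estimates the mixture parameters from them, which mirrors the E-step/M-step structure discussed in Part 1.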

Part 2: Model Complexity Control and the MDL Principle

Instructor: Dr.ir. Tjalling J. Tjalkens

In Part 2, the discussion on probabilistic modeling extends to model specification itself. Specifically, the notion of Stochastic Complexity will be developed and the Minimum Description Length (MDL) principle will be used to select appropriate models; a small numerical sketch of the Bayesian Information Criterion is given below the lesson list. The lessons are structured as follows:

  • Part 2A: The Bayesian Information Criterion
  • Part 2B: Bayesian model estimation and Context-tree model selection
  • Part 2C: Descriptive complexity
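As a concrete illustration of the topic of Part 2A, the sketch below (written for this page and not part of the course material) uses the Bayesian Information Criterion (BIC) for model-order selection: each candidate polynomial regression model is fit by least squares and its fit is penalized by the number of parameters. The data and the function name bic_poly are hypothetical.

```julia
# Minimal sketch (not course material): model-order selection with the BIC.
using Random, LinearAlgebra

# BIC for a least-squares polynomial fit with k coefficients on N points,
# under a Gaussian noise assumption (noise variance omitted from k for simplicity):
#   BIC = N * log(RSS / N) + k * log(N)
function bic_poly(x, y, degree)
    N = length(x)
    A = [xi^p for xi in x, p in 0:degree]   # Vandermonde design matrix
    w = A \ y                                # least-squares coefficients
    rss = sum(abs2, y .- A * w)              # residual sum of squares
    k = degree + 1
    return N * log(rss / N) + k * log(N)
end

# Hypothetical data: a noisy quadratic, so the BIC should prefer degree ≈ 2
Random.seed!(2)
x = range(-1, 1, length = 50)
y = 1.0 .- 2.0 .* x .+ 3.0 .* x.^2 .+ 0.1 .* randn(50)

for d in 1:6
    println("degree $d: BIC = ", round(bic_poly(x, y, d), digits = 2))
end
```

For the noisy quadratic data generated here, the degree-2 model should attain (approximately) the lowest BIC, illustrating how the criterion trades off goodness of fit against model complexity.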

Exam Preparation

  • Please feel free to consult the following matrix and Gaussian cheat sheets (by Sam Roweis) when working on the exercises.
  • Note, however, that you cannot bring notes or books to the exam. All needed formulas are supplied on the exam sheet.

Video

The 2007 class meetings were recorded and can be viewed if you have a valid TU/e account. Note, however, that the current class will differ slightly from the 2007 class. Talk to us first if you plan to follow the class only from video.

Miscellany

  • Prerequisites: Mathematical maturity equivalent to an undergraduate engineering program. Some MATLAB programming skills are helpful.

  • You are advised to bring the lecture notes (in soft or hard copy) to class so that you can add your personal comments.

  • Some related resources on the net with lots of relevant content

Instructors