Gesture recognition offers a natural extension of the way we currently interact with devices. Commercially available gesture recognition systems, however, are usually pre-trained. We propose a method that allows users to define their own gestures using only a few training examples.
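
As a rough illustration of the problem setting (not of our method), the sketch below classifies a new gesture by its dynamic-time-warping distance to a handful of user-recorded templates. It assumes each gesture arrives as a one-dimensional sensor trace; all names and signals are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

class FewShotGestureClassifier:
    """1-nearest-neighbour classification against a handful of stored templates."""

    def __init__(self):
        self.templates = []  # list of (label, signal) pairs, one per training example

    def add_example(self, label, signal):
        self.templates.append((label, np.asarray(signal, dtype=float)))

    def classify(self, signal):
        signal = np.asarray(signal, dtype=float)
        label, _ = min(self.templates, key=lambda t: dtw_distance(t[1], signal))
        return label

# Toy usage: two user-defined gestures, one training example each.
t = np.linspace(0.0, 1.0, 50)
clf = FewShotGestureClassifier()
clf.add_example("wave", np.sin(2 * np.pi * 3 * t))
clf.add_example("circle", np.sin(2 * np.pi * 1 * t))
print(clf.classify(np.sin(2 * np.pi * 3 * t + 0.2)))  # prints "wave"
```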

We develop a fully probabilistic approach to pure-tone audiometry. By assuming a Gaussian process-based response model for test tones, the hearing threshold estimation problem becomes one of Bayesian inference. This allows the use of information-theoretic criteria to select optimal test tones.
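
The sketch below illustrates this inference loop under simplifying assumptions: a grid posterior over a single scalar threshold and a fixed sigmoidal psychometric curve stand in for the full Gaussian process model, and the next test tone is chosen to maximize expected information gain. All constants are illustrative.

```python
import numpy as np

GRID = np.linspace(-10, 110, 241)    # candidate hearing thresholds (dB HL)
TONES = np.linspace(-10, 110, 61)    # candidate test tone intensities (dB HL)
SLOPE = 5.0                          # assumed psychometric slope (dB)

def p_detect(tone, threshold):
    """Sigmoidal psychometric curve: probability of a 'heard' response."""
    return 1.0 / (1.0 + np.exp(-(tone - threshold) / SLOPE))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def update(posterior, tone, heard):
    """Bayesian update of the threshold posterior after one trial."""
    lik = p_detect(tone, GRID) if heard else 1.0 - p_detect(tone, GRID)
    posterior = posterior * lik
    return posterior / posterior.sum()

def best_tone(posterior):
    """Select the tone with maximal expected information gain about the threshold."""
    h0 = entropy(posterior)
    gains = []
    for tone in TONES:
        lik = p_detect(tone, GRID)
        p_heard = np.sum(posterior * lik)
        post_heard = posterior * lik / p_heard
        post_missed = posterior * (1.0 - lik) / (1.0 - p_heard)
        gains.append(h0 - p_heard * entropy(post_heard)
                        - (1.0 - p_heard) * entropy(post_missed))
    return TONES[int(np.argmax(gains))]

# Toy run against a simulated listener with a true threshold of 40 dB HL.
rng = np.random.default_rng(0)
posterior = np.ones_like(GRID) / len(GRID)   # flat prior over thresholds
for _ in range(15):
    tone = best_tone(posterior)
    heard = rng.random() < p_detect(tone, 40.0)
    posterior = update(posterior, tone, heard)
print(GRID[np.argmax(posterior)])            # estimate close to 40
```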

We want to provide a hearing-impaired patient with the best settings for their hearing aid. By recording in-situ user feedback on device performance, we can better understand the user's specific hearing loss and preferences, and use this knowledge to deliver a better, personalized hearing experience.
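
As a minimal illustration of learning from in-situ feedback (a deliberately simple stand-in, not our actual approach), the sketch below treats a discrete set of hearing aid presets as a bandit problem and picks settings by Thompson sampling over Beta posteriors.

```python
import numpy as np

class SettingPersonalizer:
    """Thompson-sampling bandit over a discrete set of hearing aid presets,
    driven by binary in-situ feedback ('liked it' / 'did not like it')."""

    def __init__(self, n_settings):
        self.wins = np.ones(n_settings)    # Beta(1, 1) prior per preset
        self.losses = np.ones(n_settings)

    def propose(self):
        """Sample a preference estimate per preset; try the most promising one."""
        return int(np.argmax(np.random.beta(self.wins, self.losses)))

    def feedback(self, setting, liked):
        if liked:
            self.wins[setting] += 1.0
        else:
            self.losses[setting] += 1.0

# Toy usage with hypothetical per-preset like probabilities.
prefs = np.array([0.2, 0.7, 0.4, 0.5])
tuner = SettingPersonalizer(len(prefs))
rng = np.random.default_rng(1)
for _ in range(200):
    s = tuner.propose()
    tuner.feedback(s, rng.random() < prefs[s])
print(np.argmax(tuner.wins / (tuner.wins + tuner.losses)))  # most often 1
```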

How do you solve a problem that is only vaguely described? Here we outline an engineering approach that guides our research on vaguely defined problems such as hearing impairment.

The primary mission of the BIASlab team is to develop in-situ trainable Bayesian Intelligent Agents for applications in wearable technology.

Bert de Vries, Professor, TU Eindhoven
Tjalling Tjalkens, Associate Professor, TU Eindhoven
Anouk van Diepen, PhD candidate, TU Eindhoven
Thijs van de Laar, PhD candidate, TU Eindhoven
Marco Cox, PhD candidate, TU Eindhoven
Ivan Bocharov, PhD candidate, TU Eindhoven
Quan (Eric) Nguyen, PhD candidate, TU Eindhoven
Ismail Senoz, Graduate student, TU Eindhoven
Joris Kraak, Senior Software Engineer, GN Hearing

In this project, you are challenged to develop novel machine learning technology for recognizing human motions.

In this project, you are challenged to design an agent that learns to solve the cocktail party problem through on-the-spot interactions with a (human) listener.

We gratefully acknowledge financial support from our sponsors:

NWO
GN Resound