Acoustic scene classification from few examples

Abstract

In order to personalize the behavior of hearing aid devices in different acoustic environments, we need to develop personalized acoustic scene classifiers. Since we cannot afford to burden an individual hearing aid user with the task of collecting a large acoustic database, we aim instead to train a scene classifier on just one (or at most a few) in-situ recorded acoustic waveforms of a few seconds duration per scene. In this paper we develop such a “one-shot” personalized scene classifier, based on a hidden semi-Markov model. The presented classifier consistently outperforms a more classical Dynamic-Time-Warping Nearest-Neighbor classifier and, after training on just one recording of 10 seconds duration per scene, classifies acoustic scenes at roughly twice the accuracy of a (random) chance classifier.
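For context, the Dynamic-Time-Warping Nearest-Neighbor baseline mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature representation, distance on feature frames, and exemplar format are all assumptions here.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two feature sequences a (n x d) and b (m x d),
    using Euclidean frame distance and the standard step pattern."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, exemplars):
    """One-shot 1-NN classification: assign the label of the exemplar
    closest to the query under DTW. `exemplars` is a list of
    (scene_label, feature_sequence) pairs, one per acoustic scene."""
    return min(exemplars, key=lambda ex: dtw_distance(query, ex[1]))[0]
```

In the one-shot setting each scene contributes a single exemplar sequence, so nearest-neighbor search reduces to one DTW computation per scene.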

Publication
IEEE European Signal Processing Conference
Ivan Bocharov
Former PhD student

Former researcher at BIASlab.

Bert de Vries
Professor

I am a professor at TU Eindhoven and team leader of BIASlab.