An In-situ Trainable Gesture Classifier


Gesture recognition enables a natural extension of the way we currently interact with devices. Commercially available gesture recognition systems are usually pre-trained and offer no option for customization by the user. To improve the user experience, it is desirable to allow users to define their own gestures. To avoid overburdening the user, this scenario requires learning from just a few training examples. To this end, we propose a gesture classifier based on a hierarchical probabilistic modeling approach. In this framework, high-level features that are shared among different gestures are extracted from a large labeled data set, yielding a prior distribution over gestures. Using this prior when learning new types of gestures reduces the number of training examples required for individual gestures. As a result, our system needs very few examples to learn to detect previously unseen gestures. We implemented the proposed gesture classifier for a Myo sensor bracelet and tested the system on a database of 17 different gesture types. We show that the proposed system needs significantly fewer examples from new classes than a traditional nearest-neighbor classifier.
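The shared-prior idea behind the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual model: it assumes a simple Gaussian hierarchy with known variances, synthetic feature vectors in place of Myo sensor data, and hypothetical names throughout. Class means observed in a large labeled set define a prior; a new gesture class learned from few examples is then shrunk toward that prior, which is what reduces the required number of training examples.

```python
# Hedged sketch: hierarchical Gaussian prior for few-shot class learning.
# All data is synthetic and all names (prior_mu, posterior_mean, ...) are
# illustrative assumptions, not the paper's implementation.
import random

random.seed(0)

def column_means(rows):
    """Per-dimension mean of a list of equal-length feature vectors."""
    return [sum(col) / len(col) for col in zip(*rows)]

# Step 1: from a large labeled data set (here: 10 simulated gesture
# classes with 100 examples each), estimate each class mean.
class_means = []
for _ in range(10):
    true_mu = [random.gauss(0, 1) for _ in range(3)]
    samples = [[random.gauss(m, 0.3) for m in true_mu] for _ in range(100)]
    class_means.append(column_means(samples))

# Step 2: summarize the class means as a prior over new gesture classes.
prior_mu = column_means(class_means)
tau2 = 1.0     # assumed between-class variance of the prior
sigma2 = 0.09  # assumed within-class observation noise

def posterior_mean(few_shots):
    """MAP estimate of a new class mean from a few examples:
    shrink the few-shot average toward the prior mean."""
    n = len(few_shots)
    xbar = column_means(few_shots)
    # Conjugate Gaussian update: weight grows toward 1 as n increases.
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)
    return [w * xb + (1 - w) * pm for xb, pm in zip(xbar, prior_mu)]
```

With a single training example the estimate stays partly anchored to the prior; as more examples arrive, the weight `w` approaches 1 and the estimate converges to the ordinary sample mean, which is the sense in which the prior substitutes for missing training data.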

Benelearn 2017 conference