2019
Monika, the visual microphone, is based on machine learning. It doesn't pick up sound; it takes photos. First comes the training phase: a pedal triggers the integrated webcam. Thirty slightly varying images of the same gesture are enough for the system to recognize it later and map it to one of a set of given audio tracks. Showing your teeth produces an I-sound; a wide-open mouth produces an A-sound. Now the performance can begin. A background beat runs over the loudspeakers, and as soon as the microphone recognizes a gesture, the matching vocals are played. Mo Ni Ka.
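The described pipeline could be sketched roughly as follows. This is a hypothetical illustration, not the installation's actual code: the model choice (a k-nearest-neighbors classifier on raw pixels), the function names, and the track filenames are all assumptions; real webcam frames are replaced here by synthetic images so the sketch is self-contained.

```python
# Hypothetical sketch of the gesture-to-audio mapping described above.
# Model choice (k-NN on raw pixels), names, and filenames are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Assumed mapping from recognized gesture to vocal track.
GESTURE_TO_TRACK = {"I": "vocal_i.wav", "A": "vocal_a.wav"}


def synthetic_frame(base, rng, size=(32, 32)):
    """Stand-in for a webcam photo: a noisy grayscale image around `base`."""
    return np.clip(base + 0.05 * rng.standard_normal(size), 0.0, 1.0)


def train(samples_per_gesture=30, seed=0):
    """Training phase: ~30 slightly varying images per gesture, as in the text."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for label, base in [("I", 0.2), ("A", 0.8)]:  # two toy gesture classes
        for _ in range(samples_per_gesture):
            X.append(synthetic_frame(base, rng).ravel())
            y.append(label)
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(np.array(X), y)
    return clf


def recognize(clf, frame):
    """Performance phase: classify one frame, return the track to play."""
    gesture = clf.predict(frame.ravel()[None, :])[0]
    return GESTURE_TO_TRACK[gesture]
```

In performance, `recognize` would run on each pedal-triggered frame and the returned track would be mixed over the background beat.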
Ludwig Pfeiffer, Josua Roters, Tim Rumpf