Immersions - Visualizing and sonifying how an Artificial Ear hears music

Vincent Herrmann, University of Music Karlsruhe

March 24, 2020

Abstract

Immersions is a system that lets us interact with and explore an audio processing neural network, or what Vincent calls an "artificial ear". The network was trained in an unsupervised way on various music datasets using a contrastive predictive coding method. Its inner workings can be shown in two modes - one visual, the other sonic. For the visualization, the neurons of the network are first laid out in two-dimensional space; their activations are then displayed in real time, at every moment, depending on the input. To make audible how music sounds to the artificial ear, an optimization procedure generates sounds that specifically activate certain neurons in the network.
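The sonification idea - optimizing an input signal so that it excites a chosen neuron - can be sketched in a few lines. The toy "ear" below is a hypothetical single tanh layer standing in for the real deep network, and all names and sizes are illustrative assumptions, not part of the Immersions system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the "artificial ear": one tanh layer with
# 8 neurons reading a 64-sample "audio" input. The real network is a
# deep model; this only illustrates the optimization procedure.
W = rng.standard_normal((8, 64)) * 0.1

def activations(x):
    """Neuron activations of the toy ear for input signal x."""
    return np.tanh(W @ x)

def sonify(neuron, steps=200, lr=0.5):
    """Gradient ascent on the input signal to excite one chosen neuron."""
    x = rng.standard_normal(64) * 0.01        # start from faint noise
    for _ in range(steps):
        a = np.tanh(W[neuron] @ x)
        grad = (1.0 - a**2) * W[neuron]       # d tanh(w.x)/dx
        x += lr * grad                        # push the signal toward higher activation
    return x

x_opt = sonify(neuron=3)
before = activations(np.zeros(64))[3]
after = activations(x_opt)[3]
```

In the real system the same principle applies, but the gradient is computed by backpropagation through the whole network, and the optimized signal is played back as sound.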

Bio

Vincent Herrmann caused quite a stir at last year's NeurIPS in Vancouver when demonstrating his Artificial Ear, which earned him a well-deserved "Best Demonstration" award. His background is in classical music, but in recent years he has become more and more interested in machine learning and artificial intelligence. He is currently finishing his Master's thesis at the Bosch Center for Artificial Intelligence. Before that, he received a Master's degree in piano from the University of Music Stuttgart.

Event Info

The event will take place on Tuesday, 24 March, 2020 at 7:00pm at the DKFZ Communication Center (K1+K2), Im Neuenheimer Feld 280. Drinks and snacks will be provided, courtesy of the Division of Medical Image Computing at DKFZ. Kindly help us plan ahead by registering for the event on our meetup page.