MARL: Immersive Audio & Augmented Reality Headphone Reverberation

We’re getting back into action after the long weekend with two back-to-back MARL talks! Jean-Marc Jot of DTS will be joining us on Wednesday and Thursday to talk about Immersive and Object-Based Multi-Channel Audio Formats and Augmented Reality Headphone Reverberation. Both talks will take place in Steinhardt’s 6th Floor Conference Room, at 12:30 on Wednesday and 1:00 on Thursday. See more information about Jean-Marc Jot and the respective topics below.

Jean-Marc Jot leads DTS technology R&D in audio reproduction and fidelity enhancement for consumer electronics. Previously, he led the design and development of Creative Labs’ Sound Blaster audio processing architectures, including the EAX and OpenAL technologies for 3D game audio authoring and rendering. Before relocating to the US in the late 1990s, he conducted research at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, where he designed and developed the IRCAM Spat software suite for immersive audio composition in computer music creation, performance and virtual reality. He is a recipient of the Audio Engineering Society (AES) Fellowship Award and has authored numerous patents and papers on spatial audio signal processing and coding. (For more details: sites.google.com/site/jmmjot.)

Immersive and Object-Based Multi-Channel Audio Formats:

In recent years, several audio technology companies and standardization organizations (including Dolby, Auro, DTS and MPEG) have developed new formats and tools for the creation, archiving and distribution of immersive audio content for the cinema and broadcast industries. These developments extend legacy multi-channel audio formats to support three-dimensional (with height) sound-field encoding, along with optional audio object channels accompanied by positional rendering metadata. They enable efficient content delivery to consumer devices and flexible reproduction in multiple consumer playback environments, including headphones and frontal audio projection systems. In this talk, we’ll review and illustrate the state of these developments and discuss perspectives and open issues, including virtual reality applications.
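To make the “object channels plus positional metadata” idea concrete ahead of the talk, here is a minimal Python sketch of how an object-based stream might be represented and rendered at playback time. The class names, fields, and toy constant-power stereo panner are our own illustrative choices, not the metadata model of any actual format (Dolby Atmos, DTS:X, Auro-3D and MPEG-H each define their own):

```python
# Illustrative sketch only: a mono signal plus positional metadata, rendered
# at playback time to whatever layout is available (here, toy stereo).
from dataclasses import dataclass
import math

@dataclass
class AudioObject:
    samples: list[float]  # mono source signal
    azimuth: float        # degrees; 0 = front, +90 = hard left (our convention)
    elevation: float      # degrees; 0 = ear level (ignored by this toy panner)
    gain: float = 1.0

def render_stereo(objects: list[AudioObject]) -> tuple[list[float], list[float]]:
    """Mix objects into L/R using a constant-power pan derived from azimuth.
    A real renderer would use VBAP, Ambisonics, or binaural filtering, and
    would honor elevation and distance metadata as well."""
    n = max(len(o.samples) for o in objects)
    left, right = [0.0] * n, [0.0] * n
    for o in objects:
        # Map azimuth in [-90, +90] degrees onto a pan angle in [0, pi/2].
        az = max(-90.0, min(90.0, o.azimuth))
        theta = (az + 90.0) / 180.0 * (math.pi / 2.0)
        gl, gr = math.sin(theta), math.cos(theta)  # sin^2 + cos^2 = 1: constant power
        for i, x in enumerate(o.samples):
            left[i] += o.gain * gl * x
            right[i] += o.gain * gr * x
    return left, right

# Example: a 0.5 s, 440 Hz tone placed 45 degrees to the listener's left.
fs = 48_000
tone = [math.sin(2 * math.pi * 440 * i / fs) for i in range(fs // 2)]
left, right = render_stereo([AudioObject(samples=tone, azimuth=45.0, elevation=0.0)])
```

The point of the format-level separation is visible even in this toy: the same `AudioObject` could be handed to a stereo, 7.1.4, or binaural renderer without re-authoring the content.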

Augmented Reality Headphone Reverberation:

In audio-visual augmented reality applications, computer-generated audio objects are rendered via acoustically transparent earphones to blend with the physical environment heard naturally by the viewer/listener. This requires binaural artificial reverberation processing to match local environment acoustics, so that synthetic audio objects are not readily discriminable from sounds occurring naturally or reproduced over loudspeakers. Approaches involving the measurement or calculation of binaural room impulse responses in consumer environments are limited by practical obstacles and complexity. We exploit a statistical reverberation model enabling the definition of a compact “reverberation fingerprint” for characterization of the local environment and computationally efficient data-driven reverberation rendering for multiple virtual sound sources. The method applies equally to headphone-based “audio-augmented reality” – facilitating natural-sounding, externalized virtual 3D audio reproduction of music, movie or game soundtracks, navigation guides or alerts.
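For those curious what a compact “reverberation fingerprint” might boil down to in practice, here is a hedged Python sketch: a reverberation decay time (T60) estimated from a measured impulse response via Schroeder backward integration, then used to shape an exponentially decaying noise tail. The estimator and noise-based synthesis are standard textbook choices picked for illustration (a practical system would work per frequency band and also capture reverberation energy); they are not necessarily the specific model presented in the talk:

```python
# Hedged sketch of the fingerprint idea: estimate a decay parameter from a
# measured impulse response, then synthesize a tail that matches it.
import numpy as np

def estimate_t60(ir: np.ndarray, fs: int) -> float:
    """Estimate the 60 dB decay time of an impulse response via Schroeder
    backward integration, fitting a line to the -5..-25 dB portion of the
    energy decay curve and extrapolating to -60 dB."""
    energy = np.flip(np.cumsum(np.flip(ir.astype(float) ** 2)))
    edc_db = 10.0 * np.log10(energy / energy[0] + 1e-12)  # energy decay curve
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= -5.0) & (edc_db >= -25.0)
    slope, _intercept = np.polyfit(t[mask], edc_db[mask], 1)  # dB per second
    return -60.0 / slope

def synthesize_tail(t60: float, duration: float, fs: int, seed: int = 0) -> np.ndarray:
    """Render a late-reverb tail matching the fingerprint: Gaussian noise
    shaped by the exponential envelope implied by the given T60."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration * fs)) / fs
    envelope = 10.0 ** (-3.0 * t / t60)  # amplitude reaches -60 dB at t = t60
    return rng.standard_normal(t.size) * envelope

# Round trip: synthesize a tail with T60 = 0.6 s (a stand-in for a measured
# impulse response), then recover the decay time from it.
fs = 48_000
ir = synthesize_tail(t60=0.6, duration=1.5, fs=fs)
print(f"estimated T60: {estimate_t60(ir, fs):.2f} s")
```

Because a fingerprint like this is independent of source and listener positions, one cheap measurement of the room can drive rendering for many simultaneous virtual sources, which is what makes the data-driven approach attractive on consumer headphones.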