Thesis Presentations & Defenses Live Stream

As we head into the last week of finals, the Music Tech thesis candidates are preparing for their presentations next week. We’ll be streaming the defenses in full 360° video, and you can also watch on VR devices. The presentations take place on Monday and Wednesday and cover a variety of topics, from film audio to phonograph recordings to binaural space. (Hint: keep your headphones handy for the binaural audio presentations!) Plan your viewing with the full schedule here.

The stream will be active on our YouTube channel during the scheduled presentation times.

Posted on | Posted in Uncategorized |

MARL Talk: Jacoti Lola, a Low-Latency Wi-Fi-Based Audio System

Tomorrow, 12/15, composer and producer Richard Einhorn will join MARL to talk about Jacoti Lola, a low-latency, Wi-Fi-based audio system. The lecture takes place at 1 PM in Steinhardt’s 6th-floor conference room (609).

Jacoti Lola is an assistive listening solution for classrooms, meeting rooms, and lecture halls, providing low-latency, multi-peer audio streaming over consumer-grade Wi-Fi. Echo, reverb, and noise are so common in these spaces that even people with no hearing loss can have considerable difficulty understanding speech. By wirelessly transmitting high-quality audio directly from speaker to listener, Jacoti Lola Classroom can help all listeners hear better.
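To picture what low-latency, multi-peer streaming looks like at the packet level, here’s a toy loopback sketch: sequence-numbered UDP packets carrying small PCM chunks. This is a generic illustration of the idea, not Jacoti’s actual protocol; all names and sizes here are assumptions.

```python
import socket
import struct

# Toy illustration of streaming audio chunks over UDP on a LAN - the general
# shape of low-latency Wi-Fi audio, NOT Jacoti's actual protocol. Each packet
# carries a sequence number (for loss/reorder detection) plus raw PCM bytes.

def make_packet(seq: int, pcm: bytes) -> bytes:
    """Prefix a PCM chunk with a big-endian 32-bit sequence number."""
    return struct.pack(">I", seq) + pcm

def parse_packet(packet: bytes):
    """Split a packet back into (sequence number, PCM payload)."""
    return struct.unpack(">I", packet[:4])[0], packet[4:]

# Loopback demo: a "listener" socket receives one chunk from a "transmitter".
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                      # let the OS pick a free port
rx.settimeout(5.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

chunk = bytes(128 * 2)                         # 128 silent 16-bit mono samples
tx.sendto(make_packet(7, chunk), rx.getsockname())
seq, pcm = parse_packet(rx.recvfrom(2048)[0])  # seq == 7, len(pcm) == 256
tx.close()
rx.close()
```

A real system would add timestamps, a jitter buffer, and concealment for lost packets; the sequence number is the minimum needed to notice those problems at all.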

Richard Einhorn is a composer, music producer, hearing loss advocate, and summa cum laude graduate of Columbia University in music. His oratorio with silent film, Voices of Light, has been called a “great masterpiece of modern music” and has been performed by the National Symphony and the Baltimore Symphony, and at such venues as the BAM Next Wave Festival, Disney Hall, David Geffen Hall, the National Cathedral of Washington, and the Sydney Opera House. Active as a record producer, Richard produced the Grammy-winning Bach Suites with Yo-Yo Ma and many other recordings by well-known artists. After losing much of his hearing overnight to a virus in 2010, Richard has continued to compose and has also become well known internationally as a passionate advocate for better hearing technology. He has spoken to the President’s Council of Advisors on Science and Technology and the National Academy of Sciences, and is on the Board of Trustees of the Hearing Loss Association of America.



Mirrors of Time, Moments in Time

Last weekend, Music Tech professor Tom Beyer worked with students to create a long-distance collaboration. Students performed at Steinhardt’s Loewe Theater while students at Universidad Nacional de Quilmes in Argentina streamed a concurrent performance via webcam. These performances have been pushing live performance and streaming technology since the mid-nineties. In the early years, rather than performing simultaneously, Steinhardt would fax notes back and forth between collaborating schools. Today, the performances use several HD cameras and eight channels of streaming audio to preserve the independence of each signal. Performers learn to compensate for the latency of the signal to create an entirely live, cross-continental performance. These themes of latency and audio streaming will also come up in Richard Einhorn’s MARL talk this Thursday, 12/15.
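For a sense of why performers have to compensate for latency, a back-of-the-envelope calculation helps: even at the speed of light in fiber, New York to Buenos Aires takes tens of milliseconds one way, before any buffering or network overhead. The distance and buffer sizes below are rough assumptions, not measurements of this event.

```python
# Back-of-the-envelope latency budget for a New York <-> Buenos Aires stream.
# The distance and buffer sizes are assumptions for illustration only.

def propagation_ms(distance_km: float, speed_m_per_s: float = 2.0e8) -> float:
    """One-way propagation delay over fiber (light travels ~2/3 c in glass)."""
    return distance_km * 1000 / speed_m_per_s * 1000

def buffer_ms(frames: int, sample_rate: int = 48000) -> float:
    """Latency contributed by one audio buffer of `frames` samples."""
    return frames / sample_rate * 1000

one_way = propagation_ms(8500)      # ~8500 km assumed: about 42.5 ms in fiber
buffers = 2 * buffer_ms(256)        # capture + playback buffers: about 10.7 ms
print(round(one_way + buffers, 1))  # roughly 53 ms minimum, one way
```

Real round trips are longer still (routing, codec delay, Wi-Fi hops), which is why musicians end up treating the delay as part of the ensemble.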

Because the performance sends audio from one place to another and then back to the source, feedback is a major concern. The video and audio for last Sunday’s performance required a full day of setup, and the crew had to get particularly creative with microphone placement to ensure that no unpleasant feedback occurred. Despite the complexities of the signal path, all of the afternoon’s performances sounded great. Check out the photos!



Gear Spotlight: JazzMutant Lemur

Lo and behold: French technology! Used by the likes of Justice, Daft Punk, Björk, Nine Inch Nails, and so many others, the Lemur is a multi-touch, modular controller. First brought to market by JazzMutant in 2005, the device paved the way for touch-based controllers as we know them today. Discontinued in 2010 following the rise of iPads and other consumer tablets, the Lemur is becoming a rare piece of gear, yet it offers features that most touch-based controllers still don’t have today.

Rather than two or three touch points, the Lemur supports up to ten simultaneous gestures, allowing performers to make use of all of their fingers. The Lemur software lets users fully customize their interface, bringing the freedom of modular design to a modern digital unit. Communicating with its host computer (and other Lemurs) over Ethernet via the OSC protocol means lower latency, higher throughput, and 32-bit precision: a data flow that’ll blow your MIDI interface out of the water.
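A quick sketch of what an OSC message actually looks like on the wire, following the OSC 1.0 encoding (a NUL-padded address string, a “,f” type-tag string, then a big-endian 32-bit float). The “/fader1” address and the fader value are hypothetical:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """NUL-pad to a 4-byte boundary; OSC strings always get at least one NUL."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_float_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: address, ",f" type tag, big-endian float32."""
    return (osc_pad(address.encode())
            + osc_pad(b",f")
            + struct.pack(">f", value))

msg = osc_float_message("/fader1", 0.5)  # 16 bytes total for this message
```

The resulting packet could then be sent to the controller’s host address over UDP with `socket.sendto` — because every field is 4-byte aligned and binary, parsing on the other end is cheap, which is part of why OSC beats MIDI on latency and resolution.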

One of the Lemur’s most distinctive features is Multiball, a physics-based automation system. Unexplored by other multi-touch interfaces, the Multiball function uses virtual bouncy balls connected to parameters of your choosing. The behavior of these balls can be randomized, or, with some creative scripting, they can create circuits as shown here. The X/Y objects can switch between interpolation and mass-spring behaviors, as can the faders. If you’re looking for a plug-and-play controller, all of these options might be a bit overwhelming. However, if you’re the type of performer who likes to geek out in Max/MSP or SuperCollider, setting up the JazzMutant Lemur will be a breeze.
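To get a feel for how a physics-driven controller like Multiball can animate a parameter, here’s a toy bouncing-ball simulation that maps the ball’s height onto a hypothetical filter cutoff. This is only an illustration of the concept, not the Lemur’s actual engine:

```python
# Toy version of a Multiball-style physics mapping: a ball bounces around a
# unit square and its height drives a hypothetical filter-cutoff parameter.
# Illustrative sketch only - not the Lemur's actual physics engine.

def step(pos, vel, dt=0.01, friction=0.999):
    """Advance the ball one frame, reflecting off the walls of the unit square."""
    x, y = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt
    vx, vy = vel[0] * friction, vel[1] * friction
    if not 0.0 <= x <= 1.0:
        x, vx = min(max(x, 0.0), 1.0), -vx   # bounce off a vertical wall
    if not 0.0 <= y <= 1.0:
        y, vy = min(max(y, 0.0), 1.0), -vy   # bounce off a horizontal wall
    return (x, y), (vx, vy)

def cutoff_hz(y, lo=200.0, hi=8000.0):
    """Map the ball's height onto a cutoff frequency (assumed linear mapping)."""
    return lo + y * (hi - lo)

pos, vel = (0.5, 0.9), (0.4, -1.3)
for _ in range(100):
    pos, vel = step(pos, vel)  # each frame, send cutoff_hz(pos[1]) to the synth
```

Swap the friction constant or add a spring force toward the touch point and you get the kind of interpolation vs. mass-spring behavior the Lemur exposes.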

One of the many strengths of the Lemur is its cross-platform compatibility. Software, modular synthesizers, lighting rigs and VSTs can all be controlled within this one piece of hardware, making it clear why it has spent years on the road with Thievery Corporation, M.I.A., and Justice, among others.

Check out the Lemur from the 8th floor monitor’s closet, and if you don’t have an Ethernet port, grab an Ethernet to Thunderbolt adapter from MTech’s IT department!

Or, if you’ve got an iPad, iPhone, or Android, download the Lemur app by Liine for $24.99.

Useful Links:

Four-part Lemur Tutorial

Basic Scripting Tutorial

The Lemur at MusikMesse 2010

Daft Punk & Kanye West 2008 Grammy Performance



Music Tech Artists: ‘Evans’


A few weeks ago we sat in on a session with a number of Music Technology students at The Cutting Room to see what they’ve been up to. NYU Music Composition student Jonathan Evans and his band Evans are recording their first EP under their new name, with Music Technology senior Josh Liebman as head engineer. An intern at The Cutting Room, Josh has access to a pretty incredible space that’s given the group the opportunity to try out some unique recording techniques.

Walking down the hallway into the mixing room, we passed a maze of microphones, cables, and guitar amps. To complement the band’s retro-pop style, Josh wanted to keep the band playing together during the recording while still getting a clean, modern mix. The drums were recorded in the main room, the lead guitar amp in an isolated booth, the rhythm guitar amp in the hallway, and the bass through a DI box, and nobody was allowed in or out once the group started recording.

Josh Liebman, Music Tech Senior

Both the band and the studio have deep ties to the Music Technology program. Alumnus Matt Lau is on bass, and current seniors Jake Zacharia and Torin Geller are on drums and rhythm guitar, respectively. The Cutting Room itself was founded by alumnus David Crafa in 1996 and has been offering opportunities to students from the program ever since. We’re excited to see more students getting involved with this iconic New York studio in 2017.

Jonathan Evans, Music Composition

Matt Lau, Music Tech Alumnus


Gear Spotlight: 1176/MC77 & Alumnus Andrew Roberts

In 1967, Bill Putnam and United Recording Electronics Industries (UREI) released the 1176 Peak Limiter, the only solid-state peak limiter available at the time. Putnam’s circuit design underwent several revisions and changes, which explains the variety of different 1176 “revs” available today.

Arguably the most popular revisions of the 1176 are revs C, D, and E. These revisions introduced what UREI (now Universal Audio) calls the Low Noise circuit; hence the “1176LN” title given to the compressors of this era. Since 2000, Universal Audio has sold an 1176 reissue based on the Rev C/D/E circuit designs.

The 1176 is a notably versatile and “bright” compressor, capable of both a mild, subtle sound (great for first-to-bat duty on a vocal, kick, or snare) and highly aggressive, energetic compression. In addition to the standard attack, release, and ratio controls, the 1176 can also be used in what engineers call “British Mode” or “All Buttons In,” in which the four ratio buttons on the device’s faceplate are pushed in simultaneously, in theory engaging every ratio setting at once. This technique yields a compression ratio somewhere between 12:1 and 20:1, but it also shifts the circuit’s bias points so that the 1176 becomes even more aggressive.
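The ratio arithmetic is easy to sketch: above the threshold, the output rises 1 dB for every `ratio` dB of input. Note that a real 1176 has no threshold knob (its threshold is fixed internally, and you drive the input gain into it), so the threshold value below is purely illustrative:

```python
# Static compression curve: above the threshold, output rises 1 dB for every
# `ratio` dB of input. The -20 dB threshold here is an assumption for
# illustration - the 1176's actual threshold is fixed inside the circuit.

def compressed_db(input_db: float, ratio: float, threshold_db: float = -20.0) -> float:
    """Output level in dB for a given input level and compression ratio."""
    if input_db <= threshold_db:
        return input_db                    # below threshold: unity gain
    return threshold_db + (input_db - threshold_db) / ratio

# A signal 12 dB over the threshold at the 1176's four ratio settings:
for ratio in (4, 8, 12, 20):
    out = compressed_db(-8.0, ratio)
    print(f"{ratio:>2}:1 -> {out:+.1f} dB")  # e.g. 4:1 -> -17.0 dB
```

“All buttons in” lands between the 12:1 and 20:1 curves, but as noted above its character comes as much from the shifted bias points as from the ratio itself, which a static curve like this can’t capture.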

Expanding on these classic characteristics, Purple Audio designed the MC76, its own 1176 revision, in 1997. Founded that same year by Music Tech alumnus Andrew Roberts, Purple Audio has since updated the design to the MC77, one of the most faithful 1176 reissues on the market. These pieces have a well-earned reputation for rugged build quality. Check out the links below for more information, and visit Studio A and Studio D to hear our two MC77s and the 1176 for yourself.

Here are the standard features of any 1176 revision, clone, or DIY build:

  • Variable attack time (between 20µs-800µs)

  • Variable release time (between 50ms-1.1s)

  • Transformer-balanced inputs and outputs

  • Compression ratios of 4:1, 8:1, 12:1, and 20:1 (additionally, somewhere between 12:1 and 20:1 when using “all-buttons-in” mode)

Purple Audio built upon the original 1176 Rev D & Rev E Low Noise designs, incorporating all of the original core features while making these significant additions:

  • Easily accessible and convenient stereo-link function

  • True Bypass (meaning that, when bypassed, signal is dumped directly from the input to the output of the device – signal never passes through the device’s circuitry.)

  • Sidechain/key-input, an extremely useful and common compressor feature.

Audio engineers and electrical engineers alike have taken to online forums to write extensively about the advantages and disadvantages of using one or the other of these units. The full history of the 1176 is linked below, as are the schematics and more information about the MC77.

Useful Links:

Universal Audio’s 1176 Overview

1176 Hardware Revision History

Universal Audio History

UA: All Buttons In Mode

Purple Audio MC77

Purple Audio MC77 Manual & Schematics




Ear to the Earth presents: Thin Air

Tomorrow evening Music Technology’s own Paul Geluso is performing at Whitebox Sound Lab alongside Lars Graugaard. The performance focuses on the concept of 3D sound objects to create a unique in-air listening experience.

As Paul Geluso describes it, “What is the 3D Sound Object? What is the sound? What are we doing? It’s fantastic! Special. An adventure in listening. Whereas traditional sound synthesis is usually done electronically then projected through loudspeakers, the 3D Sound Object purposely causes electronic sound sources to be summed, subtracted, and filtered in the air. To my surprise, the in-air processing technique creates a complex and evolving physical sound sculpture that can be experienced from several perspectives. To my knowledge, it’s one of a kind. I can’t predict how everyone will perceive the sound, but everyone who has heard it is excited.”

Lars Graugaard adds, “Working with Paul Geluso’s 3D Sound Object in Thin Air isn’t like any other sound amplification technique. In fact, it creates an entirely different notion of sound amplification – what it is, what it can be. It’s a rich, immersive impact on the senses. And the unparalleled opportunities it offers give a whole new meaning to the conception of a ‘sound.’ It wraps you in a three-dimensional space.”
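The “summed, subtracted, and filtered in the air” idea comes down to acoustic interference: two loudspeakers playing the same tone reinforce or cancel depending on the path-length difference to the listener. A rough sketch of that physics (all values assumed, and not a description of Geluso’s actual setup):

```python
import math

# Two loudspeakers playing the same sine interfere in the air: the difference
# in path length to the listener sets a phase offset, and the waves sum,
# cancel, or comb-filter accordingly. Values here are assumptions.

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

def summed_amplitude(freq_hz: float, path_a_m: float, path_b_m: float) -> float:
    """Peak amplitude of two equal unit-amplitude sines summed at the listener."""
    delay = (path_b_m - path_a_m) / SPEED_OF_SOUND  # extra travel time
    phase = 2 * math.pi * freq_hz * delay           # phase offset in radians
    # sin(wt) + sin(wt + phase) has peak amplitude 2*|cos(phase/2)|
    return 2 * abs(math.cos(phase / 2))

# At 343 Hz the wavelength is 1 m, so a half-wavelength path difference
# cancels and a full-wavelength difference reinforces:
print(summed_amplitude(343.0, 1.0, 1.5))  # near 0 (cancellation)
print(summed_amplitude(343.0, 1.0, 2.0))  # near 2 (reinforcement)
```

Move through the room and the path differences change continuously, which is why such a “sound object” reads differently from every listening perspective.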

The concert is Thursday, December 1st at Whitebox Sound Lab, 329 Broome Street. Tickets are $10 and can be purchased online or at the door. More info at Ear to the Earth.


Gear Spotlight: ROLI Seaboard Rise

This week’s gear spotlight focuses on the ROLI Seaboard Rise. ROLI, a London-based startup company, is making waves (literally) with their keyboards. Rather than keys, they have “keywaves,” pieces of flexible silicone that can sense even the most subtle nuances of a player’s inflections, and allow for parameter control beyond that of any typical knobs-and-keys controller.

Similar to the 3D touch of the QuNeo we discussed last week, the Seaboard uses what ROLI calls “5D Touch.” Two pieces of software are needed to access these 5D functions. They aren’t required for using the Rise as a simple MIDI controller, but to use the device to its full potential, you should find one of the three Music Technology computers that have them installed: the Mac in Studio E, the mastering laptop, and the Dolan Studios Mac. The two applications are Equator, ROLI’s in-house synth for the Rise, and ROLI Dashboard for Rise, a configuration utility.

You can also use the Rise with NOISE, a ROLI-developed app that’s free for your iPhone or iPad. (In fact, you don’t even need a Rise to use the app!)

Using the ROLI-developed computer applications, you can take full advantage of 5D Touch, meaning complete control over the following parameters:

1. Strike

The velocity (striking force) and curve response of the Rise 25.

2. Glide

The x-axis pitch-bend response: horizontal movements from side to side along the keywaves or the ribbon controllers. Controls horizontal vibrato and pitch-bend sensitivity.

3. Slide

The y-axis response: vertical movements up and down the keywaves, enabling or disabling things like fade-ins, filter shifts, etc.

4. Press

The continuous pressure sensitivity of the Rise 25 while a note is sustained.

5. Lift

Lets the user control the release velocity (aftertouch) as a finger lifts off the keywave.
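These five dimensions map naturally onto MIDI messages, roughly along the lines of the MPE convention (one channel per note). The sketch below is an illustrative encoding with an assumed 48-semitone bend range, not ROLI’s exact implementation:

```python
# Sketch of encoding the five dimensions as MIDI messages, loosely following
# the MPE convention (one member channel per note). Illustrative only - not
# ROLI's exact implementation; the 48-semitone bend range is an assumption.

def note_on(channel, note, velocity):        # Strike -> note-on velocity
    return bytes([0x90 | channel, note, velocity])

def pitch_bend(channel, semitones, bend_range=48.0):  # Glide -> per-note bend
    value = int(8192 + semitones / bend_range * 8192)
    value = max(0, min(16383, value))        # clamp to the 14-bit range
    return bytes([0xE0 | channel, value & 0x7F, value >> 7])

def slide(channel, amount):                  # Slide -> CC74, 0-127
    return bytes([0xB0 | channel, 74, amount])

def press(channel, amount):                  # Press -> channel pressure
    return bytes([0xD0 | channel, amount])

def note_off(channel, note, velocity):       # Lift -> release velocity
    return bytes([0x80 | channel, note, velocity])

# Middle C struck hard on member channel 1, bent up a whole tone, pressed,
# then released gently:
events = [note_on(1, 60, 110), pitch_bend(1, 2.0),
          press(1, 64), note_off(1, 60, 45)]
```

Putting each note on its own channel is what lets Glide and Press act per finger instead of bending every held note at once, which is the whole point of a 5D controller.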

Useful Links:

Seaboard Support (Installation, Technique, Sounds, etc)

Seaboard Performance Technique

Interpreting Seaboard MIDI Data



MARL: Immersive Audio & Augmented Reality Headphone Reverberation

We’re getting back into action after the long weekend with two back-to-back MARL talks! Jean-Marc Jot of DTS will join us on Wednesday and Thursday to talk about immersive and object-based multi-channel audio formats and augmented-reality headphone reverberation. Both talks take place in Steinhardt’s 6th Floor Conference Room, at 12:30 PM on Wednesday and 1 PM on Thursday. See more information about Jean-Marc Jot and the respective topics below.

Jean-Marc Jot leads DTS technology R&D in audio reproduction and fidelity enhancement for consumer electronics. Previously, he led the design and development of Creative Labs’ SoundBlaster audio processing architectures, including the EAX and OpenAL technologies for 3D game audio authoring and rendering. Before relocating to the US in the late ’90s, he conducted research at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, where he designed and developed the IRCAM Spat software suite for immersive audio composition in computer music creation, performance, and virtual reality. He is a recipient of the Audio Engineering Society (AES) Fellowship Award and has authored numerous patents and papers on spatial audio signal processing and coding.

Immersive and Object-Based Multi-Channel Audio Formats:

In recent years, several audio technology companies and standardization organizations (including Dolby, Auro, DTS, MPEG) have developed new formats and tools for the creation, archiving and distribution of immersive audio content in the cinema or broadcast industries. These developments extend legacy multi-channel audio formats to support three-dimensional (with height) sound field encoding, along with optional audio object channels accompanied with positional rendering metadata. They enable efficient content delivery to consumer devices and flexible reproduction in multiple consumer playback environments, including headphones and frontal audio projection systems. In this talk, we’ll review and illustrate the state of these developments and discuss perspectives and pending issues, including virtual reality applications.
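The core idea of an object channel plus positional rendering metadata can be sketched in a few lines: the renderer, not the mix, decides how each object lands on whatever speaker layout is present at playback. Here’s a toy stereo renderer using a constant-power pan law (an illustration of the concept, not any particular standard’s algorithm):

```python
import math

# An "audio object" pairs a mono signal with positional metadata; the renderer
# maps it onto the playback layout at reproduction time. This toy renderer
# targets plain stereo with constant-power panning - illustrative only, not
# the rendering algorithm of any specific format.

def render_stereo(samples, azimuth_deg):
    """Pan a mono object to L/R. Azimuth -90 = hard left, +90 = hard right."""
    pan = (azimuth_deg + 90.0) / 180.0   # 0..1 across the stereo field
    theta = pan * math.pi / 2            # constant-power pan law
    gain_l, gain_r = math.cos(theta), math.sin(theta)
    return ([s * gain_l for s in samples],
            [s * gain_r for s in samples])

left, right = render_stereo([1.0, 0.5, -0.25], azimuth_deg=0.0)
# A centered object lands at equal level in both channels (cos 45 deg each).
```

In a real object-based format the same object and metadata would instead be rendered to 5.1, a height layout, or binaural headphones, which is exactly the flexibility these formats are after.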

Augmented Reality Headphone Reverberation:

In audio-visual augmented reality applications, computer-generated audio objects are rendered via acoustically transparent earphones to blend with the physical environment heard naturally by the viewer/listener. This requires binaural artificial reverberation processing to match local environment acoustics, so that synthetic audio objects are not readily discriminable from sounds occurring naturally or reproduced over loudspeakers. Approaches involving the measurement or calculation of binaural room impulse responses in consumer environments are limited by practical obstacles and complexity. We exploit a statistical reverberation model enabling the definition of a compact “reverberation fingerprint” for characterization of the local environment and computationally efficient data-driven reverberation rendering for multiple virtual sound sources. The method applies equally to headphone-based “audio-augmented reality” – facilitating natural-sounding, externalized virtual 3D audio reproduction of music, movie or game soundtracks, navigation guides or alerts.
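As a drastically simplified illustration of the “reverberation fingerprint” idea, the sketch below characterizes a room by just a decay time (RT60) and a tail level, then synthesizes a matching exponentially decaying noise tail. The method described above is considerably more sophisticated; this only shows why a few statistics can stand in for a full measured impulse response:

```python
import math
import random

# Instead of measuring a full binaural room impulse response, characterize the
# room with a couple of statistics - here just a decay time (RT60) and a tail
# level - and synthesize a matching late-reverb tail. A drastic simplification
# of the reverberation-fingerprint approach; all values are assumptions.

def reverb_tail(rt60_s, sample_rate=48000, length_s=1.0, level=0.3, seed=1):
    """Gaussian noise shaped so its level falls 60 dB over rt60_s seconds."""
    rng = random.Random(seed)
    decay = math.log(10 ** (-60 / 20)) / rt60_s  # amplitude decay rate per second
    n = int(length_s * sample_rate)
    return [level * rng.gauss(0, 1) * math.exp(decay * t / sample_rate)
            for t in range(n)]

tail = reverb_tail(rt60_s=0.5)
# The envelope at t = rt60 is 1000x (60 dB) below its value at t = 0.
```

A fingerprint like this is tiny compared to a measured impulse response, which is what makes per-room matching practical on a consumer device.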


Adobe Max Sneaks: Photoshop for Audio

Last month at the Adobe Max Creativity Conference, developers introduced various new “sneaks” that they’ve been working on. While the software was focused mostly on film and photography, two of the utilities may be of interest to music technologists. “VoCo” is a new tool being nicknamed ‘Photoshop for Audio.’ Users can import speech recordings and rearrange the order of the text. Things get spookier after establishing a linguistic profile for the speaker, when one can type completely different phrases from the original dialogue and generate a new recording. This opens up a whole new level of correction for film and voiceover work.

Filmmakers will be impressed with Adobe’s other new tool, “Syncmaster,” which takes music analysis to a new level. By splitting imported music into three bands, Syncmaster automatically detects the most significant sections of a song. From there, it establishes visual cue points that editors can use as a map for positioning video footage. Editors can now sync clips to music in seconds without even having to scrub through the footage.
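The “split into three bands and watch the energy” idea can be sketched with a naive per-frame DFT: sudden jumps in any band’s energy mark candidate cue points. The band edges below are assumptions, and this is only an illustration of the concept, not Adobe’s algorithm:

```python
import math

# A crude take on "split the music into three bands and watch the energy":
# a naive DFT per frame, with bin energies summed into low/mid/high bands.
# Jumps in any band's frame-to-frame energy mark candidate cue points.
# Band edges are assumptions - a sketch of the idea, not Adobe's algorithm.

def band_energies(frame, sample_rate=8000, edges=(250.0, 2000.0)):
    """Return (low, mid, high) spectral energy for one frame of samples."""
    n = len(frame)
    bands = [0.0, 0.0, 0.0]
    for k in range(1, n // 2):
        freq = k * sample_rate / n
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        band = 0 if freq < edges[0] else (1 if freq < edges[1] else 2)
        bands[band] += re * re + im * im
    return tuple(bands)

# A 93.75 Hz sine (an exact DFT bin at this frame size) lands in the low band:
frame = [math.sin(2 * math.pi * 93.75 * t / 8000) for t in range(256)]
low, mid, high = band_energies(frame)
```

A production tool would use an FFT and overlapping windows, but the detection logic is the same: compare each band’s energy trajectory against its recent history and flag the outliers.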

For more info on these projects and the other Adobe ‘Sneaks,’ visit the Adobe blog.
