Mirrors of Time, Moments in Time

Last weekend, Music Tech professor Tom Beyer worked with students to create a long-distance collaboration. Students performed at Steinhardt’s Loewe Theater while students at Universidad Nacional de Quilmes, Argentina streamed a concurrent performance via webcam. These performances have been pushing live performance and streaming technology since the mid-nineties. In the early years, rather than giving a simultaneous performance, Steinhardt would fax notes back and forth between collaborating schools. Today, the performances use several HD cameras and 8 channels of streaming audio to preserve the independence of each signal. Performers learn to compensate for the latency of the signal to create an entirely live, cross-continental performance. It’s worth mentioning that these themes of latency and audio streaming will be discussed in Richard Einhorn’s MARL talk this Thursday, 12/15.

Because the performance sends audio from one place to another and then back to the source, feedback is a major concern. The video and audio for last Sunday’s performance required a full day of setup, where the crew had to get particularly creative with microphone placement to ensure that no unpleasant feedback occurred. Despite the complexities of the signal path, all of the afternoon’s performances sounded great. Check out the photos!


Posted on | Posted in Uncategorized |

Gear Spotlight: JazzMutant Lemur

Lo and behold: French technology! Used by the likes of Justice, Daft Punk, Björk, Nine Inch Nails, and so many others, the Lemur is a multi-touch, modular controller. First brought onto the market by JazzMutant in 2005, the device paved the way for touch-based controllers as we know them today.  Unfortunately discontinued in 2010 following the rise of iPads and other consumer tablets, the Lemur is starting to become a rare piece of gear with features that most touch-based controllers still don’t have today.

Rather than two or three touch points, the Lemur supports up to 10 gestures at a time, allowing performers to make use of all of their fingers. The Lemur software allows users to fully customize their interface, bringing the freedom of modular to a modern digital unit. Communicating with its host computer (and other Lemurs) over Ethernet using the OSC protocol means lower latency, higher bandwidth, and 32-bit precision: a data flow that’ll blow your MIDI interface out of the water.
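That OSC traffic is ordinary network data, and the wire format is simple enough to sketch with Python’s standard library. The fader address below is hypothetical (Lemur addresses depend on how you name the objects in your own interface); this just shows how a single-float OSC message is packed.

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address string,
    padded type-tag string, then a big-endian 32-bit float."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Hypothetical Lemur fader object sending a normalized position
msg = osc_message("/Fader1/x", 0.75)
```

Sending `msg` over UDP to the Lemur’s IP and port would be a complete exchange; the 32-bit float payload is where the extra precision over 7-bit MIDI controller values comes from.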

One of the Lemur’s most distinctive features is Multiball, a physics-based automation system. Unexplored by other multi-touch interfaces, the Multiball function uses virtual bouncy balls connected to parameters of your choosing. The behavior of these balls can be randomized, or, with some creative scripting, they can create circuits as shown here. The X/Y pads can switch between interpolation and mass-spring behaviors, as can the faders. If you’re looking for a plug-and-play controller, all of these options might be a bit overwhelming. However, if you’re the type of performer who likes to geek out in Max/MSP or SuperCollider, setting up the JazzMutant Lemur will be a breeze.
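To picture what Multiball-style automation does, here is a toy sketch (not the Lemur’s actual implementation): a ball bounces inside a unit square with a little friction, and its x/y coordinates could drive any two parameters you map them to.

```python
def step(pos, vel, dt=0.01, friction=0.999):
    """Advance a bouncing ball one step inside the unit square [0,1] x [0,1]."""
    pos = [p + v * dt for p, v in zip(pos, vel)]
    for i in range(2):
        if pos[i] < 0.0 or pos[i] > 1.0:          # hit a wall:
            pos[i] = min(max(pos[i], 0.0), 1.0)   # clamp back inside
            vel[i] = -vel[i]                      # reflect the velocity
    vel = [v * friction for v in vel]             # gradual damping
    return pos, vel

pos, vel = [0.5, 0.5], [3.0, 1.7]
for _ in range(1000):
    pos, vel = step(pos, vel)
# pos[0] and pos[1] could now drive, say, filter cutoff and resonance
```

The appeal of the physics model is that a single flick produces an evolving, musical modulation curve instead of a static value.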

One of the many strengths of the Lemur is its cross-platform compatibility. Software, modular synthesizers, lighting rigs and VSTs can all be controlled within this one piece of hardware, making it clear why it has spent years on the road with Thievery Corporation, M.I.A., and Justice, among others.

Check out the Lemur from the 8th floor monitor’s closet, and if you don’t have an Ethernet port, grab an Ethernet to Thunderbolt adapter from MTech’s IT department!

Or, if you’ve got an iPad, iPhone, or Android, download the Lemur app by Liine for $24.99.

Useful Links:

Four-part Lemur Tutorial

Basic Scripting Tutorial

The Lemur at MusikMesse 2010

Daft Punk & Kanye West 2008 Grammy Performance



Music Tech Artists: ‘Evans’


A few weeks ago we sat in on a session with a number of Music Technology students at The Cutting Room to see what they’ve been up to. NYU Music Composition student Jonathan Evans and his band Evans are recording their first EP under their new name, with Music Technology senior Josh Liebman as head engineer. An intern at The Cutting Room, Josh has access to a pretty incredible space that’s given the group the opportunity to try out some unique recording techniques.

Walking down the hallway into the mixing room, we found a maze of microphones, cables, and guitar amps. To complement the band’s retro-pop style, Josh wanted to keep the band together during the recording while still getting a clean, modern mix. The drums were recorded in the main room and the lead guitar amp in an isolated booth; with the rhythm guitar amp recorded in the hallway and the bass going into a DI box, nobody was allowed in or out once the group started recording.

Josh Liebman, Music Tech Senior

Both the band and the studio have a deep affiliation with the Music Technology program. Alumnus Matt Lau is on bass, with current seniors Jake Zacharia and Torin Geller on drums and rhythm guitar, respectively. The Cutting Room itself was founded by alumnus David Crafa in 1996 and has been offering opportunities to students from the program ever since. We’re excited to see more students getting involved with this iconic New York studio in 2017.

Jonathan Evans, Music Composition

Matt Lau, Music Tech Alumnus


Gear Spotlight: 1176/MC77 & Alumnus Andrew Roberts

In 1967, Bill Putnam and United Recording Electronics Industries (UREI) released the 1176 Peak Limiter, the only solid-state peak limiter available at the time. Putnam’s circuit design underwent several revisions and changes, which explains the variety of different 1176 “revs” available today.

Arguably the most popular revisions of the 1176 are revs C, D, and E. These revisions marked the development of what UREI (now Universal Audio) calls the Low Noise circuit, hence the “1176LN” title given to the compressors of this era. Since 2000, Universal Audio has sold an 1176 reissue based on the Rev C/D/E circuit designs.

The 1176 is a notably versatile and “bright” compressor, capable of both a mild, subtle sound (great for using first-to-bat on a vocal, kick, or snare) and highly aggressive, energetic compression. In addition to the standard attack, release, and ratio functions, the 1176 can also be used in what engineers call “British Mode” or “All Buttons In,” in which the four Ratio buttons on the device’s faceplate are pushed in at once, in theory engaging every ratio setting simultaneously. This technique yields a compression ratio somewhere between 12:1 and 20:1, but also changes the circuit’s bias points such that the 1176 becomes even more aggressive.

Expanding on these classic characteristics, Purple Audio designed the MC76, its own 1176 revision, in 1997. Founded that same year by Music Tech alumnus Andrew Roberts, Purple Audio has since updated the design to the MC77, one of the most faithful 1176 reissues on the market. These units have a well-earned reputation for rugged build quality. Check out the links below for more information, and visit Studio A and Studio D to hear our two MC77s and the 1176 for yourself.

Here are the standard features of any 1176 revision, clone, or DIY build:

  • Variable attack time (between 20µs-800µs)

  • Variable release time (between 50ms-1.1s)

  • Transformer-balanced inputs and outputs

  • Compression ratios of 4:1, 8:1, 12:1, and 20:1 (additionally, somewhere between 12:1 and 20:1 when using “all-buttons-in” mode)
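The ratio settings above are easy to reason about with the ideal static compression curve. This ignores everything that makes the 1176 an 1176 (the FET gain element, program-dependent timing, all-buttons-in bias shifts); it is just the textbook arithmetic, with a hypothetical threshold for illustration.

```python
def compressed_level(in_db: float, threshold_db: float, ratio: float) -> float:
    """Ideal static compression: below threshold the signal passes unchanged;
    above it, each dB of input yields only 1/ratio dB of output."""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

# Hypothetical threshold of -20 dB: an input 12 dB over the threshold
out = compressed_level(-8.0, -20.0, 4.0)   # 4:1 ratio leaves only 3 dB over, i.e. -17.0
```

At 20:1 the same input would come out at -19.4 dB, which is why the higher ratios are described as limiting rather than compression.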

Purple Audio built upon the original 1176 Rev D & Rev E Low Noise designs, incorporating all of the original core features while making these significant additions:

  • Easily accessible and convenient stereo-link function

  • True Bypass (meaning that, when bypassed, signal is dumped directly from the input to the output of the device – signal never passes through the device’s circuitry.)

  • Sidechain/key-input, an extremely useful and common compressor feature.

Audio engineers and electrical engineers alike have taken to online forums to write extensively about the advantages and disadvantages of using one or the other of these units. The full history of the 1176 is linked below, as are the schematics and more information about the MC77.

Useful Links:

Universal Audio’s 1176 Overview

1176 Hardware Revision History

Universal Audio History

UA: All Buttons In Mode

Purple Audio MC77

Purple Audio MC77 Manual & Schematics




Ear to the Earth presents: Thin Air

Tomorrow evening Music Technology’s own Paul Geluso is performing at Whitebox Sound Lab alongside Lars Graugaard. The performance focuses on the concept of 3D sound objects to create a unique in-air listening experience.

As Paul Geluso describes it, “What is the 3D Sound Object? What is the sound? What are we doing? It’s fantastic! Special. An adventure in listening. Whereas traditional sound synthesis is usually done electronically then projected through loudspeakers, the 3D Sound Object purposely causes electronic sound sources to be summed, subtracted, and filtered in the air. To my surprise, the in-air processing technique creates a complex and evolving physical sound sculpture that can be experienced from several perspectives. To my knowledge, it’s one of a kind. I can’t predict how everyone will perceive the sound, but everyone who has heard it is excited.”

Lars Graugaard adds, “Working with Paul Geluso’s 3D Sound Object in Thin Air isn’t like any other sound amplification technique. In fact, it creates an entirely different notion of sound amplification: what it is, what it can be. It’s a rich, immersive impact on the senses. And the unparalleled opportunities it offers give a whole new meaning to the conception of a ‘sound.’ It wraps you in a three-dimensional space.”

The concert is Thursday, December 1st at Whitebox Sound Lab, 329 Broome Street. Tickets are $10 and can be purchased online or at the door. More info at Ear to the Earth.


Gear Spotlight: ROLI Seaboard Rise

This week’s gear spotlight focuses on the ROLI Seaboard Rise. ROLI, a London-based startup company, is making waves (literally) with their keyboards. Rather than keys, they have “keywaves,” pieces of flexible silicone that can sense even the most subtle nuances of a player’s inflections, and allow for parameter control beyond that of any typical knobs-and-keys controller.

Similar to the 3D touch of the QuNeo we discussed last week, the Seaboard uses what ROLI calls “5D Touch.” Two pieces of software are necessary to access these 5D functions. Although they are not required for using the Rise as a simple MIDI controller, to really use the device to its full potential you should find one of the three Music Technology computers that have them installed: the Mac in Studio E, the mastering laptop, and the Dolan Studios Mac. The two applications are Equator, ROLI’s in-house synth for the Rise, and ROLI Dashboard for Rise, a configuration utility.

You can also use the Rise with a ROLI-developed application called NOISE, which is free for your iPhone or iPad. (In fact, you don’t even need a Rise to use the app!)

Using the ROLI-developed computer applications, you can take full advantage of 5D Touch, meaning complete control over the following parameters:

1. Strike

The velocity response (force and curve) of the Rise 25.

2. Glide

The x-axis pitch-bend response: horizontal movements from side to side along the keywaves or the ribbon controllers control vibrato and pitch-bend sensitivity.

3. Slide

The y-axis response: vertical movements up and down the keywaves enable or disable things like fade-ins, filter shifts, etc.

4. Press

The continuous sustain (pressure) sensitivity of the Rise 25.

5. Lift

The release velocity and aftertouch response as a finger lifts off a keywave.
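The five dimensions above map closely onto per-note MIDI messages: the Rise speaks MPE-style multichannel MIDI, where each note gets its own channel so its bends and pressure stay independent. The channel and controller assignments below follow common MPE conventions (pitch bend for Glide, CC 74 for Slide, channel pressure for Press) and are illustrative rather than a spec for the Rise’s exact output.

```python
def note_on(channel, note, velocity):             # Strike: attack velocity byte
    return bytes([0x90 | channel, note, velocity])

def pitch_bend(channel, value):                   # Glide: 14-bit bend, 8192 = center
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

def slide_cc(channel, value):                     # Slide: commonly CC 74 in MPE
    return bytes([0xB0 | channel, 74, value])

def channel_pressure(channel, value):             # Press: continuous aftertouch
    return bytes([0xD0 | channel, value])

def note_off(channel, note, release_velocity):    # Lift: release velocity byte
    return bytes([0x80 | channel, note, release_velocity])

msg = note_on(1, 60, 100)   # middle C on channel 2, velocity 100
```

Seen this way, “5D Touch” is less a new protocol than a controller expressive enough to exercise parts of MIDI (release velocity, per-channel bend and pressure) that ordinary keyboards leave untouched.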

Useful Links:

Seaboard Support (Installation, Technique, Sounds, etc)

Seaboard Performance Technique

Interpreting Seaboard MIDI Data



MARL: Immersive Audio & Augmented Reality Headphone Reverberation

We’re getting back into action after the long weekend with two back-to-back MARL talks! Jean-Marc Jot of DTS will be joining us on Wednesday and Thursday to talk about Immersive and Object-Based Multi-Channel Audio Formats and Augmented Reality Headphone Reverberation. Both talks will take place in Steinhardt’s 6th Floor Conference Room, at 12:30 on Wednesday and 1 o’clock on Thursday. See more information about Jean-Marc Jot and the respective topics below.

Jean-Marc Jot leads DTS technology R&D in audio reproduction and fidelity enhancement for consumer electronics. Previously, he led the design and development of Creative Labs’ SoundBlaster audio processing and architectures, including the EAX and OpenAL technologies for game 3D audio authoring and rendering. Before relocating to the US in the late ’90s, he conducted research at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, where he designed and developed the IRCAM Spat software suite for immersive audio composition in computer music creation, performance, and virtual reality. He is a recipient of the Audio Engineering Society (AES) Fellowship Award and has authored numerous patents and papers on spatial audio signal processing and coding. (For more details: sites.google.com/site/jmmjot.)

Immersive and Object-Based Multi-Channel Audio Formats:

In recent years, several audio technology companies and standardization organizations (including Dolby, Auro, DTS, MPEG) have developed new formats and tools for the creation, archiving, and distribution of immersive audio content in the cinema and broadcast industries. These developments extend legacy multi-channel audio formats to support three-dimensional (with height) sound field encoding, along with optional audio object channels accompanied by positional rendering metadata. They enable efficient content delivery to consumer devices and flexible reproduction in multiple consumer playback environments, including headphones and frontal audio projection systems. In this talk, we’ll review and illustrate the state of these developments and discuss perspectives and pending issues, including virtual reality applications.
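The abstract doesn’t define a concrete data model, but the idea of “audio object channels accompanied by positional rendering metadata” can be pictured as a record like the following. The field names here are hypothetical, not taken from any of the named formats; the point is that the audio stays a plain mono channel while position travels alongside it, to be rendered for whatever speaker layout the playback device has.

```python
from dataclasses import dataclass, field

@dataclass
class AudioObject:
    """Illustrative sketch of an object channel: audio plus rendering metadata."""
    name: str
    samples: list = field(default_factory=list)  # the object's mono audio
    azimuth_deg: float = 0.0     # horizontal position around the listener
    elevation_deg: float = 0.0   # height: the "3D (with height)" extension
    distance_m: float = 1.0

obj = AudioObject("helicopter", azimuth_deg=-45.0, elevation_deg=30.0, distance_m=10.0)
```

A channel-based mix bakes the speaker layout into the deliverable; carrying objects like this instead is what lets one master render sensibly to a 7.1.4 room, a soundbar, or binaural headphones.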

Augmented Reality Headphone Reverberation:

In audio-visual augmented reality applications, computer-generated audio objects are rendered via acoustically transparent earphones to blend with the physical environment heard naturally by the viewer/listener. This requires binaural artificial reverberation processing to match local environment acoustics, so that synthetic audio objects are not readily discriminable from sounds occurring naturally or reproduced over loudspeakers. Approaches involving the measurement or calculation of binaural room impulse responses in consumer environments are limited by practical obstacles and complexity. We exploit a statistical reverberation model enabling the definition of a compact “reverberation fingerprint” for characterization of the local environment and computationally efficient data-driven reverberation rendering for multiple virtual sound sources. The method applies equally to headphone-based “audio-augmented reality” – facilitating natural-sounding, externalized virtual 3D audio reproduction of music, movie or game soundtracks, navigation guides or alerts.
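The abstract doesn’t specify what the “reverberation fingerprint” contains. As a toy illustration of the statistical approach (as opposed to measuring full binaural room impulse responses), here is a sketch that reduces a room to a single decay time (RT60) and synthesizes a late-reverb tail from it: noise shaped by an exponential envelope whose rate produces a 60 dB drop over the RT60.

```python
import random

def reverb_tail(rt60_s: float, sample_rate: int = 48000,
                length_s: float = 1.0, seed: int = 0) -> list:
    """Toy late-reverb tail: white noise under an exponential decay
    calibrated so energy falls 60 dB over rt60_s seconds."""
    rng = random.Random(seed)
    # 60 dB decay means an amplitude factor of 10**-3 after rt60_s seconds
    decay_per_sample = 10 ** (-3.0 / (rt60_s * sample_rate))
    env, out = 1.0, []
    for _ in range(int(length_s * sample_rate)):
        out.append(env * rng.uniform(-1.0, 1.0))
        env *= decay_per_sample
    return out

tail = reverb_tail(rt60_s=0.5)   # a fairly dry room
```

A real fingerprint would presumably carry more than one number (decay per frequency band, early-reflection level, and so on), but the appeal is the same: a few measured parameters drive cheap synthesis for many virtual sources, instead of convolving each source with a measured impulse response.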


Adobe Max Sneaks: Photoshop for Audio

Last month at the Adobe Max Creativity Conference, developers introduced various new “sneaks” that they’ve been working on. While the demos focused mostly on film and photography, two of the utilities may be of interest to music technologists. “VoCo” is a new tool being nicknamed ‘Photoshop for Audio.’ Users can import speech recordings and rearrange the order of the text. Things get spookier after establishing a linguistic profile for the speaker, when one can type completely different phrases from the original dialogue and generate a new recording. This opens up a whole new level of correction for film and voiceover work. Filmmakers will be impressed with Adobe’s other new tool, “Syncmaster,” which takes music analysis to a new level. By splitting imported music into three bands, Syncmaster automatically detects the most significant sections of a song. From there, it establishes visual cue points that editors can use as a map for positioning video footage. Editors can now sync clips to music in seconds without even having to scrub through the footage.

For more info on these projects and the other Adobe ‘Sneaks,’ visit the Adobe blog.


Gear Spotlight: Keith McMillen Instruments QuNeo

The KMI QuNeo is a 3D multi-touch pad MIDI controller, similar to pad controllers by Akai and Novation. However, the QuNeo is unique in several ways. It offers three different types of trigger pads: some look like faders, some like rotary controls, and some like traditional trigger pads (think Akai MPC). Each of these triggers offers comprehensive, near-limitless mapping options. The rotary controllers can function as infinite rotary controls (think Moog Source) or as plain old potentiometers with maximum and minimum values. The faders work more or less as expected, though the bottom-center fader can perform width control when mapped to the right parameter. Perhaps the QuNeo’s greatest appeal lies in the ability to map all four corners of each of the sixteen traditional trigger pads. This allows users to program four different notes on each pad, with room for portamento and chordal playing depending on how you choose to map the QuNeo in its downloadable editing software.

Pro Tips:

  • Run the installer and download the editor before making too much of an attempt at using this device. Otherwise, your options may feel pretty limited.

  • The sensitivity of the pads can initially seem lackluster – make sure to turn the “Global Sensitivity” command all the way up in the QuNeo editor!

  • You can individually adjust the sensitivity of each quadrant of the trigger pads!

  • Try flipping through the six available Velocity Curves, in the “Velocity Table For All Pads” section of the editor!

  • The rubber sliders have a bit of friction, which might not be ideal for gradual fades. Be careful!

  • Use a Keith McMillen USB-to-MIDI converter to run your programmed QuNeo as a standalone controller for any MIDI device. You could even convert MIDI messages to CV with the MOTM synth module in Studio B, and then control all of Studio B with the QuNeo!
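The MIDI-to-CV step in that last tip typically follows the 1 V/octave standard, and the arithmetic is simple. The reference note and voltage below are assumptions for illustration; the actual scaling depends on the converter and the MOTM module’s calibration.

```python
def midi_note_to_cv(note: int, reference_note: int = 24,
                    reference_volts: float = 0.0) -> float:
    """1 V/octave: each semitone adds 1/12 V relative to the reference
    (here, MIDI note 24 = C1 sits at 0 V, an assumed calibration)."""
    return reference_volts + (note - reference_note) / 12.0

cv = midi_note_to_cv(36)   # C2, one octave above the reference, is 1.0 V
```

So a QuNeo pad quadrant mapped to a given note ends up a predictable voltage offset away from its neighbors, which is what makes pad-to-oscillator pitch control workable.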




PdCon16~ Begins Today!

This afternoon begins the 5th International Pure Data Convention, celebrating 20 years of Pure Data. Through Sunday, New York University, Stevens Institute of Technology, ShapeShifter Lab, Maxwell’s Tavern, and Pianos NYC will host 9 workshops, 8 installations, and 5 paper sessions on the ways that community members and developers are working with the Pd language. The convention also includes several concerts each day; more details can be found in the full program.

The convention will take place at NYU and Stevens Institute of Technology, and it is made possible by the generous support of the Washington Square Contemporary Music Society and New Blankets, as well as Harvestworks and the Waverly Project.

