Gear Spotlight: 1176/MC77 & Alumnus Andrew Roberts

In 1967, Bill Putnam and United Recording Electronics Industries (UREI) released the 1176 Peak Limiter, the first true peak limiter built entirely with solid-state circuitry. Putnam’s circuit design underwent several revisions over the years, which explains the variety of 1176 “revs” available today.

Arguably the most popular revisions of the 1176 are revs C, D, and E. These revisions introduced what UREI (now Universal Audio) calls the Low Noise circuit – hence the “1176LN” name given to compressors of this era. Since 2000, Universal Audio has sold an 1176 reissue based on the Rev C/D/E circuit designs.

The 1176 is a notably versatile and “bright” compressor, capable of anything from mild, subtle compression (great as the first compressor in the chain on a vocal, kick, or snare) to highly aggressive, energetic compression. In addition to the standard attack, release, and ratio controls, the 1176 can be used in what engineers call “British Mode” or “All Buttons In”: the four ratio buttons on the device’s faceplate are pushed in at once, in theory engaging every ratio setting at the same time. In practice this yields a compression ratio somewhere between 12:1 and 20:1, but it also shifts the circuit’s bias points, making the 1176 even more aggressive.

Expanding on these classic characteristics, Purple Audio introduced its own 1176 revision, the MC76, in 1997. Founded that same year by Music Tech alumnus Andrew Roberts, Purple Audio has since updated the design as the MC77, one of the most faithful 1176 reissues on the market. These units have earned a reputation for rugged build quality. Check out the links below for more information, and visit Studio A and Studio D to hear our two MC77s and the 1176 for yourself.

Here are the standard features of any 1176 revision, clone, or DIY build:

  • Variable attack time (between 20µs-800µs)

  • Variable release time (between 50ms-1.1s)

  • Transformer-balanced inputs and outputs

  • Compression ratios of 4:1, 8:1, 12:1, and 20:1 (additionally, somewhere between 12:1 and 20:1 when using “all-buttons-in” mode)
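To give a feel for how these controls interact, here is a minimal digital sketch. This is not the 1176’s actual FET feedback topology – just a generic feed-forward peak compressor whose ratio and time-constant values mirror the 1176’s ranges:

```python
import numpy as np

def peak_compress(x, ratio=4.0, threshold_db=-10.0,
                  attack_s=800e-6, release_s=0.3, sr=48000):
    """Toy feed-forward peak compressor (not the 1176's FET feedback
    circuit). Ratio and time constants mirror the 1176's ranges."""
    a_coef = np.exp(-1.0 / (attack_s * sr))    # fast envelope attack
    r_coef = np.exp(-1.0 / (release_s * sr))   # slower envelope release
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coef = a_coef if level > env else r_coef
        env = coef * env + (1.0 - coef) * level        # peak envelope follower
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = level_db - threshold_db
        # Above threshold, output rises only 1 dB per `ratio` dB of input
        gain_db = -over_db * (1.0 - 1.0 / ratio) if over_db > 0 else 0.0
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out
```

At 20:1 the gain computer behaves almost like a hard limiter, which is why the 12:1–20:1 “all-buttons-in” range sits in limiting territory even before the bias-point changes add their aggression.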

Purple Audio built upon the original 1176 Rev D & Rev E Low Noise designs, incorporating all of the original core features while making these significant additions:

  • Easily accessible and convenient stereo-link function

  • True bypass (when bypassed, signal is routed directly from the input to the output – it never passes through the device’s circuitry)

  • Sidechain/key-input, an extremely useful and common compressor feature.

Audio engineers and electrical engineers alike have taken to online forums to write extensively about the advantages and disadvantages of using one or the other of these units. The full history of the 1176 is linked below, as are the schematics and more information about the MC77.

Useful Links:

Universal Audio’s 1176 Overview

1176 Hardware Revision History

Universal Audio History

UA: All Buttons In Mode

Purple Audio MC77

Purple Audio MC77 Manual & Schematics

 

 

Ear to the Earth presents: Thin Air

Tomorrow evening, Music Technology’s own Paul Geluso performs at Whitebox Sound Lab alongside Lars Graugaard. The performance focuses on the concept of 3D sound objects to create a unique in-air listening experience.

As Paul Geluso describes it, “What is the 3D Sound Object? What is the sound? What are we doing? It’s fantastic! Special. An adventure in listening. Whereas traditional sound synthesis is usually done electronically then projected through loudspeakers, the 3D Sound Object purposely causes electronic sound sources to be summed, subtracted, and filtered in the air. To my surprise, the in-air processing technique creates a complex and evolving physical sound sculpture that can be experienced from several perspectives. To my knowledge, it’s one of a kind. I can’t predict how everyone will perceive the sound, but everyone who has heard it is excited.”

Lars Graugaard adds, “Working with Paul Geluso’s 3D Sound Object in Thin Air isn’t like any other sound amplification technique. It in fact creates an entirely different notion of sound amplification – what it is, what it can be. It’s a rich, immersive impact on the senses. And the unparalleled opportunities it offers give a whole new meaning to the conception of a ‘sound.’ It wraps you in a three-dimensional space.”

The concert is Thursday, December 1st at Whitebox Sound Lab, 329 Broome Street. Tickets are $10, and can be purchased online or at the door. More info at Ear to the Earth.

Gear Spotlight: ROLI Seaboard Rise

This week’s gear spotlight focuses on the ROLI Seaboard Rise. ROLI, a London-based startup company, is making waves (literally) with their keyboards. Rather than keys, they have “keywaves,” pieces of flexible silicone that can sense even the most subtle nuances of a player’s inflections, and allow for parameter control beyond that of any typical knobs-and-keys controller.

Similar to the QuNeo’s 3D touch, which we discussed last week, the Seaboard uses what ROLI calls “5D Touch.” Two pieces of software are necessary to access these 5D functions. They are not required for using the Rise as a simple MIDI controller, but to really use the device to its full potential, you should locate one of the three computers in Music Technology that have this software installed: the Mac in Studio E, the mastering laptop, and the Dolan Studios Mac. The two applications are Equator, ROLI’s in-house synth for the Rise, and ROLI Dashboard, a configuration utility.

You can also use the Rise with a ROLI developed application called NOISE, which is free for use on your iPhone or iPad. (In fact, you don’t even need a Rise to use the app!)

Using the ROLI-developed computer applications, you can take full advantage of 5D Touch, meaning complete control over the following parameters:

1. Strike

The velocity of the initial touch, and its response curve, on the Rise 25.

2. Glide

The x-axis response: horizontal movement side to side along the keywaves or the ribbon controllers, controlling vibrato and pitch-bend sensitivity.

3. Slide

The y-axis response: vertical movement up and down the keywaves, enabling or disabling things like fade-ins, filter shifts, etc.

4. Press

The continuous sustain pressure sensitivity of the Rise 25.

5. Lift

The release velocity of each note as the finger leaves the keywave.
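In MIDI terms, the Rise transmits these five dimensions using MPE-style conventions: Strike as note-on velocity, Glide as per-note pitch bend, Slide as CC 74, Press as channel pressure, and Lift as release velocity. Here is a minimal sketch of a parser for those messages (the function name and the returned tuple format are our own invention for illustration):

```python
def interpret_touch(status, data1, data2=0):
    """Decode a raw MIDI message into one of the Rise's five touch
    dimensions, assuming MPE-style conventions (one note per channel).
    Returns (dimension, channel, note_or_none, value)."""
    kind = status & 0xF0
    channel = status & 0x0F
    if kind == 0x90 and data2 > 0:                     # note-on
        return ("strike", channel, data1, data2)
    if kind == 0x80 or (kind == 0x90 and data2 == 0):  # note-off
        return ("lift", channel, data1, data2)
    if kind == 0xE0:                                   # 14-bit pitch bend
        return ("glide", channel, None, ((data2 << 7) | data1) - 8192)
    if kind == 0xB0 and data1 == 74:                   # CC 74 "timbre"
        return ("slide", channel, None, data2)
    if kind == 0xD0:                                   # channel pressure
        return ("press", channel, None, data1)
    return ("other", channel, data1, data2)
```

Because each note lives on its own channel, the glide, slide, and press streams stay per-finger rather than per-keyboard – the whole point of 5D Touch over a conventional controller.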

Useful Links:

Seaboard Support (Installation, Technique, Sounds, etc)

Seaboard Performance Technique

Interpreting Seaboard MIDI Data

 

MARL: Immersive Audio & Augmented Reality Headphone Reverberation

We’re getting back into action after the long weekend, with two back-to-back MARL talks! Jean-Marc Jot of DTS will be joining us on Wednesday and Thursday to talk about Immersive and Object-Based Multi-Channel Audio Formats and Augmented Reality Headphone Reverberation. Both talks will take place in Steinhardt’s 6th Floor Conference Room, at 12:30 on Wednesday and 1 o’clock on Thursday. See more information about Jean-Marc Jot and the respective topics below.

Jean-Marc Jot leads DTS technology R&D in audio reproduction and fidelity enhancement for consumer electronics. Previously, he led the design and development of Creative Labs’ SoundBlaster audio processing and architectures, including the EAX and OpenAL technologies for game 3D audio authoring and rendering. Before relocating to the US in the late 90’s, he conducted research at the Institut de Recherche et Coordination Acoustique / Musique in Paris (IRCAM), where he designed and developed the IRCAM Spat software suite for immersive audio composition in computer music creation, performance and virtual reality. He is a recipient of the Audio Engineering Society (AES) Fellowship Award and has authored numerous patents and papers on spatial audio signal processing and coding. (For more details: sites.google.com/site/jmmjot.)

Immersive and Object-Based Multi-Channel Audio Formats:

In recent years, several audio technology companies and standardization organizations (including Dolby, Auro, DTS, MPEG) have developed new formats and tools for the creation, archiving and distribution of immersive audio content in the cinema or broadcast industries. These developments extend legacy multi-channel audio formats to support three-dimensional (with height) sound field encoding, along with optional audio object channels accompanied with positional rendering metadata. They enable efficient content delivery to consumer devices and flexible reproduction in multiple consumer playback environments, including headphones and frontal audio projection systems. In this talk, we’ll review and illustrate the state of these developments and discuss perspectives and pending issues, including virtual reality applications.

Augmented Reality Headphone Reverberation:

In audio-visual augmented reality applications, computer-generated audio objects are rendered via acoustically transparent earphones to blend with the physical environment heard naturally by the viewer/listener. This requires binaural artificial reverberation processing to match local environment acoustics, so that synthetic audio objects are not readily discriminable from sounds occurring naturally or reproduced over loudspeakers. Approaches involving the measurement or calculation of binaural room impulse responses in consumer environments are limited by practical obstacles and complexity. We exploit a statistical reverberation model enabling the definition of a compact “reverberation fingerprint” for characterization of the local environment and computationally efficient data-driven reverberation rendering for multiple virtual sound sources. The method applies equally to headphone-based “audio-augmented reality” – facilitating natural-sounding, externalized virtual 3D audio reproduction of music, movie or game soundtracks, navigation guides or alerts.
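As a rough illustration of the statistical-model idea (this is not DTS’s actual algorithm, and the parameter names are our own), a late reverberation tail can be characterized by little more than a decay time and a level – exponentially decaying noise is the textbook stand-in:

```python
import numpy as np

def late_reverb_ir(rt60_s, level_db=0.0, dur_s=1.0, sr=48000, seed=0):
    """Synthesize a late-reverb impulse response from a compact
    'fingerprint': RT60 (time to decay 60 dB) plus an overall level.
    Gaussian noise shaped by an exponential envelope."""
    n = int(dur_s * sr)
    t = np.arange(n) / sr
    envelope = 10.0 ** (-3.0 * t / rt60_s)    # amplitude hits -60 dB at t = rt60
    noise = np.random.default_rng(seed).standard_normal(n)
    return 10.0 ** (level_db / 20.0) * envelope * noise
```

Convolving a dry virtual source with such a tail (matched to the measured fingerprint of the listener’s room) is far cheaper than measuring full binaural room impulse responses, which is the practical appeal of the approach described above.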

Adobe Max Sneaks: Photoshop for Audio

Last month at the Adobe Max Creativity Conference, developers introduced various new “sneaks” that they’ve been working on. While the software was focused mostly on film and photography, two of the utilities may be of interest to music technologists.

“VoCo” is a new tool nicknamed ‘Photoshop for Audio.’ Users can import speech recordings and rearrange the order of the text. Things get spookier after establishing a linguistic profile for the speaker, when one can type completely different phrases from the original dialogue and generate a new recording. This opens up a whole new level of correction for film and voiceover work.

Filmmakers will be impressed with Adobe’s other new tool, “Syncmaster,” which takes music analysis to a new level. By splitting imported music into three bands, Syncmaster automatically detects the most significant sections of a song. From there, it establishes visual cue points that editors can use as a map for positioning video footage. Editors can now sync clips to music in seconds without even having to scrub through the footage.

For more info on these projects and the other Adobe ‘Sneaks,’ visit the Adobe blog.

Gear Spotlight: Keith McMillen Instruments QuNeo

The KMI QuNeo is a 3D multi-touch pad MIDI controller, similar to pad controllers by Akai and Novation. However, the QuNeo is unique in several ways. It offers three different types of trigger pads; some look like faders, some like rotary controls, some like traditional trigger pads (think Akai MPC). Each of these triggers offers comprehensive, near limitless mapping options. The rotary controllers can function as physical infinite rotary controls (think Moog Source), or as plain old potentiometers with maximum and minimum values. The faders work more or less as expected, though the bottom-center fader can perform width control when mapped to the right parameter. Perhaps the QuNeo’s greatest appeal lies in the ability to map all four corners of each of the sixteen traditional trigger pads. This allows users to program four different notes on each of the pads, with room for portamento and chordal playing depending on how you choose to map the QuNeo in its downloadable editing software.
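The four-corner mapping can be sketched as follows. The QuNeo editor handles this internally; the function name, the normalized x/y coordinates, and the chromatic layout below are purely our illustration:

```python
def corner_note(corner_notes, x, y):
    """Pick one of four MIDI notes from a pad's touch position.
    corner_notes: (top_left, top_right, bottom_left, bottom_right);
    x, y: normalized touch position in [0, 1], with y = 1 at the top."""
    col = 1 if x >= 0.5 else 0
    row = 0 if y >= 0.5 else 2    # top pair comes first in the tuple
    return corner_notes[row + col]

# A hypothetical 16-pad grid mapped chromatically, four notes per pad
pads = [tuple(36 + 4 * p + i for i in range(4)) for p in range(16)]
```

With sixteen pads and four corners each, a single QuNeo page can cover 64 distinct notes before you ever touch the bank controls.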

Pro Tips:

  • Run the installer and download the editor before making too much of an attempt at using this device. Otherwise, your options may feel pretty limited.

  • The sensitivity of the pads can initially seem lackluster – make sure to turn the “Global Sensitivity” command all the way up in the QuNeo editor!

  • You can individually adjust the sensitivity of each quadrant of the trigger pads!

  • Try flipping through the six available Velocity Curves, in the “Velocity Table For All Pads” section of the editor!

  • The rubber sliders have a bit of friction, which might not be ideal for gradual fades. Be careful!

  • Use a Keith McMillen USB-to-MIDI converter to control any MIDI device with a QuNeo programmed to work as a standalone controller. You could even convert MIDI messages to CV with the MOTM synth module in Studio B, and then control all of Studio B with the QuNeo!


PdCon16~ Begins Today!

This afternoon begins the 5th International Pure Data Convention, celebrating 20 years of Pure Data. Through Sunday, New York University, Stevens Institute of Technology, ShapeShifter Lab, Maxwell’s Tavern and Pianos NYC will host 9 workshops, 8 installations and 5 paper sessions on the ways that community members and developers are working with the Pd language. The convention also includes several concerts each day; more details can be found in the full program.

The convention will take place at NYU and Stevens Institute of Technology, and it is made possible by the generous support of the Washington Square Contemporary Music Society and New Blankets, as well as Harvestworks and the Waverly Project.

 

Roger Linn Discussion Recap/Photos

NYU Music Tech, Clive Davis & Tandon would like to thank everybody who made Tuesday’s panel discussion a great success! Speakers Roger Linn, Bob Power, Nicholas Sansano and Dan Freeman each gave their own insight into the history of drum machine technology, and we were later treated to an incredible demo of Roger Linn’s new LinnStrument.


SONYC in the News

The SONYC project, a research initiative bringing together researchers from MARL, Tandon, CUSP and OSU, is in the news these days, with articles popping up in The New York Times, WNYC, and Wired (Italy). The project is launching the first phase of a 5-year program with the goal of changing the way noise pollution is monitored and analyzed in New York City. Using a network of microphones to record the various breeds of urban annoyances, Music Tech’s own Juan Bello and his team are creating novel technologies for the automatic identification of sound sources throughout the city.

The project has received a generous $4.6 million grant from the National Science Foundation, and there’s a lot of excitement around its agenda from New Yorkers that know too well the agony that is noise pollution. For more information on the team and the technology involved, see NYU’s news release and the SONYC website.

 

 

AES: The Future of Audio for Television

Next Tuesday, November 15th, the AES NY Section is hosting a meeting on ATSC 3.0. We’re very excited to join AES members and other audio professionals to learn about the new standards in the future of audio for television.

Among the speakers for the evening are Tim Carroll and Jeff Riedmiller from Dolby Laboratories, Stefan Meltzer and Deep Sen from the MPEG-H Audio Alliance, and host Jim Starzynski, NBCU Principal Audio Engineer and Chairman of the ATSC Audio Group.

Please RSVP here to reserve your seat.