Gear Spotlight: ROLI Seaboard Rise

This week’s gear spotlight focuses on the ROLI Seaboard Rise. ROLI, a London-based startup, is making waves (literally) with its keyboards. Rather than keys, the Seaboard has “keywaves”: pieces of flexible silicone that can sense even the most subtle nuances of a player’s inflections, allowing for parameter control beyond that of any typical knobs-and-keys controller.

Similar to the 3D touch of the QuNeo discussed last week, the Seaboard uses what ROLI calls “5D Touch.” Two pieces of software are needed to access these 5D functions: Equator, ROLI’s in-house synth for the Rise, and ROLI Dashboard, a configuration utility. Neither is required to use the Rise as a simple MIDI controller, but to really use the device to its full potential, you should locate one of the three Music Technology computers that have both installed: the Mac in Studio E, the mastering laptop, and the Dolan Studios Mac.

You can also use the Rise with a ROLI-developed application called NOISE, which is free for iPhone and iPad. (In fact, you don’t even need a Rise to use the app!)

Using the ROLI-developed desktop applications, you can take full advantage of 5D Touch, meaning complete control over the following five parameters (a sketch of how these dimensions appear as MIDI data follows the list):

1. Strike

The attack velocity and force-curve response of the Rise 25’s keywaves.

2. Glide

The x-axis response: horizontal, side-to-side movement along the keywaves or the ribbon controllers, controlling vibrato and pitch-bend sensitivity.

3. Slide

The y-axis response: vertical movement up and down the keywaves, which can be mapped to effects like fade-ins and filter sweeps.

4. Press

The continuous pressure sensitivity of the Rise 25 after the initial strike.

5. Lift

The release velocity, determined by the speed with which a finger lifts off a keywave.
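When the Rise runs in its multi-channel MIDI mode, each of these five dimensions arrives as an ordinary MIDI message. As a rough illustration, here is a minimal Python sketch using the third-party mido library; the mapping shown (Slide on CC 74, Press as channel pressure, one note per channel) follows common MPE-style practice and is an assumption here, so verify your actual layout in ROLI Dashboard and the “Interpreting Seaboard MIDI Data” link below.

```python
# Minimal sketch: reading Seaboard-style "5D Touch" data as MIDI.
# Assumes an MPE-like layout (one note per channel, Slide on CC 74);
# verify the actual mapping in ROLI Dashboard before relying on it.
import mido

SLIDE_CC = 74  # y-axis "Slide" is commonly sent as CC 74 (assumption)

with mido.open_input() as port:  # opens the default MIDI input port
    for msg in port:
        if msg.type == 'note_on' and msg.velocity > 0:
            print(f"Strike: note {msg.note}, velocity {msg.velocity}")
        elif msg.type == 'pitchwheel':
            print(f"Glide: pitch bend {msg.pitch}, channel {msg.channel}")
        elif msg.type == 'control_change' and msg.control == SLIDE_CC:
            print(f"Slide: y-position {msg.value}, channel {msg.channel}")
        elif msg.type == 'aftertouch':
            print(f"Press: pressure {msg.value}, channel {msg.channel}")
        elif msg.type == 'note_off':
            print(f"Lift: note {msg.note}, release velocity {msg.velocity}")
```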

Useful Links:

Seaboard Support (Installation, Technique, Sounds, etc.)

Seaboard Performance Technique

Interpreting Seaboard MIDI Data


MARL: Immersive Audio & Augmented Reality Headphone Reverberation

We’re getting back into action after the long weekend with two back-to-back MARL talks! Jean-Marc Jot of DTS will be joining us on Wednesday and Thursday to talk about Immersive and Object-Based Multi-Channel Audio Formats and Augmented Reality Headphone Reverberation. Both talks will take place in Steinhardt’s 6th Floor Conference Room, at 12:30 on Wednesday and 1:00 on Thursday. See more information about Jean-Marc Jot and the respective topics below.

Jean-Marc Jot leads DTS technology R&D in audio reproduction and fidelity enhancement for consumer electronics. Previously, he led the design and development of Creative Labs’ SoundBlaster audio processing architectures, including the EAX and OpenAL technologies for 3D game audio authoring and rendering. Before relocating to the US in the late ’90s, he conducted research at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, where he designed and developed the IRCAM Spat software suite for immersive audio composition in computer music creation, performance, and virtual reality. He is a recipient of the Audio Engineering Society (AES) Fellowship Award and has authored numerous patents and papers on spatial audio signal processing and coding. (For more details: sites.google.com/site/jmmjot.)

Immersive and Object-Based Multi-Channel Audio Formats:

In recent years, several audio technology companies and standardization organizations (including Dolby, Auro, DTS, and MPEG) have developed new formats and tools for the creation, archiving, and distribution of immersive audio content in the cinema and broadcast industries. These developments extend legacy multi-channel audio formats to support three-dimensional (with height) sound field encoding, along with optional audio object channels accompanied by positional rendering metadata. They enable efficient content delivery to consumer devices and flexible reproduction in multiple consumer playback environments, including headphones and frontal audio projection systems. In this talk, we’ll review and illustrate the state of these developments and discuss perspectives and pending issues, including virtual reality applications.
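To make the “object channel plus positional metadata” idea concrete, here is a hedged Python sketch of the general principle rather than any specific Dolby, Auro, DTS, or MPEG format: an audio object is a mono stem paired with a position, and the renderer converts that position into per-speaker gains for whatever layout the playback environment offers. The constant-power stereo pan below is a deliberate simplification.

```python
# Illustrative sketch of object-based rendering: an audio "object" is a
# mono signal plus positional metadata, converted to per-speaker gains
# at playback time. Constant-power stereo panning stands in for the far
# more sophisticated 3D panners real formats use.
import numpy as np

def render_object(mono: np.ndarray, azimuth: float) -> np.ndarray:
    """Render a mono object to stereo; azimuth runs from -1.0 (left) to +1.0 (right)."""
    angle = (azimuth + 1.0) * np.pi / 4.0             # map to 0..pi/2
    gains = np.array([np.cos(angle), np.sin(angle)])  # constant-power law
    return np.outer(mono, gains)                      # shape: (samples, 2)

# A 440 Hz test-tone "object" panned halfway to the right:
sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = render_object(tone, azimuth=0.5)
print(stereo.shape)  # (48000, 2)
```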

Augmented Reality Headphone Reverberation:

In audio-visual augmented reality applications, computer-generated audio objects are rendered via acoustically transparent earphones to blend with the physical environment heard naturally by the viewer/listener. This requires binaural artificial reverberation processing to match local environment acoustics, so that synthetic audio objects are not readily discriminable from sounds occurring naturally or reproduced over loudspeakers. Approaches involving the measurement or calculation of binaural room impulse responses in consumer environments are limited by practical obstacles and complexity. We exploit a statistical reverberation model enabling the definition of a compact “reverberation fingerprint” for characterization of the local environment and computationally efficient data-driven reverberation rendering for multiple virtual sound sources. The method applies equally to headphone-based “audio-augmented reality” – facilitating natural-sounding, externalized virtual 3D audio reproduction of music, movie or game soundtracks, navigation guides or alerts.
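The abstract doesn’t spell out the model, but the general idea of a compact “reverberation fingerprint” can be illustrated with standard room acoustics tools: describe a room with a handful of numbers, such as decay times estimated from an impulse response by Schroeder backward integration. The Python sketch below is a generic illustration of that kind of measurement, not the method from the talk.

```python
# Generic sketch: estimating a coarse "reverberation fingerprint" (here,
# a single broadband decay time) from a room impulse response using
# Schroeder backward integration. Illustrates the concept of a compact
# room descriptor only; this is not the method from the talk.
import numpy as np

def decay_time(ir: np.ndarray, sr: int) -> float:
    energy = np.asarray(ir, dtype=float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]            # Schroeder energy decay curve
    edc_db = 10 * np.log10(edc / edc[0] + 1e-12)
    i0 = np.argmax(edc_db <= -5.0)                 # fit the -5 dB ...
    i1 = np.argmax(edc_db <= -35.0)                # ... to -35 dB region (T30)
    slope = (edc_db[i1] - edc_db[i0]) / ((i1 - i0) / sr)  # dB per second
    return -60.0 / slope                           # extrapolate to RT60

# Synthetic exponentially decaying noise as a stand-in impulse response:
sr = 48000
t = np.arange(sr) / sr
ir = np.random.randn(sr) * np.exp(-t / 0.2)        # ~0.2 s amplitude time constant
print(f"Estimated RT60: {decay_time(ir, sr):.2f} s")  # roughly 1.4 s
```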

Adobe Max Sneaks: Photoshop for Audio

Last month at the Adobe MAX Creativity Conference, developers introduced various new “sneaks” that they’ve been working on. While the sneaks focused mostly on film and photography, two of the utilities may be of interest to music technologists. “VoCo” is a new tool being nicknamed ‘Photoshop for Audio.’ Users can import speech recordings and rearrange the order of the words. Things get spookier after establishing a linguistic profile for the speaker, when one can type completely different phrases from the original dialogue and generate a new recording. This opens up a whole new level of correction for film and voiceover work.

Filmmakers will be impressed with Adobe’s other new tool, “Syncmaster,” which takes music analysis to a new level. By splitting imported music into three bands, Syncmaster automatically detects the most significant sections of a song. From there, it establishes visual cue points that editors can use as a map for positioning video footage. Editors can now sync clips to music in seconds without even having to scrub through the footage.
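Adobe hasn’t published how Syncmaster works, but the behavior described above (split the music into a few bands, then flag the most significant moments as cue points) can be approximated with off-the-shelf tools. Below is a hedged Python sketch along those lines; the band edges, hop size, and input file are illustrative assumptions, and this is emphatically not Adobe’s algorithm.

```python
# Hedged sketch of a Syncmaster-like analysis: split a track into three
# rough bands, track each band's energy envelope, and mark cue points
# where the combined envelope rises most sharply. An approximation of
# the described idea only, not Adobe's algorithm.
import numpy as np
from scipy.signal import butter, sosfilt
import soundfile as sf

audio, sr = sf.read("song.wav")                  # hypothetical input file
if audio.ndim > 1:
    audio = audio.mean(axis=1)                   # mix down to mono

bands = [(20, 250), (250, 4000), (4000, 16000)]  # low/mid/high; assumes sr >= 44.1 kHz
hop = 1024
n_frames = len(audio) // hop
novelty = np.zeros(n_frames)
for lo, hi in bands:
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band = sosfilt(sos, audio)
    env = np.array([np.sqrt(np.mean(band[i * hop:(i + 1) * hop] ** 2))
                    for i in range(n_frames)])
    novelty += np.maximum(np.diff(env, prepend=env[0]), 0)  # energy increases only

cue_frames = np.argsort(novelty)[-10:]           # ten strongest moments
cue_times = sorted(cue_frames * hop / sr)
print("Candidate cue points (s):", [round(float(t), 2) for t in cue_times])
```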

For more info on these projects and the other Adobe ‘Sneaks,’ visit the Adobe blog.

Gear Spotlight: Keith McMillen Instruments QuNeo

The KMI QuNeo is a 3D multi-touch pad MIDI controller, similar to pad controllers by Akai and Novation. However, the QuNeo is unique in several ways. It offers three different types of touch controls: some look like faders, some like rotary controls, and some like traditional trigger pads (think Akai MPC). Each of these controls offers comprehensive, near-limitless mapping options. The rotary controllers can function as infinite rotary controls (think Moog Source) or as plain old potentiometers with maximum and minimum values. The faders work more or less as expected, though the bottom-center fader can perform width control when mapped to the right parameter. Perhaps the QuNeo’s greatest appeal lies in the ability to map all four corners of each of the sixteen traditional trigger pads. This allows users to program four different notes on each pad, with room for portamento and chordal playing depending on how you choose to map the QuNeo in its downloadable editing software.
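Since each mapped corner simply sends its own note once configured in the editor, handling a corner-mapped QuNeo in your own software amounts to grouping incoming notes back into pads. Here is a minimal Python sketch using the mido library; the four-consecutive-notes-per-pad layout starting at note 36 is an assumed example, since the real numbering depends entirely on your editor configuration.

```python
# Minimal sketch: grouping corner-mapped QuNeo notes back into pads.
# Assumes each pad's four corners send four consecutive notes starting
# at note 36; this layout is an illustrative example, since the real
# numbering is whatever you set in the QuNeo editor.
import mido

BASE_NOTE = 36  # assumed note of pad 1's first corner (hypothetical)
CORNERS = ["bottom-left", "bottom-right", "top-left", "top-right"]

with mido.open_input() as port:
    for msg in port:
        if msg.type == 'note_on' and msg.velocity > 0:
            index = msg.note - BASE_NOTE
            pad, corner = index // 4 + 1, CORNERS[index % 4]
            if 1 <= pad <= 16:
                print(f"Pad {pad}, {corner} corner, velocity {msg.velocity}")
```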

Pro Tips:

  • Download and run the QuNeo editor installer before spending much time with this device. Otherwise, your options may feel pretty limited.

  • The sensitivity of the pads can initially seem lackluster – make sure to turn the “Global Sensitivity” command all the way up in the QuNeo editor!

  • You can individually adjust the sensitivity of each quadrant of the trigger pads!

  • Try flipping through the six available Velocity Curves, in the “Velocity Table For All Pads” section of the editor!

  • The rubber sliders have a bit of friction, which might not be ideal for gradual fades. Be careful!

  • Use a Keith McMillen USB-to-MIDI converter to control any MIDI device with a programmed QuNeo working as a standalone controller. You could even convert MIDI messages to CV with the MOTM synth module in Studio B, and then control all of Studio B with the QuNeo!


PdCon16~ Begins Today!

This afternoon begins the 5th International Pure Data Convention, celebrating 20 years of Pure Data. Through Sunday, New York University, Stevens Institute of Technology, ShapeShifter Lab, Maxwell’s Tavern, and Pianos NYC will be hosting 9 workshops, 8 installations, and 5 paper sessions on the ways that community members and developers are working with the Pd language. The convention also includes several concerts each day; more details can be found in the full program.

The convention will take place at NYU and Stevens Institute of Technology, and it is made possible by the generous support of the Washington Square Contemporary Music Society and New Blankets, as well as Harvestworks and the Waverly Project.


Roger Linn Discussion Recap/Photos

NYU Music Tech, Clive Davis & Tandon would like to thank everybody who made Tuesday’s panel discussion a great success! Speakers Roger Linn, Bob Power, Nicholas Sansano, and Dan Freeman each gave their own insight into the history of drum machine technology, and we were later treated to an incredible demo of Roger Linn’s new LinnStrument.

[Photos: Roger Linn panel discussion and LinnStrument demo]

SONYC in the News

The SONYC project, a research initiative bringing together researchers from MARL, Tandon, CUSP, and OSU, is in the news these days, with articles popping up in The New York Times, WNYC, and Wired (Italy). The project is launching the first phase of a five-year program with the goal of changing the way noise pollution is monitored and analyzed in New York City. Using a network of microphones to record the various breeds of urban annoyance, Music Tech’s own Juan Bello and his team are creating novel technologies for the automatic identification of sound sources throughout the city.

The project has received a generous $4.6 million grant from the National Science Foundation, and there’s a lot of excitement around its agenda from New Yorkers who know all too well the agony that is noise pollution. For more information on the team and the technology involved, see NYU’s news release and the SONYC website.
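For the technically curious, the core task mentioned above, automatic identification of sound sources, is usually framed as classification over short audio clips. As a rough sketch of the kind of front end such systems use (and not SONYC’s actual pipeline), the Python snippet below turns a clip into log-mel spectrogram features ready to feed a trained classifier; the file name is hypothetical.

```python
# Rough illustration of an urban sound classification front end: convert
# a short clip into log-mel spectrogram features, the usual input to a
# classifier. A sketch of the general approach, not SONYC's system.
import numpy as np
import librosa

def logmel_features(path: str, sr: int = 22050, n_mels: int = 64) -> np.ndarray:
    audio, sr = librosa.load(path, sr=sr, duration=4.0)   # analyze a 4 s clip
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)           # shape: (n_mels, frames)

features = logmel_features("siren_clip.wav")   # hypothetical recording
print(features.shape)  # e.g. (64, 173); feed this to a trained classifier
```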


AES: The Future of Audio for Television

Next Tuesday, November 15th, the AES NY Section is hosting a meeting on ATSC 3.0. We’re very excited to join AES members and other audio professionals to learn about the new standards shaping the future of audio for television.

Among the speakers for the evening are Tim Carroll and Jeff Riedmiller from Dolby Laboratories, Stefan Meltzer and Deep Sen from the MPEG-H Audio Alliance, and host Jim Starzynski, NBCU Principal Audio Engineer and Chairman of the ATSC Audio Group.

Please RSVP here to reserve your seat.


The History & Future of Expressive Electronic Instruments w/ Roger Linn

Next Tuesday, November 8th, the Music Technology program teams up with the NYU Tisch Clive Davis Institute and Tandon Integrated Digital Media to present a panel discussion with iconic instrument designer Roger Linn. Over the last 35 years, Linn’s drum machines have been a profound influence on hip-hop and pop music, with early adopters such as Michael Jackson, Madonna, and Stevie Wonder.

NYU students are invited to come and learn about Roger Linn’s past work in instrument design, as well as the direction that electronic instruments are heading today. Please visit the Facebook Event Page for more info.

11/8, 7:00-8:30, Steinhardt Room #610

Music Tech Undergrads Design OTB Sound Mural

Vimeo: OTB Sound Mural

As part of their work with Brooklyn studio One Thousand Birds, music tech undergraduate Torin Geller and alumnus Matt Lau have helped to create this interactive sound mural. Alongside Jonathan Evans, a Music Composition student also at Steinhardt, the group used conductive paint and Arduino boards (similar to those used in our digital electronics classes) to send data from the painting to a digital WAV trigger. Between this project and another with IBM designer Stephen Nixon, the idea has been a hit, and several companies have been looking to get murals of their own.
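For anyone wanting to prototype the signal chain described above (conductive paint as a touch sensor, an Arduino reporting touches, samples fired in response), here is a hedged host-side Python sketch that reads touch events from an Arduino over USB serial and plays WAV samples. The serial protocol, port, and file names are invented for illustration; the actual mural drives a dedicated hardware WAV trigger directly from the Arduino rather than a laptop.

```python
# Hedged host-side prototype of the mural's signal chain: an Arduino
# reads the conductive paint and prints a region number over serial;
# this script plays the matching sample. The protocol, port, and file
# names are invented; the real mural drives a hardware WAV trigger.
import serial                 # pyserial
import simpleaudio as sa

SAMPLES = {0: "drone.wav", 1: "bell.wav", 2: "chord.wav"}  # hypothetical files
waves = {k: sa.WaveObject.from_wave_file(v) for k, v in SAMPLES.items()}

with serial.Serial("/dev/ttyACM0", 9600) as port:  # typical Arduino port name
    while True:
        line = port.readline().decode(errors="ignore").strip()
        if line.isdigit() and int(line) in waves:
            waves[int(line)].play()   # fire-and-forget sample playback
```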