MARL talk by Dan Ellis (Google)

When: Tuesday, May 15th @10am

Where: 6th floor Conference Room (609), 35 West 4th Street

Title: “Supervised and Unsupervised Semantic Audio Representations”

Abstract: The Sound Understanding team at Google has been developing automatic sound classification tools with the ambition to cover all possible sounds – speech, music, and environmental.  I will describe our application of vision-inspired deep neural networks to the classification of our new ‘AudioSet’ ontology of ~600 sound events.  I’ll also talk about recent work using triplet loss to train semantic representations — where semantically ‘similar’ sounds end up close by in the representation — from unlabeled data.
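To make the triplet-loss idea concrete, here is a minimal sketch in Python. This is illustrative only, not the Sound Understanding team’s model or code; it assumes an anchor clip, a semantically similar “positive”, and a dissimilar “negative”, each already embedded as a vector.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss over [batch, dim] embedding arrays:
    pull the positive toward the anchor and push the negative at
    least `margin` farther away (in squared Euclidean distance)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```

Minimizing this loss drives semantically similar sounds to lie close together in the embedding space, which is exactly the property the abstract describes.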

Bio:  Dan Ellis joined Google in 2015 after 15 years as a faculty member in the Electrical Engineering department at Columbia University, where he headed the Laboratory for Recognition and Organization of Speech and Audio (LabROSA). He has over 150 publications in the areas of audio processing, speech recognition, and music information retrieval.

Joint work with Aren Jansen, Manoj Plakal, Ratheet Pandya, Shawn Hershey, Jiayang Liu, Channing Moore, Rif A. Saurous



Electronic Music Performance Concerts

This week! Two Electronic Music Performances, Wednesday and Thursday nights!
Come hear students of Dafna Naphtali’s two Electronic Music Performance classes, as they play new electronic music, compositions, experiments, and improvisations.
Wednesday May 9th, 8pm
 
Electric Pizza
NYU Electronic Music Performance – Section 001
with guest Hans Tammen conducting from a Dark Circuits Orchestra score: samples, drones, time-travel, facial gestures as control, and a self-designed speaker instrument.
Starring the creative and all-star students… Tristan Alleyne, Sam Grossman, Quinton Ashley, Angel E. Daniels, Ned Dana, Erez Aviram, Daksh Bhatia, Brendan Prednis, Harrison Shimazu, Emma Camell, John Sloan, Pari Songmuang. Directed by Dafna Naphtali.
NYU Education Building
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Thursday May 10th, 8pm
NYU Electronic Music Performance – Section 002
with guest computer conducting by Mohamed Kubbara
plus music made with plunderphonics, waving hands, secret signals, sportscasters, and more…
Starring the creative and all-star students… Gregory Borodulin, Max Chidzero, Miles Grossenbacher, Thomas Miritello, Trevor Rivkin, Ethan Uno-Portillo, Nick Royall, Greg Tock, Jake Sandakly, Emily Thaler, Zoltán Sindhu. Directed by Dafna Naphtali.

Spring 2018 Graduate Thesis Defense Presentations

With the end of the semester right around the corner, it’s time for our graduate thesis defense presentations. Projects to be presented include original hardware and software development, cognition research, audio analysis, assistive audio technologies, acoustic design, and much more.


Check out the full schedule with times and thesis titles HERE

Live Stream

All defense presentations will be live streamed! Friends and family from near and far are welcome to tune in. Feel free to share THIS link!


MARL talk by Tlacael Esparza

When: Thursday, May 3rd @1pm

Where: 6th floor Conference Room (609), 35 West 4th Street

Title: Sensory Percussion And The Future of Drumming

Description: Sensory Percussion is a platform for creating and performing music through acoustic control of digital sound. With the mission of bridging one of the oldest forms of musical expression with the new, Sensory Percussion translates the nuance of acoustic drumming into a flexible and expressive control language for electronic processes, allowing for a new frontier in performance and sound design. This presentation will include a technology overview and demonstration of Sensory Percussion’s capabilities.
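As a rough illustration of the general idea behind acoustic control of digital sound, the sketch below flags drum hits in an audio buffer with a crude energy-based onset detector. This is not Sunhouse’s algorithm (Sensory Percussion uses its own sensors and machine-learning models), and all names here are hypothetical.

```python
import numpy as np

def detect_onsets(signal, sr, frame=512, threshold=4.0):
    """Return approximate hit times (in seconds): flag frames whose
    short-time energy jumps well above a running average."""
    n = len(signal) // frame
    energies = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                         for i in range(n)])
    running_avg = np.convolve(energies, np.ones(8) / 8, mode='same') + 1e-9
    hit_frames = np.where(energies > threshold * running_avg)[0]
    return hit_frames * frame / sr
```

Each detected onset could then be classified (by strike zone, timbre, and so on) and mapped to an electronic sound or control message, which is the kind of translation Sensory Percussion performs with far more nuance.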

Bio: Tlacael Esparza is a co-founder of the music tech startup Sunhouse and creator of Sensory Percussion, a radically new system for expressive electronic percussion being used on stages and in studios around the world. Tlacael is a Los Angeles native based in New York City, and a professional drummer with over fifteen years of experience. He has a background in mathematics and is an alumnus of the NYU Music Technology program (2013), where he focused on applications of machine learning in music information retrieval. With Sunhouse, he is dedicated to building a future where music technology supports musicians and their creative endeavors.


MARL talks by Hyunkook Lee and members of Applied Psychoacoustics Lab

When: Friday, May 4th @10am-12pm

Where:  6th floor Conference Room (609), 35 West 4th Street

10:00-11:00am

Dr. Hyunkook Lee

Title: Introduction to 3D Audio Research at the APL

Abstract: This talk will overview recent 3D audio research conducted at the Applied Psychoacoustics Lab (APL) at the University of Huddersfield. The APL, established by Dr Hyunkook Lee in 2013, aims to bridge the gap between fundamental psychoacoustics and audio engineering. The talk will first describe some of the fundamental research conducted on various perceptual aspects of 3D audio, followed by the introduction of practical engineering methods developed based on that research. The topics to be covered include: vertical stereophonic perception, 3D and VR microphone techniques, vertical interchannel decorrelation, the phantom image elevation effect, a new time-level trade-off function, perceptually motivated amplitude panning (PMAP), virtual hemispherical amplitude panning (VHAP), Perceptual Band Allocation (PBA), etc. Additionally, the APL’s software packages for audio research will be introduced.
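As background for the panning topics listed above: conventional constant-power amplitude panning, the baseline that perceptually motivated methods such as PMAP refine, can be sketched in a few lines of Python (illustrative only, not the APL’s algorithms).

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Split a mono signal into left/right channels with constant power.
    pan is in [-1, 1]: -1 = hard left, 0 = centre, +1 = hard right."""
    theta = (pan + 1.0) * np.pi / 4.0  # map pan to [0, pi/2]
    return np.cos(theta) * mono, np.sin(theta) * mono  # gL^2 + gR^2 = 1
```

Because cos²θ + sin²θ = 1, the total power stays constant as the image moves, avoiding the loudness dip of a simple linear crossfade.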

Bio: Dr Hyunkook Lee is the Leader of the Applied Psychoacoustics Lab (APL) and Senior Lecturer (i.e. Associate Professor) in Music Technology at the University of Huddersfield, UK. His current research focuses on spatial audio psychoacoustics, recording and reproduction techniques for 3D and VR audio, and interactive virtual acoustics. He is also an experienced sound engineer specialising in surround and 3D acoustic recording. Before joining Huddersfield in 2010, Dr. Lee was Senior Research Engineer in audio R&D at LG Electronics for five years. He has been an active member of the Audio Engineering Society since 2001.

11:00-11:30am

Maksims Mironovs

Title: Localisation accuracy and consistency of real sound sources in a practical environment

Abstract: Human ability to localise sound sources in three-dimensional (3D) space has been studied thoroughly in past decades; however, only a few studies have tested its full capabilities across a wide range of vertical and horizontal positions, and those studies do not reflect real-life situations where room effects are present. Additionally, there is not enough data for the assessment of modern multichannel loudspeaker setups such as Dolby Atmos or Auro 3D. This talk will provide an overview of a practical localisation study performed at the Applied Psychoacoustics Lab, as well as an insight into the human localisation mechanism in 3D space. Furthermore, a new response method for localisation studies will be presented and analysed.

Bio: Maksims Mironovs is a PhD student at the University of Huddersfield’s Applied Psychoacoustics Lab. In 2016 he obtained a First Class BSc degree with Honours in Music Technology and Audio Systems at the University of Huddersfield. During his placement year at Fraunhofer IIS, he was involved in multichannel audio research and the development of VST plugins. The primary focus of his research is the human auditory localisation mechanism in the context of 3D audio reproduction. Additionally, he is an experienced audio software developer and currently works as a part-time lecturer and research assistant.

11:30am-12:00pm

Connor Millns

Title: An overview of capture techniques for Virtual Reality soundscapes

Abstract: This presentation will cover the history of soundscape capture techniques and then introduce current recording practices for soundscape in VR. The results from an investigation into low-level spatial attributes that highlight differences between VR capture techniques will be discussed. The presentation will conclude with a discussion of future work on the influence of audio-visual interaction and acoustics on the perception of audio quality in the context of soundscape.

Bio: Connor Millns is a PhD student at the APL investigating capture techniques for Virtual Reality soundscapes and the influence of audio-visual interaction on Quality of Experience. He completed the BSc (Hons) Music Technology and Audio Systems course at the University of Huddersfield, including an industry year at Fraunhofer IIS. For his final-year bachelor’s project, Connor investigated the spatial attributes of various microphone techniques for virtual reality.


Concert on the Holodeck: Connecting Artists

Join NYU Music Technology for an evening of collaboration featuring distributed music and remote dancers. The concert will involve several combinations of remote and on-stage musicians and dancers connected through the internet, a stepping stone towards augmented performances and virtual connections. Music selections will include classical, jazz, and percussion-only pieces.

WHEN: Sunday, April 29th, @ 3pm

WHERE: Frederick Loewe Theatre, 35 West 4th Street

Free and open to the public!



Prepping Holo-dancers for motion capture!

MARL talk by Yotam Mann


“Making Music Interactive”

When: Thursday, April 19th @1pm

Where:  6th floor Conference Room (609), 35 West 4th Street

Abstract: Yotam Mann makes music that engages listeners through interactivity. His work takes the form of websites, installations, and instruments. He is also the author of the open source Web Audio framework Tone.js, which aims to enable other music creators to experiment with interactivity. In this talk, he discusses some of his techniques and motivations in creating interactive music.

Bio: Yotam Mann is a composer and programmer. He creates interactive musical experiences in which listeners can explore, create, and play with sound. While studying jazz piano at UC Berkeley, Yotam stumbled across the Center for New Music and Audio Technologies (CNMAT), which opened his eyes to a new way of making music with technology and eventually inspired him to earn a second degree in Computer Science. He is the author of the most popular open source library for making interactive music in the browser, Tone.js. Now based in New York, Yotam continues to work at the intersection of music and technology, creating interactive musical experiences in the form of apps, websites, and installations. He was part of the inaugural class at NEW INC, is an adjunct professor at ITP, NYU Tisch, and was a 2016 Creative Capital Grantee in Emerging Fields.

 


MARL Presents: Emilia Gómez

“Music Information Retrieval: From Accuracy to Understanding, from Machine Intelligence to Human Welfare”

When: Friday April 13th, @11am

Where: 6th floor conference room (609), 35 W 4th Street

In this seminar, Gómez will provide an overview of her research in the field of Music Information Retrieval (MIR), which aims to facilitate access to music in a world of overwhelming musical choice.

Emilia Gómez is a researcher at the Joint Research Centre, European Commission and the Music Technology Group at Universitat Pompeu Fabra in Barcelona, Spain. Her research background is in the music information retrieval (MIR) field. She tries to understand the way people describe music and to emulate these descriptions with computational models that learn from large music collections.

 


Music Tech Open House 2018


The NYU Music Technology program invites you to join us at our annual open house, taking place on May 5th, 2018! The open house offers a number of ways to get involved: showcase your work, get some industry-standard experience, and win some prizes! Submit your work via the link below: https://docs.google.com/forms/d/e/1FAIpQLSd3LTHLSP1XkZC4x_9PHDMqQeC3v2hdkwDyDf3SEdt4E0z6Cw/viewform


 

The NYU Music Technology program invites you to join us at our annual open house!  This event showcases our awesome students from the undergraduate, graduate, and post-graduate programs.  Current students are encouraged to submit their work in the contests and exhibits described below.  Friends and family are welcome to attend and enjoy the food, refreshments, and fun!

The Open House will be hosted on May 5th, 2018 in the Steinhardt Education Building at 35 West 4th Street, 10012. The reception will be on the 6th floor, with other events and showcases on the 7th and 8th floors.

EVERYONE IS WELCOME!

The open house will play host to a variety of student and departmental showcases.  We encourage you to invite your friends and family to attend.  Live music, installation experiences, discussion, and critical review sessions will all be open for viewers.

PRELIMINARY EVENT SCHEDULE

 

9am – 12:30pm: Senior Capstone Presentations (Students/Faculty Only)

2pm – 5:00pm: General Presentations & Project Displays

3:30pm – 5:00pm: Recording Competition

5:30pm – 7:00pm: Concert

Gear Spotlight: LEAP Motion Controller & GECO

LEAP Motion is an American company determined to create more fluid interaction between users and their computers. Since its launch in 2012, the company has released a few iterations of its ultracompact sensors, and we’ve got a few of them here in the Music Tech program!

The LEAP Motion sensor has two cameras and three infrared LEDs packed into its tiny body to track user movement with extreme precision. Tracking data streams at roughly 200 frames per second (about 5 ms per frame), low enough latency to feel instantaneous, which makes it an obvious candidate for music software development. The sensor even responds to changes as small as 0.7 millimeters, offering finer resolution than a standard MIDI controller.

LEAP Motion has been quite open with developers, and there’s already a strong selection of software for what’s still a relatively new product. The Airspace app store offers these programs for purchase, including GECO, a MIDI translator for the LEAP Motion signal. Using GECO, users can control up to 40 different parameters in their DAW with both hands, across 16 different MIDI channels. You might think of this controller as a fully programmable theremin.

The software allows you to customize the range of the MIDI signal. For instance, you might not want the software to start responding until your hand is a foot above the sensor, and you may want it to stop responding after two feet. All of this can be specified within GECO, which will even recognize a closed hand versus an open one, among various other positions. The only qualm one might have with this flexibility is the same one we’ve mentioned on this blog in the past: so many options can make setup daunting. However, once you’ve got all of your gestures assigned, the possibilities for expression and showmanship using the LEAP Motion controller are really exciting. Check out some videos of the controller with Ableton below!
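For the curious, here is a hypothetical sketch of that kind of ranged gesture-to-MIDI mapping in Python using the mido library. This is not GECO’s implementation; the active range, output port, and CC number are all placeholders.

```python
import mido

LOW, HIGH = 0.30, 0.61  # respond only between ~1 and ~2 feet (in metres)

def height_to_cc(hand_height_m):
    """Clamp the hand height to the active range and scale it
    to a 0-127 MIDI continuous-controller value."""
    clamped = min(max(hand_height_m, LOW), HIGH)
    return round((clamped - LOW) / (HIGH - LOW) * 127)

# Send the mapped value as CC 1 (mod wheel) on channel 0; in practice
# the height reading would come from the LEAP Motion SDK each frame.
port = mido.open_output()
port.send(mido.Message('control_change', channel=0, control=1,
                       value=height_to_cc(0.45)))
```

Clamping to LOW/HIGH reproduces the “only respond between one and two feet” behavior described above; GECO lets you configure dozens of such mappings at once.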

Useful Links:

LEAP Motion Max For Live

GECO Review

Point Blank GECO/Ableton Demo

Theremin Style Woodwind Performance