MARL talk by Dan Ellis (Google)

When: Tuesday, May 15th @10am

Where: 6th floor Conference Room (609), 35 West 4th Street

Title: “Supervised and Unsupervised Semantic Audio Representations”

Abstract: The Sound Understanding team at Google has been developing automatic sound classification tools with the ambition to cover all possible sounds – speech, music, and environmental.  I will describe our application of vision-inspired deep neural networks to the classification of our new ‘AudioSet’ ontology of ~600 sound events.  I’ll also talk about recent work using triplet loss to train semantic representations — where semantically ‘similar’ sounds end up close by in the representation — from unlabeled data.
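For readers unfamiliar with the triplet approach, here is a minimal, illustrative sketch of a triplet loss over audio embeddings in PyTorch. This is not the Sound Understanding team's actual code; sampling positives and negatives from unlabeled audio (e.g. treating nearby patches of the same clip as "similar") is one common heuristic, and all names and dimensions below are hypothetical.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss: pull the anchor's embedding toward the
    'similar' (positive) example and push it at least `margin`
    farther from the 'dissimilar' (negative) example."""
    d_pos = F.pairwise_distance(anchor, positive)  # anchor-to-positive distance
    d_neg = F.pairwise_distance(anchor, negative)  # anchor-to-negative distance
    return F.relu(d_pos - d_neg + margin).mean()

# Hypothetical usage: a batch of 32 triplets of 128-dim embeddings,
# e.g. produced by some encoder over log-mel spectrogram patches,
# where anchor/positive come from the same unlabeled clip and the
# negative from a different one.
anchor, positive, negative = (torch.randn(32, 128) for _ in range(3))
print(triplet_loss(anchor, positive, negative))
```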

Bio:  Dan Ellis joined Google in 2015 after 15 years as a faculty member in the Electrical Engineering department at Columbia University, where he headed the Laboratory for Recognition and Organization of Speech and Audio (LabROSA). He has over 150 publications in the areas of audio processing, speech recognition, and music information retrieval.

Joint work with Aren Jansen, Manoj Plakal, Ratheet Pandya, Shawn Hershey, Jiayang Liu, Channing Moore, Rif A. Saurous



Welcome Professor Brian McFee

NYU Music Technology welcomes Dr. Brian McFee as an assistant professor in Music Technology and Data Science, effective Fall 2018!

Dr. McFee has been at NYU for the past few years as a fellow of the Center for Data Science. Previously, he was a postdoctoral research scholar in the Center for Jazz Studies and LabROSA at Columbia University, and before that he conducted graduate research at UCSD.

Dr. McFee develops machine learning tools to analyze music and multimedia data. His work spans recommender systems, image and audio analysis, similarity learning, cross-modal feature integration, and automatic annotation.

Electronic Music Performance Concerts

This week! Two Electronic Music Performances, Wednesday and Thursday nights!
Come hear students of Dafna Naphtali’s two Electronic Music Performance classes, as they play new electronic music, compositions, experiments, and improvisations.
Wednesday May 9th, 8pm
 
Electric Pizza
NYU Electronic Music Performance – Section 001
with guest Hans Tammen conducting from a Dark Circuits Orchestra score, plus samples, drones, time-travel, facial gestures as control, and a self-designed speaker instrument.
Starring the creative and all-star students: Tristan Alleyne, Sam Grossman, Quinton Ashley, Angel E. Daniels, Ned Dana, Erez Aviram, Daksh Bhatia, Brendan Prednis, Harrison Shimazu, Emma Camell, John Sloan, Pari Songmuang. Directed by Dafna Naphtali
NYU Education Building
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Thursday May 10th, 8pm
NYU Electronic Music Performance – Section 002
with guest computer conducting by Mohamed Kubbara,
plus music made with plunderphonics, waving hands, secret signals, sportscasters, and more…
Starring the creative and all-star students: Gregory Borodulin, Max Chidzero, Miles Grossenbacher, Thomas Miritello, Trevor Rivkin, Ethan Uno-Portillo, Nick Royall, Greg Tock, Jake Sandakly, Emily Thaler, Zoltán Sindhu. Directed by Dafna Naphtali

Spring 2018 Graduate Thesis Defense Presentations

With the end of the semester right around the corner, it's time for our graduate thesis defense presentations. Projects to be presented include original hardware and software development, cognition research, audio analysis, assistive audio technologies, acoustic design, and much more.

Schedule (see link below)

Check out the full schedule with times and thesis titles HERE

Live Stream

All defense presentations will be live streamed! Friends and family from near and far are welcome to tune in. Feel free to share THIS link!


MARL talk by Tlacael Esparza

When: Thursday, May 3rd @1pm

Where: 6th floor Conference Room (609), 35 West 4th Street

Title: Sensory Percussion and the Future of Drumming

Description: Sensory Percussion is a platform for creating and performing music through acoustic control of digital sound. With the mission of bridging one of the oldest forms of musical expression with the new, Sensory Percussion translates the nuance of acoustic drumming into a flexible and expressive control language for electronic processes, allowing for a new frontier in performance and sound design. This presentation will include a technology overview and demonstration of Sensory Percussion’s capabilities.

Bio: Tlacael Esparza is a co-founder of the music tech startup Sunhouse and creator of Sensory Percussion, a radically new system for expressive electronic percussion being used on stages and in studios around the world. Tlacael is a Los Angeles native based in New York City, and a professional drummer with over fifteen years of experience. He has a background in mathematics and is an NYU Music Tech alumnus (2013); during his studies he focused on applications of machine learning in music information retrieval. With Sunhouse, he is dedicated to building a future where music technology supports musicians and their creative endeavors.


MARL talks by Hyunkook Lee and members of Applied Psychoacoustics Lab

When: Friday, May 4th @10am-12pm

Where:  6th floor Conference Room (609), 35 West 4th Street

10:00-11:00am

Dr. Hyunkook Lee

Title: Introduction to 3D Audio Research at the APL

Abstract: This talk will overview recent 3D audio research conducted at the Applied Psychoacoustics Lab (APL) at the University of Huddersfield. The APL, established by Dr Hyunkook Lee in 2013, aims to bridge the gap between fundamental psychoacoustics and audio engineering. The talk will first describe some of the fundamental research conducted on various perceptual aspects of 3D audio, followed by the introduction of practical engineering methods developed based on the research. The topics to be covered include: vertical stereophonic perception, 3D and VR microphone techniques, vertical interchannel decorrelation, the phantom image elevation effect, a new time-level trade-off function, perceptually motivated amplitude panning (PMAP), virtual hemispherical amplitude panning (VHAP), Perceptual Band Allocation (PBA), etc. Additionally, the APL's software packages for audio research will be introduced.
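As background for the panning topics above, here is a minimal sketch of the textbook constant-power amplitude pan law in Python. This is the generic baseline that methods such as PMAP and VHAP refine with perceptually derived gains, not the APL's algorithms themselves; the function name and test signal are illustrative.

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Textbook constant-power (sine/cosine) amplitude panning.
    pan = 0.0 is hard left, 1.0 is hard right; total power is held
    constant across pan positions, so loudness stays roughly even."""
    theta = pan * np.pi / 2.0
    left = np.cos(theta) * mono    # left-channel gain
    right = np.sin(theta) * mono   # right-channel gain
    return np.stack([left, right])

# Illustrative usage: pan a one-second 1 kHz tone halfway to the right.
sr = 48000
t = np.arange(sr) / sr
stereo = constant_power_pan(np.sin(2 * np.pi * 1000 * t), pan=0.75)
```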

Bio: Dr Hyunkook Lee is the Leader of the Applied Psychoacoustics Lab (APL) and Senior Lecturer (i.e. Associate Professor) in Music Technology at the University of Huddersfield, UK. His current research focuses on spatial audio psychoacoustics, recording and reproduction techniques for 3D and VR audio, and interactive virtual acoustics. He is also an experienced sound engineer specialising in surround and 3D acoustic recording. Before joining Huddersfield in 2010, Dr. Lee was Senior Research Engineer in audio R&D at LG Electronics for five years. He has been an active member of the Audio Engineering Society since 2001.

11:00-11:30am

Maksims Mironovs

Title: Localisation accuracy and consistency of real sound sources in a practical environment

Abstract: The human ability to localise sound sources in three-dimensional (3D) space has been studied thoroughly over the past decades; however, only a few studies have tested its full capabilities across a wide range of vertical and horizontal positions, and these studies do not reflect real-life situations where the room effect is present. Additionally, there is not enough data for the assessment of modern multichannel loudspeaker setups, such as Dolby Atmos or Auro-3D. This talk will provide an overview of a practical localisation study performed at the Applied Psychoacoustics Lab, as well as an insight into the human localisation mechanism in 3D space. Furthermore, a new response method for localisation studies will be presented and analysed.

Bio: Maksims Mironovs is a PhD student at the University of Huddersfield's Applied Psychoacoustics Lab. In 2016 he obtained a First-class BSc degree with Honours in Music Technology and Audio Systems at the University of Huddersfield. During his placement year at Fraunhofer IIS, he was involved in multichannel audio research and the development of VST plugins. The primary focus of his research is the human auditory localisation mechanism in the context of 3D audio reproduction. Additionally, he is an experienced audio software developer and currently works as a part-time lecturer and research assistant.

11:30am-12:00pm

Connor Millns

Title: An overview of capture techniques for Virtual Reality soundscapes

Abstract: This presentation will cover the history of soundscape capture techniques and then introduce current recording practices for soundscapes in VR. The results from an investigation into low-level spatial attributes that highlight differences between VR capture techniques will be discussed. The presentation will conclude with a discussion of future work on the influence of audio-visual interaction and acoustics on the perception of audio quality in the context of soundscape.

Bio: Connor Millns is a PhD student at the APL investigating capture techniques for Virtual Reality soundscapes and the influence of audio-visual interaction on Quality of Experience. He completed the BSc (Hons) Music Technology and Audio Systems course at the University of Huddersfield, with an industry year at Fraunhofer IIS. In his final-year bachelor's project, Connor investigated the spatial attributes of various microphone techniques for virtual reality.
