WCBS News Radio Covers SONYC Project

A huge thank you to WCBS News Radio for covering the Sounds of New York City (SONYC) project! SONYC is a National Science Foundation funded research project, run in conjunction with NYU MARL and the NYU Center for Urban Science and Progress, that monitors NYC noise levels through a citywide sensor network combined with machine learning and machine listening techniques.

“The noise levels in the city are incredibly high,” says Charlie Mydlarz, the senior research scientist for the Sounds of New York City (SONYC) project at the NYU Center for Urban Science and Progress. “In certain locations they are at levels that the World Health Organization considers to be harmful to health.”

Check out the video here:

https://wcbs880.radio.com/articles/nyus-noise-study-new-york-city-sweet-spot-mike-sugerman?fbclid=IwAR2Z_U-O6JXpy7aTmltlixgeMkrMbykKB5XJZxG9UsIOsfbIil6DIbIlGcI


MARL Talk: Serge Belongie

“From Visipedia to PointAR” by Serge Belongie

Abstract:

In this talk Prof. Belongie will provide an overview of his group’s research projects at Cornell Tech involving Computer Vision, Machine Learning, and Human-in-the-Loop Computing. The talk will cover projects involving identification of plant and animal species (Visipedia) and learning perceptual embeddings of food (SNaCK). It will conclude with a preview of a new effort to build a projector-based human-computer interaction apparatus that allows computers to point to physical objects in the real world (PointAR).

Serge Belongie received a B.S. (with honor) in EE from Caltech in 1995 and a Ph.D. in EECS from Berkeley in 2000. While at Berkeley, his research was supported by an NSF Graduate Research Fellowship. From 2001 to 2013 he was a professor in the Department of Computer Science and Engineering at the University of California, San Diego.

He is currently a professor at Cornell Tech and the Department of Computer Science at Cornell University. His research interests include Computer Vision, Machine Learning, Crowdsourcing and Human-in-the-Loop Computing. He is also a co-founder of several companies including Digital Persona, Anchovi Labs and Orpix. He is a recipient of the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review “Innovators Under 35” Award and the Helmholtz Prize for fundamental contributions in Computer Vision.

MARL Talk: Research at the Sonic Arts Research Centre, Belfast (SARC)

Abstract: The talk will provide insight into current research at the Sonic Arts Research Centre, Belfast (SARC). Dr Franziska Schroeder and Prof Pedro Rebelo will present work from SARC in areas including inclusive VR design, and physics-based simulation and synthesis of mechano-acoustic systems and analog circuitry for the development of new digital musical instruments.
They will also give insights into practice-based sonic arts research projects, including their mobile listening app ‘LiveSHOUT’, as well as their work in socially engaged sonic arts practice.

Dr Franziska Schroeder (Germany / UK)

Originally from Germany, Franziska is based at the Sonic Arts Research Centre, Queen’s University Belfast, where she holds the post of senior lecturer in music and sonic arts.

Franziska trained as a contemporary saxophonist in Australia, and in 2006 completed her PhD at the University of Edinburgh, where her research focused on performance, digital technologies and theories of embodiment. She has published widely in diverse international journals and has given several invited keynote speeches on the topic of performance and emerging technological platforms.

Franziska has published a book on performance and the threshold, a book on user-generated content and a volume on music improvisation (Soundweaving, 2014). She performs as saxophonist in a variety of contexts and has released several CDs on the Creative Source label, as well as a recording on the SLAM label with a semi-autonomous technological artifact. In 2015 she released an album on the pfmentum label with two Brazilian musicians, and 2016 saw the release of a Bandcamp album with her female trio Flux.

Throughout 2018 Franziska is leading a research team at Queen’s University on a project that investigates immersive technologies in collaboration with disabled musicians and Belfast’s only professional contemporary music ensemble, the Hard Rain Soloist Ensemble (HRSE). As part of this team, Franziska designed a new VR narrative work entitled “Embrace”. This piece critically investigates ideas of disability, identity, and empathy. “Embrace” is the first showcase piece created at the Sonic Arts Research Centre within its newly established research group “SARC_Immerse”, a group that has positioned itself as a leader in the field of high-quality audio in virtual environments.

Franziska leads the Performance without Barriers research group, a group of PhD and post-doctoral students investigating inclusive music technologies. At Queen’s University Belfast, Franziska teaches improvisation, digital performance and critical theory.

Prof Pedro Rebelo (Portugal / UK)

Pedro is a composer, sound artist and performer working primarily in chamber music, improvisation and installation with new technologies. In 2002, he was awarded a PhD by the University of Edinburgh where he conducted research in both music and architecture.

His music has been presented in venues such as the Melbourne Recital Hall, National Concert Hall Dublin, Queen Elizabeth Hall, Ars Electronica, Casa da Música, and in events such as Weimarer Frühjahrstage für zeitgenössische Musik, Wien Modern Festival, Cynetart and Música Viva. His work as a pianist and improviser has been released by Creative Source Recordings and he has collaborated with musicians such as Chris Brown, Mark Applebaum, Carlos Zingaro, Evan Parker and Pauline Oliveros.

His writings reflect his approach to design and creative practice in a wider understanding of contemporary culture and emerging technologies. Pedro has been Visiting Professor at Stanford University (2007) and in 2012 he was appointed Professor at Queen’s and awarded the Northern Bank’s “Building Tomorrow’s Belfast” prize. He is a professor of sonic arts at the Sonic Arts Research Centre, Belfast.

MARL talk by Dan Ellis (Google)

When: Tuesday, May 15th @10am

Where: 6th floor Conference Room (609), 35 West 4th Street

Title: “Supervised and Unsupervised Semantic Audio Representations”

Abstract: The Sound Understanding team at Google has been developing automatic sound classification tools with the ambition to cover all possible sounds – speech, music, and environmental.  I will describe our application of vision-inspired deep neural networks to the classification of our new ‘AudioSet’ ontology of ~600 sound events.  I’ll also talk about recent work using triplet loss to train semantic representations — where semantically ‘similar’ sounds end up close by in the representation — from unlabeled data.
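
To make the triplet-loss idea concrete, here is a minimal, generic sketch (not the Sound Understanding team’s actual training code; the function names, 2-D embeddings, and margin value are illustrative assumptions). The loss pulls an anchor sound’s embedding closer to a semantically similar “positive” clip than to a dissimilar “negative” clip, which is how similar sounds end up close by in the representation without needing labels.

```typescript
// Illustrative hinge-style triplet loss over fixed-length embedding vectors.
// Generic sketch of the technique, not the team's implementation.

function squaredDistance(a: number[], b: number[]): number {
  return a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0);
}

function tripletLoss(
  anchor: number[],   // embedding of a reference sound clip
  positive: number[], // embedding of a semantically similar clip
  negative: number[], // embedding of a dissimilar clip
  margin = 1.0        // assumed margin value, for illustration only
): number {
  // Loss reaches zero once the negative is at least `margin` further
  // (in squared distance) from the anchor than the positive is.
  return Math.max(
    0,
    squaredDistance(anchor, positive) - squaredDistance(anchor, negative) + margin
  );
}

// Toy example with 2-D embeddings: two similar clips and one dissimilar clip.
console.log(tripletLoss([0.1, 0.9], [0.2, 0.8], [0.3, 0.6])); // 0.89
```

In practice a deep network produces the embeddings and this loss is minimized over many sampled triplets, so the geometry of the learned space reflects semantic similarity.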

Bio:  Dan Ellis joined Google in 2015 after 15 years as a faculty member in the Electrical Engineering department at Columbia University, where he headed the Laboratory for Recognition and Organization of Speech and Audio (LabROSA). He has over 150 publications in the areas of audio processing, speech recognition, and music information retrieval.

Joint work with Aren Jansen, Manoj Plakal, Ratheet Pandya, Shawn Hershey, Jiayang Liu, Channing Moore, Rif A. Saurous


MARL talk by Tlacael Esparza

When: Thursday, May 3rd @1pm

Where: 6th floor Conference Room (609), 35 West 4th Street

Title: Sensory Percussion And The Future of Drumming

Description: Sensory Percussion is a platform for creating and performing music through acoustic control of digital sound. With the mission of bridging one of the oldest forms of musical expression with the new, Sensory Percussion translates the nuance of acoustic drumming into a flexible and expressive control language for electronic processes, allowing for a new frontier in performance and sound design. This presentation will include a technology overview and demonstration of Sensory Percussion’s capabilities.

Bio: Tlacael Esparza is a co-founder of the music tech startup Sunhouse and creator of Sensory Percussion, a radically new system for expressive electronic percussion being used on stages and in studios around the world. Tlacael is a Los Angeles native based in New York City and a professional drummer with over fifteen years of experience. He has a background in mathematics and is an alumnus of the NYU Music Technology program (2013), where he focused on applications of machine learning in music information retrieval. With Sunhouse, he is dedicated to building a future where music technology supports musicians and their creative endeavors.

MARL talks by Hyunkook Lee and members of Applied Psychoacoustics Lab

When: Friday, May 4th @10am-12pm

Where:  6th floor Conference Room (609), 35 West 4th Street

10:00-11:00am

Dr. Hyunkook Lee

Title: Introduction to 3D Audio Research at the APL

Abstract: This talk will overview recent 3D audio research conducted at the Applied Psychoacoustics Lab (APL) at the University of Huddersfield. The APL, established by Dr Hyunkook Lee in 2013, aims to bridge the gap between fundamental psychoacoustics and audio engineering. The talk will first describe some of the fundamental research conducted on various perceptual aspects of 3D audio, followed by an introduction to practical engineering methods developed based on that research. The topics to be covered include: vertical stereophonic perception, 3D and VR microphone techniques, vertical interchannel decorrelation, the phantom image elevation effect, a new time-level trade-off function, perceptually motivated amplitude panning (PMAP), virtual hemispherical amplitude panning (VHAP), Perceptual Band Allocation (PBA), etc. Additionally, the APL’s software packages for audio research will be introduced.

Bio: Dr Hyunkook Lee is the Leader of the Applied Psychoacoustics Lab (APL) and Senior Lecturer (i.e. Associate Professor) in Music Technology at the University of Huddersfield, UK. His current research focuses on spatial audio psychoacoustics, recording and reproduction techniques for 3D and VR audio, and interactive virtual acoustics. He is also an experienced sound engineer specialising in surround and 3D acoustic recording. Before joining Huddersfield in 2010, Dr. Lee was Senior Research Engineer in audio R&D at LG Electronics for five years. He has been an active member of the Audio Engineering Society since 2001.

11:00-11:30am

Maksims Mironovs

Title: Localisation accuracy and consistency of real sound sources in a practical environment

Abstract: The human ability to localise sound sources in three-dimensional (3D) space has been thoroughly studied over the past decades; however, only a few studies have tested its full capabilities across a wide range of vertical and horizontal positions. Moreover, these studies do not reflect real-life situations where room effects are present. Additionally, there is not enough data for the assessment of modern multichannel loudspeaker setups, such as Dolby Atmos or Auro-3D. This talk will provide an overview of a practical localisation study performed at the Applied Psychoacoustics Lab, as well as an insight into the human localisation mechanism in 3D space. Furthermore, a new response method for localisation studies will be presented and analysed.

Bio: Maksims Mironovs is a PhD student at the University of Huddersfield’s Applied Psychoacoustics Lab. In 2016 he obtained a First Class BSc (Hons) degree in Music Technology and Audio Systems at the University of Huddersfield. During his placement year he worked at Fraunhofer IIS, where he was involved in multichannel audio research and the development of VST plugins. The primary focus of his research is the human auditory localisation mechanism in the context of 3D audio reproduction. Additionally, he is an experienced audio software developer and currently works as a part-time lecturer and research assistant.

11:30am-12:00pm

Connor Millns

Title: An overview of capture techniques for Virtual Reality soundscapes

Abstract: This presentation will cover the history of soundscape capture techniques and then introduce current recording practices for soundscapes in VR. The results from an investigation into low-level spatial attributes that highlight differences between VR capture techniques will be discussed. The presentation will conclude with a discussion of future work on the influence of audio-visual interaction and acoustics on the perception of audio quality in the context of soundscape.

Bio: Connor Millns is a PhD student at the APL investigating capture techniques for Virtual Reality soundscapes and the influence of audio-visual interaction on Quality of Experience. He completed the BSc (Hons) Music Technology and Audio Systems course at the University of Huddersfield, with an industry year at Fraunhofer IIS. In his final-year bachelor’s project, Connor undertook an investigation into the spatial attributes of various microphone techniques for virtual reality.

MARL talk by Yotam Mann

“Making Music Interactive”

When: Thursday, April 19th @1pm

Where:  6th floor Conference Room (609), 35 West 4th Street

Abstract: Yotam Mann makes music that engages listeners through interactivity. His work takes the form of websites, installations, and instruments. He is also the author of the open source Web Audio framework Tone.js, which aims to enable other music creators to experiment with interactivity. In this talk, he discusses some of his techniques and motivations in creating interactive music.
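
As a flavour of the browser-based interactivity Tone.js enables, here is a minimal sketch, assuming a recent Tone.js release (the exact API surface has changed across versions): a synth note is triggered on each listener click, with the audio context started from the user gesture as browsers require.

```typescript
import * as Tone from "tone";

// Minimal interactive sketch: play a note whenever the listener clicks the page.
const synth = new Tone.Synth().toDestination(); // route the synth to the speakers

document.addEventListener("click", async () => {
  await Tone.start(); // browsers only allow audio to start after a user gesture
  synth.triggerAttackRelease("C4", "8n"); // middle C for an eighth note
});
```

Interactive pieces typically build on this basic pattern by layering scheduling (e.g. Tone.Transport or Tone.Loop) and effects on top of listener input.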

Bio: Yotam Mann is a composer and programmer. He creates interactive musical experiences in which listeners can explore, create and play with sound. While studying jazz piano at UC Berkeley, Yotam stumbled across the Center for New Music and Audio Technologies (CNMAT), which opened his eyes to a new way of making music with technology that eventually inspired him to earn a second degree in Computer Science. He is the author of the most popular open source library for making interactive music in the browser, Tone.js. Now based in New York, Yotam continues to work at the intersection of music and technology, creating interactive musical experiences in the form of apps, websites, and installations. He was part of the inaugural class at NEW INC, has taught as an adjunct professor at ITP, NYU Tisch, and is a 2016 Creative Capital Grantee in Emerging Fields.

MARL Presents: Emilia Gómez

“Music Information Retrieval: From Accuracy to Understanding, from Machine Intelligence to Human Welfare”

When: Friday, April 13th @11am

Where: 6th floor conference room (609), 35 W 4th Street

In this seminar Gómez will provide an overview of her research in the field of Music Information Retrieval (MIR), which aims at facilitating access to music in a world with overwhelming musical choices.

Emilia Gómez is a researcher at the Joint Research Centre of the European Commission and the Music Technology Group at Universitat Pompeu Fabra in Barcelona, Spain. Her research background is in the music information retrieval (MIR) field. She tries to understand the way people describe music and to emulate these descriptions with computational models that learn from large music collections.
