Music Technology Open House 2019

The NYU Music Technology program invites you to join us at our annual open house!  This event showcases our awesome students from the undergraduate, graduate, and post-graduate programs.  Current students are encouraged to submit their work in the contests and exhibits described below.  Friends and family are welcome to attend and enjoy the food, refreshments, and fun! 

Music Technology Open House 2018 Promo Video
The Open House will be hosted on May 11th, 2019 in the
Steinhardt Education Building at 35 West 4th Street, NYC 10014.
The reception will be located on the 6th floor, with other events and showcases on the 7th and 8th floors.

EVERYONE IS WELCOME!
The open house will play host to a variety of student and departmental showcases.  We encourage you to invite your friends and family to attend.  Live music, installation experiences, discussion, and critical review sessions will all be open for viewers.

PRELIMINARY EVENT SCHEDULE
9:00am – 12:30pm: Senior Capstone Presentations (Students/Faculty Only)
2:00pm – 5:00pm: General Presentations & Project Displays
3:30pm – 5:00pm: Recording Competition
5:30pm – 7:00pm: Concert

Dr. Agnieszka Roginska 2019 AES President Elect

 

At the beginning of 2019, Music Technology’s Dr. Agnieszka Roginska officially became the AES’s newest President-Elect; she will assume the AES presidency the following year. We wish Dr. Roginska the best this year and we look forward to all of her amazing work with the AES.

Q: What are some major decisions the AES executive board has to discuss?

The board itself makes the larger decisions, such as the vision of the society. What I think a lot of people don’t see is that throughout the three days of AES, there are nonstop meetings of subcommittees. For example, there’s a convention policy committee that meets and decides where we’re going to be, there’s a conference policy committee that decides on specific topics based on proposals from different people, and there’s a big standards committee that decides on all the AES standards, like formatting standards.

Q: What motivated you to take on this challenge?

I have been involved with the AES for many years, so I feel like I have in-depth knowledge of what the AES is. I was also thinking a lot about the imbalance in terms of the diversity that exists in the AES and the entire audio industry. As you know, we’ve been paying attention to SWiTCH (Society of Women in Technology) here at NYU, and I’ve always tried to be a champion for women in the audio industry. And in this past year, I had a personal revelation: if I wanted to see more women in leadership roles in audio, I said, “you know what, I should be one of those women”.

Q: In what ways can Music Technology students benefit from being involved with the AES? 

In my opinion, any student who’s involved in any type of audio should be an AES member. It is designed to allow all students to be a part of the AES. The AES is the largest conglomerate of audio professionals in the world, so if you are a member, you now have access to an enormous network of professionals, companies, and activities that involve your field. The best thing students can do is to enter this network as early as possible. At first it might be really overwhelming when you go to the conventions, but when you enter this network and meet the people, you realize that they’re not only great at what they do, but they’re also really nice people. And there’s always been a culture at AES of “passing it down”: the more senior members of the society want to give back and mentor the younger members. You soon realize how easy it is to enter the network.

Congratulations once again, Dr. Roginska!

The 2019 Audio Engineering Society Board of Governors Announcement:

http://www.clynemedia.com/AES/Election_Results_2018/AES_Election_Results_2018.html

Dr. Roginska’s AES member profile:

http://www.aes.org/member/profile.cfm?ID=557211590

 


Music Tech Alumna Emily Lazar Wins Grammy

Congratulations to Music Tech Alumna (MM ’96) Emily Lazar for becoming the first woman mastering engineer ever to take home the Best Engineered Album (Non-Classical) Grammy for her work on Beck’s “Colors”.

Lazar has worked on over 2,000 albums with artists such as David Bowie, Foo Fighters, Destiny’s Child, Paul McCartney, and more! Emily Lazar earned a Bachelor of Arts Degree in Creative Writing and Music from Skidmore College and later attended NYU Steinhardt’s Music Technology program to get her Master’s degree. While at NYU, Lazar pursued Tonmeister studies and was awarded a Graduate Fellowship.

See the music video for Beck’s “Colors”.

Check out Refinery 29’s story on Emily Lazar’s Historic Win.

 

Grad Student David Baylies Performs at Electrobrass Conference


At the beginning of November, Music Technology master’s student David Baylies performed at the NYC Electrobrass Conference, showcasing “Stella”, an electronic trumpet that David began building in the summer of 2016 to allow trumpet players to use synthesized sounds within DAWs without having to change their trumpet technique. The Electrobrass Conference focuses on the advancement of American music through the combination of brass instruments and live electronics. Throughout the weekend, conference attendees had access to amazing clinics, seminars, and concerts given by some of today’s greatest musical minds!

At the conference, David performed an improvisational piece on “Stella”, a snippet of which can be heard here!

 

To find out more information about David’s work on “Stella”, you can find him at his website: https://www.openstella.com/

Congratulations David on your performance!

WCBS News Radio Covers SONYC Project


A huge thank you to WCBS News Radio for covering the Sounds of New York City (SONYC) project! SONYC is a National Science Foundation-funded research project, run in conjunction with NYU MARL and the NYU Center for Urban Science and Progress, that monitors NYC noise levels through a complex sensor network and machine learning and machine listening techniques.

“The noise levels in the city are incredibly high,” says Charlie Mydlarz, the senior research scientist for the Sounds of New York City (SONYC) project at the NYU Center for Urban Science and Progress. “In certain locations they are at levels that the World Health Organization considers to be harmful to health.”
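At its core, this kind of monitoring turns raw microphone samples into a decibel level and flags readings above a threshold. A minimal illustrative sketch of that step (not the actual SONYC code; the calibration offset and the harmful-level threshold here are hypothetical placeholders):

```python
import numpy as np

def level_db(samples, calibration_offset=94.0):
    """Convert raw samples (normalized to [-1, 1]) into an approximate
    sound level in dB. `calibration_offset` maps digital full scale to
    dB SPL and would be measured per sensor in practice."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(rms) + calibration_offset

# A full-scale 440 Hz sine: its RMS is 1/sqrt(2), i.e. about -3 dB
# relative to full scale, so the level is calibration_offset - 3 dB.
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440.0 * t)

spl = level_db(sine)
print(round(spl, 1))     # 91.0 with the default offset

# Flag levels above an (illustrative) harmful-exposure threshold.
HARMFUL_DB = 85.0
print(spl > HARMFUL_DB)  # True for this loud test tone
```

A real deployment would also apply frequency weighting and per-sensor calibration, which are omitted here for brevity.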

Check out the video here:

https://wcbs880.radio.com/articles/nyus-noise-study-new-york-city-sweet-spot-mike-sugerman

 

 


WSN Covers the Holodeck


A huge thank you to NYU’s independent student newspaper, Washington Square News, for highlighting Music Technology’s Dr. Agnieszka Roginska and her team’s work on the Holodeck! The Holodeck is “a staging environment in which participants can engage with various virtual reality environments” that has received a multi-million dollar grant from the National Science Foundation.

Check out the article down below!

The ‘Holodeck’ Propels NYU to the Future

MARL Talk: Serge Belongie

From Visipedia to PointAR by Serge Belongie

 

Abstract:

In this talk Prof. Belongie will provide an overview of his group’s research projects at Cornell Tech involving Computer Vision, Machine Learning, and Human-in-the-Loop Computing. The talk will cover projects involving identification of plant and animal species (Visipedia) and learning perceptual embeddings of food (SNaCK). It will conclude with a preview of a new effort to build a projector-based, human-computer interaction apparatus that allows computers to point to physical objects in the real world (PointAR).

Serge Belongie received a B.S. (with honor) in EE from Caltech in 1995 and a Ph.D. in EECS from Berkeley in 2000. While at Berkeley, his research was supported by an NSF Graduate Research Fellowship. From 2001 to 2013 he was a professor in the Department of Computer Science and Engineering at the University of California, San Diego.

He is currently a professor at Cornell Tech and the Department of Computer Science at Cornell University. His research interests include Computer Vision, Machine Learning, Crowdsourcing and Human-in-the-Loop Computing. He is also a co-founder of several companies including Digital Persona, Anchovi Labs and Orpix. He is a recipient of the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review “Innovators Under 35” Award and the Helmholtz Prize for fundamental contributions in Computer Vision.

SWiTCH Collaborates with Artist Camille Trust

The NYU Society of Women in Technology (SWiTCH) got the opportunity to record, engineer, and help produce three tracks for pop artist and singer-songwriter Camille Trust this past weekend!

From 8 am until 6 pm, SWiTCH members from all grade levels got together to set up microphones for the band members and vocalists, patch the patch bays, run the consoles, troubleshoot signal flow, run Pro Tools, and engineer a recording session with Camille Trust and her band.

For more information about SWiTCH and how to be part of their upcoming events, email:

nyuswitch@gmail.com.

 

 


 

 


MARL Talk: Research at the Sonic Arts Research Centre, Belfast (SARC)

Abstract: The talk will provide an insight into current research at the Sonic Arts Research Centre, Belfast (SARC). Dr Franziska Schroeder and Prof Pedro Rebelo will present work from SARC, including in the areas of inclusive VR design and physics-based simulation and synthesis of mechano-acoustic systems and analog circuitry for development of new digital musical instruments.
They will also give insights into practice-based sonic arts research projects including their mobile listening app ‘LiveSHOUT’ as well as their work in the area of socially engaged sonic arts practice.

Dr Franziska Schroeder (Germany / UK)

Originally from Germany, Franziska is based at the Sonic Arts Research Centre, Queen’s University Belfast where she holds the post of senior lecturer in music and sonic arts.

Franziska trained as a contemporary saxophonist in Australia, and in 2006 completed her PhD at the University of Edinburgh, where her research focused on performance, digital technologies and theories of embodiment. She has published widely in diverse international journals and has given several invited keynote speeches on the topic of performance and emerging technological platforms.

Franziska has published a book on performance and the threshold, a book on user-generated content and a volume on music improvisation (Soundweaving, 2014). She performs as saxophonist in a variety of contexts and has released several CDs on the Creative Source label, as well as a recording on the SLAM label with a semi-autonomous technological artifact. In 2015 she released an album on the pfmentum label with two Brazilian musicians, and 2016 saw the release of a Bandcamp album with her female trio Flux.

Throughout 2018 Franziska is leading a research team at Queen’s University on a project that investigates immersive technologies in collaboration with disabled musicians and Belfast’s only professional contemporary music ensemble, the Hard Rain Soloist Ensemble (HRSE). As part of this team, Franziska designed a new VR narrative work entitled “Embrace”. This piece critically investigates ideas of disability, identity, and empathy. “Embrace” is the first showcase piece created at the Sonic Arts Research Centre within its newly established research group “SARC_Immerse”, a group that has positioned itself as leading in the field of high-quality audio use in virtual environments.

Franziska leads the Performance without Barriers research group, a group of PhD and post-doctoral students investigating inclusive music technologies. At Queen’s University Belfast Franziska teaches students in improvisation, digital performance and critical theory.

Prof Pedro Rebelo (Portugal / UK)

Pedro is a composer, sound artist and performer working primarily in chamber music, improvisation and installation with new technologies. In 2002, he was awarded a PhD by the University of Edinburgh where he conducted research in both music and architecture.

His music has been presented in venues such as the Melbourne Recital Hall, National Concert Hall Dublin, Queen Elizabeth Hall, Ars Electronica, Casa da Música, and in events such as Weimarer Frühjahrstage fur zeitgenössische Musik, Wien Modern Festival, Cynetart and Música Viva. His work as a pianist and improvisor has been released by Creative Source Recordings and he has collaborated with musicians such as Chris Brown, Mark Applebaum, Carlos Zingaro, Evan Parker and Pauline Oliveros.

His writings reflect his approach to design and creative practice in a wider understanding of contemporary culture and emerging technologies. Pedro has been Visiting Professor at Stanford University (2007) and in 2012 he was appointed Professor at Queen’s and awarded the Northern Bank’s “Building Tomorrow’s Belfast” prize. He is a professor of sonic arts at the Sonic Arts Research Centre, Belfast.

MARL talk by Dan Ellis (Google)

When: Tuesday, May 15th @10am

Where: 6th floor Conference Room (609), 35 West 4th Street

Title: “Supervised and Unsupervised Semantic Audio Representations”

Abstract: The Sound Understanding team at Google has been developing automatic sound classification tools with the ambition to cover all possible sounds – speech, music, and environmental.  I will describe our application of vision-inspired deep neural networks to the classification of our new ‘AudioSet’ ontology of ~600 sound events.  I’ll also talk about recent work using triplet loss to train semantic representations — where semantically ‘similar’ sounds end up close by in the representation — from unlabeled data.
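The triplet-loss idea mentioned in the abstract can be sketched in a few lines: given an anchor, a semantically similar “positive”, and a dissimilar “negative”, the loss penalizes any embedding where the negative is not at least a margin farther from the anchor than the positive. A minimal illustrative sketch (not the Sound Understanding team’s implementation; the toy embeddings below are made up):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull the anchor toward the positive
    embedding and push it away from the negative by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: two "similar" sounds close together, one far away.
anchor   = np.array([1.0, 0.0])
positive = np.array([1.1, 0.0])   # semantically similar -> nearby
negative = np.array([-1.0, 0.0])  # dissimilar -> far away

# Zero once the negative is at least `margin` farther than the positive.
print(triplet_loss(anchor, positive, negative))  # 0.0
print(triplet_loss(anchor, negative, positive))  # large: bad embedding
```

In training, gradients of this loss (summed over many triplets mined from unlabeled data) are what push semantically similar sounds close together in the learned representation.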

Bio:  Dan Ellis joined Google in 2015 after 15 years as a faculty member in the Electrical Engineering department at Columbia University, where he headed the Laboratory for Recognition and Organization of Speech and Audio (LabROSA). He has over 150 publications in the areas of audio processing, speech recognition, and music information retrieval.

Joint work with Aren Jansen, Manoj Plakal, Ratheet Pandya, Shawn Hershey, Jiayang Liu, Channing Moore, Rif A. Saurous

