Where: 6th floor Conference Room (609), 35 West 4th Street
Title: Sensory Percussion And The Future of Drumming
Description: Sensory Percussion is a platform for creating and performing music through acoustic control of digital sound. With the mission of bridging one of the oldest forms of musical expression with the new, Sensory Percussion translates the nuance of acoustic drumming into a flexible and expressive control language for electronic processes, allowing for a new frontier in performance and sound design. This presentation will include a technology overview and demonstration of Sensory Percussion’s capabilities.
Bio: Tlacael Esparza is a co-founder of the music tech startup Sunhouse and creator of Sensory Percussion, a radically new system for expressive electronic percussion used on stages and in studios around the world. Tlacael is a Los Angeles native based in New York City and a professional drummer with over fifteen years of experience. He has a background in mathematics and is an NYU Music Tech alumnus (2013); during his graduate studies he focused on applications of machine learning in music information retrieval. With Sunhouse, he is dedicated to building a future where music technology supports musicians and their creative endeavors.
Where: 6th floor Conference Room (609), 35 West 4th Street
Dr. Hyunkook Lee
Title: Introduction to 3D Audio Research at the APL
Abstract: This talk will overview recent 3D audio research conducted at the Applied Psychoacoustics Lab (APL) at the University of Huddersfield. The APL, established by Dr Hyunkook Lee in 2013, aims to bridge the gap between fundamental psychoacoustics and audio engineering. The talk will first describe some of the fundamental research conducted on various perceptual aspects of 3D audio, followed by an introduction to practical engineering methods developed from that research. The topics to be covered include: vertical stereophonic perception, 3D and VR microphone techniques, vertical interchannel decorrelation, the phantom image elevation effect, a new time-level trade-off function, perceptually motivated amplitude panning (PMAP), virtual hemispherical amplitude panning (VHAP), Perceptual Band Allocation (PBA), etc. Additionally, the APL’s software packages for audio research will be introduced.
Bio: Dr Hyunkook Lee is the Leader of the Applied Psychoacoustics Lab (APL) and Senior Lecturer (i.e. Associate Professor) in Music Technology at the University of Huddersfield, UK. His current research focuses on spatial audio psychoacoustics, recording and reproduction techniques for 3D and VR audio, and interactive virtual acoustics. He is also an experienced sound engineer specialising in surround and 3D acoustic recording. Before joining Huddersfield in 2010, Dr. Lee was Senior Research Engineer in audio R&D at LG Electronics for five years. He has been an active member of the Audio Engineering Society since 2001.
Title: Localisation accuracy and consistency of real sound sources in a practical environment
Abstract: The human ability to localise sound sources in three-dimensional (3D) space has been thoroughly studied in past decades; however, only a few studies have tested its full capabilities across a wide range of vertical and horizontal positions. Moreover, these studies do not reflect real-life situations where room effects are present. Additionally, there is not enough data for the assessment of modern multichannel loudspeaker setups, such as Dolby Atmos or Auro 3D. This talk will provide an overview of a practical localisation study performed at the Applied Psychoacoustics Lab, as well as insight into the human localisation mechanism in 3D space. Furthermore, a new response method for localisation studies will be presented and analysed.
Bio: Maksims Mironovs is a PhD student at the University of Huddersfield’s Applied Psychoacoustics Lab. In 2016 he obtained a First Class BSc degree with Honours in Music Technology and Audio Systems at the University of Huddersfield. During his placement year at Fraunhofer IIS, he was involved in multichannel audio research and the development of VST plugins. The primary focus of his research is the human auditory localisation mechanism in the context of 3D audio reproduction. Additionally, he is an experienced audio software developer and is currently working as a part-time lecturer and research assistant.
Title: An overview of capture techniques for Virtual Reality soundscapes
Abstract: This presentation will cover the history of soundscape capture techniques and then introduce current recording practices for soundscapes in VR. The results of an investigation into low-level spatial attributes that highlight differences between VR capture techniques will be discussed. The presentation will conclude with a discussion of future work on the influence of audio-visual interaction and acoustics on the perception of audio quality in the context of soundscapes.
Bio: Connor Millns is a PhD student at the APL investigating capture techniques for Virtual Reality soundscapes and the influence of audio-visual interaction on Quality of Experience. He completed the BSc (Hons) Music Technology and Audio Systems course at the University of Huddersfield, with an industry year at Fraunhofer IIS. For his final-year bachelor’s project, Connor undertook an investigation into the spatial attributes of various microphone techniques for virtual reality.
Join NYU Music Technology for an evening of collaboration in distributed music with remote dancers. The concert will involve several combinations of remote and on-stage musicians and dancers connected through the internet, a stepping stone towards augmented performances and virtual connections. Music selections will include classical, jazz, and percussion-only works.
WHEN: Sunday, April 29th, @ 3pm
WHERE: Frederick Loewe Theatre, 35 West 4th Street
Where: 6th floor Conference Room (609), 35 West 4th Street
Abstract: Yotam Mann makes music that engages listeners through interactivity. His work takes the form of websites, installations, and instruments. He is also the author of the open source Web Audio framework Tone.js, which aims to enable other music creators to experiment with interactivity. In this talk, he discusses some of his techniques and motivations in creating interactive music.
Bio: Yotam Mann is a composer and programmer. He creates interactive musical experiences in which listeners can explore, create, and play with sound. While studying jazz piano at UC Berkeley, Yotam stumbled across the Center for New Music and Audio Technologies (CNMAT), which opened his eyes to a new way of making music with technology and eventually inspired him to earn a second degree in Computer Science. He is the author of the most popular open source library for making interactive music in the browser, Tone.js. Now based in New York, Yotam continues to work at the intersection of music and technology, creating interactive musical experiences in the form of apps, websites, and installations. He was part of the inaugural class at NEW INC, is an adjunct professor at ITP, NYU Tisch, and was a 2016 Creative Capital Grantee in Emerging Fields.
“Music Information Retrieval: From Accuracy to Understanding, from Machine Intelligence to Human Welfare”
When: Friday April 13th, @11am
Where: 6th floor conference room (609), 35 W 4th Street
In this seminar, Gómez will provide an overview of her research in the Music Information Retrieval (MIR) field, which aims to facilitate access to music in a world with overwhelming musical choices.
Emilia Gómez is a researcher at the Joint Research Centre, European Commission, and the Music Technology Group at Universitat Pompeu Fabra in Barcelona, Spain. Her research background is in the music information retrieval (MIR) field. She tries to understand the ways people describe music and to emulate these descriptions with computational models that learn from large music collections.
This is a segment of our blog where we feature awesome gear that is widely used in the music industry and also available for our students to check out from the Monitor’s closet during their studio time! This week: the Roland Octopad “Pad-8”!
The first model of the Roland Octopad series, the Pad-8, was introduced in 1985. It was a historic and influential piece of technology at the time, giving drummers and percussionists the ability to trigger virtually any MIDI sound source without the need for a full electronic drum set. Connect the Pad-8 to your laptop and use Ableton or any similar DAW to play all of your samples and loops!
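Curious what actually travels down the cable when you strike a pad? Each hit arrives as a short MIDI "note-on" message, which your DAW maps to a sample. The sketch below decodes a raw 3-byte note-on message in plain Python; the note numbers and sample names are hypothetical examples, not the Pad-8's actual factory assignments.

```python
# Map pad hits (MIDI note-on messages) to sample names.
# Note numbers and file names are hypothetical, chosen for illustration.
PAD_SAMPLES = {
    36: "kick.wav",   # hypothetical pad 1
    38: "snare.wav",  # hypothetical pad 2
    42: "hihat.wav",  # hypothetical pad 3
}

def decode_midi(msg: bytes):
    """Return (sample_name, velocity) for a note-on message, else None."""
    status, note, velocity = msg
    # Note-on on any channel has status 0x9n; velocity 0 means note-off.
    if status & 0xF0 == 0x90 and velocity > 0:
        return PAD_SAMPLES.get(note), velocity
    return None

# A pad hit arriving as a note-on on MIDI channel 10 (status byte 0x99):
print(decode_midi(bytes([0x99, 38, 100])))  # -> ('snare.wav', 100)
```

In practice a DAW does this mapping for you, but the same three bytes are all the Pad-8 needs to send to trigger any sound you assign.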
Students in the past have used the Pad-8 to perform in the annual Bleep Bloop concert with the NYU Composer’s Collective and in the Electronic Music Performance class within Music Technology!
The NYU Music Technology program invites you to join us at our annual open house taking place on May 5th, 2018! The open house has a number of ways to get involved, showcase your work, get some industry-standard experience, and win some prizes! Submit your work in the link below: https://docs.google.com/forms/d/e/1FAIpQLSd3LTHLSP1XkZC4x_9PHDMqQeC3v2hdkwDyDf3SEdt4E0z6Cw/viewform
The NYU Music Technology program invites you to join us at our annual open house! This event showcases our awesome students from the undergraduate, graduate, and post-graduate programs. Current students are encouraged to submit their work to the contests and exhibits described below. Friends and family are welcome to attend and enjoy the food, refreshments, and fun!
The open house will be hosted on May 5th, 2018 in the Steinhardt Education Building at 35 West 4th Street, 10012.
The reception will be located on the 6th floor, with other events and showcases on the 7th and 8th floors.
EVERYONE IS WELCOME!
The open house will play host to a variety of student and departmental showcases. We encourage you to invite your friends and family to attend. Live music, installation experiences, discussions, and critical review sessions will all be open to visitors.
Meet our student spotlight: Sebastián Coloma! Sebastián is a Latin American singer-songwriter and a Music Technology graduate student at NYU. He started his artist career in 2017 with the release of his first single, “Al Compás,” which he followed up with his newest single, “Dime Tú.” At NYU, Sebastián has focused on music production and engineering, working with multiple artists on their projects as well as producing and engineering his own material. Sebastián is currently working on his master’s thesis, in which he is designing a hybrid music production course for audio beginners.
♦ Congratulations on the release of your new music video, “Dime Tú”! It looks incredible. Can you tell us a little bit about who you are and what your Music Technology background is?
Of course! I am a current Music Technology graduate student, though I also did my undergrad here at NYU in MTech. I am originally from Venezuela and Panama, and I have been focusing on producing and engineering for about six years now. Recently, I decided to also give the artist life a try, so I produced some of my own songs and released them independently.
Sebastián’s “Dime Tú” Music Video:
♦ What are your main sources of inspiration that you draw from when writing and creating your own music?
Most of my musical inspiration and influence comes from Latin American music. I grew up listening to all these Latin genres, and they really shaped my musical style. I was always a big fan of Argentinian rock artists like Gustavo Cerati and Andrés Calamaro, but I also loved listening to salsa and merengue artists like Rubén Blades and Juan Luis Guerra. Recently, a lot of my music has been inspired by the works of Mexican artist Natalia Lafourcade, who sings very traditional Latin folk songs. As for what I write about, almost all of my songs are based on personal experiences, which I think are usually things that people can relate to.
♦ What are some things that you’ve taken away from your time as a student in Music Tech that have helped you accomplish everything you’ve done as an artist?
I think everything I learned about production and recording during my time in MTech really changed the way I approach composition, and it also enabled me to apply my production and engineering knowledge to my own music. Additionally, being at NYU and in MTech allowed me to connect and work with so many talented people throughout the entire process, including talented musicians, business people, and even film directors. The music video I just released was actually directed by a Tisch graduate whom I met during my time as an undergrad in MTech, because I did the score for one of her films.
♦ Where do you hope to see yourself down the line if you continue to pursue writing and creating music?
Sebastián Coloma. Photo by Spencer Shafter.
Music has always been a passion of mine; it’s something that I have always enjoyed doing, so I think if I can continue doing that, even if just for myself, I will be happy. Personally, I think art comes first, and then, hopefully, if more people relate to it and enjoy it, success comes along. So I hope that down the line I’ll be at a place where I am still enjoying writing and performing music, whether it be for an audience of thousands, hundreds, or just a couple of close friends.
♦ For all the students out there in Music Technology who want to pursue becoming an artist, what advice do you have for them?
Surround yourself with people who are more talented and/or experienced than you are. Collaborate and work with other people you can learn from. And lastly, write music that you love, for yourself. I think it is currently easy to judge one’s success as an artist based on YouTube views, post likes, or number of followers on social media, but it always breaks my heart to see talented artists get discouraged because they do not meet those “high” standards of success, or change their styles in hopes of getting more likes. Do it because you like it, and not because of anything else.
Keep up with Sebastián and his music on his socials!
This is a segment of our blog where we feature awesome gear that is widely used in the music industry and also available for our students to check out from the Monitor’s closet during their studio time! This week: Roli Blocks!
If you’ve heard about Roli’s products, or have even been lucky enough to play one, you probably know how versatile and expressive this touch- and pressure-responsive keyboard is. With 5D touch technology, 200+ free sounds to work with, and options to perform wirelessly, it is no surprise that big-name artists such as Grammy Award winner Alessia Cara and La La Land composer Justin Hurwitz have used Roli products to produce, compose, and arrange their music.
Check out what Roli Blocks can do and see if it inspires your next musical creation!
On December 2nd, 2017, the Immersive Audio Group traveled to Eisenhower Hall Theatre to record the West Point Military Band Holiday Show 2017. The West Point Military Band is the oldest continuously serving Army band and the oldest unit at West Point. The Immersive Audio Group recorded at the Hudson Valley’s premier performing arts center and worked closely with Brandi Lane on the production. A 360 video recording of the event was captured by Nokia Ozo engineer Kamal Rountree. The group members at the recording included Ying Ying Zhang, Charles Craig Jr., Scott Murakami, Aggie Tai, Chris Neil, David Degenstein, Jason Sheng, and Ian Anderson.
The NYU Immersive Audio Group is an interest group formed by students who have a passion for 3D audio in its various forms and applications. Students in the group are encouraged to form project teams to build their own projects or to collaborate on existing research with PhD students, postdocs, and professors.
If you are interested in learning more about and/or joining the NYU Immersive Audio group, you can reach out to the members via the group’s socials: