In early May, several faculty members and graduate students from the NYU Steinhardt Music and Audio Research Laboratory (MARL) will attend the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), the Institute of Electrical and Electronics Engineers' (IEEE) premier forum for advances in the theory, algorithms, and applications that define modern signal processing. This year, the group will travel to Barcelona, where they will present their work alongside leading researchers shaping the future of audio and music AI.
MARL has contributed the following papers to ICASSP 2026:
Controllable Embedding Transformation for Mood-Guided Music Retrieval, Julia Wilkins, Jaehun Kim, Matthew E. P. Davies, Juan Pablo Bello, Matthew C. McCallum
Evaluating Compositional Structure in Audio Representations, Chuyang Chen, Bea Steers, Brian McFee, Juan Bello
Investigating Modality Contribution In Audio LLMs For Music, Giovana Morais, Magdalena Fuentes
AudioCards: Structured Metadata Improves Audio Language Models for Sound Design, Sripathi Sridhar, Prem Seetharaman, Oriol Nieto, Mark Cartwright, Justin Salamon
The MUSE Benchmark: Probing Music Perception and Auditory Relational Reasoning in Audio LLMs, Brandon Carone, Iran Roman, Pablo Ripollés