Originally from Uriangato, Mexico, Iran R. Roman is a post-doctoral researcher at NYU's Music and Audio Research Laboratory. He holds a Ph.D. from Stanford University in Computer-based Music Theory and Acoustics. Iran aims to develop machines that can listen to music and speech the way humans do. With this goal in mind, he has developed mathematical models that explain how the human brain synchronizes with the rhythms present in music and speech. In parallel with his Ph.D. studies, Iran pursued industry research in machine listening at Apple, Tesla, Oscillo Biosciences, and Plantronics.
Selected Publications
- Roman, I. R., Roman, A. S., & Large, E. W. (2021). Hebbian learning with elasticity explains how the spontaneous motor tempo affects music performance synchronization. bioRxiv.
- Roman, I. R., Washburn, A., Large, E. W., Chafe, C., & Fujioka, T. (2019). Delayed feedback embedded in perception-action coordination cycles results in anticipation behavior during synchronized rhythmic action: A dynamical systems approach. PLoS computational biology, 15(10), e1007371.