Department of Music and Performing Arts Professions

International Conference on Auditory Display (ICAD) 2014

Conference Program


Detailed Lecture and Poster Schedule


Monday, June 23, 11:00 – 12:00 and 15:00 – 16:00

Poster Presentations - Session 1:

1. Domenico Vicinanza, Ryan Stables, Graeme Clemens, Matthew Baker, Assisted Differentiated Stem Cell Classification in Infrared Spectroscopy Using Auditory Feedback

In this study we investigate ways in which data sonification can improve standard data analysis techniques currently employed in the analysis of stem cells using Fourier Transform Infrared (FTIR) Spectroscopy. Four different sonification methods were evaluated through listening tests designed to assess the discriminating capability of the auditory technique. We identify FM synthesis driven by feature extraction as the most perceptually relevant technique for the auditory classification of FTIR data. Whilst this technique is not commonly used in sonification research, it allows us to utilise the most salient characteristics of the absorption spectra, leading to improved classification accuracy with clear timbral differences between differentiated and non-differentiated cell types.
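
As a rough illustration of this kind of feature-driven FM mapping (a sketch only; the feature choices and scaling below are hypothetical, not the authors' implementation), a dominant absorption peak's position and height could drive the modulator ratio and modulation index of a simple FM tone:

```python
import numpy as np

def fm_tone(carrier_hz, ratio, index, dur=1.0, sr=44100):
    """Basic FM synthesis: a carrier phase-modulated at carrier_hz * ratio."""
    t = np.arange(int(dur * sr)) / sr
    mod = index * np.sin(2 * np.pi * carrier_hz * ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + mod)

def sonify_spectrum(wavenumbers, absorbance):
    """Hypothetical mapping: peak position -> modulator ratio, peak height -> index."""
    peak = np.argmax(absorbance)
    ratio = 1.0 + 2.0 * (wavenumbers[peak] - wavenumbers.min()) / np.ptp(wavenumbers)
    index = 1.0 + 8.0 * absorbance[peak] / absorbance.max()
    return fm_tone(carrier_hz=220.0, ratio=ratio, index=index)

# Synthetic FTIR-like spectrum: a single Gaussian band around 1650 cm^-1
wn = np.linspace(1000, 1800, 800)
ab = np.exp(-((wn - 1650) / 30) ** 2)
signal = sonify_spectrum(wn, ab)
```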

2. Edward Childs, John Stephens, Benjamin Childs, dataSonification Open Source Project for Real-Time Data Sonification

A software platform for real-time data sonification is described in detail. The platform is designed primarily to process multiple real-time data streams simultaneously. The system was originally designed for processing financial data, with the general goal of monitoring up to 20 different securities as their values change during a trading session. The sonifications were designed to make it easy to distinguish between different securities, and to convey as much information as possible about each security's activity using the shortest possible sound duration. The software platform was designed using multiple threads with a gatekeeper function to manage simultaneous sonification events without confusion or system failure. This paper announces the release of this software, together with its financial data stream implementation, to the open source community. It is hoped that sonification researchers particularly interested in real-time data will use and adapt the software to their needs.
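
The abstract does not give implementation details, but a minimal sketch of the gatekeeper idea - a single consumer thread that serializes sonification requests arriving from many data-stream threads so that events cannot collide - might look like this (all names and the play_event stub are hypothetical):

```python
import queue
import threading
import time

event_queue = queue.Queue()

def play_event(security, value):
    # Stand-in for rendering one short sonification event.
    print(f"sonifying {security}: {value}")
    time.sleep(0.05)  # pretend the sound lasts 50 ms

def gatekeeper():
    """Single consumer thread: serializes events from all data streams."""
    while True:
        security, value = event_queue.get()
        play_event(security, value)
        event_queue.task_done()

threading.Thread(target=gatekeeper, daemon=True).start()

# Producer threads (one per security) can enqueue events concurrently and safely.
producers = [threading.Thread(target=event_queue.put, args=((f"SEC{i}", 100 + i),))
             for i in range(5)]
for p in producers:
    p.start()
for p in producers:
    p.join()          # all events enqueued ...
event_queue.join()    # ... and all of them sonified
```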

3. Raymond Winters, Julie Cumming, Sonification of Symbolic Music in the ELVIS Project

This paper presents the development of sonification in the ELVIS project, a collaboration in interdisciplinary musicology targeting large databases of symbolic music and tools for their systematic analysis. A sonification interface was created to rapidly explore and analyze collections of musical intervals originating from various composers, genres, and styles. The interface visually displays imported musical data as a sound file, and maps data events to individual short, discrete pitches or intervals. The user can interact with the data by visually zooming in, making selections, playing through the data at various speeds, and adjusting the transposition and frequency spread of the pitches to maximize acoustic comfort and clarity. A study is presented in which rapid pitch-mapping is applied to compare differences between similar corpora. A group of 11 participants was able to correctly order collections of sonifications for three composers (Monteverdi, Bach, and Beethoven) and three presentation speeds (100, 1,000, and 10,000 notes/second). Benefits of sonification are discussed, including the ability to quickly differentiate composers, find non-obvious patterns in the data, and 'direct mapping'. The interface is made available as a Mac OS X standalone application written in SuperCollider.

4. Adam Łutkowski, Michał Bujacz, Maciej Ożóg, Making Information Flows in Hybrid Space Tangible: an Analog RF Power Detector for Sonification of Wireless Network Traffic

The paper presents a fully analog prototype device for sonification of electromagnetic power in the radio frequency range. The RF power detector was designed specifically for use in artistic performances, which attempt to make listeners aware of the hybrid space surrounding them, filled with information flows from wireless networks. The device is tuned to the RF range of 800 MHz to 2.7 GHz to detect GSM, Bluetooth, and WiFi network traffic. The modular design allows sonification using the power reading directly or through voltage-to-frequency converters, and leaves room for expansion using other signal processing circuits.

5. Matthew Kenney, Mark Ballora, Susan Brantley, Isotopic Data Sonification: Shale Hills Critical Zone Observatory

Each precipitation event has a unique fingerprint. This fingerprint is recorded in the duration of the event and the isotopic composition of the rainfall, as a result of differing proportions of oxygen isotopes. The ratio of 16O to 18O is crucial in identifying the origin and movement of water within the hydrologic cycle. In some investigated watersheds, as rainwater flows through the ecosystem, it is continually recorded by a series of in-ground instruments and examined as a means of understanding the responsiveness of the hydrologic system of a particular region. Sonification of the unique fingerprints of each storm as it passes through the hydrologic system offers an opportunity to sonically represent fluctuations in rainwater hydrology over an extended period of time, allowing for a deeper understanding of the hydrologic cycle of the region. Sonification of a watershed can include data for groundwater, stream water, and precipitation. Transformation of the data into sound creates a uniquely informative representation of the data, removed from the constraints of static visualizations such as the line graph, and – if the datasets span long durations – can provide a unique look at both single weather events and larger global warming patterns within a particular geographical region.
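
For reference, the oxygen-isotope composition mentioned here is conventionally reported in delta notation relative to a standard such as VSMOW (standard geochemical background, not a formula taken from the abstract):

```latex
\delta^{18}\mathrm{O} \;=\; \left(\frac{R_{\text{sample}}}{R_{\text{standard}}} - 1\right) \times 1000\ \text{‰},
\qquad R = {}^{18}\mathrm{O}/{}^{16}\mathrm{O}
```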

6. Mo Zareei, Dugal McKinnon, Ajay Kapur, Dale Carnegie, Complex: Physical Re-Sonification of Urban Noise

This paper explores the aesthetic and social values of the noises of modern urban soundscapes and discusses some strategies for boosting the accessibility and appreciation of works of sound art and experimental music that employ them. A proposed audiovisual installation, entitled complex, is outlined as a practical application of techniques designed to reveal the sonic aesthetics of urban technological noise, primarily through re-sonification and visualization. This will be achieved sonically and physically, by mapping sonic data collected from the New York City soundscape (using the Citygram project) onto custom-designed mechatronic sound sculptures.

7. David Worrall, Balaji Thoshkahna, Norberto Degara, Detecting Components of an ECG Signal for Sonification

In recent state-of-the-art electrocardiogram (ECG) studies, many authors mention that they had to manually correct automatically detected peaks or exclude artifact-loaded segments from the automatically annotated data they were studying. Given the importance of accurate feature detection for signal analysis, this is clearly a limiting factor. Our investigation into the use of sonification for analysis of ECG data for medical and diagnostic purposes is also hampered by the lack of such a reliable ground truth. In order to be able to undertake a comparative analysis of sonification and numerical techniques, we are investigating ways to improve algorithmic feature detection, particularly more robust algorithms for the detection of important landmarks in the signal in the presence of noise, whilst accounting for the variability in the very nature of the signal. This paper is a work-in-progress report of our efforts to date.
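
For context on what algorithmic R-peak detection typically involves (a generic thresholding sketch, not the authors' improved algorithm), one common baseline is to band-pass the ECG, square it, and pick local maxima above an adaptive threshold:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Very simple R-peak detector: band-pass, square, adaptive threshold."""
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    energy = filtered ** 2
    threshold = 0.5 * energy.max()
    peaks, _ = find_peaks(energy, height=threshold, distance=int(0.25 * fs))
    return peaks  # sample indices of candidate R-peaks

# Synthetic example: 10 s of a 1 Hz "heartbeat" spike train plus noise
fs = 250
t = np.arange(10 * fs) / fs
ecg = np.zeros_like(t)
ecg[(np.arange(0, 10) * fs).astype(int)] = 1.0
ecg = np.convolve(ecg, np.hanning(25), mode="same") + 0.05 * np.random.randn(t.size)
print(detect_r_peaks(ecg, fs))
```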

8. Rhys Perkins, An Enhanced Data Transformation Framework for the Sonification of Simulated Rigid Bodies

While simulated rigid bodies hold a wealth of information that can be understood through sound, current interpretive methods will often overlook important features of their data. This proves to be detrimental when placing the same data in the context of an auditory display where the user might wish to analyse or express specific dimensions under a range of circumstances. The following investigation describes a framework for a model-induced parameter mapping technique which allows for an explicit level of control over the flow of information, supported by a number of key conditions in both the auditory and visual channels. Given that the formative decisions behind this design tailor the data to meet the purposes of sonification, the user is presented with a viable alternative that overcomes a number of limitations inherent to employing a more conventional physical modelling approach.

9. Jonathan Schuett, Riley Winton, Bruce Walker, Comprehension of Sonified Weather Data Across Multiple Auditory Streams

Weather data has been one of the mainstays in sonification research. It is readily available, and every listener has presumably had some form of experience with meteorological events to draw from. When we want to use this type of complex data in a scenario such as in a classroom, we need to be sure that listeners are able to correctly comprehend the intended information. The current study proposes a method for evaluating the usability of complex sonifications that contain multiple data sets, especially for tasks that require inferences to be made through comparisons across multiple data streams. This extended abstract outlines a study that will address this issue by asking participants to listen to sonifications and then respond with a description of general understanding about what variables changed, and how said changes would physically be represented by real weather conditions.

10. René Tünnermann, Sebastian Zehe, Jacqueline Hemminghaus, Thomas Hermann, Weather to Go – a Blended Sonification Application

People often stay in touch with the weather forecast for various reasons. We depend on knowing the upcoming weather conditions in order to plan activities outside, or even just to decide what to wear on our way to work. With Weather to Go we present an auditory weather report which informs the user about the current or future weather situation when leaving home in the morning or the office in the evening. The sonification is designed to be calm, coherent, and predictable, so that it blends well into the user's familiar environment. In this work the auditory display is activated when somebody leaves through the door. The activity is sensed by a multi-purpose sensor unit mounted at the door. When the door is opened, Weather to Go renders and plays sounds that characterize the weather forecast for the region where the system is located. In this way, the system raises the user's awareness of suitable clothing, transportation, or the route to take at the right moment.


Monday, June 23, 12:00 – 12:40

Lecture Presentations - Session 1:

Auditory Display Applications: Aiding Movement Part 1

Session Chair Matti Gröhn

Gerd Schmitz, Daniela Kroeger, Alfred Effenberg, A mobile sonification system for stroke rehabilitation

Growing evidence suggests that sonification supports movement perception as well as motor functions. It is hypothesized that real-time sonification efficiently supports movement control in patients with sensorimotor dysfunctions by intermodal substitution of sensory loss. The present article describes a sonification system for the upper extremities that might be used in neuromotor rehabilitation after stroke. A key feature of the system is mobility: arm movements are captured by inertial sensors that transmit their data wirelessly to a portable computer. Hand position is computed in an egocentric reference frame and mapped onto four acoustic parameters. A pilot feasibility study with acute stroke patients resulted in significant effects and is encouraging with respect to ambulatory use.

Florina Speth, Michael Wahl, Specifying Rhythmic Auditory Stimulation for Robot-Assisted Hand Function Training in Stroke Therapy

Following a stroke, 90% of all patients suffer a loss of arm and hand function, and 30–40% of them never regain full functionality. Robot-assisted hand function training (RT) intensifies and effectively complements common ergo-therapeutic treatment. Most robotic rehabilitation devices are connected to multimedia environments offering playful training to promote motivation. Rhythmic Auditory Stimulation (RAS), an effective therapeutic technique for post-stroke treatment, has never been specified, applied, or evaluated for RT. This paper suggests specified sound designs with rhythmic stimuli for RT that aim to enhance function and motivation. Four pilot experiments are described that evaluate whether specified rhythmic stimulation designs applied during a fine motor task influence motivation and function in comparison to no stimulation. As the results of these experiments indicate that rhythmic stimulation designs may enhance function and motivation, their further application in RT with stroke patients is discussed.


Monday, June 23, 14:00 – 15:00

Lecture Presentations - Session 2:

Invited Paper

Insook Choi, A Priori Attunement for Two Cases of Dynamical Systems

An application of a tuning function adopts a space metaphor in scientific methods for representing state space of non-linear dynamical systems. To achieve an interactive exploration of the systems through sounds, attunement is defined as an a priori process for conditioning a playable space for an auditory display. To demonstrate this process, two cases of dynamical systems are presented. The first case employs Chua’s circuit, in which system parameters are defined as energy introduction to the system and energy governance within the system. The second case employs a swarm simulation, defined as a set of rules to dictate social agents’ behaviors. Both cases exhibit complex dynamics and emergent properties. The paper synthesizes a comparative review of auditory display for the two cases while defining playable space with generalizable tuning functions. The scope of the discussion focuses on the relationship between playable space as a canonical architecture for auditory display workflow and its realization through attunement in applications of dynamical systems.

Auditory Display Applications: Aiding Movement Part 2

Session Chair Tony Stockman

Andrew Godbout, Chris Thornton, Jeffrey Boyd, Mobile Sonification for Athletes: A Case Study in Commercialization of Sonification

Several companies, including Under Armour, Nike and Adidas, are taking advantage of advances in sensor technology to sell wearable systems that measure, record, and analyze the motion of athletes. To date, these systems make little, if any, use of sonification. Therefore, there is an opportunity for sonification methods in this domain, including the potential to reach a mass market. In the fall of 2013, Under Armour and NineSigma created the Armour39 Challenge, an open call for proposals to build new technology for the Armour39, Under Armour's wearable motion and heart-rate sensor. The authors of this paper responded to the challenge, proposing novel sonification systems to exploit the data from the Armour39. This paper presents these systems, including issues, solutions, and tools for sonification performed on a mobile device with a wearable sensor. The sonifications are rhythmic, exploiting the periodicity of human motion, and are demonstrated by sonifying athletic performance metrics in real time for speed skating and running.

Kevin Smith, David Claveau, The Sonification and Learning of Human Motion

This paper examines how sonification can be used to help a student emulate the complex motion of a teacher with increasing spatial and temporal accuracy. The system captures a teacher’s motion in real-time and generates a 3-D motion path, which is recorded along with a reference sound. A student then attempts to perform the motion and thus recreate the teacher’s reference sound. The student’s synthesized sound will dynamically approach the teacher’s sound as the student’s movement becomes more accurate. Several types of sound mappings which simultaneously represent time and space deviations are explored. For the experimental platform, a novel system that uses low-cost camera-based motion capture hardware and open source software has been developed. This work can be applied to diverse areas such as rehabilitation and physiotherapy, performance arts and aiding the visually impaired.


Monday, June 23, 16:00 – 16:40

Lecture Presentations - Session 3:

Spatial Audio and Auditory Displays Part 1

Session Chair Elizabeth Wenzel

György Wersényi, József Répás, Performance Evaluation of Blind Persons in Listening Tests in Different Environmental Conditions

Visually impaired people are often the target group of various investigations, including basic research, applied research, and research and development studies. Experiments in the development of assistive technologies - navigation aids or computer interfaces (auditory displays) - aim to incorporate the results of testing with blind subjects during development. Listening tests concerning the localization performance of blind subjects can be conducted in various environments using different excitation signals. Generally, results can be collected only from a small number of participants, and they are compared with the results of blindfolded sighted subjects. The goal of this study was to include different environmental conditions (virtual reality, real life, free-field), different localization tasks, and a larger number of participants, both blind and sighted, for comparison. Results indicate that blind subjects' performance is generally not superior to sighted subjects' performance from the engineering point of view, but further psychological evaluation is recommended.

Peter Lennox, Bruce Wiggins, Ian McKenzie, Hearing Without Ears

We report on ongoing work investigating the feasibility of using tissue conduction to evince auditory spatial perception. Early results indicate that it is possible to coherently control externalization, range, directionality (including elevation), movement, and some sense of spaciousness without presenting acoustic signals to the outer ear. Signal control techniques so far have utilised discrete signal feeds, stereo, and first-order ambisonic hierarchies. Some deficiencies in frontal externalization have been observed. We conclude that, whilst the putative components of the head-related transfer function are absent, empirical tests indicate that coherent equivalents are perceptually utilisable. Some implications for perceptual theory and technological implementations are discussed, along with potential practical applications and future lines of enquiry.


Tuesday, June 24, 09:30 – 10:10

Lecture Presentations - Session 4:

Auditory Display Applications: Spatial Navigation

Session Chair Brian Katz

Frederik Nagel, Fabian-Robert Stöter, Norberto Degara, Stefan Balke, David Worrall, Fast and Accurate Guidance - Response Times to Navigational Sounds

Route guidance systems are used every day by both sighted and visually impaired people. Systems such as those built into cars and smartphones usually use speech to direct the user towards their desired location. Sounds other than functional and speech sounds can, however, be used for directing people in distinct directions. The present paper compares response times and detection error rates for different stimuli. Functional sounds are chosen with and without intrinsic meanings, musical connotations, and stereo locations. Panned sine tones prove to be the fastest and most accurately identified stimuli in the test, while speech is not identified faster than arbitrary sounds that have no particular meaning.
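
To make the panned sine tone stimulus concrete, a left/right direction cue can be rendered with a standard equal-power panning law (the sketch below is illustrative; the frequency, duration, and panning law are not taken from the paper):

```python
import numpy as np

def panned_tone(pan, freq=750.0, dur=0.3, sr=44100):
    """pan in [-1, 1]: -1 = hard left, 0 = centre, +1 = hard right (equal-power law)."""
    t = np.arange(int(dur * sr)) / sr
    tone = np.sin(2 * np.pi * freq * t)
    theta = (pan + 1) * np.pi / 4            # map [-1, 1] -> [0, pi/2]
    left, right = np.cos(theta) * tone, np.sin(theta) * tone
    return np.stack([left, right], axis=1)   # stereo sample frames

cue_turn_left = panned_tone(-0.8)
cue_turn_right = panned_tone(+0.8)
```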

Mark Anderson, Use of Geospatial Data Sonification for Mobile Augmented Reality Audio Navigation

This paper presents MeanderMaps: a non-speech Augmented Reality Audio (ARA) application for the Apple iPhone that aids navigation by sonifying geospatial data. Users request directions to a specified location on a Google Map overlay, and MeanderMaps uses spatial auditory cues such as distance and direction to guide them to the destination. As the user travels to consecutive waypoints known as path nodes, auditory cues indicate whether an incorrect turn has been made or if the user is traveling in the wrong direction. Preliminary findings are reported using qualitative and quantitative methods, evaluating the overall sonification model in addition to individual audio cues that (a) worked well, (b) worked somewhat well, and (c) needed improvement. Future improvements and modifications to MeanderMaps are presented.


Tuesday, June 24, 10:10 – 10:30

Poster Craze


Tuesday, June 24, 10:30 - 11:30 and 15:00 - 16:00

Poster Presentations - Session 2:

1. Aries Arditi, Auditory Display Of Coarse Optical Imagery: Concept for a Rehabilitation Aid for Blind Spatial Orientation

We describe a concept for a rehabilitation aid for blind persons that will present, on a sonic display, coarse optical information obtained from a spectacle-mounted camera. The aid will serve blind persons who have no light sense or who can at most detect ambient light. The approach is to map luminous intensity to loudness of continuous tones of distinct timbre representing a small number of directions relative to that of the user’s head.
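
A minimal sketch of the described mapping (the number of sectors and the tone frequencies below are illustrative choices, not from the paper): divide the camera image into a few head-relative sectors, take the mean luminance of each, and use it as the loudness of a continuous tone with a distinct timbre - here, simply a distinct pitch - per sector.

```python
import numpy as np

SECTOR_FREQS = [220.0, 330.0, 440.0, 660.0, 880.0]   # one tone per head-relative direction

def sonify_frame(gray_frame, dur=0.1, sr=22050):
    """gray_frame: 2-D array of luminance values in [0, 1]."""
    sectors = np.array_split(gray_frame, len(SECTOR_FREQS), axis=1)
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for sector, freq in zip(sectors, SECTOR_FREQS):
        loudness = float(sector.mean())               # luminance -> amplitude
        out += loudness * np.sin(2 * np.pi * freq * t)
    return out / len(SECTOR_FREQS)

frame = np.random.rand(120, 160)   # stand-in for a coarse camera frame
audio = sonify_frame(frame)
```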

2. Jean Rouat, Damien Lescal, Sean Wood, Handheld device for substitution from vision to audition

Sensorial substitution has great potential in rehabilitation, education, games, and in the creation of music and art. Current technologies allow us to develop sensorial substitution and sonification systems that would not have been imaginable two decades ago. It is desirable to let a large audience use and test sonification systems to provide feedback and improve their design. Handheld devices like smartphones or tablets include network connectivity (WIFI and/or Cellular radio) that can be used to transmit anonymous information about the configuration and strategies adopted by users. It is now feasible to obtain feedback from any user of substitution and sonification technology and not only from a limited number of subjects in the laboratory. Testing in the field with a large number of users is now possible thanks to telecommunication networks and machine learning tools to analyze big data.
This work presents a handheld implementation of a simple video sonification system designed to test the acceptability of vision to audition substitution systems and in the near future to provide feedback from users. A first beta version was publicly released in November 2013 as an iOS application for large scale testing. The extended abstract introduces the interface and the underlying technology.

3. Maryam Hosseini, Andreas Riener, Rahul Bose, Myounghoon Jeon, “Listen2droom”: Helping Visually Impaired People Navigate Indoor Environments Using an Ultrasonic Sensor-Based Orientation Aid

People with visual impairments face considerable limitations to their mobility, but there is still little infrastructure in place to help them. In this study, we present a new wearable electronic travel aid (ETA), “Personal Radar”, which assists blind people in navigating indoor environments using ultrasonic sensors. After briefly describing our initial system design, we report the improvements from the pilot study. Then, we introduce our experiment in progress. In the experiment, blindfolded students and visually impaired people will navigate through a maze and an empty room based on auditory and vibrotactile feedback from the device. This system could serve as an effective research platform for obstacle detection, current location awareness, and direction suggestion for the blind.

4. Shashank Aswathanarayana, Agnieszka Roginska, I Hear Bangalore3D: Capture And Reproduction Of Urban Sounds Of Bangalore Using An Ambisonic Microphone

This paper describes the project I Hear Bangalore3D, which is an attempt to capture and render 3D recordings of various iconic locations in the city of Bangalore. First-order ambisonic recordings were made and processed so that they can be played back over a speaker array through real-time matrixing, or over headphones through binaural renderings of the recordings. This project has both aesthetic and informational uses. Building on a sister project, I Hear NY3D, which took a similar route in Manhattan, this project also aims at comparing the noise levels and other information of different cities in different parts of the world.

5. Michael Musick, Tae Hong Park, Jonathon Turner, Interactive Auditory Display of Urban Spatio-Acoustics

This paper presents an interactive exploration platform and toolset for spatial, big-data auditory display. The exploration platform is part of the Citygram project, which focuses on geospatial research through a cyber-physical system that automatically streams, analyzes, and maps urban environmental acoustic energies. Citygram currently concentrates on dynamically capturing geo-tagged, low-level audio feature vectors from urban soundscapes. These various feature vectors are measured and computed via Android-based hardware, traditional personal computers, and mobile computing devices that are equipped with a microphone and Internet connection. The low-level acoustic data streams are then transmitted to, and stored in, the Citygram database. This data can then be used for auditory display, sonification, and visualization by external clients interfacing with the Citygram server. Client users can stream data bi-directionally via custom software that runs on Cycling ‘74’s Max and SuperCollider, allowing for participatory citizen-science engagement in auditory display.
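
Client software is described as running on Cycling '74's Max and SuperCollider; purely as an illustration of the kind of low-level feature streaming involved (the endpoint address, port, and OSC path below are made up, not Citygram's actual interface), a sensor node could compute and transmit one RMS value per audio block over OSC like this:

```python
import numpy as np
from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9000)         # hypothetical server address/port

def stream_rms(audio, sr=44100, block=1024):
    """Send one RMS feature value per audio block over OSC."""
    for start in range(0, len(audio) - block, block):
        frame = audio[start:start + block]
        rms = float(np.sqrt(np.mean(frame ** 2)))
        client.send_message("/feature/rms", rms)     # OSC path is illustrative

stream_rms(np.random.randn(44100))                   # one second of stand-in audio
```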

6. Braxton Boren, Michael Musick, Jennifer Grossman, Agnieszka Roginska, I Hear NY4D: Hybrid Acoustic and Augmented Auditory Display for Urban Soundscapes

This project, I Hear NY4D, presents a modular auditory display platform for layering recorded sound and sonified data into an immersive environment. Our specific use of the platform layers Ambisonic recordings of New York City and a palette of virtual sound events that correspond to various static and real-time data feeds based on the listener's location. This creates a virtual listening environment modeled on an augmented reality stream of sonified data in an existing acoustic soundscape, allowing for closer study of the interaction between real and virtual sound events and testing the limits of auditory comprehension.

7. Gershon Dublon, Edwina Portocarrero, ListenTree: Audio-Haptic Display in the Natural Environment

In this paper, we present ListenTree, an audio-haptic display embedded in the natural environment. A visitor to our installation notices a faint sound appearing to emerge from a tree, and might feel a slight vibration under their feet as they approach. By resting their head against the tree, they are able to hear sound through bone conduction. To create this effect, an audio exciter transducer is weatherproofed and attached to the tree trunk underground, transforming the tree into a living speaker that channels audio through its branches. Any source of sound can be played through the tree, including live audio or pre-recorded tracks. For example, we used the ListenTree to display live streaming sound from an outdoor ecological monitoring sensor network, bringing an urban audience into contact with a faraway wetland. Our intervention is motivated by a need for forms of display that fade into the background, inviting attention rather than requiring it. ListenTree points to a future where digital information might become a seamless part of the physical world.

8. Myounghoon Jeon, Yuanjing Sun, Design and Evaluation of Lyricons (Lyrics + Earcons) for Semantic and Aesthetic Improvements of Auditory Cues

Auditory researchers have developed various non-speech cues for designing auditory user interfaces. A preliminary study of “Lyricons” (lyrics + earcons) has provided a novel approach to devising auditory cues in electronic products, by combining two concurrent layers of musical speech and earcons (short musical motives). The purpose of the present study is to introduce the iterative design processes and to validate the effectiveness of lyricons compared to earcons: whether people can grasp the functions implied by lyricons more intuitively than those implied by earcons. Results favor lyricons over earcons. Future work and practical application directions are also discussed.

9. Ronja Frimalm, Johan Fagerlönn, Stefan Lindberg, Anna Sirkka, How Many Auditory Icons in a Control Room Environment Can You Learn

Previous research has shown that auditory icons can be effective warnings. The aim of this study was to determine the number of auditory icons that can be learned in a control room context. The participants consisted of 14 control room operators and 15 people who were not control room operators. The participants were divided into three groups. Prior to testing, the three groups practiced on 10, 20, and 30 different sounds, respectively. Each group was tested using the sounds that they had practiced. The results support the potential for learning and recalling a large number of auditory icons, as many as 30. The results also show that sounds with similar characteristics are easily confused.

10. Michael Nees, Thom Gable, Myounghoon Jeon, Bruce Walker, Prototype Auditory Displays for a Fuel Efficiency Driver Interface

We describe work-in-progress prototypes of auditory displays for fuel efficiency driver interfaces (FEDIs). Although research has established that feedback from FEDIs can have a positive impact on driver behaviors associated with fuel economy, the impact of FEDIs on driver distraction has not been established. Visual displays may be problematic for providing this feedback; it is precisely during fuel-consuming behaviors that drivers should not divert attention away from the driving task. Auditory displays offer a viable alternative to visual displays for communicating information about fuel economy to the driver without introducing visual distraction.


Tuesday, June 24, 11:30 - 12:30

Lecture Presentations - Session 5:

Spatial Audio and Auditory Displays Part 2

Session Chair György Wersényi

Kyla McMullen, Gregory Wakefield, Effects of Visual Augmentation on the Memory of Spatial Sounds

Spatial audio displays are created by processing digital sounds such that they convey a spatial location to the listener. These displays are used as a supplementary channel when the visual channel is overloaded or when visual cues are absent. This technology can be used to aid decision-makers in complex, dynamic tasks such as urban combat simulation, flight simulations, mission rehearsals, air traffic control, military command and control, and emergency services. Accurate spatial sound rendering is a primary focus in this research area, with spatial sound memory receiving less attention. The present study assesses the effects of visual augmentation on spatial sound location and identity memory. The chosen visual augmentations were a Cartesian grid and a polar grid. The work presented in this paper found that the addition of visual augmentation improved location and identity memory without degrading search time performance.

Elizabeth Wenzel, Martine Godfroy-Cooper, Joel Miller, Spatial Auditory Displays: Substitution and Complementarity to Visual Displays

The primary goal of this research was to compare performance in the localization of stationary targets during a simulated extra-vehicular exploration of a planetary surface. Three different types of displays were tested for aiding orientation and localization: a 3D spatial auditory display, a 2D North-up visual map, and the combination of the two in a bimodal display. Localization performance was compared under four different environmental conditions combining high and low levels of visibility and ambiguity. In a separate experiment using a similar protocol, the impact of visual workload on performance was also investigated, contrasting high (Dual-Task paradigm) and low workload (Single Orientation task). A synergistic presentation of the visual and auditory information (bimodal display) led to a significant improvement in performance (higher percent correct orientation and localization, shorter decision and localization times) compared to either unimodal condition, in particular when the visual environmental conditions were degraded. Preliminary data using the dual-task paradigm suggest that performance with displays utilizing auditory cues was less affected by the extra demands of additional visual workload than a visual-only display.

Ravish Mehra, Lakulish Antani, Dinesh Manocha, Source Directivity and Spatial Audio for Interactive Wave-Based Sound Propagation

This paper presents an approach to model time-varying source directivity and HRTF-based spatial audio for wave-based sound propagation at interactive rates. The source directivity is expressed as a linear combination of elementary spherical harmonic sources. The propagated sound field due to each spherical harmonic source is precomputed and stored in an offline step. At runtime, the time-varying source directivity is decomposed into spherical harmonic coefficients. These coefficients are combined with precomputed spherical harmonic sound fields to generate propagated sound field at the listener position corresponding to the directional source. In order to compute spatial audio for a moving and rotating listener, an efficient plane-wave decomposition approach based on the derivatives of the sound field is presented. The source directivity and spatial audio approach have been integrated with the Half-Life 2 game engine and the Oculus Rift head-mounted display to enable realistic acoustic effects for virtual environments and games.
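
The "linear combination of elementary spherical harmonic sources" can be written out explicitly; in the usual notation, with generic coefficients and truncation order (not values from the paper), the directivity pattern is

```latex
D(\theta,\phi,\omega) \;\approx\; \sum_{l=0}^{L}\ \sum_{m=-l}^{l} a_{lm}(\omega)\, Y_{lm}(\theta,\phi)
```

and, by linearity of the wave equation, the propagated field at the listener is the same weighted sum of the fields precomputed for each spherical harmonic source.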


Tuesday, June 24, 14:00 - 15:00

Lecture Presentations - Session 6:

Auditory Display Evaluation

Session Chair Gregory Wakefield

Milena Droumeva, Iain McGreggor, A Method for Comparative Evaluation of Listening to Auditory Displays by Designers and Users

The process of designing and testing auditory displays often includes evaluations only by experts, and where non-experts are involved, training is commonly required. This paper presents a method of evaluating sound designs that does not require listener training thus promoting more ecological practices in auditory display design. Complex sound designs can be broken down into discrete sound events, which can then be rated using attributes of sound that are meaningful to both designers and listeners. The two examples discussed in this paper include an auditory display for a commercial vehicle, and a set of sound effects for a video game. Both are tested using a repertory grid approach. The paper shows that the method can highlight similarities and differences between designer and user listening experiences. Comparing listening experiences could allow designers to be confident with the reception of their sound designs.

Timothy Neate, Norberto Degara, Andy Hunt, Frederik Nagel, A Generic Evaluation Model for Auditory Feedback in Complex Visual Searches

This paper proposes a method of evaluating the effect of auditory display techniques on a complex visual search task. The approach uses a pre-existing visual search task (conjunction search) to create a standardized model for audio-assisted and non-audio-assisted visual search tasks. A pre-existing auditory display technique is evaluated to test the system. Using randomly generated images, participants were asked to undertake a series of visual search tasks of set complexities, with and without audio. It was shown that using the auditory feedback improved participants' visual search times considerably, with statistically significant results. Additionally, it was shown that there was a larger difference between the audio and non-audio conditions when the complexity of the images was increased. The same auditory display techniques were then applied to an example of a real complex visual search task, the results of which imply a significant improvement in visual search efficiency when using auditory feedback.

David Poirier-Quinot, Brian Katz, CAVE-based virtual prototyping of an audio radiogoniometer: ecological validity assessment

This paper is part of a project concerned with improving audio radiogoniometer design ergonomics and sound aesthetics. It introduces a virtual prototyping implementation of a simple radiogoniometer, along with a methodology to assess its ecological validity. Said methodology involves a performance comparison between two different radiogoniometer designs, both implemented as virtual prototypes. While completing the proposed assessment requires a companion study in a real environment (based on a physical prototype), significant results have already been gathered regarding the impact of the virtual environment on the validity of the virtual prototype.


Tuesday, June 24, 16:00 - 16:40

Lecture Presentations - Session 7:

Auditory Display Applications: Biomedicine Part 1

Session Chair Thomas Hermann

Agnieszka Roginska, Hariharan Mohanraj, James Keary, Kent Friedman, Sonification Method to Enhance the Diagnosis of Dementia

Positron emission tomography (PET) scans of brains result in large datasets that are traditionally analyzed using visual displays and statistical analyses. Due to the complexity and multi-dimensionality of the data, there exist many challenges in the interpretation of the scans. This paper describes the use of a sonification method to assist in improving the diagnosis of patients with different levels of Alzheimer's dementia. A triple-tone method is introduced, and the audible beating patterns resulting from the interaction of the three tones are explored as a metric to interpret the data. The sonification method is presented and evaluated using subjective listening tests. Results show the triple-tone sonification method is effective at evaluating PET scan brain data, even for listeners with no medical background.
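
The abstract does not specify the frequency assignment, but the audible-beating idea itself is easy to illustrate: three tones whose pairwise frequency differences fall in the beating range produce amplitude fluctuations at those difference frequencies (the frequencies below are arbitrary examples, not the paper's mapping of PET values):

```python
import numpy as np

def triple_tone(f1, f2, f3, dur=2.0, sr=44100):
    """Sum of three sines; pairwise differences such as |f1 - f2| are heard as beats."""
    t = np.arange(int(dur * sr)) / sr
    return sum(np.sin(2 * np.pi * f * t) for f in (f1, f2, f3)) / 3.0

# Beats at 3 Hz and 5 Hz; a regional PET value could, for example, perturb f2 and f3.
signal = triple_tone(440.0, 443.0, 448.0)
```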

Teruaki Kaniwa, Hiroko Terasawa, Masaki Matsubara, Shoji Makino, Tomasz Rutkowski, Electroencephalogram Steady State Response Sonification Focused on the Spatial and Temporal Properties

This paper describes a sonification approach for multichannel electroencephalogram (EEG) steady-state responses (SSR). The main purpose of this study is to investigate the possibility of sonification as an analytic tool for SSR. The proposed sonification approach aims to observe the spatial property (i.e. the location of strong brain activity) and the temporal property (i.e. the synchrony of waveforms across channels) of brain activity. We expect to obtain useful information on brain activity locations and their dynamic transitions by taking advantage of spatial sound with multichannel loudspeakers that represent EEG measurement positions, while expressing the temporal property of multiple EEG channels with timbre by integrating the respective auditory streams. Our final sonification evaluation experiment suggests the validity of the proposed approach.


Wednesday, June 25, 09:30 - 10:10

Lecture Presentations - Session 8:

Auditory Display Applications: Biomedicine Part 2

Session Chair Hiroko Terasawa

Yoon Chung Han, Byeong-jun Han, Skin Pattern Sonification Using NMF-based Visual Feature Extraction and Learning-based PMSon

This paper describes the use of sonification to represent scanned image data of skin patterns of the human body. Skin patterns have different characteristics and visual features depending on the position and condition of the skin on the human body. The visual features are extracted and analyzed for sonification in order to broaden the dimensions of data representation and to explore the diversity of sound in each human body. Non-negative matrix factorization (NMF) is employed to parameterize skin pattern images, and the resulting visual parameters are connected to sound parameters through support vector regression (SVR). We compare the sound results with the data from the skin pattern analysis to examine how effectively each individual skin pattern is mapped to create accurate sonification results. Thus, the use of sonification in this research suggests a novel approach to parameter mapping sonification by designing personal sonic instruments that use the entire human body as data.
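
A compact sketch of the described NMF-to-SVR pipeline using scikit-learn (the feature dimensions, synthetic data, and the choice of a single sound parameter are placeholders, not the authors' configuration):

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVR

# Stand-in data: 50 skin-pattern images flattened to non-negative pixel vectors,
# each labelled with one target sound parameter (e.g. a filter cutoff).
rng = np.random.default_rng(0)
images = rng.random((50, 32 * 32))
sound_param = rng.random(50)

# 1) NMF parameterizes each image as a small set of non-negative activations.
nmf = NMF(n_components=8, init="nndsvda", max_iter=500)
activations = nmf.fit_transform(images)            # shape (50, 8)

# 2) SVR learns the mapping from visual parameters to the sound parameter.
svr = SVR(kernel="rbf").fit(activations, sound_param)

new_image = rng.random((1, 32 * 32))
predicted = svr.predict(nmf.transform(new_image))  # drive the synthesizer with this
```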

Jean-Luc Boevé, Rudi Giot, Volatiles that Sound Bad: Sonification of Defensive Chemical Signals from Insects Against Insects

Defensive chemicals such as volatiles are essential for many insects against attack by predatory insects, but in the research domain of chemical ecology there remains a need to better understand how the intrinsic physicochemical constants of volatiles determine the intra- and interspecific diversification of such compounds produced by prey insects, knowing that many predatory insects primarily rely on chemical cues during foraging. To apprehend and explore the diversity of emitted chemicals as related to the receiver's perception, we aim here to transform chemical signals into acoustic signals by a process of sonification, because propagating odours and sounds are similar in their spatiotemporal dynamics. Since insects often emit a complex mixture of repellents, we prototyped sonification software to process the physicochemical parameters of individual molecules, before mixing these sonified data following the chemical profile of specific insect defensive secretions. In a fruitful proof of concept, the repellence of single chemicals to insectivorous ants was compared with the repulsive response of humans towards the same, but auditorily translated, signals. Expected outcomes of our ongoing project, called 'SonifChem', are, among others, to explore the repulsive and even the attractive bioactivities of chemicals emanating from any (biological) source.


Wednesday, June 25, 10:10 - 10:30

Poster Craze


Wednesday, June 25, 10:30 - 11:30 and 15:00 - 16:00

Poster Presentations - Session 3:

1. Ismael Nawfal, Josh Atkins, Binaural Reproduction over Loudspeakers Using a Modified Target Response

Crosstalk cancellation (XTC) is a technique that can be used to play binaural content, typically meant for headphone playback, over two or more loudspeakers. Though effective at creating a binaural spatial sound field at the listening position, many XTC algorithms introduce spectral coloration, suffer from spatial robustness issues, and create filters that are unrealizable in practice. Past approaches to dealing with this issue rely heavily on regularization. In this work we propose a new topology for loudspeaker binaural rendering (LBR) that performs better than conventional techniques without the need for the regularization commonly associated with crosstalk cancellation based binaural renderers (XTC-BR). We then explore the use of the proposed LBR in the context of multiple output channels. A method is investigated to further optimize the filter design process by selecting an appropriate modeling delay and filter length using methods practiced in XTC filter design.
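
For background on the conventional formulation the authors contrast with (standard XTC theory, not the proposed LBR topology): with acoustic transfer functions H_ij(ω) from loudspeaker j to ear i, the cancellation filters invert that matrix, and the regularization term β is what keeps the inverse realizable, at the cost of the coloration and robustness issues mentioned above:

```latex
\begin{bmatrix} \hat{e}_L \\ \hat{e}_R \end{bmatrix}
= H(\omega)\, C(\omega) \begin{bmatrix} b_L \\ b_R \end{bmatrix},
\qquad
H(\omega) = \begin{bmatrix} H_{LL} & H_{LR} \\ H_{RL} & H_{RR} \end{bmatrix},
\qquad
C(\omega) = \bigl(H^{\mathsf{H}}H + \beta I\bigr)^{-1} H^{\mathsf{H}}
```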

2. Paul Riker, iEAR: Immersive Environmental Audio for Photorealistic Panoramas

This paper presents iEAR, a flexible spatial audio rendering tool for use with photorealistic monoscopic and stereoscopic panoramas across various display systems. iEAR allows users to easily present multichannel audio scenes over variable speaker arrangements, while maintaining tight integration with the corresponding visual elements of the display media. Built in the Max/MSP Audio Programming Environment, iEAR utilizes well-established panning methods to accommodate a wide range of speaker configurations. Audio scene orientation is tied to the visual scene using an OSC connection with the visualization software, allowing users to render and spatialize multichannel environmental audio recordings in tandem with the changing perspective in the visual scene.

3. Samuel Clapp, Anne Guthrie, Jonas Braasch, Ning Xiang, Localization Accuracy in Presenting Measured Sound Fields via Higher Order Ambisonics

A spherical microphone array can encode a measured sound field into its spherical harmonic components. Such an array will be subject to limitations on the highest spherical harmonic order it can encode and encoding accuracy at different frequencies. Ambisonics is a system designed to reproduce the spherical harmonic components of a measured or virtual sound field using multiple loudspeakers. In ambisonic systems, the size of the sweet spot is wavelength dependent, and thus decreases in size with an increase in frequency. This paper examines how to reconcile the limitations of the recording and playback stages to arrive at the optimum ambisonic decoding scheme for a given spherical array design. In addition, binaural models are used to evaluate these systems perceptually.
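
The wavelength dependence mentioned here is often summarized by the rule of thumb that an order-N ambisonic reconstruction remains accurate roughly while kr ≤ N, so the sweet-spot radius shrinks inversely with frequency (a standard approximation from the ambisonics literature, not a result from this paper):

```latex
r_{\max} \;\approx\; \frac{N}{k} \;=\; \frac{N\,c}{2\pi f}
```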

4. Joel Miller, Martine Godfroy-Cooper, Elizabeth Wenzel, Using Published Hrtfs with Slab3d: Metric-Based Database Selection and Phenomena Observed

In this paper, two publicly available head-related transfer function (HRTF) database collections are analyzed for use with the open-source slab3d rendering system. After conversion to the slab3d HRTF database format (SLH), a set of visualization tools and a five-step metric-based process are used to select a subset of databases for general use. The goal is to select a limited subset least likely to contain anomalous behavior or measurement error. The described set of open-source tools can be applied to any HRTF database converted to the slab3d format.

5. Jianyu Fan, Spencer Topel, SONICTAIJI: A Mobile Instrument For Taiji Performance

SonicTaiji is a mobile instrument designed for the Android Platform. It utilizes accelerometer detection, real-time sound synthesis, and data communication techniques to achieve real-time Taiji sonification. Taiji (Tai Chi) is an inner-strength martial art aimed at inducing meditative states. In this mobile music application, Taiji movements are sonified via gesture detection, connecting listening and movement. This instrument is a tool for practitioners to enhance the meditative experience of performing Taiji. We describe the implementation of gesture position selection, real-time synthesis design, and data mapping. We then describe outcomes of subjective evaluations of the user experience.

6. Camille Peres, Cody Faisst, Nathan Slota, Daniel Verona, Chase Williams, Paul Ritchey, Sonification Synthesizer for Surface Electromyography

Surface electromyography (sEMG) is a means for measuring muscular activity beneath the surface of the skin. It is used as a biofeedback tool and also as a means for studying the biomechanics of human movement. sEMG data are typically displayed graphically on a computer screen and while this can be a useful way to display the data, it is not always ideal. This extended abstract proposes the development of a sonification tool that allows users to sonify sEMG data for scientific purposes and have real-time control over the sounds that the tool generates. The tool will consist of two parts: a graphical user interface (GUI) that allows the user to independently control the sound of each channel as the front end and sonification code as the back end, similar to a software synthesizer. Independent real-time control of each channel will allow the user to create sonification models, which are mappings of certain sounds to specific muscle groups. The GUI will allow the user to save and access sonification models for later use. A prototype of the tool is currently being developed using NI LabVIEW in parallel with a Delsys Trigno Wireless sEMG system. However, various issues with LabVIEW are forcing the design team to consider moving to a more conventional programming environment such as C++ or Java. It is anticipated that different sonification models will be better suited to different tasks, e.g. one model may be more ideal for data analysis and data exploration while another model may be better suited for biofeedback. This tool will allow the user to easily explore these various kinds of sonification models and test them for intuitiveness and accuracy.

7. Michael Nees, Kathryn Best, A Verification Task with Lateralized Tones and Accelerated Speech

Research has suggested that the left hemisphere of the brain may be specialized for processing auditory speech, whereas the right hemisphere may be specialized for processing nonspeech auditory stimuli. Due to contralaterality in auditory pathways, this functional specialization has been reflected in behavioral advantages for speech stimuli presented to the right ear and for nonspeech stimuli presented to the left ear. We used a verification task with lateralized presentations of brief tonal stimuli (sonifications) and accelerated speech stimuli (spearcons) to examine performance as a function of the presentation ear and the type of auditory display. The general pattern of results showed that reaction time and accuracy were facilitated when two accelerated speech stimuli were compared to each other. Based on the results of this study, reported effects of left and right ear advantages do not seem to be robust enough to warrant general ergonomic recommendations (i.e., left ear presentation of nonspeech sounds and right ear presentation of speech sounds) for auditory display design.

8. Jeff Rimland, Mark Ballora, Using Vocal-Based Sounds to Represent Sentiment in Complex Event Processing

There is an intricate and evolving relationship between sonification and Complex Event Processing (CEP) for improved situational awareness. In a paper presented at ICAD 2013 [1], we introduced a series of techniques using CEP for simultaneous sonification of both quantitative “hard” data and human-derived “soft” data in the context of assistive technology. The connection of CEP and sonification was explored further in the context of a severe weather tracker that relies on fusion of quantitative (sensor-based) weather data along with human observations about storms and related conditions [2]. A shortcoming of both of these earlier works was the difficulty of creating sounds that represented human sentiment about observed conditions (e.g. unanticipated obstacles for a blind person crossing a busy street, or impending dangerous weather conditions) in a format that enabled intuitive listening for improved situational awareness. This extended abstract provides an update on that continuing research, representing human sentiment data via vocal synthesis driven by Complex Event Processing.

9. Roger Dean, Relationships Between Acoustic Features and Perceptual Segmentation of Music Audio

For a successful practical sonification, boundaries where the information state changes need to be made readily apparent perceptually. I describe time series analysis techniques for the detection of segments perceived implicitly during continuous music audio. The detected segments are compared with those measured computationally, or determined musicologically, on the input audio. The degree to which perceptual segmentation can be predicted is discussed, together with some of the factors apparently responsible. This may give useful cues as to how best to structure sonifications for informational purposes.

10. Mohammad Adeli, Jean Rouat, Stéphane Molotchnikoff, On the Importance of Correspondence Between Shapes and Timbre

The results of a preliminary study of the audio-visual correspondence between musical timbre and visual shapes are reported. 22 participants had to play 20 musical sounds and choose a shape for each. An association between timbre and visual shapes emerged. Soft timbres seem to match rounded shapes, harsh timbres match sharp angular shapes, and timbres combining elements of softness and harshness match a mixture of the two previous shapes. The correspondence between timbres and shapes should lend itself to the development of perceptually supported musical interfaces and substitution systems. A larger scale experiment with more sounds and participants is underway and confirms the preliminary results reported in this paper.

11. Matti Gröhn, Miika Leminen, Minna Huotilainen, Tiina Paunio, Jussi Virkkala, Lauri Ahonen, Comparing Auditory Stimuli for Sleep Enhancement: Mimicking a Sleeping Situation

Recently, two research groups have reported that the depth and/or duration of Slow Wave Sleep (SWS) can be increased by playing short sounds at approximately 1-second intervals during or prior to SWS. These studies have used sounds with neutral or negative valence: sinusoidal 1-kHz tones or short pink noise bursts. Since music therapy research shows beneficial effects of pleasant, natural sounds and music, the sounds in these experiments may have been suboptimal. Thus, we aimed at choosing optimal sounds that could be used to increase the depth or duration of SWS, taking into account the need for fast rise times, short duration, and pleasantness. Here we report the results of a listening test mimicking a sleeping situation, in which the subjects rated how pleasant, relaxing, and image-evoking they found 3 natural, short instrument sounds with fast rise times, compared to the short pink noise burst used in the previous experiments. The natural sounds were selected from our previous listening test as the most pleasant ones. The results will be used as the basis for choosing the optimal sounds for the sleep studies.


Wednesday, June 25, 11:30 - 12:30

Lecture Presentations - Session 9:

Psychoacoustics, Perception and Cognition:

Session Chair Camille Peres

Derek Brock, Charles Gaumond, Christina Wasylyshyn, Brian McClimens, Collaboratively Identifying and Referring to Sounds with Words and Phrases

Machine classification of underwater sounds remains an important focus of U.S. Naval research due to physical and environmental factors that increase false alarm rates. Human operators tend to be reliably better at this auditory task than automated methods, but the attentional properties of this cognitive discrimination skill are not well understood. In the study presented here, pairs of isolated listeners, who were only allowed to talk to each other, were given a collaborative sound-ordering task in which only words and phrases could be used to refer to and identify a set of impulsive sonar echoes. The outcome supports the premise that verbal descriptions of unfamiliar sounds are often difficult for listeners to immediately grasp. The method of “collaborative referring” used in the study is proposed as a new technique for obtaining a verified perceptual vocabulary for a given set of sounds and for studying human aural identification and discrimination skills.

Gregory Wakefield, David Kieras, Nandini Iyer, Brian Simpson, Eric Thompson, EPIC Modeling of a Two-Talker CRM Listening Task

An extension of the auditory module in EPIC is introduced to model the two-speaker coordinate response measure (CRM) listening task. The construct of an auditory stream is employed as an object in the working memory of EPIC's cognitive processor. Production rules are developed that execute the two-speaker CRM task. Analysis of these rules reveals two sources of possible error in the output of the auditory processor to working memory. Each is explored in turn and the production rules modified to provide a corpus-driven model that accounts for human performance in the listening task.

Robert Alexander, Sile O'Modhrain, Jason Gilbert, Thomas Zurbuchen, Auditory and Visual Evaluation of Fixed-Frequency Events in Time-Varying Signals

This study directly compares the auditory and visual analysis capabilities of participants in a structured data analysis task. This task involved the identification of transient fixed-frequency sinusoid events that were embedded within white noise and noise derived from solar wind time series. It was hypothesized that participants would be able to identify the number of embedded events more quickly and accurately through auditory data analysis than through visual analysis. While visual analysis outperformed auditory analysis overall, additional investigation revealed that auditory analysis outperformed vision in instances where these events were embedded in solar wind data. This task - involving the detection of transient periodic activity occurring within background turbulence - closely mirrors a type of spectral analysis conducted by heliospheric scientists. Additionally, several data examples contained embedded events that were correctly identified through audition while being consistently overlooked through visual inspection. The largest disparity between visual and auditory performance was found in the analysis of white noise spectra that contained no embedded events. In these instances, auditory analysis regularly resulted in the identification of events when none were present; a potential explanation for these false positives is discussed. The results of this study suggest that the analysis capabilities of each modality may vary based largely on the complexity of the masking stimuli that are present.


Wednesday, June 25, 14:00 - 15:00

Lecture Presentations - Session 10:

Sonification for the Arts:

Session Chair David Worrall

Ludovic Laffineur, Rudi Giot, Louis Commère, Audiovisual and Pedagogical Network Installation

This paper presents an interactive installation designed to inform visitors of network flows and the risks of connecting their devices to any Wi-Fi hotspot. The system, developed in C++, grabs packets using LibPCap, analyses them at a low level (e.g., packet length) and also provides high-level information (e.g., port number). This new approach is based on network flow analysis as well as on network services analysis. The software communicates with ChucK through the OSC protocol and is developed with the OpenFrameworks library in order to create a unique visualisation. Users can actively take part in an interactive and didactic audiovisual exhibition system using their mobile device to send e-mails, listen to a web radio, browse a website, or read RSS feeds; in short, the experience begins once visitors exchange data with the network.

Mark Ballora, Sonification Strategies for the Film Rhythms of the Universe

Design strategies are discussed for sonifications that were created for the short film Rhythms of the Universe, which was conceived by a cosmologist and a musician, with multi-media contributions from a number of artists and scientists. Sonification functions as an engagement factor in this scientific outreach project, along with narration, music, and visualization. This paper describes how the sonifications were created from datasets describing pulsars, the planetary orbits, gravitational waves, nodal patterns in the sun’s surface, solar winds, extragalactic background light, and cosmic microwave background radiation. The film may be viewed online at [1], and the sonifications described here may be downloaded at [2].

J. Parkman Carter, Jonas Braasch, Cross-Modal Soundscape Mapping: Integrating Ambisonic Environmental Audio Recordings And High Dynamic Range Spherical Panoramic Photography

We cannot ‘measure’ the soundscape any more than we can ‘measure’ the ocean, the city, or the wilderness. Being comprised of myriad complex elements, conditions and relationships between sound sources and sound perceivers, the soundscape—and any sufficient description of it—must account for several different, but significantly interrelated, dimensions: physical, spatial, temporal, perceptual, cultural, and historical. How, then, are we to meaningfully document the soundscape? If we are to begin to understand the soundscape’s impact on us—and our impact upon it—we need new methods to capture and represent the multisensory extents of a soundscape without reverting to one-dimensional quantitative abstractions. This project proposes an interdisciplinary method to record a soundscape’s multisensory attributes by combining aural and visual information in a structured way which links the directionality of view and sound arrival. This method integrates multi-directional Ambisonic audio recordings with high dynamic range (HDR) spherical panoramic photography in the form of interactive maps and virtual tours. Case studies using the cross-modal soundscape mapping method will be presented.