Helping People Communicate

Within scholarly research on speech therapy, the study of treatment methods represents a relatively narrow slice because treatment research is time-intensive and costly. Researchers in NYU Steinhardt’s Department of Communicative Sciences and Disorders (CSD) are tackling this high-impact area in a variety of ways, using innovation and technology to improve quality of life.

Tara McAllister, associate professor and director of the CSD doctoral program, has created an iOS app that makes the kind of visual-acoustic biofeedback she uses in her lab more widely accessible to speech pathologists and the children they work with.

The free app, called staRt, has been downloaded more than 2,500 times since its release in 2020. It targets older children aged nine to fifteen who are cognitively and linguistically typical but struggle with making the “r” sound in American English, which McAllister describes as the “single most consistent pain point in speech therapy for children.”

In biofeedback treatment, speech therapists use different technologies to enhance the sensory experience of producing speech, giving the learner an augmented sense of how to make the target sound. staRt pairs a real-time display of the frequency spectrum of speech with a visual of ocean waves, so the child can practice recreating the shape that a successful “r” sound makes on screen.
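For readers curious about what drives such a display, here is a minimal sketch, in Python, of how one frame of speech audio might be turned into a smoothed frequency spectrum that a wave-style visual could animate. It is not drawn from the staRt codebase; the frame length, band count, and frequency range are illustrative assumptions.

```python
# Illustrative sketch (not the staRt implementation): compute a smoothed
# short-time magnitude spectrum of one audio frame, the kind of quantity a
# visual-acoustic biofeedback display could render as a wave-like contour.
import numpy as np

def frame_spectrum(frame, sample_rate, n_bands=64, max_freq=4000.0):
    """Return (band_centers_hz, smoothed_magnitudes) for one audio frame."""
    windowed = frame * np.hanning(len(frame))      # taper edges to reduce spectral leakage
    magnitudes = np.abs(np.fft.rfft(windowed))     # magnitude spectrum of the frame
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)

    # Keep the speech-relevant range and average into coarse bands so a
    # display would move smoothly rather than flicker bin by bin.
    keep = freqs <= max_freq
    freqs, magnitudes = freqs[keep], magnitudes[keep]
    edges = np.linspace(0.0, max_freq, n_bands + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    banded = np.array([
        magnitudes[(freqs >= lo) & (freqs < hi)].mean()
        if np.any((freqs >= lo) & (freqs < hi)) else 0.0
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return centers, banded

# Example with a synthetic 30 ms frame at 16 kHz (a real app would read the microphone).
sr = 16000
t = np.arange(int(0.03 * sr)) / sr
frame = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
centers, spectrum = frame_spectrum(frame, sr)
print(centers[:5], spectrum[:5])
```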

Dr. Heather Kabakoff (PhD '21, CSD) works with a child using the staRt app on an iPad; two stills from the app show an “Image of Poor /R/” and an “Image of Good /R/.”

“StaRt offers a highly personalized approach that gives children insight into what sound they’re producing compared to the sound we’re trying to get them to make,” says McAllister, who began working on staRt in 2014. Collaborators include Steinhardt’s Music and Audio Research Lab; Tae Hong Park, associate professor of Music Composition and Technology; and Mario Svirsky, professor at NYU School of Medicine.

McAllister received funding for this project through a National Institutes of Health (NIH) Small Business Technology Transfer (STTR) grant. Prompted by the effects of COVID-19, the researchers are now studying the feasibility of delivering visual-acoustic biofeedback via telehealth, a booming area of speech therapy post-pandemic.

They are also partnering with New York-based Sonority Labs to adapt staRt’s software into a browser version of the technology to offer even greater access.

Helping Make a Brain-Body Connection

In her research on understanding motor control in children with speech sound disorders, Maria Grigos, professor and chair of CSD, has been studying childhood apraxia of speech (CAS).

“With CAS, children might know what they want to say, but their brains aren’t sending the appropriate signal to move their mouths the right way at the right time,” explains Grigos. “Traditional speech therapy strategies aren’t effective with CAS, so we aim to improve the underlying motor skill within treatments designed for children with CAS – an area that has very little research demonstrating effectiveness.”

Funded through a five-year NIH grant, Grigos is studying the efficacy of Dynamic Temporal and Tactile Cueing (DTTC), a motor-based intervention that uses a hierarchical series of cues to improve production across the whole word rather than sound by sound.

“Speech-language pathologists typically teach children to improve the production of individual sounds: the ‘b’ in ‘bye,’” says Grigos. “Children with CAS can’t take that information and apply it to other words that have different combinations of sounds, so the DTTC approach instead systematically addresses motor control by teaching the speech movements within whole, functional words like ‘me,’ ‘up,’ and ‘hi.’ As children improve motor function, the words become more intelligible.”

In Grigos’ study, young children aged two and a half to five years receive personalized DTTC treatment four times a week for eight weeks. Preliminary findings show that participants refined their speech movements as they improved accuracy on many of the words practiced over the course of DTTC treatment. Children also generalized treatment gains to words that were not practiced in treatment.

It’s key that we continue this work to advance our understanding of how to optimally treat CAS and effectively use motor-based intervention in young children.

Maria Grigos, Professor and Chair of Communicative Sciences and Disorders

Alongside the NIH grant, Grigos and collaborators Ying Lu, associate professor of Applied Statistics, Social Science, and Humanities at Steinhardt, and Julie Case, assistant professor of Speech-Language-Hearing Sciences at Hofstra University and CSD alum, are also funded by the Once Upon a Time Foundation. This work aims to train parents and caregivers to better support their children at home between DTTC speech therapy sessions.

“With CAS, there is still so much we do not know about how to achieve long-lasting improvements in speech production,” says Grigos. “It’s key that we continue this work to advance our understanding of how to optimally treat CAS and effectively use motor-based intervention in young children.”

Bringing Stroke Survivors’ Brains Back to Life

With NIH funding, Adam Buchwald, associate professor in CSD, is looking at reorganizing damaged areas of stroke survivors’ brains using noninvasive electrical stimulation called transcranial direct current stimulation, or tDCS.

“During a stroke, the blood supply to parts of the brain is shut down; that deprived tissue dies, and over time it disappears,” says Buchwald. “When it comes to strokes affecting speech and language, that damage tends to be lateralized to the left side of the brain. The right hemisphere tries to ‘pick up the slack’ and support speech and language more to compensate, but the best outcomes we see are when the left hemisphere is as active as possible, even though some of it is gone.”

Buchwald’s lab uses tDCS to penetrate the skull and stimulate residual cortical tissue in left hemisphere brain regions that support speech production.

Our hope is that if we engage these regions together, they will become part of a stronger network and the left hemisphere will once again become more active over time during speech production.

Adam Buchwald, Associate Professor of Communicative Sciences and Disorders

“We’re trying to tap into the brain’s capacity for plasticity by pairing tDCS with the targeted behavioral treatment tasks that we want the left hemisphere to be involved in – in this case, relearning how to produce complex sounds,” says Buchwald. “It’s a classic neuroscience idea that ‘neurons that fire together wire together.’ Our hope is that if we engage these regions together, they will become part of a stronger network and the left hemisphere will once again become more active over time during speech production.”
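The “fire together, wire together” idea Buchwald invokes can be illustrated with a toy Hebbian learning rule. The Python sketch below is purely conceptual, not a model of tDCS or of Buchwald’s protocol; the neuron count, firing probability, and learning rate are arbitrary assumptions.

```python
# Toy illustration of the Hebbian "neurons that fire together wire together"
# principle: repeated co-activity of two units strengthens the weight between them.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, learning_rate = 4, 0.1
weights = np.zeros((n_neurons, n_neurons))

for _ in range(50):
    activity = rng.random(n_neurons) < 0.2      # sparse baseline firing
    activity[[0, 1]] = True                     # units 0 and 1 are always co-active,
                                                # like regions engaged together by task plus stimulation
    x = activity.astype(float)
    weights += learning_rate * np.outer(x, x)   # Hebbian update: co-activity strengthens the link
    np.fill_diagonal(weights, 0.0)              # ignore self-connections

# The 0-1 connection ends up several times stronger than connections to
# units that only fired occasionally.
print(weights[0, 1], weights[0, 2])
```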

Preliminary findings of Buchwald’s research are promising: improvement in the condition where treatment was paired with tDCS has been more durable than in the condition without tDCS, with treatment gains maintained several months after treatment ended. Buchwald’s grant runs through 2025, and testing and data analysis are ongoing.