What We Know (And Don’t Know) about Stuttering: An Interview with Eric S. Jackson

Assistant Professor Eric S. Jackson is a clinician-scientist and director of NYU Steinhardt’s Stuttering and Variability (savvy) Lab in the Department of Communicative Sciences and Disorders. We sat down with Eric to discuss his latest research on stuttering — including two recently published papers on the subject and a new grant from the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health (NIH).

Eric S. Jackson

What sparked your initial interest in researching stuttering?

I’m a person who stutters — that was probably the initial catalyst. I used to work in the financial services industry, and at that point, I was in a pretty bad place with my own stuttering. After having an amazing experience with therapy, I realized, “I want to be a catalyst in other people’s lives in the way my therapy was for me.” So I went back to school to get my master’s so I could become a speech therapist.

After a couple of years of working as a clinician, I decided to pursue a PhD because the field simply didn’t know enough about stuttering. It’s interesting because stuttering basically started the field of speech pathology — it’s probably researched more than any other speech-language impairment. But in some ways, it’s one of the disorders that we know the least about. We know stuttering is context-based, we know it’s socially driven, but we don’t fully understand it yet — and that’s what my work tries to figure out.

That’s interesting. Why is stuttering so hard to fully study?

The tricky thing about studying stuttering is that it’s variable. Sometimes people will stutter on one word in one situation, and then while producing the same word in another situation, they won’t stutter. It’s very context-based, which makes it hard to elicit stuttering in a controlled research environment.

You were just awarded a substantial R21 Early Career NIH grant to research “The Impact of Social-Cognitive Processing on Stuttering.” How will this study address the particular challenges in researching stuttering?

We’re actually getting people to stutter in the lab. One of the shortcomings of prior work is that researchers looked at the brain in non-social contexts, like when a person who stutters is reading words off a computer monitor. If stuttering is a social phenomenon, we should look at it in the context of social interaction — that’s what’s really novel about the project.

The technique we’re using is called functional near-infrared spectroscopy (fNIRS), a brain imaging technique that uses light to detect changes in blood flow in the brain and provides an indirect measure of neural activation. The problem with many standard, widely used neuroimaging techniques is that the process is really unnatural. For example, during fMRI of the brain, a person has to lie down in a giant magnet — and can’t really talk because any movement is going to create noise in the data. The nice thing about fNIRS is that you just put a cap on somebody and he or she can sit upright, across from another person, and talk — allowing us to put “the social” into stuttering experiments.

You recently had two papers published about stuttering, one in the Journal of Fluency Disorders and one in Neuroscience. Could you talk a bit about each?

The paper in the Journal of Fluency Disorders looks at a phenomenon called anticipation. In many instances, people who stutter know which word they’re going to stutter on — that’s anticipation. The tricky thing about anticipation is that the speech pathologist can’t see it. It’s a covert phenomenon, but it’s so central to the stuttering experience. In previous work, I created a scale called the Stuttering Anticipation Scale, which essentially helps us quantify how often people who stutter engage in different kinds of responses to anticipation. We’re trying to make the unobservable observable.

In this particular study, we did a factor analysis using the scale to identify the different ways people respond to anticipation. We found that people tend to display one of three responses: avoiding an anticipated word, implementing a speaking strategy to help them say the word, or stuttering regardless of anticipation. The next step is to investigate what makes an individual more likely to engage in one of those responses over another.

To date, the paper in Neuroscience is the largest fNIRS study of adults who stutter. We looked at the differences between speech planning and speech execution, which hasn’t been examined much before. And we found differences between those two processes that will give us more information on overall speech production in the long run.

What is something you wish more people knew about stuttering?

The first thing that comes to mind is the covert nature of stuttering. People who stutter get very adept at knowing they’re going to stutter and preventing stuttering from coming to the surface. We have to do a better job of explaining that while stuttering may not be seen or heard, a person could still be stuttering beneath the surface.

Lend Your Voice to the NYU NSSLHA Voice Drive

The advent of synthetic voices has had an incredible impact on the speech-impaired community, giving those with communication loss access to a voice. However, the options available have tended to be limited, potentially stifling a user’s sense of individuality. In response, students in the NYU Steinhardt Department of Communicative Sciences and Disorders’ chapter of the National Student Speech Language Hearing Association (NSSLHA) are holding a voice drive to help individuals with speech loss reclaim ownership of their voices.

To participate in the drive, donors submit recordings of their own voices to a “Human Voicebank” to be potentially matched with a recipient who shares similar vocal characteristics. Once a match is made, the donor’s recordings are blended with 2-3 second samples of the recipient’s voice to create a synthetic voice that maintains the vocal quality and identity of the individual with communication loss.

The drive is being held in collaboration with VocaliD, a company that was founded by a speech-language pathologist to create custom digital voices.

A screenshot of VocaliD’s Human Voicebank recording interface.

The voice drive will run through June 21. To contribute your voice to VocaliD’s Human Voicebank, email nsslha.nyu@gmail.com.

Tara McAllister Develops App for Speech Therapy

Photo of Tara McAllister’s app

Mispronouncing the “r” sound is among the most common speech errors, and is the most challenging to correct in speech therapy. For other sounds – such as “t” or “p” – speech pathologists can give clear verbal, visual, or tactile cues to help children understand how the sound is created, but “r” is difficult to show or explain. In addition, some children may have trouble hearing the difference between correct and incorrect “r” sounds, making it even more difficult for them to improve.

A growing body of evidence suggests that speech therapy incorporating visual cues — or visual biofeedback — can help. Visual biofeedback shows someone what their speech looks like in real time. For instance, speech might be represented by dynamic waves on a screen.

Research led by Tara McAllister, assistant professor of communicative sciences and disorders at NYU Steinhardt, and published in May in the Journal of Speech, Language, and Hearing Research, suggests that visual biofeedback can be effective in helping some people to correct the “r” sound.

Read more about Dr. McAllister’s work in developing the app in Steinhardt’s At a Glance blog.