The Development of Hearing Under Complex Listening Conditions

This presentation describes the maturation of hearing in complex, multi-source acoustic environments, including age-related changes in selective attention and speech-in-speech recognition.

Lori Leibold

DOI: 10.1044/cred-pvd-c16004

The following is a transcript of the presentation video, edited for clarity.

Thank you very much to Karen for inviting me to present today. And I also received funding when I was a graduate student from many of the different ASHFoundation and ASHA awards, and so it’s a real honor for me to be included as one of the talkers now.

So I’m going to talk about developmental effects in speech perception in complex listening environments, and in contrast to Karen’s talk I’m going to talk about development on the other end of the spectrum. So I’m going to talk a lot about how children learn to perceive and understand speech in complex and dynamic listening environments. I have no disclosures related to this work.

I did want to thank right off the bat my collaborators and funding sources. Some of my collaborators are here today, but this is really a team approach. And of course Emily Buss from the University of North Carolina has her fingerprints all over all of the work that we do. Lauren Calandruccio — if you have any questions, especially about our more recent non-native speech perception work, Lauren is the person to talk to. Nicole Corbin, Jenna Browning, Angela Yarnell Bonino, and Mary Flaherty, who’s in that general direction over there. And of course thank you to the NIDCD for funding.

So I wanted to give you an outline of what it is I’m going to talk about today. As Karen mentioned, I’m a clinical audiologist. I also wanted to frame the question in terms of what types of masking children experience in their natural listening environments. So I’m going to have to talk about masking, and it’s a bit of an intimidating audience for some of these different terms of masking that we still go back and forth on in terms of how best to define them. But I promise, for those of you not that into hearing me talk about masking, it’s a fairly short description. Also susceptibility to masking during infancy and childhood: How does noise and other competing speech influence infants’ and children’s ability to hear and understand speech? That’s what we mean by susceptibility; you might think of it as vulnerability. I’m also going to talk about maturational effects in target-masker segregation. And fortunately for me, I had talks by Barb and Elyse and Karen who really set up this problem. I think when we start talking about how these processes work in infancy and childhood, we have a lot of really surprising findings, and I’m excited to talk about some examples, mostly from our lab. And finally I’m going to tackle a little bit of what the role of experience might be in all of this.

Children live in a noisy world

So just to start with: Children live in a noisy world. When I went to prepare this version of this talk about a year ago, I went onto Google and did a quick search on the family meal, and this was actually the first image that came up. I know, right? They’re color-coordinated, and all the vegetables are really impressive. What I wanted to focus on here is that this family is doing a lot of things that we think promote good communication. There are no other sources of competing sounds in the background as far as I can tell, they don’t have the TV or other electronic media on in the background, and they’re looking at the child who’s talking. They look to be maybe slightly more engaged than is comfortable, but they’re involved. But at least in my house, this is more like what the family meal is like. This is an illustration from the Washington Post. You can see we’ve got the TV on, the teenager — this is a very good example of my house — is looking at her iPad, and we’ve got the dog. They do have vegetables, so maybe a slight improvement over many days at my house. But this is really likely the better example of what listening environments are like for the developing child.

And then we break it down further. What we’re focused on a lot in my lab is what types of background sounds are in the environment, and how they might affect hearing and learning. We’ve known for a long time, and there’s been quite a lot of study, about what I would call relatively steady noise in the background. This is a schematic of some sources of potential noise in the classroom, and it’s the things we think about a lot in our educational audiology classes, or classroom acoustics. So we have fans, and we have fish tanks — I think those are really out of favor now in most classrooms — and we have noise from traffic outside, and these are relatively steady state. You can see just by looking at the time-intensity waveform that these are relatively steady state. They do fluctuate over time, but not in ways that are overly dramatic when you look at them in this framework.

So when we think about steady noise, the way that we like to think about it in our lab is that it really impacts audibility. It impacts the fidelity with which the peripheral auditory system can represent a target sound. So if I’m listening to, say, Lauren, but there’s a lot of background sound, some parts of her message are not going to be physically transmitted to my brain, because the background noise and portions of her speech are overlapping on the basilar membrane. In my example here, we have the teacher who’s reading to her class, and we have an air conditioner. We call the teacher the target; the air conditioner in this case is the masker, and it really physically obscures parts of the target. In the literature this is often called energetic masking, and it’s the masking we almost always are dealing with in the audiology clinic. So if you’re doing masking of PBK words, if you’re doing masking of tones — your traditional audiometric masking — these are the types of sounds you almost always are using.

So the issue that I’m going to focus on almost exclusively today is really what happens when the competing background sounds are speech. This is a chart that’s taken directly from my first grader’s classroom. He frequently comes home from school having been told to talk at a 0 or a 1, which I love because he’s very loud, not unlike somebody he is related to. And this is very common. So they have a lot of noise in the classroom, but most of it’s coming from the children speaking. And here’s another example.

As Karen did a really nice job of showing, if you look at the time waveform, this is actually two competing sounds. They are summed and delivered to the peripheral auditory system, and you can see that they fluctuate. It’s very difficult to know, looking at this waveform in particular, which is the target and which is the masker. This is the focus of most of my talk, and if I had known that Karen was going to do that awesome slide of transparent overlaid slides on top of one another — that’s informational masking. I love that example.

And the problem for the developing child is that people talk a lot. There are emerging data coming out of our field and the field of speech-language pathology that have actually tried to document how often children are exposed to competing speech. It turns out, most of the time. Conservative estimates would indicate that at least half the time children are in their home they’re listening to some form of competing speech, and when they’re away from home, it’s most of the time. My colleague Sophie Ambrose and her colleagues at Boys Town also documented that TV and other electronic media introduce additional speech streams. I think the estimate in their study was that sixty percent of homes in the U.S. have a television or some other form of competing speech on in the background most of the day. And the issue is that there are also data linking exposure to both noise and speech to academic outcomes and learning, especially language learning. So this is a problem and something that we need to really get a good handle on.

So again, I’m basically going to do an uglier version of Karen’s awesome slide. When we think about this other form of masking and what competing speech might do, in this case we have the teacher who’s the target, but now we have an assistant teacher and another student off to the side reading aloud. They’re competing speech, and the issue is there is some competition and some overlap in the peripheral auditory system, but by most accounts, at least in a typically developing child, the information that is sent to the brain should be sufficient for them to be able to parse apart the auditory scene. But as you can tell, it’s confusing. The sounds are similar, and it’s hard not only to segregate what belongs to the target and what belongs to the masker, but also to stay focused and attend to the target. In the field we often call that informational masking. I put this in there because I’ve tried to do the talk without using some of the jargon, but if I get nervous or if I talk fast I’ll often include it, so this is what I mean. One of the simple ways to think about informational masking is: what is the brain doing with the information that it receives from the peripheral auditory system?

I mentioned that children must learn about speech in the presence of all these different sounds, and one of the things that’s really important to know — and that I’m going to sort of hit you over the head with a little bit — is that the ability to hear and understand speech matures at different rates for these different competing background sounds. We think that’s because different developing processes mediate these different abilities. The other thing — and if you fall asleep for the rest of my talk because you had really good ice cream at lunch, this is the one point to take home — is that these central processes, even though we’re not talking about the peripheral representation on the basilar membrane, these higher-order processes directly affect what you hear. If you cannot segregate a sound, if you are not selectively attending to it, you don’t hear it. So functionally, in some ways it may not matter where in the system this is occurring. But if children are poor at segregating sounds, they’re not hearing the message. And I think that is a really important take-home message. So what an infant, and especially a young child, hears in a noisy background situation is not what most of us hear. Maybe by the time we get to Karen’s participants we’re seeing a U-shaped distribution.

Research

Now I’m going to talk a little bit about mostly work that we’ve done in our lab to really get a handle on: first, how well do children hear, and when in development are they adult-like in these different backgrounds? This summarizes probably about a decade of work from our lab and other labs. As a general rule, a lot of our paradigms use some form of speech recognition. In this particular case I’m going to show you a study that I did with my colleague Emily Buss in which we looked at forced-choice consonant identification. The children had a touchscreen monitor — actually they held what looks like a little iPad — and they were just asked to touch the speech sound that they heard, and the background was either a steady noise, like a shhh, or two competing talkers. I want to use this as an example, but just to let you know, people — including our lab and many other labs — have looked at consonant identification, word identification, open-set word identification, and sentence recognition, and the results are the same. So I’m showing examples that I think highlight the main points, but this has generally been a consistent and overwhelming finding in the literature.

So if we think about children: in this particular study we measured consonant identification at a fixed signal-to-noise ratio of 0 dB, and we looked at performance in two different maskers across a wide age range of children. You can see here on the x-axis that I looked at performance for these four age groups, and that’s a somewhat artificial distinction, but this was a pretty big-N study. When I started looking at some of these effects in children, we used to test five- to ten-year-olds, and then we realized the ten-year-olds weren’t mature, so let’s include up to 13. And now we are including children up to the age of 16. Part of our fear was that we’d start seeing that the 18-year-old, normal-hearing adult controls weren’t mature, but in fact that is not the case, so we think that we see most of the maturation around 16, fortunately. What’s shown here is percent correct identification. Just to orient you, higher on the y-axis means better — you identified more consonants. If we look first at speech-shaped noise, one of the things you can see is a trend that many other people have observed in the literature: that young children — maybe five, six, even seven-year-olds — often perform worse. They need a higher signal-to-noise ratio to perform as well as adults, so their performance was worse in the steady noise. That’s a fairly consistent finding, and these are normal-hearing children. But you can see that the eight- to ten- and 11- to 13-year-old age groups are more or less mature. What was really interesting in this study, and I think it’s pretty obvious if you look at the filled bars, is that not only were the younger children substantially worse relative to the adults — it’s a really pronounced age effect; they’re really surprisingly bad at this — but even the 11- to 13-year-old age group was not performing in an adult-like manner, and many studies have replicated this with different materials. So we think the ability to perceive speech in a speech background matures sometime after 13 or 14 years of age, and that’s in contrast to the ability to perceive speech in a steady noise background.

And I wanted to show these data — these are from Nicole Corbin, a wonderful PhD student who works with Emily Buss and me. It’s not just that the effects are bigger; the whole time course of development doesn’t look to be the same for these two different recognition abilities.

This was a study that looked at open-set monosyllabic word recognition — so kind of like PBK words, but we had a corpus of 800 words that were within the lexicon of a five- and six-year-old child. What’s shown here this time is from an adaptive procedure, so it’s the opposite of the figure I showed you before: we measured the signal-to-noise ratio at which listeners got about 71 percent correct recognition. Lower is better; it means you could handle more background. And again we used noise and speech backgrounds. These are data in the steady noise masker, and what you can see is a general trend for improvement with age, with the youngest children showing increased susceptibility. If you look closely at these data, the fitted line doesn’t help, because it makes it look like there’s a real linear trend. But if you look at the data for children about eight years and older, they look really indistinguishable from the adults. And there’s a lot of variability.
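
For readers unfamiliar with adaptive speech reception threshold (SRT) tracking, here is a minimal sketch of how such a procedure can work. The talk does not specify the exact tracking rule; this sketch assumes a two-down, one-up staircase, which converges near 71 percent correct, and the respond() function is a hypothetical stand-in for a listener's trial-by-trial responses.

```python
import random

def respond(snr_db):
    # Hypothetical listener: probability correct grows with SNR (illustration only).
    p = min(max(0.5 + 0.05 * (snr_db + 5.0), 0.05), 0.95)
    return random.random() < p

def two_down_one_up(start_snr=10.0, big_step=4.0, small_step=2.0, n_reversals=8):
    snr = start_snr
    n_correct = 0
    last_dir = 0              # +1 = last step raised SNR, -1 = lowered it, 0 = no step yet
    reversals = []
    while len(reversals) < n_reversals:
        if respond(snr):
            n_correct += 1
            if n_correct < 2:
                continue      # need two correct in a row before making the task harder
            n_correct = 0
            step_dir = -1     # two correct in a row: lower the SNR (harder)
        else:
            n_correct = 0
            step_dir = +1     # one error: raise the SNR (easier)
        if last_dir != 0 and step_dir != last_dir:
            reversals.append(snr)          # a change in step direction is a reversal
        last_dir = step_dir
        step = big_step if len(reversals) < 2 else small_step
        snr += step_dir * step
    # Threshold estimate: mean SNR at the later reversals.
    return sum(reversals[2:]) / len(reversals[2:])

print(f"Estimated SRT: {two_down_one_up():.1f} dB SNR")
```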

In contrast, these are data from — I think Nicole tested something like 55 children, which is a lot of children to test — and what we see in the two-talker masker is really a different pattern of development altogether. We see a really gradual improvement; it’s a significant age effect, but the slope is shallower. And what was really interesting, and we honestly don’t know what is responsible, is that we see these effects around puberty, we think, where performance just drops to adult-like levels. That was something we hadn’t expected. We thought we would see a linear trend, and we actually wanted to fit functions that would let us estimate when you see mature performance. We’ve talked to some people about what might be going on, but we think this is an interesting trend. Fortunately for us, 16-year-olds do look mature, so all of my previous studies where we used 18- to 24-year-old adults as our baseline for mature performance are, I think, still okay.

The other thing that supports this idea — that something different is going on when we ask children to perceive speech in a steady noise versus a two-talker or speech masker — is that the two are uncorrelated. The threshold in the noise is not correlated with the threshold in the two-talker masker from the same children. That’s really interesting, and I think it was surprising to us. At some level we thought we would see that the kids who were bad in one condition were bad in the other, but that’s not true. You can see the r value here is not very high; it’s not a significant correlation.

What I was going to say before I transitioned to this slide is that one of the things we’ve seen repeatedly from our studies of school-aged children is that they show much greater susceptibility when the background is two streams of speech — something very similar to the target speech — than in steady noise. These data, though, give us maybe a different insight into the problem. We did a study that looked at infant speech perception, and rather than looking at recognition, obviously, we looked at just the speech detection threshold. We looked at 7- to 11-month-old infants and adults, and we also looked at school-aged children because we wanted to get an estimate of how they performed using our traditional infant procedure. In this particular example, if we look first at the open squares — again we have an adaptive threshold, so lower means they could tolerate more background — if you look first at the two-talker speech, we really see the same effect that we saw with recognition performance. In fact, the average difference between the children and adults was about 7 dB in both this detection task and the recognition task I showed you in the previous slide, which is somewhat encouraging — we may be tapping the same abilities. But you can see that the infants are considerably worse. This ability to detect speech in the presence of other talkers is something infants are really poor at.

What was surprising to us is that the school-age children look like adults in terms of speech detection thresholds — again, not too surprising. They were a little better at detection than they were at recognition, and that maybe isn’t all that surprising; they were able to do the higher-order processing that we’ve heard about from the other speakers. But what’s really interesting to us is that they were no better at detecting speech when the masker was broadband noise, and that’s really thrown us for a loop in terms of what might be going on. Our current explanation is that at this age they have not yet learned to determine what’s target and what’s background, and as a general rule we think they try to integrate and listen to everything. So these are some ongoing studies; we have some thoughts on doing a study where the talker is actually the infant’s mother or father, and there’s some indication in the literature that we will see those familiarity effects pop out. And also, when in development does this ability mature? We can’t really test two-year-olds, but we can certainly test three- and four-year-olds in the lab pretty routinely now. So those are some of the questions that we’re really interested in looking at.

Why are children more susceptible to speech-on-speech masking?

Okay, so why? Why are children so poor compared to adults at these tasks? This really reiterates some of the mechanisms and processes you heard about from the earlier speakers. Most of us who study developmental effects in speech perception agree that at least some of the problem is a failure to segregate the target from the masker. If you’re an infant or a young child, how do you know which is the target talker and which is the background talker? How do you perform that task, and what information do you use? We also think children have unrefined, undeveloped selective attention abilities. They might be able to use some of the same primitive or early cues that Dr. Sussman mentioned, but if they can’t then focus their attention on the target and disregard the masker, it’s not going to show up; functionally, they’re not going to be able to do the task as well.

And finally, this is an area where we talk with colleagues who know a lot more about these issues: the role that general cognitive processing plays — things like working memory and other executive functions — and how those interplay. Is this an auditory-specific developmental process, or is this just general cognitive maturation? We don’t know the answer to that.

So I’m going to stop for a second and give you a bit of a model of how we like to think about the process. This is sort of my theoretical framework — for Dr. Chatterjee over here. Whenever I give a talk — Monita is one of my colleagues — if there’s not a hypothesis or a really clear theoretical framework, we have to stop and do that. So just an important note for those of you on the RMPTA: if Monita is ever going to be in your audience — or any of us, really — it’s a good thing to do.

So here’s a really simplified model of how we like to think about development. As Barb mentioned, we have these sources in the environment and they all sum before they arrive at the ear. In this case we’ve got the teacher, we have the assistant and the student, and we have an air conditioner — all kinds of different sounds — and they arrive at the ear of this developing child. One way we like to think about this early stage is: what’s the fidelity with which that information is represented in the sensory system? I’ve represented it here like a spectrogram, but that’s really what we’re getting at: is the timing, frequency, and intensity information sufficient to even do the task? In the case of typically developing infants and children, the answer is yes. By all accounts, the development of the peripheral auditory system is precocious. I wrote here that by at least six months of age we think this peripheral representation is mature; I think more recent data from Carolina Abdala’s lab would indicate probably three months. So certainly for the data I’m showing, the issue for a typically developing child isn’t their peripheral auditory system. It’s sending the brain a really nice representation — maybe a bit fuzzy at times in young infants, but the representation of sound is thought to be more or less adult-like very early in life.

So the brain gets sent this information, and the question is what the brain does with it. We want to know whether it can reconstruct the auditory scene — whether it can segregate each of these sounds that arrived at the ear into three separate objects — and, if so, whether it can selectively attend to one while disregarding the others. Those are the mechanisms that we spend most of our time trying, slowly, to parse apart. And it’s tricky in kids, of course, because often we don’t know which of these processes we’re looking at; it’s really hard to get an infant to tell us how many streams of sound they heard or what they’re attending to. But there are some methods we think we can employ that are getting us closer to parsing the specific mechanisms apart. Basically, the punch line here is that the ability to use the information the brain is sent from the sensory auditory system takes years of experience and neural maturation. We’re talking not until adolescence.

So one of my favorite things to talk about is this: we know that children are very poor in general at recognizing speech when other people are talking. So for a long time in my lab, we wanted to look at what happens if we introduce a cue that we know adults rely on to segregate a sound, whether that’s space, or common onset time, or whatever that acoustic cue might be. At first we were interested in: okay, they’re really poor, so can we help them? And now a lot of our studies really use these segregation experiments as a way to understand the fundamental mechanisms that are driving children’s performance. One of the initial studies we did in this area was done in collaboration with Lauren Calandruccio and Emily Buss, and we wanted to know whether children take advantage of differences in target and masker sex. It’s been known for many years that a number of vocal characteristics differ between men and women, and that seems to be a really powerful segregation cue. So if you measure adults’ speech-on-speech recognition and the target and masker are the same sex — it’s matched, you might have two men — that’s much harder than if the background talkers are a different sex, say a male target and a female background. You tend to get what we call a huge release from masking, and it’s a very robust effect. So we wondered whether children take advantage of that information as well. The data shown here are again adaptive thresholds — I didn’t put the arrows on from this point on — but lower is better. This shows data in a two male-talker masker, so two men talking in the background. The blue circles are matched, meaning the target was also male; everybody’s male. You can see that those blue dots are higher than the red triangles, where we flipped it and made the target a female. We have age in years on the x-axis, and you can see that kids performed more poorly than adults, universally, in both conditions — we’ve known that for a long time. But they do benefit from this sex mismatch. And just to prove to ourselves that this wasn’t due to some unique stimulus combination, we repeated the experiment with the same target stimuli but used a two female-talker masker, and what you can see is that the effect flips.

So this is evidence — and these are really big effects if you look at the difference between the two sets of symbols — that children benefit from target-masker sex differences, as other people have observed for adults. The task is a four-alternative forced-choice picture-pointing response, where even a 3 dB effect is, I think, a pretty meaningful effect, and we’re seeing effects on the order of 10 to 15 dB. So it’s a whopping effect, really. And that’s encouraging: they use some of the same information that adults use to segregate sounds.
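
Just to spell out the quantity being described (standard usage, not a formula given in the talk), the release from masking here is the difference between the sex-matched and sex-mismatched thresholds:

$$\text{masking release (dB)} = \mathrm{SRT}_{\text{sex-matched}} - \mathrm{SRT}_{\text{sex-mismatched}}$$

Because lower thresholds are better in these adaptive tracks, a 10 to 15 dB release means listeners tolerated a 10 to 15 dB less favorable signal-to-noise ratio when the target and masker differed in sex.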

So what about babies? I told you that they’re really poor. This is really promising: if we knew that this cue benefited infants, it might have some implications for what types of environments we want to structure. Well, it was very surprising to us. We did a study that we’re just writing up now; we started it in 2013, and we’re on something like our seventh experiment to convince ourselves that this is actually how infants behave. But it is. In this particular plot I’ve shown group data for infants and adults — the infants in the left panel, the adults in the right panel. We measured a speech detection threshold, so this is just: can they detect the sound or not? Lower, again, is better. The masker is a two female-talker masker, and the red would be a female target talker. And I have the legend backwards — it should read adults and infants, which I think is a bad error, so replace infants with adults; the adults were better. One of the things you can see is that the infants performed the same whether it was a male or a female target talker, and that was true for the individual data as well. And in the next one — yeah, I did the same thing, so reverse it — we see it flips the other way around.

These are really hot-off-the-press data — these are Mary Flaherty’s data. We did the study with male and female target mismatches, and one of the things we wanted to do next was get at this in a little more fine detail: what part of male and female speech were children and adults relying on? So we manipulated fundamental frequency. We took a male talker, and he produced both the target and the masker speech. We tested children with the original speech — in this case the target and masker are the same talker, just like what Barb showed, which is really hard. And I’ll play that. This is going to be a disyllabic word in a two-talker masker, all the same talker. I’ll warn you it’s not one of the four pictures shown here, so see if you can hear what the target is. It’s really hard, though, and it’s hard for adults. Then we took the same target stimuli and shifted them with a nice fancy algorithm. I believe this is a nine-semitone shift, so it’s a large shift. See if you can hear what the target word is now. The masker stays the same, but we move the fundamental frequency of the target talker higher in frequency.
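
As a worked example of the scale of that manipulation: a shift of n semitones multiplies the fundamental frequency (F0) by 2 raised to the power n/12. The 100 Hz starting value below is an illustrative figure for a male talker, not a value reported in the talk.

```python
def shift_semitones(f0_hz: float, semitones: float) -> float:
    """Return the F0 produced by shifting f0_hz by the given number of semitones."""
    return f0_hz * 2 ** (semitones / 12.0)

original_f0 = 100.0                       # Hz; illustrative male-talker F0, not from the talk
shifted_f0 = shift_semitones(original_f0, 9)
print(f"9-semitone shift: {original_f0:.0f} Hz -> {shifted_f0:.0f} Hz "
      f"(ratio {shifted_f0 / original_f0:.2f})")   # ratio of about 1.68
```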

What we’ve shown here are speech recognition thresholds as a function of age, and if you look at the difference between the original condition — shown in blue — and the shifted condition in red, I think the really interesting finding is that young children, really children under 10 years of age, don’t benefit from a nine-semitone separation. That was really not what we expected. In fact, this was a precursor study to a study we want to do in children with hearing loss, because we thought that if we introduced an F0 separation we could run it through an FM system, and that would be really helpful in natural environments. But of course, if their typically developing peers don’t benefit from it, it’s unlikely that it would be an effective strategy for children with hearing loss. Again, these are from the same talker, so we tried to control everything but F0, and there’s lots of work to be done. But I think this is a really interesting initial result.

One thing I’ll point out is that I’m not showing these data here, but Mary is also collecting data on children with mild-to-moderate or moderately severe hearing loss who wear hearing aids. These children can represent this fundamental frequency difference — it’s a really hit-you-over-the-head difference, and it’s a low-frequency cue. We’re not sure whether it takes them longer to learn to use this cue, but certainly the older children with hearing loss, in about the 13- to 14-year-old age range, do seem to benefit, so that’s really encouraging. But we’re just starting that study. Did I represent that correctly, Mary? Okay.

Our working hypothesis

So our working hypothesis, then — and this is where we’re going to turn to listening experience. We have this peripheral encoding, which looks to be mature very early on. We’re seeing these differences between infants and school-age children, and we’re starting to get a picture of what types of acoustic information they rely on and what they don’t. Our general working hypothesis — which is subject to change if we have data indicating otherwise — is that the ability to reconstruct the auditory scene, to segregate and selectively attend to sounds, requires extensive experience with sound.

And that brings us to the last few slides of my talk. A fairly large portion of the work in my lab — as I mentioned, I’m an audiologist by training, and very proud to have that clinical connection — and a number of our studies in recent years have really looked at how hearing loss, for example, might impact the development of these abilities. We’ve also started looking at questions related to non-native listeners. Many of the reasons are practical: I want to know if there are ways I can help children with hearing loss, and whether we can extend this to the clinic. But a lot of them have to do with this idea of listening experience. As Karen and Barb have mentioned, it’s tricky to think about sound segregation — these scene analysis problems — when you have a listener with an impaired peripheral representation of sound, because they do need sufficient audibility and sufficient resolution. If you don’t have that building block, you can’t do any of these other processes. So we’re looking at effects of listening experience in children, including children with hearing loss, and one of our ways to look at that question of experience more broadly is to start to look at children who have really obvious differences in their experience with sound. That’s been our approach in our lab.

So when we think about sound experience, we think about age, and I talked about those studies. We think about infants. We’re starting to test preschoolers in the lab, which has been really fun — and for those of you who work as pediatric audiologists, you know that the period between VRA and when you can do play audiometry is sort of a black hole for us. But it also happens to be a period of rapid speech and language development, and so if we can start accessing that time period we think that’s going to provide a lot of information. The other areas we’re looking at are: what happens to your experience with sound if you have hearing loss, if you have a cochlear implant or wear hearing aids? And what happens if you are not a native English speaker, or you’re bilingual and have spoken both languages since birth?

I’m not going to talk about our work on bilingual speech-on-speech recognition — I highly encourage you to talk with Lauren Calandruccio about those issues — but I think this is a really interesting way to look at these theoretical questions. It also recognizes the fact that over fifteen percent of the children we see in our clinics are not native English speakers, at least when they enter school. So I think this is a really timely topic that allows us to look at both theoretical and clinical questions.

I’m going to talk about some of the work we’ve done with children who have hearing loss. And even though I have a picture of a child with a cochlear implant, most of the work we’ve done has focused on children who are hard of hearing and who wear hearing aids in both ears.

So this is a study that we did in 2013, and it has probably had the biggest impact on how we think about some of these studies. We took a paradigm we had been using for typically developing children with normal hearing thresholds — a four-alternative forced-choice task — and we measured performance in noise and in speech, so nothing really groundbreaking in terms of method. But we looked at children with sensorineural hearing loss who wore bilateral hearing aids, and we measured their SRTs, or speech reception thresholds, in these two backgrounds. You can see that the children with hearing loss are divided into two age groups. That was our intent for the children with normal hearing as well, but the children with normal hearing were so good on the task — we now adapt signal-to-noise ratio in our adaptive studies because it’s so hard to find the right levels if you adapt one or the other — and we didn’t see an age effect for this particular population. Now, one huge caveat with this population: these are kids in North Carolina who were born prior to the implementation of newborn hearing screening. We’re basically repeating this study with the new generation of kids who were identified early and mostly received early intervention services, so I want to make sure I’m clear about that. We are seeing some of the same effects, but maybe not to the same degree, and I think these data are a little bit alarming in some ways. But these are children, many of them late-identified — I think the average age of identification was over two years.

So what we show here, first, is performance in speech-shaped noise. If you compare the children with normal hearing and the children with hearing loss, the children with normal hearing performed about three and a half dB better — they required a less advantageous signal-to-noise ratio relative to their peers with hearing loss. And that is not at all surprising. I should point out that the kids with hearing loss were wearing their hearing aids, we verified that their hearing aids were fitted appropriately using targets derived from the DSL prescriptive method, and these were regular wearers of hearing aids.

These data are in line with what many other labs have observed. They may partly reflect a peripheral representation problem, and they may reflect other, central factors as well. But these are data people have known about and discussed in the literature for probably 30 years, maybe more. What was really surprising to us is that we saw the same effect in the two-talker masker, but there the effect was just dramatic. We had kids who were mainstreamed in school who would come into the booth, and many of them just could not do the task. It is really hard, and the magnitude of the effect — the average effect — was 8 dB, which is a lot.

And Nicole Corbin, who I mentioned before, joined the lab around the time we started this study, and it was even emotional sometimes to see these kids and how much they were struggling. We knew that they would probably perform worse, but the magnitude was pretty striking.

Follow-up and future work

So hearing loss, then, affects sound experience, and this is sort of the new part of my talk. There are data in the literature, and have been for years, suggesting that if you have hearing loss you have an impoverished representation of sound. But hearing loss also affects the types of input you get. And luckily for me, my colleagues at Boys Town, led by Mary Pat Moeller, published this supplement in Ear and Hearing — if you haven’t read it, it’s must reading if you’re a pediatric audiologist. They actually showed that this is in fact true, and that it has impacts on language development, and I think that’s really going to allow us to think about these questions a little bit differently. So the idea is that when you have hearing loss, you may receive degraded or impoverished speech. But there are also differences in the quantity of speech you get and in overhearing, and the fact that if you’re wearing a hearing aid without an FM system, there is a limited radius within which you’re going to hear target speech really well. So the amount of input, we think, differs too, or at least it has for those children.

These are data also from the supplement I was talking about, but there are a number of studies indicating that for children who wear hearing aids, half of them wear them four hours or less a day. That’s really shocking, but luckily it’s something we can do something about. I think some of this comes from data indicating that parents have a different perception of what a mild hearing loss means relative to a moderate one. But not only is there this variability in how often children wear their hearing aids; it is related — even when you parse apart degree of hearing loss and age — to language outcomes. And I think that’s what this supplement has done: it’s a really large study that actually addresses some of these issues.

The second factor is that more than half of children’s hearing aids aren’t fitted optimally, even for audibility. We can do something about that as well, and I think we will be able to, now that we have some real tangible data to link to outcomes.

Another thing I have included in my talk is a follow-up study to the 2013 study, where we sent parents a survey about their perceived impressions of their children’s communication outcomes. It was a modified version of the APHAB, for those of you who are familiar with the adult APHAB. So it basically had parents describe the types of problems, and what percent of the time their children had problems, in different noisy environments. From the data that I showed you, we had a nice yield rate. Audiometric thresholds were related to performance on our lab measure in steady noise — highly correlated — but they weren’t at all related to how the children did in our two-talker masker, so that’s of interest. The other thing was that the parents’ reported proportion of problems was uncorrelated with how the children did on our lab measure in noise, whereas the correlation coefficient was something like 0.85 in the two-talker masker. In some ways we’re really excited about that — maybe this is a way to get at functional communication abilities — but again, it was a small sample, and I don’t want to over-interpret those data. But I think, as a field, these are some of the ways we might need to start thinking about the problems that children are facing in background noise.

So I’m almost done here. This is my question with respect to speech perception, and it’s more of a clinical issue: we usually measure speech perception in quiet, in noise, or in babble. People often think that if you measure performance in a speech babble, that’s like measuring in speech. But in fact, once you have more than — I’m looking at the audience here — four, maybe, streams of speech, you’re really measuring performance in noise: it’s steady, the spectrum is steady, and it doesn’t reflect the kinds of higher-order processes that we think really play an important role. And if we think about the data that we and other people have started to see, I think we’re actually underestimating the problems that children with hearing loss are having in their real-life environments, because we’re measuring performance in quiet or in these steady maskers. I think it’s really important for us to get a handle on how they’re performing so that we can intervene.

The other thing that I think is really encouraging — and I didn’t put it in these slides — is that, as Karen mentioned, we might have some opportunity to do something about this. If these are acoustic cues that can help children perform and are not limited by the peripheral auditory system, maybe we can provide them either through technology or maybe through training. So I think for me there are a lot of exciting possibilities too.

Okay, and where do we go from here? These are just some of the things we’re doing in our lab, but again, there are many labs doing great work, and I think we’re starting to move in this direction. In our lab we really think that if we can isolate what’s going on in the typical system, that will allow us to then start thinking about some of these clinical problems, as well as just having that knowledge theoretically. If we didn’t do the studies of children who are typically developing, we would have no idea whether to just go ahead and start messing around with, for example, fundamental frequency in a hearing aid. You wouldn’t know: if they don’t take advantage of it, is it because of development, or is it because of their hearing loss? So that’s just my shout-out to the basic scientists and psychophysicists — we really need both groups at the table. We want to understand the influence of early auditory experience, which is like the world’s most ambitious question, but I think we’re at a position where we can start designing some rigorous studies that really start tapping into some of these influences. In our lab we’re really interested, as Karen pointed out, in developing clinical tools. I’m working on a grant right now with Lauren Calandruccio, Emily Buss, and Matt Fitzgerald — who’s here too and is going to be one of our clinical sites — to develop a clinical tool to assess English and Spanish speech perception in noise or competing speech. It’s going to be a while before we have a clinical tool, but the whole goal of this mechanism is to develop something that clinicians will actually use, that you don’t need to speak Spanish to administer — because very few audiologists in the United States speak Spanish — and that, really critically for some of the work we’re doing, will allow you to measure performance in a two-talker masker and have some clinical norms associated with it. So hopefully next year at this time — but maybe that’s ambitious, Lauren’s staring at me — probably two years from now, hopefully, we’ll have something. It’s going to be freely available and we will keep you posted.

And then finally, a really fun thing that we’re doing in our lab and are really interested in is whether we can develop practical solutions for the cocktail party effect — although for babies I don’t know what the analogous term is; you shouldn’t take your baby to a cocktail party, for sure. But if they were at a cocktail party. I have these discussions all the time with Ryan McCreery, who’s the director of our audiology program. We like to dig in and have this argument about audibility versus segregation or scene analysis. And just for the record, audibility is essential — if you don’t have audibility, you’re done, you can’t do anything. The whole goal of hearing aid fitting, at least for pediatrics, is optimizing audibility. But I think in these challenging environments we also have to start thinking about, once we get audibility, what other factors are at play and how we can help children determine that a target and a masker are different. And I think if we can do that, maybe we have some ability to improve their performance. All right. Thank you.

Questions and discussion

Audience Question:

I’m Adam Bosen and I work at Boys Town with Lori. I knew your work was cool, but I didn’t know it was this cool. The thing I’m interested in is: do you believe that some of the deficits you’re seeing in the children with hearing loss are driven solely by the amount of cumulative, quality auditory experience they’ve had? Or do you believe it’s experience during critical windows that helps to develop some of these skills?

Well, I don’t think there are any data on that, and in fact our approach is extremely crude. One of the benefits of looking at the cumulative numbers is that you can have data logging on a hearing aid — it’s quantitative. I suspect both. I think there probably has to be some cumulative amount of experience, but I bet that number differs if you’re a different age. We know from studies of children who don’t receive any intervention that there is certainly a critical period — what that is exactly, I don’t know. Given our data from children who were late-identified, I’m really interested to see what our children who were identified and received appropriately fitted hearing aids from three months of age look like. One of the things that suggests what you’re saying might be a factor is that in the original study we didn’t see an age effect in our children with hearing loss, whereas we saw that big drop-off in our children with normal hearing. So the 14-, 15-, and 16-year-olds in that study didn’t improve to adult-like levels. The problem, really, is that it’s very hard to do a study of adult controls, because we have no idea what their experience was or when they were fitted. But I am wondering if we’re going to start seeing that trajectory — and a longitudinal study, by somebody else: if somebody wanted to conduct a longitudinal study, I think that would be really important. I’m hopeful that if we see differences in the children with early input compared to this later cohort, that’s some evidence. But they’re just tough studies to do, so I don’t know the answer, but I suspect it’s both.

Audience Question:

Shae Morgan from the University of Utah. My question goes to one of your conclusions, which was that segregation relies on a long period of experience, and you showed what happens when you manipulate the pitch. Do you think that is specific to pitch, or do you think it will generalize to other cues that are in the rich speech signal?

Terrific question. For probably two years we did these studies: can children benefit from X? Can children benefit from Y? It’s clear that children, even infants, can benefit from some cues, and they don’t seem to benefit from others. So part of the game for us has been to try to get a picture of what they benefit from and what they don’t. School-aged children seem to achieve a really nice spatial release from masking. Onsets and offsets seem to be really salient for children. But for these other, especially frequency-based cues, there are some data in the basic psychophysical literature suggesting that children tend to integrate across the spectrum. Whereas if you set up a psychophysical study with an adult doing just tone detection, if they know the tone they’re supposed to detect, they are about 7 dB better than if that tone — the same tone — comes at an unexpected time. So adults seem to form these filters of attention; they exclude other information. And infants certainly don’t show that — for them it doesn’t seem to matter whether they know it’s coming or not. They don’t seem to have the same expectancies in frequency and the same selectivity in listening. So I guess my inclination is there might be something special about some of the spectral information, but I don’t know for sure. There are certainly some cues they are very effective at using, though.

Audience Question:

Gabby Merchant. I had two things that I was curious about. One is the maturation with the two-talker masker versus the modulated noise masker. All of these are auditory-only experiments, and obviously children are not usually in auditory-only environments. Do we have any idea of the role of the development of the visual system and the interplay between the audio and visual effects?

From what I’ve read, the data are surprisingly sparse. There’s some indication that infants can tell when visual and auditory information are not synchronous. But then there are other data from school-age children — especially children who haven’t yet learned to read — suggesting that they don’t benefit; Fred Wightman has a really interesting study where young children don’t show an AV benefit and adults do. So I think the picture is still a little bit fuzzy, but it wouldn’t surprise me at all to see effects of vision. Children tend to be auditory-dominant when they’re babies, and that tends to even out over time. But no, I don’t know, and I think there are only a few researchers doing AV work with infants in terms of speech and hearing perception. I would suspect that it couldn’t hurt them, for sure, and it probably would help them.

Audience Question:

And then the other thing was about the segregation piece at the end. If we can give them the audibility, they still may have the issue of segregation. Do you think that has any implications for hearing aid fittings? There was some idea about not using directionality and things like that because of incidental language learning — is there something out of that we should think about?

Yeah, I think it’s wide open. Some of the things we’re thinking about now include: what’s the importance of knowing what the background is? If you have an FM-only setting and you deactivate the environmental mic, maybe you’re not learning important features about the background that help you segregate it out. So I think it’s wide open, and there are so few people — if anybody’s interested in auditory development and these kinds of questions, we need people; it’s a very small field. The types of questions we’re really interested in, and something we could actually implement in a clinic, would be, for example: we think that we can route the signal through an FM system and deliver it to a child’s ear. One of the experiments we have planned for next year in my lab is to invert the phase of a target signal in one ear and keep its original phase in the other ear. In a two-talker masker, for adults with normal hearing — and we think adults with hearing loss — that provides a nice spatial cue, and the signal tends to pop out a little bit more. So those are the kinds of studies, and that’s how we’ve been thinking about it, really kind of simplified. With this F0 idea, what we really thought is that we could take the teacher’s voice, shift it in real time, and run it through the hearing aid — I mean, they might get some incidental secondary input, but we thought that would work really well. And we’re still interested in it, but it may be that there’s a certain age at which that would actually be effective. I don’t know that I really answered your question, but people ask me a lot at clinical meetings, well, what should we do with FM, because children are being fitted with FM systems very early. My take on that is: until we have good data — we know that impoverished hearing impacts language and speech development, we know that’s true, and we know that an FM can really counteract some of those effects — so until we know and have good data that we’re depriving them of these other cues — because I think most of those kids don’t wear the FM all the time, or they have the environmental mics on. But I think those are really empirical questions that we could actually address, and they’re really interesting.
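
As a rough illustration of the interaural manipulation described above — a sketch, not the lab’s actual stimulus-generation code — here the target is phase-inverted in one ear while the two-talker masker is identical in both ears. The file names are hypothetical, levels are not normalized, and the soundfile package is assumed for WAV input and output.

```python
import numpy as np
import soundfile as sf   # assumed available; any WAV I/O library would do

target, fs = sf.read("target_word.wav")        # hypothetical mono recordings
masker, _ = sf.read("two_talker_masker.wav")   # assumed to share the same sample rate
n = min(len(target), len(masker))
target, masker = target[:n], masker[:n]

left = masker + target    # target in its original phase
right = masker - target   # target inverted by 180 degrees in this ear
sf.write("antiphasic_target.wav", np.column_stack([left, right]), fs)
```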

Audience Question:

Hi, my name is Sarah Kennett. I’m a PhD candidate at the University of Arkansas at Little Rock, and I’m also a pediatric audiologist at Arkansas Children’s Hospital, so I’d like to build on the language acquisition and window-of-opportunity question, because I do see those children and I’m encouraged by your work in sensorineural hearing loss. This question is about conductive hearing loss, specifically the transient conductive hearing loss that we see with otitis media. Being part clinician and part researcher, I understand the difficulty of designing a project like that, but I would like to ask your thoughts on delays in this maturation in this particular population, when we struggle with chronic otitis media during that period of language acquisition.

So my former colleague at UNC, since retired, Joe Hall, was really interested in this question. He studied the effects of fluctuating hearing loss due to otitis media, and he was starting to get into issues of more complex perception. So I think those are really interesting questions. Some of the data that I’m aware of, even from the time I was an audiology student to now, would indicate more of a watch-and-wait approach — that these children’s language skills, while delayed initially, will often catch up. But as for how they do on these kinds of higher-order tasks, I’m not aware of anyone who is looking at that actively. I think those are really interesting questions, and it’s not something that we do, but I guess my prediction would be that it certainly has the potential to impact performance, depending on how much time is involved and how variable the input is.

Audience Question:

Samuel Atcherson, University of Arkansas for Medical Sciences. I’ve been thinking a lot about — you talked about the FM, and that we don’t have enough empirical data. That has me wondering whether we’ve even thought about, or factored in, communication proximity. When we interact with infants, they are very close to us, and that almost puts them in a situation where they have direct access to the signal of interest. So I wonder if your data might change a little bit if that were worked into the signal-to-noise variables. What do you think about that? We’re seeing relatively poor performance with infants, but that’s keeping everything controlled across the age groups. Would it look better for infants if you considered the proximity? In other words, the signal would be a little bit higher, but I don’t know how you would capture that.

So do you mean — are you including FM in this mix? You’re just saying that for most listening environments for an infant — and I know Ryan McCreery and Mark Brennan have sort of the pre-microphone versions of different distances for children, right — for babies, because often babies are being held really close. So are you asking me a question about throwing an FM into that mix?

Okay, I see what you mean. I think that’s actually a really interesting observation. There’s something about intensity with infants and young children — there are some conditions where it’s almost like they’re holding out for this hit-you-over-the-head SNR. And maybe it’s because that’s what they experience most of the time. The trick, I think, is how you would design an experiment to look at that — something like a discrimination task, or — so you’re saying you might even get a different answer at different signal-to-noise ratios. Yeah, we haven’t done those studies because of the number of trials you can get from an infant. But I agree. I think they might be primed for a certain, pretty advantageous signal-to-noise ratio when they’re an infant, and then they start getting used to really poor classroom acoustics when they get older. But yeah, all really good questions. I haven’t thought about that too much.

Lori Leibold
Boys Town National Research Hospital

Presented at the 26th Annual Research Symposium at the ASHA Convention (November 2016).
The Research Symposium is hosted by the American Speech-Language-Hearing Association, and is supported in part by grant R13DC003383 from the National Institute on Deafness and Other Communication Disorders (NIDCD) of the National Institutes of Health (NIH).
Copyrighted Material. Reproduced by the American Speech-Language-Hearing Association in the Clinical Research Education Library with permission from the author or presenter.