I’m Karen Helfer from the University of Massachusetts, and today I’m going to be talking about aging and how aging affects speech understanding. The two talks we had this morning did a great job of setting the table, talking about attention and how neurological and electrophysiological changes affect attending to speech or to complex sounds. The next two talks — my talk and Lori Leibold’s talk — are going to be more clinical in nature. We’re both audiologists by training, so we bring that to the table, and maybe a little bit of a different direction to approaching the same problem. Just my disclosure slide; nothing terribly important here, so I’m going to go on.
To give you an overview of what I’m going to talk about: we’re going to talk about why complex listening situations are so problematic as we get older. I’m going to give you a few examples of recent research from our lab. Then I’m going to spend a little bit of time on something we’re particularly interested in: when these age-related problems seem to begin. And I’m going to end with a few potential clinical applications.
Defining a complex listening environment
So what is what I’m calling a complex listening environment? For the purposes of this talk, I conceptualize it as trying to understand one voice or message in the presence of one or more other understandable voices or messages. You’ll hear me use the term target to refer to the message that you want to hear, and maskers for the message or messages that you don’t want to hear.
Those of you who work in clinical audiology, what is the primary complaint? The primary complaint is understanding speech in noisy situations. It’s something we hear time and time again, and it isn’t easily solvable because it’s not something that hearing aids do a terribly good job with. And if you break down these situations, if you pay attention to what people are saying, in many of these noisy situations the noise is at least in part understandable competing messages. For example: restaurants or parties or small group conversations, or trying to understand with the television on in the background, or trying to understand on your cell phone when other people are talking in the background. These are all very common complaints we hear from our older patients, and so that’s the type of thing I’m particularly interested in studying.
The question is why? Why do older adults have difficulty in these situations as they get older? And I will tell you right off the bat that this is a really complex problem. There are a lot of potential reasons why this could happen, and that’s because the ability to understand one voice in the presence of other voices takes a lot of mental power. A lot of different things need to happen for that to work successfully, and it’s amazing that we can all do it automatically, or for the most part automatically. But age-related changes at any of these different levels can affect the ability to successfully understand speech in competing-speech situations. I’m going to give you a few examples here; there are lots of references, by the way, that I’ve cited. There is a handout that should be online, and while it doesn’t list all the references, it lists most of the most current ones. So if you’re interested in something, don’t feel like you have to write it down now.
Certainly, if you look at speech perception and aging, the most important factor is audibility. Without audibility we can’t even start thinking about other things that might affect the ability to understand speech. We know that as we get older, almost everybody loses some peripheral hearing, and that peripheral hearing loss acts to attenuate, and to some extent distort, the sounds coming in. That attenuation leads to certain speech sounds not being audible, and that is a very, very important factor when we talk about age-related changes in speech understanding. It’s well beyond the scope of this talk to discuss why that occurs — certainly there’s a loss of inner hair cells and outer hair cells and spiral ganglion cells, and metabolic changes — but we all know it’s very common that most people lose some hearing as they get older. And certainly that does contribute to problems with competing speech perception.
But there are a lot of other things that happen as we get older that might or might not be related to that peripheral hearing loss. One is problems segregating sound sources. We heard talks this morning about segregation and what it takes to segregate, and certainly there is research evidence showing that older adults have more difficulty segregating sounds than younger adults. This could be due at least in part to reduced sensitivity to temporal fine structure cues, because that’s one way we segregate sounds, but it could be other things also. Another thing that seems to happen as we get older — and here it’s difficult to separate the effects of age versus age-related hearing loss — is a lesser ability to take advantage of masker fluctuations.
So when you have one or two competing speech sounds like this, what you have are these instances of lower masker energy. If you have a normal auditory system, you can take advantage of these instances of lower masker energy to glimpse portions of the target signal. And it seems that with age, but certainly with age-related hearing loss, the ability to take advantage of these masker fluctuations decreases.
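To make the glimpsing idea concrete, here is a minimal sketch (my illustration, not anything from the talk) of how one might flag the low-energy dips in a masker where glimpses of the target could occur; the frame length and the 6 dB dip criterion are arbitrary choices for the example:

```python
import numpy as np

def glimpse_windows(masker, fs, frame_ms=20.0, dip_db=6.0):
    """Flag short-time frames where the masker's level falls at least
    `dip_db` below its median level -- candidate 'glimpse' windows in
    which a listener could catch portions of the target."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(masker) // frame_len
    frames = masker[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    level_db = 20 * np.log10(rms)
    return level_db < (np.median(level_db) - dip_db)

# Demo with 2 s of synthetic noise, amplitude-modulated at 4 Hz to mimic
# the energy fluctuations of a speech masker.
fs = 16000
t = np.arange(2 * fs) / fs
masker = np.random.randn(len(t)) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
dips = glimpse_windows(masker, fs)
print(f"{dips.mean():.0%} of frames offer a glimpse")
```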
A couple of other things seem to happen that at least on the surface should affect the ability to understand one message in the presence of another. One is the ability to tell apart different voices. There have been a few studies — not a lot of work here, but a few studies — suggesting that as people get older they have a harder time discriminating between voices and/or identifying a voice. And if you think about it, if you’re trying to understand one person in a crowded room, the ability to pull out that voice and identify it and know who that talker is, is really important. One reason is that we know visual speech cues — lip-reading cues — are really important in competing speech situations. The ability to rapidly find the person you want to pay attention to and look at them plays a large role in your ability to successfully cope in that kind of situation.
Another thing that seems to happen with aging — again an area where the line between aging and age-related hearing loss is a little bit hazy — is reduced spatial release from masking. Spatial release from masking refers to the ability to take advantage of the fact that the target is in one location and the maskers are in other locations. This also seems to be reduced in older adults.
So Barb earlier today talked about hidden hearing loss — the whole idea that at least some people with normal hearing probably do have some underlying synaptopathy that contributes to problems understanding speech in noise. I think we’re getting there in identifying it in humans; it’s certainly more accepted that it happens in animals. But we see lots of adults in the clinic — younger and older — who report problems understanding speech in noise, and when we test them we find normal results. So this could be one reason, and it certainly can contribute to why older adults have difficulty, because not every older adult has a lot of hearing loss. Excuse me — they aren’t all normal hearing, let’s put it that way. Barb’s vernacular.
So everything I’ve talked about so far can be attributed at least in part to peripheral hearing problems, to things that happen in the periphery. But we also know that there are cognitive changes that happen with aging, and those cognitive changes probably do contribute to difficulty understanding speech in complex listening situations. I’ll give you two lines of evidence that suggest this is the case. One is what happens when you test older adults and young adults with two kinds of masking speech, one meaningful and the other non-meaningful. What you’ll see is that the age difference is larger with the meaningful masker, because the older adults take a greater hit from meaningful background speech. One example is playing a masker forward normally versus time-reversed. If you time-reverse it, it’s not understandable anymore, but it has — not exactly the same acoustics, but pretty close to the same acoustics as the forward version. And what we see is that older adults have a lot more difficulty with that switch from non-meaningful to meaningful. Another example is a masker in the listener’s native language versus a masker in an unfamiliar language; again, older adults have much more difficulty with the switch to the meaningful masker. At least for the first case, backward versus forward speech, it’s hard to account for that with a peripheral explanation, because the long-term acoustics of the masker are the same — it’s got to be something besides the acoustics causing that problem.
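As a quick numerical aside (my check, not the speaker’s), you can verify that time reversal leaves a masker’s long-term magnitude spectrum untouched, which is why the backward masker is acoustically “pretty close to the same” while no longer being intelligible:

```python
import numpy as np

rng = np.random.default_rng(0)
masker = rng.standard_normal(16000)   # stand-in for 1 s of masking speech at 16 kHz
reversed_masker = masker[::-1]        # time-reversed, unintelligible version

# For a real signal, the DFT magnitudes of the forward and time-reversed
# versions are identical, so the long-term spectrum is preserved exactly.
print(np.allclose(np.abs(np.fft.rfft(masker)),
                  np.abs(np.fft.rfft(reversed_masker))))   # True
```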
Another piece of evidence here that cognition matters for speech perception by older adults is that there’s lots and lots and lots of research now that has shown that cognitive abilities correlate with speech perception in complex listening situations. This is just a very partial list. And so we know that there is a correlation between cognitive skills and speech perception, especially speech perception in complex listening situations.
Not only does cognition matter, but cognition and hearing loss interact, and they interact in a couple of interesting ways. One is that we think that if you have hearing loss, you need to spend more of your mental energy just to decode the signal. And if you are spending more of your mental energy decoding the signal, you have less left over for things like remembering, encoding, and understanding complex sentences. If you need to spend those resources just on understanding, there are fewer left for remembering. Certainly there are studies showing that in people with hearing loss, older and younger, memory is affected because they’re not able to encode that information, or it’s harder to encode. And even if older adults can reach the same level of performance as younger adults, it seems to come at a cost of effort: they need to exert more mental effort to do so. We hear this again in the clinic when we hear about fatigue — the fact that it is so fatiguing for people with hearing loss, especially older people with hearing loss, to maintain successful communication. They just get exhausted by it.
Some of this is conceptualized in something called the Ease of Language Understanding, or ELU, model by Rönnberg and colleagues. What this model says in a nutshell is that when it’s easy to listen — in quiet, for example, or if you have normal hearing — you don’t need to exert a lot of your mental resources on understanding speech. But when there’s a mismatch between what you hear and what is stored in your long-term memory, you need to exert more resources. And this mismatch can occur because of noise in the background, or it can be caused by hearing loss.
There’s another really interesting way that hearing and cognition interact; it’s kind of the reverse of this. There now seems to be a link, which most of you have probably heard of, between age-related hearing loss and dementia. Quite a bit of research is coming out suggesting that people with hearing loss have a higher incidence of dementia and also greater cognitive decline. There are quite a few theories about why this might happen. One has to do with social isolation: there does seem to be a link between social isolation and cognition, and we know there’s a link between social isolation and hearing loss, so the third piece of that triangle perhaps is the relationship among all three. But there are other possible reasons why this could be the case as well.
So it’s not all bad news — I don’t want everybody to get too depressed yet. There is some good news, and that is that certain abilities actually improve with age. Vocabulary improves with age. World knowledge improves with age. We know that older adults are especially adept at using some top-down processing, like context, to help fill in what they don’t hear. So, not all bad news. And for this enhanced ability to use top-down processing, there is actually neurophysiological evidence: we see less hemispheric asymmetry in how sound is processed in older adults, and we also see greater frontal processing. These could be compensatory resources or abilities being used to make up for what older adults aren’t hearing, or aren’t totally hearing.
Some more good news. We know that older adults can use speech reading or lip-reading cues. We know that we still get binaural benefits, so certainly two ears is better than one for most older adults. But the extent of the ability for older adults to use visual cues and the extent of binaural benefits tend to be reduced. So yes they can use them, but perhaps not as well as younger adults.
Research
So I’m going to switch gears now and talk a little bit about research from our lab. In the last few years we’ve been conducting a battery of cognitive tests in addition to whatever speech perception test we’re measuring for that study. The battery has changed a little bit from year to year and from study to study, but these are some things we often measure. We often measure working memory with a task called the SICSPAN, which you are going to see in a minute. We do a task of inhibitory ability — a Stroop task. We often do the Connections test, which is basically a trail-making task tapping processing speed and executive function. We’ve used the visual elevator task from the Test of Everyday Attention, which is a task of attention switching. And recently we’ve also used the NIH Toolbox, although there are some problems with that — as some of you may know, they’re switching their format.
The other thing we do in all of our studies is get some measure of self-perceived hearing functioning or hearing ability, because we want to see the extent to which our lab-based measures have relevance for the real world. What we like to use are selected items from the SSQ — the Speech, Spatial and Qualities of Hearing scale. It’s a questionnaire that’s really focused on complex listening situations.
So just to tell you that, perhaps not surprisingly, we invariably find connections between our cognitive measures and our speech perception measures. But interestingly, it doesn’t matter whether our cognitive measures are visual or auditory — it is not modality-specific. All of the tasks up here are visual tasks. We’ve had auditory tasks in our battery too, but it just doesn’t matter: what we measure cognitively in the visual domain connects with speech perception in the auditory domain. And interestingly, we also see a connection with high-frequency hearing loss. Even though these are visual cognitive tasks, the more high-frequency hearing loss you have, the poorer you do on these tasks — not all of them, but most of them — and that holds even when we account for age statistically. We don’t know why; there’s something going on there. And again, this is something where there are hypotheses out there, because we’re not the only ones to show this.
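For readers curious about the mechanics, “accounting for age statistically” is commonly done with a partial correlation: regress age out of both measures and correlate the residuals. Here is a sketch on fabricated numbers (the variables and coefficients are invented; this is not the lab’s data, nor necessarily their exact analysis):

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation between x and y after regressing `control` out of both."""
    Z = np.column_stack([np.ones(len(control)), control])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(7)
age = rng.uniform(20, 80, 100)
hf_loss = 0.5 * age + rng.standard_normal(100) * 8                  # fabricated dB HL
stroop = 0.3 * age + 0.2 * hf_loss + rng.standard_normal(100) * 5   # fabricated scores
print(f"r(HF loss, Stroop | age) = {partial_corr(hf_loss, stroop, age):.2f}")
```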
So, everyone has to be awake now because we’re going to do a working memory task. This is a simulation of what we do for the SICSPAN. If you were a subject in our lab you would read this: “You will be reading questions on the computer screen. Answer each question using the yes key or the no key. After each question you will see a word. At the end of each set of questions, you’ll be asked to recall these words in the order they were presented.” So instead of pressing a key, you’re just going to say yes or no.
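To give a flavor of the task’s structure, here is a bare-bones console sketch of one SICSPAN-style set; the questions and words below are invented for illustration and are not the actual test items:

```python
# Invented SICSPAN-style items: a semantic yes/no question (processing load)
# paired with a to-be-remembered word.
items = [
    ("Is a whale larger than a mouse?", "apple"),
    ("Is ice hotter than steam?", "door"),
    ("Can a canary sing?", "river"),
]

for question, word in items:
    input(f"{question} (y/n): ")           # semantic judgment
    print(f"Remember this word: {word}")   # storage item

recalled = input("Type the words in order, separated by spaces: ").split()
targets = [word for _, word in items]
correct = sum(r == t for r, t in zip(recalled, targets))
print(f"Recalled {correct} of {len(targets)} words in the correct order")
```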
So I decided to do kind of a meta-analysis of data from five different studies. In all of the studies I’m going to talk about, we’ve run younger adults, middle-aged adults, and older adults. Our middle-aged adults were 40 to 59, and our older adults were 61 or 62 and up — very few people older than 79. We did this because we wanted to pull together a bigger group of data to look at connections between high-frequency hearing loss, cognition, and a kind of meta-score on the speech perception tasks.
First of all, I’m showing you here the composite audiogram: the solid line is the middle-aged adults and the dotted line is the older adults. You can see there is a difference in high-frequency hearing loss. What you don’t see here are error bars, and I will tell you those error bars are pretty big, because especially in our older group there’s quite a range of hearing loss. We don’t exclude a lot of older adults, but we do exclude those with a high-frequency pure-tone average greater than 65 dB HL, and we exclude people who don’t have symmetric hearing thresholds. We have some other exclusionary criteria also, but we don’t exclude otherwise.
Just to tell you the results of this kind of meta-analysis: when we look at data from all of the participants — younger, middle-aged, and older — what we see is that both high-frequency hearing loss and cognitive skills matter. Really not surprising. When you take a group of adults spanning the adult age range, it’s not surprising that high-frequency hearing loss accounts for variance in speech perception. And the total of the two — the high-frequency pure-tone average and the cognitive skills — accounted for about forty-five percent of the variance.
This is not after taking out age; age was included, but age didn’t matter. It was a stepwise regression, and high-frequency hearing loss went into the equation; after that, age didn’t contribute at all. High-frequency hearing loss alone accounted for twenty-six percent of the variance. Interestingly, when we look at just the group of older adults, only the cognitive variables mattered. Once you reach a certain age or a certain amount of hearing loss, the amount of hearing loss didn’t really matter — it was just the cognitive variables, and they accounted for about forty-one percent of the variance.
Interestingly, among our middle-aged adults, only high-frequency hearing loss accounted for variance, and not much variance — only eight percent — while cognition did not matter. And among our younger adults, the only thing that mattered was the score on the SICSPAN, that working memory task, which surprisingly a lot of our younger participants have a lot of problems with. Again, it only accounted for a modest amount of the variance.
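The “percent of variance accounted for” in these analyses is the R² from a linear regression. Here is a sketch of the mechanics on simulated data (all numbers and coefficients below are fabricated purely for illustration, not results from these studies):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120                                  # hypothetical participants
hf_pta = rng.uniform(5, 65, n)           # high-frequency pure-tone average (dB HL)
cognition = rng.standard_normal(n)       # composite cognitive z-score
# Simulated speech-in-speech score in which both predictors matter.
speech = 90 - 0.4 * hf_pta + 5 * cognition + 6 * rng.standard_normal(n)

def r_squared(X, y):
    """Proportion of variance in y accounted for by a linear fit on X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"HF-PTA alone:       R^2 = {r_squared(hf_pta[:, None], speech):.2f}")
print(f"HF-PTA + cognition: R^2 = "
      f"{r_squared(np.column_stack([hf_pta, cognition]), speech):.2f}")
```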
So here’s the take-home message from all of that. The amount of high-frequency hearing loss certainly is very important: if you’re looking at an individual adult, the amount of high-frequency hearing loss they have seems to be very important. But if you’re looking within the group of older adults, once you get above a certain age or a certain amount of hearing loss, further hearing loss doesn’t seem to account for much. And selected cognitive skills are also important for explaining speech perception. I didn’t mention this, but all of our speech perception tasks are in the presence of competing speech — understandable competing speech. Cognitive skills are important, but especially for older adults: the older you get, it seems, the more important these cognitive skills are.
So I’m going to talk about just a few individual, more recent studies from our lab. One of them is on the effect of repetition. We all know that when you don’t understand something, you ask someone to repeat it; repetition is the most commonly used repair strategy. There’s a psychological literature here on something called repetition priming — usually studied with younger adults — where repeated encounters with an item result in faster and more efficient processing of that item. So we know that when things are repeated, we are able to process them faster and more efficiently, whether we’re younger or older. But in the psychological research, when it is done in noise, it’s steady-state noise — not noise that is going to cause any kind of cognitive interference. So our question was: how effective is repetition when there is understandable background speech? You might think that repetition would be particularly effective in the presence of a fluctuating masker, because with a repetition you get another chance to catch those glimpses of the target. However, when you’re listening in the presence of competing speech, there’s also a cognitive load, and it makes sense that you would have to use some working memory to take advantage of repetition. So perhaps repetition wouldn’t be as effective in a competing-speech situation, because you have to use cognitive resources just to understand the speech.
We were also interested in how age affects the use of repetition. Repetition, if you look at the psychological literature, is considered implicit memory, and implicit memory in general seems to be pretty resistant to aging. However, there is research evidence suggesting that repetition might be affected by aging. There are a couple of studies by Getzmann et al. that used event-related potentials, and what they found was that even though the benefit from repetition was about equivalent in older and younger adults, the electrophysiological basis of that benefit differed between the groups. And we’re interested in this in our lab because we study priming: we’ve studied the effect of giving a cue — a pre-cue — on the ability to understand speech in different types of maskers. And we do have data suggesting that older adults don’t use this pre-cueing as effectively as younger adults.
So in this first study we looked at the effect of immediate repetition in the presence of three types of maskers: steady-state speech-shaped noise, modulated noise, and a single competing speech message. Participants would hear a sentence; sometimes that sentence was immediately repeated, and when it was, the masker was also immediately repeated. Now, this isn’t like what happens in the real world. In the real world, when someone repeats something, they say it in a different manner — they might say it louder, they might say it slower — and the background usually isn’t the same either. One real-world analogy might be listening to a voicemail message: you have something recorded on voicemail and you play it over and over again, and that message is exactly the same, and the background is also exactly the same.
We did use spatial separation here: the target speech message was played from a front loudspeaker, and the masker was played in a way that produced perceived spatial separation using the precedence effect. It was played from both the front and the side, with a four-millisecond delay favoring the side, and that pulls the percept to the side.

These are the results. What you’re looking at here are repetition benefits — second attempt minus first attempt, so how well you did the second time you heard it minus how well you did the first time. These are the three different types of maskers: the modulated noise, the steady-state noise, and the speech masker. There are two important things to notice. One is that the greatest repetition benefit occurred in the presence of the modulated noise masker, and the least with competing speech. But the other, maybe more important, thing is that we saw virtually no difference among our three subject groups. Older adults are the squares, middle-aged are the X’s, and younger are the diamonds — or triangles, I’m sorry. You can see there’s a lot of overlap, and there were no statistically significant differences. This suggests that, with this paradigm, older adults are just as adept at using repetition as younger adults. We did a follow-up study to this, but I’m not going to talk about it yet — I’m going to talk about another study first, because its results affected how we ran the follow-up.
A lot of competing speech research uses stimuli that are very tightly aligned temporally: sentences that all start and stop at the same time and have the same syntactic structure. Examples are the CRM sentences, like “Ready Ringo go to blue star now.” And we use sentences called the TVM sentences, which are the same way: “Theo discussed the pin in the light today.” They start and they stop at the same time. They might not be exactly time-aligned, but they’re pretty close.
But in the real world, how often do you have to listen to a sentence in the presence of other sentences that are tightly aligned with it? Probably never, or at least not too often. This might be especially important for older adults, because we know that older adults, when it comes to competing speech perception, seem to have a more difficult time when the target and the masker are similar to one another — and when you think about it, this temporal alignment is a form of similarity. So previous research, including our own, that used these tightly aligned stimuli might have actually overestimated the problems older adults experience in competing speech situations.
We were also interested in looking at this in realistic spatial situations, so we used two different spatial configurations: one where there were two masking sentences on one side, and one where there was one masking sentence on each side.
When there are two masking sentences on one side, you can use your ear away from that side, which has a better signal-to-noise ratio. That’s one reason why that situation usually is not very difficult: if you have two masking talkers on one side, your other ear has a pretty good signal-to-noise ratio, and you can use that ear. But when you have maskers on both sides, there are no long-term signal-to-noise ratio advantages. We still get a spatial separation advantage, though, and we think that’s because people can integrate the very rapid glimpses that occur between the two ears over time. However, there is a suggestion that aging brings problems integrating these rapidly changing glimpses from ear to ear. So we decided to look at this, and also at the effect of these spatial situation differences.
These were the questions we asked: How effectively can older adults exploit syntactic difference cues between target and masker? How does this interact with target-masker spatial configuration? And how do hearing loss and selected cognitive abilities affect performance?
We used a new set of sentences we developed. We’ve had the TVM sentences for years, which all start with Theo, Victor, or Michael. We developed a new set that are a little longer and have a little more of a memory load. This word here, which we call the cue name, is either Theo, Victor, or Michael. This is always a color, this is always an adjective, and these are nouns. All the sentences have 13 syllables, so they’re all the same length, although obviously, as you’ll see on the next slide, there are differences in actual duration because of talker differences.
This is how we manipulated temporal alignment. What you see here is an aligned trial: here’s the target sentence, and here are the two masking sentences. This one says, “Michael found the white kitchen and the full network here.” This one is, “Victor found the gray bone and the final decade here.” And this one is, “Theo found the black entrance and the loyal act here.” You can see they all start at the same time. They’re not exactly temporally aligned, because there are differences in word length and talker differences. More importantly, here is how we created the non-aligned stimuli: each masking sentence was a discrete sentence, but the computer program took a random point within it, started playback there, and appended the rest of the sentence to the end. That’s how we created what we call the looped, or non-aligned, sentences.
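The looping procedure described above amounts to a random circular rotation of each masker waveform. Here is a sketch of the idea, assuming the sentences are stored as NumPy arrays (my reconstruction, not the lab’s actual program):

```python
import numpy as np

def loop_masker(masker, rng):
    """Create a 'non-aligned' masker: start playback at a random point
    within the sentence and append the beginning to the end, i.e., a
    random circular rotation of the waveform."""
    start = int(rng.integers(1, len(masker)))   # random cut point (never 0)
    return np.concatenate([masker[start:], masker[:start]])

rng = np.random.default_rng(42)
sentence = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # stand-in waveform
looped = loop_masker(sentence, rng)
assert len(looped) == len(sentence)   # same duration, shifted content
```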
I apologize for the density of this graph, but these are the results. Let me walk you through it. The top panels show the effect of alignment — non-aligned trials minus aligned trials. The left is what happens when the two maskers are on the side, and you see we didn’t have a big effect of alignment, because performance was quite good in this condition; we really didn’t have a lot of room to improve. But here, in the presence of the symmetric maskers — maskers to the right and left — what we saw is that the older and middle-aged adults had a greater difference: they took a greater hit from the aligned stimuli, so there’s a greater difference between aligned and non-aligned. The bottom panels show the two-maskers-to-the-side condition minus the symmetric condition — the effect of spatial configuration. And again, it’s a little less clear here, but we did see some effect of age in the presence of aligned maskers, meaning that older adults, and to some extent middle-aged adults, were more greatly affected by the symmetric maskers.
So in summary, this showed that older adults were disproportionately affected by syntactically aligned targets and maskers, and they were also at a greater disadvantage than younger adults in the presence of the symmetric maskers. We did do some regression in the most difficult spatial condition — the symmetric masking condition — on all of the participants. What we found was that both age and cognition mattered in the aligned condition, but high-frequency thresholds did not. In the non-aligned condition, cognition and high-frequency thresholds mattered, but not age. And there are actually good reasons, related to physical masking, why high-frequency thresholds mattered more in the non-aligned condition, which I’m not going to go into in this presentation.
One thing we thought was really interesting was that even though performance was quite high with both maskers on the side, cognition still mattered. Performance with two maskers on the side never went below seventy percent for any subject, but in both aligned and non-aligned trials cognition still mattered there, which we thought was pretty interesting.
So now we can talk about repetition study two, because we changed some of our methodology between the first study and the second based on the results of the study I just described. Remember, in repetition study one, both the target and masking speech were repeated. Again, that’s not really realistic — maskers usually aren’t repeated. So this study was designed to compare repetition of the target only versus repetition of the target plus the masker. Why did we include the target-plus-masker condition? Because we thought it was kind of a sneaky, indirect way of looking at how people subconsciously process to-be-ignored speech. By comparing target-only repetition versus target-plus-masker repetition, we can look at some possibly subconscious processing of the masker.
We had five types of trials. We had no-repeat trials, where the person just heard the sentence once. We had trials where only the target was repeated and the masker — in this case a two-talker masker — was not. We had the same thing with the modulated noise masker. And then we had those same two conditions with both the target and the masker repeated. So, five types of trials. How did the results of the previous study drive this one? We used the symmetric masking condition, which we didn’t use in our first study, and we used non-aligned targets and maskers, because we were now convinced that’s the more realistic way to go. And we like doing more realistic things in our lab.
What you’re seeing here is repetition benefit — again, second attempt minus first attempt. Here are the speech masker conditions and the noise masker conditions; this is when both the target and the masker were repeated, and this is when the target only was repeated, and the same for the noise masker. The older adults are the red bars, middle-aged are blue, and younger are green. When we did statistics on this, the only statistically significant group difference here was that in the target-only condition with the speech masker, middle-aged adults had less benefit than the other two groups. One thing I didn’t tell you is that our older and middle-aged groups were run at slightly different signal-to-noise ratios than our younger group, which is why that happened. The only other significant finding was that, interestingly, in the presence of the modulated noise masker, when the target only was repeated, the younger adults had the most benefit, which we thought was pretty interesting.

But remember, I said the reason we included the target-plus-masker-repeated versus target-only-repeated comparison was that it’s an indirect way of looking at how well you can ignore that repeated masker. This is what we found when we compared those two conditions: the older adults had the greatest difference. So they were at more of a disadvantage when the masker also was repeated. We’re still analyzing these data, still trying to figure out exactly why that might happen — and we might never know exactly why.
So I’m going to switch gears again and talk a little more specifically about middle age. This is something we’re particularly interested in in our lab. Like Barb said, it kind of came about organically, as some of us started having difficulty understanding speech in difficult listening situations and then we would hear this from other people. And certainly, if you work clinically, it is not unusual to have middle-aged people self-refer for assessment because they are having problems.
And there are good reasons to study this in the lab. One is to confirm these subjective problems: what do we see going on objectively that people are reporting subjectively? It’s also a way of studying early aging, which is something that should be of interest to everyone. And one of the nice things is that most of our middle-aged adults don’t have much or any peripheral hearing loss, so it’s a way of looking at some of these questions about age-related changes in speech understanding with less of the confounding factor of peripheral hearing loss.
So, some interesting things happen in middle age. This is data adapted from Bainbridge and Wallhagen, who looked at the prevalence of measured hearing loss — meaning pure-tone hearing loss — versus self-reported hearing loss. Interestingly, what they found is that in middle age there is more self-reported hearing loss (the purple bars) than measured hearing loss. These are large-N epidemiological studies. So middle-aged people tend to overestimate their hearing problems — I only went up to 60 to 64 here, but then the trend reverses: older adults tend to underestimate their hearing problems. Middle-aged people perceive problems even if those problems aren’t measured audiometrically.
Surprisingly, there haven’t been very many studies that have looked at anything other than pure-tone hearing loss in middle-aged adults. But they’re out there, and a lot of them — not all, because there are studies that haven’t shown an effect in middle age — do show effects. Things like temporal processing, binaural processing, understanding phonemes, remembering speech in babble, and some evoked potentials: you can see evidence of early aging changes in all these types of studies.
So our first study on this was a senior honors thesis from one of my undergrads. We looked at 12 younger and 12 middle-aged adults with normal hearing — all women; the middle-aged women were 45 to 55 years old. We looked at speech perception in the presence of both competing speech and steady-state noise, with and without spatial separation. What we found was that the only significant difference was in the very difficult condition where everything comes from the front and the masker is competing speech. Everywhere else we didn’t see a difference between middle-aged and younger adults. And interestingly, this was not related to amount of high-frequency hearing loss: there was no correlation between speech perception in this particular condition and high-frequency hearing loss.
In the last few years, in each of our studies we’ve included a group of middle-aged adults — again, 40 to 59 years old — and we see this interesting trend. What we see consistently is that when we look at speech perception in any kind of non-understandable noise, whether modulated or steady-state, we don’t see a significant difference between our middle-aged and younger adults. But when the masker is one or two competing speech messages, we do see a difference.
I’m going to show you two examples. This panel shows speech perception in steady-state noise, and this panel shows speech perception in the presence of a single competing speech message, without spatial separation. The middle-aged are the squares, the younger the diamonds, and the older the triangles. You can see that in this condition the middle-aged adults were pretty much in between, but closer to the younger adults. But in this condition you can see the separation between younger and middle-aged adults, and the middle-aged adults are now closer to the older adults.
And this is a pattern we’ve seen in most of our studies. Here’s an example from the first repetition study, where you can’t even see the difference between middle-aged and younger adults in the two types of noise maskers — the modulated and the steady-state — because they totally overlap. But in the presence of competing speech, we do see the separation. In this case, this is with spatial separation. So it does seem like there is a problem here with middle-aged adults listening in the presence of understandable competing speech messages.
And so again we did a little bit of a meta-analysis on the results from the last five studies. Older is green, middle-aged is blue, and younger is red, and what you want to pay attention to are the younger and middle-aged groups. In the two types of noise maskers there is no difference — in fact, our middle-aged adults in some cases outperformed our younger adults in steady-state noise, which I think says something about our slacking younger adults. But in the presence of competing speech we do see a difference between our middle-aged and younger listeners.

And remember I said way back that we use selected items from the SSQ. These are different SSQ questions; in our SSQ data, the older adults are light blue, the middle-aged are dark blue — which was purple — and the younger are red. What we found was that there were no statistically significant differences in self-perceived hearing problems between the older and middle-aged groups, even though the older adults had more hearing loss, which again is something that has been found by others. And the middle-aged group’s scores were significantly lower — meaning more perceived problems — than the younger group’s on all except two of these questions. So these problems are real, at least the self-perceived problems.
Clinical implications
I’m going to end with a few clinical implications. The Holy Grail — I don’t know if anyone else here knows Monty Python — is an ecologically valid test of auditory function. As someone said earlier today, this is very difficult to achieve because of time, and honestly I don’t know if we will ever have an ecologically valid test of auditory function, because we want it to do too much. We know that what we do in the clinic isn’t ecologically valid; it doesn’t touch upon what we need to know about the ability to understand speech in complex listening situations. We do have clinical speech-in-noise tests like the QuickSIN, the HINT, and the LiSN-S, which is a test from Australia, but even these have their limitations because they’re kind of static — they don’t have a lot of variables. They do give a general idea of speech perception in noise, though. The ideal clinical test battery, I think, would include how well someone can use visual, spatial, and talker familiarity cues; how well they can understand complex sentences, not just repeat back words or sentences; and it would measure working memory or other cognitive abilities that we know are important for understanding speech in difficult listening situations. It would tell us whether the person has hearing loss or other supra-threshold processing problems. And it would tell us how effortful it is for the person to communicate in these difficult listening situations. I have a unicorn up here because there’s no such animal — and there never will be.
So what is an audiologist to do? If you’re an audiologist, you know that your test measures are not giving you a really good idea of what is happening in the real world. I do think it’s important to get some idea of speech perception in noise. We have these tests; they’re better than what we’re doing in quiet, so we should give them, but we should know their limitations. Certainly, in my opinion, if time is of the essence we should give them instead of word recognition in quiet. I still don’t understand why we do monosyllabic word recognition in quiet in this day and age. Yes, thank you.
I think it’s important to get an idea of self-perceived hearing handicap. There is a body of literature suggesting that self-perceived hearing handicap is far more important than measured hearing loss when it comes to things like quality of life and negative outcomes related to hearing loss. So I think it’s really important to get some measure of self-perceived hearing problems. And I think it’s important to believe our patients, because we’re not measuring in the situations they have problems in. We really need to listen to them and give a lot of weight to what they’re experiencing.
When we’re counseling our patients, again, I think it’s important not to minimize their problems, and I also think it’s important to talk about communication strategies that help differentiate what they want to hear from what they don’t want to hear. One of the ways we have of overcoming competing speech is to make what we want to hear as different as possible from what we don’t want to hear. One way to do this is by looking at the person we’re talking to, because the message we’re hearing from that person is time-locked to what we’re seeing, whereas the background message is not — so visual speech cues are very, very important here. When we’re talking to communication partners, again, using clear speech is one way of differentiating the target message from the background message. Topic maintenance — staying on the topic — helps because we know people can use top-down cues to understand what’s going on; so do repetition and rephrasing. And, because it wouldn’t hurt, encourage activities that increase the opportunity for social engagement. Again, there are hints that social isolation is really damaging as we get older, so certainly we want to be encouraging this.
I’m a big believer in hearing rehabilitation; I think it’s the future of audiology. It was the past, and I think it’s going to be the future, and this is one area where we might be able to make a difference. Context is really important: people can use context in difficult listening situations to fill in the blanks, and we want to make sure our patients can use it. We also want to talk about environmental management strategies, certainly decreasing those competing sounds. If you’re trying to talk on the telephone, go somewhere quiet. If you’re trying to talk to your spouse and the TV’s on, go to another room or turn the TV down, things like that, and minimizing sources of distraction.
So just a few take-home messages. People have difficulty coping with complex listening environments as they age. These problems may not be apparent during the audiologic evaluation, and they may begin in middle age. Even mild hearing loss can lead to problems. Even if accuracy doesn’t decline, patients may pay a price in terms of the effort needed to communicate. And subtle and not-so-subtle changes in cognition likely contribute to these problems.
So I’d like to thank some people who have worked on this project, my co-investigators Rich Freyman and Alexandra Jesse. Our wonderful programmer Michael Rogers. And some of the students, some of whom are here who have worked on these projects. And the NIH for supporting my work. Thank you.
Questions and discussion
Audience Question:
Yes, thank you, that was a very interesting talk. I’m Kristi Ward from Northwestern; I’m a PhD-AuD student. I was just wondering: earlier in your presentation, when you ran the regression models, you had cognition as a packaged variable in those models. I was wondering if you could maybe expand on this — I’m sure you have — when you look into the various cognitive processes that you test, are you seeing that some of them are more predictive of performance, especially in older adults, than others?
Audience Question:
I’m Greer Bailey, an AuD student at West Virginia University. I was just wondering, in your studies, what do you do to ensure comfortable audibility of stimuli for the auditory working memory task, especially for individuals who have high-frequency hearing loss?
Audience Question:
Hi, I’m Jacob Sommers; I go to Louisiana Tech. You mentioned that you think audiologists should do speech-in-noise testing during the evaluation, which I agree with. I was just wondering, with those results, what do you do with them? How do you use them to benefit the patient?
Audience Question:
Rich Tyler from Iowa. I have two comments. One is, I’ve always thought that if somebody has a hearing loss, you get them a hearing aid or cochlear implant, do the best possible job you can fitting those appropriately, and then you provide auditory training. Does your research suggest something else, such as cognitive training as well, might be helpful — or anything else?
I think it’s premature to — well, I think it’s an interesting question, because you wonder: can you work top-down? What we do as audiologists right now is mostly bottom-up. If you’re doing aural rehab and teaching coping strategies, that is top-down. But will that be effective? I think Samira has looked at some cognitive training, and it does seem to — is that true, Samira? — it does seem to affect speech perception in noise, if what I remember is correct. So perhaps that’s the direction we should be taking, working from the top down, at least in part. The question is how generalizable any of that is, and that I don’t know. You would think that if there are links between cognition and hearing in noise, and if cognition can be improved, then you should be able to improve hearing in noise. But I don’t know if that’s really been demonstrated yet. Perhaps someday it will be.
Audience Comment, Rich Tyler:
A second comment has to do with a doable clinical test for speech perception that could be used and be meaningful. I published on this, looking at issues related to selecting and fitting cochlear implants and hearing aids. Basically, it’s a sentence perception test where the target sentence comes from the front and the masker sentence comes from one of the sides — the side of the better-hearing ear, in order to maximize the likelihood of a spatial-hearing binaural effect. Once you know what the better-hearing ear is, you do the speech perception test with the right ear, the left ear, and both ears, because in fitting hearing aids or cochlear implants you need to know whether there’s a binaural advantage or not. That can be used for counseling, but it also lets you know whether you should risk putting in a cochlear implant, or leave things as they are, or maximize the similarities between the two ears. It doesn’t take very long; you don’t have to test every single ear, you just have to find the best ear — that’s the side the sentence masker goes on — and I think it’s very efficient and easy to do clinically.