Speech Understanding by Cochlear Implant Patients in Complex Listening Environments

This presentation focuses on speech perception outcomes when people with one or two cochlear implants listen in laboratory simulations of complex environments such as restaurants and cocktail parties.

Michael Dorman

DOI: 10.1044/cred-pvd-c16006

The following is a transcript of the presentation video, edited for clarity.

I’m going to talk about speech understanding in simulations of complex environments by my implant patients. My disclosure slide says that the work is supported both by the NIDCD and cochlear implant companies who pay my staff salaries, my patient travel, and allow me to buy new golf balls every once in a while when I need new golf balls.

I spend most of my time at cochlear implant meetings. I rarely come to a meeting like this, and that's why it's such a treat to be here. So unless you come to one of my specialty meetings — which I don't advise, because actually they're very boring; I mean, they're surgeons, and you know — you probably won't hear me again. So I'm going to start today with three things that you should know about cochlear implants in addition to the topic that I'm going to talk about. They are relevant to the topic, although it's a bit of a stretch.

A history of cochlear implants

So we'll start with a little history of cochlear implants. The history starts in France with these two fellows — a surgeon and a physiologist who had a patient whose cochlea had been removed, leaving a stump of an eighth nerve. They put in an induction coil and an electrode, and they could stimulate the stump of the eighth nerve. The patient heard things that he described as cricket-like. So electrical stimulation of the eighth nerve obviously was possible. Revisionist historians have actually said that this was probably the first brainstem implant, because the stump was almost non-existent — so maybe the first implant was a brainstem implant.

Now word of this traveled from France to Los Angeles, where Bill House was told about the French work by a patient, and he did the first cochlear implant — that is, into the scala tympani. What's critical about Bill is that he's famous for the single-channel cochlear implant, yet it's perfectly clear that he knew better than that at the beginning, because his first surgery was one wire, but the second one a few months later was five wires. So you have to believe that he knew perfectly well that you had to restore a wide range of frequencies in order to understand speech. He took a hiatus from his work to develop biocompatible materials and then started again in 1969. And again the first three patients had five wires, not one, because I'm reasonably sure he thought that had to work better than one. But with the technology at the time and his skill set, he couldn't make five wires work better than one. So House eventually became known for the House single-channel implant, although I have to believe that Bill knew all along that he was going to need more.

By the end of his life he convinced himself that one was good enough. Which tells you you can have a good idea at one time in life and a bad idea later on. But he is the father of cochlear implants.

Now word of Bill's work, which again was in the early nineteen sixties, spread north to San Francisco, where Blair Simmons was at Stanford. Shortly after, Blair put five wires into the modiolus of a patient. In Australia, Graeme Clark began work in the late 1960s, on animals first and then humans, and out of his work comes Cochlear Corporation. The lads up the road at UC San Francisco, Michelson and Merzenich, took up the work around 1970. Don Eddington started a project in 1970. Back in France, Claude-Henri Chouard started a project in the early 70s and by the mid-70s had a handful of patients with multiple-channel implants. He was smart enough to take out a patent, and now claims that everybody stole his ideas from his patent. But the issue is, his patients had no speech understanding, and why would one bother to steal that?

If you want a good story about this you should ask Professor Tyler who had one of the great grants of all time. He conned NATO into sending him to Europe on a busman’s holiday. Wandering about Europe testing the early cochlear implant patients — France, Germany, England. Ask him how that went.

Ok so my point here is that after Bill House’s initial work, word spread around the world and all of these projects got started just about the same time. And multi-channel implants came out of this, out of House’s original work, but the work was almost simultaneous worldwide.

Now then, the last person I'll mention — oh no, I'm wrong. There are also the Hochmairs in Vienna, Inge and Erwin. And now we have all the modern manufacturers: in Australia, Graeme Clark and Cochlear Corporation; the UC San Francisco group, which eventually evolved into Advanced Bionics; and the Hochmairs, who set up MED-EL. The last person to mention is Blake Wilson, in 1983.

At this time quite reasonably the manufacturers held their signal processing to themselves, which you should if you own a company and you want to make money. You don’t give away your secrets. So the NIH decided to fund Wilson’s group — Dewey Lawson and Charlie Finley and Blake — to develop signal processing for cochlear implants which would not be proprietary. And in fact Blake and his team made a decision at that time to give away all of the IP. Today every implant uses some aspect of Wilson’s work. The amount of IP he gave up is estimated to be well over $50 million. So you can decide whether that was a smart decision or not. All right. And that leads us to the modern cochlear implant, which looks like this.

The central auditory system

All right, now then. Second item: the central auditory system. When I was a student, this was the drawing of the central auditory system. This is the periphery — the cochlea is on the right, as I hope you understand. Then we have a brainstem, a midbrain, and then the wire, as it were, ends at the auditory cortex. This is the famous Netter drawing, which may still be used in undergraduate classes, I think. The problem is that this is absolutely wrong. The wire doesn't stop there; the wire in fact goes everywhere, as others have said at this meeting. The central auditory system certainly doesn't stop at Heschl's gyrus, it goes all over the brain. Which is relevant to my next point. This is a current view of speech perception, the so-called dual-stream model. You don't have to know what all the boxes are, but the point is that acoustic information is reasonably well thought to go simultaneously ventrally to a lexical interface and dorsally to an interface with the articulatory system, and to the large blue area in front, which is the inferior frontal cortex. That area, the IFC, is now the hot area for research, because it is involved in almost all speech perception tasks that are even minimally complex, and it involves attention. And it even turns out that Broca's area, which you learned was up there somewhere, probably isn't where you were taught it was.

So the auditory pathways go all over, and then we find out that speech perception is not entirely auditory. There's the famous McGurk effect, where you have an auditory input that might be /ba/, the lips that you're watching at the same time say something like /ga/, and what you hear is neither of the above — you hear something like /da/ or a voiced 'th'. There are many versions of the McGurk and MacDonald effect. You know, it's too bad if you're the graduate student who's the second author. Everybody knows the McGurk effect, but what about the other guy? I mean, that's not fair. All right.

So that's visual input altering speech perception. But so does tactile input, and this I find interesting. Here's a classic experiment. What's going on is that you have these wonderful micromanipulators that are wired to little pieces of tape at the edge of the lips. While you're listening, this thing can pull your lips up, or it can pull them down. Okay, up or down, while you're listening. Now that will alter what you hear. Suppose there's a continuum from 'head' to 'had.' 'Head' has spread lips; 'had' does not. And so these — I just made this up — these are two representations of the vowels. On the bottom we have percent 'eh,' and the first two items are heard as 'eh,' and then if we pull slightly up on the lips you hear one more member of that continuum as 'eh.' It's really a cool experiment. The effect is tiny, but that's not the point. The point is that there is an effect. So the gizmo that's doing speech perception not only has auditory input and visual input, it's also attending to tactile input. And so this has to be a multi-modal decision-making cell or group of cells. Or maybe it's even amodal — which is to say all of these separate modalities have to get translated into a common modality in order to make a decision.

And recently we were fiddling around with tactile input for reasons that aren't very interesting. This is the stimulator for the so-called BAHA, a bone-anchored hearing aid, and we extracted it. You hold the bit on the right between your fingers, and we drove it with the fundamental frequency and the amplitude envelope. So we have an implant patient listening, and in one hand they're holding this thing and it's vibrating — and what it's vibrating at is the F0 and the amplitude envelope. Once again we get a small benefit in speech understanding. It's trivial, absolutely uninteresting, but it was real. And one of the patients — these are the things that make your day in the laboratory — one of the patients said, "Dr. Dorman, it sounded like you were talking through my finger." Isn't that cool? I mean that is really, really cool. How could it possibly be that it sounds like it's coming through the finger? Think about that.
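As an editorial illustration of what driving the stimulator with "the F0 and the amplitude envelope" involves, here is a minimal Python sketch. It is not the lab's actual code; the function name, parameters, and method choices (Hilbert envelope, simple autocorrelation pitch tracking) are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_and_f0(x, fs, frame_ms=25, f0_range=(70, 300)):
    """Return a smoothed amplitude envelope and a rough per-frame F0 estimate (Hz)."""
    # Amplitude envelope: magnitude of the analytic signal, low-pass filtered at 30 Hz.
    env = np.abs(hilbert(x))
    b, a = butter(2, 30 / (fs / 2))
    env = filtfilt(b, a, env)

    # Crude F0 track: autocorrelation peak within a plausible pitch range.
    frame = int(fs * frame_ms / 1000)
    f0 = []
    for start in range(0, len(x) - frame, frame):
        seg = x[start:start + frame] * np.hanning(frame)
        ac = np.correlate(seg, seg, mode="full")[frame - 1:]
        lo, hi = int(fs / f0_range[1]), int(fs / f0_range[0])
        lag = lo + np.argmax(ac[lo:hi])
        f0.append(fs / lag if ac[lag] > 0.3 * ac[0] else 0.0)  # 0.0 marks unvoiced
    return env, np.array(f0)
```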

Speech understanding by cochlear implant patients in complex environments

Okay, enough of an introduction. All of these are tangentially relevant to speech understanding in complex environments. First let's start with the simplest environment, which is a single loudspeaker in a booth. Speaking of your gizmo, it would also be cool if I could just look at the slide and the red dot would go wherever my eyes went. Yeah, you know, there's money in this. I know there's money in this. All right: a single loudspeaker in a booth, AzBio materials in quiet. These are sentences that are kind of difficult, or at least peculiar. And I'm showing you data from Renee Gifford's shop at Vanderbilt — normal-hearing listeners age 20 to 70 — because almost all my patients are older, and so you need this range. All right. The red line is the bottom fifth percentile of normal — bottom fifth. So that's the worst.

Now what I want to know is how many cochlear implant listeners, fit with a single implant, perform like normal-hearing listeners in quiet. One implant, all 22 channels — make it a hundred virtual channels, let's say. The answer is about four out of 70. The good news is that the mean scores are relatively high. So when we write in chapters that cochlear implant patients do relatively well, or do well, with sentence material in quiet, that's real. Unfortunately, only a few are actually at the normal level. But ninety percent is good. It's about what I get in a normal conversation in a bar.

Well, we're talking about complex environments, and those almost always involve noise. What happens to a cochlear implant patient in noise? What happens to a normal-hearing person in noise?

On the left is the data point that I showed you: normals and cochlear implant patients, with a big cluster up here over 80%. My single-implant patients do very well. But look what happens in noise. This amount of noise does nothing to a normal-hearing listener — plus 5 dB does essentially nothing to a normal-hearing listener. But look what happens to my implant patients. They go from about eighty percent correct on average to around thirty percent. So noise is a problem for our patients. Noise is what happens in complex listening environments. Actually, is there a complex listening environment without noise? I don't know. Okay, but think about that.

So noise is a problem. Getting rid of noise is obviously one of the things we need to do for cochlear implants. Let's see why noise is a problem. It's actually quite straightforward. Here is standard implant signal processing — they all work more or less like this.

Okay, so the microphone, filters, envelope detectors. The point of a cochlear implant is that you estimate the energy in however many filters you have, and you output a pulse whose amplitude is proportional to the energy in each filter. So here we have, obviously, a fricative plus a vowel. This is the fricative energy here, and here are the pulses in this channel, which reflect that amplitude. Lower amplitude here, smaller pulses. So the amplitude of the pulse at a particular point in time, in a particular channel, tells you where the energy is in the spectrum.
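To make the filter-and-envelope idea concrete, here is a minimal Python sketch of the processing just described: band-pass filters, envelope detectors, and pulse amplitudes proportional to the energy in each channel. It is an editorial illustration under assumed parameters (8 channels, 200–7000 Hz, 900 pulses per second), not any manufacturer's strategy.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def channel_envelopes(x, fs, n_channels=8, lo=200.0, hi=7000.0):
    """Split x into n_channels log-spaced bands and return each band's envelope."""
    edges = np.geomspace(lo, hi, n_channels + 1)      # logarithmic band edges
    envs = []
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                     # band-pass filter
        envs.append(np.abs(hilbert(band)))             # envelope detector
    return np.array(envs)                              # shape: (n_channels, n_samples)

def pulse_amplitudes(envs, fs, rate_pps=900):
    """Sample each channel's envelope at the pulse rate: one amplitude per pulse."""
    step = max(1, int(fs / rate_pps))
    return envs[:, ::step]     # pulse amplitude tracks the energy in that filter
```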

How many channels do we have? How many functional channels do you have in a cochlear implant? These are classic data out of Bob Shannon's shop from many years ago, which have been replicated many, many times. He started with 22 channels in cochlear implant patients, and then turned half off — this is the 10. Then turned off more — seven. Turned off more — down to four. And you see that the performance with four was the same as with 22. This study and many others like it led to the notion that the number of electrodes is not the same as the number of channels — processing channels or effective channels — in a device, and in most modern cochlear implants you may have no more than four, six, seven, or eight effective channels.

So the point is that we only have a handful of channels to deal with noise. And the fewer channels you have to deal with noise, the worse off you are. Obviously, if we could get 22 effective channels, or a hundred virtual channels, then we'd be fine — we'd be golden — implants would work spectacularly well in noise, at least better than they do now. But we don't have that.

Why we don't have that — what the problem is — is shown here. Here we have clean speech; here's the spectrogram. As you know, the information that tells you what the person said is in the form of frequencies, and as the articulators move, the formants change. Here you can see the formants changing as the articulators move.

Here's a 6-channel processor, where you have fixed frequency channels. And so now, in order to see a formant move, you have to be able to calculate that there's more energy in this channel than in that channel. You have to look across the channels and estimate where the highest energy is at each point in time in order to track the formant movement. In the bottom panel I've added noise at +5 dB SNR, and you can see that the indication of where the energy is, is greatly reduced. In fact, about all you can see now are the big bits — where things start and where things stop. So you get some speech understanding, but it's not very good, because you've certainly lost the details of the envelopes. More channels would help.
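For readers who want to reproduce the "+5 dB" condition described above, here is a short sketch of how noise is conventionally scaled to hit a target signal-to-noise ratio. This is a generic illustration, not the lab's stimulus-generation code; the function name is hypothetical.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db=5.0):
    """Add noise to speech, scaling the noise to reach the requested SNR in dB."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)                    # speech power
    p_noise = np.mean(noise ** 2)                      # noise power before scaling
    # SNR_dB = 10*log10(p_speech / (scale**2 * p_noise))  ->  solve for scale
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```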

Well, how do you remedy this? I have patients ask, "How can I do better when I'm listening in the bar?" And I say open your eyes, because if you close your eyes in the bar it's not good. These are data I've collected over the years: audio on the left, audiovisual on the right. Patients improve a lot when they can see the talker. What I like most about vision is that even up at these high levels of speech understanding, where we really are approaching the ceiling, you can still get benefit from speech reading — when other cues have lost their value. How much benefit can you get from vision? These are the AzBio sentences, or something like them; we actually invented a new AV test that had equal list intelligibility. And so — what — 30 percentage point gains, 40 percentage point gains. A few patients get 40 to 50 percentage point gains. So the value of vision is very large.

Now the reason I show this is because this is cheap. All you have to do is open your eyes; manufacturers don't have to spend any money. And so now we can use this as a metric to gauge the value of other things we can do. So let's say 30 percentage points is vision, and let's see how the other interventions do.

Well, most of our patients now have low frequency hearing in the ear contralateral to the CI. Here is an implant array, courtesy of Tim Holden at Wash U, many years ago now. Recently we finished a study. We had three groups of patients implanted in one ear: pretty good low frequency hearing in the other ear, not so good, and even more not-so-good. And the question was, what was the value of this low frequency hearing in the ear opposite the implant?

A summary of what we found is here. We had CNC words and we had sentences. There was a discussion earlier about using words versus using sentences, and there seemed to be some support for sentences as real-world events. But actually the field of implants is going the other way. With sentences you have very large individual differences in cognitive function, whereas with words you can actually minimize that. You want an ecologically valid measure, but you also want something that is less influenced by large individual variation. So, CNC words. Here are our three levels of hearing loss: 0 to 40, 40 to 60, over 60 dB. This is the percentage point gain, and this is the proportion of patients who benefit. What you see is that for words in isolation, only about a third of the patients gained, even with this very nice low frequency hearing at 125 and 250 Hz. On the other end, three-quarters of the patients benefited when you had sentences in noise, and the gain was 26 percentage points, which is getting up to my magic number of 30 for vision. So low frequency hearing obviously does something having to do with separating the word stream — segmenting the auditory stream. It is simply a very small effect for individual words, but a very large effect for sentences in noise.

So if you were a researcher and you were only using single words in your lab, you would come to the conclusion that preserving hearing on the other side wasn't worth the effort. But if you used sentences, you would come to the opposite conclusion. Again, what happens is that this low frequency hearing allows you to find word onsets. Word onsets in English are signaled about seventy percent of the time by higher amplitude and a pitch change. This low frequency hearing allows you to identify strong syllables, which are usually word onsets, and that allows you to segment the stream. That's why it's more valuable for sentences than for words — for words, that cue isn't useful.

All right, how much benefit is there? I'll show you a series of plots that look like this. This is percentage point benefit. This is the CI score — I drive performance down with noise. This is a hundred percent correct; you can't do any better than this. And this is 0. The ellipses are some version of a ninety-five percent confidence interval for the AzBio sentences. So the point is that in order for an intervention to be effective, the data point has to fall above this line. If it falls under the line, then it's not an effective intervention. Okay? Good.

All right. These are patients with hearing in the contralateral ear worse than 60 dB, and you can see that the vast majority are under the dotted line, meaning their low frequency hearing doesn't help them. A few, however, do better, and so you say, "Well, what should I do?" The answer is, "Try." Try with everybody, but with thresholds at 125 and 250 Hz poorer than this, the odds of doing well are not high. But why not try — it's easy. Here in green are patients with 40 to 60 dB low-frequency thresholds at 125 to 250 Hz. Obviously now more of the green dots are above the dotted line, indicating that this much hearing is clearly worth aiding, and the benefits can be very large — up to, what, 60 percentage points, or 50, more commonly around 30. And then finally, for the very fortunate patients who have contralateral thresholds in the low frequencies of less than 40 dB, almost all the blue dots are well above the line, and again the benefits can be 40 to 50 to 60 percentage points. So the clinical observation is that you should always aid the other ear, and the patients will tell you pretty quickly if it's not useful. But most commonly the aiding of the other ear will be of value, and the magnitude of that value is in the ballpark of the magnitude of opening your eyes. And that's a big effect.

But what if the patient doesn’t have low frequency hearing in the contralateral ear?

Well, then you give them a second implant. That's easy. Now you have two implants, one in each ear. We've tested single implants and bilateral implants in a couple of environments that we like. One is a cocktail party where we have a continuous male voice on the left side, a different continuous male voice on the right side, and then you're trying to understand the female voice in front. I used to make terribly sexist comments about this situation, and I'm just not going to anymore. The election turned me off.

Over here we have a roving-talker condition, where eight loudspeakers surround the listener with directionally appropriate noise from all of them — this is the so-called R-SPACE environment. Now the talker might be anywhere — here, or here, or here — and so you have to identify what's being said while the talker roves. So what's the value of having two implants versus one in this environment?

Well, in the so-called cocktail situation, what we do is drive performance down in noise — that's all of these data points. This is the better-ear performance — the better of the two ears — and this is the benefit of having two ears versus the better ear: about 18 percentage points. It's significant; it's not huge. In the environment where the talker is roving and you can't tell where he or she is going to be, the benefit is much larger — 28 percentage points — and now again we're pushing that 30 percentage point number that I get for the value of vision. So bilateral implants can be a huge benefit when you're in an environment where the talker is roving.

The newest way to make a two-eared patient is to insert an electrode into the poorer ear but preserve the residual hearing in that ear. So these patients have low frequency hearing in the right ear — that's why you didn't implant it. They have slightly poorer hearing in this ear — that's why you did implant it. And now what the surgeon is going to do is stuff this thing as far down as he can, depending on the surgeon, and preserve the hearing apical to the end of the electrode. And you can do that. It's done quite commonly.

Here are data that my colleague Renee Gifford and I collected. This is a population from the U.S.; this is a population from Henryk Skarzynski's group outside of Warsaw. The black dots are the pre-implant thresholds in the ear that you just stuffed an electrode down, and after you stuff it down you lose about 15 or 20 dB of threshold. But you still have plenty of useful residual hearing in the ear that has the implant. It's moderately enjoyable to go to meetings with surgeons where they get really annoyed at each other, because one will say, "Oh, I had 97% successful preservation of hearing," and some guy will go, "You didn't — I only get 13 percent, you clown." It's really very amusing. And how long an electrode can you use? Skarzynski used 31-millimeter electrodes, which go all the way to the apex, and he preserves hearing. Others use very short electrodes and they preserve hearing. Obviously there is some effect of electrode length, but it's not as large as you might imagine. In the hands of a skilled surgeon you can get seventy-five, eighty percent hearing conservation with electrodes 24 or 28 millimeters long. Less skilled surgeons have poorer outcomes.

So you now have a two-eared patient. With bilateral implants you can use ILDs to localize sound sources, because the processors are level detectors and usable ILDs are present. Hearing preservation patients have a different set of cues for localization: they only hear in the low frequencies, where ITDs are large and ILDs are very small. So what's kind of cool about comparing bilateral implant patients with hearing preservation patients is that they're localizing on the basis of different cues. And we know that ILDs and ITDs give you about the same level of sound source localization if you're a normal-hearing listener — in our lab, that's about 6 degrees of RMS error. Bill Yost has done this all of his life, and he tells me that answer is right. What do I know — he knows.

What does having bilateral low frequency hearing buy you in these two environments? Well, in the cocktail party it buys you about 17 percentage points, and in the roving condition, 13 percentage points, which wasn't different from 17. So they're not very large effects for hearing preservation in these environments. But more is to come.

Sound source localization

Now, I told you that bimodal patients, with an implant in one ear and low frequency hearing in the other ear, can get 30, 40, 50 percentage point benefits if they have reasonably good low frequency hearing. And that's a lot. But what you can't do when you have an implant in one ear and low frequency hearing in the other is localize a sound source. You have one device, the implant, that is good at level differences but not time differences. Then you have your other ear with low frequency hearing, which is good at time differences but doesn't get much in the way of level differences. So let me show you the problem with bimodal listening.

So here we have RMS error — this is our loudspeaker arrangement, there we are. The loudspeakers are separated by 15 degrees; here is the RMS error. This is chance, from a Monte Carlo simulation. This is all Bill Yost's work; I just borrow this stuff. This is a group of some 80 normal-hearing listeners. The mean score for wideband noise is around six degrees of error. This line is interesting because it's the 95th percentile of normal. And so my question was: Do any of these listeners with implants approach the 95th percentile for normal — the upper bound of normal? Can we restore normal sound source localization with a cochlear implant? If we can, then these data points will be below that line.
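Here is a minimal sketch of the two quantities on that plot: the RMS localization error for a set of trials, and the chance level estimated by Monte Carlo simulation (random responses across the loudspeaker array). The array geometry shown is assumed for illustration, not Yost's exact setup.

```python
import numpy as np

SPEAKERS = np.arange(-60, 61, 15)       # assumed arc: 9 loudspeakers, 15 degrees apart

def rms_error(target_deg, response_deg):
    """Root-mean-square localization error in degrees."""
    d = np.asarray(response_deg, float) - np.asarray(target_deg, float)
    return np.sqrt(np.mean(d ** 2))

def chance_rms(n_trials=100_000, seed=0):
    """Chance level by Monte Carlo: respond with a random loudspeaker on every trial."""
    rng = np.random.default_rng(seed)
    targets = rng.choice(SPEAKERS, n_trials)
    responses = rng.choice(SPEAKERS, n_trials)
    return rms_error(targets, responses)

print(f"chance RMS error ~ {chance_rms():.1f} degrees")
```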

So here's normal hearing. Here are some older people — Bill came down to my office one day and said, "We need older people, Michael." I said, "Yeah, why are you here?" He was the first, so I was the second. These are bilateral hearing aid patients; most of them do perfectly well, although some don't. Here are our single-ear listeners — with one ear you can't localize a sound source; this is right around chance. Here are our bimodal patients, with an implant in one ear and low frequency hearing in the other ear. And you see they too are just slightly better than chance — extremely poor sound source localization.

The good news about bimodal hearing is that it improves speech understanding in noise. The bad news is that it doesn’t restore sound source localization. And that’s important.

This is a group of bilateral CI patients we've run. This is the mean. You can see that the very best of my bilateral patients are just above the upper limit of normal. And as I wrote somewhere once — actually, I don't read the new literature anymore, I just read what I wrote before so I can remember what I did; Rich has the same problem — the best patients are just above normal, and this degree of localization ability, which is around 20 degrees (remember, normal is six), is perfectly fine. I don't believe it makes any difference to you whether you have 20 degrees of error or six. All you want to do is orient, right? You don't care if it's here or over here; you're just orienting to the right side. We don't hunt with our ears and we don't jump on prey by hearing them. All we have to do is find out roughly where the sound source is, and then our eyes will show us where it is.

Here are our hearing preservation patients with bilateral low frequency hearing. That distribution looks like the distribution for bilateral CIs. And what's cool about that, again, is they're getting the same level of localization with two different sets of cues. The bilateral patients are using ILDs, and the hearing preservation patients are using ITDs.

Here's an interesting case: single-sided deafness. These patients have an implant in one ear and perfectly normal hearing in the other ear. And you can see this is the first data point I have of an implanted patient hitting the 95th percentile for normal. That takes a bit of explaining, which I won't do, but it's really a cool data point. The last set of data points are for 3 patients who have bilateral implants and bilateral hearing preservation — the surgeon preserved hearing in both ears that he operated on. So these patients have access to ILDs through their implants and ITDs through their low frequency hearing. And I thought, "Hey, that's cool. These guys are just going to hit it out of the ballpark, right?" Wrong. They're no better than the others. This distribution looks, I think, like the rest of them. The trick is, again, that ILDs and ITDs are equivalent cues, and for normals, having both doesn't help. You use the best one; having the other one gives no benefit — at least in this environment.

So that's my story about localization. And the message is that we have two groups of CI patients who can localize — the bilateral patients and the hearing preservation patients — and one group, the bimodal patients, who can't.

So you say, “Who cares? Why is that important?” Here’s the who cares. This is a new experiment of ours. Three loudspeakers and three monitors. This is Sarah who runs my laboratory. And the experiment is this: You’re sitting in the middle. Sarah will say “she’s here.” That’s the cue, and you have to find Sarah. Left, right, center. And then she says a sentence, and you have to repeat the sentence. So she’s here, you orient, and then you get the sentence.

What if you have one implant, or if you're bimodal? If you have one implant and she says, "she's here," you can't find her, because you can't localize. And if you can't find her in time, then you can't use lip reading to give you that 30 percentage point advantage. So you're looking around like this, and by the time you find her it's over, and all you get is the auditory signal and no visual benefit. So that data point looks like this. These are bilateral listeners, but this is with one implant. We drive them down in noise. And whether you use auditory only or one implant plus vision, vision doesn't help you very much — it's 15 percentage points or so. It's a very trivial effect of vision, because you can't find the talker soon enough.

On the other hand if you have two implants, now you can localize where Sarah is. She says, “she’s here,” you know which way to turn and then you can both hear it and use lip reading. And so when you do that, instead of this 15 percentage points, you get about 30 percentage points or more.

So the point here is that if you use localization in your experiments with vision then you can show much more value for bilateral implants than you do with auditory alone. See in other countries where you have to sell implants to the health plan, you have to show their benefit. Well if the benefit is 15 percentage points, the money manager says go away. If the benefit is 30 percentage points, you have a better chance with the money manager.

And that is summarized here. If you have auditory-only experiments and you're trying to show the value of two ears, the best I could do is around 15 percentage points. On the other hand, if you have two implants and you use vision in your experiment, then I can get double that score, and this is an easier sell than that cell up there. So speech perception is inherently audio-visual, maybe tactile, and if we used all those inputs we might have a very different view of the value of bilateral implants and other kinds of interventions.

Improving the single CI

Well, finally, let's not give up on improving the single CI. So we're going back to speech understanding in a cocktail party — here's my cocktail party. Before the election, I imagined one of these was a presidential candidate, and I had to stop thinking that. So you have one CI here, and you have a beamformer microphone where the polar pattern looks something like this, or some version of this, depending on how you set it up. You're suppressing signals on the sides and slightly at the back. This environment is just made for that: you're going to attenuate this signal because it's in this null, and this signal over here is also in a null and is also attenuated by head shadow. And so a beamformer microphone in this environment ought to work very well, and it does — this happens to be one manufacturer, but they all work more or less the same. This is percent correct: you drive performance down with the standard microphone, and now you add some version of a beamformer directional microphone, and you get 31 percentage points. That's a big effect. And what interests me about this effect is this: in that same cocktail party environment, I only had about a 20 percentage point benefit from going from one to two CIs, and about the same for no hearing preservation versus hearing preservation. So fitting an appropriate directional microphone on a single CI can get you the equivalent benefit of having two implants or hearing preservation surgery. It's a very large effect. There are obviously downsides to directional microphones, which I suspect you all know, but when used appropriately they can give a very, very large benefit relative to these other effects.
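As a toy illustration (not any manufacturer's algorithm) of why a directional pattern helps in this layout, here is a short sketch of a first-order polar response: the target talker in front is passed at full gain, while sources at the sides and back are attenuated. The choice of a hypercardioid pattern is an assumption for the example.

```python
import numpy as np

def directional_gain(angle_deg, pattern="hypercardioid"):
    """First-order polar response: gain = a + (1 - a) * cos(theta)."""
    a = {"omni": 1.0, "cardioid": 0.5, "hypercardioid": 0.25}[pattern]
    return a + (1 - a) * np.cos(np.radians(angle_deg))

# Target talker in front, continuous maskers at the sides, noise behind.
for angle in (0, 90, -90, 180):
    g = abs(directional_gain(angle))
    print(f"{angle:+4d} deg: {20 * np.log10(max(g, 1e-6)):7.1f} dB re front")
```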

And in the spirit of research, I ventured to several bars recently with my patients, and they say it's much easier to understand the bartender with a directional microphone than without. One of them was going to go to more bars. There you go.

So what have I told you? To hear well in complex listening environments, you should first open your eyes. Aid the contralateral ear; if it doesn't have hearing, provide bilateral CIs. Provide bilateral low frequency hearing by hearing preservation surgery. And if you're stuck with a single CI, use some kind of beamformer or FM microphone to increase the signal-to-noise ratio. And thank you for staying here.

Questions and discussion

Audience Question:

Hi, I'm Kristen from the University of Tennessee. We have a bit of a discrepancy in our department. The researchers feel that using beamformers in a pediatric population has been really helpful. However, the clinicians feel that they probably shouldn't be allowed to use them, due to safety concerns from the attenuation. And so I was wondering if you could give us your input.

So the question is one that I've encountered a lot. As you may have guessed, I just wander into these things. I'm an experimental psychologist by training, and I didn't know what a beamformer was until a few years ago. When I first started to give this talk, people said, "Oh no, you shouldn't use them, because the children lose the environmental sound and are going to get run over by a bus." I think maybe you know the literature better than I do, but the evidence isn't great that that's going to happen. In fact, the evidence is just slim in general. A recent study by Jace Wolfe gives me a fair amount of hope that these will be more widely used, and I know that all the manufacturers are really anxious to know the answer to your question. Those of you who have the populations might want to propose this and get funded, because this is a critical question. Right now there is the bias you describe against using them in children. It's not without merit, but certainly the evidence isn't widespread and huge, and there's certainly work to be done. We're doing a study at the moment, speaking of that, and the benefit is certainly as large as it is for the adults in my lab. And we're now getting real-world data on value. Certainly if you are a manufacturer you would love these things to be effective for children. But again, it's a data issue. We just need more numbers.

Audience Comment:

So I just wanted to comment. I think a big issue in why clinicians don't like fitting directional technology on kids is that kids — especially younger kids — don't always turn to face the source, and there are data showing that. I just wanted to throw that out there: that's a big concern clinicians and researchers have about directional microphones. Not that it's an insurmountable problem, but those are the data that I've heard.

Audience Question:

My question is, you’ve shown the visual benefit with postlingual deafness. Is the same true for prelingually deaf individuals?

There, of course, the speech perception baseline is almost zero, so it will certainly turn out differently. I don't have an answer to your question; it must be different. I was surprised, in the data on lip reading that we've done, that our implant patients were no better than my undergraduates at ASU. The literature suggests that if you've been hearing impaired or deaf for a long time, you are a better speech reader. But that's just not what we came up with. Now, my guess is that the ASU undergraduates aren't listening to me, and the only information they get is lip reading, so they become very good lip readers. And so I think that's why there's no difference between my undergraduates — that's a joke — and the implant patients. We were very surprised by that. With our materials, the implant patients were simply no better than undergraduates at lip reading.

Michael Dorman
Arizona State University

Presented at the 26th Annual Research Symposium at the ASHA Convention (November 2016).
The Research Symposium is hosted by the American Speech-Language-Hearing Association, and is supported in part by grant R13DC003383 from the National Institute on Deafness and Other Communication Disorders (NIDCD) of the National Institutes of Health (NIH).
Copyrighted Material. Reproduced by the American Speech-Language-Hearing Association in the Clinical Research Education Library with permission from the author or presenter.