The following is a transcript of the presentation video, edited for clarity. Click the PDF icon to download the presentation slides.
I’m Ann Geers. I used to be at Central Institute for the Deaf; I now live in North Carolina but work out of the University of Texas at Dallas. My colleagues on this NIH project are Johanna Nicholas at Washington University and Emily Tobey, who is at UT Dallas. This is not the CDaCI cohort that John was talking about earlier; this is a separate sample, as you’ll see pretty quickly. I have no financial relationships to disclose.
We have a longitudinal design that’s a little bit different than the kind John was talking about.
These are kids who were implanted between twelve and thirty-six months of age, so they’re all early-implant kids. We got a language sample from seventy-six of them when they were three and a half years old, brought them back for another language sample at four and a half years of age, and then we didn’t see them again until an average age of ten and a half, when we got sixty of them to come back for the data research camps that we do. That’s sort of our MO for a lot of the research we’ve done: we run camps, bring kids from all over the country for three or four days, test them, and entertain them, taking them to Six Flags and things like that. It was a lot of fun. So these kids are not from any one geographical location; they come from twenty-seven different states and one Canadian province, not representing any particular program.
I’m going to be talking about those sixty kids who completed the entire battery: thirty boys and thirty girls (it just turned out that way), all deaf from birth, all in auditory-oral education. We’ve looked at differences between auditory-verbal and auditory-oral and don’t find any, so we put them all together; they all come out of early oral-option or auditory-verbal settings. They were implanted between one year, zero months and three years, two months of age, from 1998 to 2003. Half of them received a second implant, somewhere between forty-six and a hundred and nineteen months of age.
This gives you an idea of how their education changed between four and ten. At age four, seventy-eight percent were in special education, usually at an option center; twelve percent were fully mainstreamed by that time; two percent were partially mainstreamed; and eight percent were what we’re calling home schooled, meaning they were still home with their moms but with an auditory-verbal therapist seeing them on a regular basis. By age ten this had changed dramatically, as it has for a lot of these implant kids: only two percent were in special education, eighty-five percent were fully mainstreamed, eight percent were partially mainstreamed, and five percent were still being home schooled with their mom.
This is a select sample; it’s not an unselected sample by any means. Most of these moms had a college education.
We asked the moms at the follow up camp about their communication mode; this is a mean age of ten. The vast majority of these kids were rated as using speech with ease, a few used speech with difficulty, and one child used occasional signing.
They wore their implant all day, every day. Very few, two kids, wore a hearing aid in the other ear. Their use of the device either increased or remained the same between four and ten years, and the benefit from the device was described as very useful in most cases.
Now, this is a list of speech processors used at some point by kids in this study. The generation of processor turns out to be important in several of the research studies we’ve done; this processor rating indicates what generation of processor it was. So, at three and a half, for their first implant, nineteen of the kids had an Advanced Bionics PSP, some had a Med-El Tempo, and twenty-eight had a Nucleus Sprint. By the time they were ten, most of them had moved up to Nucleus Freedoms, and for their second device most were using the most recent generation, generation-four processors. I’m going to be referring back to that later, so I wanted to give you an idea that this is not a static thing where somebody receives a processor and stays with that particular device the whole time.
Study 1: Language Emergence
Study 2: Language Delay
Now, this is the study I’m more interested in sharing with you today. It’s based on the same sample of sixty kids, and we were first looking at what proportion of these preschool language delays persist and what proportion resolve over time.
So we can think of these in terms of four groups; those of you who read the SLI literature are pretty used to this sort of thing. We have kids who are delayed at four and a half, split into two groups: those who are within normal limits at ten, whom we’re calling late language emergence, and those who are still delayed at ten, whom we’re calling persistent language delay. And then we have kids who were already within normal limits at four and a half, and two things can happen there: they can either still be within normal limits at ten, which is the normal language development group, or they can have regressed and be delayed at ten, the late emerging delay group.
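The two-by-two grouping can be sketched as a simple classification. This is an illustrative sketch only: the cutoff of 85 (one standard deviation below a mean of 100 on a standardized language measure) is a hypothetical stand-in for the study’s actual within-normal-limits criterion, and the function name is my own.

```python
def classify_language_group(score_4, score_10, cutoff=85):
    """Assign a child to one of the four longitudinal groups.

    Scores are standardized language scores (mean 100, SD 15).
    The cutoff of 85 is an illustrative placeholder, not the
    study's actual within-normal-limits criterion.
    """
    delayed_4 = score_4 < cutoff    # delayed at age 4.5?
    delayed_10 = score_10 < cutoff  # delayed at age 10?
    if delayed_4 and delayed_10:
        return "persistent language delay"
    if delayed_4:
        return "late language emergence"
    if not delayed_10:
        return "normal language development"
    return "late emerging delay"
```

For example, a child scoring 70 at four and a half but 95 at ten would fall into the late language emergence group.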
Here are the scores at four and a half, re-plotted in terms of these groups. Here’s the average range for normal-hearing kids, and we see this group here that we’re going to call the normal language emergence group, all scoring at four well within the range they should be scoring in; the rest of these kids are still below. So let’s look at each individual subject now, plotted by their scores at ten on the CELF.
We see that we only really have three groups here and they’re fairly equal in size. We have those kids who continue to be delayed, we have these late language emergence kids who were delayed at four and a half but are now within the average range, and then we have our normal language emergence group who have stayed within the average range between four and ten.
So, we’ve got about three equal groups, which is nice for our statistics, and now we’re going to begin to pull apart what factors differentiate groups of children with normal language emergence, late language emergence, and persistent delay. What we’re most interested in, of course, is differentiating those kids with late language emergence and those kids with persistent delay because we can’t tell the difference here when they’re four and a half from their language scores, and we’d like to know who’s going to stay delayed and who’s going to catch up.
So, I’m going to be showing you some slides; I’ll walk through this one so that when you look at the rest they’ll all be laid out the same way. These look at various characteristics of the three groups: normal language emergence, late language emergence, and the persistently delayed group. There’s an overall F and p to indicate whether there was a significant difference, and if one group is significantly better, it’s shown in red, based on the post hoc comparisons.
So you can see that when we look at age at first hearing aid, which also corresponds to age at first educational intervention, they’re very, very similar. The normal language emergence group got a hearing aid, got intervention, and got their first implant at a significantly younger age; but what’s important to us is that this was not significant for the kids who were delayed at four. We can’t tell from their age at implant, age at intervention, or age at hearing aid which ones are going to catch up and which ones are going to continue to be behind.
Now, this is an interesting variable that did differentiate the groups: the percent of children who got their first implant in the left ear. I don’t really understand this, but forty-seven percent of the kids with persistent delay got their first implant in the left ear, compared to thirteen and twenty-one percent in the other groups.
Mother’s education level: no difference among the three groups. Aided pure tone average pre-implant: no difference. Grade first mainstreamed: the normal language emergence group were mainstreamed significantly younger, but that’s a result of their having normal language at four, probably not a causal relation. Gender: no difference, though it’s a little odd, if anything, that sixty-three percent of the persistently delayed group were female; that’s not a significant difference. So when we just look at these background characteristics, we don’t see much differentiating the two language-delay groups, the one that will catch up and the one that won’t, except for this interesting statistic about left-ear implantation.
Just to show you, because age at implant has been such a powerful variable in all of our research, I had someone plot implant age in months for the normal language emergence, late language emergence, and persistently delayed groups. You can see that, yes, it’s true that the normal language emergence group has a lot of kids who were implanted below eighteen months of age. But for the other two groups, age at implant tells us nothing about who’s going to recover in language and who isn’t.
So, I’m going to look back at what we had. Remember, we started testing them at three and a half; that was our first language measure. We did a parent-child conversational interaction, a thirty-minute video-recorded session with a standardized transcription of both language and speech sound production, and we had the parents fill out a CDI form.
From the CHILDES analysis of the language sample we got the number of different root words, mean length of utterance in words, number of bound morphemes, and number of different bound morphemes. For speech we used CASALA, another computer-based system, to match each phoneme produced by the child against the target phoneme, derived from the first hundred words of the language sample. From that we got the number of different vowel sounds and the number of different consonant sounds, and Emily Tobey created something called a weighted developmental score, in which each sound is multiplied by the age at which it appears in a normal-hearing population. So a sound that the normative literature shows as present by age two is multiplied by two, and so on, to give a weighted idea of the maturity of the speech sounds being produced by these kids.
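The weighted developmental score idea can be illustrated with a minimal sketch. The acquisition-age table below is a placeholder I made up for illustration, not the published norms the study used, and the function name is my own; the point is only the arithmetic: each distinct sound in the child’s inventory is weighted by the age at which normal-hearing children typically acquire it, so later-acquired sounds contribute more.

```python
# Placeholder acquisition ages (years) for a few English consonants.
# These are illustrative values, NOT the norms used in the study.
TYPICAL_AGE = {"m": 2, "b": 2, "p": 2, "d": 2, "s": 4, "r": 6}

def weighted_developmental_score(inventory):
    """Sum the typical acquisition age over each distinct sound
    the child produced; sounds not in the table are skipped."""
    return sum(TYPICAL_AGE[s] for s in set(inventory) if s in TYPICAL_AGE)
```

Under this sketch, a child producing only early sounds like /m/ and /b/ gets a lower score than a child whose inventory also includes later-acquired sounds like /s/ and /r/.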
And then the CDI, as most of you know, just has the parent rate which vocabulary words they have heard the child produce more than once; there’s a list of words they check off, a list of irregular words, and ratings of sentence complexity. So it’s a parent judgment.
Here’s the same kind of graph, but now we’re looking at three-year-old speech and language characteristics. Again, what you notice is there’s lots of red for the NLE group. It’s easy to separate out the kids who have normal language by preschool; they’re better at everything. But these early grammar measures we got from the language sample do not significantly differentiate our late language emergence and persistent language delay groups. Neither do the CDI ratings; they’re in the right direction, but the variability is so huge that they don’t reach significance. However, the early speech measures do. Here we see that the children who are going to recover, who are going to be normal by the time they’re in the elementary grades, started out with more different vowels, more different consonants, and a higher weighted developmental speech score. I’ll tell you later what I think about this, but I’m not sure why this is the case.
We measured lots of things at ten and a half, but I’m going to tell you about some of them: nonverbal intelligence via the WISC perceptual reasoning index; duration of implant use by the time we did the follow-up testing; the kind of technology they were using (remember, I told you that was going to come back to be important); whether they used two implants or not; their cochlear implant aided pure tone average threshold, an overall measure of audibility, of how soft a sound they could hear; and a phoneme perception score from the Lexical Neighborhood Test.
So, here, no difference in performance IQ; that’s a variable that has really helped us predict language in the past, but when we’re talking about differentiating among these three groups, no significant difference. Duration of implant use at second test doesn’t tell us much. Use of most recent technology: it is true that the kids with normal language emergence had a much bigger tendency to have upgraded to the most recent technology available at retest, whereas only forty-two percent of the kids in the persistent language delay group had upgraded their processor. Bilateral device use did not give a significant chi-square, although the percentage using bilateral devices in the normal language emergence group was sixty-three percent compared to only thirty-seven percent in the persistent delay group; not significant.
Here are the two variables measured at age ten that did differentiate between these two groups. One is cochlear implant aided pure tone average threshold, which was significantly higher for the persistent language delay group. They just didn’t hear softer sounds; they were responding at almost twenty-seven dB, whereas the kids in the normal language emergence group were closer to twenty dB or below. Audiologists have not considered that to be as critical as I think we’re beginning to think it is for the perception of soft speech. And finally, the LNT phoneme score, a speech recognition score; we’re not looking at word scores here, because those are so influenced by vocabulary, but at phoneme scores: ninety-four percent correct in the normal language emergence group compared to seventy-eight percent for the persistently delayed group. So the persistently delayed kids are not hearing as well; sound has to be louder for them to hear it, and then they’re not perceiving as many phonemes.
Normal vs Late Language Emergence vs Persistent Delay
And finally I want to talk a little bit about the academic consequences for the kids who remain in this persistent delay group. One of the measures we have used historically in deaf education to look at how well a deaf child is doing as he progresses academically is his verbal-performance IQ gap: the performance IQ representing his potential, the verbal IQ representing the language impairment due to hearing loss, so how close is he to catching up? So we want to look at the size of that gap. We’d also like to look at phonological decoding skills for reading, to see if they’re at age-appropriate levels, and at reading comprehension skills.
We used the Wechsler to look at the verbal-performance gap, and the Woodcock Reading Mastery Test to look at basic skills through word identification and word attack, which is basically phonics: you’re reading nonsense words, so you’re looking purely at phonics skills. Reading comprehension on the Woodcock looks at both word and passage comprehension, which is much more involved with syntax and with the global aspects of language. It’s a similar kind of graph; we’ve already seen that performance IQ doesn’t differ significantly, but look at the size of the verbal-performance gap in that persistently delayed group.
Now, the gaps for the normal language emergence and late language emergence groups are very close to normal, which should be expected to be zero, and it is very rewarding to see that two-thirds of these kids have, by the elementary grades, for all practical purposes closed that verbal-performance gap. For those of you old enough to have been in deaf education a long time, that’s a pretty phenomenal thing. But a twenty-four-point verbal-performance gap, that’s huge; that’s what we used to see in the old days, and that’s what we’re still seeing in this persistently delayed group. Now, their basic reading skills are not different. Yes, the normal language emergence group is very, very good, but between the late emergence group and the persistently delayed group there’s no significant difference, and both are within the average range. It is in comprehension that we’re seeing the biggest consequences, and that’s related to the verbal-performance gap. These kids aren’t living up to their potential, and they’re not reading at their potential.
It really is exasperating to see kids who in every other way have the advantages that so many kids in this group have, and to try to understand why some of them just aren’t getting there. And that leads us to thinking about language impairment. We’re just at the beginning of trying to explain why some of these kids are in the persistent delay group; we still don’t know, for example, whether some of them would close that gap as they gain experience. We’ve seen kids close the gap between four and ten; are they done? If we looked at them again at twelve or fifteen, would they continue to improve? I suspect not, but we need to get those data.
Does specific language impairment underlie persistent language delay in some of these children? And how can we distinguish the kids whose persistent delay is due to some auditory phenomenon (left-ear implant, poor thresholds) from those in whom it’s due to some other mechanism associated with SLI? We know that the group is too big; it’s thirty-three percent of this population, and that’s too big to all be SLI, but some portion of them we should be able to identify as having specific language impairment.
And we’re interested in following up this idea that early speech production seems to reflect long-term language problems. Can we find a way to turn that into a reasonable assessment tool? Because if we know at three and a half who’s going to have problems, we can begin to develop intervention methods to address them. So that’s where we are right now. Those of you in this room who know a lot about specific language impairment should know that we have looked at the natural candidates: digit span, novel word learning, nonword repetition; we’ve looked at a lot of those traditional measures. The problem is that they are very auditory-related, and they tend to be deficient in both the late language emergence group and the persistent delay group, so that’s not where we’re going to find our diagnostic information. But thank you for listening to all these data.
Questions and Discussion
I’m Uma Soman, Vanderbilt University. Thank you so much for this presentation. It’s very nice to finally make sense of what I was seeing in the classroom: who’s doing well, who’s catching up, and who’s going nowhere. So thank you for creating these groups and sharing this with us. My question is related to the late language emergence children. Clearly something clicked, something happened; they caught up. Do you think it is mostly related to the factors you described? Or were there any specific interventions that you want to investigate further as potential contributors to this catching up?
Can I ask one more question? You said the students went from seventy eight percent being in special education to two percent being in special education, so when they were in mainstream settings, did they still receive some form of special education services? Or were they completely off IEPs?
Hi, Laura Dilley, Michigan State University. Thank you for a fascinating presentation. I wonder to what extent you feel the quantity or quality of speech and language input to the children might account for some of the variability that you’re seeing in outcomes.
Hi, I’m Eileen Haebig, and I’m at UW Madison. My first question is, and I’m sure you’ve looked at this, but for the language scores that you presented, the standard scores were based on chronological age, right? So if you calculated a standard score using their hearing age, did you see anything with that, especially with the late language emergence group?
Yes. That would be reflected in their duration of implant use, and that was very similar across the groups, so no, that probably wouldn’t have done it.
And then, just from the last comment that you made about maternal input: you talked about the grammatical level of input, but did you also look at frequency and different types of input?
Well, we looked at number of words, number of different words, number of bound morphemes, number of different bound morphemes, and MLU. Those were the variables that stood out. We did it for both the parent output and the child output and tried to look at how closely they were matched; the closer the match, the better the progress, regardless of whether the children were at a low language level to begin with or a higher one.
I’m Hope from Vanderbilt, and I have a question: why do you think the left ear was important? That just seems so weird.
Well, there’s literature out there. It seemed pretty weird to me too. There is literature with adult implant patients showing better speech perception for right-ear than left-ear implants; there are four studies out there like that. But I have not seen any studies that show the effect on language, and we really need to replicate all of these results. I’m talking to John about maybe replicating this with a different, broader sample, because I just don’t know whether this is a sporadic result. But, you know, there are people who might believe there is some brain lateralization important for language that may be affected. I don’t know.
So it might go either way, but it might be just a spurious statistical thing.
It might be just a spurious thing, but I would say that nowadays most surgeons, all else being equal, implant the right ear; so there’s some evidence to indicate a preference for the right ear, all other things being equal. But back when implants first started, I remember the surgeon would say to the parents, “Look, both ears are very similar. Which one would you like your child implanted in?” I don’t think that happens any more, but that was a long time ago.
Hi, Areej Asad from the University of Auckland, New Zealand. I would just like to know more about the CASALA results, because it’s speech. Did you analyze it according to place, manner, and voicing, or just in general? Because I noticed it’s connected to the speech sample.
Emily Tobey is doing those more intricate analyses. So far this is what showed significant differences: overall consonants-correct scores. But we’re talking about three-and-a-half-year-old, very deaf kids, so they don’t have a lot of phonemes to begin with. And when she tried to break it down into categories (Peter Blamey’s program does let you break it down into all kinds of categories), there would be so few exemplars that it was very difficult to see significance. So, no, we have not seen that, but she’s still working on it.
Yeah, because I use the CASALA program, I’m doing my doctorate, and I know that for some sounds the results come out in comparison with adult production, not according to the child’s phonetic inventory. So if you’re looking at a child’s phonetic inventory, you need to count it yourself, not rely on the program. If you’re looking at the phonetic inventory rather than the percentage of consonants correct, you need to count the child’s own productions, regardless of whether they’re correct or wrong.
Oh, OK. Yes, but there are two numbers you can get: one is a correctness score and one is just an occurrence count. We’re using the correctness score, because that is what gave us the significant differences.
Another question: it’s really interesting that you suggested, for the future, a kind of speech production assessment to give us more information. My question for you is, what about early speech intervention, like early stimulation of the sounds we already know the majority have problems with? As a speech-language therapist, I know we don’t really have a specific evidence-based speech therapy approach that we can use now. What do you think about that?
Well, speech and language are so intertwined, and in oral and auditory intervention you’re putting in speech and trying to get out the best approximation you can; you’re working on speech and language simultaneously, not working on speech sounds in isolation as you might do for an older articulation case, for example. And I don’t see pulling those apart for intervention.
Oh, no, no, I do agree with you; we need both of them. But what I mean is that there’s not that much concentration on the late-acquired sounds that we know kids with hearing loss have problems with. My idea is, what about developing a new evidence-based speech therapy approach, based on the ones already in the literature for children with speech disorders, and implementing it alongside the auditory-verbal work? That would help them out.
Well, it’s an interesting idea. As young as three, you think? These very young kids doing an articulation intervention approach? I don’t know; I’d have to ask some of the teachers what they think about that. It’s an interesting idea.
Hi, Caitlin Imgrund from the University of Kansas. I was very fascinated by your talk, so thank you so much. I’m wondering about the demographics of your participants. As you mentioned, the SES of the sample was quite a bit higher than what we would expect to see in the general population, and although it’s amazing that so many of these children were able to move to within normal limits, your normal language emergence group, it did seem to me that, given that your SES was so high, if those children had not had hearing loss we might expect their average to be a little higher than the normal distribution. So is it possible that even the children who are moving to within normal limits are still not tapping into their true language potential? If you could speak to that, that would be wonderful.
Hi, Ann, thank you very much. This is Debbie Moncrieff from the University of Pittsburgh. As you may remember, I focus on normal-hearing children and auditory processing, and one of the areas I specialize in is the asymmetry between the two ears; in the majority of the population the asymmetry shows up as a left-ear deficit. We now have electrophysiological evidence that in the weaker-ear pathway there is an increased gain in the neural signal, and that that increased gain is possibly leading to a loss of synchronization and of speech perception and clarity.
I wasn’t going to come up, and then you started to ask: is it us? I don’t think it is you. I’ve seen you present before, you came to Pittsburgh, and we’ve presented here together. This subgroup of your population has always intrigued me, so I’m now working with a group at the DePaul School in Pittsburgh to start looking at this auditory processing phenomenon in children with hearing loss as well. So it could actually be in the wiring. I think it’s genetic; I don’t think it’s acquired, although I have colleagues who think it is. But it may be in the pathway, and there may be something inherent that is preventing those speech and language processes from being accessed.
I’m Susan Steinman from New York Eye and Ear. Thank you very much; I learned a lot today. You mentioned early speech production as a potential predictor of later performance, and I was just curious what other potential predictors you were interested in investigating. You mentioned verbal working memory not really being a good one, but what about nonverbal working memory, and what else was there?