The following is a transcript of the presentation video, edited for clarity.
LAUREN: Hi everybody, my name is Lauren Calandruccio. I'm an associate professor at Case Western Reserve University, and I'm the chair of the 2019 Hearing Research Symposium, and no, I did not choose the starting time of 7:30 a.m. But this is for us, so we might as well get busy. So it is my absolute pleasure to introduce Dr. Konstantina Stankovic. Dr. Stankovic is an esteemed medical doctor and researcher, having earned both her MD and her PhD from Harvard Medical School and the Harvard-MIT Division of Health Sciences and Technology. Dr. Stankovic is a professor in the Department of Otology and Laryngology at Harvard Medical School. She is the Chief of the Division of Otology and Neurotology at Mass Eye and Ear Infirmary, where she holds the Sheldon and Dorothea Buckler Endowed Chair in Otolaryngology. Dr. Stankovic is also a clinical associate at Massachusetts General Hospital. Dr. Stankovic is an ear and skull base surgeon with a research program focused on sensorineural hearing loss. She is a past president of the American Auditory Society, a fellow of the American Neurotology Society, and a fellow of the American College of Surgeons. She is on the advisory board of the Acoustic Neuroma Association. She is an ad hoc reviewer for over 20 different academic journals, and serves on the editorial boards of 3 top-tier journals in our field. Dr. Stankovic is a grant reviewer not only for research being done here in the U.S. but also all over the world. Her research is funded by the NIH and several other funding organizations and agencies. Dr. Stankovic has delivered lectures and fascinated audiences across the globe, 15 countries to be exact. She is a skilled surgeon, a compassionate physician, and an engaging lecturer. Most of all, she is a brilliant researcher. We are extremely lucky to have her here with us this morning as our invited guest for the 2019 ASHA Hearing Research Symposium. Thank you, Dr. Stankovic, for accepting our invitation and for joining us here in Orlando. Without further ado, please join me in welcoming Dr. Stankovic.
KONSTANTINA STANKOVIC: Thank you so much, Lauren, for a very kind introduction. So I am indeed delighted to be here today and give you an overview of our approach to enable precise diagnosis and therapy of sensorineural hearing loss. And to put things in context, we always start with clinical problems, and then we use the tools of basic science and technology to try to solve them, with the ultimate goal of always going back to the patients to improve their care. And I will illustrate our approach using 3 recent research projects from the laboratory. The first one will be focused on optical imaging of the inner ear to enable cellular-level diagnosis. The second one will focus on vestibular schwannoma, which is an intracranial tumor that typically causes hearing loss and tinnitus, and we'll talk about the role of inflammation in mediating both the hearing loss and tumor growth. And then I'll finish by talking about liquid biopsy of the inner ear to enable precise diagnosis and guide assessment of hearing loss.
So to put things in perspective, we all know that hearing loss is the most common sensory deficit across the globe. It currently disables nearly half a billion people. And this is incredibly costly globally. The estimated annual cost of unaddressed hearing loss is about $750 billion. And things are not getting better. The World Health Organization estimates that over a billion young people are at risk of hearing loss, primarily due to recreational noise exposure. And as you can see from the diagram at the bottom, hearing loss affects 16% of the workforce. The number doubles by retirement age. And then it affects nearly three-quarters of octogenarians. So given the magnitude of the problem, it's astounding that today, in the 21st century, we cannot biopsy this organ, and we cannot see cells inside it to really tell people why they're going deaf and what is going on at a cellular level. And this slide reminds us why that is. This is a micro-computed tomography scan of a human temporal bone, and you can appreciate that the cochlea is this coiled structure, well encased in the densest bone in the body, located deep in the base of the skull. So for these reasons, the only source of information about the cellular basis of human deafness today comes from studying autopsy specimens. So that means that when somebody dies, then we can harvest their temporal bones. And then through a laborious process that takes about a year, where the bone has to be decalcified, embedded, sectioned, and stained, we can generate sections that look like this. So this is a cross-section of the human cochlea, and you can see that in cross-section it's similar in size to Lincoln's upper face on a penny. So it's indeed humbling to realize how small and delicate this organ is.
And now if we zoom in, then we can appreciate that there are many different cell types in the inner ear, and they include not only hair cells and neurons, which we all know and study with passion, but also some 30 other different cell types, and loss or damage of any one of them can cause hearing loss. So that really illustrates the need to know what's going on. It's really humbling to realize that today we know the histopathologic findings in only 20 out of 200 deafness-causing genes. So it means we know very little about what's happening in a human inner ear. And when we study these histopathologic sections, that's the end stage of degeneration, because we get them from people who die of unrelated causes decades after they were diagnosed with hearing loss.
So what is state-of-the-art imaging of the inner ear today? On the top is an axial cut from a patient who underwent a CT scan, a computed tomography scan. So this is based on ionizing radiation, and here the bone is white, and you cannot see cells inside the inner ear, but you can appreciate that there is something in the inner ear, because there are these gray blobs. The MRI scan shown at the bottom, and this one is heavily T2-weighted, which makes fluid look bright, shows you that there are fluid spaces, and you can appreciate that there are 2 of them on this scan; however, in reality we know that there are 3 fluid-filled spaces in the inner ear. But again, you can't see cells inside it. There are research tools that provide much higher resolution, including a micro-computed tomography scan on the right, and synchrotron radiation phase-contrast imaging at the bottom. However, these tools require much higher doses of radiation, so at this point they are not translatable to living patients.
But we really need to know what's going on. And that's the case even if we know the genetic cause of deafness, because the degenerative changes are typically progressive. And that's what we have learned from animal models. Animal models of deafness have been invaluable in teaching us what may be happening in the human inner ear. And so if we look at this mouse model, it models a dominant-negative connexin 26 mutation, and you know that mutations in connexin 26 are thought to account for up to 50% of non-syndromic sensorineural hearing loss in certain ethnic groups, including the Caucasian group. And if you look now at this mouse at 2 weeks of age, shown on the left, that's the equivalent of a human newborn, because mice are not born hearing, unlike humans. And if you compare that to a mouse that's 7 weeks old, the equivalent of a human adolescent, you can see that there are very dramatic differences. Although it's the same genetic mutation, you can see that at birth the organ of Corti is there, but by adolescence it's gone. And likewise the spiral ganglion neurons, which are there at birth, are decimated by adolescence. So if this translates to humans, then it means that if we see a patient with a known genetic mutation and image their ear when they're newborn versus when they're 2 versus when they're 20, we may see very different cellular-level pathology, which would really dramatically influence treatment options. And audiograms, while they have been the workhorse of diagnosing hearing loss, have limitations. They're not perfect. And in particular, they don't uniquely define cellular-level pathology.
And I'm showing that here on audiograms from 2 patients who were in their 50s when they died of completely unrelated causes, and they had seemingly very similar-looking patterns to their hearing loss. It starts out as normal, and then there is a steep drop at about 2 kilohertz. However, if you now look in their inner ears, they look very different. And we can quantify cellular damage by counting different cell types along the length of the cochlea. And you know that in the cochlea everything is organized according to frequency, and there is a place-frequency map. So that's what's plotted at the bottom, and these are the so-called cytocochleograms, where you plot whether cells are there or not, and black means the cells are missing. And you can see that the patient on the left had a normal complement of both hair cells and neurons, which is strikingly different from the patient on the right side, who had significant loss of hair cells in the region of maximal hearing loss, and whose neuronal damage spread to both higher and lower frequencies. This is just one example, but there are numerous examples that we have looked at from our temporal bone collection. And all of them together highlight the need to really see what is going on in the living human inner ear at a substructural, microanatomic level.
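As an aside, the cochlear place-frequency map referred to above is commonly approximated with Greenwood's function. The sketch below is illustrative only, not taken from this talk; it uses the widely cited human parameters (A = 165.4 Hz, a = 2.1, k = 0.88), with position expressed as the fractional distance from the apex.

```python
import math

# Sketch of the cochlear place-frequency (Greenwood) map with commonly
# cited human parameters; x is the fractional distance from the apex
# (0 = apex, 1 = base). Parameters are assumptions for illustration.
def greenwood_hz(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at fractional place x."""
    return A * (10 ** (a * x) - k)

def place_from_hz(f, A=165.4, a=2.1, k=0.88):
    """Inverse map: fractional distance from the apex for frequency f."""
    return math.log10(f / A + k) / a

for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f} from apex -> {greenwood_hz(x):8.0f} Hz")

# A drop near 2 kHz, as on the audiograms discussed, maps to roughly
# the middle of the cochlear spiral:
print(f"2 kHz sits at x = {place_from_hz(2000):.2f} from the apex")
```

This kind of mapping is what lets a cytocochleogram plot cell loss against frequency rather than raw distance along the spiral.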
And in fact, I've just come from NIH, where for 2 days we held a workshop on seeing inside the living human inner ear. And we all agree that this is a huge unmet medical need. Because in the 21st century we still use very descriptive terms. We just call it sensorineural hearing loss; it's an umbrella term. We call it tinnitus; it's a huge umbrella term, where we know for sure that there are different subgroups of people who fall into that category. Even if we look at vestibular dysfunction, we call it Meniere's disease, but sometimes we have no idea how to differentiate that from vestibular migraine. But that has important consequences in terms of treatment, because it's not just what we call it. For example, in the case of Meniere's disease, when these attacks are disabling and people just can't take it anymore, our approach is super barbaric. Our approach is to destroy the ear. We give it gentamicin, to destroy vestibular cells. However, if we do that to a patient who indeed has vestibular migraine, and not Meniere's disease, we can incapacitate them. So all of these examples highlight the need to see better inside the ear.
And for that reason, we have looked into optical tools. And optical tools are advantageous because there's no radiation involved, and they provide higher spatial resolution. And there is this technique called Optical Coherence Tomography, or OCT. And what it means is that you start with laser light, and then you split it into two beams. One is the reference beam, and the other beam goes to the sample, reflects back off of the sample, and is then recombined with the reference beam, which generates an interference pattern. And that interference pattern allows you to infer the structural details of the sample, analogous to ultrasound imaging.
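To make the interference idea concrete, here is a minimal, idealized sketch of spectral-domain OCT depth reconstruction (this is not the speaker's actual system; the reflector depths, reflectivities, and wavenumber range below are made-up values). Each reflector in the sample produces a cosine fringe across wavenumber k, and a Fourier transform over k localizes it in depth.

```python
import numpy as np

# Toy spectral-domain OCT: ideal reflectors, no noise or dispersion.
n = 2048
k = np.linspace(6.0e6, 8.0e6, n)             # wavenumber sweep (1/m), assumed
reflectors = [(120e-6, 1.0), (300e-6, 0.6)]  # (depth m, reflectivity), assumed

# Cross-terms of the interferogram: one cosine fringe per reflector,
# with fringe frequency proportional to the reflector's depth.
spectrum = sum(r * np.cos(2 * k * z) for z, r in reflectors)

# FFT over k gives the A-scan; a fringe cos(2kz) has frequency z/pi
# cycles per unit k, so depth z = pi * f_k.
a_scan = np.abs(np.fft.rfft(spectrum * np.hanning(n)))
z_axis = np.fft.rfftfreq(n, d=k[1] - k[0]) * np.pi

# Report the strongest peak near each true reflector depth
for z_true, _ in reflectors:
    i = int(np.argmin(np.abs(z_axis - z_true)))
    j = i - 20 + int(np.argmax(a_scan[i - 20:i + 20]))
    print(f"reflector at {z_true*1e6:.0f} um -> A-scan peak at {z_axis[j]*1e6:.1f} um")
```

The same principle underlies real OCT: a broader bandwidth (wider k sweep) gives finer depth resolution, which is one of the two key innovations mentioned next for micro-OCT.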
Our collaborator Gary Tearney at Massachusetts General Hospital developed a high-resolution version of OCT which we call micro-OCT, Micro Optical Coherence Tomography. And basically, the way that's accomplished is through two key innovations. One is using a very broad bandwidth laser, and two is shaping the sample light so that it provides a tight focus over a long distance. So we applied that to a guinea pig inner ear, where we took out the inner ear from the animal, so it was (inaudible words) but still three-dimensionally intact, and the cochlea was not decalcified and was not stained in any way. We made a little cochleostomy, a little opening in the bone of the cochlea, and are looking inside. And this is the work of a graduate student, Jane Iyer, and you can appreciate that there is the basilar membrane, this is the scala tympani. On top of the basilar membrane there are recognizable structures like the region of the inner hair cells and outer hair cells. And then you see these two dark tunnels. They are the tunnel of Corti (TC) and the space of Nuel (SN). Do you guys know what the space of Nuel is? It doesn't receive that much attention in anatomy courses. But it's a very cool space. It's the space between the outer pillar cells and outer hair cells. And what I'll show you now is a movie where we're flying through this space. So, if we now fly through this space and (inaudible words), then if you pay attention to the bottom here, you'll see nerve fiber bundles traversing this space. And you can see them here, these white tubular structures.
And here we are flying through this space for several hundred microns, and then if we digitally section at the level of the basilar membrane, then you can see these radial fibers, the white fibers, and based on their direction and caliber, we believe that these are medial olivocochlear efferent fibers, which, as you know, supply outer hair cells to modify their function, including their motility, and by doing that modify the dynamic range of hearing.
We can also see individual cells, including hair cells. So here we are comparing images obtained using micro-OCT, shown in grayscale, to histopathologically processed specimens that were microdissected, immunostained, and then imaged using confocal microscopy. So there is a drastic difference in the amount of processing that's involved in these two imaging modalities. For micro-OCT we pretty much did nothing, and the tissue was three-dimensionally intact, whereas for the confocal imaging the processing was maximally invasive. And yet we can see individual hair cells with micro-OCT, and you can also appreciate that on this cut, where each one of these little humps is a hair cell.
So that is at baseline, in a normal ear. But is this useful for diagnosing pathology? That's the critical question clinically. We want to be able to tell people whether there is a defect. To do these experiments, we used mice, and we exposed them to loud noise levels, and we compared the normal mouse versus the noise-traumatized animal, and there is a dramatic difference. You see that in a normal mouse there is the organ of Corti sitting on the basilar membrane, and you can appreciate that there are different cell types. However, in the noise-traumatized animal, all of that is gone. You just see the basilar membrane with scar tissue on top of it. We call that flat epithelium. Well, that's really important to know if you are considering gene therapy for hearing restoration where the gene therapy is based on transdifferentiation of the supporting cells into hair cells. And that's the current gene therapy trial that's happening in the United States. It's sponsored by Novartis; it's been going on for many years. We still don't know whether it's working or not. Nothing has really been published. But if it doesn't work, one of the reasons why it may not work is that it was given to the wrong people. Because if it was given to people who don't have supporting cells to transdifferentiate, it won't work. So then we'll incorrectly conclude that gene therapy's not working, whereas it was just given to people who wouldn't benefit to begin with. In modern medicine, there is a huge emphasis on patient stratification. Don't give the same drug to everyone. This is very prominent in oncology, where they really use toxins to get rid of cancer, and people can have dramatic side effects, as you know. So you don't want to put everybody through that if you know upfront that they won't benefit. Or you want to tailor it based on their specific genotype, so that they can benefit from it.
And we need to be doing the same for hearing loss and vestibular dysfunction so that we can really help people and make correct conclusions about whether an intervention is working or not.
So we are generating these micro-OCT images of the inner ear, and what we are working on now is actually incorporating this technology into a tiny probe that we could fit into the living human inner ear, to see for the first time at a cellular level what is happening in the living human. That is technically an incredibly difficult problem, because we're talking about a super small probe that's only 500 microns in diameter, and it has to curve around this tiny space, so the radius of curvature is very small. And you are bending this optical fiber, and it's still supposed to work and give you useful information. But we do have some preliminary data that looks very promising, so we are pursuing that, and I hope to have an update for you in the not too distant future. But in the meantime, we have to have a way of validating what we see. Because if we start getting these images, how do we know that we are right in calling something a hair cell versus calling it a fibrocyte, or something else? And I've already told you that doing standard hematoxylin and eosin based histology is super labor intensive and lengthy. It takes a year. That's very impractical, because if we want to test this on hundreds of human temporal bones to develop imaging criteria for diagnosing certain cell types, that would take a hundred years and several PhD students, and is really not practical. So we looked at other techniques, and then came across synchrotron radiation phase-contrast imaging, which is a technique that uses very high energy photons. And the way these photons are generated is by accelerating electrons; it's pretty much like a cyclotron, as in high-energy particle physics. So, it uses a synchrotron facility where these electrons are really, really accelerated, and as these electrons, moving nearly at the speed of light, bend, they generate photons.
So they can penetrate through tissue, and to do this kind of imaging, the facility is huge. This was done in Canada, where my student went and brought human temporal bones. There are only a few of these facilities in the United States, and I think a total of maybe 40 across the world. And the accelerator is the size of a football field. But the huge advantage of this is that now you can image a human temporal bone that is entirely intact. This is an image of a bone that is not only three-dimensionally intact, but also wasn't decalcified. We're seeing through bone. And you can appreciate that there are these structures that run along the length of the cochlea. You can see 2 ½ turns of the human cochlea, and we can start to appreciate different cell types, like outer and inner hair cells, and even nerve fiber bundles within the modiolus.
So now I'll show you a movie where we are flying through the human scala media. So this is a surface view from above. And then again, you can appreciate various cell types. Is the contrast good enough for you to see? Okay. So, going back to the clinical relevance. We know that this is useful in detecting normal cells, but can we detect pathology? And we can. By digitally sectioning at the level of the basilar membrane, we can actually create a virtual whole mount. I'll show you some images of whole mounts later, where we have to microdissect the cochlea: we first decalcify, then remove the otic capsule, then microdissect the spiraling cochlea into half turns, mount these half turns flat, and then immunostain them to see different cell types. But here, without doing any of that, we can see the microstructure of the human organ of Corti. And if we now look at disease, then on the far left is a normal virtual human whole mount. In the middle, you can see that where the green line is, if you focus on that black strip right here, it looks different than the strip right above it. It's looking black because nerve fiber bundles are missing. Nerve fiber bundles should be traversing that space. So here we can already detect neuropathy. And if we really forcefully insert a cochlear implant, then you can see the damage that occurs. And this occurs in real people. We have learned that from studying human temporal bones of cochlear implantees that were examined after they died, and we see that all sorts of things happen. The cochlear implant may start going through the scala tympani, but then it can pierce the basilar membrane, it can fracture the osseous spiral lamina, it can traverse through the scala media all the way into the scala vestibuli. It can jam into the lateral wall; all of that happens. But if you ask surgeons for feedback, most will say it all felt the same.
Uh, because we are really inserting the cochlear implant blindly. Of course we do see the round window, or the cochleostomy, through which we start inserting, but after that, we depend entirely on manual feedback. And what I define as forceful may be very different from what my colleague defines as forceful. So this again highlights the need to have better visualization of what we are doing, which could enable hearing preservation for everyone when we are doing cochlear implantation, as opposed to selectively. Right now we can preserve residual acoustic hearing in 50 to 70% of people, which is a huge improvement compared to 10 years ago, when we were telling people you're guaranteed to lose all of your residual hearing as a result of cochlear implantation. But still, why can't we do it for everyone? One of the reasons that we do better today is that electrodes have become better: they're thinner, they're more flexible, they're designed to be minimally traumatic. But still some people lose hearing. But if we could optically enhance the cochlear implant so that we see as we're guiding it in, then we have a much higher chance of preserving residual acoustic hearing for everyone.
So now let's look at another optical imaging technique that again uses a laser, but this time it's called Two-Photon Fluorescence Microscopy. And the way this works is that when two photons arrive at the target nearly simultaneously, light is emitted. And it's emitted by a fluorophore. And that's what you're seeing here. These are images of a mouse inner ear. The color indicates intrinsic optical properties, so the tissue was not stained in any way. The red is autofluorescence, and the green is what's called second harmonic generation, which is a signal that's generated by structures that have very regular molecular profiles, such as nerves or collagen fibers. And this is work done in collaboration with Dr. Demetri Psaltis at the Swiss Federal Institute of Technology and a then-graduate student, Shin Yang. And what you can see is not only the cell bodies of spiral ganglion neurons, shown here in red, but even intracellular organelles. You can see the cell nucleus in black, and those little bright red dots, we think, are mitochondria. So where does the signal come from? Why is the ear autofluorescing like that when you excite it with these photons? Well, it turns out that we think the signal comes from what's called FAD. That stands for flavin adenine dinucleotide. It's a very important molecule in lots of redox processes in the body. And the inner ear has a lot of it; it has one of the highest concentrations of FAD in the body. Otherwise, FAD is found in very high concentrations in the liver. Interestingly, in the brain, for example, you don't have as much FAD; the main fluorophore in the brain is NADPH. So this is unique, and we can capitalize on that, because we may start to image cells and see cells based on their intrinsic molecular profiles.
So an advantage of a technique like this is that it could give us not only the structural information that I've shown you before with micro-OCT, but possibly even metabolic information. And we do know from science in general that metabolic changes precede cell loss. And when we image hair cells, everything looks very dark from my angle, but I hope that you can actually see what's going on, that it's not just a dark tunnel. There are bright hair cells here. Can you see them? Can you see little dots? Every circle is a hair cell. So we can see individual hair cells imaged through the bone. And when we expose these mice to loud noise levels, again you can see that there is dramatic loss of hair cells. The inner hair cells are decimated, and the outer hair cells are completely gone. And that's consistent with what we know in terms of their vulnerability to noise trauma.
Well, how is this translatable to humans? This is a picture of the human inner ear, from the surgeon's view. It's the right ear. So the patient is lying like this, with the tympanic membrane lifted up, and then you are looking at the basal turn of the cochlea, which we also call the promontory. And this is the round window, which is actually hidden by this little bony niche that we call the round window niche. And it means that this round window, which is the only non-bony opening into the inner ear, is hidden. In a person who's standing like I am right now, the round window is in a horizontal plane. But in the surgeon's view, the round window looks like this, and then you have this niche that's precluding direct visualization of the round window. So it means that if we want to image what's happening at the basal turn of the cochlea, and do it noninvasively, we'd have to get our endoscope bent around that little corner, and then image through the round window membrane. But that would limit us to a very small area of the cochlea. However, it's a very useful area of the cochlea, because in lots of types of sensorineural hearing loss, the damage is typically most prominent at the base and typically starts at the base of the cochlea. But you could say, why couldn't we see through this bone? Can we make this bone transparent? That goes under the rubric of optical clearing, and there are lots of optical clearing techniques that are used in histology, but they're toxic. They typically involve chemicals that would be ototoxic. So as an alternative, we thought, could we make tiny windows of shaved bone that are placed strategically along the length of the cochlea? We could put one here, another one there, maybe to get a feel for different frequencies, and then look through that tiny bone window.
But still leave a little bit of bone protecting the inner ear, so that this remains a noninvasive technique. And to test the feasibility of this approach, we again worked with Demetri Psaltis in Switzerland, and this is the work of Marilisa Romito, who recently defended her PhD thesis. Here we are putting little chips of cochlear bone on top of a slide that has hair cells on it, and we are defining how thin that piece of bone has to be so that you can start seeing through it. And it turns out that it has to be 60 microns thick, because then, viewing through it using two-photon fluorescence, you can see hair cells. However, if that bone chip is any thicker, you can't really see through it. This was in vitro, using slides. Now, is that relevant if you have a three-dimensionally intact cochlea? And this was done ex vivo, so the cochlea was removed from the animal, but then we are shaving very precisely, little by little, layer by layer, the otic capsule, which is the cochlear capsule, and you can see how it generates a tiny window of thinned bone. And we are using a femtosecond laser to shave the bone, and then we are measuring the thickness of the bone using optical coherence tomography. And when doing that ex vivo, in a 3D-intact preparation, viewing through the thin bone, we can indeed see hair cells, rows of hair cells, and here we enhanced their fluorescence by staining them with propidium iodide beforehand. And we need to do that in ex vivo preps because the intrinsic FAD signal rapidly decays postmortem. So we very much look forward to trying to do this in vivo, without putting any dyes into hair cells.
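As a rough intuition for why a sharp thickness threshold appears (this is a toy model, not the group's analysis, and the attenuation coefficient below is a hypothetical value chosen only for illustration): ballistic excitation light falls off roughly exponentially with bone thickness, and two-photon emission scales with the square of the excitation intensity, so the usable signal collapses quickly past a certain thickness.

```python
import math

# Toy Beer-Lambert intuition for two-photon imaging through thinned bone.
# MU is an assumed effective attenuation per micron, NOT a measured value.
MU = 0.03  # 1/um, hypothetical

def two_photon_signal(thickness_um, mu=MU):
    # Excitation decays as exp(-mu * t); two-photon emission goes as the
    # square of the excitation intensity, so signal ~ exp(-2 * mu * t).
    return math.exp(-2 * mu * thickness_um)

for t in (30, 60, 90, 120):
    print(f"{t:3d} um of bone -> relative two-photon signal {two_photon_signal(t):.4f}")
```

The point of the sketch is only the shape of the falloff: doubling the thickness does far worse than halving the signal, which is consistent with seeing hair cells through a sufficiently thinned window but not through thicker bone.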
So I hope that by now I've convinced you that seeing this microstructural detail in the inner ear is really important to establish precise diagnosis, which will guide therapy. And when it comes to therapy, you'll hear a lot more from Jeff Holt about gene therapy, but here I'm just showing you one slide of the work we did together, where we used a synthetically made adeno-associated virus designed to overcome the issue of immunity. You know that today, gene therapy is typically delivered using viral vectors, where you remove from the virus everything that makes it bad and pathologic, and then you insert the gene that you are interested in delivering. Well, the body mounts an immune response to that, and the most commonly used viruses are adeno-associated viruses. They are not the same as adenovirus, which is a common cold virus, but they are related. So lots of people who have had common colds, which is most of us, may be immune to this therapy. So that was the idea behind developing a synthetic viral vector. And when we tried it in both mouse tissue, on the left, and human tissue, on the right, and in this case the viral vector was carrying the gene for green fluorescent protein, GFP, as a proof of feasibility, you can see that lots of cells take it up. Meaning that this viral vector is very efficient in transducing different cell types in the inner ear, which makes it a promising candidate for future clinical studies.
So now we're switching gears a little bit, because we're moving from technology development, the development of diagnostic tools, to understanding the molecular pathophysiology of these poorly understood tumors that cause hearing loss and tinnitus in 95% of patients. And the common theme here will be highlighting the role of inflammation, a role it plays in both the hearing loss and tumor growth. So let's first look at the conundrum. On the top is an MRI scan of a patient with a vestibular schwannoma. These are axial, or horizontal, cuts, and the patient was given a contrast agent called gadolinium, which the tumor avidly takes up. So you can see that there is a sizable tumor here, but remarkably enough, this tumor was an incidental finding. It did not cause any hearing loss in this patient. The patient ended up getting a scan for completely unrelated headaches. So this tumor is sizable; it's even compressing the brain stem here, because you see how it's concave on one side and convex on the other side, when it should be bowing outward on both sides. In contrast, this is another patient, at the bottom, who initially presented with significant hearing loss on the right side, as you can see based on the audiometric profiles, and really poor word discrimination of only 8%. So for this patient, their hearing was useless to them, because they couldn't understand anything. And yet when you image them, their tumor is very small. It's right here. You see, that's much smaller than the big tumor on the other side. There are numerous examples like that, and many papers have been written by many groups across the globe saying that there really isn't a clear, strong, and robust correlation between the tumor size and the associated hearing loss, which led us to hypothesize that maybe there are secreted factors that cause direct damage to the cochlea, because the cochlea's nearby.
And if this tumor secretes evil humors, they could get to the ear and damage it. But how do you test that? I told you that we can't routinely biopsy the inner ear in living people today. If we could, then we could collect perilymph, the inner ear fluid, from these patients and test it: do those with significant hearing loss have a lot more of these ototoxic proteins than those without hearing loss? So a then graduate student (name) had to devise a way to test the hypothesis, and she decided to use mouse cochlear explants. This is a microdissected mouse cochlea, a half turn, with hair cells stained in green and nerve fibers stained in red. She then applied vestibular schwannoma secretions from a human to these explants. What that means is that we would go to the operating room, take a tumor chunk, incubate it in culture media for a couple of days, and then apply that conditioned media to the cochlear explant. Then we can quantify the degree of cellular damage. And we can compare the cochlear base versus the apex, because as you know they are organized according to frequency, and what's interesting about these tumors is that they typically cause much more prominent hearing loss at the base, at high frequencies, than at the apex. So then we can be quantitative about the loss of hair cells and neurites, and when we do that for many different tumors, we see a pattern. Here for simplicity I'm just contrasting a poor-hearing tumor and a good-hearing tumor, but this holds when we look at a much larger series. The top row shows the apical turn and the bottom row the basal turn of the cochlea. If you look at the explant treated with secretions from a tumor that caused complete anacusis, so complete loss of hearing, you can see that there is lots of loss of these hair cells and nerve fibers.
However, when you treat cochlear explants with secretions from tumors that did not cause significant hearing loss, there isn't much damage. In fact this doesn't look very different from a very important control, which is to treat cochlear explants with secretions from normal healthy nerves. Because otherwise, you could say you are just seeing these differences because you're mixing tissues, using human secretions and applying them to mouse explants. That control tells us no, it's not that; there is actually something in these secretions that is ototoxic. But what is it? It's a concoction of lots of different proteins, and we decided to zoom in on one of them, TNF-alpha, which stands for tumor necrosis factor alpha. We chose it because it's been implicated in two other forms of human deafness: in people with autoimmune inner ear disease and in sudden idiopathic sensorineural hearing loss, if you measure TNF-alpha in their serum, it's elevated. So what we did is block TNF-alpha in these tumor secretions. How do you do that? Well, we used a neutralizing antibody. And when we do that, you can see that the cells start looking much healthier, which shows that TNF-alpha is an ototoxic molecule in the secretions. It is surely not the only one; I wish the story were as simple as that. However, it's a very interesting candidate, because there are already lots of drugs out there developed to target this molecule. They were developed for rheumatologic diseases, like rheumatoid arthritis and psoriatic arthritis.
So I have shown you that TNF-alpha is an ototoxic molecule, but what's its mechanism of action? To address that, Johan (Sahiun(?)) did these very tricky experiments perfusing the cochlea of a living guinea pig with TNF-alpha. Well, the first thing you have to demonstrate is that perfusing anything through a guinea pig cochlea doesn't itself cause hearing loss, because if it does, your technique is not good and you have to go back and refine it. After actually a couple of years of refining his technique, he was able to demonstrate that if you perfuse the cochlea with perilymph alone, you can do it without causing any threshold shift. We're showing the compound action potential at the top and distortion product otoacoustic emissions at the bottom. The compound action potential, as you know, is a summed potential of the auditory nerve; it's very similar to Wave I of the auditory brainstem response, but you can measure it directly by putting an electrode on the round window, or nearby. And of course distortion product otoacoustic emissions reflect the activity and function of outer hair cells. Now, if you perfuse the cochlea with TNF-alpha, there isn't really a dramatic change in thresholds. These are acute experiments; this was 24 hours of perfusion. You could say, well, this is an acute model, but in humans these degenerative changes happen over many years, and typically by the time people come to see us they have a tumor that's clearly identifiable, but it must have been growing for some time. So what is a more sensitive measure of neural damage than thresholds? It's amplitude. You know how we can now measure the Wave I amplitude of the ABR, the auditory brainstem response, which correlates very highly with neuronal damage. We cannot do that routinely in humans yet, because amplitude measurements are trickier.
First of all, you don't have a baseline for a given patient; they come see you when they already have hearing loss, so you don't know what their ABR Wave I may have been. And secondly, people's body habitus changes. If you gain or lose weight, the amount of tissue between the generators in the brainstem and the surface electrodes that pick up the signal changes, and that influences the absolute amplitude of Wave I of the ABR. But in animals, we can do it in a very controlled fashion, because we can choose animals that all have the same weight, the same age, the same everything. And when we do that and look at the amplitude of the compound action potential, you see that there is a drop in the animals perfused with TNF-alpha, and it is statistically significant, consistent with TNF-alpha being neurotoxic. And is there a correlate at the histologic level? This is the work of Satchi(?) Katsumi and Becky Lewis. Becky is actually an audiologist, the first audiologist in my lab; I hope to have more going forward. We are now looking at the synaptic level, because if there is a drop in the amplitude, then something must have happened at the synapses. So we can immunostain the presynaptic and the postsynaptic connections. The presynaptic connection in the inner ear is a very specialized structure called the ribbon synapse; there are only two places in the body that have this specialization, the ear and the eye. We can use an antibody that labels a protein that's a key component of this ribbon synapse, and we can immunostain its postsynaptic partners. And when we do that, you can see that in the TNF-alpha perfusion group there are many more of what we call orphan synapses.
You see only a red dot, because they don't have their partner, whereas in the other two groups, the control group shown on the left and the prevention group shown on the right, we see many more partners. And when we plot that along the length of the cochlea, looking at the effect at various frequencies, we see that the effect is statistically significant in the same frequency range where we see a significant drop in the compound action potential. That tells us there is a correlation between physiology and synaptic morphology, which is reassuring, and we're excited about being able to prevent the effect by giving a commercially available drug that prevents TNF-alpha from interacting with its receptor, etanercept, which is shown here in the prevention group.
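To make the "orphan synapse" count concrete: an editorial sketch, not the lab's actual analysis pipeline. Assume each presynaptic ribbon and each postsynaptic density has been reduced to a centroid coordinate (in micrometers); a ribbon with no postsynaptic punctum within some pairing radius is an orphan. All coordinates and the radius below are invented for illustration.

```python
# Hypothetical sketch: counting "orphan" ribbon synapses from immunostained puncta.
# A presynaptic punctum with no postsynaptic partner within `pairing_radius_um`
# is counted as an orphan (the lone red dots described in the talk).
from math import dist

def count_orphans(pre_puncta, post_puncta, pairing_radius_um=1.0):
    """Return the number of presynaptic puncta lacking a nearby partner."""
    orphans = 0
    for p in pre_puncta:
        if not any(dist(p, q) <= pairing_radius_um for q in post_puncta):
            orphans += 1
    return orphans

# Toy example: three ribbons, only two have a postsynaptic partner nearby.
pre = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
post = [(0.3, 0.2), (5.2, 4.9)]
print(count_orphans(pre, post))  # 1 orphan
```

Repeating this count per cochlear location is what would let one plot orphan synapses along the frequency axis, as described above.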
I have already told you that there are many drugs already developed and on the market that target this pathway; this was recently reviewed by then resident Yin Ren. That makes these drugs very attractive for repositioning, or repurposing, in vestibular schwannoma, and here we can capitalize on what has already been done by others, which is really handy. But there will surely be molecules that are not easily targetable with a drug, and in that case you need molecular therapeutics, because you need to get to the molecule itself, at the DNA or RNA level. This is the work of Yin Ren, who developed nanoparticles that include a small interfering RNA to silence the TNF-alpha gene. The way you do that, you start with peptides, and these peptides have three regions: a tumor-targeting region, a cell-penetrating region, and a membrane-targeting region. When you put these peptides together, they self-assemble into a nanoparticle, and if you add the siRNA, it ends up in the center. So we deliver these nanoparticles to human vestibular schwannoma cells, cells that we get from a removed tumor sample: a busy graduate student runs to the operating room, brings the sample into the lab, dissociates the cells, grows them in a dish, and then applies the nanoparticles. The critical question is whether these vestibular schwannomas express the receptors for our nanoparticles, which include integrin and neuropilin. And we showed that yes, they express them both at the cellular level, when we have dissociated cells in a dish, and at the tumor level. So the next experiment was giving the nanoparticles to both primary human vestibular schwannoma cells, cells that we obtain from these tumors, and an immortalized human vestibular schwannoma cell line.
What immortalized means is that the line has typically been transfected with a virus that allows it to propagate forever, so you can freeze it and thaw it again and again, which you cannot do with primary cells; primary cells you can use once and that's it. But we see that the human vestibular schwannoma cells very avidly take up the nanoparticle. Each one of those little dots is a nanoparticle, and uptake results in a dramatic reduction in TNF-alpha expression, both at the gene level, as shown here in red, and at the protein level.
So, that's a lot of talk about one molecule, TNF-alpha, and how we go about figuring out what it's doing and how to inhibit it, either with existing drugs or with molecular therapeutics. And by the way, that molecular therapeutics story was the first demonstration of the feasibility of this approach in our field. But does it make sense in the context of what we know about these tumors in general? This is the work of a then graduate student, Jessica Sagers, who performed a meta-analysis of the vestibular schwannoma transcriptome. What that means is that she took the published literature, where others had already sequenced the genes expressed in these tumors, and put it all together to identify genes that are commonly and concordantly up- or down-regulated in these tumors. That amounted to a total of 80 tumors and 18 control nerves. That may not sound like a lot; however, it's a lot for a rare tumor. In the cancer field in general, for prostate and breast cancer, they can talk about hundreds and thousands of specimens, but we can't in this field. When you analyze these genes together, according to how they fit into different pathways, the top-ranking pathway was neuroinflammation. Well, that's really interesting and reassuring, because a few years prior, a medical student from the Netherlands, Martin Briette(?), did a different type of analysis: a PubMed search for all genes and proteins known and validated to be dysregulated in vestibular schwannoma. Validated means the finding was confirmed at some level, say at the protein level by immunohistochemistry or western blot, with an appropriate control group and statistical significance. When you do that, again the number one ranking pathway was inflammation.
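The "commonly and concordantly dysregulated" criterion can be sketched in a few lines. This is a hypothetical illustration of the idea, not the published analysis: a gene is kept only if its fold change points the same direction in every study, and the gene names and values are invented.

```python
# Hypothetical sketch of a concordance filter for a transcriptome meta-analysis.
# lfc_by_study maps gene -> list of log2 fold changes (tumor vs. control nerve),
# one value per published study that measured the gene.
def concordant_genes(lfc_by_study):
    """Keep genes dysregulated in the same direction in all studies; rank by mean effect."""
    hits = {}
    for gene, lfcs in lfc_by_study.items():
        if all(v > 0 for v in lfcs) or all(v < 0 for v in lfcs):
            hits[gene] = sum(lfcs) / len(lfcs)  # signed mean log2 fold change
    return sorted(hits.items(), key=lambda kv: -abs(kv[1]))

example = {
    "TNF":    [1.8, 2.1, 1.2],   # up in all three studies -> kept
    "GENE_X": [0.9, -0.4, 1.1],  # discordant across studies -> dropped
    "GENE_Y": [-1.5, -2.0],      # down in both studies -> kept
}
print(concordant_genes(example))  # GENE_Y and TNF survive; GENE_X is dropped
```

The surviving ranked list is then what gets fed into pathway analysis, which in the talk's account put neuroinflammation on top.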
So that begs the question: is it possible that by inhibiting this inflammation, you can prevent tumor growth, or at the very least preserve hearing? The quintessential anti-inflammatory medication is aspirin. So the first step was to treat human tumor cells with aspirin. We started with these human vestibular schwannoma cells, with control cells from a healthy great auricular nerve, and we treated the cells either with aspirin or with two close cousins: sodium salicylate, which is used to treat people who are allergic to aspirin, or 5-aminosalicylic acid, which is used in people with inflammatory bowel disease. When we treat these cells with these drugs, we can use several outcome measures. One of them is the rate of proliferation, which we can quantify by looking at BrdU incorporation. We can quantify cell death by TUNEL staining. We can look at prostaglandin secretion into the culture media; prostaglandins are secreted when the enzyme that aspirin inhibits, COX-1 or cyclooxygenase-1, is active. And finally, we can use a cell viability assay, which is a colorimetric assay. When we use all these different assays, we find that aspirin and related salicylates inhibit the proliferation of these cells in a dose-dependent manner, but they don't kill the cells. So we say they are cytostatic but not cytotoxic. Well, for this kind of tumor that's great. These are histologically benign tumors, but they're bad because of their location; we call them malignant by location. They occupy a very important space: as they grow they expand into the cerebellopontine angle, and as you saw they can compress the brainstem, with all its vital centers and the cranial nerves that arise from it.
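The cytostatic-versus-cytotoxic distinction described above has a simple operational form: proliferation falls with dose while cell death stays flat. Here is a hedged sketch of that logic; the function name, tolerance, and all assay values are invented for illustration.

```python
# Hypothetical sketch: call a drug response "cytostatic" if proliferation
# (e.g. a BrdU-style index) decreases monotonically with dose while death
# (e.g. a TUNEL-style fraction) stays within a small tolerance band.
def is_cytostatic(proliferation_by_dose, death_by_dose, death_tolerance=0.05):
    doses = sorted(proliferation_by_dose)
    prolif = [proliferation_by_dose[d] for d in doses]
    death = [death_by_dose[d] for d in doses]
    falls = all(a > b for a, b in zip(prolif, prolif[1:]))  # strictly decreasing
    flat = max(death) - min(death) <= death_tolerance       # death unchanged
    return falls and flat

prolif = {0: 1.00, 1: 0.80, 5: 0.55, 10: 0.35}  # invented dose-response values
death  = {0: 0.02, 1: 0.03, 5: 0.03, 10: 0.04}
print(is_cytostatic(prolif, death))  # True -> cytostatic, not cytotoxic
```

A drug that killed cells outright would fail the "flat death" condition and would not fit the profile the talk describes for aspirin.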
So as these tumors grow, if untreated, they can cause not only hearing loss, dizziness, and tinnitus, but also facial paralysis, aspiration, and dysphonia. They can cause vocal fold paralysis, and they can cause pain when they compress the trigeminal nerve. So you can't just ignore them saying they're benign; they are malignant by location. But if we stop them from growing, that's all we need to do, because then we don't need to remove them. And today, incidentally, we don't have any FDA-approved drugs to treat these tumors; we treat growing tumors with either surgery or radiation. Surgery is not a piece of cake. It's a craniotomy: you have to remove a piece of bone, retract the cerebellum out of the way, drill the internal auditory canal, and then microdissect the tumor from both the inner ear side and the brainstem side. And if you irradiate these tumors, that doesn't make them go away; it typically stops them from growing. Thankfully, growth after radiation is not common, because in 95% of patients radiation can stop these tumors from growing, but if the tumor does grow afterwards, the prior radiation makes surgery more difficult. And radiation can have other side effects, including hearing loss, facial paralysis, and worsening dizziness. So there's really a big unmet medical need for drugs that would prevent these tumors from growing.
So with that we said, well, could we start a clinical trial of aspirin for vestibular schwannoma? But first you have to answer the question of whether long-term use of aspirin would be toxic to hearing, because there are lots of papers showing that large doses of aspirin can be ototoxic. In fact, that's a commonly used way to induce tinnitus in animal models: just give them a lot of aspirin. Well, here we're not giving a lot. We're not giving 2,000 milligrams, which is the dose that causes tinnitus in humans, and some people with rheumatoid arthritis are on dosages that high; here we are testing 650 milligrams, so 325 milligrams twice a day. But would the regular use of aspirin be toxic? We did this study in collaboration with Barry Kirhan(?) at the Harvard School of Public Health, and it is the work of then resident Brian Lin. It was a prospective epidemiologic study involving well over 54,000 female participants in the Nurses' Health Study, which amounted to nearly 729,000 person-years of follow-up, because many of these participants were followed for up to 20 years. And it turns out that those who took aspirin regularly, meaning at least three times a week, were not at higher risk of hearing loss. However, importantly, those who took ibuprofen or acetaminophen regularly were at higher risk of hearing loss. So this tells you that these are very important modifiable risk factors for hearing loss. Painkillers are the most commonly used medications in the western world, with ibuprofen and acetaminophen leading the way, so when counseling your patients, you should warn them about this. If their hearing loss is developing at an accelerated rate, and they say, "Well, I don't know, I'm not doing anything," and you ask, "Well, do you have problems with insomnia?" and they say, "Yeah, every night I take Benadryl PM."
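The risk comparison in a cohort study like this reduces to rate arithmetic over person-years. Here is a minimal sketch of that calculation; the case counts and person-year splits below are invented for illustration and are not the study's actual data.

```python
# Hypothetical sketch of the rate-ratio arithmetic behind a person-years cohort
# analysis. A crude incidence rate ratio near 1 suggests no added risk in the
# exposed group; real analyses also adjust for age and other confounders.
def incidence_rate_ratio(cases_exposed, py_exposed, cases_unexposed, py_unexposed):
    """Crude incidence rate ratio: (cases / person-years) exposed vs. unexposed."""
    rate_exposed = cases_exposed / py_exposed
    rate_unexposed = cases_unexposed / py_unexposed
    return rate_exposed / rate_unexposed

# e.g. 300 hearing-loss cases over 150,000 person-years among regular users,
# versus 1,000 cases over 579,000 person-years among non-users (invented numbers):
irr = incidence_rate_ratio(300, 150_000, 1_000, 579_000)
print(round(irr, 2))  # ~1.16; a confidence interval spanning 1 would mean no clear excess risk
```

In the talk's account, aspirin's ratio came out unremarkable, while ibuprofen's and acetaminophen's did not.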
But many of those nighttime "PM" formulations actually contain acetaminophen, so it seems every night they are taking it. The good news about aspirin, though, is that it's safe. So based on that, we are now running a prospective, randomized, placebo-controlled, double-blind phase II clinical trial of aspirin for vestibular schwannoma. This is a multicenter trial that, in addition to Massachusetts Eye and Ear, involves Massachusetts General Hospital, the Mayo Clinic, Stanford, the University of Iowa, and the University of Utah. The objective is to recruit 300 patients, which would really be the largest drug study in vestibular schwannoma patients, because drugs studied in these patients have usually involved small cohorts, from a handful up to 20 patients. Participants are randomized to receive aspirin or placebo, and we have designed the study so that everybody has the option of getting aspirin: they can cross over to the aspirin group if their tumor grows. We are blinded to what they're receiving, but once the tumor grows, we can become unblinded, and if they were receiving placebo they now have the option of receiving aspirin. I would not recommend that everybody with this tumor start taking aspirin at this point, because we don't know whether it's working or not; the study was designed to test that. But if you have interested patients, do send them our way so that we can figure it out. We also tell the patients that they have pretty much nothing to lose, because if you're just monitoring the tumor with scans alone, you're not doing anything about it, and the only thing the study adds is a blood draw. Otherwise they get scans and audiograms at the same intervals as they would if they were not in the study.
And we hope that their blood will give us clues as to who responds and who doesn’t respond and why, so that we can develop better therapies going forward.
That's a story about a known pathway. Aspirin has been around for a long time; it's one of the greatest successes in pharmacology. But what about pathways that have not been studied extensively, or in fact never in this tumor class? So now comes a story about inflammasomes. The inflammasome is a multiprotein, intracellular complex that mediates inflammation; it's part of innate immunity. And the reason we decided to look into the inflammasome is that several recent papers, including papers from Andy Griffith's group at NIDCD, identified that mutations in the inflammasome gene called NLRP3 are associated with both syndromic and non-syndromic hearing loss in people who have lots of systemic inflammation. They identified this inflammasome gene as being responsible for cochlear autoinflammation. You know how lots of papers were written before about cochlear autoimmunity; that's different from cochlear autoinflammation.
For those reasons, others have studied the gene in animal models and have shown that it's upregulated under different kinds of stress, including noise trauma. So Jessica Sagers, as part of her PhD thesis, then looked at these human vestibular schwannomas, grouped into those associated with good versus poor hearing. Good hearing is defined as a pure-tone average of less than 30 decibels and word recognition better than 70%. She looked at 15 tumors in each group, plus a control group of great auricular nerves, which are normal nerves. And what you can see is that multiple genes belonging to this inflammasome complex are upregulated in tumors in general, but particularly in tumors associated with poor hearing, shown here in red; that's at the gene level. Does that hold at the protein level? It does. To answer that question we did immunohistochemistry on these human tumors, and as an example I'm showing you a good-hearing tumor on the left and a poor-hearing tumor on the right. What you can see is that in all the tumors associated with poor hearing, there was a massive infiltrate of inflammatory cells called macrophages. The vast majority of them, 9 out of 11, also expressed high levels of a pro-inflammatory cytokine called interleukin-1 beta. Well, that's really interesting, because in the cochlear autoinflammation I just told you about, patients can have high levels of interleukin-1 beta in their blood, and you can stabilize or improve their hearing by giving them an inhibitor of interleukin-1 beta signaling, a drug that's already commercially available called anakinra. So from that standpoint it's actually really good to be thinking about this, because you can help patients. And the vast majority of the tumors associated with poor hearing also highly expressed that NLRP3 inflammasome gene.
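The grouping criterion stated above is crisp enough to write down directly. A minimal sketch, using only the cutoffs given in the talk (pure-tone average below 30 dB and word recognition better than 70%); the function name is mine:

```python
# Sketch of the stated "good hearing" definition: pure-tone average < 30 dB HL
# AND word recognition > 70%. Anything else falls into the "poor hearing" group.
def hearing_group(pta_db, word_recognition_pct):
    """Classify an ear as 'good' or 'poor' hearing per the stated cutoffs."""
    if pta_db < 30 and word_recognition_pct > 70:
        return "good"
    return "poor"

print(hearing_group(25, 88))  # good
print(hearing_group(45, 8))   # poor (like the earlier patient with 8% discrimination)
```

Note that both conditions must hold: an ear with normal thresholds but poor word recognition still lands in the poor-hearing group.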
So, this is not all bad. There is the good, the bad, and the ugly of everything, including inflammation. So now I'll tell you a little about the good inflammation, because it all really depends on context and what kind of signaling is initiated afterwards; these pathways wouldn't exist if they were all bad for us. They play a physiologic role. Working with Gary Brenner at Massachusetts General Hospital (I have several collaborators named Gary, at three different institutions, but this is Gary Brenner) and his team, we showed that one component of this inflammasome, called apoptosis-associated speck-like protein, or ASC, is actually a tumor suppressor. So then we asked: what if we delivered gene therapy with a tumor suppressor gene to suppress the growth of these schwannomas? But we need an animal model, and the model we used was injecting into the mouse sciatic nerve either human tumor cells, a xenograft, or mouse schwannoma cells, an allograft. And what we see is that if we perform intratumoral injection of a viral vector carrying this ASC gene, we can dramatically reduce tumor proliferation compared with mice that receive intratumoral injection of either an empty viral vector or saline. So this would be a parallel strategy for preventing tumor growth, though of course the ideal strategy would be to replace the gene that's not working with one that works.
So now, after this very focused approach highlighting a couple of drugs that act on inflammatory pathways, let's take a broader view of potential drug repositioning, or repurposing, in vestibular schwannoma. To highlight the importance of that approach, I'll first tell you about traditional drug development. It's super inefficient, because it has a very high failure rate: 80 to 90% of drugs fail, which is really astounding if you think about it, because they make it to clinical trials based on their success in animal models; they were super effective in animal models. And you know why the vast majority of them fail? It's actually for safety reasons. Secondly, the trials are lengthy, taking 10 years on average. And they're super expensive, in excess of two and a half billion dollars. Compare that with drug repositioning: it has a low failure rate, at least in terms of safety, because you are using drugs that are known to be safe; they're already FDA-approved for other indications. Secondly, these clinical trials can be dramatically shortened, cut in half. And they're much less expensive. They're still pretty expensive, about $300 million, and you could ask why, if you are using drugs that have already been approved. Well, you still cannot skip the regulatory phase I and phase II studies; what you are skipping is all the preclinical studies. So can we do this at a large scale? Drug repositioning has been around for a long time, and historically its examples can be grouped into opportunistic and serendipitous discoveries. An example of an opportunistic discovery is sildenafil citrate, another name for Viagra. It was developed as an antihypertensive, but then it was repurposed for erectile dysfunction.
And it's actually a fascinating story, because when the drug company discovered that it wasn't working as intended, it wanted the drug back, and patients wouldn't give it back. And then (the audience laughs) they found out why. In 2012 its global sales exceeded $2 billion. So that was an opportunistic repurposing of a drug. A serendipitous example of drug repositioning is thalidomide. You know that thalidomide is a teratogenic drug. It was first introduced to treat morning sickness in pregnant women, and then all these children were born without limbs and really disfigured; it was horrible. It was then repurposed for erythema nodosum leprosum and multiple myeloma, and thalidomide and its more potent derivative had global sales exceeding a billion dollars in 2018, for a drug that was initially going to be banned altogether. But now we have an opportunity to do drug repositioning in a systematic fashion, because we have big, omics-scale data and the computational power to interrogate them.
The approach that we have taken is to use the meta-analysis that I've already shown you, and then ask, now that we know which genes are commonly and importantly dysregulated in these tumors, which drugs can convert these abnormal transcriptional signatures into a more normal one. How do you do that? Well, it turns out that you can start with tissue, treat it with a drug, and then describe how the expression of all these genes goes up or down in response to treatment. You can do that for hundreds and thousands of drugs and deposit the data in publicly available databases, one of which is called the Comparative Toxicogenomics Database. Then, as input to that database, you use your ranked list of differentially expressed genes, and you ask which drugs in the database would convert this signature to a more normal pattern. Then you can narrow it down to drugs that are FDA approved, which reside in another database called DrugBank. The output is a list of drugs with high potential for repositioning in your disease of interest. That's called the signature reversion principle, because you start with a transcriptional signature that's pathologic and you aim to reverse it to a more normal one. When we did this for vestibular schwannoma, looking specifically at the subset of vestibular schwannomas that are bilateral as part of neurofibromatosis type 2, we identified drugs that can be lumped into three major categories. Not surprisingly by now, given what I've told you, the most abundant group was anti-inflammatory medications, shown in blue, closely followed by antineoplastic, so antitumor, medications, and hormone-related medications. So then we decided to validate this computational hit.
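The signature reversion idea can be sketched as a scoring problem: a drug is a good candidate if its expression signature anticorrelates with the disease signature, and only FDA-approved drugs are kept. This is a hedged toy illustration of the principle, not the CTD/DrugBank pipeline itself; the drug names, gene values, and the simple dot-product score are all invented.

```python
# Hypothetical sketch of signature reversion: score each drug by the dot product
# of its gene signature with the disease signature over shared genes. A strongly
# negative score means the drug pushes those genes in the opposite direction.
def reversal_score(disease_sig, drug_sig):
    """More negative = stronger reversal of the disease signature."""
    shared = disease_sig.keys() & drug_sig.keys()
    return sum(disease_sig[g] * drug_sig[g] for g in shared)

def rank_candidates(disease_sig, drug_sigs, fda_approved):
    """Rank only FDA-approved drugs, most signature-reversing first."""
    scored = [(drug, reversal_score(disease_sig, sig))
              for drug, sig in drug_sigs.items() if drug in fda_approved]
    return sorted(scored, key=lambda kv: kv[1])

disease = {"TNF": 2.0, "IL1B": 1.5, "GENE_Z": -1.0}  # invented log2 fold changes
drugs = {
    "drug_A": {"TNF": -1.8, "IL1B": -1.2, "GENE_Z": 0.9},  # reverses the signature
    "drug_B": {"TNF": 1.0, "IL1B": 0.5},                   # mimics it instead
}
print(rank_candidates(disease, drugs, fda_approved={"drug_A", "drug_B"}))
```

In this toy run, drug_A scores negative (a reverser) and drug_B positive (a mimic), so drug_A ranks first, which mirrors how the anti-inflammatory hits surfaced at the top of the talk's screen.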
That analysis was entirely done in silico, so we validated it using primary vestibular schwannoma cells. We decided to zoom in on one particular drug, mifepristone, because it readily crosses the blood-brain barrier. It's actually used for medical abortion, and it is being tested in trials of many different neoplasms, including the related tumor meningioma, and even in people with glioblastoma multiforme and lots of other tumors. We know the drug is safe, because some people have been on it for 20 years, taking it daily. We treated primary human vestibular schwannoma cells with mifepristone, and we see that it reduces their metabolic activity in a dose-dependent manner, and it reduces cell proliferation even more dramatically while increasing cytotoxicity. We can measure cytotoxicity using a green dye that enters the cell if there are holes in the membrane and then binds to DNA. Importantly, when we give the drug to normal human Schwann cells, it has no effect, which means it's a good candidate for repositioning in patients with vestibular schwannoma. We are now designing a phase II clinical trial; phase II means we can skip phase I, since we know the drug is safe. We are designing it initially as an open-label study, meaning everybody will be able to get the drug, to figure out whether it's effective in vivo. What makes us optimistic is that when we used it in related human meningioma cells, where we actually used human arachnoidal cells in which the NF2 gene was knocked out, it wasn't as effective. And we already know that in patients with meningiomas who have taken it, some for up to 20 years, on average it's not that effective. However, in that group, as in any group, there are some super-responders who have even been cured of their disease.
So that leads me to the third and final part of the talk, about liquid biopsy of the inner ear. You know that biopsying inner ear tissue would be too risky, because it could cause hearing loss or vestibular dysfunction; the organ is so small. However, the entire organ is bathed in fluid, and that fluid should be enriched for biomarkers of hearing loss and vestibular dysfunction, just the way blood is enriched for markers of liver disease or spinal fluid is enriched for markers of brain disease. We have been interested in this idea for a long time, and a graduate student from 8 years ago, Andrew Lysaght, reported on the proteome of human perilymph. That was the first application of modern mass spectrometry techniques to this proteome, and in that case we collected pathologic samples, from patients undergoing cochlear implantation or vestibular schwannoma removal. More recently we have defined the proteome of normal human perilymph, and you could ask, how exactly did you get normal human perilymph? Well, it turns out that very rarely, about once a year, we have to drill through a normal human inner ear to provide access to life-threatening tumors of the ventral pons. As you approach the tumor from a lateral approach, the inner ear is in the way, and people are willing to lose their hearing to preserve their life.
And we looked at 3 samples like that, and we can see that their proteome overall is different: these samples cluster together, compared to perilymph from patients with vestibular dysfunction, mainly patients who underwent labyrinthectomy because of disabling Meniere's disease.
Well, this is a list of proteins, and again, how do we know we're right, and how do we validate that? I'll just show you one type of validation, where we looked at a protein that had not been described in the inner ear before, called hepatocyte growth factor activator. We were really interested in this because HGF, hepatocyte growth factor, has been implicated in human deafness: mutations both in its gene and in the gene for its receptor, which is called c-MET, cause human deafness. And now in human perilymph we identified an activator that controls the activity of this protein, and interestingly enough we find it abundantly expressed in spiral ganglion neurons and in supporting cells around hair cells. This is the work of then research fellow (sounds like Jeone Lyn). Now, could we do this routinely? Could we do liquid biopsy routinely in everyone with hearing loss, with minimal risk of causing hearing loss? Others have collected perilymph using glass micro-capillaries, and a recent medical student in the lab, Sam Early, worked on devising a novel device based on a microneedle. This microneedle was micromachined so that it is beveled, and it has a step-off. The step-off was designed as a safety feature, so that you would avoid deep penetration into the scala tympani and avoid bumping into the lateral wall or the basilar membrane. Another safety feature he incorporated is that as the needle pierces the round window membrane, the measured potential drops. It drops because there is a difference in sodium concentration between the inner ear fluid and the reference solution bathing the outside of the ear. He tested it first in a semipermeable membrane in vitro, and here you can see that the hole is pretty small. And then he tested it in human cadavers.
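The potential-drop readout described above amounts to a simple threshold detector: a sudden negative deflection signals that the needle has pierced the round window membrane. Here is a minimal sketch of that idea; the voltage trace, threshold value, and function name are hypothetical illustrations, not parameters of the actual device.

```python
# Sketch of a penetration detector: flag the first sample where the measured
# potential drops below a threshold, indicating the needle has pierced the
# round window membrane. Trace and threshold values are illustrative only.

def detect_penetration(potential_mv, threshold_mv=-5.0):
    """Return the index of the first sample below threshold_mv, or None."""
    for i, v in enumerate(potential_mv):
        if v < threshold_mv:
            return i
    return None

# A hypothetical trace: near-zero potential, then a sudden drop on piercing.
trace = [0.1, 0.0, -0.2, -12.0, -13.5, -13.1]
print(detect_penetration(trace))  # → 3 (index of the first post-pierce sample)
```

In practice such a signal would be noisy and the threshold would have to be calibrated against the sodium gradient between perilymph and the reference solution; this sketch only illustrates the logic of the safety feature.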
And this is a picture of the human round window, and you can see that the hole made by this probe is about 200 to 250 microns in diameter. It's pretty small, so it can either heal spontaneously or heal if you just place a little soft tissue seal, and it could be done repeatedly. We do think that combining this liquid biopsy approach with some of the imaging approaches I've shown you holds great promise for advancing our field. Here I'm showing you that drop in the potential after piercing the round window membrane; and then, when we withdraw the microneedle from the inner ear, the potential comes back. He was also very quantitative about how much of a sample collected this way can be attributed to perilymph. Because the middle ear, we say, "sweats": when you look at it under the microscope, its cells start secreting fluid, and there is also saline, because you are irrigating the surgical field with saline. So how do you know how much of what you are collecting is contaminated by those sources? In a very quantitative manner he determined that about 70% of the 5-microliter sample, or about 3.5 microliters, can be attributed to perilymph.
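The contamination arithmetic above can be written out explicitly. This is just a back-of-envelope restatement of the numbers from the talk (5 microliters collected, 70% attributable to perilymph); the split between middle-ear secretions and irrigation saline is not specified in the talk, so only the total contaminant volume is computed.

```python
# Back-of-envelope version of the contamination estimate from the talk.
sample_ul = 5.0            # total volume collected through the microneedle
perilymph_fraction = 0.70  # fraction attributable to perilymph, per the talk

perilymph_ul = sample_ul * perilymph_fraction    # volume of true perilymph
contaminant_ul = sample_ul - perilymph_ul        # middle-ear fluid + saline

print(f"perilymph: {perilymph_ul:.1f} uL, contaminants: {contaminant_ul:.1f} uL")
# → perilymph: 3.5 uL, contaminants: 1.5 uL
```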
Well, the next question is, what is the minimal amount of fluid that you can take and still be safe? By the way, there isn't that much fluid in the inner ear; there is about 150 microliters total. So taking 3 microliters should be safe. But can we push that limit? What if we take just half a microliter? Would that still be informative? To answer that question we performed experiments in mice, where we exposed mice to loud noise and then collected their perilymph. This is technically very tricky, so it was a heroic effort by the two research fellows at the time, Lucas Landegger and (name), because in mice there is only one microliter of perilymph in total. Half a microliter comes from the cochlea, and the other half comes from all the vestibular organs combined. Everything else that you get from the inner ear is CSF, because in mice the connection between the cochlea and the cerebrospinal fluid space, which is called the cochlear aqueduct, is widely patent. So when you review papers and somebody tells you, oh, we collected 5 microliters of perilymph from a mouse, you can say immediately that's not possible; it means 4 microliters of what you collected is CSF. It also means that your technique has to be impeccable, because as soon as you open the inner ear you have to place a micro-capillary and collect the fluid; if you wait, if you're not perfectly timed, you will start getting CSF. That's why it took them a couple of years to perfect the technique. We have then been able to measure directly, for the first time, inflammatory cytokines in perilymph. One of the cytokines had not been described in the ear before. It's called CXCL1, and it's a chemokine, meaning it attracts other inflammatory cells into the inner ear. We were really surprised to find that CXCL1, shown here in red, is actually abundantly expressed in pillar cells. This is the work of a fellow, Sasha Vasilijic.
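The volume budget above makes a handy reviewer's sanity check: since a mouse has only about 1 microliter of perilymph in total, any reported "perilymph" sample larger than that must contain CSF entering through the patent cochlear aqueduct. A minimal sketch of that check, using the figures from the talk (the function name is illustrative):

```python
# Reviewer's sanity check for mouse perilymph sampling, using the talk's
# figures: ~0.5 uL cochlear + ~0.5 uL vestibular = ~1 uL total perilymph.
MOUSE_TOTAL_PERILYMPH_UL = 1.0

def min_csf_contamination(sample_ul: float) -> float:
    """Minimum CSF volume (uL) a reported mouse 'perilymph' sample must contain."""
    return max(0.0, sample_ul - MOUSE_TOTAL_PERILYMPH_UL)

print(min_csf_contamination(5.0))  # → 4.0: a claimed 5 uL sample is >= 4 uL CSF
print(min_csf_contamination(0.5))  # → 0.0: half a microliter can be pure perilymph
```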
Because it tells us that the cells of the inner ear themselves may play an important role in innate immunity, in ways that we don't fully understand. This is a normal inner ear, not even acoustically traumatized, and yet we see that this protein, which is normally expressed in inflammatory cells, is expressed in normal cells of the inner ear. And when we looked for the expression of its receptor, the DARC receptor, which stands for Duffy Antigen Receptor for Chemokines, we find that it is abundantly expressed in spiral ganglion neurons and inner hair cells, the two cell types that are very vulnerable to noise exposure but also to all sorts of other insults, including inflammatory insults in the setting of autoinflammation. So this whole pathway is very intriguing, and we are pursuing it further to try to identify its function in the inner ear. In the meantime, it's interesting that mice that are knockouts for this DARC receptor are actually protected against acoustic trauma. That was published by another group, but they didn't know the mechanism, and they didn't look at the expression of this protein in the inner ear.
So, taken together, today I have shown you how we are using a multi-pronged approach to enable personalized diagnosis and therapy of hearing loss. I've shown you examples of how we are developing tools to enable cellular-level diagnosis of hearing loss, and several examples of the therapeutic approaches we are pursuing, including drugs, where we're talking about both drug repurposing and the development of new molecular therapeutics, as well as gene therapy and devices. When it comes to devices, I really don't have time to talk about them today, so I will just very briefly mention that in collaboration with Anantha Chandrakasan at MIT we have developed a prototype chip for a fully implantable cochlear implant. This is a cochlear implant that would have no external components at all, and we have made it in the form factor of existing cochlear implants. Instead of an external microphone, it uses a piezoelectric device that detects vibrations of the malleus. Whenever you're talking about fully implantable devices, you have to wonder how exactly they will be powered. If you put in an implantable battery, it has to last for many years, ideally 20 years or longer. That's a difficult problem, and it's still unsolved. But in the meantime, we have looked into the possibility of harvesting energy from the body. Energy has been harvested from the body before, from kinetic movement for example, or from thermal gradients, but these are unstable sources of energy, and the harvesting systems built for them were really large; people would need to wear a backpack of equipment to harvest energy from kinetic movement. But we actually harvested energy from the inner ear itself, from the endocochlear battery.
And it turns out that, as you know, there is the endocochlear potential, about a hundred millivolts of positive potential, which is really critical for hearing because it drives conduction currents through the sensory cells. Of course we couldn't tap a lot of it; we could take only a small fraction, so we took less than 1% of that energy and used it to power a radio transmitter located a meter away. That was the first demonstration of the feasibility of harvesting energy from an electrochemical gradient in the body, and our energy harvester is small enough to fit on the tip of an index finger, which means that in future applications it could fit in the middle ear or the mastoid. But that energy is not enough to power a hearing aid or a cochlear implant, because we are talking about picowatts of harvested power, compared to the milliwatts required to power those devices. However, in the future it may be part of a hybrid battery where a small part of the power is harvested from the inner ear, or instead of powering devices you could use it for sensing, because sensing doesn't need to be done continuously.
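The picowatt-versus-milliwatt comparison above can be made concrete with a quick order-of-magnitude calculation. The specific wattage values here are illustrative placeholders at the scales named in the talk, not measured figures.

```python
# Order-of-magnitude power-budget comparison. The exact values are
# illustrative: ~picowatt-scale harvested power vs. ~milliwatt-scale
# power draw for a cochlear implant, per the scales quoted in the talk.
import math

harvested_w = 1e-12  # ~1 picowatt (illustrative)
required_w = 1e-3    # ~1 milliwatt (illustrative)

gap_orders = math.log10(required_w / harvested_w)
print(f"shortfall: ~{gap_orders:.0f} orders of magnitude")  # → ~9
```

A gap that large is why continuous powering is out of reach, while intermittent, duty-cycled sensing, which only needs energy in short bursts, remains a plausible use for the harvested trickle.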
So with that, I'd like to acknowledge the people in my lab who have contributed to the work I have described, my many collaborators across the globe, and the funding agencies that have supported us. And I'm ready to take questions.