The use of artificial intelligence (AI) in communication sciences and disorders (CSD) has the potential to enhance patient care and outcomes. Last year’s Research Symposium at the ASHA Convention focused on how researchers are working to evaluate the accuracy, utility, limitations, and risks of using this technology in our work.

Guest Editor Jordan Green has coordinated a collection of articles developed from the Research Symposium on Artificial Intelligence in Communication Sciences and Disorders. Read more about these articles in the latest issue of the Journal of Speech, Language, and Hearing Research (JSLHR) below!

Automatic Speech Recognition

In the introduction, Dr. Green highlights the potential benefits and the limitations of AI for audiologists and speech-language pathologists. Green writes that although new technologies have the potential to enhance the quality of life for millions of people with communication impairments, computer algorithms cannot replace the work of scientists and clinicians.

The forum opens with four articles on developing automatic speech recognition (ASR) for individuals with speech disorders. Hasegawa-Johnson et al. describe a community-supported project that collects and transcribes dysarthric and dysphonic speech to improve ASR performance for these speakers.

Tobin and colleagues show the value of training ASR on conversational speech from individuals with disordered speech. Next, Cave describes how people living with amyotrophic lateral sclerosis use ASR technology and how clinicians can help them customize it for practical, everyday use. Last, Romana et al. demonstrate how updating transcripts, adding disfluency annotations, and including word timings can improve existing ASR data sets (see the sketch below).
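To make the idea of timestamped, disfluency-annotated transcripts concrete, here is a minimal sketch of what one annotated utterance record might look like. The field names and disfluency labels are illustrative assumptions, not the actual FluencyBank Timestamped schema.

```python
# A word-level annotated utterance for ASR training data.
# Field names and disfluency labels are illustrative assumptions,
# not the actual FluencyBank Timestamped schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WordToken:
    word: str                         # orthographic form of the word
    start_s: float                    # word onset, in seconds
    end_s: float                      # word offset, in seconds
    disfluency: Optional[str] = None  # e.g., "repetition" or "filled_pause"

utterance = [
    WordToken("I", 0.10, 0.25),
    WordToken("I", 0.30, 0.45, disfluency="repetition"),
    WordToken("um", 0.60, 0.85, disfluency="filled_pause"),
    WordToken("want", 0.95, 1.20),
    WordToken("water", 1.25, 1.80),
]

# Dropping annotated disfluencies recovers the speaker's intended message,
# while the word timings support alignment and segmentation.
intended = " ".join(t.word for t in utterance if t.disfluency is None)
print(intended)  # -> I want water
```

Representations along these lines let researchers both detect disfluencies and recover a speaker's intended message from the same transcript.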

More on AI in Speech

For individuals with locked-in syndrome, neural speech decoding could enable communication at a quicker pace than existing technologies, as Dash and colleagues show in their article. Next, Liss and Berisha argue for a methodological shift in how researchers analyze speech: moving from speech features to clinically validated speech measures.

In the final article of the forum, Ramanarayanan discusses how clinicians can assess individuals' neurological and mental health during remote clinical assessment, showing how eye gaze, body movement, and physiological signals, in addition to speech, could inform these evaluations.
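As a rough illustration of how such signal streams might be combined, here is a minimal late-fusion sketch: per-modality feature vectors are concatenated into a single representation for a downstream model. The modality names and feature dimensions are assumptions for illustration, not the pipeline described in the article.

```python
# A minimal late-fusion sketch: per-modality feature vectors are
# concatenated into one representation for a downstream classifier.
# The modalities and dimensions are illustrative assumptions,
# not the pipeline described in the article.
import numpy as np

def late_fusion(features: dict) -> np.ndarray:
    """Concatenate modality feature vectors in a fixed order."""
    order = ["speech", "gaze", "movement", "physiology"]
    return np.concatenate([features[m] for m in order])

example = {
    "speech": np.random.rand(13),     # e.g., summary acoustic features
    "gaze": np.random.rand(4),        # e.g., fixation/saccade statistics
    "movement": np.random.rand(6),    # e.g., head/body pose summaries
    "physiology": np.random.rand(3),  # e.g., heart-rate variability stats
}

fused = late_fusion(example)
print(fused.shape)  # (26,) -> input to a downstream model
```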

Emerging Technologies in the ASHA Journals

By keeping up with the latest in AI and other emerging technologies, you can discover new ways to give the people you work with the care and expertise that they need. The ASHA Journals publish more than 800 articles every year, helping ASHA members find the evidence they need for their clinical and research work.

To read the entire forum, check out the latest issue of JSLHR, or explore the individual articles below. Thanks for reading, and we hope to see you at this year's Research Symposium at the ASHA Convention!

Explore the Forum

Cave, R. (2024). How people living with ALS use personalized automatic speech recognition technology to support communication. Journal of Speech, Language, and Hearing Research, 67(10), 4186–4202. https://doi.org/10.1044/2024_JSLHR-24-00097

Dash, D., Ferrari, P., & Wang, J. (2024). Neural decoding of spontaneous overt and intended speech. Journal of Speech, Language, and Hearing Research, 67(10), 4216–4225. https://doi.org/10.1044/2024_JSLHR-24-00046

Green, J. R. (2024). Artificial intelligence in communication sciences and disorders: Introduction to the forum. Journal of Speech, Language, and Hearing Research, 67(10), 4157–4161. https://doi.org/10.1044/2024_JSLHR-24-00594

Hasegawa-Johnson, M., Zheng, X., Kim, H., Mendes, C., Dickinson, M., Hege, E., Zwilling, C., Moore Channell, M., Mattie, L., Hodges, H., Ramig, L., Bellard, M., Shebanek, M., Sarı, L., Kalgaonkar, K., Frerichs, D., Bigham, J. P., Findlater, L., Lea, C., . . . MacDonald, B. (2024). Community-supported shared infrastructure in support of speech accessibility. Journal of Speech, Language, and Hearing Research, 67(10), 4162–4175. https://doi.org/10.1044/2024_JSLHR-24-00122

Liss, J., & Berisha, V. (2024). Operationalizing clinical speech analytics: Moving from features to measures for real-world clinical impact. Journal of Speech, Language, and Hearing Research, 67(10), 4226–4232. https://doi.org/10.1044/2024_JSLHR-24-00039

Ramanarayanan, V. (2024). Multimodal technologies for remote assessment of neurological and mental health. Journal of Speech, Language, and Hearing Research, 67(10), 4233–4245. https://doi.org/10.1044/2024_JSLHR-24-00142

Romana, A., Niu, M., Perez, M., & Mower Provost, E. (2024). FluencyBank timestamped: An updated data set for disfluency detection and automatic intended speech recognition. Journal of Speech, Language, and Hearing Research, 67(10), 4203–4215. https://doi.org/10.1044/2024_JSLHR-24-00070

Tobin, J., Nelson, P., MacDonald, B., Heywood, R., Cave, R., Seaver, K., Desjardins, A., Jiang, P.-P., & Green, J. R. (2024). Automatic speech recognition of conversational speech in individuals with disordered speech. Journal of Speech, Language, and Hearing Research, 67(10), 4176–4185. https://doi.org/10.1044/2024_JSLHR-24-00045