The impact of artificial intelligence (AI) over the past few years has been felt in nearly every industry around the world. New tools have given individuals free (or inexpensive) access to large language models (LLMs), which interpret text prompts to understand and generate language.

Given the pervasiveness of AI and LLMs, it’s no surprise that this year the theme of Peer Review Week is “Rethinking Peer Review in the AI Era.” As we join with publishers across the globe to reflect on the role of AI in peer review, read more below to see what we’re already doing in this rapidly growing area.

AI at ASHA Journals

We use a number of AI-based or AI-aided systems at ASHA Journals to improve the author, reviewer, and reader experience. These systems, including Paperpal Preflight, Similarity Check, and Scite, have been thoroughly vetted and have seen wide-scale adoption throughout the publishing industry.

We understand AI and LLMs are quickly evolving. To that end, we’re monitoring their progress and will post any updates to the Artificial Intelligence section of our Author Resource Center or other applicable sections of the ASHA Journals Academy.

The Risks of AI in Peer Review

There are, of course, many ways an individual may engage with an AI system, ranging from running a basic search and reading the resulting summary to asking a chatbot questions and receiving detailed reports. When doing a manuscript review, a reviewer may need to verify information online and end up consuming AI-generated results in the process, and that's just fine. This situation happens every day and is a reality of modern internet searches.

The key thing editors, editorial board members, and reviewers need to keep in mind is that no portion of a manuscript should ever be uploaded to an AI system during review for any purpose. This restriction protects the intellectual property of authors submitting to ASHA Journals and honors our shared commitment to the confidentiality of peer review. It also ensures that authors receive thoughtful feedback from subject matter experts.

Working With AI

Authors can use AI when writing articles, as long as they disclose the use of AI- or LLM-generated text and assume responsibility for the accuracy, originality, and integrity of that text. Work generated from an LLM should be properly referenced like material gathered from any other source.

Although AI can make many aspects of your work easier, we ask our authors, reviewers, and editors to ensure that it doesn't compromise the quality of work expected of ASHA Journals. Although the potential of these tools continues to grow, AI cannot match the intelligence and experience our contributors share.

Artificial intelligence is advancing constantly, and at a rate that cannot be ignored. Rather than ignoring it, members of the peer review community can benefit from learning more about this technology and making informed decisions about its future in peer review. We believe that the best, safest, and most ethical way to use AI in your work is in collaboration with human intelligence and expertise.

The ASHA Journals Board will continue to follow the progression of AI in scholarly publishing to determine the future of AI use at ASHA Journals. In addition to using and evaluating these tools, we encourage you to share your opinions with us via email or by contacting us on X or Bluesky.