One of the central tenets of evidence-based practice (EBP) is that not all evidence is equal. Most of us would agree that the findings of some studies are far more persuasive than those of others. Does the famous saying about pornography attributed to Supreme Court Justice Potter Stewart, “I don’t know how to define it, but I know it when I see it,” apply here, or is there some set of objective criteria on which we can reach widespread agreement about what constitutes strong evidence?

The usual approach to this issue is to apply a scheme of levels of evidence, that is, a formal system for categorizing evidence based on study design and, in some cases, study quality and relevance. More than 100 such schemes are in use today (Lohr, 2004); typically they rank study designs hierarchically, with well-designed randomized controlled trials near the top and expert opinion near the bottom. Adopting a single scheme of levels of evidence for use throughout the Association would offer ASHA clear benefits. A single standard, with terminology familiar to all, would simplify communication. A single scheme would also help ensure consistency across documents developed by ASHA, such as practice guidelines and systematic reviews. An additional benefit would follow from adopting a scheme developed, and in widespread use, outside of ASHA: doing so would help counter suspicions of bias on the part of Association staff or members in their assessment of evidence.

Are there existing schemes that would be an ideal fit for ASHA?

Is it even theoretically possible to have a single scheme used throughout the Association, or should there instead be separate schemes for diagnostic studies, treatment efficacy studies, cost-effectiveness analyses, and so on?

Is there any potential harm in adopting a single scheme of levels of evidence throughout the Association?

What tradeoffs between reliability (everyone using the same scheme) and validity (the imperfections of any single scheme) are we willing to accept?

What are the characteristics of the ideal scheme of levels of evidence for ASHA?

ASHA’s Advisory Committee on Evidence-Based Practice and the staff of the National Center for Evidence-Based Practice in Communication Disorders are currently grappling with these questions.

Resource

Lohr, K. N. (2004). Rating the strength of scientific evidence: Relevance for quality improvement programs. International Journal for Quality in Health Care, 16(1), 9–18.