speechBITE started because other allied health databases were being developed in Australia, each focusing on a different profession: PEDro had been developed for physiotherapy, OTseeker for occupational therapy, and there was a new multidisciplinary database called PsycBITE, which I was a founding member of, developed for psychological treatments for acquired brain impairment.
Having worked on PsycBITE and gotten that up and running, I could see that we really needed something similar for the speech pathology profession. There was really nothing available to clinicians and students that let them quickly and easily access the evidence. So, in 2006, I approached Speech Pathology Australia, they gave me some seed funding, and we got this database up and running. We launched it in 2008.
What are some of the considerations in developing an evidence database?
It’s really just about the quick access. There are big databases around like Medline, PsycINFO, Ovid, and Scopus. The problem is that, as a speech pathologist, if you enter search terms like “treatment for people with brain injury, looking at their cognitive communication disorders” and do a Boolean search with lots of “ANDs,” you can end up with two thousand papers. And your average clinician, or researcher, doesn’t have time to go through all those papers.
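[Editor’s note: to make the scale of the problem concrete, here is a minimal sketch of running that kind of broad Boolean search programmatically. It assumes the Biopython library and PubMed’s E-utilities; the query terms are purely illustrative and are not speechBITE’s actual search strings.]

```python
# A broad Boolean PubMed search, sketched with Biopython's Entrez module.
# Assumes: pip install biopython. The query terms are illustrative only.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address

# Several ANDed concept blocks, each ORing synonyms -- the kind of
# query that can easily return thousands of hits.
query = (
    '("brain injury" OR "traumatic brain injury") '
    'AND ("cognitive communication" OR "communication disorders") '
    'AND (therapy OR treatment OR rehabilitation)'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
record = Entrez.read(handle)
handle.close()

print(f"Hits: {record['Count']}")  # often thousands -- too many to screen by hand
```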
speechBITE has been set up so that we do all the searching. We search nine databases every month. We get about two thousand papers a month, and we go through all of those abstracts and screen them for eligibility to be entered onto speechBITE.
To be entered on speechBITE, a paper has to be relevant to a speech pathologist’s practice or future practice and within the scope of speech pathology practice, so it has to be about communication and/or swallowing in people who have, or are at risk of having, problems. It has to contain empirical data. And it has to be a fully peer-reviewed, full publication: we don’t include conference abstracts, book chapters, or conference proceedings. It has to be a peer-reviewed scientific article. So, all of a sudden, it’s a much tighter group of papers for a speech pathologist to look at. We then index all those papers so that a person can find them on our database. Finally, the group comparison studies, that is, randomized controlled trials and non-randomized controlled trials, are all rated for their methodological quality.
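[Editor’s note: as an illustration only, the eligibility rules just described could be expressed as a simple screening filter. This is a sketch, not speechBITE’s actual pipeline; the record fields and keyword list are assumptions.]

```python
# Hypothetical sketch of speechBITE's stated eligibility criteria as a filter.
# The Record fields and the SCOPE_TERMS list are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    abstract: str
    peer_reviewed_full_article: bool  # not an abstract, proceeding, or book chapter
    has_empirical_data: bool

SCOPE_TERMS = ("communication", "swallowing", "dysphagia", "speech", "language")

def eligible(rec: Record) -> bool:
    """Apply the stated inclusion rules: within the scope of speech pathology,
    contains empirical data, and is a fully peer-reviewed publication."""
    text = (rec.title + " " + rec.abstract).lower()
    in_scope = any(term in text for term in SCOPE_TERMS)
    return in_scope and rec.has_empirical_data and rec.peer_reviewed_full_article

# Screening a month's worth of abstracts is then a simple filter:
# shortlist = [r for r in monthly_records if eligible(r)]
```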
There are a lot of tools available for rating methodological quality. We chose the PEDro scale because it’s feasible to apply. Rating methodological quality is a very technical skill: when we’re doing these ratings, we go through the paper looking for evidence of specific reporting. It takes quite a lot of skill, and it took a lot of training for all of us to do it.
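[Editor’s note: for readers unfamiliar with these scales, a rating is essentially a set of yes/no judgements about what a paper reports. The sketch below uses the items of the original PEDro scale, where item 1 is recorded but not counted, giving a score out of 10; treat the item wording as an approximation of the PEDro-P rather than the official instrument.]

```python
# Sketch of a PEDro-style rating as a yes/no checklist.
# Items follow the original PEDro scale; PEDro-P wording may differ slightly.
PEDRO_ITEMS = [
    "eligibility criteria specified",      # item 1: recorded, NOT counted
    "random allocation",
    "concealed allocation",
    "groups similar at baseline",
    "blinding of participants",
    "blinding of therapists",
    "blinding of assessors",
    "outcomes from >85% of participants",
    "intention-to-treat analysis",
    "between-group statistical comparison",
    "point measures and variability reported",
]

def pedro_score(ratings: dict[str, bool]) -> int:
    """Total score out of 10 -- item 1 is excluded by convention."""
    return sum(ratings[item] for item in PEDRO_ITEMS[1:])

# Example: a trial reporting randomization and between-group statistics,
# but no blinding, concealment, or baseline comparison.
example = {item: False for item in PEDRO_ITEMS}
example.update({
    "random allocation": True,
    "between-group statistical comparison": True,
    "point measures and variability reported": True,
})
print(pedro_score(example))  # -> 3
```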
To help others learn how to do it, we’ve got a free online tutorial on the speechBITE website. Researchers and research students use it too: nearly all of our PhD students do the PEDro-P training so that they learn these skills and know what to look for when they’re reading a paper. We’ve put a lot of effort into this training tutorial, and all of our raters have to achieve a certain level of accuracy before we can accept them as a speechBITE rater.
So, these have all been challenges: making the database scientifically robust and sound, while at the same time making it a workable solution for critically appraising these papers on an ongoing basis.
What are some of the common methodological weaknesses you encounter in the rating process?
That’s the challenge: you think that because a paper is a randomized controlled trial, it must be gold standard. But what we’ve found is a huge range. Some papers say they are a randomized controlled trial when there wasn’t actually random allocation of participants, so we’ve classified those papers as “non-randomized controlled trials.” It takes some skill to be able to tell what is true randomization and what is quasi-randomization.
Sometimes they may well do, for example, a comparison of the groups at baseline, before the treatment, but they don’t report it adequately. So, when you’re rating a paper, it’s difficult to tell whether the groups were similar to start with. And if they’re not similar to start with, there’s a fundamental flaw through the entire paper unless it’s accounted for statistically. Because if the treatment group is “better” before you’ve even started the treatment, then at the end of the study it’s hard to tell whether it was the treatment that worked, or whether they were just better to start with. Reporting the similarity of groups at baseline has been one of our lowest-scoring criteria; papers are simply not getting the point for it.
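[Editor’s note: for illustration, one standard way to “account for it statistically” is to adjust the post-treatment outcome for the baseline score with an ANCOVA-style regression. Here is a minimal sketch on simulated data, assuming the numpy, pandas, and statsmodels libraries; it is not drawn from any particular paper.]

```python
# Sketch: adjusting for a baseline imbalance with an ANCOVA-style model.
# All data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40
group = np.repeat(["control", "treatment"], n // 2)
# Simulate a treatment group that starts off "better" at baseline.
baseline = rng.normal(50, 10, n) + np.where(group == "treatment", 5, 0)
post = baseline + rng.normal(2, 5, n)  # no true treatment effect here

df = pd.DataFrame({"group": group, "baseline": baseline, "post": post})

# Adjusting the post-treatment score for baseline separates any treatment
# effect from the head start the treatment group already had.
model = smf.ols("post ~ baseline + group", data=df).fit()
print(model.params)  # the 'group' coefficient is the baseline-adjusted effect
```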
Reporting generally is an issue, in things like whether there’s an independent assessor. When you run a clinical trial, it’s not that hard to have somebody else score your data rather than scoring it yourself, or to build that into your research design. Yet many papers don’t have an independent assessor. That’s another risk of bias.
What we’re saying is that there are many risks of bias that could be relatively easily addressed at the research design stage, but it’s just not happening.
How can rating tools help researchers?
I think what the PEDro rating system has done, certainly in the field of physiotherapy, is that once researchers become aware of what we’re looking for in a well-designed study, they start designing well-designed studies. PEDro scores have increased on the physiotherapy database, which has been around much longer, having launched in 1999. They’ve certainly demonstrated that having the scale there for researchers to think about when designing their research has improved the reporting, and therefore the validity and believability of those findings.
I’m hoping it’s the same for us in speech pathology, as we all get more familiar with the PEDro-P. I certainly use it when I design my randomized controlled trials now: I have a checklist and I go through it, “Have I covered that? Have I done that? Have I got the independent assessors built in? Am I doing a between-groups statistical comparison?” Having those items in your head really helps you design a better study.
What we’re hoping, over time, is that researchers will look at some of these tools and use them in designing their treatment research, and therefore will come up with more robust findings.
As a researcher, if you want to do treatment research, it’s really important to make sure you’re familiar with these tools. There seem to be a lot of them around, but they’re invaluable in helping you plan what you’re doing.
Once you start producing really high-quality research, it opens a lot of doors in terms of where you can publish that work, and the impact you can have.
How can evidence databases help clinicians?
What I’m keen for is for speech pathologists to think about using speechBITE almost every day. If you’ve got a clinical research question, you should be able to get an answer out of speechBITE in less than three minutes. It’s a very fast, easily accessible way of getting to the best evidence efficiently.
We know that we can take the results of these sorts of studies and apply them with confidence. What we want to do is give speech pathologists confidence about a paper. That’s in answer to a frequently reported barrier to using evidence-based practice: “I don’t know how to assess whether a paper is a good one or not. I don’t have enough time to read it anyway. There are way too many papers out there.” So, speechBITE is trying to deal with all those issues and almost fast-track you to a point where you’ve got a very short list of papers that you might want to look at.
It may be that there’s one good systematic review published in the current year that will summarize previous research for you and give you some direction. And I think that’s the value. Clearly, as a speech pathologist, it’s then up to you to figure out how you’re going to use that evidence: whether it fits your practice context, how it matches what your client wants out of their treatment, and how it marries with your own expert opinion. So we know that it’s a multifaceted decision, and speechBITE certainly doesn’t give all those answers. But it really does provide you with the absolute latest evidence.