
Treatment Fidelity in Single-Subject Designs

An overview of approaches and considerations for measuring treatment fidelity.

CREd Library and Ralf Schlosser

DOI: 10.1044/cred-ssd-r101-001

Treatment fidelity, also called procedural integrity or treatment integrity, refers to the methodological strategies used to evaluate the extent to which an intervention is implemented as intended. Maintaining high treatment fidelity helps ensure that changes observed during a study reflect genuine changes in the participant’s behavior in response to the intervention, rather than unintended changes in how the experimenter delivered it.

Video: An Overview of Treatment Fidelity in Single-Subject Research

Well, there are a number of terms in use. One of them, as you know, is treatment fidelity. But there are other, more or less equivalent terms, such as “procedural integrity,” “treatment integrity,” or “procedural reliability,” so there are four or five different terms being used. What the concept means, in relation to treatment, is that you measure whether the intervention is implemented as you had planned. You have a treatment protocol, and you want to make sure the treatment is carried out as you had intended. Why is that important? It’s important because if you don’t know how the treatment was implemented, it becomes very difficult to know what the causal factor in the change was. You reach a certain outcome, but you cannot really attribute it to something concrete, because you don’t know how well the treatment was implemented. It affects internal validity. It affects external validity. It’s a very important aspect of treatment research.

How do you measure treatment fidelity?

You have to think about how the treatment breaks down into steps. You want those steps to reflect the active ingredients of the intervention. Sometimes in the literature you see steps laid out and measured, everything with good reliability and good treatment integrity, but the steps may not necessarily reflect the key construct. It’s important that those steps relate to your theory of change. You have a certain theory, and you want to make sure the steps reflect it. That’s one aspect.
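As a loose illustration of that idea, here is a minimal sketch, in Python, of a fidelity checklist in which each step is tagged with the hypothesized active ingredient it reflects, so steps that measure nothing essential stand out. All step names and mechanisms here are hypothetical, not from any particular protocol:

```python
# A hypothetical fidelity checklist: each scored step records which
# hypothesized active ingredient (theory-of-change mechanism) it reflects.
fidelity_checklist = [
    {"step": "Present target item and model the sign",
     "active_ingredient": "modeling"},
    {"step": "Wait 5 seconds for an independent response",
     "active_ingredient": "time delay"},
    {"step": "Deliver praise plus requested item on correct response",
     "active_ingredient": "reinforcement"},
    {"step": "Tidy materials between trials",
     "active_ingredient": None},  # housekeeping; not tied to the theory of change
]

for item in fidelity_checklist:
    tag = item["active_ingredient"] or "no hypothesized mechanism -- reconsider"
    print(f'{item["step"]} -> {tag}')
```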

The other challenge is that you have to think about what method you are going to use to evaluate treatment fidelity. You can use self-monitoring: the experimenter him- or herself basically does check marks or takes notes. That’s one method. The second method is when you have a second observer, and the second observer takes notes or records how well the experimenter does. And the third method is when both the experimenter and the second observer take notes, and then you compare them and derive what is called interobserver agreement on treatment fidelity. The first and second methods are not mutually exclusive; you can do both. So that’s a big thing to think about in terms of measuring treatment fidelity.
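To make the third method concrete, here is a minimal sketch of point-by-point interobserver agreement on a fidelity checklist, assuming both the experimenter (self-monitoring) and an independent second observer scored the same protocol steps as implemented (1) or not (0). The step scores shown are hypothetical:

```python
# Point-by-point interobserver agreement (IOA) on treatment fidelity:
# agreements divided by total items, expressed as a percentage.
def percent_agreement(obs_a: list[int], obs_b: list[int]) -> float:
    if len(obs_a) != len(obs_b):
        raise ValueError("Both observers must score the same steps.")
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return 100.0 * agreements / len(obs_a)

# One session, six protocol steps (hypothetical scores):
experimenter = [1, 1, 0, 1, 1, 1]  # self-monitoring record
observer     = [1, 1, 0, 1, 0, 1]  # independent second observer

print(f"IOA on treatment fidelity: {percent_agreement(experimenter, observer):.1f}%")
# -> IOA on treatment fidelity: 83.3%
```

The same function can also be applied per step across sessions, to flag components the two observers score inconsistently.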

Another challenge is: How many sessions do you need to check? Do you need 100%, or is it okay to have maybe 20% to 30%? There are no clear-cut rules about this. But in general, journal editors and reviewers like to see fidelity assessed for at least about 20% of the observations.
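As one way to operationalize that guideline, the sketch below randomly selects a proportion of sessions for fidelity assessment; the 20% floor and the session count are assumptions for illustration:

```python
# Randomly sample session indices for fidelity checks, with a minimum
# proportion of the total (e.g., the 20% floor discussed above).
import math
import random

def sample_fidelity_sessions(n_sessions: int, proportion: float = 0.2,
                             seed: int | None = None) -> list[int]:
    k = max(1, math.ceil(n_sessions * proportion))
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, n_sessions + 1), k))

# e.g., 6 of 30 sessions selected for fidelity assessment:
print(sample_fidelity_sessions(30, proportion=0.2, seed=42))
```

In a single-subject design you might also stratify the draw so that each phase (e.g., baseline, intervention, maintenance) is represented in the fidelity sample.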

What advice do you have for developing and implementing a treatment fidelity protocol?

In the beginning, when I started doing this kind of work, I was under the illusion that I could design this at the desk, and then the experimenter would implement it as I had planned.

When you design an intervention at your desk, it all looks very deliberate and concrete and logical. Then you actually ask somebody to implement it, and there are steps missing, unforeseen circumstances happen, and the whole thing falls apart. Then you have to start from scratch. It’s an iterative process to develop a good treatment fidelity protocol.

I have learned the hard way that you always have to pilot. Pilot studies are really important for multiple reasons, but for treatment fidelity they are essential. You don’t know whether your plan is actually doable. You prepare a data collection sheet, and the observer says, “This is too cumbersome. I couldn’t keep up.” Especially if it’s done live. So you might want to think about that: Do I observe live, or do I video record? There are pros and cons to each. Live observation requires somebody to be there right then, whereas video recording adds flexibility: you can watch it any time, and it can be replayed. So you have to think about that, too.

What I’ve observed in real implementation is that sometimes the experimenter feels very self-conscious about being watched. They might feel anxious: “Am I doing the right thing?” The researcher has to be careful about how to approach this. This is not about “big brother is watching you.” It’s more about, “We’d like for you, as the experimenter, to do the best job possible to deliver the intervention. So we’d like to work with you and give you feedback as you proceed, and we can help you do better, if needed.” You frame it like, “We are in this together. We’re trying to deliver the best intervention we can. Let’s make it happen.” Then that kind of anxiety goes away.

How does your approach to treatment fidelity change through the course of a research program?

There’s a progression of research; many people in our field have written about this. Initially, you want really good control and really good treatment integrity. That’s the primary objective: you want implementation to be as close to perfect as possible. But then, as you move into real practice, all kinds of constraints are imposed on implementing an intervention; it’s different from a study, as you know. There, it sometimes becomes important to do a study with less integrity, where treatment fidelity becomes the independent variable that you’re manipulating: Can the same treatment outcomes be obtained with less-than-perfect implementation? Because, assuming the clinician is not a robot and has to be responsive to what happens with the clients, you want them to be more flexible. But can you still obtain the same outcomes? You should follow this progression in measurement: do multiple studies, and hopefully get to the point where we can reach outcomes in real-life settings, with real-life expectations, with what is reasonable in terms of fidelity.

~ From a video interview with Ralf Schlosser, Northeastern University.

The Need for Special Consideration of Treatment Fidelity in Single-Subject Experimental Designs

A defining feature of single-case studies is that each condition remains in effect for extended periods of time (e.g., from several days to several weeks) to allow sufficient data to be collected from which judgments will be made. Thus, the possibility of implementation drift and of incorrect implementation is logically high. Another defining feature of single-case methods is intra-subject and/or inter-subject replication of the experimental conditions. Monitoring relevant variables across the course of an investigation can assist in assuring that the defining variables of the respective conditions are implemented similarly in each replication. Finally, applied single case studies often, but not always, involve implementation by humans. […] The possibility of bias and drift are well known.

~ From Wolery (1994).

Practically speaking, researchers expect that treatment agents will implement a treatment as planned. This is particularly acute in treatments that must be implemented by third parties such as teachers, parents, or research assistants. When significant behavior changes occur, the researcher often assumes that these changes were due to the intervention. However, it may well be the case that the treatment agent changed the intervention in ways unknown to the researcher and these changes were responsible for behavior change. In contrast, if significant behavior changes do not occur, then the researcher may assume falsely that the lack of change is due to an ineffective or inappropriate intervention. In this case, potentially effective treatments that would change behavior substantially if they were implemented properly may be discounted and eliminated from future consideration for similar problems.

[…] Stability in a dependent variable does not necessarily imply the stable application of the independent variable. [Further,] unless a researcher knows precisely what was done, how it was done, and how long it was done, then replication is impossible.

~ From Gresham (1996).

Steps and Considerations for Measuring Treatment Fidelity

  • Provide clear, unambiguous, and comprehensive operational definitions of the independent variable(s). Consider the intervention across four dimensions: verbal, physical, spatial, and temporal.

  • Determine the criteria for accuracy for each component of the independent variable.

  • Determine the number or percent of sessions for which it is practical to evaluate treatment fidelity.

  • Record the occurrence/nonoccurrence of the implementation of each component. Calculate the percentage of sessions in which each component was implemented (component integrity), and the percentage of components implemented within each session (session integrity); a worked sketch follows this list.

  • Report treatment integrity data and/or methods when publishing the results of studies.

~ From Gresham, Gansle & Noell (1993) and Gresham (1996).
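As a concrete illustration of the component- and session-integrity calculations above, here is a minimal sketch over a hypothetical occurrence/nonoccurrence record; the component names and scores are invented for illustration:

```python
# Rows = sessions, columns = protocol components;
# 1 = implemented as operationally defined, 0 = not.
components = ["model prompt", "wait time", "feedback", "reinforcement"]
record = [
    [1, 1, 1, 1],  # session 1
    [1, 0, 1, 1],  # session 2
    [1, 1, 1, 0],  # session 3
    [1, 1, 1, 1],  # session 4
]

n_sessions = len(record)
n_components = len(components)

# Component integrity: % of sessions in which each component occurred.
for j, name in enumerate(components):
    pct = 100.0 * sum(row[j] for row in record) / n_sessions
    print(f"Component integrity, {name}: {pct:.0f}%")

# Session integrity: % of components implemented within each session.
for i, row in enumerate(record, start=1):
    pct = 100.0 * sum(row) / n_components
    print(f"Session integrity, session {i}: {pct:.0f}%")
```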

See Kaderavek and Justice (2010) and Kovaleski (2015) for a selection of sample fidelity checklists.

Further Reading

Billingsley, F.F., White, O.R., & Munson, R. (1980). Procedural reliability: A rationale and an example. Behavioral Assessment, 2, 229–241.

Gresham, F.M., Gansle, K.A., & Noell, G.H. (1993). Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26(2), 257–263.

Gresham, F.M. (1996). Treatment integrity in single-subject research. In Franklin, R.D., Allison, D.B., & Gorman, B.S. (Eds.), Design and analysis of single-case research (pp. 93–117). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hinckley, J.J., & Douglas, N.F. (2013). Treatment fidelity: Its importance and reported frequency in aphasia treatment studies. American Journal of Speech-Language Pathology, 22, S279–S284.

Kaderavek, J.N., & Justice, L.M. (2010). Fidelity: An essential component of evidence-based practice in speech-language pathology. American Journal of Speech-Language Pathology, 19, 369–379.

Kovaleski, J.F. (2015). Treatment integrity protocols. RTI Action Network. Available from the RTI Action Network website at www.rtinetwork.org.

McIntyre, L.L., Gresham, F.M., DiGennaro, F.D., & Reed, D.D. (2007). Treatment integrity of school-based interventions with children in the Journal of Applied Behavior Analysis 1991–2005. Journal of Applied Behavior Analysis, 40, 659–672.

Schlosser, R. (2002). On the importance of being earnest about treatment integrity. Augmentative and Alternative Communication, 18(1), 36–44.

Wolery, M. (1994). Procedural fidelity: A reminder of its functions. Journal of Behavioral Education, 4(4), 381–386.

Ralf Schlosser
Northeastern University

The content of this page is based on selected clips from a video interview conducted at the ASHA Convention. Additional digested resources and references for further reading were selected by CREd Library staff.

Copyright © 2015 American Speech-Language-Hearing Association