
Monitoring and Reporting Treatment Fidelity

CREd Library and Jacqueline Hinckley

DOI: 10.1044/cred-vb-r101-001

Video: Monitoring and Reporting Treatment Fidelity

Treatment fidelity means ensuring that the treatment in a research study is conducted consistently and reliably. That is very important because the outcomes of treatment research end up affecting patient care and the quality of care that patients receive.

In most of our literature in communication sciences and disorders, we don’t report whether we actually monitored treatment fidelity. When it comes to accumulating evidence, compiling treatment studies into literature syntheses, practice guidelines, and so on, that omission ends up being a weakness in our overall literature. Reporting what was done is very important.

The best way to assess treatment fidelity in a research study is, first of all, to be very clear about the treatment you’re setting up. A treatment manual is very important, and it can also be published in ASHA Journal supplementary materials. Then, in addition to that, monitor fidelity, either as the treatment is being administered in the research study, or at least retrospectively, by looking back at samples of the treatment sessions and ensuring that every single box on the treatment protocol was checked. It is easy for clinicians, even in a research study, to drift away from the treatment protocol, even though they may be doing so unintentionally. We want to make sure that’s not happening, because if it does happen, then the outcome you’re reporting from your research is not an outcome of the treatment you thought you were giving.

It’s just like reporting the reliability and validity of any of our other measures. Usually we talk about reliability and we report it in relation to the dependent variables. But in the case of a treatment research study, reporting treatment fidelity is the equivalent of reporting reliability of our independent variable. That is equally critical.

I think that when we don’t do it, the details of the treatment become much less accessible, whether in the publication itself or later, when someone wants to build on or replicate that research. There may not be a clear knowledge base or data on what that treatment was. In that case replication becomes much more difficult, and our whole evidence base for how we treat patients is affected.

Treatment Fidelity Concepts

Treatment fidelity is a measure of the reliability of the administration of an intervention in a treatment study.

Moncher and Prinz (1991) included two concepts in their basic definition of treatment fidelity:

Treatment integrity refers to how well a treatment condition was implemented as planned (Vermilyea, Barlow, & O’Brien, 1984; Yeaton & Sechrest, 1981).

Treatment differentiation refers to whether the treatment conditions being studied differed from each other sufficiently so that the intended manipulation of the independent variable can be assumed to have occurred.

Therapist drift […] refers to the modification of a treatment protocol in small and gradual ways, unintentionally or unknowingly, in which a clinician varies the original treatment protocol in an attempt to respond to a client’s specific behaviors (Peterson, Homer, & Wonderlich, 1982; Waller, 2009).

Type III error [refers to] when treatment fidelity measures are not carried out, [and] it is not possible to know whether the results of the study are attributable to the planned treatment or to the treatment that was actually implemented (Dobson & Cook, 1980; Linnan & Steckler, 2002).

~ From Hinckley & Douglas (2013). Emphasis added.

Measuring Fidelity

What is measured?

An intervention’s “how and why” are sometimes referred to as the active ingredients. […] Active ingredients most typically include specific treatment targets (e.g., specific grammatical forms), the therapeutic techniques (e.g., modeling these forms during interactive play), and the requirements for dosage (e.g., highly concentrated exposures several times per week). In combination, the active ingredients describe how and why the intervention brings about predicted outcomes.

As interventions move through the scale-up process, researchers typically will document an intervention’s active ingredients as they occur in real life. Fidelity measures should document active ingredients relative to procedure (i.e., did the interventionist follow the right steps and provide the appropriate dosage?) and quality (i.e., how well was the intervention delivered?).

Documentation of treatment procedure is generally straightforward; the researcher often uses a checklist to document fidelity.
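As an illustration only (not a tool from the cited sources), procedural fidelity from such a checklist is commonly summarized as the percentage of protocol steps observed in a session. The step names, the session data, and the scoring function below are hypothetical:

```python
# Hypothetical protocol checklist: the steps a treatment manual might
# require in every session. These names are illustrative only.
PROTOCOL_STEPS = [
    "stated session goal",
    "modeled target form",
    "provided required number of exposures",
    "gave corrective feedback",
    "recorded participant responses",
]

def procedural_fidelity(observed_steps):
    """Return the percent of protocol steps completed in one session."""
    completed = sum(1 for step in PROTOCOL_STEPS if step in observed_steps)
    return 100.0 * completed / len(PROTOCOL_STEPS)

# One observed session in which 4 of the 5 required steps occurred.
session = {
    "stated session goal",
    "modeled target form",
    "gave corrective feedback",
    "recorded participant responses",
}
score = procedural_fidelity(session)  # 4 of 5 steps -> 80.0
```

A study would typically set an a priori criterion (for example, a minimum acceptable percentage) and report how many sampled sessions met it.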

Assessing fidelity relative to intervention quality is more difficult than documenting procedural fidelity. Assessment of treatment quality captures the manner in which a treatment is delivered (Mihalic, 2004). Qualitative measures document the dynamic processes of treatment essential to the intervention’s active ingredients. […] This component of fidelity assessment seeks to differentiate between treatments implemented well versus interventions implemented poorly.

~ From Kaderavek & Justice (2010).

How is it measured?

Direct fidelity measures occur when the practitioner is directly observed either live or via a videorecording and fidelity is assessed using some sort of objective observational tool with a priori coding categories. Direct observation results in the most thorough and objective data, whether one is assessing procedural or process intervention components. Negative aspects of direct observation include (a) the time and personnel requirement and (b) the fact that direct observation may not reflect the practitioner’s “natural” implementation because he or she is aware of the observation (Cochrane & Laux, 2008). However, despite its challenges, direct observation is considered the gold standard.

Indirect fidelity measures are an alternative to direct assessment; indirect fidelity measures include self-report checklists and rating scales, interviews, logs, and permanent products (e.g., a client satisfaction survey and examples of student work following an educational intervention). Self-report checklists and rating scales allow practitioners to rate their compliance with targeted behaviors.

~ From Kaderavek & Justice (2010).

We can improve our treatment fidelity practices by attending to three recommended levels of treatment fidelity (Lichstein, Riedel, & Grieve, 1994). First, treatment delivery, including measures of treatment fidelity, should be monitored to ensure that clinicians are delivering the treatment in the intended manner. Strategies to ensure fidelity of treatment delivery include use of a detailed, scripted treatment manual; structured training; supervisory monitoring and feedback; and delivery and accuracy checklists (Burgio et al., 2001).

A second recommended level of treatment fidelity is treatment receipt, or reporting by the person receiving the treatment. Measures of treatment receipt could include either a performance measure (for example, performance of homework) or a self-reported measure about the treatment components.

The third recommended level of treatment fidelity is treatment enactment. Measures of treatment enactment could include direct observation of the treatment as it is being delivered and/or reviews of clinician documentation that was completed during treatment administration. Examples of treatment fidelity measures encompassing all of these levels have been developed in the field of psychology (Gearing et al., 2011).

~ From Hinckley & Douglas (2013).

How is fidelity measurement evaluated?

The implications for fidelity measures vary at different levels of the scale-up process. During efficacy studies, the interventionist or researcher carefully identifies the essential ingredients fundamental to the intervention and sets a priori limits on acceptable levels of fidelity. Typically, treatment implementation in efficacy studies is carefully designed to achieve 100% fidelity to the prototype. In fact, a critical quality indicator of an efficacy study is the assurance that the IV (independent variable) has been implemented with a high degree of fidelity (Gersten et al., 2004).

However, as an intervention is scaled up so as to examine its effectiveness, it is assumed that the fidelity of implementation will decrease as a result of contextual demands and individual variation. Throughout the scale-up process, researchers and practitioners should evaluate the fidelity of implementation and consider the possible effects of fidelity variation.

Questions to ask when evaluating intervention research include the following:

  • Does the researcher clearly describe the active ingredients of the IV?
  • Does the researcher provide manualization and training to increase the fidelity of the IV?
  • What procedures and tools (i.e., logs and coding of videotapes) were used to ensure IV fidelity?
  • Does the researcher provide data documenting the fidelity of implementation of the intervention?
  • Was the intervention implemented with high fidelity?
  • If the fidelity of implementation varied, did the researcher account for fidelity variation as a moderating factor?

~ From Kaderavek & Justice (2010).

Strategies to Increase Treatment Fidelity

Design of Study

Treatment fidelity goals in this category include establishing procedures to monitor and decrease the potential for contamination between active treatments or treatment and control, procedures to measure dose and intensity (e.g., length of intervention contact, number of contacts, and frequency of contacts), and procedures to address foreseeable setbacks in implementation (e.g., therapist dropout over the course of a multiyear study).

~ From Bellg et al. (2004).
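The dose and intensity measures named above (length of intervention contact, number of contacts, and frequency of contacts) can be summarized from a simple session log. This sketch is illustrative only; the dates, minutes, and layout are hypothetical:

```python
# Illustrative session log: (session date, minutes of treatment contact).
# All values are hypothetical.
from datetime import date

sessions = [
    (date(2015, 3, 2), 45),
    (date(2015, 3, 4), 45),
    (date(2015, 3, 9), 30),
    (date(2015, 3, 11), 45),
]

# Number of contacts over the course of the intervention.
number_of_contacts = len(sessions)

# Total length of intervention contact, in minutes.
total_minutes = sum(minutes for _, minutes in sessions)

# Frequency of contacts: sessions per week over the treatment span.
span_weeks = ((sessions[-1][0] - sessions[0][0]).days + 1) / 7
contacts_per_week = number_of_contacts / span_weeks
```

Reporting these quantities per participant makes it possible to verify that the delivered dose matched the dose specified in the protocol.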


The first set of active ingredients—identification of treatment targets and therapeutic techniques—is typically specified when an intervention is manualized. To increase fidelity, an intervention should have a treatment manual detailing specific behaviors to take place during the treatment (e.g., targets to be addressed, techniques and materials to be used, and expected behaviors of the participants). The treatment manual describes the gold standard of treatment implementation against which fidelity can be assessed.

~ From Kaderavek & Justice (2010).


Fidelity should document both the procedure and the process of intervention; documentation of fidelity increases the consistency of implementation (Cochrane & Laux, 2008; Gresham et al., 2000; O’Donnell, 2008). Logs, check sheets, and patient surveys (i.e., questioning the individual receiving intervention about the components of the intervention) are viable fidelity tools.

~ From Kaderavek & Justice (2010).

Computer-Assisted Intervention

Examples of computer-assisted intervention have been published, and their effectiveness has been documented (Shriberg, Kwiatkowski, & Snyder, 1990). The role of the SLP can vary in computer-assisted intervention. In some cases, the software “drives” the goal setting and stimuli exposure and modifies intervention targets in response to the individual’s accuracy levels (e.g., Segers & Verhoeven, 2004). […] The software may potentially prompt the SLP to deliver specific interventions within specific dosage parameters and serve as a tracking device. The role of software in increasing intervention fidelity is likely to be a focus of future intervention research.

~ From Kaderavek & Justice (2010).


Training of Providers

The adequacy of training to implement the intervention needs to be evaluated and monitored on an individual basis both during and after the training process. General strategies in this category include standardizing training, measuring skill acquisition in providers, and having procedures in place to prevent drift in skills over time.

~ From Bellg et al. (2004).

Delivery of Treatment

The gold standard to ensure satisfactory delivery is to evaluate or code intervention sessions (observed in vivo or video- or audiotaped) according to a priori criteria.

Creating forums or case conferences where providers can discuss intervention cases and review skills required for each intervention can help ensure that interventions are standardized across providers and are being conducted according to protocol.

~ From Bellg et al. (2004).

Receipt of Treatment

Receipt of treatment involves processes that monitor and improve the ability of patients to understand and perform treatment-related behavioral skills and cognitive strategies during treatment delivery.

We recommend that researchers be able to answer the following questions: How will you verify that subjects understand the information you provide them with? How will you verify that subjects can use the cognitive and behavioral skills you teach them or evoke the subjective state you train them to use? How will you address issues that interfere with receipt?

~ From Bellg et al. (2004).

Reporting Treatment Fidelity

Treatment fidelity […] can affect the internal validity of a study and potentially the outcome of the study itself. In building a scientific basis for clinical practice, we must be certain that a treatment that may ultimately become an evidence-based practice has been consistently administered in order to ensure that the conclusions of the study are valid. These individual studies may be entered into systematic reviews or meta-analyses on which clinical practice guidelines are built. Recommendations for clinical practice will come from this research; thus, a lack of treatment fidelity reporting could affect the treatment that is ultimately received by large numbers of individuals (Bhar & Beck, 2009; Cherney, Patterson, Raymer, Frymark, & Schooling, 2008).

~ From Hinckley & Douglas (2013).

Recommendations for Reporting Trials of Nonpharmacologic Treatment

Description of the different components of the interventions […].

The information that is required for a complete description of nonpharmacologic treatments depends on the type of intervention being tested. […] For rehabilitation, behavioral treatment, education, and psychotherapy, authors should report qualitative and quantitative data. Qualitative data describe the content of each session, how it is delivered (individual or group), whether the treatment is supervised, the content of the information exchanged with participants, and the instruments used to give information. Quantitative data describe the number of sessions, timing of each session, duration of each session, duration of each main component of each session, and overall duration of the intervention. It is also essential to report how the intervention was tailored to patients’ comorbid conditions, tolerance, and clinical course.

Details of how the interventions were standardized.

Assessment of nonpharmacologic treatments in RCTs presents special difficulties because of the complexity of the treatment and the variability found across care providers and centers. […] Authors should describe any method used to standardize the intervention across centers or practitioners. […] The description of any standardization methods is essential to allow adequate replication of the nonpharmacologic treatment. We recommend that authors allow interested readers to access the materials they used to standardize the interventions, either by including a Web appendix with their article or a link to a stable Web site. Such materials include written manuals, specific guidelines, and materials used to train care providers to uniformly deliver the intervention.

Details of how adherence of care providers with the protocol was assessed or enhanced.

Assessing treatment adherence is essential to appraising the feasibility and reproducibility of the intervention in clinical practice. […] Authors should report the use of any adherence-improving strategies. […] Readers must be aware of these methods and strategies in order to accurately transpose the results of the trial into clinical practice and appraise the applicability of the trial’s results.

~ From Boutron et al. (2008). Footnotes omitted.

Further Reading: Treatment Fidelity Concepts, Components, and Examples

Dumas, J. E., Lynch, A. M., Laughlin, J. E., Smith, E. P. & Prinz, R. J. (2001). Promoting intervention fidelity: Conceptual issues, methods, and preliminary results from the early alliance prevention trial. American Journal of Preventive Medicine, 20(1), 38–47 [Article] [PubMed]

Gearing, R. E., El-Bassel, N., Ghesquiere, A., Baldwin, S., Gillies, J. & Ngeow, E. (2011). Major ingredients of fidelity: A review and scientific guide to improving quality of intervention research implementation. Clinical Psychology Review, 31(1), 79–88 [Article] [PubMed]

Kaderavek, J. N. & Justice, L. M. (2010). Fidelity: An essential component of evidence-based practice in speech-language pathology. American Journal of Speech-Language Pathology, 19(4), 369–379 [Article] [PubMed]

Lane, K. L., Bocian, K. M., Macmillan, D. L. & Gresham, F. M. (2004). Treatment integrity: An essential—but often forgotten—component of school-based interventions. Preventing School Failure: Alternative Education for Children and Youth, 48(3), 36–43 [Article]

Moncher, F. J. & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11(3), 247–266 [Article]

Peterson, L., Homer, A. L. & Wonderlich, S. A. (1982). The integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15(4), 477–492 [Article] [PubMed]

Further Reading: Barriers to Implementing Treatment Fidelity Procedures

Perepletchikova, F., Hilt, L. M., Chereji, E. & Kazdin, A. E. (2009). Barriers to implementing treatment integrity procedures: Survey of treatment outcome researchers. Journal of Consulting and Clinical Psychology, 77(2), 212 [Article] [PubMed]

Sanetti, L. M. & Reed, F. D. (2012). Barriers to implementing treatment integrity procedures in school psychology research: Survey of treatment outcome researchers. Assessment for Effective Intervention, 37(4), 195–202 [Article]

Further Reading: Frameworks for Treatment Fidelity

Bellg, A. J., Borrelli, B., Resnick, B., Hecht, J., Minicucci, D. S., Ory, M., Ogedegbe, G., Orwig, D., Ernst, D. & Czajkowski, S. (2004). Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology, 23(5), 443 [Article] [PubMed]

Borrelli, B., Sepinwall, D., Ernst, D., Bellg, A. J., Czajkowski, S., Breger, R., Defrancesco, C., Levesque, C., Sharp, D. L. & Ogedegbe, G. (2005). A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. Journal of Consulting and Clinical Psychology, 73(5), 852 [Article] [PubMed]

Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J. & Balain, S. (2007). A conceptual framework for implementation fidelity. Implement Sci, 2(1), 40 [Article] [PubMed]

Lichstein, K. L., Riedel, B. W. & Grieve, R. (1994). Fair tests of clinical trials: A treatment implementation model. Advances in Behaviour Research and Therapy, 16(1), 1–29 [Article]

Santacroce, S. J., Maccarelli, L. M. & Grey, M. (2004). Intervention fidelity. Nursing Research, 53(1), 63–66

Waltz, J., Addis, M. E., Koerner, K. & Jacobson, N. S. (1993). Testing the integrity of a psychotherapy protocol: Assessment of adherence and competence. Journal of Consulting and Clinical Psychology, 61(4), 620 [Article] [PubMed]

Further Reading: Reporting Treatment Fidelity

Boutron, I., Moher, D., Altman, D. G., Schulz, K. F. & Ravaud, P. (2008). Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: Explanation and elaboration. Annals of Internal Medicine, 148(4), 295–309 [Article] [PubMed]

Gresham, F. M., Gansle, K. A., Noell, G. H. & Cohen, S. (1993). Treatment integrity of school-based behavioral intervention studies: 1980–1990. School Psychology Review, 22(2), 254–272

Gresham, F. M., Macmillan, D. L., Beebe-Frankenberger, M. E. & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research & Practice, 15(4), 198–205 [Article]

Hinckley, J. J. & Douglas, N. F. (2013). Treatment fidelity: Its importance and reported frequency in aphasia treatment studies. American Journal of Speech-Language Pathology, 22(2), S279–S284 [Article] [PubMed]

Jacqueline Hinckley
University of South Florida

The content of this page is based on selected clips from a video interview conducted at the ASHA Convention, with additional digested resources and references for further reading selected by CREd Library staff.

Copyright © 2015 American Speech-Language-Hearing Association