The evidence-based practice movement has sought to tighten the link between research and practice. This movement has placed increased scrutiny on the quality and rigor of clinical practice research to ensure that the ‘tools of the trade’ are being objectively validated as having causal and beneficial effects on recipients.
The Standard of Evidence
The gold standard for any single piece of evidence in support of a clinical practice is the randomized controlled trial (Shadish, Cook, & Campbell, 2002). Reviews of the literature across many of the prevention and intervention sciences, such as education, mental health, and speech-language pathology, show how few practices or tools currently have even one randomized controlled trial in support of their use. Even for those tools or practices with high-quality research support, findings across studies do not always converge. Thus, even well-designed, rigorously conducted randomized controlled trials can raise as many questions as they answer: questions about whether a program works, for whom it works, and what conditions are necessary to ensure its benefit. To build a body of work that answers these nuanced and clinically important questions, researchers need to consider more than the quality standards for individual research studies. It is also important to follow a programmatic process for building knowledge across studies.
The Programmatic Research Cycle
A programmatic approach to clinical practice research organizes research activities systematically so that the field’s knowledge of its practices becomes increasingly trustworthy and applicable. The steps are best understood as an iterative process: after the confirmation steps (steps 3 and 4) are taken, new questions and ideas will arise and will likely require a return to the exploration steps (steps 1 and 2) (see also Campbell et al., 2000).
Step 1: Explore Clinically Relevant Problems
Clinical practice research begins with “use-inspired” questions that are exploratory in nature. Use-inspired questions add to our theoretical understanding of a phenomenon and also have clear applied benefits or implications. An example of a use-inspired question is, ‘What do young children look at when reading books with adults?’ Findings demonstrated that preschool-aged children rarely looked at the print when they read a book with an adult (e.g., Justice, Skibbe, Canning, & Lankford, 2005). These results shifted the field’s thinking away from the importance of exposure to print and toward the idea of attention to print, and raised applied questions about how to help young children take advantage of rich print-learning opportunities. Importantly, these studies laid a foundation for more applied exploration through the design and piloting of an intervention.
Step 2: Design and Pilot an Intervention
The research generated from the previous step was critical to focusing research activities around a key idea or problem. In the example given, the “problem” was children’s lack of attention to print during shared reading and, thus, their lack of learning about print through that activity. In this step, the researcher considers how the practices of a field (e.g., practices of speech-language pathology, practices of education) can have a direct or indirect impact on ameliorating that problem. As such, one of the most important elements in the design and pilot of an intervention is the theoretical model of change (Mrazek & Haggerty, 1994). Such a change model requires thoughtful specification of the intervention activities and of how those activities will lead to a particular desired outcome. As a concrete example, let’s consider the model of change underlying the design of the Print Referencing intervention (Justice, McGinty, Piasta, Fan, & Kaderavek, 2010). Print Referencing is a practice in which adults draw young children’s attention to print during shared reading through comments, questions, or non-verbal behaviors (e.g., ‘What is the letter that starts this word?’; ‘Where do I start reading?’; ‘This is a long word.’). Use of Print Referencing is expected to enhance preschoolers’ attention to (and potentially interest in) print during shared reading (thus addressing the ‘problem’ described in Step 1). This enhanced attention to print is then expected to accelerate children’s learning about print and alphabet letters in preschool (short-term outcome), and this enhanced knowledge of print is expected to lead to higher levels of reading success in kindergarten and first grade (long-term outcome).
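One common way to make such a change model explicit is to write it as a sequential mediation chain. The sketch below is illustrative only: X denotes assignment to Print Referencing, M1 attention to print, M2 short-term print knowledge, and Y later reading success, and the path coefficients a, b, and c are hypothetical parameters, not estimates from the cited studies.

```latex
% Hypothetical formalization of the Print Referencing change model
% as a sequential mediation chain (X -> M1 -> M2 -> Y):
\begin{align}
  M_1 &= a\,X   + e_1 && \text{attention to print during shared reading} \\
  M_2 &= b\,M_1 + e_2 && \text{print and alphabet knowledge in preschool} \\
  Y   &= c\,M_2 + e_3 && \text{reading success in kindergarten and first grade}
\end{align}
```

Written this way, the hypothesized long-term effect of the intervention is the product abc, which makes plain that the whole chain fails if any single link (e.g., children’s attention to print) does not respond as expected.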
It is critical to understand the feasibility of each incremental step in the change model. For example, will adults use the materials, supports, and Print Referencing technique as expected? Will the children like the lessons as designed and respond as expected? When questions such as these are addressed and the intervention is fully developed, it should be tested in a small pilot study.
Step 3: Efficacy Trial
The efficacy trial rigorously tests a fully developed intervention that reflects an established and promising model of change (based on the pilot study). The efficacy trial is larger and often more methodologically rigorous than the pilot, as it seeks to establish a causal link between the intervention and the desired outcomes (see Shadish et al., 2002, for more discussion of design issues related to causal inference). Notably, the efficacy trial examines and measures an intervention conducted under tightly controlled and optimal circumstances (e.g., Flay et al., 2005). This means that design decisions within an efficacy trial tend to favor internal validity over external validity and tend to put controls in place that minimize variability. Decisions about recruitment, training, monitoring of intervention implementation, and inclusionary and exclusionary criteria for the sample are all relevant to ensuring optimal conditions for the trial. Careful measurement of these conditions allows the researcher to articulate the circumstances under which the findings of an efficacy trial were obtained (Song & Herman, 2010). Returning to the example of the Print Referencing intervention: should the researcher allow the trial to occur in any preschool program, or only in state-run preschool programs? Can participating programs be in two different states? What if the personnel requirements for preschool teachers differ substantially across states? What if one state has universal preschool (serving all children) and the other has targeted enrollment (serving only children who meet social or demographic risk criteria)? Efficacy trial design decisions must weigh issues of practicality and intervention definition (i.e., a classroom-based intervention needs to be tested in a classroom, not a laboratory) against issues of internal validity (i.e., how to ensure the most control over the conduct of the intervention and its change model).
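To see why random assignment is so central to the efficacy trial’s causal claim, consider the minimal simulation below. It is a hedged sketch, not an analysis of any actual Print Referencing trial: the sample size, outcome scale, and effect size are invented for illustration. The point is simply that randomization balances pre-existing differences between groups, so a simple contrast between group means estimates the intervention’s effect.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-arm efficacy trial: all numbers are invented for
# illustration and do not come from the Print Referencing studies.
n = 200                      # children enrolled (100 per arm)
true_effect = 4.0            # assumed gain on a print-knowledge measure

# Random assignment balances pre-existing differences between groups.
treated = rng.permutation(np.repeat([0, 1], n // 2))
baseline = rng.normal(50, 10, size=n)   # pre-existing child variability
outcome = baseline + true_effect * treated + rng.normal(0, 5, size=n)

# With randomization, the difference in group means is an unbiased
# estimate of the intervention's causal effect.
estimate = outcome[treated == 1].mean() - outcome[treated == 0].mean()
se = np.sqrt(outcome[treated == 1].var(ddof=1) / (n // 2)
             + outcome[treated == 0].var(ddof=1) / (n // 2))
print(f"estimated effect: {estimate:.2f} (SE {se:.2f})")
```

In an actual classroom-based trial, the analysis would also need to account for the nesting of children within classrooms (e.g., with multilevel models), but the core logic of randomization is unchanged.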
Step 4: Effectiveness Trial
The next step of the programmatic research cycle is the effectiveness trial. The effectiveness trial continues to build knowledge about an intervention’s power by examining its impact under circumstances considered to be “real-world conditions” (Flay et al., 2005). For example, recent efficacy studies of the Print Referencing intervention examined its use within classrooms serving primarily four-year-old children participating in targeted preschool programs (i.e., programs that prioritized enrollment of children with social and/or demographic risk factors). All the children in the study had sufficient English-language skills to be assessed in English. These controls over recruitment and participation helped minimize the variability of the sample for the efficacy trial but resulted in a sample that was not representative of the broader population of preschool children (e.g., it did not represent dual language learners, children with severe disabilities, or classrooms serving both 3-year-old and 4-year-old children). If this broader population is the intended audience for the intervention, an important shift between the efficacy trial and an effectiveness trial would be the sampling approach. Seeking to generalize results from an efficacy trial to the broader intended population is the key purpose of an effectiveness trial.
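The consequence of this shift in sampling can be illustrated with a small back-of-the-envelope calculation. In the sketch below, the subgroup effects and population shares are entirely hypothetical; the point is that when an intervention’s effect varies across subgroups, the average effect a study observes depends on how closely its sample mirrors the intended population.

```python
# Hypothetical illustration: subgroup effects and shares are invented,
# not estimates from the Print Referencing studies.
effects = {
    "monolingual 4-year-olds": 4.0,   # assumed subgroup-specific effects
    "dual language learners": 2.5,
    "3-year-olds": 1.5,
}

# An efficacy-style sample drawn from one subgroup only, vs. a sample
# that mirrors the broader intended population.
efficacy_shares = {"monolingual 4-year-olds": 1.0,
                   "dual language learners": 0.0,
                   "3-year-olds": 0.0}
population_shares = {"monolingual 4-year-olds": 0.60,
                     "dual language learners": 0.25,
                     "3-year-olds": 0.15}

def average_effect(shares):
    """Population-average effect implied by a given sampling mix."""
    return sum(shares[group] * effects[group] for group in effects)

print(f"efficacy-style sample:  {average_effect(efficacy_shares):.2f}")
print(f"representative sample:  {average_effect(population_shares):.2f}")
```

Under these invented numbers, the representative sample yields a smaller average effect (3.25 vs. 4.00), which is one reason effectiveness trials often report more modest impacts than the efficacy trials that precede them.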
Summary
This short article highlighted key steps within a programmatic approach to clinical practice research. The information discussed in this article does not address all the methodological and conceptual decisions that must be made at each step. Rather, this article illustrates how high standards of evidence must be paired with a programmatic approach to clinical practice research to build a cumulative knowledge base for our discipline.
References
Campbell, M., Fitzpatrick, R., Haines, A., Kinmonth, A. L., Sandercock, P., Spiegelhalter, D., & Tyrer, P. (2000). Framework for design and evaluation of complex interventions to improve health. British Medical Journal, 321, 694–696.
Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., Moscicki, E. K., & Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6, 151–175.
Justice, L. M., McGinty, A. S., Piasta, S. B., Fan, X., & Kaderavek, J. N. (2010). Print-focused read-alouds in preschool classrooms: Intervention effectiveness and moderators of child outcomes. Language, Speech, and Hearing Services in Schools, 41, 504–520.
Justice, L. M., Skibbe, L., Canning, A., & Lankford, C. (2005). Pre-schoolers, print and storybooks: An observational study using eye movement analysis. Journal of Research in Reading, 28, 229–243.
Mrazek, P. J., & Haggerty, R. J. (1994). Reducing risk for mental disorders: Frontiers for prevention intervention research. Washington, D.C.: National Academy Press.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin Company.
Song, M., & Herman, R. (2010). Critical issues and common pitfalls in designing and conducting impact studies in education. Educational Evaluation and Policy Analysis, 32, 351–371.