Eight years ago, I (Natalie) asked the communication sciences and disorders (CSD) community if they thought “implementation science” was a buzzword or a game changer (Douglas, Campbell, & Hinckley, 2015). My colleagues and I went on to suggest that implementation science really could and would change the game of clinical research and clinical practice in our field—but what does this really mean?
Implementation science is “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health [and educational] services” (Bauer et al., 2015, p. 3). This is a fine definition, but it only scratches the surface as to why you might consider incorporating implementation science into your own work. Both clinicians and clinical researchers strive to provide treatments to clients that result in the best outcomes. However, in the traditional research pipeline, clinical researchers develop and test treatments in environments removed from the clinicians who will deliver them, from the contexts in which treatment will be provided (e.g., schools, hospitals), and from the larger policy structures (e.g., insurance) that often dictate the ins and outs of service provision.
Put simply, implementation science offers an alternative approach to a general question such as, “Why is this clinician not implementing evidence-based practice?” by arguing that we, as researchers, should empirically study the multitude of factors that influence service delivery. Asking these types of questions can yield data-driven solutions that mutually benefit the clinical researcher, the clinician, and, ultimately, the client.
Partnerships between these individuals are foundational to all aspects of implementation science work. We cannot conduct implementation research without research–practice partnerships being at the forefront of our thinking. See, for example, Alonzo and colleagues’ (2022) detailed discussion of research–practice partnerships in schools and Henrick and colleagues’ (2017) five key dimensions of cultivating partnerships. Building such partnerships might feel overwhelming, but it does not require a total overhaul of one’s clinical research program. We invite readers to consider where to begin by taking a closer look at options for stakeholder engagement and implementation science theories, models, frameworks, strategies, and outcomes.
Implementation Science Theories, Models, and Frameworks (TMF)
The field of implementation science is guided by implementation theories, models, and frameworks (TMF). Clinicians and researchers alike can benefit from these guiding TMF, which provide a structure for implementation efforts. In a recent scoping review of implementation science in CSD, over 40% of studies did not use an implementation framework (Douglas et al., 2022), which in many ways is quite understandable. TMF may seem overwhelming given the vast number that exist; however, various organizing schema have been created to categorize TMF by goal or purpose. For example, Nilsen (2015) categorized the theoretical approaches that have been used in implementation science into (a) those that describe and/or guide the process of translating research into practice (Process Models); (b) those that aim to understand and/or explain what influences implementation outcomes (Determinant Frameworks, Classic Theories, and Implementation Theories); and (c) those that evaluate implementation efforts (Evaluation Frameworks).
Thus, depending on your team’s specific implementation goal(s), certain TMF may be more useful than others. If you aim to implement an intervention into practice, for example, then a natural choice would be a process model (e.g., the Exploration, Adoption/Preparation, Implementation, Sustainment [EPIS] model or the Quality Implementation Framework [QIF]) that outlines specific steps to implementation. Alternatively, if you are looking to understand facilitators and barriers to implementation, then a determinant framework (e.g., the Consolidated Framework for Implementation Research [CFIR] or the Theoretical Domains Framework [TDF]) would help provide context. Finally, if you aim to evaluate implementation efforts, then a useful tool would be an evaluation framework (e.g., Reach, Efficacy, Adoption, Implementation, Maintenance [RE-AIM]) that examines these aspects of implementation. It is important to note that TMF are not “all-or-nothing” options: clinicians and researchers may use some elements of a TMF to meet their implementation objectives without using every component of it, and teams may elect to combine TMF to meet their implementation aims. Both the structure and the flexibility of implementation TMF provide helpful supports to move implementation efforts forward.
Implementation Science Strategies
Although the above frameworks provide a scaffold or guide for implementation, implementation strategies are the actions that teams take to support their implementation efforts. More than 70 distinct implementation strategies have been identified; however, some are more applicable than others depending on the implementation needs and challenges of a specific initiative, specific stakeholders, and a specific setting. For example, some strategies center on training and educating stakeholders (e.g., conducting ongoing trainings or distributing educational materials), whereas others involve stakeholder interrelationships (e.g., identifying and supporting “champions” and early adopters or organizing clinician implementation team meetings). Still other strategies are financial (e.g., using financial incentives) or involve changing the infrastructure (e.g., changing record-keeping systems or changing the physical structure and equipment; Powell et al., 2015). These strategies support the uptake of the implementation initiative, and their results shape implementation outcomes.
Implementation Science Outcomes
In the traditional research pipeline, outcomes generally focus on the results of a specific intervention (treatment outcomes) or of service delivery. In implementation science, by contrast, outcomes focus on the effects of deliberate steps taken to implement an intervention or practice of interest (Proctor et al., 2009, 2011); that is, implementation outcomes capture how a specific practice “works” within a setting. Table 1 presents the eight primary implementation science outcomes, their definitions, and possible research questions.
Table 1: Implementation science outcomes, definitions, and possible research questions
| Implementation outcome | Definition | Possible research question |
| --- | --- | --- |
| Acceptability | Satisfaction with the implementation initiative | Is the intervention acceptable to providers? |
| Adoption | Uptake, utilization, or intention to try the implementation initiative | How often are providers delivering the intervention? |
| Appropriateness | Perceived fit, relevance, or compatibility of the implementation initiative | Do providers feel that the intervention is appropriate for their patients and/or setting? |
| Cost | Financial cost of implementation efforts | Do the benefits of the intervention outweigh its cost? |
| Feasibility | Actual fit or suitability for everyday practice | Is the intervention feasible for providers to deliver? |
| Fidelity | Degree to which the implementation practice is delivered as intended | Are providers able to adhere to the intervention as studied? |
| Penetration | Degree of spread of the implementation initiative | Is the intervention spreading to other providers (either in the same setting or other settings)? |
| Sustainability | Maintenance of the implementation initiative | Do providers continue to use the intervention when active implementation efforts are over? |
The future of implementation science research in CSD is bright, considering the rapid increase in implementation science studies during the past 5 years (Douglas et al., 2022), the targeted funding mechanisms dedicated to implementation work, and the groundswell of clinical researchers and clinicians who recognize the urgent need for our best interventions to reach clients now. Instead of waiting 17 years for a fraction of our interventions to make it to the real world (Balas & Boren, 2000), we can leverage stakeholder engagement and implementation science frameworks, strategies, and outcomes to respond to our clients’ critical needs in a timely manner. We have the opportunity to model the authentic research–practice partnerships required to ensure that the human right of communication is indeed accessible for all.