For policymakers
There has been less research on the impact of policy on implementation success. However, case studies and qualitative research indicate that factors such as advocacy, mandates, policies related to evidence-based treatments, and fiscal policy have an impact on the success of an implementation effort. Importantly, a single mandate to implement evidence-based practices (EBPs) is not likely to result in widespread implementation. Raghavan and colleagues discuss the importance of using strategies across multiple levels of the ecology of implementation: adding resources to offset the initial costs of implementation and to promote organizational learning, developing fiscal policies and contracting strategies, monitoring implementation processes and outcomes, and aligning regulations and policies to support implementation. Legislation and social interventions are also needed to educate the public about evidence-based treatments, to promote mental health parity and access to services, and to decrease stigma.
Implementation activities and strategies that have been used in large-scale implementation programs include:
- Infrastructure building and commitment
- Stakeholder relationship building and communications
- Financing
- Continuous quality management
- Service delivery practices and training
Plans and resources need to be developed to support such multifaceted approaches. It is important to involve stakeholders at multiple levels (e.g., consumers/families, therapists, supervisors, administrators, and administrative support staff) throughout the implementation process.
While there is evidence to suggest that it is cost-effective in the long run to implement evidence-based treatments, there are short-term costs that can be difficult for agencies to absorb. Interviews with policymakers have highlighted strategies that have been used to successfully navigate the implementation of new interventions and programs and address barriers that arose along the way.
For agencies/systems
Often, providers who want to increase their capacity in an EBP begin by looking for training. Training is certainly an essential ingredient, but to successfully build long-term capacity, we draw upon implementation science for methods to promote the integration of scientific evidence into healthcare policy and practice. Implementation scientists seek to investigate and address major barriers to and facilitators of effective implementation (e.g., social, behavioral, attitudinal, organizational, economic, and management factors), develop and test new approaches to improve programming, and determine causal relationships between interventions and their impact. When considering the implementation of a new practice, a number of questions may be helpful to consider. Although partnering with an implementation expert may facilitate this process, the questions below may help to shape an implementation plan. Links are provided to publicly available surveys that can be used to gather information.
- Is there a need for a change that is substantial enough to create buy-in for the implementation effort?
- What are the target outcomes where we would like to see a change?
- Is now the right time to try to effect this change? Or are there competing demands that may be higher priority right now?
It is important to select an EBP that fits the need identified within the organization or system, and to consider what is known about the use of potential EBPs in organizations with similar structures, missions, and populations. To identify and evaluate candidate EBPs:
- The organization may conduct its own review of the research evidence for different practices
- The organization may consult practice guidelines (e.g., American Psychological Association, American Psychiatric Association, NICE, VA)
- Or the organization may consider repositories of information about EBPs, such as the SCP’s page on research-based treatments, NREPP, the California EBP Clearinghouse, the Society of Clinical Child and Adolescent Psychology, the National Child Traumatic Stress Network, and the Office of Juvenile Justice and Delinquency Prevention Model Programs Guide.
Implementation theory and research can provide guidance on the development of an implementation plan. Six key factors are important to consider:
- Inner setting: the characteristics of the organization itself, including its structure, its networks and communication systems, the culture of the organization, the degree to which the organization's climate is positive toward change, and its readiness for change
- Outer setting: the context within which the organization exists, including client needs and the resources to meet those needs, the degree to which the organization is linked to other organizations in its professional network, the degree to which those other organizations have already implemented EBPs, and external pressures from the larger system such as mandates or guidelines
- Qualities of the EBP: the strength of the evidence for the EBP, the degree to which selection of this EBP provides an advantage over the selection of others, the extent to which the EBP can be adapted, opportunities to pilot the intervention before full implementation, the complexity of the EBP, the appeal of the EBP's packaging, and its cost
- Characteristics of the individuals who will implement the EBP: their knowledge and beliefs about the EBP, their readiness and self-efficacy for change, and their level of commitment to the organization
- Process: development of a plan in advance of beginning, engagement of the implementation team, execution of the plan, and evaluation of progress
- The needs, perspectives, and preferences of the clients who will ultimately receive the EBP
The process of implementation is not complete after providers are trained and using the EBP within their practice setting. Many factors can impact the long-term sustainment of EBPs. Therefore, it is important to consider the following:
- What resources will be needed to sustain the EBP practice over time?
- How might the organization or system begin to plan for sustainability from the first stages of the implementation?
- How does the organization currently sustain practices? Can we build on the infrastructure that is already in place?
- How will sustainability be assessed and monitored? What steps will be taken if the assessment indicates sustainability is in jeopardy?
For treatment developers
Traditionally, interventions have been developed in research or academic settings under controlled conditions, with carefully selected clients treated by highly trained and supervised therapists. In contrast, clinicians in community settings are often under-supervised, may not have a graduate or professional degree, and may be serving clients with diverse co-occurring disorders. Further, interventions developed outside of a real-world context may not synchronize well with the constraints set by insurance payers, such as the frequency or duration of sessions. Lack of attention to these types of acceptability and feasibility barriers has contributed to a research-to-practice gap, in which many interventions with strong research support are not being implemented in the community because they are not acceptable to or feasible for providers.
An ultimate, though often implicit, goal of treatment development is to create interventions with utility in clinical practice. The involvement of end users during the development process may be key to creating pragmatic treatments for complex clients that can be transported to practice settings.
For example, in the deployment-focused treatment development model (DFM) articulated by Weisz and colleagues, intervention development is a multi-stage process that accounts for both rigorous science and implementation.
Intervention Development in the DFM:
Phase 1
In Phase 1 of the DFM, treatment developers examine and integrate the empirical literature and current practices, and garner stakeholder input in the manualization and delivery of the intervention under development.
Phase 2
During Phase 2, data on the acceptability and feasibility of the intervention, as well as the acceptability and feasibility of controlled research on the intervention, are gathered through an efficacy study.
Phase 3
In Phase 3 of the DFM, controlled case studies with real-world clinicians and clients are used to adapt the interventions to the social ecological context.
Phase 4
In Phase 4 and later phases of the DFM, the focus shifts to studies of effectiveness and readiness for implementation, including effectiveness studies, studies of mechanisms of action or key elements of the intervention, and studies that examine readiness for implementation in different settings and populations.
More recently, Lyon & Koerner suggested that the principles of user-centered design can be used to develop or refine interventions that will fit the treatment context and meet the needs of stakeholders. User-centered design is an approach that grounds the process of product or treatment development in information collected about the individuals and settings where products will ultimately be used. This process involves: clear identification of end users and their needs, prototyping and rapid iteration, simplifying existing intervention procedures, and exploiting natural constraints. The principles of user-centered design that can inform treatment development include:
- Learnability: opportunities to rapidly build understanding
- Efficiency: minimize time, effort, and cost
- Memorability: make sure users can remember and apply essential elements
- Error reduction: prevent errors or allow rapid recovery
- Satisfaction/reputation: make it acceptable and valuable
- Low cognitive load: simplify task structure
- Exploit natural constraints: build for the intended destination context
To facilitate the translation and implementation of new treatment models into clinical practice, it is important to consider typical clients, service providers, and settings, as well as the perspectives of stakeholder groups at all levels (e.g., clients, family members, therapists, administrators, payers), from the earliest stages of the intervention development and testing process. Developers are also encouraged to provide guidance for adapting the intervention in ways that remain consistent with its core theory and elements.
For trainers
Although 73 distinct implementation strategies have been identified, training and consultation are the implementation strategies that have received the greatest scientific attention to date. Increasingly, implementation programs include both training and consultation components to develop provider competency, often culminating in certification of clinicians who complete these programs (for examples, see Creed et al., 2016, Godley et al., 2011, and McManus et al., 2010). Some models and guidance for comprehensive training programs have been described as well (Godley et al., 2011, McManus et al., 2010, Stirman et al., 2010). Cascade or train-the-trainer models have also begun to garner support as effective and scalable strategies for training large numbers of clinicians when ongoing involvement of experts is not feasible.
Implementation Strategies
Training
Several studies have demonstrated that completion of in-person or web-based training can result in knowledge or skill acquisition, as compared with no training or with self-directed learning such as reviewing treatment manuals. Some evidence suggests that online training is superior to workshops for conveying knowledge, whereas workshops produce greater gains in self-efficacy. However, these methods appear to fall short of changing therapists' ability to apply skills in session, leaving therapists below the competency benchmarks required of clinicians in clinical trials. Studies have repeatedly demonstrated that some form of ongoing support after initial training is critical to translating acquired knowledge into adequate in-session skills and improving client outcomes.
Consultation
Several recent articles have reviewed the literature on consultation as an implementation strategy to increase therapists' application of skills. In contrast to supervision, which applies to interactions with non-independent trainees, consultation is the provision of ongoing support by a specialist in an effort to increase an independent therapist's competence in the area of the specialist's expertise. This type of competency-based training is a necessary component of helping therapists move from mere education about, or exposure to, evidence-based psychotherapies to the skillful application of new skills. Consultation provides therapists with timely corrective feedback on specific cases, followed by opportunities to modify and repeat their efforts. When consultation is added to initial training efforts, therapists are more likely to achieve benchmark levels of treatment fidelity than with training alone. Ongoing communication with trainers, even after consultation ends, may also moderate barriers to adoption, sustainment, and skill at the organization, client, and therapist levels.
Researchers have begun to investigate the elements of consultation associated with successful outcomes and have identified eight key components: ongoing training, problem solving of implementation barriers, engagement of providers, case support, accountability, mastery skill building, appropriate adaptation, and planning for sustained practice. Therapists have also identified connection with other therapists, authentic interaction around real cases, and consultant responsiveness to the therapists' needs as central to their perceptions of consultation. Evidence also suggests that modeling in consultation may have a greater impact on the use of EBP elements than behavioral rehearsal of those elements.
Therapist characteristics
Regardless of therapists' ages or backgrounds, those who participate in intensive EBP training typically demonstrate significant increases in skill post-training. In fact, recent research suggests that theoretical orientation prior to training, even a CBT orientation, is not a valid predictor of expert-rated competency at pre- or post-training. However, matching training strategies to therapist characteristics may be an efficient strategy for promoting skill acquisition and application. For example, immediate feedback during practice sessions was related to increased skill for therapists with no graduate degree and stronger vocabulary performance, whereas tape-based supervision was more effective for increasing skill among therapists with graduate degrees, and a combination of these strategies was most effective for therapists with weaker verbal and abstract reasoning performance.
Fidelity monitoring
Consistent with the model proposed by Aarons and colleagues, fidelity monitoring and feedback have been shown to increase successful implementation and sustainment. For example, in a large child services system, implementing an EBP with ongoing, supportive fidelity monitoring resulted in greater retention of trained staff than implementation without fidelity monitoring. Further, poor fidelity to EBPs can lead to decreased symptom change for clients. Therefore, fidelity monitoring is strongly recommended to improve both training and client outcomes.
Implementation in non-traditional settings/treatment milieus
Many EBPs in psychology are designed to be delivered by individual therapists providing individual, family, or group therapy. However, many organizations deliver services through a treatment team or milieu, in which treatment is delivered by a group of professionals and/or paraprofessionals who may work in tandem with a therapist or counselor. For example, EBPs may be integrated into schools, services for people experiencing homelessness, residential settings, or Assertive Community Treatment (ACT) teams. Although few studies have examined the degree to which EBPs may be implemented with fidelity across traditional and non-traditional settings, there is some evidence to suggest that EBPs can be delivered with equivalent levels of quality across settings. However, implementation of an EBP in these contexts may require special considerations or adaptations to maximize the opportunities inherent in a team approach.
- Individual: one professional
- Milieu: many professionals
The degree to which team members share an understanding of an EBP may influence the ways in which they coordinate their approach to treatment. The least coordination on this continuum is represented by multidisciplinary teams, in which the team draws on knowledge from different disciplines while each discipline stays within its own boundaries. For instance, therapists may be trained in an EBP while paraprofessionals and professionals from other disciplines work independently to deliver services according to their own training. (One may think of this as working in parallel.) Training all team members in the principles of an EBP allows the team to move toward an interdisciplinary approach, in which team members analyze and attend to links between disciplines, developing a coordinated and coherent approach. For example, training in the principles of an EBP can create a shared language and understanding of the mechanisms by which clients change, facilitating links across disciplines. (One may think of this as working jointly.) The most coordinated approach on this continuum, transdisciplinarity, involves members from different disciplines working together, using a shared conceptual framework while drawing on their discipline-specific knowledge and scope of work. (One may think of this as working together in coordination.)

Careful attention must be paid to ensuring that EBP training does not move staff outside of their scope of work, however. For example, the aim of training non-therapists in EBP principles is not to develop those staff into therapists; rather, the aim would be to help those staff members deliver their intended services in a way that is consistent with the EBP and extends the EBP work being delivered by therapists.
A number of pragmatic challenges emerge in the implementation of an EBP in a treatment milieu. By definition, these milieus provide a therapeutic context that extends far beyond the traditional 50-minute therapy hour. The milieu of an inpatient unit, for example, exists across three shifts of workers, on weekdays and weekends alike. Delivering training that reaches staff around the clock and across the calendar is resource-intensive. Staffing patterns may also have paraprofessionals rotating through different services, and high turnover rates further complicate the intensive training of all staff. Therefore, successful strategies may include integrating EBP-focused training into staff orientation, using visual cues to support EBP use in the milieu (e.g., posters with key points), and having new unit staff shadow EBP champions.
A second challenge relates to the communication structures within the milieu context. Assessment of communication within an organization is always an important early step in planning implementation, but these structures may be particularly salient in a treatment milieu. Poor communication among disciplines may present a significant barrier to movement along the multidisciplinary-interdisciplinary-transdisciplinary continuum, as well as to the communication of treatment plans, case conceptualizations, and other key ingredients across work shifts. If communication is weak, early phases of the implementation plan may include strategies to improve communication systems before engaging in EBP training.
When there is a need to adapt the content or delivery strategy of an EBP (e.g., shorter therapeutic interactions, simplification or tailoring of content), it is important to monitor both fidelity and outcomes to ensure that core aspects of the intervention—those most closely linked to the theory of change—and the expected changes in symptoms and functioning remain intact.
For researchers
A number of research questions are relevant to the implementation of an evidence-based psychological treatment. These include questions about the most effective strategies for implementation, the impact of implementation on clinical practice and outcomes, costs and cost-effectiveness, and how to spread and sustain the intervention in routine care. Different research designs are recommended for different types of questions. Bauer and colleagues provide an overview of implementation science research considerations that includes a discussion of study design, and a comprehensive overview of implementation research designs is under development. Depending on the nature of the research question, a number of implementation outcomes may be relevant to assess. Proctor and colleagues outline several, including feasibility, fidelity, acceptability, penetration, and cost. Service-level and patient-level outcomes can also be assessed in the context of implementation research. A number of frameworks can be used to inform implementation research, and the research design should take into account and assess the multilevel factors (e.g., system, organization, individual, and intervention) that can influence implementation success. These frameworks, along with more than 60 others that have been developed in health-related fields, provide guidance on what to assess and on steps to take to overcome potential barriers to implementation. They can also inform the selection of appropriate implementation strategies.
Implementation outcomes that may be relevant to assess include:
- feasibility
- fidelity
- acceptability
- penetration
- cost
Measures are available for an increasing number of the constructs that may influence implementation. In addition to their utility for preliminary assessment and the selection of implementation strategies, these measures may be entered into analyses as predictors of implementation outcomes, or they can be studied as mechanisms or proximal outcomes in tests of implementation strategies. For example, recent research has examined how implementation strategies impact factors such as organizational culture and the use of new treatments. Furthermore, when the goal of research is to identify successful strategies for implementation, an understanding of the determinants of successful implementation at multiple levels (e.g., system, organization, provider, client, and intervention) can inform the selection of the appropriate strategies to investigate. Qualitative and mixed-methods approaches are also common in implementation research. They can be used for a variety of purposes, including triangulation and expansion with quantitative data, better understanding the processes that facilitate or hinder implementation, and contextualizing findings.
Because factors at so many levels can influence implementation, and because of the variety of outcomes that can be studied, research on implementation often involves collaboration across multiple disciplines and areas of research. Experts in health economics, medical anthropology, organizational behavior, medical informatics, psychometrics, health services, biostatistics, and other fields often collaborate with psychologists, social workers, and healthcare professionals to study questions related to implementation.
Funding for implementation research is available through government funding agencies and foundations. Proctor and colleagues have specified ten characteristics of successfully funded implementation research applications.
The 10 Characteristics of Successfully Funded Implementation Research Applications
- demonstrating a need for the new intervention
- the evidence base for the intervention
- a clear conceptual model
- evidence that key stakeholders have been engaged in a meaningful way
- evidence of a setting's readiness or capacity (or, if this is what the research will target, evidence that researchers have collected some data on current readiness or capacity)
- clearly defined and conceptually justified implementation strategy/strategies (where applicable)
- team experience with the setting
- evidence of feasibility
- a clear and appropriate measurement plan and a sound analytic strategy that links back to the measurement plan
- information about how the proposal aligns with the policy or funding context
Resources to guide implementation research are increasing rapidly. Training resources and institutes for implementation research have been developed to provide training and mentorship on implementation theory, research design, and execution of implementation research. Webinars and cyberseries have also been developed with the goal of providing overviews and links to resources on implementation science. Additionally, annual and biannual conferences provide opportunities to present research and learn about advances in the field.