Dissemination and Implementation
Section 2
Dissemination and Implementation Frameworks
There are multiple models and conceptual frameworks for the targeted and widespread dissemination and implementation of health care interventions. A major challenge for research teams conducting pragmatic trials is selecting which model or framework will be beneficial (Zatzick et al. 2016). There are more than 60 models and frameworks for dissemination and implementation research (Tabak et al. 2012), and in this section, we will briefly describe select frameworks and introduce dissemination and implementation constructs that may be useful to investigators conducting pragmatic trials.
Exploration, Adoption/Preparation, Implementation, and Sustainability (EPIS)
The Exploration, Adoption/Preparation, Implementation, and Sustainability (EPIS) framework articulates variables that might influence the ability to effectively implement evidence-based practices and considers both the external (outer) and internal (inner or local) contexts of organizations (Aarons et al. 2011).
- The exploration phase involves awareness of an issue or problem that could lead to implementing an evidence-based intervention or quality improvement approach. The outer context, at its broadest, involves state and federal funding, state legislatures, private foundations, and patient advocacy groups that support or encourage exploration of an evidence-based practice. At a local level, exploration is driven by an organization’s collective knowledge and skills, readiness for change, and receptiveness to change (Aarons et al. 2011).
- The adoption/preparation phase involves gathering and weighing research evidence. It can be driven externally by legislative changes, patient advocacy, and inter-organizational networks that may serve as partners or competitors. Adoption of an intervention is also influenced by the organization’s size, structure, and leadership (Aarons et al. 2011).
- The implementation phase is externally affected by funding, leadership, and inter-organizational networks at the level of states, counties, organizations, or groups of individuals. Additionally, the investigators who develop an intervention can help guide implementation across organizations. The internal context varies by structure (centralized vs. dispersed), the priorities and goals of the organization, readiness and receptiveness for change, and the culture of the organization (Aarons et al. 2011).
- Sustainability of an intervention is driven by external leadership and policy, funding, and public academic collaborations and partnerships. At a local level, leadership and organizational culture influence sustainability, as well as the critical mass of other, possibly competing, evidence-based practices. Sustainability frequently involves fidelity monitoring and support and/or additional staffing (Aarons et al. 2011).
Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM)
Efficacy of an intervention is only a small piece of a much larger puzzle, and the “gold standard” for determining efficacy has been the explanatory clinical trial. But with pragmatic clinical trials (PCTs), the “rigor” of trials must be balanced with “relevance” (Glasgow and Chambers 2012). Glasgow and Chambers suggest that in order to conduct research that is both rigorous and relevant, we must understand that “there is no single research design or method to identify 'truth'” (Glasgow and Chambers 2012). The health care enterprise is complex; there is much variation, and the one-size-fits-all approach of traditional research does not foster rapid dissemination and implementation of research findings into practice.
One factor that affects the impact of an intervention is the reach of the program (the program being the vehicle through which an intervention is delivered), which determines the percentage of the population who will receive the intervention (Abrams et al. 1996). One way to conceptualize and measure dissemination and implementation potential is to use the Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM) framework, which emphasizes not only effectiveness and reach, but also adoption, implementation, and maintenance (Glasgow et al. 1999), as shown in Figure 1.
(Figure adapted from RE-AIM.org. Used with permission.)
To implement an intervention, one must consider not only its effectiveness, but also methods for:
- Optimizing the reach of the intervention toward the populations that could benefit
- Supporting the decisions that health systems are making to adopt the intervention
- Supporting implementation of the intervention so that it is delivered with as much quality as possible
- Enabling the sustainability or maintenance of the intervention
Population Impact
While the core of implementation research considers implementation strategies and the associated outcomes, implementation also has a broader impact on the entire system and the health of the population. To put it another way, the public health impact of an intervention depends both on the proportion of the population who are at risk and are expected to benefit from the intervention, and on the proportion of people who are candidates for the intervention who actually receive it (Koepsell et al. 2011). With this in mind, the process of subject recruitment for a trial could provide important information about potential impact in the broader population. Expanding on this notion, effect size (a function of the trial) and reach (a function of the delivery of the intervention) have been used to project the population impact of a specific intervention (Zatzick et al. 2009).
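The projection described above can be made concrete with a small calculation. The sketch below is a minimal, hypothetical illustration of combining reach with a trial's effect size to estimate population impact, in the spirit of Koepsell et al. (2011) and Zatzick et al. (2009); the function name and all numbers are invented for illustration and are not drawn from any cited study.

```python
# Hypothetical sketch: projecting population impact from trial effect size and reach.
# All inputs below are illustrative assumptions, not data from the cited studies.

def population_impact(population_at_risk: int,
                      reach: float,           # fraction of candidates who actually receive the intervention
                      risk_untreated: float,  # baseline event rate without the intervention
                      relative_risk: float    # event-rate ratio (treated / untreated) from the trial
                      ) -> float:
    """Estimate events prevented across the whole at-risk population.

    Events prevented = N * reach * baseline risk * (1 - relative risk)
    """
    return population_at_risk * reach * risk_untreated * (1.0 - relative_risk)

# Example: 100,000 at-risk people, 40% reach, 20% baseline event rate,
# and a trial showing the intervention cuts the event rate by 25% (RR = 0.75).
prevented = population_impact(100_000, reach=0.40, risk_untreated=0.20, relative_risk=0.75)
print(f"Projected events prevented: {prevented:.0f}")  # prints "Projected events prevented: 2000"
```

The point of the exercise is that the same trial effect size yields very different population impact depending on reach: halving reach to 20% halves the projected events prevented, which is why recruitment and delivery data matter alongside efficacy.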
Additionally, it is critical to consider the demand for the intervention one wishes to implement. There are two questions that bear on demand:
- Is the intervention something that is needed by a health system or provider who will be expected to implement it? Although health systems have an ethical obligation to provide treatments that work, for an intervention to be fully implemented, the leadership of a health system must recognize its value and be willing to devote resources—money, staff and patient time, etc.—to it.
- Is the intervention something that patients or consumers of the intervention want and/or need? People have individual preferences and have the right to decline treatments that conventional evidence says are effective.
Sections
- Introduction
- Dissemination and Implementation Frameworks
- Let It, Help It, Make It Happen
- Changes to Policy and Guidelines
- Legislative Changes
- Creation of Targeted Tools
- Stepped Wedge Designs
- Intervention Staffing and Training Flexibility
- Pragmatic Implementation Process Assessments
- Partnering With Quality Improvement and Population Health Initiatives
- Implementation in the Trial Versus in the Real World
- Additional Resources
- FAQ
REFERENCES
Aarons GA, Hurlburt M, Horwitz SM. 2011. Advancing a Conceptual Model of Evidence-Based Practice Implementation in Public Service Sectors. Administration and Policy in Mental Health and Mental Health Services Research. 38:4–23. doi:10.1007/s10488-010-0327-7. PMID: 21197565.
Abrams DB, Orleans CT, Niaura RS, Goldstein MG, Prochaska JO, Velicer W. 1996. Integrating individual and public health perspectives for treatment of tobacco dependence under managed health care: a combined stepped-care and matching model. Ann Behav Med. 18:290–304. doi:10.1007/BF02895291. PMID: 18425675.
Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. 2012. Effectiveness-implementation Hybrid Designs: Combining Elements of Clinical Effectiveness and Implementation Research to Enhance Public Health Impact. Medical Care. 50:217–226. doi:10.1097/MLR.0b013e3182408812. PMID: 22310560.
Glasgow RE, Chambers D. 2012. Developing robust, sustainable, implementation systems using rigorous, rapid and relevant science. Clin Transl Sci. 5:48–55. doi:10.1111/j.1752-8062.2011.00383.x. PMID: 22376257.
Glasgow RE, Vogt TM, Boles SM. 1999. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 89:1322–1327. PMID: 10474547.
Koepsell TD, Zatzick DF, Rivara FP. 2011. Estimating the population impact of preventive interventions from randomized trials. Am J Prev Med. 40:191–198. doi:10.1016/j.amepre.2010.10.022. PMID: 21238868.
Tabak RG, Khoong EC, Chambers DA, Brownson RC. 2012. Bridging Research and Practice. Am J Prev Med. 43:337–350. doi:10.1016/j.amepre.2012.05.024. PMID: 22898128.
Zatzick DF, Koepsell T, Rivara FP. 2009. Using Target Population Specification, Effect Size, and Reach to Estimate and Compare the Population Impact of Two PTSD Preventive Interventions. Psychiatry. 72:346–359. doi:10.1521/psyc.2009.72.4.346. PMID: 20070133.
Zatzick DF, Russo J, Darnell D, Chambers DA, Palinkas L, Van Eaton E, Wang J, Ingraham LM, Guiney R, Heagerty P, et al. 2016. An effectiveness-implementation hybrid trial study protocol targeting posttraumatic stress disorder and comorbidity. Implement Sci. 11:58. doi:10.1186/s13012-016-0424-4. PMID: 27130272.