Implementation in the Trial Versus in the Real World- ARCHIVED

ARCHIVED PAGE

Archived on August 7, 2025. Go to the latest version.

Dissemination and Implementation


Section 11



In the chapter What is a Pragmatic Clinical Trial, we introduce the PRagmatic Explanatory Continuum Indicator Summary (PRECIS) domains for pragmatic clinical trials (PCTs), which specify criteria that make a trial more pragmatic than explanatory. For a trial to be on the pragmatic end of the spectrum, one would expect “only ordinary attention to dose setting and side effects” and that “no special strategies to maintain or improve compliance are used” (Thorpe et al. 2009).

Some PCTs use existing, ordinary implementation processes, while others require extra monitoring and support; this has implications for post-study implementation, as the case example below describes in more detail.

Case Example: Suicide Prevention Outreach Trial (SPOT)

  • The goal of the Suicide Prevention Outreach Trial (SPOT) is to compare outcomes among patients who receive care management or online skills training for suicide prevention versus usual care in three healthcare systems.

Care managers and skills coaches received approximately 14 hours of initial training on suicide prevention interventions, conducted by videoconference and teleconference, followed by weekly or biweekly supervision teleconferences. This training sets SPOT apart from a purely pragmatic trial according to the PRECIS domains, but it was necessary because of the introduction of new clinical work processes and informatics tools. The investigators did not monitor the fidelity of the intervention (e.g., by reviewing the content of online messaging or phone calls), making this aspect of the trial consistent with a pragmatic design. The implication for potential implementation (if the program is proven effective) is that similar training and subsequent supervision will likely be required.

REFERENCES

Thorpe KE, Zwarenstein M, Oxman AD, et al. 2009. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 62:464–475. doi:10.1016/j.jclinepi.2008.12.011. PMID: 19372436.


Version History

Published August 25, 2017

Partnering With Quality Improvement and Population Health Initiatives- ARCHIVED



Section 10



When developing and implementing a pragmatic study, it may be useful to partner with quality improvement and population health personnel at participating sites; if the results of the PCT are positive, the findings may align with organizational goals for improving care. Tools and technologies can be developed so that they are easily adaptable for use in various departments and with different electronic health record systems.

Case Example: ICD-Pieces

  • The goal of the Improving Chronic Disease Management with Pieces (ICD-Pieces) trial is to improve care for patients with chronic kidney disease, diabetes, and hypertension by using a novel technology platform (Pieces) that identifies patients through the electronic health record and by assigning practice facilitators within primary care practices or community medical homes.

The investigators of ICD-Pieces used a novel information technology platform (Pieces) to identify patients with chronic kidney disease, diabetes, and hypertension through the electronic health record and to assign practice facilitators to these patients to improve their care. Pieces can be integrated with various electronic health record systems, including Epic, CPRS, Allscripts, or customized spreadsheets. The IT algorithms in ICD-Pieces were designed to be relatively simple for an IT team to implement should a healthcare system decide to allocate resources for this purpose, as improving the care of patients with multiple comorbidities aligns with quality improvement and population health goals.

If the results are positive, participating healthcare systems would implement the findings in different ways, but quality improvement or population medicine groups would likely drive the uptake.

Strategy Details

  • Diffusion: Once results are available, there will be initial presentations to administrators and clinical leaders. Next steps will include presentations to other groups in the healthcare system and publication of trial results.
  • Dissemination: Investigators will have information about how many hospitalizations, readmissions, emergency room visits, cardiovascular events, and deaths could potentially be avoided through the intervention. This could translate into better outcomes and even cost savings, information that healthcare systems will be interested in. Visits to clinics, education sessions, and webinars will be held at the participating sites.
  • Implementation: If the trial demonstrates value, quality improvement and population health departments will allocate IT resources and practice facilitators across healthcare systems. Patient lists, practice alerts, and order sets will be available for use across the healthcare systems.
  • Sustainability: If quality of care measures improve after implementing the intervention, and the detection of patients translates into better care, then it is expected that quality improvement and population medicine departments will continue to coordinate the allocation of IT resources and practice facilitators and to monitor performance to drive long-term sustainability.



Version History

Published August 25, 2017

Stepped Wedge Designs- ARCHIVED



Section 7



Stepped wedge designs seem, on the surface, ideally suited to implementation because the intervention is eventually turned “on” at each of the sites. However, implementation is influenced by many internal and external factors, is decidedly complex, and the timing of the decision to fully implement can be an issue. We describe the utility of stepped wedge designs in the chapters Experimental Designs and Randomization Schemes and Designing With Implementation and Dissemination in Mind. Below, we describe the diffusion, dissemination, and implementation plan of the Lumbar Imaging with Reporting of Epidemiology (LIRE) trial, which uses a stepped wedge design, as well as a few of the unexpected complications that arose.

Case Example: Lumbar Imaging with Reporting of Epidemiology (LIRE)

The goal of LIRE is to determine whether inserting epidemiological benchmarks (essentially representing the normal range) into lumbar spine imaging reports reduces subsequent tests and treatments.

The issues:

  • At one health system, despite buy-in from leadership, several individuals within certain clinics did not want the intervention. This had two effects. First, an individual radiologist could remove the intervention text from a report during its creation. Because the dictation system defaulted to including the intervention information and removing it required an extra step, removal at the individual radiologist level happened relatively infrequently. Second, the leadership of a few clinics within the same health system wanted the text slightly modified and would not allow the intervention to be used until it was. Because this change required IRB approval, it took several months and resulted in a lack of adherence to random assignment during that time.
Strategy Details

  • Diffusion: Any site with a radiology information system (RIS), radiology dictation system, or EMR can automate the insertion of the prevalence information that constitutes the intervention. In fact, the text had already diffused to a limited extent at two health systems because of a publication in Radiology (McCullough et al. 2012), and investigators needed to ask these sites to stop using the intervention text prior to the start of the trial.
  • Dissemination: Investigators worked closely with site PIs to individualize approaches for how best to introduce radiologists and primary care providers (PCPs) to the intervention. Because the intervention was “turned on” centrally, neither radiologists nor PCPs needed to alter their workflow. However, investigators needed to inform both groups prior to roll-out to garner their acceptance, and they did so through a variety of approaches, including meetings with clinic leadership, staff meetings, and email notifications.
  • Implementation: The stepped wedge design facilitates both implementation and sustainability because, by the end of the study, the intervention is “on” at all sites.
  • Sustainability: The decision whether to sustain the intervention varied by site. Two of the health systems do not have active plans to de-implement the intervention; the stepped wedge design makes it easier for sites to sustain the intervention because doing so requires that they simply accept the status quo. One of the health systems is prudently waiting for the study results before deciding whether to keep the intervention. The final health system changed its EMR vendor following the study and decided not to re-implement the intervention, but plans to revisit this decision once study results are available; if the evidence indicates that the intervention brings value, they will consider re-implementing it.

REFERENCES

McCullough BJ, Johnson GR, Martin BI, Jarvik JG. 2012. Lumbar MR imaging and reporting epidemiology: do epidemiologic data in reports affect clinical management? Radiology. 262:941–946. doi:10.1148/radiol.11110618. PMID: 22357893.


Version History

December 5, 2018: Revised the LIRE sustainability information as part of the annual update (changes made by K. Staman).

Published August 25, 2017

Legislative Changes- ARCHIVED



Section 5



Legislation can be an important way both to generate interest in a particular trial and to sustain its impact. At the onset of the Strategies and Opportunities to Stop Colorectal Cancer (STOP CRC) in Priority Populations trial, colorectal cancer screening was adopted as an incentive metric for Oregon’s Coordinated Care Organizations (CCOs, the Oregon equivalent of Accountable Care Organizations). This meant that Medicaid health plans and CCOs would receive incentives for reaching performance or improvement targets for colorectal cancer screening. The initial performance target was set at 47%, and the improvement target was set at 3%. Over the subsequent years, additional state legislation was passed to reduce patient out-of-pocket costs for colorectal cancer screening. Legislation passed in 2014 required that a colonoscopy initiated as a screening procedure be billed as a screening procedure, even when polyps were removed. Legislation passed in 2015 further required payers to cover the cost of a follow-up colonoscopy (with no out-of-pocket costs) for patients who screened positive on fecal testing.

Case Example: STOP CRC

  • STOP CRC aims to improve rates of colorectal cancer screening by mailing fecal immunochemical testing (FIT) kits to patients at Federally Qualified Health Centers. STOP CRC has explored facilitators for use (e.g., making the kits more user-friendly with wordless instructions). Clinics also used plan-do-study-act (PDSA) cycles to explore additional implementation strategies (e.g., improving the workflow for printing and mailing kits and enhancing patient materials).
Strategy Details

  • Diffusion: Electronic health record tools and training videos are available to all health systems affiliated with OCHIN (100+ health systems across 18 states). Study materials (introductory letters, FIT kit inserts, reminder letters) are available on a public website.
  • Dissemination: Publications and presentations (65 presentations at regional and national venues; 23 publications to date). Dr. Coronado offers technical assistance to additional Federally Qualified Health Centers as part of contracts with the Washington State Department of Health and the Oregon Health Authority.
  • Implementation: A toolkit with clinic materials is publicly available (www.MailedFIT.org). The toolkit includes the procedures and materials needed to conduct STOP CRC. The electronic health record tools, developed in Epic, were adapted for Allscripts.
  • Sustainability: Control clinics were able to implement the mailed interventions in year 2. Partnerships with health plans and direct-mail vendors help smaller health centers implement the mailed program. State legislation, described above, expands access to colonoscopy services.



Version History

December 5, 2018: Updated sustainability and implementation information for STOP CRC (changes made by K. Staman).

Published August 25, 2017

Changes to Policy and Guidelines – ARCHIVED



Section 4



Case Example: Trauma Survivors Outcomes and Support Trial (TSOS)

The investigators of the Trauma Survivors Outcomes and Support Trial (TSOS) have been working with the American College of Surgeons Committee on Trauma for over a decade to integrate findings from pragmatic trials into the guidelines that regulate trauma care nationally. From the mid-1990s through 2012, a series of single-site and multisite trials suggested that alcohol screening and brief interventions delivered from trauma centers could reduce alcohol consumption and perhaps also recurrent traumatic injury. These efforts in part facilitated a universal alcohol screening and intervention requirement from the American College of Surgeons Committee on Trauma for US Level I and Level II trauma centers in 2014 (American College of Surgeons 2014). The study team convened policy summits with the College to review the results of these trials, and, as part of the 2014 “orange book,” the College included a post-traumatic stress disorder (PTSD) and comorbidity screening and intervention best practice guideline, though without a formal requirement.

Resources for Optimal Care of the Injured Patient (2014)

The next step, and an explicit goal of the TSOS trial, is to provide the College with multisite pragmatic trial evidence that could further inform regulatory policy. The clinical goal of TSOS is to coordinate care and improve outcomes for trauma survivors with PTSD and comorbidity. The trial is being conducted at 25 US Level I trauma centers, and frontline providers receive intervention training at each center. Sites are asked to recruit 40 patients over the course of the 4-year study; patients undergo a baseline electronic health record PTSD screen, are randomized, and then complete 3-, 6-, and 12-month follow-up assessments (Zatzick et al. 2016).

With TSOS, nearly all the clinical investigators are frontline trauma center providers who understand that if the College issues the requirement, they will have to perform the mandated screening and intervention procedures at their trauma centers. When the research team was developing the screening and intervention process, they remained focused on the ultimate implementation. The clinical and policy team developed a unique set of methods for understanding implementation mechanisms: these begin with immersive participant observation by study team members at training site visits, over the telephone, and in their own clinical activities. Team members record field observations in real time, in ways that do not drive up the cost of the trial, and these notes, logs, and other observations are reviewed with a mixed-methods expert. These field observations give the team preliminary information about the implementation science constructs that will inform sustainable implementation and acute care regulatory policy. In the final year of the trial, an American College of Surgeons policy summit has been scheduled to facilitate translation of results into national policy (Zatzick et al. 2016).

The TSOS diffusion, dissemination, implementation, and sustainability strategy is shown below.

Strategy Details

  • Diffusion: TSOS will publish results in a peer-reviewed academic journal.
  • Dissemination: The TSOS study team will present at health services, psychiatric, and trauma-surgical conferences to disseminate the results of the study. The team will also work to present the results in the American College of Surgeons Committee on Trauma (ACS COT) Resources for Optimal Care of the Injured Patient guidebook, thereby disseminating them through a nationally recognized resource guide for trauma care practice. Dissemination will also occur through the end-of-study policy summit with the ACS COT.
  • Implementation: The TSOS team is currently working with the ACS COT. Co-investigator Dr. Jurkovich and PI Dr. Zatzick are ACS COT members. They will attempt to contribute PTSD screening and intervention suggestions to the guidebook based on trial results, which could lead to trauma center verification requirements. Such policy contributes to a “make it happen” implementation context. Additionally, electronic health record PTSD screenings are occurring at 25 trauma center sites nationally, and the intervention will be implemented at all 25 sites in the TSOS stepped-wedge design.
  • Sustainability: The ACS COT resources guidebook trauma center verification requirements could be influenced if the TSOS trial proves effective in reducing the symptoms of PTSD and comorbidity; changes to the resources guidebook could in turn influence the sustainability of the TSOS intervention, given that verification requirements often result in additional resource allocation for services. Additionally, study team members are positioned to help influence national policy, thereby contributing to the sustainability of guidelines derived from positive TSOS results.

REFERENCES

American College of Surgeons. 2014. Resources for Optimal Care of the Injured Patient. https://www.facs.org/~/media/files/quality%20programs/trauma/vrc%20resources/resources%20for%20optimal%20care.ashx. Accessed August 1, 2017.

Zatzick DF, Russo J, Darnell D, et al. 2016. An effectiveness-implementation hybrid trial study protocol targeting posttraumatic stress disorder and comorbidity. Implement Sci. 11:58. doi:10.1186/s13012-016-0424-4. PMID: 27130272.


Version History

Published August 25, 2017

Dissemination and Implementation Frameworks – ARCHIVED



Section 2



There are multiple models and conceptual frameworks for the targeted and widespread dissemination and implementation of health care interventions. A major challenge for research teams conducting pragmatic trials is selecting which model or framework will be beneficial (Zatzick et al. 2016). There are more than 60 models and frameworks for dissemination and implementation research (Tabak et al. 2012), and in this section, we will briefly describe select frameworks and introduce dissemination and implementation constructs that may be useful to investigators conducting pragmatic trials.

Exploration, Adoption/Preparation, Implementation, and Sustainability (EPIS)

The Exploration, Adoption/Preparation, Implementation, and Sustainability (EPIS) framework articulates variables that might influence the ability to effectively implement evidence-based practices and considers both the external (outer) and internal (inner or local) contexts of organizations (Aarons et al. 2011).

  • The exploration phase involves awareness of an issue or problem that could lead to implementing an evidence-based intervention or quality improvement approach. The outer context, at its broadest, involves state and federal funding, state legislatures, private foundations, and patient advocacy groups that support or encourage exploration of an evidence-based practice. At a local level, exploration is driven by an organization’s collective knowledge and skills, readiness for change, and receptiveness to change (Aarons et al. 2011).
  • The adoption/preparation phase involves gathering and weighing research evidence. It can be driven externally by legislative changes, patient advocacy, and inter-organizational networks that may serve as partners or competitors. Adoption of an intervention is also influenced by the organization’s size, structure, and leadership (Aarons et al. 2011).
  • The implementation phase is externally affected by funding, leadership, and inter-organizational networks at the level of states, counties, organizations, or groups of individuals. Additionally, the investigators who develop an intervention can help guide implementation across organizations. The internal context varies by structure (centralized vs. dispersed), the priorities and goals of the organization, readiness and receptiveness for change, and the culture of the organization (Aarons et al. 2011).
  • Sustainability of an intervention is driven by external leadership and policy, funding, and public academic collaborations and partnerships. At a local level, leadership and organizational culture influence sustainability, as well as the critical mass of other, possibly competing, evidence-based practices. Sustainability frequently involves fidelity monitoring and support and/or additional staffing (Aarons et al. 2011).

Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM)

Efficacy of an intervention is only a small piece of a much larger puzzle, and the “gold standard” for determining efficacy has been the explanatory clinical trial. But with PCTs, the “rigor” of trials must be balanced with “relevance” (Glasgow and Chambers 2012). Glasgow and Chambers suggest that in order to conduct research that is both rigorous and relevant, we must understand that “there is no single research design or method to identify 'truth'” (Glasgow and Chambers 2012). The health care enterprise is complex; there is much variation, and the one-size-fits-all approach of traditional research does not foster rapid dissemination and implementation of research into practice.

One factor that affects the impact of an intervention is the reach of the program (a program is the vehicle through which an intervention is delivered), which determines the percentage of the population who will receive the intervention (Abrams et al. 1996). One way to conceptualize and measure dissemination and implementation potential is to use the Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM) framework, which emphasizes not only effectiveness and reach, but also adoption, implementation, and maintenance (Glasgow et al. 1999), as shown in Figure 1.

 

[Figure 1. The RE-AIM framework. Figure adapted from RE-AIM.org; used with permission.]

To implement an intervention, one must consider not only its effectiveness, but also methods for:

  • Optimizing the reach of the intervention to the populations that could benefit
  • Supporting the decisions that health systems are making to adopt the intervention
  • Supporting implementation of the intervention so that it is delivered with as much quality as possible
  • Enabling the sustainability or maintenance of the intervention

Population Impact

While the core of implementation research considers implementation strategies and the associated outcomes, implementation also has a broader impact on the entire system and the health of the population. To put it another way, the public health impact of an intervention depends both on the proportion of the population who are at risk and are expected to benefit from the intervention, and on the proportion of people who are candidates for the intervention who actually receive it (Koepsell et al. 2011). With this in mind, the process of subject recruitment for a trial could provide important information about potential impact in the broader population. Expanding on this notion, effect size (a function of the trial) and reach (a function of the delivery of the intervention) have been used to project the population impact of a specific intervention (Zatzick et al. 2009).
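As a rough, hypothetical sketch of this idea (all names and numbers below are invented for illustration), the projected population impact can be computed as the product of the population at risk, the program's reach, and the effect size:

```python
# Hypothetical illustration of projecting population impact as the product of
# reach and effect size, in the spirit of Zatzick et al. (2009).
# The function name and all numbers are invented for this example.

def population_impact(population_at_risk: int, reach: float, effect_size: float) -> float:
    """Estimate how many people are expected to benefit.

    population_at_risk: number of candidates for the intervention
    reach: fraction of candidates who actually receive it (0 to 1)
    effect_size: fraction of recipients expected to benefit (0 to 1)
    """
    return population_at_risk * reach * effect_size

# A highly effective intervention with poor reach...
narrow = population_impact(10_000, reach=0.10, effect_size=0.30)
# ...can benefit fewer people than a modest intervention delivered broadly.
broad = population_impact(10_000, reach=0.80, effect_size=0.10)
print(narrow, broad)  # 300.0 800.0
```

Framed this way, a modest intervention delivered to most eligible patients can outperform a highly effective one that reaches only a few, which is why reach figures so prominently in frameworks such as RE-AIM.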

Additionally, it is critical to consider the demand for the intervention one wishes to implement. There are two questions that bear on demand:

  • Is the intervention something that is needed by a health system or provider who will be expected to implement it? Although health systems have an ethical obligation to provide treatments that work, to fully make an intervention happen, the leadership of a health system must recognize its value and be willing to devote resources—money, staff and patient time, etc.—to it.
  • Is the intervention something that patients or consumers of the intervention want and/or need? People have individual preferences and have the right to decline treatments that conventional evidence says are effective.

REFERENCES

Aarons GA, Hurlburt M, Horwitz SM. 2011. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research. 38:4–23. doi:10.1007/s10488-010-0327-7. PMID: 21197565.

Abrams DB, Orleans CT, Niaura RS, Goldstein MG, Prochaska JO, Velicer W. 1996. Integrating individual and public health perspectives for treatment of tobacco dependence under managed health care: a combined stepped-care and matching model. Ann Behav Med. 18:290–304. doi:10.1007/BF02895291. PMID: 18425675.

Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. 2012. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care. 50:217–226. doi:10.1097/MLR.0b013e3182408812. PMID: 22310560.

Glasgow RE, Chambers D. 2012. Developing robust, sustainable, implementation systems using rigorous, rapid and relevant science. Clin Transl Sci 5:48–55. doi:10.1111/j.1752-8062.2011.00383.x. PMID: 22376257.

Glasgow RE, Vogt TM, Boles SM. 1999. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health 89:1322–1327. PMID: 10474547.

 

Koepsell TD, Zatzick DF, Rivara FP. 2011. Estimating the population impact of preventive interventions from randomized trials. Am J Prev Med 40:191–198. doi:10.1016/j.amepre.2010.10.022. PMID: 21238868.

Tabak RG, Khoong EC, Chambers DA, Brownson RC. 2012. Bridging Research and Practice. Am J Prev Med 43:337–350. doi:10.1016/j.amepre.2012.05.024. PMID: 22898128.

Zatzick DF, Koepsell T, Rivara FP. 2009. Using target population specification, effect size, and reach to estimate and compare the population impact of two PTSD preventive interventions. Psychiatry 72:346–359. doi:10.1521/psyc.2009.72.4.346. PMID: 20070133.

Zatzick DF, Russo J, Darnell D, Chambers DA, Palinkas L, Van Eaton E, Wang J, Ingraham LM, Guiney R, Heagerty P, et al. 2016. An effectiveness-implementation hybrid trial study protocol targeting posttraumatic stress disorder and comorbidity. Implement Sci 11:58. doi:10.1186/s13012-016-0424-4. PMID: 27130272.


Version History

Published August 25, 2017

Introduction – ARCHIVED



Section 1


The process of implementing research findings in clinical practice is complex, and diffusion, dissemination, and implementation strategies are needed to promote real change. Implementing clinical practice guidelines often requires organizational change at the health system or provider level, as well as individual change (Greenhalgh et al. 2004).

Dissemination Research

The process for dissemination has been rooted in the way we approach the scientific endeavor in the clinical research enterprise, which involves a stepwise progression from discovery to clinical efficacy, effectiveness, and implementation research, the so-called bench-to-bedside approach (Glasgow et al. 2003; Glasgow et al. 2012). Although there has been increasing acknowledgement of the need to improve the dissemination of best practices into routine clinical care, the process has, until recently, been viewed as fairly linear: best practices and evidence-based guidelines are published, and the clinician changes his or her behavior. However, many questions and considerations are key aspects of dissemination research:

  • What evidence is needed to motivate change within health systems?
  • How is the message being framed and packaged?
  • How are the messages being interpreted and received, and how does the information fit with other sources of information?
  • How will the information transform into action?

Implementation Research

When research uncovers new information or knowledge that would improve the care of patients, the findings may not be adopted or “translated” into clinical practice, or may be slow to be, creating a “translation gap.” These gaps persist between clinical research and implementation because of complex provider-level and system-level barriers to rapid translation (Curran et al. 2012). Typically, clinical trials are intended to optimize what can be learned about the intervention and the associated outcomes. Implementation research looks at the black box between the intervention and the health outcomes and recognizes that broader public health outcomes depend not just on what the intervention is, but also on how to deliver the intervention so that people can benefit from it. Proctor et al. (2009) identified a set of key outcomes related to implementation that need to be considered:

  • How feasibly can an intervention be delivered in a particular health system?
  • How much fidelity to the intervention is needed?
  • How acceptable is it to the variety of stakeholders within the health system?
  • What will the uptake be?
  • What are the costs associated with having the intervention integrated into a system of care?
  • How sustainable can it be?

Dissemination and Implementation of Pragmatic Trial Results

Dissemination and implementation strategies vary for different types of interventions and trials, and are ultimately tied to the evidence that comes from a trial. For a new drug or intervention, the development pathway from discovery to implementation is well traveled (and complex): the process involves a sequence of explanatory clinical trials (Phases I, II, and III) that ultimately test its efficacy, effectiveness, and safety. When this information is used for regulatory approval and labeling of drugs, there are known mechanisms to translate information about the drug or intervention into action and broader uptake. But with pragmatic research, there is no specific event at the end of a trial, such as regulatory approval, that signals the most appropriate dissemination and implementation pathway. Rather, when an intervention from a pragmatic clinical trial (PCT), such as those conducted by the Collaboratory, is shown to be beneficial, there are a multitude of different mechanisms for enabling uptake of the intervention in different healthcare settings.

This chapter provides a guide to different considerations for dissemination and implementation of pragmatic trial results. The chapter Dissemination Approaches for Different Stakeholders provides more specific guidance on dissemination strategies for specific types of stakeholders.

Key terms:

  • Implementation science is the study of methods to promote the integration of research findings and evidence into healthcare policy and practice (Eccles and Mittman 2006). Implementation science includes:
    • Dissemination research: the scientific study of targeted distribution of information and intervention materials to a specific public health or clinical practice audience. The intent is to understand how best to spread and sustain knowledge and the associated evidence-based interventions (National Institutes of Health [NIH] Program Announcement PAR-16-236).
    • Implementation research: the scientific study of the use of strategies to adopt and integrate evidence-based health interventions into clinical and community settings in order to improve patient outcomes and benefit population health (NIH PAR-16-236).

 

REFERENCES

Eccles MP, Mittman BS. 2006. Welcome to Implementation Science. Implement Sci. 1. doi:10.1186/1748-5908-1-1.

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. 2004. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 82:581–629. doi:10.1111/j.0887-378X.2004.00325.x. PMID:15595944.

Glasgow RE, Vinson C, Chambers D, Khoury MJ, Kaplan RM, Hunter C. 2012. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health 102:1274–1281. doi:10.2105/AJPH.2012.300755. PMID: 22594758.

Glasgow RE, Lichtenstein E, Marcus AC. 2003. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health 93:1261–1267. PMID: 12893608.

Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. 2009. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Admin Policy Ment Health. 36:24–34. doi:10.1007/s10488-008-0197-4. PMID: 19104929.

Simon GE, Richesson RL, Hernandez AF. 2020. Disseminating trial results: we can have both faster and better. Healthcare (Amsterdam, Netherlands). 8(4). PMID: 32992107.


Version History

February 11, 2020: Added Resource box with link to Building Partnerships to Ensure a Successful Trial (changes made by K. Staman).

December 5, 2018: Minor edits as part of the annual review process (changes made by K. Staman).

Published August 25, 2017

Outcomes Measured via Digital Health Technology

Choosing and Specifying Endpoints and Outcomes


Section 6

Outcomes Measured via Digital Health Technology

Digital health technologies (such as smartphones, tablet computers, and portable, implantable, or wearable medical devices) present a wide array of challenges and opportunities for medical research. Much remains to be learned about how well these devices work, including their validity and reliability. While these devices hold abundant promise, they are imperfect measures and are not commonly used in clinical trials (Clinical Trials Transformation Initiative 2016). For example, if a participant wearing an activity monitor claps at a concert, could the device record this as running? Or will a geospatial device record running on a treadmill as activity at all?

Some examples of the utility of these devices include:

  • A PCT can be designed in which a patient has an application (app) on their phone that provides passive or active surveillance. For example, an app with geo-sensing can ping a person who enters the hospital with the question "Why are you in the hospital?" or "Are you ill?"
  • The Personalized Patient Data and Behavioral Nudges to Improve Adherence to Chronic Cardiovascular Medications (Nudge) trial used mobile phone technology to remind patients about medication adherence.
  • Some devices transmit data about a participant’s health status to a data warehouse every night. The devices can measure physiologic functions, such as activity level and heart rate. As a hypothetical example, in a PCT designed to evaluate how to prevent cardiac death, a patient could wear a heart monitor that detects arrhythmias and other heartbeat abnormalities, as well as whether or not the patient is hospitalized.
  • Patients with type 1 diabetes can use continuous glucose monitors (CGMs) to monitor their blood glucose levels, and this information can be sent to a smartphone, a CGM-specific receiver, or an insulin pump (Clinical Trials Transformation Initiative 2016).

Digital health technologies can be part of a decentralized clinical trial, in which some or all study-related activities occur at a location separate from the investigator’s location. As described in the Living Textbook chapter on Decentralized Clinical Trials, a critical consideration when using digital health technology is quality assurance: one must ensure that the right patient receives the right treatment and provides the right data. For example, if the mobile health technology is a pedometer, one must ensure that others in the household do not wear it.

For more on digital health technologies, see FDA’s Digital Health Technologies for Drug Development.

REFERENCES

Clinical Trials Transformation Initiative. 2016. Developing Novel Endpoints Generated by Mobile Technology for use in Clinical Trials. https://www.ctti-clinicaltrials.org/projects/novel-endpoints. Accessed July 24, 2017.


Version History

March 3, 2026: Update as part of annual review (changes made by K. Staman).

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

December 4, 2018: Updated text as part of annual review and added resources column (changes made by K. Staman).

Published August 25, 2017

Outcomes Measured via Direct Patient Report

Choosing and Specifying Endpoints and Outcomes


Section 7

Outcomes Measured via Direct Patient Report

Patients, family members, clinicians, and researchers all want to know how treatments will improve a person’s day-to-day living and quality of life. Thus, collecting and prioritizing the patient’s direct report on how they are feeling and functioning is often critical to evaluating the effectiveness and impact of PCTs. Patient-reported outcomes (PROs) are measured via reports taken directly from patients without interpretation by anyone else (including clinicians), and they are the gold standard for assessing how people are feeling and functioning (DHHS 2025). Any PCT endpoint utilizing a PRO should be meaningful to patients, and the associated measure should have appropriate validity support in the target population and setting. Other considerations also apply to the choice of PRO endpoints within PCTs: ideally, a PRO used to evaluate a PCT will already have been integrated into routine practice within clinical care settings, with the data easily accessible via the electronic health record (EHR) (Zigler et al. 2024).

PRO measures reflect meaningful aspects of health and provide information about outcomes that are experienced uniquely by the patient, such as pain intensity, fatigue, and satisfaction with social roles. In an article from the NIH Pragmatic Trials Collaboratory PCO Core and EHR Core, investigators from 6 of the program’s pragmatic clinical trials shared challenges they encountered in using PROs as endpoints collected via the electronic health record, including:

  • competing healthcare system priorities
  • clinician buy-in for adoption of PRO measures
  • low adoption and reach of technology in low-resource settings
  • lack of consensus and standardization of PRO selection and administration in the electronic health record (Zigler et al. 2024).

The authors suggest that, given the multiple barriers, study teams may need to use separate data collection systems or integrate externally collected PRO data into the electronic health record.

“When using patient-reported outcome measures for embedded pragmatic clinical trials investigators must make important decisions about whether to use data collected from the participating health system’s electronic health record, integrate externally collected patient-reported outcome data into the electronic health record, or collect these data in separate systems for their studies” (Zigler et al. 2024).

The authors developed this decision tree for using PROs in PCTs.

Decision tree for selecting PROs
From Zigler et al. 2024. Used with permission from the authors.

PROs still are not consistently used in clinical care, especially across different health systems and clinics, bringing unique challenges to PCTs prioritizing a patient-reported endpoint. As patient-centered outcomes become increasingly tied to quality and reimbursement, PROs are expected to be incorporated far more widely into clinical care in the coming years (Jensen et al. 2016). For now, pragmatic trials that originally plan to rely solely on data collected for billing and clinical purposes may have difficulty incorporating PROs as trial outcomes. This is not an insurmountable hurdle, however, as solutions have been developed to collect this type of information within PCTs (see the Case Study from GGC4H for an example involving REDCap). For PRO data extracted from the EHR, researchers planning trials should also consider the generalizability of the data across the entirety of their target population (Boyd et al. 2023).

For outcomes that represent internal sensations or experiences patients have outside the clinical visit, such as pain, symptoms, and physical functioning, the patient is the best source of information. Caregivers and other family members, especially in pediatric PCTs, can also provide important data. Other types of patient-reported health outcomes, such as comorbidities and hospitalizations, may also be obtained from the EHR or claims data. In such cases, the data reported by patients may supplement, contradict, or agree with EHR and claims data. For example, the EHR may not contain data on over-the-counter medications or the complete history of hospitalizations for a particular patient, so patient report might be needed to supplement the EHR data for comprehensive outcome capture. To better understand these sources of data and how to use them in PCTs, the ADAPTABLE Supplement Report describes results of a literature review of standards for variables of interest. These deliberations resulted in a LOINC (Logical Observation Identifiers Names and Codes) patient-reported item set for ADAPTABLE.

For more on PROs, see the Living Textbook resource chapter, Patient-Reported Outcomes.


Resources

Collecting patient-reported outcome measures in the electronic health record: Lessons from the NIH Pragmatic Trials Collaboratory. This paper describes best practices for collecting patient-reported outcomes.

Read more on Patient-Focused Drug Development in the Living Textbook.

Living Textbook Chapter: Patient-Reported Outcomes
This chapter describes how PROs are used in different settings and how to choose and integrate a PRO measure into an embedded pragmatic clinical trial protocol.

White Paper: Patient-Reported Outcomes
This white paper covers how to use, measure, interpret, and implement PRO measures.

REFERENCES


Boyd AD, Gonzalez-Guarda R, Lawrence K, et al. 2023. Potential bias and lack of generalizability in electronic health record data: reflections on health equity from the National Institutes of Health Pragmatic Trials Collaboratory. Journal of the American Medical Informatics Association. 30:1561–1566. doi:10.1093/jamia/ocad115.

Jensen RE, Snyder CF, Basch E, Frank L, Wu AW. 2016. All together now: findings from a PCORI workshop to align patient-reported outcomes in the electronic health record. Journal of Comparative Effectiveness Research. 5(6):561–567. PMID: 27586855 doi:10.2217/cer-2016-0026.

US Department of Health and Human Services (DHHS), Food and Drug Administration. 2025. Patient-Focused Drug Development: Selecting, Developing, or Modifying Fit-for-Purpose Clinical Outcome Assessments.

Zigler CK, Adeyemi O, Boyd AD, et al. 2024. Collecting patient-reported outcome measures in the electronic health record: Lessons from the NIH pragmatic trials Collaboratory. Contemporary Clinical Trials. 137:107426. doi:10.1016/j.cct.2023.107426.


Version History

March 3, 2026: Updated as part of annual review (changes made by K. Staman).

March 18, 2024: Added manuscript to Resources bar (changes made by K. Staman).

October 3, 2022: Minor nonsubstantive edits to the text. Added resource (changes made by K. Staman and L. Stewart).

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

June 26, 2020: Added links to new PRO chapter and PRO white paper (changes made by K. Staman)

December 4, 2018: Added reference (changes made by K. Staman).

Published August 25, 2017

Outcomes Measured via the Electronic Health Record

Choosing and Specifying Endpoints and Outcomes


Section 3

Outcomes Measured via the Electronic Health Record

The identification of outcomes within EHRs may be easier when computable phenotypes have been created for the conditions of interest. As described in the Electronic Health Records–Based Phenotyping chapter of the Living Textbook, "a computable phenotype is a clinical condition, characteristic, or set of clinical features that can be determined solely from data in EHRs and ancillary data sources and does not require chart review or interpretation by a clinician.… For a phenotype definition to be valid, it must identify the condition for which it was developed and meet the desired degrees of sensitivity and specificity."
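As an illustration, a computable phenotype of this kind can be expressed as a simple rule over structured EHR fields. The sketch below is hypothetical: the diagnosis-code prefixes, HbA1c threshold, and record layout are illustrative assumptions, not a validated phenotype definition.

```python
# Illustrative sketch of a computable phenotype: flags patients as
# "probable type 2 diabetes" from structured EHR fields alone
# (no chart review). Codes and thresholds are examples only.

T2D_ICD10_PREFIXES = ("E11",)  # type 2 diabetes mellitus code family (example)
HBA1C_THRESHOLD = 6.5          # percent; example diagnostic cutoff

def meets_phenotype(patient):
    """Return True if the record satisfies the example definition:
    at least two qualifying diagnosis codes, or one qualifying code
    plus an elevated HbA1c laboratory result."""
    dx_hits = [code for code in patient["dx_codes"]
               if code.startswith(T2D_ICD10_PREFIXES)]
    high_a1c = any(value >= HBA1C_THRESHOLD
                   for name, value in patient["labs"] if name == "HbA1c")
    return len(dx_hits) >= 2 or (len(dx_hits) >= 1 and high_a1c)

# Hypothetical extract of structured EHR data
patients = [
    {"id": 1, "dx_codes": ["E11.9", "I10", "E11.65"], "labs": []},
    {"id": 2, "dx_codes": ["I10"], "labs": [("HbA1c", 7.1)]},
    {"id": 3, "dx_codes": ["E11.9"], "labs": [("HbA1c", 7.1)]},
]
cohort = [p["id"] for p in patients if meets_phenotype(p)]
print(cohort)  # patients 1 and 3 qualify
```

A production definition would, as the quoted passage notes, be validated against chart review to confirm it meets the desired degrees of sensitivity and specificity.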

Watch the video module: Defining Outcomes With Electronic Health Record Data

The NIH Collaboratory’s Electronic Health Records Core, which includes representatives from the NIH Collaboratory Trials, has published an article describing approaches for using phenotype definitions to identify clinically equivalent populations across multiple sites, including how to assess whether the data collected from healthcare systems are comparable, valid, and reliable (Richesson et al. 2013).

Some key questions and considerations related to outcomes within EHRs include the following:

  • Is the outcome medically significant such that a patient would seek care?
    • Will the endpoint be medically attended?
    • Does it require hospitalization?
    • Is the treatment for the outcome generally provided in inpatient or outpatient settings?
      • Outpatient events may include diagnoses that justify a specific test order, also called a "rule out," and rule-out diagnoses might not indicate true outcomes.
  • What is the intensity of medical care?
    • If high, as with a myocardial infarction, then there will be a clear record in claims and/or EHR data.
    • If low, as with a gout flare, there may or may not be a record of the event. A solution to this problem is to use a PRO and reach out to the participant at specified intervals.
  • Where would the signal show up?
    • EHR (laboratory values, treatments, etc.)
    • Claims data (does the event generate a bill?)
    • Both
  • What sensitivity is required?
    • Conditions may not be consistently and reliably recorded.
  • Will the data be structured or unstructured?
    • If structured, the data may be usable as is.
    • If unstructured, some work will need to be done to ensure the data are captured in a uniform way. (Prompts can be added to the EHR system, artificial intelligence and large language models can be used, clinicians can be given special training, etc.)
  • Are there overlapping conditions (eg, chest pain and unstable angina)?
    • The data may need adjudication, especially if they are claims data.

An additional consideration, discussed in greater detail in the Using Electronic Health Record Data in Pragmatic Clinical Trials chapter of the Living Textbook, is the ability to capture outcomes from a number of different sources over time. For example, in the Aspirin Dosing: A Patient-Centric Trial Assessing Benefits and Long-term Effectiveness (ADAPTABLE) trial, patients enroll through an online portal, which also provides online consent and randomization. Data collected during routine care are used in the study; however, to capture complete information about all outcomes, which include myocardial infarction, mortality, and hospitalizations, participants are asked to log in to the portal and report hospitalizations. If a patient does not return to the portal, the call center at the Duke Clinical Research Institute calls for follow-up.

The Food and Drug Administration has issued a guidance that discusses selection of data sources, development and validation of definitions for study design elements, and data traceability and quality when using data from the EHR (FDA 2025). The EHR Core has also developed related chapters in the Living Textbook.

 


Resources

For information on challenges and prerequisites to using EHR data, see the article Enhancing the use of EHR systems for pragmatic embedded research: lessons from the NIH Health Care Systems Research Collaboratory

For more on what to measure when using EHR data, view the Living Textbook Grand Rounds Series: Choosing What to Measure and Making it Happen: Your Keys to Pragmatic Trial Success (Devon Check, PhD; Rachel Richesson, PhD)

Using Clinical Data to Advance Discovery; NIH Collaboratory EHR Workshop Video Module (17:10)

Dr. Josh Denny of the NIH’s All of Us Research Program describes how researchers are building powerful algorithms for use across EHR systems to advance clinical research.

REFERENCES


Richesson RL, Hammond WE, Nahm M, et al. 2013. Electronic health records based phenotyping in next-generation clinical trials: a perspective from the NIH Health Care Systems Collaboratory. J Am Med Inform Assoc. 20(e2):e226-e231. doi:10.1136/amiajnl-2013-001926. PMID: 23956018.

 

Real-World Data: Assessing Electronic Health Records and Medical Claims Data To Support Regulatory Decision-Making for Drug and Biological Products. FDA Guidance for Industry. 2025. [accessed 2026 Feb 26]. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/real-world-data-assessing-electronic-health-records-and-medical-claims-data-support-regulatory

 


Version History

March 2, 2026: Updated as part of annual review (changes made by K. Staman).

September 30, 2022: Made minor nonsubstantive text edits and added resources to the Resource Bar (changes made by K. Staman and L. Stewart).

January 22, 2021: Added embedded video (change made by G. Uhlenbrauck).

July 2, 2020: Added a callout to the new Electronic Health Records–Based Phenotyping chapter; and made minor corrections to layout and formatting (changes made by D. Seils).

December 4, 2018: Added key questions (changes made by K. Staman).

Published August 25, 2017