Rethinking Clinical Trials

A Living Textbook of Pragmatic Clinical Trials

Implementation, Section 3

Incorporating Implementation Research Into PCTs

Contributors

Devon Check, PhD
Hayden B. Bosworth, PhD
Anna Krupp, PhD, RN
Anthony Gerlach, PharmD, FCCM, FCCP
Diana Burgess, PhD

Contributing Editor

Karen Staman, MS

Within the NIH Collaboratory, trial teams most commonly incorporate implementation research methods by evaluating implementation-related successes and challenges, along with the factors affecting implementation of the intervention in the context of the trial, consistent with a “hybrid type I” effectiveness-implementation design (Curran et al. 2012). Such evaluations are guided by implementation determinant and evaluation frameworks and can provide valuable information about how an intervention may need to be modified and/or what strategies may be needed to support its effective implementation. Depending on their objectives and timing, these evaluations may be formative and/or process evaluations, defined below.

Implementation evaluations typically use qualitative or quantitative methods, sometimes integrating the two types of data (ie, mixed methods).

  • Qualitative data often focus on implementation determinants (ie, barriers and facilitators – see Section 4) and include data from semi-structured interviews or focus groups with patients, providers, health system leaders, or other partners; direct observation of clinical processes; and document review.
  • Quantitative data on implementation outcomes (eg, RE-AIM outcomes – see Section 4) may come from administrative data or data from structured surveys intended to assess provider and patient behavior and receptivity to change.

Formative Evaluation

Formative evaluations are conducted early and are used to modify and optimize the “implementation potential” of an intervention in the context of the trial. In the NIH Pragmatic Trials Collaboratory, formative evaluation is often used during a trial’s pilot or planning phase to inform the full-scale conduct of the trial.

Case Example: RAMP

To modify and optimize the implementation potential of the RAMP program, the RAMP trial engaged patients (in this case, rural Veterans with chronic pain in the VA healthcare system), community partners, VA healthcare system leaders and staff, and 3 Veteran experts. This process included building 2 standing panels—the Veteran Engagement Panel and Community Partner Advisory Panel—both of which met 3 times per year and communicated by email as needed.

RAMP used mixed methods to iteratively gather and analyze feedback regarding rural Veterans’ needs (pre-pilot study) and the strengths and weaknesses of the intervention and study processes (post-pilot study). The team gathered field notes through several meetings with the Veteran Engagement Panel, Community Partner Advisory Panel, and VA leadership and staff, which were entered into a REDCap data collection system. Pilot study participants were queried by survey. All relevant data were organized thematically using RAMP’s guiding models and frameworks. The study team created a high-level summary of the synthesized feedback regarding strengths (what worked well) and weaknesses (areas for refinement). The participation of partners at multiple levels and community engagement enhanced the overall significance and potential impact of the RAMP study in myriad ways. Perhaps most importantly, the engagement elevated the voices of rural Veterans so their pain-related needs could be considered in the design, development, and refinement of the study intervention, including optimizing the study for implementation within the VA healthcare system.

Process Evaluation

Process evaluations are conducted during or near the end of a trial to characterize how well an intervention was implemented, with the goal of informing future implementation efforts.

Case Example: BEST-ICU

The BEST-ICU trial aims to compare the effectiveness of two implementation strategies in increasing adoption of a multi-component, evidence-based intervention—the ABCDEF bundle. The process evaluation of BEST-ICU involves multiple components to monitor the implementation of each strategy, as described below.

  • Registered nurse (RN) implementation facilitator implementation strategy: This strategy includes a specially trained and dedicated RN (without a patient assignment) from the ICU who works on the study unit each weekday (8 hours/day, 5 days/week) to support implementation of the ABCDEF bundle. Core activities include serving as a clinical facilitator and bundle coordinator, providing performance monitoring and feedback, acting as a champion, and coaching. The study team routinely monitors the implementation using the following dimensions and methods:
    • Fidelity monitoring: Trained observers evaluate adherence and quality of delivery of the implementation facilitator intervention using standard reporting tools each month.
    • Dose monitoring: The amount of time the RN implementation facilitator is scheduled to work each week is monitored monthly using staffing data.
    • Facilitator adherence: RN implementation facilitators complete a brief log at the end of each shift to report the amount of time spent in each core activity during the shift.
    • Feedback collection: Gathering feedback from study unit participants is planned at the completion of the study.
  • Real-time audit and feedback strategy: This strategy uses data from the electronic health record to populate a centrally displayed dashboard visible to all healthcare practitioners, with data updated in real time. Each component of the ABCDEF bundle is marked green, yellow, or red: green indicates the component has been documented at least the minimum number of times required each day; yellow indicates at least one documented occurrence but fewer than the daily minimum; and red indicates the component has not been documented in the last 24 hours. The study team routinely monitors implementation using the following dimensions and methods:
    • Fidelity monitoring: Trained study team members evaluate the real-time audit and feedback intervention monthly using a study-developed implementation fidelity checklist. The checklist includes direct observation of the centrally placed dashboard to confirm that all patients and bundle components are displayed correctly.
    • Feedback collection: Gathering feedback from study unit participants is planned at the completion of the study.
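The traffic-light rule behind the real-time dashboard can be sketched in a few lines. This is an illustrative sketch only; the function and parameter names are assumptions for clarity and are not part of the BEST-ICU trial’s actual software.

```python
def bundle_component_status(doc_count_24h: int, min_daily_required: int) -> str:
    """Classify one ABCDEF bundle component for the dashboard.

    Thresholds follow the strategy described above: green when
    documentation meets the minimum daily requirement, yellow when
    documented at least once but below the minimum, and red when
    the component has not been documented in the last 24 hours.
    """
    if doc_count_24h >= min_daily_required:
        return "green"
    if doc_count_24h >= 1:
        return "yellow"
    return "red"


# Example: a component required twice daily, checked for three patients
statuses = [bundle_component_status(n, 2) for n in (0, 1, 3)]
```

With a minimum daily requirement of 2, the example yields red, yellow, and green for 0, 1, and 3 documented occurrences, respectively.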

Combination of Formative and Process Evaluation

Many trials blend aspects of formative and process evaluation in their measurement of implementation.

Case Example: STOP CRC

STOP CRC aimed to increase colorectal cancer screening among underserved populations by implementing an evidence-based intervention, a mailed fecal immunochemical test (FIT) outreach program, within healthcare systems.

The formative evaluation of the STOP CRC trial addressed the study’s implementation research objectives: identifying barriers to screening within these populations and developing and evaluating tailored strategies to overcome those barriers (Coronado et al. 2014; Coronado et al. 2018).

The process evaluation of the STOP CRC trial involved multiple components to monitor the implementation and impact of the intervention throughout the study period. Key aspects included:

  • Fidelity assessment: Evaluating whether the intervention was delivered as intended, by tracking the distribution and return rates of FIT kits.
  • Adherence monitoring: Recording the extent to which patients completed the screening tests and followed through with subsequent diagnostic procedures if needed.
  • Provider engagement: Conducting interviews and surveys with healthcare providers to gather insights on their experiences with the intervention and identify any barriers to implementation.
  • Patient feedback: Collecting qualitative data from patients through focus groups and interviews to understand their perceptions of the intervention and any challenges they faced in participating.
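The quantitative side of this monitoring reduces to a few simple rates computed from tracking data. The sketch below shows one way such tallies might be summarized per clinic; the class and field names are hypothetical illustrations, not STOP CRC’s actual data model.

```python
from dataclasses import dataclass


@dataclass
class FitOutreachTally:
    """Hypothetical per-clinic tally for mailed FIT kit tracking."""
    kits_mailed: int
    kits_returned: int
    positive_results: int
    followup_colonoscopies: int


def return_rate(t: FitOutreachTally) -> float:
    """Share of mailed kits completed and returned (fidelity/reach)."""
    return t.kits_returned / t.kits_mailed if t.kits_mailed else 0.0


def followup_rate(t: FitOutreachTally) -> float:
    """Share of positive results followed by diagnostic workup (adherence)."""
    return t.followup_colonoscopies / t.positive_results if t.positive_results else 0.0


# Example with made-up numbers for one clinic
clinic = FitOutreachTally(kits_mailed=1000, kits_returned=230,
                          positive_results=18, followup_colonoscopies=12)
```

For the made-up clinic above, the return rate is 0.23 and two-thirds of positive results received follow-up; in practice such rates would be compared across sites and over time.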

The results of the STOP CRC trial demonstrated a significant increase in colorectal cancer screening rates in the intervention group compared to the control group. The process evaluation provided valuable information on the factors contributing to the success of the intervention and highlighted areas for improvement in future implementations.

Case Example: TSOS

One option for pragmatic implementation process assessment was developed by the Trauma Survivors Outcomes & Support (TSOS) pragmatic trial research team: Rapid Assessment Procedure Informed Clinical Ethnography (RAPICE) (Palinkas and Zatzick 2019).

As part of site visits and training activities, the TSOS research team spent hundreds of hours annually immersed in the pragmatic trial rollout in trauma care systems (Zatzick 2019). During these activities, the principal investigator and other team members logged field notes on their clinical research experiences. These data were reviewed regularly (eg, monthly) with a mixed-methods expert consultant. Themes related to intervention delivery, sustainable implementation, barriers, and facilitators were iteratively discussed and documented. When appropriate, observations were fed back to frontline providers rolling out the TSOS intervention; other implementation process observations were collected by the study team and presented at the study’s policy summit and in other formats (eg, peer-reviewed publications). Notably, because these procedures were embedded in the trial, they did not substantially increase trial timelines or costs.

In summary, within the NIH Pragmatic Trials Collaboratory, implementation research is commonly integrated using hybrid type I trial designs, which pair effectiveness evaluation with an assessment of implementation successes, challenges, and contextual factors. These evaluations are structured using determinant and outcome frameworks and may take the form of formative evaluations, conducted early to optimize implementation potential, or process evaluations, conducted during or after implementation to monitor fidelity and inform future scale-up. Formative evaluations, such as in the RAMP trial, leverage mixed methods and engagement to adapt interventions to local needs, enhancing relevance and uptake. Process evaluations, like those in BEST-ICU, systematically monitor fidelity, dose, and engagement using real-time dashboards and staff logs. Some trials, like STOP CRC, blend formative and process strategies to both tailor and monitor implementation, resulting in improved outcomes and deeper insights into intervention delivery. Innovative methods such as RAPICE, used in TSOS, demonstrate how embedded ethnographic data collection can offer real-time, low-burden feedback to improve implementation while maintaining trial efficiency. Collectively, these approaches exemplify how pragmatic trials can simultaneously evaluate clinical effectiveness and generate actionable insights into how interventions can be sustainably adopted in diverse healthcare settings.


CHAPTER SECTIONS
  1. Introduction
  2. Factors Influencing Implementation of PCT Results
  3. Incorporating Implementation Research Into PCTs
  4. Implementation Frameworks
  5. How PCTs Prepare for Implementation

Resources

Grand Rounds

Trauma Survivors Outcomes & Support (TSOS) Pragmatic Trial: Revisiting Effectiveness & Implementation Aims (Doug Zatzick, MD)

Presentation

Trial Objectives and Design: An Overview of Hybrid Designs

REFERENCES


Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. 2012. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care. 50:217–226. doi:10.1097/MLR.0b013e3182408812. PMID: 22310560.

Coronado GD, Vollmer WM, Petrik A, et al. 2014. Strategies and Opportunities to STOP Colon Cancer in Priority Populations: design of a cluster-randomized pragmatic trial. Contemp Clin Trials. 38:344–349. doi:10.1016/j.cct.2014.06.006. PMID: 24937017.

Coronado GD, Petrik AF, Vollmer WM, et al. 2018. Effectiveness of a mailed colorectal cancer screening outreach program in community health clinics: the STOP CRC cluster randomized clinical trial. JAMA Intern Med. 178:1174. doi:10.1001/jamainternmed.2018.3629. PMID: 30083752.


Palinkas LA, Zatzick D. 2019. Rapid assessment procedure informed clinical ethnography (RAPICE) in pragmatic clinical trials of mental health services implementation: methods and applied case study. Adm Policy Ment Health. 46(2):255–270. doi:10.1007/s10488-018-0909-3. PMID: 30488143.

Zatzick DF. 2019. Trauma Survivors Outcomes & Support (TSOS) Pragmatic Trial: Revisiting Effectiveness & Implementation Aims. https://rethinkingclinicaltrials.org/news/april-19-2019-trauma-survivors-outcomes-support-tsos-pragmatic-trial-revisiting-effectiveness-implementation-aims-doug-zatzick-md/.


Version History

Published August 7, 2025


Citation:

Implementation: Incorporating Implementation Research Into PCTs. In: Rethinking Clinical Trials: A Living Textbook of Pragmatic Clinical Trials. Bethesda, MD: NIH Pragmatic Trials Collaboratory. Available at: https://rethinkingclinicaltrials.org/chapters/dissemination/implementation/incorporating-implementation-research-into-pcts/. Updated August 28, 2025. DOI: 10.28929/276.
