October 9, 2025: New Living Textbook Chapter Explores Decentralized Elements of Pragmatic Clinical Trials

The NIH Pragmatic Trials Collaboratory this week published a new chapter of its Living Textbook of Pragmatic Clinical Trials. The chapter, Decentralized Pragmatic Clinical Trials, covers activities of a pragmatic trial that can occur remotely—at a location separate from an investigator’s location—such as participant engagement, recruitment, consent, study interventions and procedures, collection of patient-reported outcomes, and follow-up.

The chapter describes special considerations for decentralized trials, such as community health considerations and the vigilance needed to ensure data quality, particularly as it relates to adherence to the study intervention, outcome ascertainment, and event monitoring.

The new chapter includes the following sections:

  1. What Is a Decentralized Trial?
  2. What Decentralized Elements Are Used in Pragmatic Trials?
  3. Community Health Considerations for Decentralized Approaches
  4. Quality Assurance

Most of the NIH Collaboratory Trials have decentralized elements, as described in detail in Section 2.

August 11, 2025: New Living Textbook Chapter Explores Implementation in Pragmatic Clinical Trials

The NIH Pragmatic Trials Collaboratory Implementation Science Core, led by Devon Check and Hayden Bosworth, has developed a new chapter on implementation to assist study teams with the complex process of using and studying implementation strategies to move research findings into clinical care.

Case studies are used to illustrate how pragmatic clinical trials embedded in healthcare systems use implementation frameworks, including examples from RAMP, BEST-ICU, STOP CRC, TSOS, ABATE, STEP-2, and GRACE.

For more, see our collection of chapters on Dissemination and Implementation, which includes chapters on Dissemination to Different Stakeholders, Data Sharing and Embedded Research, and End-of-Trial Decision-Making.

June 23, 2025: How to Choose Patient-Reported Outcome Measures in Pragmatic Clinical Trials?

A new section of the Living Textbook of Pragmatic Clinical Trials describes considerations for choosing patient-reported outcome measures in pragmatic clinical trials.

“Where possible, investigators are encouraged to use measures with adequate support for validity that are in the public domain,” the authors wrote.

The authors provide a set of questions to guide investigators in choosing appropriate measures. For example, investigators may want to understand whether the patient-reported outcome is captured in electronic health records, is in the public domain, and is valid for the use case in question.

The considerations were developed by members of the NIH Pragmatic Trials Collaboratory’s Patient-Centered Outcomes Core in collaboration with the Health Care Systems Interactions Core, the Electronic Health Records Core, and colleagues at the NIH.

June 12, 2025: Living Textbook Chapter Covers Consent, Disclosure, and Nondisclosure for Pragmatic Trials

A new chapter of the Living Textbook of Pragmatic Clinical Trials describes regulatory requirements for informed consent, waivers and alterations of consent, mechanisms for notification, and research participants’ perspectives on a variety of approaches to consent and notification—all with a focus on special considerations for pragmatic clinical trials.

For a variety of reasons, the application of ethical principles and regulations regarding informed consent can be complex for pragmatic clinical trials. For example, pragmatic trials often use novel study designs, including cluster randomization, in which the unit of randomization may be a clinic, hospital, or healthcare system rather than the individual. Some pragmatic trials also use stepped-wedge designs, in which the study intervention is introduced to sites at different times.

The new chapter has 5 sections:

  • Section 1 discusses reasons why the application of ethical principles and regulations regarding informed consent can be complex for pragmatic trials.
  • Section 2 describes the regulatory requirements for informed consent.
  • Section 3 focuses on waivers and alterations of the informed consent process.
  • Section 4 provides examples of mechanisms for notifying participants about the trial when consent is not required.
  • Section 5 presents findings on research partners’ preferences regarding various approaches to research and consent.

The chapter was developed by members of the NIH Pragmatic Trials Collaboratory’s Ethics and Regulatory Core.

May 14, 2025: New Living Textbook Chapter Provides Guidance for Investigators Facing Tough Decisions After a Trial Ends

Pragmatic clinical trials embedded in healthcare systems rely on partnerships between investigators and healthcare system leaders to conduct research. As the end of a pragmatic trial approaches, research teams and their partners often face uncertainty during this undefined phase, when researchers are awaiting results. End-of-trial decision-making, including whether to sustain an intervention, has implications for research teams, healthcare systems, and patients.

A new chapter of the Living Textbook of Pragmatic Clinical Trials, published this week, describes the challenges investigators face during this common period of ambiguity and offers considerations for decision-making that honors researchers’ responsibilities and fosters ongoing collaboration with healthcare system partners while awaiting trial results:

  • Section 1 introduces possible trial outcomes and describes the intersection of posttrial responsibilities, sustainment, and deimplementation.
  • Section 2 provides case studies describing how research teams from 3 NIH Collaboratory Trials approached end-of-trial decision-making.
  • Section 3 focuses on considerations for investigators and an end-of-trial decision-making framework.
  • Section 4 provides approaches that investigators might take to support research teams and healthcare system partners as they navigate the last part of a trial, before outcomes are known.

The chapter was developed by members of the NIH Pragmatic Trials Collaboratory’s Health Care Systems Interactions Core.

November 4, 2024: Update to Living Textbook Offers Tips for Developing a Compelling PCORI Grant Application

The NIH and the Patient-Centered Outcomes Research Institute (PCORI) are major funders of pragmatic clinical trials embedded in healthcare systems. An existing chapter of the Living Textbook of Pragmatic Clinical Trials provided guidance on how to Develop a Compelling Grant Application for the NIH. The updated chapter now includes information about how to develop a grant application for PCORI.

While both organizations share the common goal of advancing public health, PCORI-funded research requires investigator and patient/community partnership throughout the entire research process, from design through results dissemination and implementation.

The updated chapter provides practical advice about how to develop and submit an application in the following sections:

  1. Find the Right Program Official and Study Section
  2. Find the Notice of Funding Opportunity
  3. Write a Strong Proposal That Addresses Review Criteria
  4. Review Criteria
  5. Diversity, Disparities, and Inclusion Across the Lifespan
  6. Award Status
  7. Additional Resources

Regardless of whether the submission is for NIH or PCORI, first and foremost, investigators should develop and clearly define a clinical research question with a testable hypothesis and select an experimental design best suited to answering the research question.

Assessing Fitness for Use of Real-World Data Sources

Section 5

Use of Medicare Data in PCTs

As of 2023, about half of all Medicare beneficiaries were enrolled in Medicare Advantage (Biniek et al 2023)—approximately 30.8 million people (Ochieng et al 2023). Medicare Advantage is a private managed care alternative to traditional fee-for-service Medicare. With Medicare Advantage, healthcare organizations receive a set amount of money to cover the healthcare costs of enrolled patients, and the amount is determined by the patient's risk score, which is based on patient characteristics and health conditions (Centers for Medicare & Medicaid Services 2024). Thus, Medicare Advantage plans have different incentives to document diagnoses than fee-for-service plans do, because diagnoses are linked to risk scores, which could lead to aggressive coding practices (Keating 2023). These differences can affect the reliability and relevance of the data used for PCTs.

Some important differences have been highlighted in examples from the literature, including differences in population, medication use, and preventive services, as well as variation across states and counties.

  • Population
    • Medicare Advantage plans include a higher share of members who require chronic disease management, and those with serious mental health conditions or substance abuse issues (Waddill 2021).
  • Medication use
    • Use of high-risk medications that should be avoided in older adults was lower among Medicare Advantage patients (Figueroa et al 2023).
  • Preventive services
    • Medicare Advantage plans are incentivized to enhance preventive services such as screenings; for example, mammography rates are higher among older patients with dementia (Raver et al 2024).
    • Lung cancer screening rates are also higher among patients with Medicare Advantage (Hughes et al 2023).
  • Variation across states and counties
    • There is wide variation in enrollment rates across states and counties, which could be reflective of urban vs rural populations, the number of Medicare beneficiaries and their healthcare use patterns, and differences in the firms that offer Medicare Advantage across different geographic regions (Ochieng et al 2023).

Researchers should be aware that differences between Medicare Advantage and fee-for-service Medicare claims data could reflect a true pattern or could be an artifact of billing practices, the enrolled population, or the treatments and medications used.

Finally, trials that include both Medicare Advantage and fee-for-service Medicare populations would need to purchase both sets of data, which can be expensive depending on the number of years of data needed. The alternative is to push trial data into the Virtual Research Data Center and pay a flat fee for access to both; this approach comes with some logistical challenges but is worth considering.

REFERENCES

Biniek JF, Freed M, Damico A, Neuman T. 2023. Half of All Eligible Medicare Beneficiaries Are Now Enrolled in Private Medicare Advantage Plans. KFF. https://www.kff.org/policy-watch/half-of-all-eligible-medicare-beneficiaries-are-now-enrolled-in-private-medicare-advantage-plans/. Accessed April 15, 2024.

Centers for Medicare & Medicaid Services. 2024. Capitation and Pre-payment. https://www.cms.gov/priorities/innovation/key-concepts/capitation-and-pre-payment. Accessed April 15, 2024.

Figueroa JF, Dai D, Feyman Y, et al. 2023. Use of high-risk medications among older adults enrolled in Medicare Advantage plans vs traditional Medicare. JAMA Netw Open. 6(6):e2320583. doi:10.1001/jamanetworkopen.2023.20583. PMID: 37368399.

Hughes DR, Chen J, Wallace AE, et al. 2023. Comparison of lung cancer screening eligibility and use between commercial, Medicare, and Medicare Advantage enrollees. J Am Coll Radiol. 20(4):402-410. doi: 10.1016/j.jacr.2022.12.022. PMID: 37001939.

Keating NL. 2023. Challenges and opportunities to address aggressive coding practices by Medicare Advantage plans. Ann Intern Med. 176(7):987–988. doi:10.7326/M23-0534. PMID: 37276598.

Ochieng N, Biniek JF, Freed M, Damico A, Neuman T. 2023. Medicare Advantage in 2023: Enrollment update and key trends. KFF. https://www.kff.org/medicare/issue-brief/medicare-advantage-in-2023-enrollment-update-and-key-trends/. Accessed June 26, 2024.

Raver E, Xu WY, Jung J, Lee S. 2024. Breast cancer screening among Medicare Advantage enrollees with dementia. BMC Health Serv Res. 24(1):283. doi: 10.1186/s12913-024-10740-7. PMID: 38443911.

Waddill K. 2021. Medicare Advantage Plans Draw More Members with Chronic Diseases. TechTarget. https://healthpayerintelligence.com/news/medicare-advantage-plans-draw-more-members-with-chronic-diseases. Accessed April 15, 2024.


Version History

October 21, 2024: Made nonsubstantive changes to the text (changes made by D. Seils).

Published October 11, 2024

Analysis Plan

Section 8

Interim Reassessment of Sample Size in Cluster Randomized Trials

A defining characteristic of cluster randomized trials is the randomization of groups, or clusters, of individuals to study arms and the resulting potential for correlation of outcomes within clusters. This potential correlation must be considered in the design of the trial and in the primary analysis. Thus, in addition to estimating the effect size in a cluster randomized trial, researchers must estimate the intraclass correlation coefficient (ICC) for a valid calculation of the target sample size (Campbell et al 2004; Donner and Klar 2000; Chow et al 2020).

See the Intraclass Correlation section of this chapter for more on the ICC.

In ideal situations, preliminary data for sample size calculations are available from the planned enrollment sites for the individuals and clusters to be studied, and these data can be analyzed to inform estimates of the ICC. However, in many situations, preliminary data and reliable estimates of the ICC may not be obtainable at the time of study design. In these cases, researchers may wish to use interim outcome data collected during the trial itself to estimate the ICC and to reassess the sample size (Wittes and Brittain 1990).
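To see why the ICC matters for the target sample size, the brief sketch below (in Python, with hypothetical planning values rather than numbers from any particular trial) inflates a conventional two-sample calculation by the standard design effect, 1 + (m − 1) × ICC, where m is the number of individuals per cluster.

```python
from scipy import stats

def individual_n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sample comparison of means,
    ignoring clustering (normal approximation)."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

def design_effect(m, icc):
    """Variance inflation factor for cluster randomization with equal cluster sizes."""
    return 1 + (m - 1) * icc

# Hypothetical planning values: detect a difference of 1.0 (SD 2.0)
# with 20 participants per cluster and an assumed ICC of 0.10.
n_ind = individual_n_per_arm(delta=1.0, sd=2.0)
de = design_effect(m=20, icc=0.10)
n_clustered = n_ind * de

print(f"Per-arm n ignoring clustering: {n_ind:.0f}")
print(f"Design effect:                 {de:.2f}")
print(f"Per-arm n with clustering:     {n_clustered:.0f}")
print(f"Clusters of 20 needed per arm: {n_clustered / 20:.1f}")
```

An ICC that is misjudged at the design stage therefore propagates directly into the number of clusters required, which is why an interim reestimate can be valuable.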

Approaches

Formal methods for sample size reestimation in cluster randomized trials have been proposed, along with strategies for modifying the study design. The main approaches consider either an initial sample of clusters or an initial sample of individuals from a fixed number of clusters, with the interim analysis estimating the key variance components needed for a recalculation of sample size. Lake and colleagues (2002) focus on the scenario in which cluster sizes are fixed and the key design question is the necessary total number of clusters. In contrast, van Schie and Moerbeek (2014) consider the scenario in which the total number of clusters is fixed but the sample size from each cluster can vary.

In both scenarios, the proposed methods involve analyzing interim data from the trial and generally do not guarantee control of type I error. However, extensive simulations verify that both strategies lead to minimal error rate inflation, allowing the researchers to adjust sample sizes and obtain a final sample size with approximately the desired statistical power, even when the preenrollment sample size assumptions are inaccurate. Finally, in both scenarios, the analyst must select the timing of the interim analysis for sample size adjustment. Recommendations suggest conducting the interim evaluation after 25% to 75% of the originally planned enrollment (either the number of clusters or the number of individuals). Additional research has studied methods for stepped-wedge designs (Grayling et al 2018) and the use of Bayesian methods (Shen et al 2022).

In summary, there are 2 main methods for increasing the effective sample size in a cluster randomized trial: (1) enroll more individuals per cluster when the number of clusters is fixed; or (2) add more clusters. However, adding clusters may not be feasible if additional clusters are unavailable or trial resources are limited.
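The trade-off between these 2 options can be seen from the effective sample size, k × m / (1 + (m − 1) × ICC), for k clusters of size m. A brief sketch with hypothetical values (not taken from any specific trial):

```python
def effective_n(k, m, icc):
    """Effective sample size for k clusters of size m with a given ICC."""
    return k * m / (1 + (m - 1) * icc)

icc = 0.10
base             = effective_n(k=20, m=20, icc=icc)  # 20 clusters of 20
more_individuals = effective_n(k=20, m=40, icc=icc)  # double the cluster size
more_clusters    = effective_n(k=40, m=20, icc=icc)  # double the number of clusters

print(f"20 clusters x 20 individuals: effective n = {base:.0f}")
print(f"20 clusters x 40 individuals: effective n = {more_individuals:.0f}")
print(f"40 clusters x 20 individuals: effective n = {more_clusters:.0f}")
# With a nonzero ICC, doubling the cluster size yields diminishing returns,
# whereas doubling the number of clusters doubles the effective sample size.
```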

Case Study: FM-TIPS

In this section, we discuss the Fibromyalgia Transcutaneous Electrical Nerve Stimulation (TENS) in Physical Therapy Study (FM-TIPS) as an example of conducting an interim reassessment of sample size in a cluster randomized trial. FM-TIPS, an NIH Collaboratory Trial supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), is a cluster randomized pragmatic trial examining whether the addition of TENS to routine physical therapy improves movement-evoked pain between baseline and 60 days compared with physical therapy alone among patients with fibromyalgia (Post et al 2022).

Learn more about the NIH Collaboratory Trials.

The FM-TIPS research team originally calculated the trial’s sample size such that a 2-tailed statistical test at the 0.05 significance level would be able to detect a difference of at least 1.0 in mean change in movement-evoked pain (on a scale of 0 to 10) with an assumed SD of 2.0. To ensure 80% statistical power for the primary analysis, they calculated the sample size to have an equal number of participants per clinic (range, 9-12 clinics/arm). They conservatively estimated an ICC of up to 0.14, which would require complete outcome data for 456 patients, assuming 19 patients per clinic and 12 clinics per arm. To account for a dropout rate of up to 24% by day 60, the research team aimed to enroll a total of 600 patients (300 per arm; 25 from each of 24 clinics). Some variability in the number of patients enrolled from each clinic was expected. Therefore, the research team capped enrollment at 30 patients per clinic to enroll up to 20% more than was originally planned at each clinic.

During study design, the research team also planned an interim reestimation of the ICC. They considered enrollment targets of one-quarter, one-third, one-half, and three-quarters of the total planned sample size (N = 600) as options for the timing of the assessment. By comparison, van Schie and Moerbeek (2014) recommend recalculating the ICC after enrollment of 50% of the planned number of participants. The research team determined that conducting the reassessment after enrolling half or three-quarters of the patients would yield the best estimate of the ICC. However, they were concerned that, if the original ICC estimate proved too conservative, this timing might be too late to obtain the necessary approvals from the study’s sponsor for a potential sample size reduction. Therefore, they planned the interim reassessment to occur after enrollment of the first 200 participants from both arms combined, corresponding to one-third of the planned sample size. Although this meant that fewer than 200 participants would have 60-day primary outcome data available at the time of the interim reassessment, the reassessment would allow the research team to evaluate the SD of the outcome while accounting for important aspects of the study design, such as the number of clinics and the number of patients per clinic. Thus, while the interim reassessment would not allow the research team to evaluate the treatment effect, it would allow them to assess outcome variability for sample size reestimation.

It is worth noting that an interim reassessment of the mean and SD of the primary outcome could have been considered. The FM-TIPS research team did not take this approach, because the minimal clinically important difference in the outcome was available in the existing literature and from previous studies. Since there were limited preliminary data available at the time of the interim reassessment, the research team relied on the original estimate of the mean and SD and focused on reestimating the ICC to support a reassessment of the sample size.

Methods

For the sample size calculations, the FM-TIPS research team used a formula described by Ahn and colleagues (2015) and the online calculator developed by Campbell and colleagues (2004). The team activated 24 clinics in 5 healthcare systems beginning in January 2021. They added a healthcare system and several clinics, and deactivated some clinics, in November 2022. (It was not feasible to add more healthcare systems or clinics due to challenges associated with the COVID-19 public health emergency and a lack of physical therapy clinics interested in participating in a research study.) Observed enrollment was variable across clinics. The research team determined that the sample size reassessment should account for this variability through the use of the coefficient of variation (CV = SD of cluster size/mean cluster size), which is commonly used to characterize variability in cluster sizes. Therefore, they included the CV of patients per clinic in the sample size reassessment.

The research team reestimated the ICC by using a modeling approach aimed at evaluating the relative contributions of different sites to the variance compared with other study characteristics. Specifically, they used a generalized linear mixed model with type I sums of squares to obtain the ICC estimate. To maintain blinding to treatment effect in the interim reassessment, they did not include the main effect of the treatment. In this model, they considered the size of each clinic (small or large), the interaction between clinic size and treatment arm (size × arm), and a categorical variable for movement-evoked pain at baseline (0-3, 4-6, 7-10) to be fixed factors, and they considered sites × (size × arm) to be a random factor.
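As a rough illustration of the variance-components idea behind such a model, the sketch below fits a linear mixed model with a random clinic intercept to simulated interim data and computes the ICC as the between-clinic variance divided by the total variance. It is a simplified stand-in, not the FM-TIPS model itself (which was a generalized linear mixed model with fixed effects for clinic size, size × arm, and baseline pain category and a sites × (size × arm) random factor); the variable names and simulated values are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate interim data: 'clinic' is the cluster, 'pain_change' the outcome,
# 'baseline_cat' a baseline pain category. All values are illustrative.
rng = np.random.default_rng(0)
n_clinics, n_per_clinic = 26, 6
clinic_effects = rng.normal(0, 0.5, n_clinics)  # between-clinic SD of 0.5
rows = []
for c in range(n_clinics):
    for _ in range(n_per_clinic):
        rows.append({
            "clinic": f"c{c}",
            "baseline_cat": rng.choice(["0-3", "4-6", "7-10"]),
            "pain_change": 1.0 + clinic_effects[c] + rng.normal(0, 2.0),
        })
df = pd.DataFrame(rows)

# Random intercept for clinic; fixed effect for baseline pain category.
# The treatment arm is deliberately omitted so the fit stays blinded
# to any treatment effect.
model = smf.mixedlm("pain_change ~ C(baseline_cat)", data=df, groups=df["clinic"])
fit = model.fit(reml=True)

var_clinic = fit.cov_re.iloc[0, 0]  # between-clinic variance component
var_resid = fit.scale               # within-clinic (residual) variance
icc_hat = var_clinic / (var_clinic + var_resid)
print(f"Estimated ICC: {icc_hat:.3f}")
```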

Although an interim analysis can provide a new estimate of the ICC, any sample size reassessment should account for uncertainty in the new ICC estimate. To this end, the FM-TIPS research team considered the jackknife method, which allows the analyst to estimate the SE of the ICC without making parametric assumptions about the data. Using the jackknife method, the analyst performs calculations based on a leave-one-out resampling of the data wherein resampling of clusters, rather than individuals, is used to account for cluster randomization. In FM-TIPS, this approach corresponded to calculating the ICC while leaving 1 clinic out at a time and assessing the influence of each clinic. These ICC estimates were then used to calculate the jackknife-based SE for the interim ICC estimate.
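Below is a sketch of the leave-one-cluster-out jackknife described above, reusing the simulated data frame (df) and the model-fitting step from the previous sketch; it is illustrative rather than the trial’s actual code.

```python
import numpy as np
import statsmodels.formula.api as smf

def estimate_icc(data):
    """Refit the random-intercept model and return the ICC (see previous sketch)."""
    m = smf.mixedlm("pain_change ~ C(baseline_cat)", data=data, groups=data["clinic"])
    f = m.fit(reml=True)
    var_clinic, var_resid = f.cov_re.iloc[0, 0], f.scale
    return var_clinic / (var_clinic + var_resid)

clinics = df["clinic"].unique()
k = len(clinics)

# Leave one clinic out at a time and re-estimate the ICC, so that the
# resampling unit is the cluster rather than the individual.
loo_iccs = np.array([estimate_icc(df[df["clinic"] != c]) for c in clinics])

# Jackknife standard error of the ICC estimate.
icc_bar = loo_iccs.mean()
se_jack = np.sqrt((k - 1) / k * np.sum((loo_iccs - icc_bar) ** 2))
print(f"Jackknife SE of the ICC: {se_jack:.3f}")
```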

Results

At the time of the interim reassessment in FM-TIPS, there were 28 active clinics, of which 26 clinics (13 in each study arm) had at least 1 patient who had completed the day 60 visit at which the primary outcome could be ascertained, and of which 21 clinics had more than 1 such patient. Consistent with the statistical analysis plan, there were 228 patients enrolled (including 183 patients with a day 1 assessment and 144 patients with a day 60 assessment). The reestimated ICC based on the adjusted model was 0.05, and the jackknife-based estimate of the SE of the ICC was 0.07.

Based on the observed enrollment of the clinics during the initial part of the study, the research team assumed the sample size per clinic would have a CV of 0.6. The Table shows the statistical power for different ICC values assuming 13 or 14 clinics per arm and the observed CV of 0.6, using the originally assumed difference in mean and SD for the primary outcome variable. The degrees of freedom for the test were calculated based on the number of clusters in the study.

Table. Statistical Power for Different ICC Values in FM-TIPSa

Power   No. per Arm Completing Day 60 (Total)   No. of Clusters (Clinics) per Arm   ICC
0.92    169 (338)                               13                                  0.05
0.94    182 (364)                               14                                  0.05
0.81    169 (338)                               13                                  0.10
0.84    182 (364)                               14                                  0.10
0.78    169 (338)                               13                                  0.12
0.81    182 (364)                               14                                  0.12

a Assuming a CV of 0.6 and a difference of 1.0 in mean change in pain (SD, 2.0; α = 0.05), using a 2-tailed t test with degrees of freedom based on the number of clusters (specifically the number of clinics − 2).

Accounting for the variable sample size per clinic and assuming 13 clinics per arm, the required sample size was an estimated 450 enrolled patients to obtain 342 patients who completed day 60, assuming a 24% dropout rate. This sample size would provide greater than 90% statistical power for an ICC of 0.05, greater than 80% power for an ICC of 0.10, and 78% power for an ICC of 0.12 (ICC + 1SE = 0.05 + 0.07). The research team assumed that, with the increased enrollment, the jackknife-based estimate of the SE would not increase and that the final ICC would not exceed 0.10. Thus, they reduced the target sample size from 600 patients to 450 patients. The interim reassessment of sample size was presented to and approved by NIAMS and the trial’s data and safety monitoring board.
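One common way to carry out such a power calculation is to inflate the variance by a design effect that allows for unequal cluster sizes, 1 + ((CV² + 1) × m̄ − 1) × ICC, where m̄ is the mean cluster size, and then to evaluate a 2-tailed t test under the noncentral t distribution with degrees of freedom based on the number of clinics. The sketch below takes that generic approach with inputs loosely based on the Table; because the FM-TIPS team used the Ahn et al (2015) formula and the Campbell et al (2004) calculator, which are not reproduced here, its output will approximate rather than exactly match the values in the Table.

```python
import numpy as np
from scipy import stats

def power_cluster_t(n_per_arm, clinics_per_arm, icc, cv,
                    delta=1.0, sd=2.0, alpha=0.05):
    """Approximate power for a 2-arm cluster randomized trial using a
    design effect for unequal cluster sizes and a t test with
    degrees of freedom of (total clinics - 2)."""
    m_bar = n_per_arm / clinics_per_arm       # mean completers per clinic
    de = 1 + ((cv**2 + 1) * m_bar - 1) * icc  # variance inflation
    n_eff = n_per_arm / de                    # effective n per arm
    ncp = delta / (sd * np.sqrt(2 / n_eff))   # noncentrality parameter
    dof = 2 * clinics_per_arm - 2             # number of clinics - 2
    t_crit = stats.t.ppf(1 - alpha / 2, dof)
    return (1 - stats.nct.cdf(t_crit, dof, ncp)) + stats.nct.cdf(-t_crit, dof, ncp)

# 13 clinics per arm, 169 completers per arm, CV of 0.6, candidate ICC values
for icc in (0.05, 0.10, 0.12):
    p = power_cluster_t(n_per_arm=169, clinics_per_arm=13, icc=icc, cv=0.6)
    print(f"ICC = {icc:.2f}: approximate power = {p:.2f}")
```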

Conclusion

When a reliable estimate of the ICC is unavailable during the planning of a cluster randomized trial, calculating the ICC from interim trial data is a viable option. In FM-TIPS, conducting an interim reassessment allowed the research team to estimate a new ICC based on actual trial data. This approach not only provided the research team with a point estimate of the ICC but also yielded the CV of enrollment per clinic for use in the sample size recalculation. The choice of timing for the interim assessment is important. It is also essential to preplan the interim reassessment and incorporate it into the statistical analysis plan. Interim reassessment of sample size may pose logistical challenges. For example, while the interim ICC estimate in FM-TIPS was smaller than the initial estimate, which enabled the research team to reduce the sample size, it is possible that a trial’s sample size will need to increase if the interim ICC estimate is larger than anticipated. In such situations, operational adaptations such as securing additional funding to enroll more patients may be necessary and should be considered in advance.

REFERENCES

Ahn C, Heo M, Zhang S. 2015. Sample Size Calculations for Clustered and Longitudinal Outcomes in Clinical Research. Boca Raton, Florida: CRC Press.

Campbell MK, Thomson S, Ramsay CR, MacLennan GS, Grimshaw JM. 2004. Sample size calculator for cluster randomized trials. Comput Biol Med. 34(2):113-25. doi: 10.1016/S0010-4825(03)00039-8. PMID: 14972631.

Chow S-C, Shao J, Wang H, Lokhnygina Y, eds. 2020. Sample Size Calculations in Clinical Research. 3rd ed. Boca Raton, Florida: CRC Press.

Donner A, Klar N. 2000. Design and Analysis of Cluster Randomization Trials in Health Research. London, UK: Arnold.

Grayling MJ, Mander AP, Wason JMS. 2018. Blinded and unblinded sample size reestimation procedures for stepped-wedge cluster randomized trials. Biom J. 60(5):903-916. doi: 10.1002/bimj.201700125. PMID: 30073685.

Lake S, Kammann E, Klar N, Betensky R. 2002. Sample size re-estimation in cluster randomization trials. Stat Med. 21(10):1337-50. doi: 10.1002/sim.1121. PMID: 12185888.

Post AA, Dailey DL, Bayman EO, et al. 2022. The Fibromyalgia Transcutaneous Electrical Nerve Stimulation in Physical Therapy Study protocol: A multisite embedded pragmatic trial. Phys Ther. 102(11):pzac116. doi: 10.1093/ptj/pzac116. PMID: 36036838.

Shen J, Golchi S, Moodie EEM, Benrimoh D. 2022. Bayesian group sequential designs for cluster‐randomized trials. Stat. 11(1):e487. doi: 10.1002/sta4.487.

van Schie S, Moerbeek M. 2014. Re-estimating sample size in cluster randomised trials with active recruitment within clusters. Stat Med. 33(19):3253-68. doi: 10.1002/sim.6172. PMID: 24719285.

Wittes J, Brittain E. 1990. The role of internal pilot studies in increasing the efficiency of clinical trials. Stat Med. 9(1-2):65-71. doi: 10.1002/sim.4780090113. PMID: 2345839.


Version History

Published October 14, 2024

May 16, 2024: Journal Peer Reviewers Are Familiar With Pragmatic Trials, Want More on Implementation

According to a report from the NIH Pragmatic Trials Collaboratory Coordinating Center, journal editors and peer reviewers were familiar with pragmatic clinical trials and their designs and analytic approaches, but they often asked for more information about intervention implementation.

The report was published this week in the Living Textbook of Pragmatic Clinical Trials.

The report’s authors invited the principal investigators of the first several completed NIH Collaboratory Trials to confidentially share the journal peer reviews of manuscripts reporting the trials’ main outcomes. They independently reviewed the peer reviews of the manuscripts to note common questions and themes.

“We did not generally observe that reviewers were unfamiliar with pragmatic clinical trials or had difficulty understanding the design and analytic approaches of the studies,” the authors reported. Instead, they found that the reviewers in many cases requested more information about implementation outcomes, implementation strategies, and intervention content.

Although many of the NIH Collaboratory Trial teams have published separate implementation-focused papers, the report suggests that reviewers may want or expect some of this information to be included with the report of primary study outcomes to aid in the interpretation of results.

Read the full report.

Dissemination Approaches For Different Stakeholders

Section 3

Case Study: Journal Reviews of NIH Collaboratory Trials

Archived on November 26, 2025.

Pragmatic clinical trials test whether evidence-based interventions work in real-world settings (Simon et al 2020). Unlike traditional clinical trials, the study interventions in pragmatic trials are typically integrated into routine clinical processes and workflows and often rely on existing sources of electronic data collected at the point of care (Staman et al 2023). When large-scale pragmatic trials are embedded in healthcare systems, they have the potential to directly inform clinical practice, guidelines, and health policy decisions (Palazzo et al 2022).

The NIH Pragmatic Trials Collaboratory supports several large-scale, multicenter pragmatic clinical trials embedded in healthcare systems (see the NIH Collaboratory Trials). Several of these trials have been completed, and their primary results have been published in the peer-reviewed biomedical literature (Coronado et al 2018; DeBar et al 2022; Dember et al 2019; Huang et al 2019; Jarvik et al 2020; Melnick et al 2022; Mitchell et al 2020; Simon et al 2022; Vazquez et al 2024; Zatzick et al 2021).

In the early years of the NIH Pragmatic Trials Collaboratory, which began in 2012, there was a question about how journals' peer reviewers would react to manuscripts reporting on pragmatic trials, given that such designs were still relatively new. It was unknown whether reviewers would have sufficient familiarity with or understanding of innovative or complex pragmatic trial methods. Therefore, we sought to address this question by exploring reviews of manuscripts reporting the primary outcomes of NIH Collaboratory Trials.

We invited the principal investigators of the first 8 completed NIH Collaboratory Trials (Coronado et al 2018; DeBar et al 2022; Dember et al 2019; Huang et al 2019; Jarvik et al 2020; Mitchell et al 2020; Simon et al 2022; Zatzick et al 2021), after their primary results were published, to confidentially share with the NIH Collaboratory Coordinating Center the comments they received on their manuscripts during the peer review process. The manuscript reviews were received between 2018 and 2020, and the manuscripts were published between 2018 and 2022 (Table 1).

Table 1. Study Designs Described in the Reviewed Manuscriptsa

Journal                                         Study Designs Described in the Manuscripts
Annals of Internal Medicine                     2 parallel cluster randomized trials
JAMA                                            1 randomized controlled trial
                                                1 parallel cluster randomized trial
                                                2 stepped-wedge cluster randomized trials
JAMA Internal Medicine                          2 parallel cluster randomized trials
JAMA Surgery                                    1 stepped-wedge cluster randomized trial
Journal of the American Society of Nephrology   1 parallel cluster randomized trial
The Lancet                                      1 parallel cluster randomized trial
New England Journal of Medicine                 1 stepped-wedge cluster randomized trial

a Some of the 8 manuscripts were reviewed by more than 1 journal.

The 3 of us independently reviewed the peer review comments received for the first 3 manuscripts and met to discuss the themes that emerged. We then independently reviewed the peer review comments for all 8 of the manuscripts to note common questions and themes.

We did not generally observe that reviewers were unfamiliar with pragmatic clinical trials or had difficulty understanding the design and analytic approaches of the studies. Rather, the questions that most commonly arose in the manuscript reviews were related to the implementation and content of the study interventions. Specifically, reviewers asked for more information about (1) implementation outcomes, (2) implementation strategies, and (3) intervention content. See Table 2 for representative comments.

Table 2. Representative Comments by Journal Editors and Reviewersa

Include more information about implementation outcomes

  • “Is there additional information on patient adherence with the [intervention protocol]? …[It is] unclear the extent to which patients were able to adhere to [the intervention protocol], which could influence the results.”
  • “Major flaw is lack of implementation data—for this pragmatic trial, a few observations every quarter would not generate an estimation of adherence to the intervention…”
  • “[It is] important to distinguish between problems with efficacy and problems (merely) with adherence. If the [interventions] are not efficacious, we should abandon them (at least for this population and/or application), whereas if the [interventions] are efficacious but this effect was diluted by low adherence, then we should focus on methods to improve adherence. I believe the authors have the requisite data and trial design to perform a complier average treatment effect (CATE) analysis (an instrumental variable analysis in which randomization arm is the instrument).”

Include more information about implementation strategies

  • “Please give more detail about the steps taken to the implement the intervention. The intervention appears very top down. Was there any effort to engage and motivate the [staff] responsible for implementing the intervention?”
  • “In light of the difficulty in achieving adherence to the intervention, several questions arise with regard to the methods for implementing the intervention. For instance, some of the detail in the protocol about investigator efforts to ‘influence’ [clinic staff] and the guidance provided to the [clinics] should be pulled into the main manuscript to provide better clarity on what steps the investigators took to get [clinics] to implement the practice change.”
  • “It would be of clinical significance to further discuss approaches to improve uptake of the trial intervention. It appears that there was excellent collaboration/stakeholder engagement at the executive and research level of the [healthcare systems] but that engagement did not extend to the [clinic] level where the intervention was carried out. Were environmental assessments carried out before the trial? Was there a pilot feasibility test of the trial conducted? Were qualitative interviews conducted with key stakeholders including patients, [clinicians] and [clinic] managers? None of these were reported.”

Include more information about intervention content

  • “Please include more description in the paper about the content of the intervention, include [a table] about the elements of the intervention received, and more description about the content of the [training].… The paper does not include enough information to really understand the process.”
  • “The intervention is billed as ‘brief,’ but there is mention of [more intensive care] for various persons, so it is useful to know what the minimal intervention consists of in terms of contact time with patient, and up to what amount is considered maximum time with the patient. What is the minimum that everybody gets?”

a Portions of some comments are redacted for deidentification of the studies and the reviewing journals.

Implementation outcomes are the effects of the actions taken in a pragmatic trial to implement the practices or processes being studied. For example, the uptake (or adoption) of an intervention and the adherence (or fidelity) to an intervention are implementation outcomes. In several of the manuscript reviews, reviewers asked the authors to provide information about whether and how such outcomes were measured in the trial and, if they were measured, to report data on intervention implementation in the manuscript. The reviewers were interested in learning about the implementation outcomes themselves and about how outcomes like uptake and adherence may have been related to the primary study results.

Reviewers also asked questions about implementation strategies, or the steps the study teams took to implement their study interventions. For example, one reviewer requested more information about how the study team engaged with frontline staff in the participating clinics and what, if any, incentives were provided for referring patients to the intervention. Another asked whether the study team conducted pilot or feasibility studies of strategies for engaging healthcare system partners. In another case, a reviewer wanted to know more about the integration of the intervention into existing clinic workflows and about reported difficulties in rolling out the intervention. In asking about implementation strategies, reviewers were seeking to better understand how methods of engaging healthcare practitioners in implementing the interventions may have been related to implementation outcomes and the study results.

Some reviewers also wanted more information about intervention content. For example, in the case of a manuscript reporting the results of a complex intervention involving a team-based care model, a reviewer was interested in seeing a more detailed description of the intervention.

See also the Dissemination and Implementation chapter of the Living Textbook.

Embedding a pragmatic clinical trial in the existing workflow of a healthcare system presents complex challenges that can influence the uptake of and adherence to the intervention and the study team's ability to detect treatment effects (Staman et al 2023). In addition to publishing their primary study results, some NIH Collaboratory Trial teams have published separate implementation-focused papers that report implementation outcomes, barriers to implementation, and lessons learned. Our experience with journal peer reviews of the first several completed NIH Collaboratory Trials suggests that reviewers may want or expect some of this information to be included with the report of primary study outcomes to aid in the interpretation of results.

The manuscripts whose peer reviews we explored were a small sample published in journals with relatively high impact factors in their fields. The outcomes reported in these manuscripts were from pragmatic clinical trials that started between 2013 and 2016. It is unknown whether our observations are generalizable to the experiences of pragmatic trial investigators with other journals or those reporting the results of more recently initiated studies.

Full and transparent reporting of pragmatic clinical trials—including details about intervention implementation—is important for helping readers understand how the studies were conducted and place the results in context. The NIH Pragmatic Trials Collaboratory has developed a reporting template for pragmatic trials. Reporting guidelines, such as the pragmatic trials extension of the CONSORT statement (Zwarenstein et al 2008), are available to help investigators improve their reporting of pragmatic trials and to assist peer reviewers and journal editors in understanding these studies and their importance.


Resources


NIH Collaboratory Trials Publication Types Handout
A reference to help pragmatic trial teams understand potential opportunities for publication


Data and Resource Sharing
Data and resources shared by the NIH Collaboratory Trials, including protocols, consent documents, public use datasets, computable phenotypes, analytic code, and more

REFERENCES


Coronado GD, Petrik AF, Vollmer WM, et al. 2018. Effectiveness of a mailed colorectal cancer screening outreach program in community health clinics: the STOP CRC cluster randomized clinical trial. JAMA Intern Med. 178(9):1174-1181. doi: 10.1001/jamainternmed.2018.3629. PMID: 30083752.

DeBar L, Mayhew M, Benes L, et al. 2022. A primary care-based cognitive behavioral therapy intervention for long-term opioid users with chronic pain: a randomized pragmatic trial. Ann Intern Med. 175(1):46-55. doi: 10.7326/M21-1436. PMID: 34724405.

Dember LM, Lacson E Jr, Brunelli SM, et al. 2019. The TiME trial: a fully embedded, cluster-randomized, pragmatic trial of hemodialysis session duration. J Am Soc Nephrol. 30(5):890-903. doi: 10.1681/ASN.2018090945. PMID: 31000566.

Huang SS, Septimus E, Kleinman K, et al. 2019. Chlorhexidine versus routine bathing to prevent multidrug-resistant organisms and all-cause bloodstream infections in general medical and surgical units (ABATE Infection trial): a cluster-randomised trial. Lancet. 393(10177):1205-1215. doi: 10.1016/S0140-6736(18)32593-5. PMID: 30850112.

Jarvik JG, Meier EN, James KT, et al. 2020. The effect of including benchmark prevalence data of common imaging findings in spine image reports on health care utilization among adults undergoing spine imaging: a stepped-wedge randomized clinical trial. JAMA Netw Open. 3(9):e2015713. doi: 10.1001/jamanetworkopen.2020.15713. PMID: 32886121.

Melnick ER, Nath B, Dziura JD, et al. 2022. User centered clinical decision support to implement initiation of buprenorphine for opioid use disorder in the emergency department: EMBED pragmatic cluster randomized controlled trial. BMJ. 377:e069271. doi: 10.1136/bmj-2021-069271. PMID: 35760423.

Mitchell SL, Volandes AE, Gutman R, et al. 2020. Advance care planning video intervention among long-stay nursing home residents: a pragmatic cluster randomized clinical trial. JAMA Intern Med. 180(8):1070-1078. doi: 10.1001/jamainternmed.2020.2366. PMID: 32628258.

Palazzo L, Tuzzio L, Simon GE, Larson EB. 2022. A value proposition for pragmatic clinical trials. Am J Manag Care. 28(9):e312-e314. doi: 10.37765/ajmc.2022.89224. PMID: 36121362.

Simon GE, Platt R, Hernandez AF. 2020. Evidence from pragmatic trials during routine care - slouching toward a learning health system. N Engl J Med. 382(16):1488-1491. doi: 10.1056/NEJMp1915448. PMID: 32294344.

Simon GE, Shortreed SM, Rossom RC, et al. 2022. Effect of offering care management or online dialectical behavior therapy skills training vs usual care on self-harm among adult outpatients with suicidal ideation: a randomized clinical trial. JAMA. 327(7):630-638. doi: 10.1001/jama.2022.0423. PMID: 35166800.

Staman KL, Check DK, Zatzick D, et al. 2023. Intervention delivery for embedded pragmatic clinical trials: Development of a tool to measure complexity. Contemp Clin Trials. 126:107105. doi: 10.1016/j.cct.2023.107105. PMID: 36708968.

Vazquez MA, Oliver G, Amarasingham R, et al. 2024. Pragmatic trial of hospitalization rate in chronic kidney disease. N Engl J Med. 390(13):1196-1206. doi: 10.1056/NEJMoa2311708. PMID: 38598574.

Zatzick D, Jurkovich G, Heagerty P, et al. 2021. Stepped collaborative care targeting posttraumatic stress disorder symptoms and comorbidity for US trauma care systems: a randomized clinical trial. JAMA Surg. 156(5):430-474. doi: 10.1001/jamasurg.2021.0131. PMID: 33688908.

Zwarenstein M, Treweek S, Gagnier JJ, et al. 2008. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 337:a2390. doi: 10.1136/bmj.a2390. PMID: 19001484.


Version History

Published May 15, 2024