
Analysis Plan


Section 1

Introduction

Pragmatic clinical trial designs, particularly those that use cluster randomization or other novel methods, pose challenges during the study design phase (Cook et al. 2016). Here, we briefly examine current challenges in the design of pragmatic trials, as well as potential solutions and future directions for further exploration.
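As one concrete example of a cluster randomization design challenge, outcomes of patients in the same cluster tend to be correlated, which inflates the required sample size by the design effect, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. A minimal sketch with illustrative numbers (not drawn from any particular trial):

```python
# Design effect for a cluster randomized trial with equal cluster sizes:
# DEFF = 1 + (m - 1) * ICC. The numbers used below are illustrative only.

def design_effect(cluster_size, icc):
    """Variance inflation factor relative to individual randomization."""
    return 1 + (cluster_size - 1) * icc

def inflated_n(n_individual, cluster_size, icc):
    """Total N needed under cluster randomization for the same power."""
    return n_individual * design_effect(cluster_size, icc)

# Even a small ICC matters when clusters are large:
print(round(design_effect(cluster_size=50, icc=0.02), 3))    # 1.98
print(round(inflated_n(400, cluster_size=50, icc=0.02), 1))  # 792.0
```

Under these illustrative assumptions, a question answerable with 400 individually randomized participants would need roughly 792 participants when 50-person clusters are randomized; design issues of this kind are among those discussed by Cook et al. (2016).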


REFERENCES


Cook AJ, Delong E, Murray DM, Vollmer WM, Heagerty PJ. 2016. Statistical lessons learned for designing cluster randomized pragmatic clinical trials from the NIH Health Care Systems Collaboratory Biostatistics and Design Core. Clin Trials. 13:504-512. doi:10.1177/1740774516646578. PMID: 27179253.


Version History

April 30, 2024: Made nonsubstantive changes to the text as part of the annual content update (changes made D. Seils).

June 23, 2022: Updated the name of the NIH Collaboratory in the contributors list as part of the annual content update (changes made by D. Seils).

May 27, 2020: Added Heagerty to the contributors list and reordered the sections of this chapter as part of the annual content update (changes made by D. Seils).

May 1, 2020: Made nonsubstantive formatting changes to the References section as part of the annual content update (changes made by D. Seils).

January 16, 2019: Made nonsubstantive changes to the text as part of the annual content update (changes made by D. Seils).

Published August 25, 2017


Choosing and Specifying Endpoints and Outcomes


Section 1

Introduction

For an explanatory trial, investigators can specify any outcome or endpoint, define the endpoint, and then measure it. The term outcome usually refers to the measured variable (e.g., peak volume of oxygen or PROMIS Fatigue score), whereas an endpoint refers to the analyzed parameter (e.g., change from baseline at 6 weeks in mean PROMIS Fatigue score). Even after a specific outcome is selected, it may be challenging to determine the best way to measure the effect of an intervention in terms of an analyzable endpoint, especially with pragmatic research where data are collected as part of routine care.

With pragmatic research, the endpoints and outcomes need to be available as part of routine care. Although the research question regarding the relative risks, benefits, and burdens of a specific intervention or activity will drive the selection of endpoints and outcomes, in a PCT, the selection must be balanced with an understanding of what is available in the electronic health record (EHR) or claims data and what additional resources will be needed to capture information not found in these sources.

Watch the video module: What Do Endpoints and Outcomes Look Like in Pragmatic Clinical Trials?

Some conditions can be objectively defined with a lab test, are straightforward to diagnose, and/or have International Classification of Diseases (ICD) codes. Some conditions, such as a broken leg, almost certainly require medical intervention and are likely to be captured in an EHR. Other conditions, however, are more ambiguous or less severe, and patients might not go to providers for treatment. Events or conditions that are not medically attended are unlikely to be captured in an EHR.

Defining endpoints and outcomes is relatively easy for some health phenomena, such as

  • acute myocardial infarction
  • broken bone
  • hospitalization

However, many outcomes are not routinely recorded as part of healthcare delivery. For example:

  • suicide attempt
  • gout flare
  • silent myocardial infarction
  • early miscarriage

To detect these types of outcomes in pragmatic research, some additional work may be necessary, which might move the outcome ascertainment aspect in a less pragmatic direction.
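The EHR-based ascertainment described above can be sketched in code. This is a hypothetical, simplified illustration: the record layout and patient IDs are invented for the example, and a real computable phenotype would be broader and validated against chart review, although ICD-10-CM codes in the I21 family do denote acute myocardial infarction.

```python
# Hypothetical sketch of EHR-based outcome ascertainment: flag patients
# whose encounter records carry a qualifying diagnosis code. Field names
# and records are illustrative, not a real EHR schema.

ACUTE_MI_PREFIX = "I21"  # ICD-10-CM family for acute myocardial infarction

def patients_with_outcome(encounters, prefix=ACUTE_MI_PREFIX):
    """Return the set of patient IDs with at least one qualifying diagnosis code."""
    return {
        enc["patient_id"]
        for enc in encounters
        if enc["icd10"].startswith(prefix)
    }

encounters = [
    {"patient_id": "P001", "icd10": "I21.09"},  # acute MI: captured
    {"patient_id": "P002", "icd10": "M10.9"},   # gout; a flare at home may never be coded
    {"patient_id": "P003", "icd10": "I21.4"},   # non-ST elevation MI: captured
]
print(sorted(patients_with_outcome(encounters)))  # ['P001', 'P003']
```

Note what this sketch cannot do: a silent myocardial infarction or an early miscarriage never generates such an encounter code, which is exactly the ascertainment gap for non-medically-attended outcomes described above.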
Key questions include:

  • What challenges do you anticipate in trying to ascertain the endpoint?
  • How might you address the challenges?

In this chapter, we will discuss endpoints and outcomes in pragmatic trials.

  • Meaningful endpoints
  • Outcomes measured via the electronic health record
  • Inpatient endpoints
  • Using death as an endpoint
  • Outcomes measured using digital health technology
  • Outcomes measured via direct patient report



Version History

March 11, 2026: Updated as part of annual review (changes made by K. Staman).

February 22, 2024: Updated video module (changes made by K. Staman).

September 30, 2022: Made minor nonsubstantive text edits (changes made by K. Staman and L. Stewart).

January 22, 2021: Added embedded video (change made by G. Uhlenbrauck).

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

December 4, 2018: Added key questions (changes made by K. Staman).

Published August 25, 2017


Experimental Designs and Randomization Schemes


Section 1

Introduction – ARCHIVED

Pragmatic clinical trials (PCTs) differ from "traditional" randomized controlled trials (RCTs) largely in terms of their overall purpose. Traditional RCTs can be thought of as explanatory or mechanistic experiments that attempt to minimize potential confounding and ensure a high degree of internal validity. The goal is to determine whether the experimental condition is efficacious when it is the only factor that differs between the treatment and control groups.

PCTs, on the other hand, are often conducted to evaluate whether a therapy or other intervention is effective in the “real-world” conditions of its proposed use; their ultimate goal is "to improve practice and policy." Califf and Sugarman (2015) propose the following “common sense” definition for a PCT when conducted in a healthcare context:

[A trial that is] designed for the primary purpose of informing decision-makers regarding the comparative balance of benefits, burdens and risks of a biomedical or behavioral health intervention at the individual or population level.

Each kind of trial has strengths and weaknesses. Strictly controlled explanatory RCTs seek to maximize internal validity but may not be generalizable outside controlled settings, because the conditions of study (including the study population) are not representative of more typical patient-treatment settings. They also often require substantial expense, supportive infrastructure, and oversight. PCTs, on the other hand, are more likely to maximize external validity and generalizability and may cost less to perform than traditional RCTs, especially when they can capitalize on existing data sources such as the electronic health record (EHR). However, less restrictive criteria, a lesser degree of fidelity to the intervention, and less monitoring of compliance result in data that are "messier" or less complete and may also be more subject to changes in healthcare delivery mechanisms. These considerations may need to be addressed through specialized analytical approaches and possibly larger sample sizes.

Importantly, very few trials are entirely explanatory or entirely pragmatic; rather, trials exist on a continuum, with any given trial containing elements of both in different proportions. The PRECIS-2 system provides one way of thinking about and visualizing the explanatory and pragmatic aspects of clinical trial design.

Additional detailed information and checklists for assessing pragmatic trial designs for feasibility can be found in the Assessing Feasibility section of this resource.

Can Traditional RCTs Be Pragmatic?

Many traditional randomized trials are optimized for their explanatory power. That is, they are designed to detect differences in the effect of an intervention on particular prespecified parameters. The populations enrolled in such trials are typically highly selected in order to exclude conditions that could affect the study endpoints and obscure measurements of treatment differences. As noted in the previous section, while RCTs often have a high degree of internal validity, they may be less generalizable to unselected patient populations where many individuals will have the kinds of comorbid conditions that often result in being excluded from participation in RCTs. For example, it is relatively common for patients with cardiovascular conditions to also suffer from chronic kidney disease, but the latter condition often constitutes an exclusion criterion for many cardiovascular RCTs.

Large Simple Trials

Because "traditional" RCTs may lack generalizability, are designed to maximize explanatory power, and often require extensive and expensive infrastructure to ensure compliance with complex study protocols, they generally do not conform to the "pragmatic" paradigm. However, although the "pragmatic" terminology is relatively recent, the basic ideas have been implemented for some time in community- and school-based studies and also under the rubric of the "large simple trial" (LST). LSTs typically incorporate extremely simplified protocols that focus on collecting only data that are immediately relevant to the prespecified endpoints; they also typically feature relatively nonrestrictive eligibility criteria.

An Older Large Simple Trial and a Contemporary Large Pragmatic Clinical Trial

  • ISIS-2: This 1988 cardiovascular trial randomized more than 17,000 research participants who were admitted to a hospital within 24 hours of experiencing symptoms of acute myocardial infarction to receive treatment with streptokinase, aspirin, both, or neither. Although ISIS-2 was in some respects a “traditional” RCT, it had pragmatic features including minimal eligibility criteria, streamlined data collection, and a “real-world” research setting (ISIS-2 Collaborative Group 1988).
  • ADAPTABLE: This ongoing study is randomizing 20,000 participants who are at elevated risk for heart disease to receive either lower- or higher-dose aspirin in an attempt to ascertain which of these two commonly used doses is better for preventing heart attack and stroke. ADAPTABLE has numerous pragmatic features, including a large sample size drawn from “real-world” populations, data collection centered on using information gathered directly from patients’ electronic health records, and a study endpoint aimed at answering a question that is directly relevant to current clinical practice (ADAPTABLE 2017).


Resources

Introduction to Pragmatic Clinical Trials
Presentation from the Health Care Systems Interactions Core

Pragmatic Trials: A Workshop Handbook
This e-book from the Colorado Research and Implementation Science Program provides a primer on the design, conduct, and evaluation of PCTs.

Research Methods Resources Website on Group- or Cluster-Randomized Studies
The NIH Office of Extramural Research website provides resources for investigators considering cluster randomized designs, including links to NIH webinars, key references, and statements to help investigators prepare sound applications and avoid methodological pitfalls.

Large Simple Trials
Findings and products from the Clinical Trials Transformation Initiative’s Large Simple Trials Project

REFERENCES


ADAPTABLE – the Aspirin Study – A Patient-Centered Study. National Patient-Centered Clinical Research Network. http://theaspirinstudy.org/. Accessed July 26, 2017.

Califf RM, Sugarman J. 2015. Exploring the ethical and regulatory issues in pragmatic clinical trials. Clin Trials. 12:436-441. doi:10.1177/1740774515598334. PMID: 26374676.

ISIS-2 (Second International Study of Infarct Survival) Collaborative Group. 1988. Randomised trial of intravenous streptokinase, oral aspirin, both, or neither among 17,187 cases of suspected acute myocardial infarction: ISIS-2. Lancet. 2:349-360. PMID: 2899772.


Version History

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

May 27, 2020: Added Heagerty to the contributors list and reordered the sections of this chapter as part of the annual content update (changes made by D. Seils).

May 1, 2020: Made nonsubstantive changes to the Resources sidebar and the References section as part of the annual content update (changes made by D. Seils).

January 15, 2019: Updated text describing differences between RCTs and PCTs and the history of pragmatic approaches in other settings as part of annual content update (changes made by D. Seils).

Published August 25, 2017


ARCHIVED PAGE

Archived on November 26, 2025.

Dissemination Approaches For Different Stakeholders


Section 1

Introduction

In academia, researchers traditionally report their clinical study results in scientific journals. The results of pragmatic clinical research, however, are often used to inform clinical and health care system decision makers, frame clinical guidelines, inform policy, and, ultimately, may be implemented into routine care or clinical operations. A paradigm shift has begun toward using different dissemination strategies, motivated by the estimate that it takes an average of 17 years for only a small fraction (~14%) of original research to benefit patient care (Balas and Boren 2000), while the majority of research findings never influence care improvement. The typical pathway to publication may not be an adequate means of disseminating information to decision makers.

The mode and medium for disseminating the findings for a PCT should be tailored to the specific audience. The Patient-Centered Outcomes Research Institute (PCORI) recommends that patients and other stakeholders be involved in plans to disseminate study findings by, for example, identifying various audiences for dissemination, shaping the study design with final products in mind, and developing creative approaches to get information into the hands of those who need it.

Having stakeholders at the table from the start can be key to driving practice change when the results support it, or to redesigning or adjusting the intervention when results are not as expected. There is a wide range of stakeholder groups relevant to PCTs, and these are described in detail in the Engaging Stakeholders chapter. This chapter explores dissemination strategies for different stakeholders, beginning with considerations for transparent reporting to the scientific community and then moving to approaches for disseminating to patients and health systems leaders.


Resources

Data and Resource Sharing Page

As part of the Collaboratory's commitment to sharing, all NIH Collaboratory Trials are expected to share data and resources, such as protocols, consent documents, public use datasets, computable phenotypes, and analytic code.

Learn more in these Living Textbook chapters:

Designing With Implementation and Dissemination in Mind

Dissemination and Implementation

Building Partnerships to Ensure a Successful Trial

REFERENCES


Balas EA, Boren SA. 2000. Managing clinical knowledge for health care improvement. Yearb Med Inform. 65–70. PMID: 27699347.


Version History

June 12, 2020: Added link to Data and Resources Sharing page to the Resources section (changes made by K. Staman).

February 11, 2020: Added link to Building Partnerships to Ensure a Successful Trial (changes made by K. Staman).

Published August 25, 2017


ARCHIVED PAGE

Archived on August 7, 2025.

Dissemination and Implementation


Section 3


Let It, Help It, Make It Happen – ARCHIVED

Within healthcare systems, an intervention might be adopted or implemented through a number of different mechanisms. Passive diffusion, or "let it happen," is an approach wherein new practices spread in an untargeted way. To "help it happen," investigators can work with health system leaders and other stakeholders to enable the intervention. Or, in conjunction with stakeholders, investigators can "make it happen" (Greenhalgh et al. 2004).

 

(Figure modified from Greenhalgh et al. 2004)

No matter how pragmatic the trial is, to “make it happen” the actual intervention needs to be adopted, providers need to be trained to deliver it, and they need to consistently choose to deliver it. Not only that, some consideration should be given to ensuring that patients who would benefit from the intervention are able to receive it and that the leaders of the healthcare systems are willing to champion the use of the intervention in their system.

The NIH Collaboratory Trials

Pragmatic Trial Design Strategies That Facilitate “Make It Happen” Research to Health Care System Practice Change

The NIH Collaboratory trials use several strategies during the conduct of the trial to prepare the intervention and health systems for future implementation (if the intervention is shown to be effective). Thinking about implementation early in the process of development will help with the eventual implementation of the intervention.

Key considerations for potential future implementation of the trials were:

  • Who is going to deliver the intervention?
  • How does the intervention fit with the ultimate patient population for whom it is intended? (And what are the differences between that population and the population in the trial?)
  • To what degree can we build in tests of provider training, support, adherence, and mediators and moderators of high-quality delivery?
  • Will implementation occur in only the trial sites or in sites across the region or country?

With the Collaboratory trials, the leadership of the health systems where the intervention will be implemented has recognized the demand for the intervention to solve a particular problem. This recognition will always be paramount and should be considered during the design phase of the trial. If the intervention is shown to be effective, several methods are planned to “make it happen.”

We will discuss case examples in the remaining sections of this chapter. Some of the trials described are in progress. As the implementation phase for each study evolves, there will likely be other lessons to share.


REFERENCES


Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. 2004. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 82:581–629. doi:10.1111/j.0887-378X.2004.00325.x. PMID: 15595944.


Version History

December 5, 2018: Minor edits as part of the annual review process (changes made by K. Staman).

Published August 25, 2017


Data Sharing and Embedded Research


Section 1

Introduction

The contributors to this chapter initially wrote an opinion piece for Annals of Internal Medicine (Simon et al. 2017) on data sharing. In this chapter, we expand on the ideas presented there and frame them using lessons learned from the Collaboratory.

Video originally published in Annals of Internal Medicine (Simon et al. 2017) and used with permission.

Emerging policies and procedures for sharing analyzable research datasets hold great promise for increasing transparency and reproducibility in medical research. Expanded use of the data can increase the knowledge base through secondary analyses, decrease selective reporting, and lead to improvements in clinical care (Institute of Medicine 2015; Krumholz et al. 2016; Warren 2016; NIH Data Sharing Policy 2015). Enabling the responsible sharing of data is a global priority, and a number of solutions have been proposed, including by the Institute of Medicine (2015), the International Committee of Medical Journal Editors (ICMJE; Taichman et al. 2016), and the National Institutes of Health. In Europe, access to data from industry-sponsored trials has increased markedly, and there are encouraging programs in the U.S., such as the Yale University Open Data Access (YODA) partnership with Medtronic (Krumholz et al. 2013), the Academic Research Organization Consortium for Continuing Evaluation of Scientific Studies — Cardiovascular (ACCESS CV 2016), the Supporting Open Access to Researchers (SOAR) Initiative (Pencina et al. 2016), and the OptumLabs healthcare industry collaborative research and innovation center.

While we enthusiastically support data sharing, the conceptual framework for it is rooted in individually randomized controlled trials (RCTs) with participants’ explicit informed consent, which can include authorization for data sharing. Pragmatic research embedded in health systems is different from conventional trials: it often involves a waiver of patient consent, uses data from the electronic health record (EHR), and often includes information that could identify patients, health care providers, and healthcare facilities or organizations. In some cases, the primary data for enrolled patients include every encounter, medication, and procedure. As we describe in the Annals article, even if study data would not allow identification of individual participants, the potential for disclosure of sensitive information regarding providers or health systems may still be substantial. These data have the capacity to do harm if they are taken out of context, used inappropriately or for comparative purposes, or used to single out an individual, provider, or institution. Healthcare systems voluntarily participate in embedded research and have raised these concerns about releasing information from electronic health records. Their specific concern is that health systems or facilities volunteering to participate in research might be penalized by release of detailed operational information that others are not required to make public. Participation in public-domain research is distinct from health systems’ participation in public quality reporting programs, where measures are standardized and public comparison of providers or facilities is either required or a clear expectation of multiple organizations.

Because of the unique concerns of clinicians and healthcare systems participating in embedded research, a requirement to share data using mechanisms designed for conventional, individually randomized trials will be challenging and might dissuade some healthcare systems from participating, thereby reducing opportunities to answer important scientific and healthcare questions using data acquired from clinical health care delivery.

Embedding clinical trials in healthcare systems as part of the delivery of care could improve the speed and quality of research while reducing its cost, but it is not currently necessary (or required) for the systems to participate, and participation often imposes opportunity costs that can distract from operational priorities. Although the healthcare systems sometimes derive direct benefit from participation in research (e.g., when there is congruence with prioritized quality improvement efforts), their principal motive is typically an altruistic one—to contribute to the knowledge base about the relative benefits, risks, and burdens of treatments. In this respect, health system participants in pragmatic research are similar to individual participants in conventional clinical trials. For individuals who participate in clinical research, researchers offer guarantees through the informed consent process that sensitive information will not be misused and ensure that individual protected health information is not exposed through trial activities or data sharing. In the same way, pragmatic research needs to consider the specific confidentiality concerns of participating health care providers or systems and to identify appropriate processes and technical structures for data sharing. We use experiences from the NIH Collaboratory, which supports embedded clinical trials that address major national priorities, to explore the most important concerns and potential solutions in this chapter.


REFERENCES


Institute of Medicine. 2015. Sharing Clinical Trial Data: Maximizing Benefits, Minimizing Risk. Washington, D.C: National Academies Press. https://doi.org/10.17226/18998.

Krumholz HM, Ross JS, Gross CP, et al. 2013. A historic moment for open science: the Yale University Open Data Access project and Medtronic. Ann Intern Med. 158:910–911. doi:10.7326/0003-4819-158-12-201306180-00009. PMID:23778908.

Krumholz HM, Terry SF, Waldstreicher J. 2016. Data acquisition, curation, and use for a continuously learning health system. JAMA. 316:1669–1670. doi:10.1001/jama.2016.12537. PMID:27668668.

Pencina MJ, Louzao DM, McCourt BJ, et al. 2016. Supporting open access to clinical trial data for researchers: The Duke Clinical Research Institute–Bristol-Myers Squibb Supporting Open Access to Researchers Initiative. Am Heart J. 172:64–69. doi:10.1016/j.ahj.2015.11.002. PMID:26856217.

Simon GE, Coronado G, DeBar LL, et al. 2017. Data Sharing and Embedded Research. Ann Intern Med. doi:10.7326/M17-0863. PMID:28973353.

Taichman DB, Backus J, Baethge C, et al. 2016. Sharing clinical trial data: a proposal from the International Committee of Medical Journal Editors. Ann Intern Med. 164:505. doi:10.7326/M15-2928. PMID:26792258.

The Academic Research Organization Consortium for Continuing Evaluation of Scientific Studies — Cardiovascular (ACCESS CV). 2016. Sharing data from cardiovascular clinical trials — a proposal. New Engl J Med. 375:407–409. doi:10.1056/NEJMp1605260. PMID:27518659.

Warren E. 2016. Strengthening research through data sharing. New Engl J Med. 375:401–403. doi:10.1056/NEJMp1607282. PMID:27518656.


Version History

February 25, 2025: Updated hyperlinks (change made by G. Uhlenbrauck).

March 22, 2023: Updated hyperlinks (change made by G. Uhlenbrauck).

Published August 25, 2017


Participant Recruitment


Section 1

Introduction

As noted elsewhere in this textbook, recruitment targets of an embedded PCT (ePCT) may be individual patients or clusters, such as groups of patients, healthcare providers, community clinics, units in a hospital, and/or the healthcare system itself. Examples of targeted individuals could be patients living with heart disease or hypertension; adults in need of colorectal cancer screening; patients undergoing diagnostic spine imaging; nursing home residents involved in advance care planning; or patients and their physicians managing multiple chronic conditions. Examples of targeted clusters could be small medical practices in underserved communities testing disease-screening approaches; hospital units evaluating strategies to reduce infections; or healthcare systems studying how to increase guideline-concordant practices.

The plan for recruiting trial participants is integral to the design of the ePCT. The intervention, if effective, will be implemented with typical participants in routine clinical care settings. To maximize generalizability of the trial’s results, the recruitment eligibility criteria will tend to be as wide as possible, with little selection beyond the clinical indication of interest (Loudon et al. 2015).

In the design phase, it is recommended that study teams evaluate several characteristics and capabilities of their partner healthcare system that could contribute to, or have an impact on, their trial’s recruitment; for example:

  • Presence of an electronic health record (EHR) system, its maturity in the healthcare system, extent of integration of its data systems, and research infrastructure to support using the EHR for the trial
  • Presence of decision support tools
  • Use of patient-reported outcome measures and extent those measures are digitized
  • Presence of comprehensive disease registries
  • For systems that have previously participated in clinical research, the strategies typically used to recruit participants from the populations routinely served in the healthcare setting
    • Specific facilitators or barriers to recruitment
    • How consent and opt out have been conducted in other studies in that setting

DISCLAIMER: The views expressed in this chapter are those of the contributors and do not necessarily represent the views of the National Heart, Lung, and Blood Institute; the National Institutes of Health; or the U.S. Department of Health and Human Services.


REFERENCES


Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. 2015. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 350:h2147. doi:10.1136/bmj.h2147. PMID: 25956159.


Version History

December 18, 2018: Made nonsubstantive edits as part of annual content update (changes made by L. Wing).

Published August 25, 2017