Choosing and Specifying Endpoints and Outcomes

Section 2: Meaningful Endpoints

A distinguishing feature of PCTs is a focus on outcomes that are "directly relevant to participants, funders, communities, and healthcare practitioners" (Califf and Sugarman 2015).

  • Who will use the information the most?
  • Does the outcome matter to them?

For example, researchers and regulators might be interested in measuring how an intervention reduces the risk of cardiovascular death. A patient, however, may care little that the risk of cardiovascular death falls if the overall risk of death is unchanged. Complicating matters, health system leaders might care about broader endpoints (such as a reduced infection rate) because these endpoints affect the health system as a whole.

According to the PRECIS-2 (Pragmatic Explanatory Continuum Indicator Summary 2) criteria, "as the primary outcome becomes less recognizably important to patients, or is assessed on criteria seldom used in usual care, the trial becomes more explanatory [and less pragmatic]" (Loudon et al. 2015). For an intervention designed to reduce falls, the authors of the PRECIS-2 criteria give the number of falls among elderly people in the community as an example of a pragmatic outcome, in contrast with surrogate endpoints such as bone density, muscle strength, and functional ability. (Learn more about the PRECIS-2 tool.)

Qualities that make outcomes less pragmatic

  • Choosing a surrogate outcome or physiological outcome that is mainly important to providers (such as a blood test)
  • Using a composite outcome that is less important to patients
  • Choosing tests not normally used in usual care or outcomes that require central adjudication
  • Measuring a shorter-term outcome of an intervention for a condition in which patients are more concerned about longer-term outcomes

Note that use of a surrogate endpoint, while technically less pragmatic, may still be the best way to measure the effects of an intervention or activity in some cases. Surrogate endpoints can be used, for example, to describe outcomes in patients with ambiguous conditions or conditions for which there are multiple diagnoses, although care must be taken to ensure that the endpoint is specific and sensitive enough to be valid. A range of ICD codes can be used for diabetes, for instance: collectively the code set can identify most cases, but any individual code, used alone, will be limited.
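The trade-off between a broad code set and any single code can be illustrated with a short sketch. The code prefixes below are examples only, not a validated phenotype definition:

```python
# Illustrative sketch: identifying diabetes cases from a set of ICD-10
# code prefixes. The prefixes below are examples only, not a validated
# phenotype definition.
DIABETES_PREFIXES = ("E10", "E11", "E13")  # hypothetical diabetes code set

def is_diabetes(code: str) -> bool:
    """Return True if an ICD-10 code falls within the illustrative set."""
    return code.upper().startswith(DIABETES_PREFIXES)

patient_codes = ["E11.9", "I10", "E10.21"]  # type 2 DM, hypertension, type 1 DM
cases = [c for c in patient_codes if is_diabetes(c)]
# Relying on the single code "E11.9" would miss the type 1 record;
# the broader prefix set captures both diabetes codes.
```

In practice, each candidate code set would need validation against chart review or another gold standard before its sensitivity and specificity could be trusted.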

For more on stakeholders who can provide input on which endpoints and outcomes matter most, see the Living Textbook chapter Building Partnerships to Ensure a Successful Trial. For more on determining whether an endpoint is valid, see the Living Textbook chapter Assessing Fitness for Use of Real-world Data Sources.

REFERENCES

Califf RM, Sugarman J. 2015. Exploring the ethical and regulatory issues in pragmatic clinical trials. Clin Trials. 12:436-441. doi:10.1177/1740774515598334. PMID: 26374676.

Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. 2015. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 350:h2147. doi:10.1136/bmj.h2147. PMID: 25956159.


Version History

March 11, 2026: Updated as part of annual review (changes made by K. Staman).

September 30, 2022: Made minor nonsubstantive text edits (changes made by K. Staman and L. Stewart).

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

February 11, 2020: Added links to Building Partnerships to Ensure a Successful Trial (changes made by K. Staman).

December 4, 2018: Added key questions (changes made by K. Staman).

Published August 25, 2017

Data Sharing and Embedded Research

Section 8: Moving Forward

We fully support open access to clinical trial data. At the same time, it is important to avoid the unintended consequence of creating barriers to delivery systems’ and providers’ participation in embedded research. Sharing data that support embedded research will require different solutions from those developed for conventional randomized trials. A central issue is who decides which uses are allowable. To motivate organizations to opt in to embedded research, a useful framework would delineate what can be shared automatically (questions that refine or deepen the original research question, such as subsets or secondary outcomes) from what must be considered by a panel of stakeholders (repurposing the data for a new hypothesis). Uses of the data that require provider attributes would require agreement from the organizations that originally agreed to participate in the research. Such a framework could ensure that the data are used to support the public good without jeopardizing the individuals or organizations whose data are at risk of inappropriate use.

For certain secondary analyses, making limited datasets available through public or private archives may be worthwhile. However, analyses of distributed datasets that remain in the possession of clinical organizations will likely require these organizations to execute analyses for investigators. It will be necessary to ensure appropriate technical infrastructure to support such work. This infrastructure and the personnel to support it will incur substantial costs.

In our article in Annals of Internal Medicine, we encourage researchers planning and leading embedded research to consider the same questions asked of the NIH Collaboratory Trials:

  • “What data could be shared by the least restrictive mechanism, a public archive open to any interested user?
  • What additional data could be shared using a more restrictive mechanism (private archive, public or private data enclave)?
  • Would the scientific or public health benefit of sharing additional data justify the additional effort to establish a more restrictive data sharing mechanism? (Simon et al. 2017)"

Data sharing policies that support important embedded research can be developed through a concerted effort of all stakeholders, balancing the potential for harm against the ethical imperative to be transparent and share data. The most useful data sharing plans will be the least restrictive ones that still provide appropriate protection for participant privacy, health system privacy, and scientific integrity.

 

REFERENCES

Simon GE, Coronado G, DeBar LL, et al. 2017. Data sharing and embedded research. Ann Intern Med. doi:10.7326/M17-0863. PMID: 28973353.


Version History

Published August 25, 2017

Data Sharing and Embedded Research

Section 3: Data Sharing Solutions for Embedded Research

The last few years have seen real progress in increasing openness (Ebrahim et al. 2014), and several methods, with varying degrees of restriction, transparency, and cost, have been deployed (see table below), ranging from public release of datasets to private data enclaves with distributed research networks. These methods afford different levels of protection for health systems but also require different levels of support for implementation. We discuss these solutions below using the NIH Collaboratory Trials as examples.

 

Technical Structures for Data Sharing, From Least Restrictive (and Least Expensive) to Most Restrictive (and Most Expensive)

Public archive
Description:
  • Analyzable data can be obtained by any user for any use
  • No restriction on the kinds of research questions new users can address
Additional elements:
  • May impose restrictions like prohibitions against re-identification or access to small cell counts
  • May de-identify certain elements, such as study site or demographics, or present sensitive data as an aggregate summary variable
Resource needs:
  • Initial development and annotation
  • Maintenance and access costs
Example: Agency for Healthcare Research and Quality (AHRQ) Healthcare Cost and Utilization Project (HCUP)

Private archive
Description:
  • Analyzable data can be obtained by authorized users
  • Honest broker or the original owner of the data decides which uses to authorize
Additional elements:
  • Requires binding agreement by recipient regarding protection and use of transferred data
  • As noted for public archive
Resource needs:
  • As noted for public archive
  • Evaluation of requests
  • Execution of data sharing, data use, data transfer, and other agreements, including agreements covering data with full identifiers
  • Monitoring of compliance with agreements, and response to breach of agreements
Examples: Yale University Open Data Access (YODA) Project; Centers for Medicare & Medicaid Services (CMS) Limited Data Sets; National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) Central Repository

Public enclave
Description:
  • Any user may query the data but not take possession of it; only aggregate results may be removed from the enclave
  • No restriction on the kinds of questions users can address
Additional elements:
  • May impose restrictions like prohibitions against re-identification, passing the data to other users, or access to small cell counts
  • May de-identify certain elements, such as study site or demographics
Resource needs:
  • Initial development and annotation
  • Ongoing curation and governance
  • Creation and maintenance of informatics support for analyses, including software licenses, computational capabilities, and file storage
  • Personnel needed to ensure data quality, etc.
Example: Centers for Medicare & Medicaid Services (CMS) Virtual Research Data Center (VRDC)

Private enclave
Description:
  • Similar to public enclave with regard to provisions for analyzing data without taking possession of it
  • Honest broker or the original owner of the data decides which uses to authorize
Additional elements:
  • Moderated by an honest broker or by representatives of the study and/or site (either queries or results)
Resource needs:
  • As noted for public enclave
  • Additional resources to evaluate requests and supervise the conduct of approved studies
Example: Food and Drug Administration (FDA) Sentinel Distributed Data Set

Public and Private Data Archives

With a data archive, data are annotated and de-identified as deemed necessary, then stored for later analysis by interested users. A publicly available archive is the least restrictive and least expensive option for sharing data, and a number of Collaboratory trials have used this method (see table below). In most cases, some modification to or restriction of the full analytic dataset was necessary to protect the privacy of health systems or providers. For example, the Suicide Prevention Outreach Trial (SPOT) was developed to compare suicide attempt rates in patients who receive one of two suicide prevention strategies versus usual care. The investigators did not plan to include study site (health system) in the publicly available dataset, given concerns by participating health systems that such data could be used for inappropriate comparisons of suicide attempt rates across health systems. A naïve analysis of these data could compare rates of suicide attempt across health systems without considering well-established variation by geographic region and race/ethnicity; in this context, a health system making extra efforts to engage higher-risk populations could paradoxically appear to have high suicide rates. To facilitate examination of variation in intervention effects across health systems, datasets including health system identifiers are available on request under a supervised data archive model, subject to specific agreements regarding use and re-disclosure. Because SPOT was randomized at the patient level, failure to account for study site in the released dataset may lead to misestimation of variance, but the data will still be of scientific and public health value.

As another example, the Collaboratory’s Pain Program for Active Coping and Training (PPACT) trial was developed to coordinate and integrate services for helping patients adopt self-management skills for chronic pain, limit use of opioid medications, and identify factors amenable to treatment in the primary care setting (DeBar et al. 2012). The study was conducted at Kaiser Permanente in the Northwest, Georgia, and Hawaii regions, thereby representing a diversity of patients and healthcare systems. Because the trial was conducted in three distinct regions with different racial and ethnic distributions, release of demographic information would readily identify the regions and potentially the participating PCPs. Because participating health plans were concerned that naïve analyses of region-specific data could be used to conduct inappropriate or invalid comparisons of pain treatment and outcomes across health systems, the data-sharing plan attempted to assure regional anonymity. Similarly, there were sensitivities about examination of individual clinicians' opioid prescribing patterns, so such data were included only in aggregated form. The PPACT investigators therefore created a public-release data archive that can be shared and that enables others to replicate, or at least closely replicate, the primary analysis. The public-release dataset was expected to include anonymous patient and cluster identifiers but no information on region or clinic facility.

The TiME and ICD-Pieces trials both used the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) Central Repository—a private archive—to share data. Use of this archive transfers the administrative, financial, and oversight responsibilities to NIDDK, substantially decreasing the burden on the investigators. The availability of more repositories for data sharing will help future investigators share data more effectively and efficiently.

Among national trauma care systems, there are incentives supporting data sharing in multisite pragmatic trials. For example, the TSOS trial has a private archive and shares data with researchers whose aim is to shape future policy or affect clinical care in trauma centers nationwide. Although the data are housed in an archive, the team will consider an enclave approach—analyzing the data themselves and returning results—depending on the research question and potential ethical obligations not to reveal or receive sensitive data. Perhaps most importantly, the TSOS team was incentivized to share data and publish with teams at other U.S. trauma sites as part of the larger study goal of disseminating knowledge that will further American College of Surgeons practice guidelines for PTSD and comorbidity screening and intervention (Zatzick et al. 2016).

Public and Private Data Enclaves

A data enclave allows investigators to perform analyses without taking possession of the data. A public enclave allows any user to conduct research on any topic; in a private enclave, an honest broker or the original owners of the data determine appropriate use, and private enclaves may establish their own rules regarding users and uses of their data. The NIH Collaboratory’s ABATE trial used a private enclave for all primary and secondary analyses of trial data. All analyses were conducted behind Hospital Corporation of America's firewall using a supervised data enclave model to prevent misuse of data for comparative purposes. This model requires a data use agreement (DUA), and all data are de-identified. Other investigators, with approval, could reproduce the results if ever needed. This solution allows investigators to perform analyses without actually downloading the data themselves, but it is costly and remains in effect only for a finite period of time.
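A core enclave safeguard is that users receive only aggregate results, often with small cell counts suppressed so that rare groups cannot be singled out. A minimal sketch of such a query wrapper, where the threshold of 11 and the records are illustrative assumptions:

```python
# Sketch of an enclave-style aggregate query: users never take possession
# of row-level data; they receive only group counts, with small cells
# suppressed. The threshold and records are illustrative assumptions.
from collections import Counter

MIN_CELL_SIZE = 11  # illustrative small-cell suppression threshold

def aggregate_query(records, group_key):
    """Return per-group counts, suppressing any cell below MIN_CELL_SIZE."""
    counts = Counter(r[group_key] for r in records)
    return {g: (n if n >= MIN_CELL_SIZE else "<suppressed>")
            for g, n in counts.items()}

records = [{"ward": "ICU"}] * 40 + [{"ward": "burn_unit"}] * 3
summary = aggregate_query(records, "ward")
# summary: {'ICU': 40, 'burn_unit': '<suppressed>'}
```

Real enclave platforms layer supervision, query review, and data use agreements on top of this kind of technical control; the sketch shows only the aggregation step.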

A distributed research network is a variation of a private data enclave and has been an important factor in obtaining the voluntary participation of healthcare organizations in public health activities and research for the public good. It allows organizations to maintain physical and operational control over both their patients’ and their own confidential data; they can thus opt in to a wide array of societally beneficial programs without concern that they are putting their data at risk of other uses. The Patient-Centered Outcomes Research Network (PCORnet) and FDA’s Sentinel System are examples of networks in which each participating site holds its data behind its own firewall but can make it available, on an opt-in basis, through a distributed research network for approved queries.

Limitations of These Solutions

All of these data sharing mechanisms have drawbacks. Greater control inevitably involves greater expense because of the added leadership, legal, statistical, and information technology resources required. Further, when sensitive health system characteristics are important potential confounders, the least restrictive and least expensive methods are also often the least useful, because the data that can be shared with no restriction will lack information needed to replicate the primary analysis, or to address some additional questions. The most restrictive—an enclave controlled by the original data owner—does not guarantee access. All of these solutions incur meaningful costs for annotation of the data, and all but public archives require ongoing support for oversight. Enclaves also incur substantial ongoing costs for oversight and for maintaining a computing environment that can support analyses.

In this JAMA Viewpoint article, NIH Collaboratory investigator Dr. Richard Platt and colleague Dr. Tracy Lieu discuss the value of data enclaves to facilitate information sharing in support of research, quality improvement, and public health reporting (Platt and Lieu 2018).

“Data enclaves address 2 major barriers to data sharing. First, they allow health systems to protect patients’ interests and their own by maintaining physical and operational control, permitting the systems to opt in or out of proposed analyses. Second, they obviate the need to build new secure systems” (Platt and Lieu 2018).

 

Collaboratory Data Sharing Plans (Assumes HIPAA-Compliant Patient De-identification for All Patients and a Data Use Agreement Where Appropriate)

PPACT (Pain Program for Active Coping and Training)
Risks to providers or health systems: Data on opioid prescribing patterns could be misused for inappropriate comparisons of providers or facilities.
Data sharing structure: Public archive of a modified dataset
Steps to mitigate risks: Public-use dataset does not include facility or health system identifiers, characteristics or prescribing/referral practices of individual providers, or patient-level data on race or ethnicity.

STOP CRC (Strategies and Opportunities to Stop Colon Cancer in Priority Populations)
Risks to providers or health systems: Data on screening rates could be misused for inappropriate or biased comparisons of performance across clinics or inaccurate comparisons with public quality measures.
Data sharing structure: Private archive managed by study team
Steps to mitigate risks: De-identified patient-level data are available, with permissions and data use agreements in place. Data use agreements are limited to specific research uses and require destruction after authorized analyses are completed.

SPOT (Suicide Prevention Outreach Trial)
Risks to providers or health systems: Data on suicide attempt rates could be used for biased or inappropriate comparisons of suicide attempts or suicide mortality across health systems.
Data sharing structure: Public archive of a modified dataset
Steps to mitigate risks: Public-use dataset does not include an indicator for health system.

TiME (Time to Reduce Mortality in End-Stage Renal Disease)
Risks to providers or health systems: Data regarding mortality could be misused for inappropriate or biased comparisons of facilities or healthcare systems. Detailed data regarding patterns of care could reveal proprietary business information.
Data sharing structure: Private archive managed by NIDDK
Steps to mitigate risks: De-identified patient-level data were aggregated across provider organizations and stored at the NIDDK Central Repository. Facility identifiers, dialysis provider organization identifiers, and data elements unique to one of the dialysis providers were removed. Data are made available through formal request and a data use agreement between the requestor and the NIDDK.

PROVEN (Pragmatic Trial of Video Education in Nursing Homes)
Risks to providers or health systems: Data regarding mortality could be misused for inappropriate or biased comparisons of participating facilities or systems. Data regarding admissions and discharges could reveal proprietary business information.
Data sharing structure: Public archive of aggregate-level dataset
Steps to mitigate risks: Public-use dataset includes facility-level aggregate data, with restrictions to prevent re-identification of participating facilities.

LIRE (Lumbar Image Reporting with Epidemiology)
Risks to providers or health systems: Data regarding treatment patterns and resource use could be used for inappropriate or biased comparisons across health systems and could reveal proprietary health system business information.
Data sharing structure: Private archive managed by study team
Steps to mitigate risks: Patient-level datasets were de-identified with respect to health systems, clinics, providers, and patients. Investigators authorize release to specific users for specific purposes.

ABATE (Active Bathing to Eliminate Infection)
Risks to providers or health systems: Data regarding infection rates could be used for inappropriate comparisons of facilities or with public reports. Detailed information regarding facilities and utilization patterns could reveal proprietary business information.
Data sharing structure: Private enclave managed by study team
Steps to mitigate risks: Potential users may propose specific queries. Only query results (not individual data) will be shared.

ICD-Pieces (Improving Chronic Disease Management with Pieces)
Risks to providers or health systems: Data regarding patterns of care could be used for biased or inappropriate comparisons across facilities or health systems. Given different specifications, comparison to publicly reported quality measures would be misleading.
Data sharing structure: Private archive managed by NIDDK
Steps to mitigate risks: Patient-level data were de-identified and stored in an aggregate database. Identifiers for healthcare system, primary practice, and patients were removed. Use of the aggregate dataset is governed by authorized agreements with NIDDK.

TSOS (Trauma Survivors Outcomes and Support)
Risks to providers or health systems: Data regarding baseline patient characteristics and study outcomes could be used for biased or inappropriate comparisons of care in participating facilities.
Data sharing structure: Private archive managed by study team
Steps to mitigate risks: De-identified patient-level data are provided, with priority given to research that affects trauma care systems nationwide and to Collaboratory investigators.

REFERENCES

DeBar LL, Kindler L, Keefe FJ, et al. 2012. A primary care-based interdisciplinary team approach to the treatment of chronic pain utilizing a pragmatic clinical trials framework. Transl Behav Med. 2:523–530. doi:10.1007/s13142-012-0163-2. PMID: 23440672.

Ebrahim S, Sohani ZN, Montoya L, et al. 2014. Reanalyses of randomized clinical trial data. JAMA. 312:1024–1032. doi:10.1001/jama.2014.9646. PMID: 25203082.

 

Platt R, Lieu T. 2018. Data enclaves for sharing information derived from clinical and administrative data. JAMA. 320:753. doi:10.1001/jama.2018.9342. PMID: 30083726.

Zatzick DF, Russo J, Darnell D, et al. 2016. An effectiveness-implementation hybrid trial study protocol targeting posttraumatic stress disorder and comorbidity. Implement Sci. 11:58. doi:10.1186/s13012-016-0424-4. PMID:27130272.

 


Version History

September 25, 2025: Added resources box with NIMH Data Archive presentation (changes made by G. Uhlenbrauck).

March 9: Updated to make descriptions of the trials past tense (changes made by K. Staman).

December 5, 2018: Updated and added reference as part of annual review (change made by K. Staman).

Published August 25, 2017

Data Sharing and Embedded Research

Section 2: Data Sharing Concerns

There is a strong presumption that data and metadata should be broadly shared with the scientific community. NIH issued a data management and sharing policy in 2023 that requires researchers who accept NIH funding to produce a data management and sharing plan (NIH 2023). There are many reasons for these requirements. Appropriate data management is crucial for maintaining the integrity and rigor of research, and data sharing increases the need for strong data management. Data sharing can improve the reproducibility of research findings and enable reuse of existing data in new research, allowing research to progress more rapidly and ensuring that the benefits of federal funding are optimized. Finally, the transparency that results from data sharing enhances confidence in research findings and allows others to verify and validate results and analyses. In general, data should be managed and shared in a way that is consistent with the FAIR principles: data and digital assets should be Findable, Accessible, Interoperable, and Reusable.

Not all data are appropriate for sharing. There are exceptions to the regulatory requirements for data sharing due to ethical considerations discussed in this section. These considerations are especially relevant for pragmatic clinical trials.

Type of Information Disclosed

Traditional clinical trials, such as tests of the efficacy of a new drug, device, behavioral treatment, or process, typically create research datasets that are readily de-identified and contain a limited number of data elements pertaining to the research question. With these datasets it is generally feasible for researchers who did not participate in the original trial both to reproduce the primary results and to perform additional analyses addressing different questions, although the range of these additional questions is typically limited by the design of the trial dataset. Pragmatic trials and other embedded research typically compare alternative treatments, treatment strategies, or policies. In those comparisons, variation in practice patterns among providers or facilities is a potentially important confounder—especially in trials randomizing providers or facilities rather than individual patients.

Embedded research data sets may contain rich information extracted from health system records. Those practice-based data often contain more specific information about the providers and the systems themselves than do conventional clinical trials. Examples include the number, size, or location of facilities and practices; practice volume; the number, size, and census of primary, specialty, and inpatient care units; the number or type of personnel they employ; the structure of their formularies; and information about their vendors and supply chain.

For example, ABATE Infection, an NIH Collaboratory Trial, used data from over 500,000 admissions in 53 hospitals (Huang et al. 2019). The dataset included information on every hospital’s census and length of stay on most wards, plus individuals’ procedure and comorbidity data that could reveal sensitive business information regarding patient volume, size of individual services, length of stay on individual wards, and case mix. The size and richness of the dataset effectively precluded protection against reidentification of the hospitals by comparison with external data sources. These facilities varied in size and could readily be identified by the simple release of numerators and denominators. In addition, because the ABATE Infection trial evaluated changes in the number of multidrug-resistant organisms in clinical cultures in these facilities, the potential for misuse and misinterpretation of the data for purposes unrelated to the original research question (such as using the data to make biased comparisons of the quality of care at these facilities) would be unacceptable to the healthcare system.

Similarly, the PROVEN study, an NIH Collaboratory Trial, worked with 2 nursing home systems operating in over 20 states and obtained a wide array of clinical data, downloaded monthly, on over 200,000 admissions and 60,000 long-stay residents treated in 360 skilled nursing facilities (Mor et al. 2017). Detailed clinical and demographic data from standardized patient assessments on all these patients were automatically merged with longitudinal information about staffing, treatments, and hospitalizations from the facility, which in turn was merged with Medicare claims data to track hospital use and vital status over the entire study period, regardless of whether the patient switched facilities. Although facilities participating in the intervention group represented less than 1% of all US facilities, it would not be difficult to identify them, depending upon the level of detail in which study results were presented.

Patient Participant Privacy

Because pragmatic clinical trials often involve both large amounts of data from large numbers of patients and real-world data from embedded interventions, adequately de-identifying patient data for sharing may be difficult. There is greater risk that shared data will inadvertently include identifiable information or that linkage of different data types will allow reidentification of patient information. As a result, efforts to adequately de-identify data for sharing may be logistically difficult and prohibitively expensive (Morain et al. 2023).
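One common way to reason about this reidentification risk is a k-anonymity check: count how many records share each combination of quasi-identifiers (fields that could be linked to outside data) and flag combinations held by fewer than k people. A minimal sketch, where the field names and the value of k are assumptions for illustration:

```python
# Sketch of a k-anonymity check: flag combinations of quasi-identifiers
# (fields that could be linked to outside data) shared by fewer than k
# records. Field names and k are assumptions for illustration.
from collections import Counter

def risky_groups(records, quasi_identifiers, k=5):
    """Return quasi-identifier combinations held by fewer than k records."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return {combo for combo, n in combos.items() if n < k}

records = ([{"zip3": "981", "age_band": "40-49"}] * 6
           + [{"zip3": "995", "age_band": "80-89"}] * 2)
flagged = risky_groups(records, ["zip3", "age_band"], k=5)
# flagged: {('995', '80-89')} -> this rare combination would need
# suppression or coarsening before release
```

Checks like this are only one part of a de-identification strategy; linkage risk also depends on what external data an adversary holds, which is why expert determination is often required for rich pragmatic-trial datasets.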

Provider and Institutional Confidentiality

In addition to concerns about potential privacy violations for patients who are participants in pragmatic trials, there are 2 types of confidentiality risks to provider groups and healthcare systems. The first involves revealing business information (for example, which drugs are purchased or what price is paid for specific services), for which there is a clear right of privacy. The second is revealing information that could be used for naïve and potentially biased comparisons of quality of care or performance, especially if that information differs from what is publicly reported by all systems or facilities (more detailed, more subsets, limited to vulnerable populations, or lacking case-mix adjustment). In a perfect world there would be no right of privacy regarding quality of healthcare delivery, but the current world is not perfect: the scope of disclosures for those participating in embedded research could be far greater than that required for assessing quality measures. Health systems volunteer to participate in research to improve public health, and bearing an additional risk of misuse of sensitive information may be unacceptable (Platt et al. 2016). Moreover, healthcare entities that are incentivized for their performance on quality metrics may be especially concerned about research that may produce data inconsistent with public reports because of differences between the definitions or methods used by a study and those used for public quality measures (for example, Healthcare Effectiveness Data and Information Set [HEDIS] measures based on claims data versus HEDIS measures that rely on medical record abstraction) (Simon et al. 2017). The information may also be extremely sensitive and may involve vulnerable populations. For example, in the PPACT study, an NIH Collaboratory Trial, part of the reason that healthcare systems and individual providers partnered for the research was tremendous concern about overprescribing of opioids and the dangers it presents. Yet there were substantial sensitivities about individual primary care providers' prescribing patterns, which in turn influenced what data could be made available in the shared datasets. Sensitive medical domains that might be the focus of an embedded trial could present similar concerns.

Together, the concerns about the potential for reidentification of patient data and risks to participants (both patients and healthcare providers) could justify limiting the data sharing that would normally be expected. NIH regulations allow for maintaining the privacy of data if sharing data would pose risks to participants and the data cannot be adequately deidentified.

Informed Consent

Many pragmatic clinical trials are granted waivers or alterations of consent, particularly when they use cluster randomized designs or when extant data from clinical care are the only data being used. This poses a challenge because informed consent is generally what is used to justify data sharing. Informed consent is also the primary vehicle for demonstrating respect for participants. Therefore, if a waiver of consent is granted for a pragmatic trial, the assumption that data sharing is consistent with the participant's wishes is not necessarily valid, and the obligation to demonstrate respect for participants must be met in a different way. There are several ways of addressing this, including disclosure (if not consent); greater visibility of, and education of patients about, ongoing research efforts to improve care could also play a role in demonstrating respect (Morain et al. 2025; O'Rourke et al. 2025; Propes et al. 2024).

Current and proposed disclosure policies are particularly challenging for observational studies and cluster randomized trials because providers and delivery systems, like their patients, have some of the attributes of research subjects. This is especially problematic for individual providers: while care systems may authorize use of their data, individual providers typically are not given this opportunity.

For providers, practices, and health systems that participate in research studies, there are no comparable regulatory protections, but there is a reasonable corollary to a waiver of consent, especially for individuals whose involvement is determined by their inclusion in a randomized cluster without their explicit consent. Some have argued that health systems, providers, and/or individual practitioners are participants in embedded research, much like patients, and that there is therefore an ethical obligation to provide suitable assurances regarding legitimate privacy and confidentiality concerns about the use and reuse of proprietary data collected during clinical care. However, this ethical argument has proved contentious; the scientific community is encouraging a shift toward a more transparent clinical trials enterprise, and this type of data sharing is required in other sectors, including the pharmaceutical and device industries. The crux of the argument is a very practical matter: ensuring voluntary participation.

Risk of Breach in Data Security

Embedded research studies are typically orders of magnitude larger than conventional clinical trials, making delivery systems especially sensitive to the potential for breaches of data security. For example, the median sample size for the NIH Collaboratory Trials is 19,500 individuals; the largest involves more than 500,000 individuals. Thus, the potential for harm from a single security breach is substantial. NIH is considering a new research security policy that would require institutions to have a research security plan if they accept more than $50 million in federal funding and to implement new training requirements for all research personnel (NIH 2025). Researchers have a duty to protect data from unauthorized access. Research security plans are an opportunity to think through meeting these obligations and demonstrating respect for participants.


REFERENCES


Huang SS, Septimus E, Kleinman K, et al. 2019. Chlorhexidine versus routine bathing to prevent multidrug-resistant organisms and all-cause bloodstream infections in general medical and surgical units (ABATE Infection trial): A cluster-randomised trial. Lancet. 393(10177):1205-1215. doi: 10.1016/S0140-6736(18)32593-5. PMID: 30850112.

Mor V, Volandes AE, Gutman R, Gatsonis C, Mitchell SL. 2017. PRagmatic trial Of Video Education in Nursing homes: The design and rationale for a pragmatic cluster randomized trial in the nursing home setting. Clin Trials. 14(2):140-151. doi: 10.1177/1740774516685298. PMID: 28068789.

Morain SR, Bollinger J, Weinfurt K, Sugarman J. 2023. Stakeholder perspectives on data sharing from pragmatic clinical trials: Unanticipated challenges for meeting emerging requirements. Learn Health Syst. 8(1):e10366. doi: 10.1002/lrh2.10366. PMID: 38249837.

Morain SR, Brickler A, Ali J, et al. 2025. Ethical considerations for sharing aggregate results from pragmatic clinical trials. Clin Trials. 22(2):248-254. doi: 10.1177/17407745241290782. PMID: 39587730.

NIH. 2023. Final NIH Policy for Data Management and Sharing. NOT-OD-21-013. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-21-013.html. Accessed October 7, 2025.

NIH. 2025. Implementation of NIH Research Security Policies. NOT-OD-25-154. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-25-154.html. Accessed October 7, 2025.

O'Rourke PP, Ali J, Carrithers J, et al. 2025. Disentangling informing participants from obtaining their consent. Learn Health Syst. 2025 Apr 21. doi: 10.1002/lrh2.70014. Epub ahead of print.

Platt R, Ramsberg J. 2016. Challenges for sharing data from embedded research. N Engl J Med. 374(19):1897. doi: 10.1056/NEJMc1602016. PMID: 27096325.

Propes C, O'Rourke PP, Morain SR. 2024. Recurring and emerging ethical issues in pragmatic clinical trials. Circ Cardiovasc Qual Outcomes. 17(7):e010847. doi: 10.1161/CIRCOUTCOMES.124.010847. PMID: 39012931.

Simon GE, Coronado G, DeBar LL, et al. 2017. Data sharing and embedded research. Ann Intern Med. 167(9):668-670. doi: 10.7326/M17-0863. PMID: 28973353.


Version History

October 13, 2025: Added an introduction and a section on patient participant privacy; revised the section on informed consent; added references; and updated the contributor list (changes made by D. Seils).

August 20, 2025: Updated link to The Embedded Pragmatic Clinical Trial Ecosystem section (change made by D. Seils).

March 9, 2023: Changed the description of the trials to past tense (changes made by K. Staman).

February 16, 2023: Added a resource to the Resources sidebar (changes made by D. Seils).

December 5, 2018: Added references as part of the annual update (changes made by K. Staman).

Published August 25, 2017

Partner Engagement Throughout the PCT Life Cycle

Building Partnerships and Teams to ensure a successful trial

Section 4

Partner Engagement Throughout the PCT Life Cycle

Engagement of key partner groups is critical across the research continuum, from making sure the right questions are asked to ensuring that study findings make it into the hands of health and healthcare decision-makers. Three broad stages for involving partners are: 1) planning the study, 2) conducting the study, and 3) disseminating the results (PCORI 2015). Ways in which partners might contribute at each stage are presented in the Figure and summary below.

[Figure: Partner contributions at three stages: Planning the Study; Conducting the Study and Analyzing Results; Disseminating the Results]

Planning the Study

Engagement of key partner groups ideally begins with the selection of a research topic, and many organizations employ a multi-partner process of topic prioritization (National Institute for Health Research 2016; PCORI). This ensures that resources are allocated to studies that will answer questions of greatest importance to decision-makers, and also increases the likelihood that a study will receive the support it needs for successful implementation. As noted above, PCTs that answer questions that matter to healthcare delivery organizations, clinicians, and patients are more likely to garner support. In choosing whether to support a PCT, it is recommended that decision-makers respect, promote, or represent the interests of those likely to be directly or indirectly affected; advance organizational mission and values; and consider stewardship of resources (financial, human, and organizational) (Whicher et al. 2015).

Once a research question has been selected, partners can contribute to study planning in a number of ways. For example, patient representatives can identify health outcomes that are important to them that researchers may not have thought to measure. A different set of outcomes related to resource utilization may be important to healthcare payers. Involving clinicians and other clinic staff in designing the study protocol can help minimize the trial’s impact on clinical workflow, which is particularly important for PCTs.

Conducting the Study

Partners can continue to play an important role during study implementation. While not everything about a PCT will work as initially planned, seemingly insurmountable problems usually have solutions, and well-established engagement throughout a trial can help prevent or overcome such roadblocks. Because healthcare systems are dynamic, PCTs require continued efforts to establish, maintain, and re-establish engagement through leadership or staff changes. Patient and clinician partners can contribute to the development of communication materials and data collection instruments that are understandable and easy to use. Patients and patient advocates can also provide valuable insight regarding strategies that will motivate patients to enroll in the study and remain engaged throughout the study. Once data collection is completed, partners can help to plan analyses and interpret results.

Informing and Delivering the Study Intervention

Patient and clinician partners may also play an important role in informing and delivering the study intervention or program. One way trial partners may inform how the intervention is delivered is by testing and providing feedback on example materials or methods through focus groups, interviews, and other forms of testing. In addition to providing guidance on how the intervention is delivered, a trial may be designed with designated roles for health system providers in supporting delivery of the intervention. It is important to clearly define roles and expectations, including compensation of health system partners for research study support.

Disseminating the Results

Partners can enhance dissemination by helping to translate study findings for diverse audiences and identifying avenues for dissemination beyond the traditional scientific literature. The PCORI Dissemination and Implementation Framework provides detailed information and tools for designing and implementing a robust dissemination strategy informed by multiple partner groups. Additional information can also be found in the Living Textbook chapter, Dissemination Approaches for Different Stakeholders.

Resources

The Communicating with Health System Partners handout outlines points in a study when research teams may need to engage with health system leaders, clinic-level managers, and frontline staff.

Dissemination Approaches for Different Stakeholders

Living Textbook chapter

PCORI Dissemination and Implementation Framework

Provides information and tools for designing and implementing a robust dissemination strategy informed by multiple partner groups

In a video interview, Drs. Susan Huang and Gloria Coronado give advice to pragmatic trial investigators, including a recommendation to engage operational partners within the sites.

REFERENCES


National Institute for Health Research. 2016. The James Lind Alliance Guidebook.  www.jla.nihr.ac.uk/jla-guidebook/downloads/JLA-Guidebook-Version-6-February-2016.pdf. Accessed May 9, 2017.

PCORI. 2015. PCORI Engagement Rubric. www.pcori.org/sites/default/files/Engagement-Rubric.pdf. Accessed May 9, 2017.

PCORI. Generation and Prioritization of Topics for Funding Announcements. http://www.pcori.org/research-results/how-we-select-research-topics/generation-and-prioritization-topics-funding-4. Accessed May 9, 2017.

Whicher DM, Miller JE, Dunham KM, Joffe S. 2015. Gatekeepers for pragmatic clinical trials. Clin Trials. 12:442–448. doi:10.1177/1740774515597699. PMID: 26374683.


Version History

June 11, 2025: Added new examples from NIH Collaboratory Trials GGC4H, Nudge, and BEST-ICU. Added new resource on communicating with health system partners (changes made by E. McCamic).

October 3, 2022: Added new points to Conducting list. Made minor nonsubstantive text edits. Added contributors (changes made by K. Staman and L. Stewart)

August 27, 2020: Made minor nonsubstantive font change (changes made by L. Wing).

Published August 25, 2017

Introduction

Building Partnerships and Teams to ensure a successful trial


Section 1


Introduction

Pragmatic clinical trials (PCTs) are designed to answer questions that are relevant to patients, clinicians, payers, policy-makers, and other healthcare decision-makers and to identify generalizable, sustainable ways to improve health and care delivery. Engagement of key partner groups throughout the research process is a core feature of comparative effectiveness research and particularly important in the context of PCTs. Partner input helps to ensure that PCTs are designed to answer questions important to them, that they are feasible to conduct with minimal clinical disruption, and that results are interpreted and shared appropriately.

Partner (Stakeholder): “An individual or group who is responsible for or affected by health- and healthcare-related decisions that can be informed by research evidence” (Concannon et al. 2012)

Engagement: “A bi-directional relationship between the stakeholder and researcher that results in informed decision-making about the selection, conduct, and use of research" (Concannon et al. 2012)

With PCTs conducted in real-world settings (e.g., hospitals, nursing homes, clinics), productive collaboration among researchers, clinicians, patients, and healthcare delivery organization leaders is needed to ensure that studies can be conducted in ways that support research and the goals of the organization, the clinician, and the patient.

The topic of partner engagement will be revisited throughout the Living Textbook and also merits upfront discussion. The NIH Pragmatic Trials Collaboratory has a working group dedicated to supporting partner engagement and developing best practices: the Health Care Systems Interactions Core. Lessons gleaned from the NIH Collaboratory PCT experiences, along with additional resources on partner engagement in PCTs, are described in this chapter.

Partner engagement in pragmatic research is not automatic—it must be mindfully established. According to Eric Larson, MD, formerly of the NIH Pragmatic Trials Collaboratory’s Health Care Systems Interactions Core,

“the best way to create engagement is for partners to commit to it at the outset so that they learn to trust each other and address problems collaboratively... Such collaborations are the key to solving the unforeseen and inevitable challenges of conducting clinical trials in large healthcare systems” (see full interview).

Guidelines and Methods for Engagement

Guidelines for meaningfully engaging partners in the research process include core principles such as respect, fairness, co-learning, accountability, transparency, and trust (Lavallee et al. 2012; PCORI 2015; Sheridan et al. 2017). These principles can be demonstrated by ensuring that 1) roles and expectations of all partners are clearly established, 2) everyone is adequately prepared to participate in engagement activities, 3) different viewpoints are encouraged and respected, and 4) feedback is provided regarding how partner input was used or why it was not used.

Seeking to advance the study and practice of engagement in health research, the Patient-Centered Outcomes Research Institute (PCORI) launched the Engagement in Health Research Literature Explorer. Locating relevant research articles about engagement can be challenging because of a lack of standard terminology. The tool searches a curated database of peer-reviewed literature on engagement. Articles are included in the database if they describe engagement experiences, report research findings on engagement practices, or present theories, concepts, or views on engagement. The database is updated monthly and is one way PCORI is helping to promote meaningful involvement of patients, caregivers, clinicians, and other healthcare partners throughout the research process.

There are multiple methods of partner engagement, and decisions about which method or methods to employ should be based on factors such as the stage of research, engagement objective, number and diversity of partners, geographic dispersion, and resources. One method for facilitating engagement is to establish an advisory board or steering committee with members from different partner groups. It may be helpful to build on previous collaborations (e.g., quality improvement champions) or to begin partnerships through a pilot study. Face-to-face meetings are ideal for establishing relationships and trust, but are not always feasible. Although phone and web-based conferences can be a reasonable substitute, special effort is required to facilitate active participation by stakeholders. For some types of input, one-on-one interviews or survey techniques may be more appropriate.

Engagement should begin with the selection and clarification of a research question and continue through all phases of the research. Rather than asking potential partners, “How can we answer the question I have already selected?”, investigators should ask, “What questions are most important to your health system and the people you serve?” The frequency of engagement may also vary over the course of a study. For example, more frequent engagement may be beneficial during the early stages of planning and implementation and once results from the study are available, while less frequent interaction may suffice during the enrollment and follow-up period. Regardless of the method or frequency of engagement, it is necessary to ensure that all partners are fully prepared to participate and are fairly compensated for their time and effort (see the PCORI Compensation Framework for additional guidance on compensating partners).

Case Study: IMPACT-AFib Trial

A good example of the importance of collaboration comes from the IMPACT-AFib trial. A large number of collaborators led to multiple iterations of the protocol, substantial discussion, and a lengthy review process (Cocoros et al. 2019). Branding, logos, and details related to how the intervention looked required review and approval from all partner sites. These and many other essential tasks could not have been performed by external investigators, and internal champions were instrumental in ensuring the proposed protocol could be executed (Garcia et al. 2020). Throughout the trial, patient representatives also provided indispensable guidance (Cocoros et al. 2019).


Resources

Quick Start Guide for Researcher and Healthcare Systems Leader Partnerships

This Quick Start Guide is designed to help clinical investigators partner with healthcare system leaders to support the successful conduct of an ePCT within their healthcare system. It provides advice from the Collaboratory and serves as an annotated table of contents, pointing readers to essential content in the Living Textbook regarding partnering to conduct an ePCT.

PCORI Compensation Framework

Contains guidance on compensating partners

REFERENCES


Cocoros NM, Pokorney SD, Haynes K, et al. 2019. FDA-Catalyst—Using FDA’s Sentinel Initiative for large-scale pragmatic randomized trials: approach and lessons learned during the planning phase of the first trial. Clin Trials. 16:90-97. doi:10.1177/1740774518812776. PMID: 30445835.

Concannon TW, Meissner P, Grunbaum JA, et al. 2012. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med. 27:985–991. doi:10.1007/s11606-012-2037-1. PMID: 22528615.

Garcia CJ, Haynes K, Pokorney SD, et al. 2020. Practical challenges in the conduct of pragmatic trials embedded in health plans: lessons of IMPACT-AFib, an FDA-Catalyst trial. Clin Trials. 17:360-367. doi:10.1177/1740774520928426. PMID: 32589056.

Lavallee DC, Williams CJ, Tambor ES, Deverka PA. 2012. Stakeholder engagement in comparative effectiveness research: how will we measure success? J Comp Eff Res. 1:397–407. doi:10.2217/cer.12.44.

PCORI. 2015. PCORI Engagement Rubric. www.pcori.org/sites/default/files/Engagement-Rubric.pdf. Accessed May 9, 2017.

Sheridan S, Schrandt S, Forsythe L, Hilliard TS, Paez KA. 2017. The PCORI Engagement Rubric: promising practices for partnering in research. Ann Fam Med. 15:165–170. doi:10.1370/afm.2042. PMID: 28289118.


Version History

October 3, 2022: Made minor nonsubstantive text corrections. Added contributors. Added “Case Study: IMPACT-AFib Trial” section. Added references (changes made by K. Staman and L. Stewart)

August 27, 2020: Made minor nonsubstantive text corrections (changes made by L. Wing).

February 27, 2020: Made minor nonsubstantive text corrections (changes made by G. Uhlenbrauck).

December 15, 2018: Added text and revised as part of annual review (changes made by K. Staman).

Published August 25, 2017

Additional Resources

Study Startup


Section 3


Additional Resources

 

Toolkits

Clinical Research Toolbox
This toolkit from the National Center for Complementary and Integrative Health (NCCIH) has templates, sample forms, and information materials to assist investigators with study startup activities.

Clinical Research Study Investigator Toolbox
This toolkit from the National Institute on Aging (NIA) serves as a web-based informational repository containing templates, sample forms, guidelines, regulations, and informational materials to assist investigators in the development and conduct of high-quality clinical research studies.

White Papers

Considerations for Training Front-Line Staff and Clinicians on Pragmatic Clinical Trial Procedures
This document helps PCT study teams plan training for study procedures that involve front-line clinicians and staff. The content was developed by drawing on trial-specific experience from the NIH Collaboratory Trials. The document describes how training for PCTs will differ from training conducted for typical research studies, and includes a list of specific considerations, real-world examples, a checklist for PCT training design, and links to additional resources.

Online Training

Operationalizing the Trial Design
An online video training resource for researchers to learn strategies for operationalizing a trial and engaging study teams and participants.

Journal Articles

Communication is the key to success in pragmatic clinical trials in Practice-based Research Networks (PBRNs)
Bertram S, et al. J Am Board Fam Med 2013
Effective communication is the foundation of feasibility and fidelity in practice-based pragmatic research studies. Doing a study with practices spread over several states requires long-distance communication strategies, including e-mails, faxes, telephone calls, conference calls, and texting. Developing and ensuring comfort with distance communications requires additional time and use of different talents and expertise than those required for face-to-face communication. This discussion is based on the extensive experience of 2 groups who have worked collaboratively on several large, federally funded, pragmatic trials in a practice-based research network.

Trials without tribulations: minimizing the burden of pragmatic research on healthcare systems
Larson EB, et al. Healthcare 2015
Pragmatic clinical trials are increasingly common because they have the potential to yield findings that are directly translatable to real-world healthcare settings. Pragmatic clinical trials need to integrate research into the clinical workflow without placing an undue burden on the delivery system. This requires a research partnership between investigators and healthcare system representatives.

A guide to research partnerships for pragmatic clinical trials
Johnson KE, et al. BMJ 2014
A successful pragmatic clinical trial starts with a strong partnership between researcher and healthcare system, goes through a rigorous objective evaluation of the ability of the partner healthcare system to participate, and ends with evidence about sustainable ways to improve care, as well as a long-term scientific relationship.

Pragmatic clinical trials embedded in healthcare systems: Generalizable lessons from the NIH Collaboratory
Weinfurt, et al. BMC Med Res Methodol 2017
The clinical research enterprise is not producing the evidence decision makers arguably need in a timely and cost-effective manner; research currently involves the use of labor-intensive parallel systems that are separate from clinical care. The emergence of pragmatic clinical trials (PCTs) poses a possible solution: these large-scale trials are embedded within routine clinical care and often involve cluster randomization of hospitals, clinics, primary care providers, etc. Interventions can be implemented by health system personnel through usual communication channels and quality improvement infrastructure, and data collected as part of routine clinical care.

 



Version History

January 30, 2026: Added Gregory Simon as a contributor. Updated links to resources. Removed resources with links that are no longer active. Added two new resources (changes made by T. Green).

May 22, 2020: Added resource links to NCCIH and NIA research toolkits (changes made by L. Wing).

December 5, 2018: Added a new resource as part of annual content update (changes made by L. Wing).

Published August 25, 2017

Implementation Readiness Checklist

Study Startup


Section 2


Implementation Readiness Checklist

The checklist below identifies milestones that mark trial readiness.


 

Recruitment plans are finalized
  • All sites identified (documentation of site commitment)
  • Methods for accurately identifying participants validated
  • All agreements for necessary subcontracts in place (including DUAs if applicable)

Ethical/regulatory aspects are addressed
  • Coordinated IRB oversight in place
  • Finalized plans for informed consent or waiver of informed consent
  • Finalized data and safety monitoring plan

Intervention is fully developed and finalized
  • Finalized intervention (including materials and training at sites) ready for site implementation
  • Finalized protocol is IRB approved (informed consent and data collection forms, if applicable)

Data collection methods are adequately tested
  • Validated methods for extracting and harmonizing electronic health record information
  • Validated study surveys, interviews, or other data collection modes
  • Demonstrated quality assurance and harmonization of data elements across healthcare systems/sites

Statistical and data analysis methods have been adequately developed and recorded

Budget is realistic, feasible, and accounts for potential changes

Trial registration is updated to reflect any changes to study protocol (including changes to outcome measures and analytic plan)



Version History

January 29, 2026:  Added Gregory Simon as a contributor. Updated Implementation Readiness Checklist as part of annual content update (changes made by T. Green)

December 5, 2018: Updated Implementation Readiness Checklist as part of annual content update (changes made by L. Wing)

Published August 25, 2017


Introduction

Study Startup


Section 1

Introduction

After feasibility and pilot testing, it may be necessary to adjust or refine the study design, intervention workflow, support processes, personnel, or study tools. Ideally, pilot and feasibility testing should evaluate all planned trial procedures (identification of participants, random assignment within daily health care operations, consistent delivery of study interventions, and integrity/timeliness of data regarding primary outcomes). As the study is conducted, assessment of contextual changes will be important, potentially leading to continued iterations of pilot testing or troubleshooting. It has been suggested that study teams assess their protocol and intervention workflow according to predefined study checkpoints (for example, after X number of patients enrolled or after X number of weeks in the study). Remember to build in flexibility to accommodate local conditions and changes over time. Below are a few examples of study refinements.

 

Issue: Inadvertent patient crossover to intervention due to changes in electronic health records
Adjustment: This trial tested insertion of epidemiologic information into radiology reports regarding imaging for back pain. The partner healthcare system had an issue of dynamic updating when a user opened a radiology report. Because randomization depended on calendar time in the stepped-wedge design, there was a potential for a single patient to cross over from the nonintervention group to the intervention group simply because the report was viewed at different times. The study team worked with site programmers to change the intervention insertion from dynamic to static so that it did not change depending on the viewing date.

Issue: Inadequate PRO collection
Adjustment: The study team expected at the outset that adequate patient-reported outcome (PRO) data would be routinely collected and recorded in the EHR system. Pilot testing revealed that PRO data were actually collected and recorded infrequently. Consequently, the team discovered after the trial began that they had to enhance support for PRO collection.

Issue: Intervention modification based on clinical practice limitations
Adjustment: The study team adjusted elements of the intervention delivery based on the availability of staff and limitations of the practice (such as physical therapists’ professional licensing limitations in working with a larger group of patients).

Issue: Incomplete information regarding eligible participants
Adjustment: The partner healthcare system had initial problems with patient lists not distinguishing between potential candidates and those who were eligible and confirmed for enrollment. The study team put in place an additional manual confirmation step performed by the practice facilitators.

 

The startup phase involves ensuring that the elements of trial readiness have been completed before full site implementation begins. Activities during this phase may include:

  • Finalizing the study documentation (see Documentation Checklist).
  • Visiting each participating site/clinic/health system to identify concerns, identify logistical or practical barriers, and complete any necessary training of frontline clinicians and staff.
  • Anticipating potential barriers and creating contingency plans for site dropout, inadequate enrollment, intervention contamination, EHR system changes, and potential turnover of healthcare system leadership, staff, or clinicians.
  • Developing necessary tools for communication with study staff, health system staff, and potential participants (these may be similar or distinct, depending on specifics of study design and protocol).
  • Establishing and testing procedures for collection of outcome data (including testing of assumptions regarding data recorded by health clinicians or other personnel).
  • Establishing a system whereby the study team can quickly verify that the intervention is being implemented as planned.
  • Establishing a plan for ongoing communication/meetings with study teams, advisory board, and data monitoring committee.


NIH Resources for Study Startup

The National Center for Complementary and Integrative Health (NCCIH) offers a Clinical Research Toolbox containing templates, sample forms, and information materials to assist clinical investigators in the development and conduct of high-quality clinical research studies.

This Quick Start Guide provides resources to help project managers (PMs) support the conduct of an embedded pragmatic clinical trial (ePCT) within their healthcare system.

The National Institute on Aging (NIA) provides a Clinical Research Study Investigator's Toolbox as an online informational repository for investigators and staff involved in clinical research.

The National Center for Advancing Translational Sciences (NCATS) supports a broad range of clinical research tools that facilitate clinical trial design, patient recruitment, and regulatory compliance.

Living Textbook

Read about stakeholder engagement in study startup and throughout the ePCT trial life cycle.


Version History

January 30, 2026: Added Gregory Simon as a contributor. Made additional nonsubstantive edits as part of annual content update. (changes made by T. Green)

July 8, 2020: Added link to NCATS clinical research resources (change made by L. Wing).

May 22, 2020: Added links to study startup resources at NCCIH and NIA (changes made by L. Wing).

February 11, 2020: Added Resource box with link to stakeholder engagement section (changes made by K. Staman).

December 18, 2018: Made nonsubstantive edits as part of annual content update (changes made by L. Wing).

Published August 25, 2017

Additional Resources

Assessing Feasibility


Section 8


Additional Resources

Resource | Description

ePCT training resources
  • Topic 7: Pilot and Feasibility Testing (PDF) (Wendy Weber, ND, PhD, MPH)
  • Topic 7: Case Study Webinar: Pilot-testing Interventions in Pragmatic Trials (Video) (Greg Simon, MD, MPH)
    These are a slide deck and video webinar from the 2018 ePCT Training Workshop specifically related to pilot testing and feasibility. Other ePCT training resources are also available.

White papers and guidance
  • Considerations for Training Front-Line Staff and Clinicians on Pragmatic Clinical Trial Procedures
    This document helps PCT study teams plan training for study procedures that involve front-line clinicians and staff. The content was developed by drawing on trial-specific experience from the NIH Collaboratory Trials. The document describes how training for PCTs will differ from training conducted for typical research studies, and includes a list of specific considerations, real-world examples, a checklist for PCT training design, and links to additional resources.
  • Lessons Learned from the NIH Health Care Systems Research Collaboratory Trials
    This document presents problems and solutions for PCT initiation and implementation based on trial-specific experience from the NIH Collaboratory Trials. For each NIH Collaboratory Trial, problems and solutions are listed as they pertain to building partnerships, defining clinically important questions, assessing feasibility, involving interest holders in study design, developing study workflows, and considering potential Institutional Review Board (IRB), regulatory, and biostatistical issues.

Online training
  • Center for Research Implementation Science and Prevention (CRISP) Pragmatic Trials Workshop Handbook (PDF)
    An online course and resources for researchers to learn about conducting pragmatic clinical trials.

Journal articles
  • Readiness assessment for pragmatic trials (RAPT)
    RAPT is a framework that study teams can use in the pilot phase to assess the readiness of their embedded intervention before advancing to the full implementation phase. RAPT delineates 9 readiness criteria to evaluate from low to high readiness for an intervention.
  • Communication is the key to success in pragmatic clinical trials in Practice-based Research Networks (PBRNs)
    Effective communication is the foundation of feasibility and fidelity in practice-based pragmatic research studies. Conducting a study with practices spread over several states requires long-distance communication strategies, including e-mails, faxes, telephone calls, conference calls, and texting. Developing and ensuring comfort with distance communications requires additional time and different talents and expertise than those required for face-to-face communication. This discussion is based on the extensive experience of 2 groups who have worked collaboratively on several large, federally funded, pragmatic trials in a practice-based research network.
  • Trials without tribulations: minimizing the burden of pragmatic research on healthcare systems
    Pragmatic clinical trials are increasingly common because they have the potential to yield findings that are directly translatable to real-world healthcare settings. Pragmatic clinical trials need to integrate research into the clinical workflow without placing an undue burden on the delivery system. This requires a research partnership between investigators and healthcare system representatives.
  • A guide to research partnerships for pragmatic clinical trials
    A successful pragmatic clinical trial starts with a strong partnership between researcher and healthcare system, goes through a rigorous objective evaluation of the ability of the partner healthcare system to participate, and ends with evidence about sustainable ways to improve care, as well as a long-term scientific relationship.



Version History

August 27, 2020: Added a link to the RAPT resource as part of annual content update (changes made by L. Wing).

December 11, 2018: Added new resource to table as part of annual content update (changes made by L. Wing).

Published August 25, 2017