Spotlight on NIH Collaboratory Trials

Assessing Feasibility


Section 7


Over the years, the NIH Collaboratory has conducted interviews with the principal investigators of the NIH Collaboratory Trials, which are embedded PCTs (ePCTs). To assist future study teams, we highlight challenges and lessons learned during feasibility testing from a sampling of studies before they advanced to the full implementation phase.

GGC4H: Guiding Good Choices for Health – Drs. Kuklinski and Sterling

The GGC4H study asks two core questions: Will the Guiding Good Choices program improve adolescent behavioral health when offered in a pediatric health care setting? And will parents in that setting actually enroll in the program, and to what degree? The initial design called for studying only adolescents whose parents had enrolled in the intervention, which raised issues of selection bias and threatened valid assessment of effectiveness. Moreover, the original adolescent sample included only those with well visits, another source of selection bias. During the transition period, the team’s biostatisticians revised the design to address both effectiveness and implementation. The new design includes all adolescents in the eligible age range who receive care at the participating pediatric clinics and recruits them before any intervention is offered to their parents. Intervention and control arm adolescents are recruited and followed up using similar procedures and timing. The new design also offers the intervention to all families in the intervention arm, even if their adolescents are not in the study. With these changes, study results will be more generalizable and valid for both the effectiveness and implementation questions.

Read more about GGC4H:

EMBED: Pragmatic Trial of User-Centered Clinical Decision Support to Implement Emergency Department-initiated Buprenorphine for Opioid Use Disorder – Drs. D'Onofrio and Melnick

Among the feasibility challenges addressed by the EMBED study in the pilot phase were poor usability of health information technology (HIT), a complex and unfamiliar protocol for initiation of buprenorphine, limitations of the EHR system and vendor-provided clinical decision support (CDS), and the urgency of addressing the opioid crisis. During the UG3 phase, the study team conducted direct observation and interviews of clinicians, including attending and resident physicians as well as advanced practice providers, in the emergency department to identify current gaps and needs in HIT. They focused on developing a user-friendly web-based CDS tool to facilitate management of potentially eligible patients with opioid use disorder (OUD). They also validated a two-algorithm phenotype that could flag potential patients with OUD. Further, the team modified their original study design from a stepped-wedge to a parallel group-randomized trial with constrained randomization to better address the temporal trends of the opioid crisis.

Read more about EMBED:

SPOT: Suicide Prevention Outreach Trial – Dr. Simon

Multiple rounds of pilot testing were done to refine the outreach programs and maximize engagement. The study team sought a balance between being appropriately assertive and being intrusive, and included individuals with experience of self-harm or suicidal ideation to inform the development and refinement of the outreach messages. Based on the pilot testing, the team has an expectation of the level of engagement; however, because the pilot was conducted at only one site, whether engagement will be similar or better at the other sites remains to be seen. Dr. Simon noted that issues related to engaging patients are not expected to vary widely among sites, but technical or health system issues may arise because all of the sites run different customized versions of the Epic electronic health record system, and the trial relies on tools embedded in Epic to make its processes work. “Writing all that code and translating it to another Epic instance is not simple. It’s not like a Microsoft Word document that any version of Microsoft Word can open.”

Read more about SPOT:

STOP CRC: Strategies and Opportunities to Stop Colorectal Cancer in Priority Populations – Drs. Coronado and Green

In the first phase of the trial, the electronic medical record (EMR) tools needed for the intervention were customized through consultation with EMR specialists and an advisory board of clinicians, policymakers, and payers. The intervention was introduced to participating clinics in the second phase, followed by refinement of the EMR tools. A major accomplishment of this phase has been the implementation of a well-validated quality improvement approach called Plan-Do-Study-Act, or PDSA. The use of PDSA has helped to identify implementation issues and unintended consequences and has empowered clinics to actively address local conditions. The PIs observed that, while their UH2 pilot was as comprehensive as possible and provided a useful way to begin the research, important learning has continued throughout the UH3 phase.

Read more about STOP CRC:

LIRE: Lumbar Imaging with Reporting of Epidemiology – Dr. Jarvik

Dr. Jarvik says the most important lesson is to work with systems and people you know and trust and with whom you have good relationships. The LIRE study team had preexisting, well-established research relationships with the sites, which helped with engagement of the clinicians, health system leaders, and the IRB. He also advised that the more you can pilot and smooth out the small kinks, the better off you will be: “Some of our systems are highly integrated and top-down managed, and some, like the Mayo Clinic, are much more diverse.”

Read more about LIRE:

PPACT: Collaborative Care for Chronic Pain in Primary Care – Dr. DeBar

Dr. DeBar says that when focusing on clinical issues considered critical and urgent, nothing is static. Everything is moving all the time. Everything is new in this hybrid between clinical care and pragmatic research. Resilience is required. It is important to adopt systems and processes that are native to the healthcare system whenever you can. In the different Kaiser Permanente regions, there are systems, processes, and project managers for change initiatives and quality improvement, and it would have been better to substantively partner with them earlier in the process. Last, be cognizant of what makes your research question a timely one, because the answer to this question portends challenges in implementation. Do we need this because there is a lack of existing services? If so, then the politics are simple. But if there are services that may or may not meet the needs of existing patients, then close work with the interest holders is required.

Read more about PPACT:



Version History

September 2, 2020: Added links to Grand Rounds Presentations related to each of the project spotlights (changes made by L. Wing).

August 27, 2020: Added two scenarios from NIH Collaboratory Trials and made nonsubstantive changes to text as part of annual content update (changes made by L. Wing).

Published August 25, 2017

Feasibility Assessment Scenarios From the NIH Collaboratory Trials

Assessing Feasibility


Section 6


The following table gives examples of various feasibility challenges and troubleshooting approaches taken by the NIH Collaboratory Trial study teams during the trials’ planning or pilot phases.

Feasibility Assessment Examples
RAPT Domain: Measurement
Challenge: The original study design led to partial cross-nesting of intervention participants, which would have threatened valid statistical inference.
Approach: Biostatisticians devised a novel analytic approach that resolved the statistical concerns and, in a simulation study, showed strong power, nominal alpha levels, and adequate coverage.

RAPT Domain: Measurement
Challenge: EHR data did not include all adolescent outcomes and were not consistently available across the sites.
Approach: Developed and tested an Adolescent Behavioral Health Survey to collect data on key adolescent outcomes.

RAPT Domain: Feasibility
Challenge: Federal regulations around buprenorphine (BUP) administration for opioid use disorder required physicians to have specialized, time-consuming training.
Approach: Designed the intervention’s clinical decision support tool to be usable by both waivered and non-waivered emergency department clinicians while remaining in compliance with regulatory statutes.

RAPT Domain: Feasibility
Challenge: Poor EHR usability was a barrier to incorporating a complex workflow in the emergency department setting.
Approach: Optimized EHR usability and integration, automated the EHR workflow, and designed for scalability across a variety of healthcare systems.

RAPT Domain: Feasibility/Cost
Challenge: Patient-reported outcomes (PROs), such as the Brief Pain Inventory, were not embedded in the EHR system to allow extraction from the record.
Approach: Built an enhanced infrastructure for quarterly PRO data collection designed to be as easily scalable as possible; for example, relying on the patient health record and interactive voice response systems in clinic use and reserving person-based outreach for patients who did not engage with automated outreach.

RAPT Domain: Acceptability
Challenge: Navigating local systems was challenging.
Approach: Involved the quality improvement (QI) infrastructure in trial planning; QI project managers embedded in the healthcare systems guided the projects.

RAPT Domain: Cost
Challenge: The study team did not anticipate some of the delays associated with data validation.
Approach: Reallocated funds for additional IT and data analyst effort.

RAPT Domain: Feasibility/Cost
Challenge: Because the primary outcome was hospitalization rate per person-day alive, nursing home data needed to be matched with hospital and Medicare vital statistics data, since nursing home data alone could have biased results.
Approach: Added IT resources to help link the systems.

RAPT Domain: Feasibility
Challenge: Capabilities of the EHR systems varied, with no single administrative database.
Approach: Asked all level 1 and 2 trauma centers to complete a survey of EHR capabilities and found that while some sites could automate PTSD screening, others needed to screen manually. Developed methods to work with all sites regardless of capability and created a 10-domain EHR screen for risk factors for PTSD and other comorbid conditions.

RAPT Domain: Acceptability
Challenge: A small change to workflow or the IT system was often viewed as a large change by health system personnel.
Approach: Devoted more effort than expected at the local level, engaging individual practitioners and administrators at the facilities.

RAPT Domain: Feasibility/Acceptability
Challenge: The study team initially planned for structured, stepwise electronic tools that were time-consuming to use but would provide a detailed therapy plan.
Approach: After discussing the tool with medical directors and physicians, developed more user-friendly, less burdensome tools.

RAPT Domain: Feasibility/Cost/Measurement
Challenge: Management of multiple chronic conditions varied across the healthcare systems.
Approach: Study facilitators developed different workflows to accommodate the variations in resources at each site. These facilitator roles sat within the healthcare systems and required more multidisciplinary review of the proposed workflows.

RAPT Domain: Feasibility/Cost/Measurement
Challenge: Because the EHR updated in real time, the lists of eligible and active patients at the clinics changed continuously, causing discordance with the lists gathered for research purposes.
Approach: Worked with the statisticians to add a secondary analysis. In another instance, much more intensive analyst staffing was required during participant recruitment to accommodate frequent updates in provider and clinic assignment of potentially eligible patients.

RAPT Domain: Feasibility/Acceptability
Challenge: The study team and healthcare system partners did not want to recruit facility leadership to participate and then assign them to control, because the partners felt that all facilities would want the intervention video.
Approach: Chose to “prerandomize” by first applying eligibility criteria to existing data on all of the partner facilities and giving the partners the opportunity to exclude facilities with recent leadership changes. The team then divided facilities into a priori strata and randomly selected the 120 treatment facilities from the pool, leaving the rest as controls. In this way, no facility that wanted to participate was disappointed, and the partners were confident of a high participation rate.

RAPT Domain: Feasibility
Challenge: The initial sample size was based on broad estimates of the prevalence of multiple chronic conditions across the healthcare systems and was limited by the lack of detailed cluster-level information.
Approach: In the planning phase, redefined the cluster units from individual practitioners to practice sites, queried the EHR systems with the new cluster definition, and collaborated with statisticians at the NIH to establish an appropriate sample size.
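The "prerandomize" approach described above can be sketched in code. This is a minimal illustration, not the study's actual procedure: the facility records, stratum labels, and proportional-allocation rule are all hypothetical.

```python
# Sketch of prerandomization: apply eligibility criteria to existing facility
# data, group the eligible facilities into a priori strata, then randomly
# select the treatment facilities, leaving the rest as controls.
# All facility data and the allocation rule here are hypothetical.
import random
from collections import defaultdict

def prerandomize(facilities, n_treatment, seed=0):
    """`facilities`: dicts with 'id', 'eligible', 'stratum'. Treatment slots
    are allocated to strata proportionally (rounding may shift totals by 1)."""
    rng = random.Random(seed)
    eligible = [f for f in facilities if f["eligible"]]
    by_stratum = defaultdict(list)
    for f in eligible:
        by_stratum[f["stratum"]].append(f)
    treatment = set()
    for stratum in sorted(by_stratum):
        members = by_stratum[stratum]
        k = round(n_treatment * len(members) / len(eligible))
        treatment |= {f["id"] for f in rng.sample(members, min(k, len(members)))}
    control = {f["id"] for f in eligible} - treatment
    return treatment, control
```

With, say, 200 eligible facilities split evenly across two strata and `n_treatment=120`, each stratum contributes 60 treatment facilities and the remaining 80 serve as controls; no facility is recruited only to be told it was assigned to control.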



Version History

October 25, 2022: Added the RAPT Domain to the feasibility assessment examples table as part of annual content update (changes made by E. McCamic).

August 27, 2020: Added four feasibility assessment examples to the table and made nonsubstantive changes to text as part of annual content update (changes made by L. Wing).

Published August 25, 2017

Pilot Testing

Assessing Feasibility


Section 5


Pilot testing involves an assessment of the readiness of the embedded intervention before launching its full implementation. Such assessment will involve evaluating the context, capabilities, and challenges of the partner healthcare system as well as testing key elements of the intervention and collection and transfer of data from the EHR.

Readiness Assessment for Pragmatic Trials (RAPT)

RAPT is the first model to help interventionists and funders assess the extent to which interventions are ready for PCTs. Scoring efficacious interventions using RAPT can inform research team discussions regarding whether or not to advance an intervention to effectiveness testing using a PCT and how to design that PCT. (Baier et al. 2019)

RAPT is a recently developed framework for study teams in the pilot phase to assess the readiness of their embedded intervention before advancing to the full implementation phase (Baier et al. 2019). RAPT delineates nine readiness domains, each rated from low to high readiness:

RAPT Domain
Implementation protocol: Is the protocol sufficiently detailed to be replicated?
Evidence: To what extent does the evidence base support the intervention’s efficacy?
Risk: Is it known how safe the intervention is?
Feasibility: To what extent can the intervention be implemented under existing conditions?
Measurement: To what extent can the intervention’s outcomes be captured?
Cost: How likely is the intervention to be economically viable?
Acceptability: How willing are providers likely to be to adopt the intervention?
Alignment: To what extent does the intervention align with external interest holders’ priorities?
Impact: How useful will the intervention’s results be?

For details on how to use the RAPT tool, visit the RAPT Model website.

Challenges During the Pilot Phase

Feasibility assessment specific to embedded PCTs may be associated with the following challenges.

Biostatistical Issues

Study teams should work early on with their statistician to anticipate gaps or issues around cluster randomization, including sample size and potential for contamination, intraclass correlation, varying cluster size, the need for stratification or matching, and potential for missing follow-up data.
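To see why cluster size and intraclass correlation matter for sample size, consider the standard design-effect adjustment. This sketch is a back-of-the-envelope illustration (not from the source); the formula is the conventional one, and all numbers are hypothetical.

```python
# Minimal sketch: the standard design-effect formula used when sizing a
# cluster-randomized trial. All numeric inputs are hypothetical.

def design_effect(mean_cluster_size: float, icc: float, cv: float = 0.0) -> float:
    """Variance inflation due to clustering. `icc` is the intraclass
    correlation; `cv` is the coefficient of variation of cluster sizes
    (0 for equal-sized clusters)."""
    return 1 + ((cv**2 + 1) * mean_cluster_size - 1) * icc

# Suppose an individually randomized design would need 400 participants per
# arm; clinics average 20 eligible patients, ICC = 0.05, and cluster sizes
# vary (cv = 0.4):
n_individual = 400
deff = design_effect(mean_cluster_size=20, icc=0.05, cv=0.4)
print(f"design effect: {deff:.2f}")                        # 2.11
print(f"participants per arm: {n_individual * deff:.0f}")  # 844
```

Even a modest ICC more than doubles the required sample here, which is exactly the kind of gap a statistician can flag before the trial is committed to a cluster count.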

The Collaboratory’s Biostatistics and Study Design Core provides resources to help address challenges related to:

Secondary Use of EHR Data

Using EHR data for research is fundamentally different from using prospectively collected data. Several aspects of EHR data drive these differences, including the lack of control over data definitions and data collection processes in healthcare facilities, procedures for access to the data, frequent dependence on record linkage, the need for computable definitions for cohorts and outcomes of interest, and the intricacies of demonstrating that data are of adequate quality to support research conclusions.

Consider how the intervention will use existing EHR data for cohort identification, recruitment, sample size estimates, population screening, collection of embedded patient-reported outcome (PRO) data, and so on. If there will be different EHR systems, how will they be linked? Pilot test the data collection procedures and any web-based tools developed specifically for the study.
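One piloting step implied above, validating a computable definition before relying on it, can be sketched as a comparison of algorithm-flagged patients against gold-standard manual chart review. The function name and IDs are illustrative only, not from the source.

```python
# Sketch: pilot a computable phenotype by checking its flags against a
# manual chart-review sample and reporting positive predictive value (PPV)
# and sensitivity. Assumes at least one flagged and one confirmed patient.

def validate_phenotype(flagged, reviewed, confirmed):
    """`flagged`: IDs the EHR algorithm identified; `reviewed`: IDs whose
    charts were manually reviewed; `confirmed`: reviewed IDs where the
    condition was confirmed. Returns (PPV, sensitivity) on the reviewed set."""
    flagged_reviewed = set(flagged) & set(reviewed)
    true_pos = flagged_reviewed & set(confirmed)
    ppv = len(true_pos) / len(flagged_reviewed)
    sensitivity = len(true_pos) / len(set(confirmed))
    return ppv, sensitivity

# Hypothetical pilot: 4 of 6 reviewed charts were flagged; 3 flags confirmed.
ppv, sens = validate_phenotype(flagged={1, 2, 3, 4},
                               reviewed={1, 2, 3, 4, 5, 6},
                               confirmed={1, 2, 3, 5})
print(f"PPV = {ppv:.2f}, sensitivity = {sens:.2f}")  # PPV = 0.75, sensitivity = 0.75
```

A study team would set acceptance thresholds for these metrics in advance and repeat the check at each site, since EHR data definitions vary across systems.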

The Collaboratory’s Electronic Health Records Core offers PCT-specific guidance for study teams on:

Capabilities and Readiness of the Partner Healthcare System

Consider whether there is a gap in existing services for the target population, or whether the healthcare system has had difficulty successfully addressing patient needs. It is important to evaluate how the embedded intervention will align with the healthcare system’s goals for advancing its practice. Also consider the best timing for embedding the PCT intervention if there are expected EHR platform changes or pending external contextual or policy changes that could affect the system’s capacity to implement the PCT. Allow sufficient time for the build-out and/or development of new EHR tools and forms.

Consider testing the intervention across partner systems that vary in capacity or context; for example, a VA health system and a regional or community health system. Evaluate how effective the system or clinic will be as a research partner. One way is for the study team to pilot the programming capacity of a site (e.g., giving the site sample programming code, or asking sites to send the code that they will use to the study coordinating center) to determine if the site is able to fully participate. Researchers could devise a set of criteria for site inclusion. For example, in a trial in which epidemiological benchmarks are inserted into radiology imaging reports, the study team wanted to verify at each site that the intervention text could be successfully included in the report based on a specific CPT code, modality, patient age, and date. Another way to gauge the effectiveness of a partnership is to confirm the healthcare system’s commitment to identifying the clinical staff available to carry out the study intervention.
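The radiology example above, verifying that intervention text fires only for the intended reports, might be piloted with a rule like the following sketch. The CPT codes, modalities, age range, and start date are hypothetical placeholders, not values from the trial.

```python
# Sketch of a site-pilot eligibility rule for inserting benchmark text into
# an imaging report, keyed on CPT code, modality, patient age, and exam date.
# All specific codes and bounds below are hypothetical placeholders.
from datetime import date

ELIGIBLE_CPT = {"72100", "72110"}         # placeholder imaging CPT codes
ELIGIBLE_MODALITIES = {"XR", "CT", "MR"}  # placeholder modality codes

def should_insert_benchmark(cpt: str, modality: str, patient_age: int,
                            exam_date: date,
                            start: date = date(2024, 1, 1)) -> bool:
    return (cpt in ELIGIBLE_CPT
            and modality in ELIGIBLE_MODALITIES
            and 18 <= patient_age <= 89
            and exam_date >= start)

# A site could run this against a sample of historical reports and compare
# the flags with what its own report-generation code produces.
print(should_insert_benchmark("72100", "MR", 54, date(2024, 6, 1)))  # True
print(should_insert_benchmark("72100", "US", 54, date(2024, 6, 1)))  # False
```

Running the same rule and the site's own implementation over identical inputs gives the coordinating center a concrete check of the site's programming capacity before launch.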

Integrating the Study Into the Clinical Workflow

Test how smoothly the embedded intervention will be incorporated into the site’s existing system infrastructure. Sites have different workflows and available resources, which can complicate the process. In pragmatic research, it is important that study procedures mimic routine practice. Consider how individual healthcare providers will be engaged in the intervention and how burdensome it may be for them. Make an effort to simplify and streamline the study to make integration as user-friendly as possible. Be open to adapting the study based on interest holder input or changes in the care delivery system. Study teams are also encouraged to pilot the effectiveness of trial-related materials such as informed consent forms and processes, training videos, brochures and placards, and toolkits.

  • What are 2-5 aspects of your trial that are essential to pilot?



Version History

October 4, 2022: Converted RAPT Domain list to a table as part of the annual content update (changes made by E. McCamic).

August 27, 2020: Added an introduction to the RAPT readiness assessment tool as part of the annual content update (changes made by L. Wing).

April 15, 2020: Updated broken hyperlinks (changes made by G. Uhlenbrauck).

December 11, 2018: Updated text describing pilot testing with the partner healthcare system as part of the annual content update; added a Key Question (changes made by L. Wing).

Published August 25, 2017

Delineating the Roles of All Interest Holders to Determine Training Needs

Assessing Feasibility

Section 4

Consider the roles of different interest holders at each trial site and the training needed for frontline staff and operational personnel. Embedded PCT (ePCT) interventions have varying levels of complexity, and accordingly may involve clinicians (physicians, specialists, nurses), senior management, business operations personnel, IT staff, researchers, clinic champions, and practice facilitators. The intervention may fall anywhere along a spectrum from relatively “passive,” requiring initial planning with the staff implementing the intervention and then limited ongoing training, to relatively “active” and involve system-wide changes that would need reinforcement over time.

Consider how existing procedures in each setting will need to be adapted and whether the intervention will generate new care delivery processes and workflows. Training for embedded PCTs is optimally conducted within routine care and operations using the healthcare system’s existing training structure. Also, system change over time may need to be accommodated, for example, to orient new leaders and staff joining the system or to account for changes in electronic tools for clinical decision support.

The following checklist presents considerations for designing interest holder training. For more guidance and examples, read Training Front-line Staff and Clinicians.

Checklist for PCT Training Design

Download as Word or PDF

Item | Considered/Completed
Determine implementation complexity
Determine the degree of involvement from interest holders, the number of interest holders, and the amount of ongoing training needed for the intervention
Coordinate with the study sites or care delivery organizations
Identify local contact/champion
Determine who needs to be trained
Check if standard training structures and materials are available
Determine if staff or clinicians in the organization are able to conduct study training
Review parallel training efforts or programs planned by the care organization that may overlap with study training plans
Human resources
Review existing staff roles with supervisor/manager and discuss study-specific responsibilities or tasks
Consider alignment of staff identities and lived experience with the population served
Consult with local community organizations and DEI offices to ensure equitable hiring and support processes
Create scope of work for staff performing study tasks
Discuss potential contracting or hiring requirements with care delivery organizations’ Human Resources departments*
Training topics
Define new procedures and changes to existing clinic workflow
Review communications to be given to patients and suggestions for staff if patients have questions about the trial communications or procedures
Determine if staff roles require training on human subjects protection
Control and intervention arms
Develop specific training procedures for different study arms as relevant
Track training activities (study analyses may need this)
Training structure
Consider how standard training structures might correspond/not correspond with study training
Will a train-the-trainer approach work?
Fidelity monitoring
Consider how tools needed to track study procedures might also be used to indicate need for retraining
Encourage input from staff about tools to make tracking easier for them and update over time

*With respect to hiring, consider providing funds through the study to pay the personnel who are directly responsible for study procedures related to research, which both prioritizes the study procedures and gives more control to investigators. In instances in which additional work is anticipated because of the intervention, the healthcare system could use study funds to directly hire the personnel needed, or to cover the effort of existing personnel if involved in the study.



Version History

July 2, 2024: Made additions to Human Resources section of PCT Training Design Checklist (changes made by E. McCamic).

December 10, 2018: Made nonsubstantive edits to text as part of annual content update (changes made by L. Wing).

Published August 25, 2017

Establishing Close Partnerships With Participating Healthcare System Leaders and Staff

Assessing Feasibility


Section 3


Conducting PCTs embedded in healthcare system settings requires directly involving diverse interest holders, including delivery system leaders, operational personnel, IT staff, statisticians, frontline care providers, and patients. Input from these interest holders is an important part of the trial’s design and planning, and establishing and maintaining good relationships and communication is essential for the trial’s long-term success.

Study teams should identify at least one co-investigator at each study site who will serve as an integral project member and a site champion over the course of the project. Having such a research partner is important in understanding who is affected by the embedded elements of your intervention. Champions not only can identify inefficiencies or constraints not evident at higher levels of the organization, but also have experience in fixing problems internally. Choosing the right partner is critical. They need to have enough experience to know how to effectively troubleshoot within their system and have the respect of system leaders and influence within their organization to implement solutions. Both the study team and the healthcare system may need to make adjustments or accommodations for competing priorities throughout the trial planning and execution phases.

The following table gives example scenarios of building partnerships from the NIH Collaboratory Trials.

Establishing Partnerships Examples
Scenario: EHR programming staff at the site prioritized clinical needs over research needs, which led to delays in data pulls for the trial.
Approach: Engaged with site programmers early in the project to help them develop and maintain an investment in the trial’s purpose and research methods. A helpful practice is for the study team to stay regularly informed of the programmers’ priorities; clinical programmers generally have different priorities than research programmers.

Scenario: The intervention was in the primary care setting, where schedules are busy and space is tight.
Approach: Teamed with clinicians to understand the workflow, scheduled study-related patient visits during slower clinic periods, and held patient visits in less conventional ways (after hours, groups meeting in lobby spaces).

Scenario: There was high leadership turnover at the medical director and provider levels due to preexisting pressures and challenges inherent in community clinics.
Approach: Met regularly with leadership teams and established an advisory board and other infrastructure to help engage leaders and gatekeepers.

Scenario: Leadership approval of the study was delayed because different departments within a single healthcare system were unable to initiate approval without the other departments going first; for example, interest holder A could not approve the study before interest holder B approved.
Approach: Facilitated in-depth discussions of the project with all the relevant interest holders by phone or web at the same time when face-to-face meetings were not possible. A prior history of collaboration among investigators and support from senior officers in the healthcare systems were instrumental in obtaining approval.

Scenario: Gaining widespread support from health system interest holders was essential before implementing the intervention.
Approach: Leveraged the health systems’ previous experience in conducting embedded clinical research. Health system and clinic leaders were enthusiastic about how the intervention, if successful, could fill a service gap.




Version History

August 27, 2020: Added a new scenario to the table and made nonsubstantive changes to text as part of annual content update (changes made by L. Wing).

December 10, 2018: Added a new scenario to the table (changes made by L. Wing).

Published August 25, 2017

Developing the Trial Documentation

Assessing Feasibility


Section 2


The checklist below suggests documents needed for fulfilling regulatory requirements as well as other comprehensive records of the trial’s conduct. Note that the required or recommended documents will vary depending on the particulars of the intervention and related contractual agreements, but it will be beneficial to be as thorough as possible at this feasibility phase. Also, teams should expect that these documents will undergo modifications, possibly multiple times, both before and during the trial.

Pilot testing informs the trial’s needs and how to plan for challenges with staffing, recruitment, data collection, intervention timing, and other elements. In large healthcare systems, conditions may shift or new initiatives may be disseminated within the system and thus require flexibility in, for example, committee membership, modes of communication, staffing, or training. Such ongoing considerations are important for study teams involved in implementing embedded PCTs in real-world settings.

Documentation Checklist

Download as Word or PDF

Document | Completed
Study-related
Protocol
Staffing plan, including multisite organization chart
Recruitment plan
Statistical analysis plan
Budget
Contractual documents (e.g., memorandum of understanding [MOU], reliance agreement)
Electronic health record use plan and IT-facilitated updates as needed  
Study plan and timeline  
Communication plan
Committee membership and meeting plan, including advisory and steering committees  
Manual of procedures
Data coordinating activities (e.g., data dictionary, data quality assessment, data harmonization across sites)
Patient recruitment and intervention materials
Clinical staff training and intervention materials
Interviewer/research staff training
Vendor contracts
Specimen management plan
Site initiation plan
Dissemination and sustainability plan
Regulatory
Data sharing plan including data use agreements between parties
IRB review and approval
Registration in ClinicalTrials.gov
Informing participants; consent process and documentation
Oversight
Data and safety monitoring plan/committee
Data management plan
Quality management plan



Version History

December 10, 2018: Made minor additions to the Documentation Checklist (changes made by L. Wing).

Published August 25, 2017

Introduction

Assessing Feasibility


Section 1

Assessing the feasibility of a randomized embedded PCT (ePCT) is a crucial part of the planning phase, serving as a bridge from designing to conducting the trial—the point when investigators activate sites, randomize participants, and begin data collection. Because PCTs are embedded within healthcare delivery systems and typically use data extracted from electronic health records (EHRs), feasibility assessment may differ from what is done for explanatory clinical trials. Potential differences include the need to establish close partnerships with healthcare system leadership, clinical staff, patient partners, IT personnel, and other interest holders; develop and validate intervention-specific EHR tools; and incorporate the intervention into the clinical workflow as seamlessly as possible to reduce the burden on care providers.

One component of feasibility is assessing the logistics of embedding the trial within the healthcare system. Consider the resources that it will take to implement the intervention and how the study may modify the system’s current workflow. Another component is pilot testing the key aspects of the study (such as the randomization scheme, identification of study participants or sites, intervention specifics, or data collection) to determine if the procedures are well coordinated and able to generate results. The study team should also evaluate the intervention’s flexibility in both delivery and adherence and ensure that the outcomes will be relevant to patients, clinicians, and other decision makers. Pilot testing will be particularly critical for complex ePCT interventions in order to reduce uncertainties during the implementation phase. The following sections describe feasibility considerations.

Watch the video module: Pilot and Feasibility Testing: The LIRE Example


Version History

January 22, 2021: Added embedded video (change made by G. Uhlenbrauck).

August 27, 2020: Added a resource link for the RAPT tool (change made by L. Wing).

December 10, 2018: Made minor nonsubstantive text corrections (changes made by L. Wing).

Published August 25, 2017

Introduction – Data and Safety Monitoring ARCHIVED

ARCHIVE Data and safety monitoring


Section 1

Introduction – Data and Safety Monitoring ARCHIVED

There is an ethical obligation to monitor for changes to the risk-benefit balance and data integrity during the course of a clinical trial. The purpose is threefold: to protect the welfare of participants in the trial, to protect those patients with the same clinical condition outside the trial, and to ensure that the trial results will be informative. Data monitoring committees (DMCs), sponsors, investigators, and other stakeholders are likely to be familiar with practices for monitoring traditional trials, but some special considerations may apply to pragmatic trials that are conducted in the setting of routine healthcare delivery. For example, data quality and timeliness of reporting can be a concern with trial data collected using electronic health records (EHRs). In addition, it may be difficult to collect follow-up data in ways that would deviate from standard clinical workflows.

In this chapter, we discuss issues related to data monitoring that may pose particular challenges in the context of embedded pragmatic clinical trials (ePCTs). These issues are important to consider before study initiation to ensure that an appropriate data monitoring plan is in place—one that balances the pragmatic nature of a trial with the need to maintain trial safety, validity, and integrity. To illustrate these concepts, we will discuss case studies involving planning for data monitoring from the NIH Collaboratory Trials.


Version History

July 3, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

December 13, 2018: Updated text as part of annual content update (changes made by L. Wing).

Published August 25, 2017

Introduction

Using Electronic Health Record Data in Pragmatic Clinical Trials


Section 1

Introduction

Using electronic health record (EHR) data for research is fundamentally different from collecting research data prospectively, as is traditional for controlled clinical trials. Several features of EHR systems create these differences, most notably the lack of investigator control over data collection and recording processes in healthcare facilities. Other factors include the lack of standard definitions for identifying patient cohorts and study-specific outcomes, the challenges associated with completeness of longitudinal data, and potential errors in linkage of records across systems (Zozus et al. 2015). All of these factors challenge investigators to ensure, and to demonstrate, that the data are of adequate quality to support research conclusions. While many of the issues addressed in this chapter apply to a broad range of study designs that might use data from the EHR, this chapter describes the use cases and associated challenges for using EHR data in pragmatic clinical trials, particularly those that include randomization. Specifically, we will discuss:

  • Prerequisites for conducting pragmatic research using EHR systems
  • Developing and refining the research question and defining the data that are essential and necessary to answer that question
  • Data sources for explanatory trials vs PCTs
  • The role of data as a partial representation of (or surrogate for) clinical phenomena under investigation
  • Considerations for the use of EHR data, including understanding bias and provenance, completeness and other dimensions of data quality, and methods for linking between multiple data sources

Challenges and Prerequisites for Using EHR Systems

In an NIH Pragmatic Trials Collaboratory manuscript, study teams from 20 NIH Collaboratory Trials responded to a survey about the challenges they encountered when using EHR systems for pragmatic clinical research (Richesson et al. 2021). The goal of the study was to elucidate challenges and develop solutions, or prerequisites for pragmatic research, to enable healthcare system leaders, policy makers, and EHR designers to improve the national capacity for generating real-world evidence. The table summarizes 6 broad challenges and the solutions identified by the study's authors. The solutions for each broad challenge, if implemented as part of health system and research infrastructure, can enable the rapid conduct of future pragmatic trials and hence can be conceptualized as prerequisites for successful EHR-based pragmatic research.

Challenge | Prerequisite
Inadequate collection of patient-centered data | Integrate collection of patient-centered data into EHR systems
Lack of structured data collection | Facilitate structured research data collection by leveraging standard EHR functions, usable interfaces, and standard workflows
Lack of standardization | Support creation of high-quality research data by using standards
Lack of resources to support customization of EHRs | Ensure adequate IT staff to support embedded research
Difficulties aggregating data across sites | Create aggregate, multi-data-type resources for multisite trials
Inefficiencies accessing EHR data | Create reusable and automated queries

This study highlights the need to tailor the use of EHR systems to enable the collection of patient-centered outcomes and the extraction of high-quality, standardized data. Although EHR data systems are designed to support clinical care and billing, high-quality data derived from these systems can also help improve population health by generating reliable evidence and advancing continuous learning within and across healthcare systems.

For further descriptions of the 6 challenges and prerequisites, read Enhancing the use of EHR systems for pragmatic embedded research: lessons from the NIH Health Care Systems Research Collaboratory.
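The last prerequisite in the table, reusable and automated queries, can be illustrated with a short sketch. The table name, column names, and diagnosis codes below are hypothetical, and an in-memory SQLite database stands in for a real EHR data warehouse; the point is that a single parameterized query can be reused across cohorts and date windows rather than rewritten for each request.

```python
import sqlite3

def count_cohort(conn, icd10_prefix, start_date, end_date):
    """Reusable, parameterized cohort query: count distinct patients with a
    diagnosis code matching the given ICD-10 prefix within a date window.
    The schema here is illustrative, not a real EHR data model."""
    sql = """
        SELECT COUNT(DISTINCT patient_id)
        FROM diagnoses
        WHERE icd10_code LIKE ? AND diagnosis_date BETWEEN ? AND ?
    """
    (n,) = conn.execute(sql, (icd10_prefix + "%", start_date, end_date)).fetchone()
    return n

# Demonstration with an in-memory database standing in for an EHR data warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE diagnoses (patient_id TEXT, icd10_code TEXT, diagnosis_date TEXT)")
conn.executemany(
    "INSERT INTO diagnoses VALUES (?, ?, ?)",
    [
        ("p1", "F11.20", "2023-02-01"),  # opioid use disorder, in window
        ("p1", "F11.20", "2023-03-15"),  # same patient, counted once
        ("p2", "F11.21", "2023-06-30"),  # in window
        ("p3", "F11.20", "2021-01-01"),  # outside window
        ("p4", "E11.9",  "2023-04-01"),  # different condition
    ],
)
print(count_cohort(conn, "F11", "2023-01-01", "2023-12-31"))  # prints 2
```

Against a production data warehouse, the same function would run through the site's database connector, with the query text maintained and versioned centrally so that every site executes an identical cohort definition.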

Data Sources for Explanatory Trials vs PCTs

There is a marked contrast between using data collected within an EHR system for research and using data collected outside of an EHR explicitly for a trial. Traditionally in clinical research, a study protocol specifies the data to be collected, and they are collected through a separate, stand-alone system. The circumstances of data collection for traditional trials, including procedures for taking samples, making observations, and recording data (e.g., patient positioning, timing, and anatomical location), are clearly defined in the protocol, and the data are collected in accordance with those specifications. Further, in traditional research, the protocol defines the timing of data relative to trial milestones or activities, for example, "the second assessment occurs 14 days post baseline." In designing traditional (or explanatory) research studies, a top-down approach is usually taken, starting with the research question and working down to the required data.

In contrast, the use of existing data streams, a defining feature of pragmatic clinical trials, presents a number of issues and requires a different approach than traditional explanatory trials do. Data captured in EHRs in routine-care settings, or in insurance claims, have a different context from prospectively collected research data. The context of care and data collection is often unspecified, and it is certainly not defined around a research question or protocol. Consequently, the structure and representation of clinical data are imposed at the facility according to its standards for clinical documentation and business needs rather than by the needs of the research study. This structure, along with local context, record linkage considerations, the use of diagnosis or other structured codes, and similar factors, brings substantial and unique challenges for using data from EHR systems in research.
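Record linkage, one of the considerations above, can also be sketched briefly. The names, fields, and records below are invented for illustration; real linkage pipelines typically add salted hashes, richer normalization, and probabilistic matching to handle typos and missing identifiers. This minimal deterministic approach normalizes identifiers, hashes them so raw identifiers never leave the source system, and joins a claims file to EHR records on the hashed key.

```python
import hashlib

def link_key(first, last, dob):
    """Deterministic linkage key: normalize identifiers, then hash them so the
    raw identifiers themselves are never exchanged between systems."""
    norm = f"{first.strip().lower()}|{last.strip().lower()}|{dob}"
    return hashlib.sha256(norm.encode()).hexdigest()

# Invented example records from two sources with inconsistent formatting.
ehr_records = [
    {"first": "Ana", "last": "Silva", "dob": "1980-05-02", "mrn": "E100"},
    {"first": "Ben", "last": "Okafor", "dob": "1975-11-30", "mrn": "E101"},
]
claims_records = [
    {"first": "ana ", "last": "SILVA", "dob": "1980-05-02", "claim_id": "C9"},
    {"first": "Cara", "last": "Lee", "dob": "1990-01-15", "claim_id": "C10"},
]

# Index EHR records by linkage key, then look up each claim.
ehr_index = {link_key(r["first"], r["last"], r["dob"]): r["mrn"] for r in ehr_records}
linked = {
    r["claim_id"]: ehr_index.get(link_key(r["first"], r["last"], r["dob"]))
    for r in claims_records
}
print(linked)  # {'C9': 'E100', 'C10': None}
```

The strict equality join is deliberate: any normalization mismatch yields a non-match (None) rather than a false link, which is usually the safer failure mode for research data.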

REFERENCES

Richesson RL, Marsolo KS, Douthit BJ, et al. 2021. Enhancing the use of EHR systems for pragmatic embedded research: lessons from the NIH Health Care Systems Research Collaboratory. Journal of the American Medical Informatics Association. 28:2626–2640. doi:10.1093/jamia/ocab202.

Zozus MN, Richesson R, Hammond WE, Simon GE. 2015. Acquiring and Using Electronic Health Record Data. https://dcricollab.dcri.duke.edu/sites/NIHKR/KR/Acquiring%20and%20Using%20Electronic%20Health%20Record%20Data.pdf. Accessed July 14, 2025.


Version History

October 7, 2025: Updated text as part of annual review (changes made by K. Staman).

July 14, 2025: Updated references and resources (changes made by G. Uhlenbrauck).

August 26, 2022: Updated text as part of annual update (changes made by K. Staman).

July 3, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

November 30, 2018: Updated text as part of annual update (changes made by K. Staman).

Published August 25, 2017

Introduction – ARCHIVED

Consent, Disclosure, and Non-Disclosure


Section 1

Introduction – ARCHIVED

Version 1.0 (removed 10/18/2022). Go to the latest version.

As with any type of clinical research, it is important to protect the rights, interests, and welfare of human subjects. This obligation is based on the ethical principle of respect for persons (or autonomy), which generally requires that a competent person has the right to decide what is or is not done to them. In research, a primary mechanism for achieving this is the informed consent process.

Who might need to be protected/considered?

The issue of identifying a research subject is fairly straightforward in conventional research, and the overarching regulations governing research were designed with this type of research in mind.

The Federal Policy for the Protection of Human Subjects, also known as the Common Rule, is a set of rules for the protection of human subjects (45 CFR 46 subpart A). The Common Rule defines a human subject as "a living individual about whom an investigator (whether professional or student) conducting research obtains

(1) Data through intervention or interaction with the individual, or
(2) Identifiable private information" (§46.102).

With pragmatic clinical trials (PCTs), identifying those who need to be considered, and the appropriate methods for respecting their autonomy, becomes more complicated. Smalley et al. (2015) define 3 categories of research participants in PCTs: direct participants, indirect participants, and collateral participants. Recognizing these different individuals and the ways they may be affected by pragmatic research can help ensure that their rights and welfare are protected. Because PCTs are conducted in real-world settings, they may affect individuals simply through routine exposure to an environment (eg, a hospital) in which a PCT is being conducted. For example, when PCTs are cluster randomized (ie, the unit of randomization is the facility, provider, community, etc), individuals may be indirectly exposed to an intervention: they may not be the direct target of the intervention (direct participant), but they may be exposed to it nonetheless (indirect participant). Finally, there are collateral participants: patients, caregivers, and patient advocacy groups who may be affected by the occurrence or findings of the trial. Although this group is not considered "research subjects" under US federal regulations, consideration of this population and effective communication with them is important (Smalley et al. 2015).

What are the risks to participants?

The idea behind informed consent is that people should be afforded the opportunity to weigh the relative risks, possible benefits, and potential burdens of a research study before deciding whether to participate. The risks involved in some research are largely informational: the risk of harm is not physical at all but arises from inappropriate disclosure of private information. For other research, the risks may be deemed "minimal" by an IRB, especially if the treatments being compared are each considered standard of care and the research interventions do not pose substantial risks or burdens.

With PCTs, there may be several approaches to respecting these autonomy interests. In this chapter, we review three broad approaches: informed consent, disclosure and authorization, and nondisclosure. We also examine data on people's preferences regarding each of these approaches.

Resources

Special Issue of Clinical Trials
This page provides background and links to a series of 12 articles on the ethics and regulatory challenges in pragmatic clinical trials. Each article in the special issue of Clinical Trials describes an issue in detail (eg, privacy, identifying research participants) and, where possible, attempts to provide guidance for future PCTs.

Legal and Ethical Architecture for Patient-Centered Outcomes Research (PCOR) Data (“Architecture”)
This document provides a collection of tools and resources aimed at helping a broad audience of stakeholders understand the ethical and regulatory requirements related to collecting, using, sharing, and disclosing PCOR data.

Common Rule
The Federal Policy for the Protection of Human Subjects

REFERENCES


Smalley JB, Merritt MW, Al-Khatib SM, McCall D, Staman KL, Stepnowsky C. 2015. Ethical responsibilities toward indirect and collateral participants in pragmatic clinical trials. Clin Trials. 12:476-484. doi:10.1177/1740774515597698. PMID: 26374687.


Version History

July 3, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

Published August 25, 2017