July 16, 2019: ACP PEACE Trial Moves From Planning to Implementation Phase: An Interview With Dr. Angelo Volandes

Dr. Angelo Volandes

The National Institute on Aging (NIA) recently approved the Advance Care Planning: Promoting Effective and Aligned Communication in the Elderly (ACP PEACE) trial, an NIH Collaboratory Demonstration Project, to move from the planning phase to the implementation phase. The goal of ACP PEACE is to evaluate a comprehensive advance care planning program that combines clinician communication skills training and patient video decision aids.

We spoke with Dr. Angelo Volandes, co–principal investigator of ACP PEACE with Dr. James Tulsky, at the NIH Collaboratory Steering Committee meeting in May about what the study team has learned during the planning phase of the trial.

Were there surprises during the planning phase of the study?

There were lots of surprises. The biggest surprise was that most clinicians don’t use the structured variable in the electronic health record (EHR) that we were going to use to extract our primary outcome. The workaround, which I think is actually better, is to use natural language processing (NLP) to abstract our primary outcome from the free text of the clinical note in the EHR.
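The NLP workaround can be sketched, in highly simplified form, as a rule-based pass over note text. The phrase list and function below are hypothetical illustrations only; a study like ACP PEACE would rely on a validated clinical NLP pipeline rather than simple keyword matching.

```python
import re

# Hypothetical illustration: flag clinical notes whose free text appears to
# document an advance care planning (ACP) conversation. The phrase list is
# invented for this sketch, not taken from the study.
ACP_PATTERNS = [
    r"\badvance care planning\b",
    r"\bgoals[- ]of[- ]care\b",
    r"\bhealth ?care proxy\b",
    r"\bcode status\b",
]

def note_documents_acp(note_text: str) -> bool:
    """Return True if the free-text note matches any ACP phrase."""
    text = note_text.lower()
    return any(re.search(pattern, text) for pattern in ACP_PATTERNS)
```

For example, `note_documents_acp("Discussed goals of care with patient and family.")` returns `True`, while a note about an unrelated complaint returns `False`. A real pipeline would also need to handle negation, templated text, and misspellings, which is part of why the IT costs mentioned later in the interview grow.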

The other big surprise was that oncologists really enjoyed the intervention. They have been open to the skills training and, if anything, they’ve asked for more.

What is an example of a challenge you were able to overcome with the help of the NIH Collaboratory Core Working Groups?

One challenge we encountered is related to an issue discussed in a paper by NIH Collaboratory investigators Dr. Kevin Weinfurt and Dr. Jeremy Sugarman and colleagues. It has to do with the idea of “broadcast notification.” One of our 3 participating healthcare systems asks patients if they will allow their deidentified medical record data to be used for research purposes. Every patient in that healthcare system has the option to opt out of having their deidentified data used for research purposes. Our other 2 participating healthcare systems don’t do that as a routine matter. So we needed a different approach.

Dr. James Tulsky

The idea of broadcast notification—which is new and was developed in the NIH Collaboratory—is to display posters or other notices in healthcare settings that let patients know they can opt out if they have a concern about their deidentified data being shared for research purposes. Our local institutional review board (IRB) was unfamiliar with this approach. Having the Ethics and Regulatory Core help us understand the approach and educate our IRB was incredibly helpful. It was especially helpful to be able to share a published, peer-reviewed paper showing that this was an issue the NIH Collaboratory had studied.

(Editor’s note: Read the article by Weinfurt et al, “Comparison of Approaches for Notification and Authorization in Pragmatic Clinical Research Evaluating Commonly Used Medical Practices,” in the November 2017 issue of Medical Care.)

What other key challenges have you faced?

There are always competing priorities in real-world oncology clinics. For example, there are quality improvement projects all over the place. When you’re the clinician, how do you devote the appropriate attention and time to this particular project? We feel our project is at the crux of patient-centered care, about understanding the goals, values, and beliefs of patients when it comes to serious illness care. But there are competing priorities. There can be a tension between the time you need to get the project done, for the intervention to truly come to fruition, versus what a clinic might be willing to do.

What advice do you have for investigators conducting their first embedded pragmatic clinical trial (ePCT)?

It’s really important to get the appropriate buy-in from people in high enough positions of authority so that the project happens. It is not enough to get the chief research officer of a healthcare system to say the project is a great idea. You need the chief marketing officer, the chief executive officer, and the finance people to sign off on it. When you’re in the pragmatic research world, it’s no longer just research in a controlled environment. It affects things you didn’t think about—like patient flow, revenue—and everything has to be accounted for. Make sure you get appropriate buy-in from enough stakeholders to know that you’re going to get the project done.

Also, don’t underestimate the costs of information technology (IT). For example, we need a lot more resources for our IT infrastructure now that we have switched from using a structured variable to using NLP to obtain our primary outcome. Make sure you have thought through IT needs, especially in pragmatic trials, where so much is abstracted from the EHR. Think carefully, early on, about how much time you will need from the IT group.

Anything else you want to say about ePCTs or the NIH Collaboratory?

It’s critically important to participate in the regular meetings of the Core Working Groups, to take advantage of them to help you address challenges. For example, when we encountered the problem with obtaining the primary outcome, we presented it to the EHR Core. When we had the challenge with notification and the IRB, we presented it during a meeting of the Ethics and Regulatory Core. The Core Working Groups are most useful when you openly share the challenges you are facing. That’s the way to get help from the Cores. This is my second pragmatic trial, and I’m not perfect. I put it out there because I want help from the experts.

ACP PEACE is supported within the NIH Collaboratory by a cooperative agreement from the NIA and receives logistical and technical support from the NIH Collaboratory Coordinating Center. Read more about ACP PEACE in the Living Textbook, and learn more about the NIH Collaboratory Demonstration Projects.

June 28, 2019: Moving Beyond Return of Research Results to Return of Value (Consuelo Wilkins, MD, MSCI)


Consuelo H. Wilkins, MD, MSCI
Vice President for Health Equity, Vanderbilt University Medical Center
Executive Director, Meharry-Vanderbilt Alliance


Moving Beyond Return of Research Results to Return of Value


Health outcomes; Research results; Patient preferences; Value of information

Key Points

  • Returning value to research participants means sharing results with added context, prioritizing them according to each participant’s preferences, including specific suggestions for relevant actions, and incorporating participant recommendations and preferences.
  • Data captured for research purposes, including EHR data, vital signs, and genetic data, can be repurposed and reoriented for study participants.
  • Participants are more likely to trust research if results are returned—and they are more likely to participate again.

Discussion Themes

We need to return study results that are informed by participants, and we need to design approaches for accessing and understanding results that participants will want to use.

We should think carefully about risk mitigation when returning research results for which there is a clear next step or action for the participant.

Read more about understanding what information is valued by research participants in a recent article by Dr. Wilkins and colleagues in Health Affairs.


#pctGR, @Collaboratory1, @drchwilkins, @vumchealth

July 2, 2019: New Living Textbook Section on Inpatient Endpoints in Pragmatic Clinical Trials

A new section in the Living Textbook describes the considerations for using “real-world” data for inpatient-based event ascertainment. There are many sources for acquiring this information, and they have different time lags in their availability and varying degrees of error and bias. To use inpatient endpoints in an embedded pragmatic clinical trial, these factors must be understood during the design, conduct, and analysis phases.

“The pragmatic trial community needs to collectively determine which endpoints are relevant for pragmatic trials, how they can be measured and validated, and how the accuracy of these measurement methods may impact hypothesis testing sample size estimates.” —Eisenstein et al 2019

Topics in the chapter include:

  • Pragmatic trial inpatient endpoints
  • Inpatient event data sources
  • Patient-reported data
  • Secondary data sources: EHR
  • Secondary data sources: claims
  • Data source accuracy


June 20, 2019: EMBED Investigators Discuss Progress and Transition to Implementation Phase

At the May 2019 meeting of the NIH Collaboratory Steering Committee, we talked with Drs. Ted Melnick and Gail D’Onofrio of EMBED, an NIH Collaboratory Demonstration Project, to hear about progress and challenges during the UG3 planning phase. The goal of EMBED is to test whether implementation of a user-centered clinical decision support system increases adoption of initiation of buprenorphine/naloxone into the routine emergency care of patients with opioid use disorder. In the UG3 phase, the study team put in place the infrastructure of a pragmatic, multicenter, parallel, group-randomized health IT intervention. EMBED recently transitioned to the UH3 implementation phase and plans to launch the intervention at 20 sites across 5 healthcare systems in August 2019.

“With EMBED, we’re trying to take evidence-based research and implement it to improve practice. EMBED is both a research and patient care project.”

Were there any surprises during the study’s planning phase?

The first surprise came at last year’s Steering Committee meeting, when we met with the Biostatistics and Study Design Core. They encouraged us to change our original study design from stepped-wedge to group-randomized, which we did. We think this advice led to a stronger study. The main reason for this is the group-randomized design’s ability to better account for temporal changes. Since our intervention is being conducted in the middle of an opioid crisis, there are potentially other concurrent interventions that could make it difficult to determine the effect of our intervention. The group-randomized design should give us better insight into whether our intervention is driving behavior change in treating patients with opioid use disorder.

What is an example of a challenge that you were able to overcome with the help of a Core Working Group?

In addition to design advice from the Biostatistics Core, we received expert guidance from the Ethics and Regulatory Core, who helped us prepare for the central IRB process. The Core’s input was essential to how we developed our protocol’s waiver of informed consent, data handling, and protection of patient privacy. We were able to demonstrate to the IRB that our approach was logical and informed. We think this helped the IRB “get it” and allowed us to more efficiently address patient privacy issues in a vulnerable population across multiple healthcare systems.

What other key challenges have you faced?

One challenge was on the IT side with electronic health record (EHR) integration, which required more customization than we initially planned. How we work with EHR vendors is evolving, and we’ve found good partners so that we can integrate across different systems. This has strengthened our intervention so that it is perceived as more universal than one designed only for a specific EHR system.

Another challenge is the general under-resourcing of healthcare delivery systems for pragmatic research. We found that, regardless of budget, getting approval from system leadership for an IT change is often not enough—what is needed is figuring out who is going to make the change, how much time is involved, and whether the team has the bandwidth to complete the task. Do not underestimate the degree of difficulty a change poses to a health system that is still struggling to get the clinical side of things right.

The way a study is framed to leadership is important—understand what’s motivating them to participate and move a project forward. With EMBED, we’re trying to take evidence-based research and implement it to improve practice. EMBED is both a research and patient care project. We need to impress upon leadership that we can improve patient outcomes and we’ll pay for it, but we need their help and support in navigating the process through the institution.

What words of advice do you have for investigators conducting their first embedded PCT?

  • Study teams should think about potential barriers from the beginning and find solutions quickly.
  • Make sure that health system leadership discusses your project with those on the ground.
  • Enlist the experts your study needs for each site. In our case, we needed both an IT expert for the operational side and a clinical expert, or we couldn’t have moved the project forward.
  • Recognize that there are trade-offs in pragmatic design and remember that you’re working with health systems in which your intervention will need to be replicated.
  • Make your intervention sustainable and easily usable by the clinician, without the need for research or other additional staff.

EMBED is supported within the NIH Collaboratory by a cooperative agreement from the National Institute on Drug Abuse and receives logistical and technical support from the NIH Collaboratory Coordinating Center. Read more about EMBED in the Living Textbook, and learn more about the NIH Collaboratory Demonstration Projects.

June 7, 2019: Meeting Materials from the 2019 NIH Collaboratory Steering Committee Meeting

The Collaboratory has made available all the presentations from its recent Steering Committee meeting, held in Bethesda on May 1-2, 2019.

Highlights of Day 1 included updates on the progress and sustainability of the NIH Collaboratory, perspectives on the landscape of embedded PCTs (ePCTs) and the need for real-world evidence, challenges and lessons learned from the UH3 Demonstration Projects, updates on progress and transition plans from the UG3 Demonstration Projects, and discussions on data sharing policy and planning. Day 2 featured an intensive workshop hosted by the NIH with the goal of starting discussions on statistical issues with ePCTs.

View or download the meeting materials on the website.

June 3, 2019: SPOT Illustrates Use of Real-World Health System Data in Designing Embedded Pragmatic Clinical Trials

An important advantage of embedding pragmatic clinical trials within health care systems is the availability of detailed clinical data on potential participants during trial design. These data can be used to determine eligibility criteria, predict changes in participant characteristics over time, and inform sample size calculations and other design features.

Investigators from the Suicide Prevention Outreach Trial (SPOT), an NIH Collaboratory Demonstration Project, recently shared their experiences with using electronic health record data on patients in the participating health systems to inform trial design. The article was published in Clinical Trials.

SPOT was designed to compare the effect of 2 outreach interventions and usual care on the rate of fatal and nonfatal suicide attempts in 3 large health care delivery systems. The investigators used historical data from the electronic health records of the participating health systems to select eligibility requirements, estimate the distribution of patient characteristics during the trial, and calculate statistical power and sample size. Their experiences offer lessons for others who are designing pragmatic trials embedded in health systems with automated data sources.
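The statistical power and sample size step described above can be sketched with a standard two-proportion calculation, in which historical EHR data supply the expected event rates. The rates and design parameters below are illustrative assumptions for this sketch, not values from SPOT.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm sample size for a two-sided two-proportion z-test
    (normal approximation), comparing event rates p1 and p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative only: power to detect a drop in event rate from 5% to 4%
print(n_per_arm(0.05, 0.04))
```

Running the example prints the required per-arm sample size; smaller expected differences between arms drive the requirement up sharply, which is why realistic, EHR-derived baseline rates matter so much at the design stage.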

SPOT was supported within the NIH Collaboratory by a cooperative agreement from the National Institute of Mental Health and received logistical and technical support from the NIH Collaboratory Coordinating Center. Download a study snapshot of SPOT, and learn more about the NIH Collaboratory Demonstration Projects.

May 31, 2019: Adapting Clinical Trial Design to Meet the Needs of Learning Health Systems (Harriette Van Spall, MD, MPH)


Harriette G.C. Van Spall, MD, MPH, FRCPC
Associate Professor of Medicine
Department of Medicine, Division of Cardiology
Department of Health Research Methods, Evidence, and Impact
McMaster University
Population Health Research Institute


Adapting Clinical Trial Design to Meet the Needs of Learning Health Systems


Learning health system; Pragmatic clinical trial; Patient-Centered Care Transitions in Heart Failure (PACT-HF); Heart failure; Stepped-wedge cluster trial

Key Points

  • Characteristics of a learning health system include:
    • Possessing a culture of knowledge and quality improvement
    • Encouraging research innovation by embedding research into clinical practice and generating knowledge at the point of care
    • Harnessing data from electronic health records and claims/administrative databases
    • Fostering trust between research and clinical teams
    • Engaging patients, clinicians, and key stakeholders
  • The Patient-Centered Care Transitions in Heart Failure (PACT-HF) trial evaluated the effectiveness of a group of transitional care services in patients hospitalized for HF within a publicly funded healthcare system.
  • Challenges of a learning health system include integrating care, intervention, and communications across silos; streamlining workflow; preventing “contamination” of usual care; and the limited interoperability of EHRs and slow updates to claims/administrative datasets.

Discussion Themes

Efficacy in explanatory randomized clinical trials (RCTs) does not equate to effectiveness in real-world settings.

Decisions about implementation of an intervention are not made “live”; you must wait until the study has ended, all the data are available for analysis, and analysis is complete before you can inform decision-maker partners about the risks and benefits of the intervention.

Read more about the PACT-HF study and results in JAMA Network (Van Spall et al. 2019).


#pctGR, @Collaboratory1

May 10, 2019: Treating Data as an Asset: Data Entrepreneurship in the Service of Patients (Eric Perakslis, PhD)


Eric D. Perakslis, MS, PhD
Rubenstein Fellow, Duke University
Lecturer, Department of Biomedical Informatics
Harvard Medical School


Treating Data as an Asset: Data Entrepreneurship in the Service of Patients


Digital health; Health data; General Data Protection Regulation (GDPR); Data sharing

Key Points

  • The only 100% common element of digital transformation across all industries is data.
  • With data and digital transformation, patients are changing: They are active, connected, informed, and savvy.
  • Security, compliance, and privacy are different things.

Discussion Themes

Is there any hope of data sharing policies helping to bridge the micro and macro silos of healthcare data?

As data starts to flow through institutions, it ends up in multiple places. Part of sharing data is protecting a single source of truth.

If something is relevant to the bedside, it’s worth doing.

Read Dr. Perakslis’s commentary in The Lancet (May 2019).


#healthdata, #pctGR, @Collaboratory1

April 22, 2019: TiME Trial Confirms Feasibility of Embedding Large Pragmatic Trials in Clinical Care

Laura Dember

The primary results of the Time to Reduce Mortality in End-Stage Renal Disease (TiME) trial, an NIH Collaboratory Demonstration Project, were published online this month in the Journal of the American Society of Nephrology. The study confirmed the feasibility of embedding a large pragmatic clinical trial in clinical care delivery.

Although maintenance hemodialysis has long been a staple of care for patients with end-stage renal disease, there are limited data from clinical trials to inform optimal approaches, including the optimal duration of hemodialysis sessions. The TiME trial investigators, in partnership with 2 large dialysis provider organizations, evaluated the effects of a longer hemodialysis session duration on mortality and hospitalization rate among more than 7000 patients receiving care in 266 dialysis facilities.

The TiME trial was discontinued early (median follow-up, 1.1 years) because there was an insufficient difference in mean hemodialysis session duration between the intervention group and the usual care group. The investigators observed no reduction in mortality or hospitalization rate with the intervention.

Despite ending early, the trial met important objectives for informing the implementation of large pragmatic clinical trials embedded in health care systems. In a large multicenter study with no onsite research personnel, the investigators quickly and efficiently enrolled a large number of participants using an opt-out consent approach. The study data were obtained entirely from the electronic health and administrative records of the partnering dialysis provider organizations and were generated from routine clinical care delivery.

“The TiME trial provides an important foundation for future pragmatic trials in dialysis as well as in other settings,” said Dr. Laura M. Dember of the University of Pennsylvania Perelman School of Medicine, the principal investigator of the TiME trial.

The TiME trial was supported within the NIH Collaboratory by a cooperative agreement from the National Institute of Diabetes and Digestive and Kidney Diseases and received logistical and technical support from the NIH Collaboratory Coordinating Center. Download a study snapshot about the TiME trial, and learn more about the NIH Collaboratory Demonstration Projects.

April 12, 2019: Development of Harmonized Outcome Measures for Use in Research and Clinical Practice (Richard Gliklich, MD, Michelle Leavy, MPH, Elise Berliner, PhD)


Richard Gliklich, MD
CEO, OM1, Inc.

Michelle B. Leavy, MPH
Head, Healthcare Research and Policy
OM1, Inc.

Elise Berliner, PhD
Director, Technology Assessment Program
Center for Evidence and Practice Improvement (CEPI)
Agency for Healthcare Research and Quality (AHRQ)


Development of Harmonized Outcome Measures for Use in Research and Clinical Practice


Health outcomes; Patient-centered outcomes; Agency for Healthcare Research and Quality; Patient registries; Clinical data; Patient-reported outcomes; Value-based care; Electronic health records; Learning health system; Conceptual framework

Key Points

  • The goal of the Outcome Measures Framework is to create a common conceptual model for classifying the range of outcomes that are relevant to patients and providers across most conditions.
  • Harmonization of outcome measures is essential to comparing and aggregating results between and among registries, clinical research, and quality reporting, and to facilitating performance and value-based measurement.
  • A minimum measure set is the minimum set of harmonized measures that can be captured consistently in research and clinical practice.
  • The framework was developed through a stakeholder-driven process that categorized outcomes as clinical responses, patient-reported outcomes, survival, resource utilization, and events of interest across a sample set of 5 clinical areas.

Discussion Themes

The benefits of developing a core set of measures include reduced clinician burden and improved patient care.

How is this work informing the HL7 work group that is defining standards for registries?

Next steps include implementation of the minimum measure sets in EHRs, registries, and other research efforts; demonstrating the value of a minimum measure set; and encouraging adoption of the measures.

Learn more about AHRQ’s Outcome Measures Framework.


#pctGR, @Collaboratory1, @AHRQNews