October 4, 2019: Ascertaining Death and Hospitalization Endpoints: The TRANSFORM-HF Experience (Eric Eisenstein, DBA, Kevin Anstrom, PhD)

Speakers

Eric L. Eisenstein, DBA
Associate Professor in Medicine
Duke University School of Medicine

Kevin J. Anstrom, PhD
Professor of Biostatistics and Bioinformatics
Director of Biostatistics, Duke Clinical Research Institute
Duke University School of Medicine

Topic

Ascertaining Death and Hospitalization Endpoints: The TRANSFORM-HF Experience

Keywords

Clinical endpoints; Ascertaining death; Hospitalization; TRANSFORM-HF; National Death Index

Key Points

  • When patient deaths occur outside the care setting, the cause of death may not be reliably documented. For researchers, the challenges of measuring deaths include the lack of a single national death data source and the incompleteness or poor accessibility of the sources that do exist.
  • The death identification and adjudication process differs between explanatory and pragmatic trials, with implications for how death endpoints are acquired and measured.
  • The TRANSFORM-HF pragmatic trial is comparing the effects of treatment strategies on long-term outcomes for hospitalized patients with heart failure. The primary study endpoint is all-cause mortality, which is ascertained and verified using a hybrid approach involving the clinical site and a call center, supplemented by searches of National Death Index data.

Discussion Themes

What are the tradeoffs in making endpoint ascertainment simpler?

If using a hybrid death data collection strategy, how are discrepancies adjudicated?

Using a call center to coordinate follow-up patient contact and data collection is a valid approach that ensures a single point of contact for patients, proxies, and care providers. This approach should also be supplemented with redundant data sources.
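A minimal sketch of what reconciling such redundant death data sources might look like is shown below. The record layouts, 7-day date tolerance, and decision labels are hypothetical illustrations, not the actual TRANSFORM-HF workflow; in practice, disagreements would be routed to a formal adjudication process rather than resolved automatically.

# Hypothetical sketch: reconcile deaths reported by the site/call center with
# National Death Index (NDI) search results and flag disagreements for adjudication.
# Field names, dates, and match criteria are illustrative only.
from datetime import date, timedelta

site_reports = {  # patient_id -> death date reported by site/call center (None = reported alive)
    "P001": date(2019, 3, 14),
    "P002": None,
}
ndi_matches = {  # patient_id -> death date returned by an NDI search
    "P001": date(2019, 3, 15),
    "P003": date(2019, 1, 2),
}

TOLERANCE = timedelta(days=7)  # allow small discrepancies in reported death dates

def reconcile(site_reports, ndi_matches):
    """Classify each patient as confirmed, source-only, discrepant, or alive."""
    decisions = {}
    for pid in sorted(set(site_reports) | set(ndi_matches)):
        site_dt, ndi_dt = site_reports.get(pid), ndi_matches.get(pid)
        if site_dt and ndi_dt:
            agree = abs(site_dt - ndi_dt) <= TOLERANCE
            decisions[pid] = "confirmed death" if agree else "adjudicate: conflicting dates"
        elif site_dt:
            decisions[pid] = "site-reported only: await NDI confirmation"
        elif ndi_dt:
            decisions[pid] = "NDI only: adjudicate (not reported by site/call center)"
        else:
            decisions[pid] = "no death recorded"
    return decisions

for pid, decision in reconcile(site_reports, ndi_matches).items():
    print(pid, decision)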

Read more in the Living Textbook about Using Death as an Endpoint and Inpatient Endpoints in Pragmatic Clinical Trials.

Tags

#pctGR, @Collaboratory1, @DCRINews

September 13, 2019: ADAPTABLE Recruitment and Follow-up: Health Plan Research Network Engagement (Kevin Haynes, PharmD, MSCE)

Speaker

Kevin Haynes, PharmD, MSCE
Principal Scientist
HealthCore, Inc.

Topic

ADAPTABLE Recruitment and Follow-up: Health Plan Research Network Engagement

Keywords

Real-world evidence; Real-world data; Study design; Claims data; ADAPTABLE; Patient recruitment; Pragmatic clinical trial; Electronic health record; Informed consent; Learning health system

Key Points

  • We need integrated EHR and claims data to close evidence gaps in observational pharmacoepidemiology studies and comparative effectiveness trials.
  • The health plan claims data environment can be leveraged to support real-world evidence studies.
  • An integrated health plan network database can be a resource for data about eligibility, lab test results, and pharmacy claims. In a pragmatic clinical trial, health plan data can provide a longitudinal electronic approach to endpoint ascertainment (a simplified sketch follows).
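As a rough illustration of what claims-based endpoint ascertainment can look like, the sketch below screens inpatient claims for endpoint diagnoses during each participant's follow-up window. The table layout, diagnosis codes, and window logic are hypothetical and are not drawn from the ADAPTABLE data pipeline.

# Hypothetical sketch: identify hospitalization endpoints for enrolled participants
# from health plan inpatient claims. Codes and layouts are illustrative only.
from datetime import date

claims = [  # one row per inpatient facility claim
    {"member_id": "M01", "admit": date(2019, 2, 3), "primary_dx": "I21.4"},  # myocardial infarction
    {"member_id": "M01", "admit": date(2019, 6, 9), "primary_dx": "J18.9"},  # pneumonia
    {"member_id": "M02", "admit": date(2019, 4, 1), "primary_dx": "I63.9"},  # ischemic stroke
]
enrollment = {  # member_id -> (randomization date, end of follow-up)
    "M01": (date(2019, 1, 15), date(2019, 12, 31)),
    "M02": (date(2019, 5, 1), date(2019, 12, 31)),
}
ENDPOINT_DX_PREFIXES = ("I21", "I63")  # illustrative ICD-10 prefixes for MI and stroke

def endpoint_hospitalizations(claims, enrollment):
    """Return claims within a participant's follow-up window that carry an endpoint diagnosis."""
    events = []
    for c in claims:
        window = enrollment.get(c["member_id"])
        if window is None:
            continue  # claim belongs to a member who is not enrolled in the trial
        start, end = window
        if start <= c["admit"] <= end and c["primary_dx"].startswith(ENDPOINT_DX_PREFIXES):
            events.append(c)
    return events

for e in endpoint_hospitalizations(claims, enrollment):
    print(e["member_id"], e["admit"], e["primary_dx"])

Because claims continue to accrue after each encounter is billed and adjudicated, a query like this would typically be re-run against the health plan's research database over time, which is what makes the approach longitudinal.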

Discussion Themes

Have you had a chance to do any cost analysis of this high-volume, low-touch method of recruitment?

In the ADAPTABLE enrollment portal, it seems the biggest chunk of time (reading about the study) came before the screening questions. Do you think swapping the order might have improved enrollment?

Read more about the ADAPTABLE pragmatic trial.

Tags

#pctGR, @Collaboratory1, @HealthCoreRWE

September 6, 2019: Transforming Medical Evidence Generation with Technology-Enabled Trials (Matthew T. Roe, MD, MHS)

Speaker

Matthew T. Roe, MD, MHS
Senior Investigator, Professor of Medicine
Duke Clinical Research Institute

Topic

Transforming Medical Evidence Generation with Technology-Enabled Trials

Keywords

Mobile clinical trials; Real-world evidence; Real-world data; Study design; Regulatory oversight; Digital health; Mobile health applications; Biosensors; Electronic health records

Key Points

  • Digital health applications and electronic health records provide tremendous opportunities for improving trial efficiencies, broadening patient participation, and reducing cost.
  • Novel approaches that can help reduce data collection burden for study sites include importing EHR data directly into the trial database, collecting patient-reported outcomes through web-based portals, and incorporating digital health data from wearables and biosensors.
  • To realize the potential of new technology, cross-sector partnerships are needed among research participants, researchers, the biopharmaceutical and device industries, professional medical associations, insurers, the FDA, clinicians, health IT, contract research organizations, and health systems.

Discussion Themes

How many potential patients might we lose if having a smartphone is an inclusion criterion for a clinical study?

How can we ensure that the clinical trial infrastructure is inclusive of minority populations, especially those in rural settings?

What is the role of physicians in reaching a large number of participants who are not near an academic research center?

Ultimately, in clinical trials, the data are what matter and what decisions are based on. We need to understand data quality and standards for the data to be accepted.

Read more about digital health at FDA’s Digital Health website.

Tags

#pctGR, @Collaboratory1, @MTRHeart

August 23, 2019: Oh Yes, We Have Tons of Patients Who Can Do This Study! (Vanita R. Aroda, MD)

Speaker

Vanita R. Aroda, MD
Director of Diabetes Clinical Research
Brigham & Women’s Hospital
Harvard Medical School

Topic

Oh Yes, We Have Tons of Patients Who Can Do This Study!

Keywords

Patient engagement; Patient recruitment and retention; Clinician engagement; Health care systems; Multicenter clinical trials; Electronic health record

Key Points

  • Research occurs beyond the silo. Effective large-scale multicenter clinical trial recruitment requires an accessible network of potential participants.
  • Engage colleagues and the healthcare system as part of the collaborative journey across the trial’s lifecycle.
  • It is highly recommended to do a role-playing exercise with the study team to prevent fumbles when engaging and recruiting study participants.
  • The science, the protocols, and the data are all important, but it is the essential human element that makes it all happen.

Discussion Themes

Participant retention is really a continuation of good recruitment and engagement.

Make sure your database query makes clinical sense and is the best fit to answer your study question. Don’t spend time on the wrong data.

What other recruitment opportunities or techniques can sites use after they exhaust their patient panel?

Read more about the scalability of an EHR-based approach to patient recruitment in a diabetes study by Dr. Aroda and colleagues in Clinical Trials (2019).

Tags

#pctGR, @Collaboratory1

August 2, 2019: AI and the Future of Psychiatry (Murali Doraiswamy, MBBS)

Speaker

Murali Doraiswamy, MBBS
Professor of Psychiatry and Behavioral Sciences
Duke School of Medicine

Topic

AI and the Future of Psychiatry

Keywords

Artificial intelligence; Machine learning; Psychiatry; Ethical adoption of technologies; Mental health; Wearables; Mobile health

Key Points

  • There is growing evidence from randomized controlled trials of the efficacy of using digital tools in mental health diagnosis and treatment.
  • Could artificial intelligence (AI) and machine learning technologies be used to:
    • Reduce the stigma associated with mental health treatment?
    • Predict the risk for future suicide?
    • Detect Alzheimer’s years before diagnosis?
  • Categories of AI applications include low-risk apps that measure but do not diagnose, and apps used in diagnosis or treatment that must meet the same high standards of evidence as medications.
  • Clinicians still struggle with how to integrate patient data from wearable devices. AI technology might help if it could be used to synthesize the data into a risk profile for an individual.

Discussion Themes

What are the roles of stress, exercise, and sleep in mental health, and can autonomic data from wearables help explain the variance in mental health symptoms?

To develop evidence thresholds for AI, we need larger scale public-private partnerships as well as pragmatic trials addressing key clinical questions.

Read more from Dr. Doraiswamy in How to Use Technology Ethically to Increase Access to Mental Healthcare.

Tags

#AI, #pctGR, @Collaboratory1

July 16, 2019: ACP PEACE Trial Moves From Planning to Implementation Phase: An Interview With Dr. Angelo Volandes

Dr. Angelo Volandes

The National Institute on Aging (NIA) recently approved the Advance Care Planning: Promoting Effective and Aligned Communication in the Elderly (ACP PEACE) trial, an NIH Collaboratory Demonstration Project, to move from the planning phase to the implementation phase. The goal of ACP PEACE is to evaluate a comprehensive advance care planning program that combines clinician communication skills training and patient video decision aids.

We spoke with Dr. Angelo Volandes, co–principal investigator of ACP PEACE with Dr. James Tulsky, at the NIH Collaboratory Steering Committee meeting in May about what the study team has learned during the planning phase of the trial.

Were there surprises during the planning phase of the study?

There were lots of surprises. The biggest surprise was that most clinicians don’t use the structured variable in the electronic health record (EHR) that we were going to use to extract our primary outcome. The workaround, which I think is actually better, is to use natural language processing (NLP) to abstract our primary outcome from the free text of the clinical note in the EHR.
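For a rough sense of what abstracting an outcome from free text can involve, here is a minimal rule-based sketch. The keywords, sample notes, and function below are hypothetical illustrations, not the ACP PEACE team's actual NLP system, which is presumably more sophisticated.

# Hypothetical sketch: flag clinical notes that appear to document an advance care
# planning (ACP) conversation using simple keyword patterns. Illustrative only.
import re

ACP_PATTERNS = [
    r"\badvance care planning\b",
    r"\bgoals of care\b",
    r"\bhealth ?care proxy\b",
    r"\bcode status\b",
]

def note_documents_acp(note_text):
    """Return True if any ACP-related pattern appears in the note text."""
    text = note_text.lower()
    return any(re.search(pattern, text) for pattern in ACP_PATTERNS)

notes = [
    "Discussed goals of care with patient and daughter; patient prefers comfort-focused care.",
    "Follow-up visit for chemotherapy cycle 3. No acute issues today.",
]
for note in notes:
    print(note_documents_acp(note), "-", note[:50])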

The other big surprise was that oncologists really enjoyed the intervention. They have been open to the skills training and, if anything, they’ve asked for more.

What is an example of a challenge you were able to overcome with the help of the NIH Collaboratory Core Working Groups?

One challenge we encountered is related to an issue discussed in a paper by NIH Collaboratory investigators Dr. Kevin Weinfurt and Dr. Jeremy Sugarman and colleagues. It has to do with the idea of “broadcast notification.” One of our 3 participating healthcare systems routinely asks patients whether they will allow their deidentified medical record data to be used for research, and every patient in that system has the option to opt out. Our other 2 participating healthcare systems don’t do that as a routine matter, so we needed a different approach.

Dr. James Tulsky

The idea of broadcast notification—which is new and was developed in the NIH Collaboratory—is to display posters or other notices in healthcare settings that let patients know they can opt out if they have a concern about their deidentified data being shared for research purposes. Our local institutional review board (IRB) was unfamiliar with this approach. Having the Ethics and Regulatory Core help us understand the approach and educate our IRB was incredibly helpful. It was especially helpful to be able to share a published, peer-reviewed paper showing that this was an issue the NIH Collaboratory had studied.

(Editor’s note: Read the article by Weinfurt et al, “Comparison of Approaches for Notification and Authorization in Pragmatic Clinical Research Evaluating Commonly Used Medical Practices,” in the November 2017 issue of Medical Care.)

What other key challenges have you faced?

There are always competing priorities in real-world oncology clinics. For example, there are quality improvement projects all over the place. When you’re the clinician, how do you devote the appropriate attention and time to this particular project? We feel our project is at the crux of patient-centered care: understanding the goals, values, and beliefs of patients when it comes to serious illness care. But there are competing priorities. There can be tension between the time you need to get the project done and for the intervention to truly come to fruition, versus what a clinic might be willing to do.

What advice do you have for investigators conducting their first embedded pragmatic clinical trial (ePCT)?

It’s really important to get the appropriate buy-in from people in high enough positions of authority so that the project happens. It is not enough to get the chief research officer of a healthcare system to say the project is a great idea. You need the chief marketing officer, the chief executive officer, and the finance people to sign off on it. When you’re in the pragmatic research world, it’s no longer just research in a controlled environment. It affects things you didn’t think about—like patient flow and revenue—and everything has to be accounted for. Make sure you get appropriate buy-in from enough stakeholders to know that you’re going to get the project done.

Also, don’t underestimate the costs of information technology (IT). For example, we need a lot more resources for our IT infrastructure now that we have switched from using a structured variable to using NLP to obtain our primary outcome. Make sure you have thought through IT needs, especially in pragmatic trials, where so much is abstracted from the EHR. Think carefully, early on, about how much time you will need from the IT group.

Anything else you want to say about ePCTs or the NIH Collaboratory?

It’s critically important to participate in the regular meetings of the Core Working Groups, to take advantage of them to help you address challenges. For example, when we encountered the problem with obtaining the primary outcome, we presented it to the EHR Core. When we had the challenge with notification and the IRB, we presented it during a meeting of the Ethics and Regulatory Core. The Core Working Groups are most useful when you openly share the challenges you are facing. That’s the way to get help from the Cores. This is my second pragmatic trial, and I’m not perfect. I put it out there because I want help from the experts.

ACP PEACE is supported within the NIH Collaboratory by a cooperative agreement from the NIA and receives logistical and technical support from the NIH Collaboratory Coordinating Center. Read more about ACP PEACE in the Living Textbook, and learn more about the NIH Collaboratory Demonstration Projects.

June 28, 2019: Moving Beyond Return of Research Results to Return of Value (Consuelo Wilkins, MD, MSCI)

Speaker

Consuelo H. Wilkins, MD, MSCI
Vice President for Health Equity, Vanderbilt University Medical Center
Executive Director, Meharry-Vanderbilt Alliance

Topic

Moving Beyond Return of Research Results to Return of Value

Keywords

Health outcomes; Research results; Patient preferences; Value of information

Key Points

  • In returning value to research participants, results are shared with added context, are prioritized by each participant, include specific suggestions for relevant actions, and incorporate participant recommendations and preferences.
  • Data captured for research purposes, including EHR data, vital signs, and genetic data, can be repurposed and reoriented for study participants.
  • Participants are more likely to trust research if results are returned—and they are more likely to participate again.

Discussion Themes

We need to return study results that are informed by participants, and we need to design approaches for accessing and understanding results that participants will want to use.

We should think carefully about risk mitigation when returning research results for which there is a clear next step or action for the participant.

Read more about understanding what information is valued by research participants in a recent article by Dr. Wilkins and colleagues in Health Affairs.

Tags

#pctGR, @Collaboratory1, @drchwilkins, @vumchealth

July 2, 2019: New Living Textbook Section on Inpatient Endpoints in Pragmatic Clinical Trials

A new section in the Living Textbook describes considerations for using “real-world” data for inpatient-based event ascertainment. There are many sources for acquiring this information, and they differ in the time lag to availability and in their degrees of error and bias. To use inpatient endpoints in pragmatic clinical trials, these factors must be understood during the design, conduct, and analysis phases of the trial.

“The pragmatic trial community needs to collectively determine which endpoints are relevant for pragmatic trials, how they can be measured and validated, and how the accuracy of these measurement methods may impact hypothesis testing sample size estimates.” —Eisenstein et al 2019
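As a back-of-the-envelope illustration of that last point, suppose a data source captures true events with sensitivity s and produces few false positives, so that true event proportions p_1 and p_2 are observed as s p_1 and s p_2. (The formula, the sensitivity parameter, and the rare-event approximation here are a generic sketch, not taken from Eisenstein et al.) The standard two-proportion sample size per arm is

n \approx \frac{(z_{1-\alpha/2} + z_{1-\beta})^{2}\,[\,p_1(1-p_1) + p_2(1-p_2)\,]}{(p_1 - p_2)^{2}}

and substituting the attenuated proportions, with p_i(1-p_i) \approx p_i for rare events, gives

n(s) \approx \frac{(z_{1-\alpha/2} + z_{1-\beta})^{2}\, s\,(p_1 + p_2)}{[\,s\,(p_1 - p_2)\,]^{2}} = \frac{n(1)}{s}.

Under these assumptions, a data source that captures only 80% of events would require roughly 25% more patients than complete capture to preserve statistical power.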

Topics in the chapter include:

  • Pragmatic trial inpatient endpoints
  • Inpatient event data sources
  • Patient-reported data
  • Secondary data sources: EHR
  • Secondary data sources: claims
  • Case studies: ICD-Pieces, TRANSFORM-HF, ADAPTABLE, and TRANSLATE-ACS
  • Data source accuracy


June 20, 2019: EMBED Investigators Discuss Progress and Transition to Implementation Phase

At the May 2019 meeting of the NIH Collaboratory Steering Committee, we talked with Drs. Ted Melnick and Gail D’Onofrio of EMBED, an NIH Collaboratory Demonstration Project, to hear about progress and challenges during the UG3 planning phase. The goal of EMBED is to test whether implementation of a user-centered clinical decision support system increases adoption of initiation of buprenorphine/naloxone into the routine emergency care of patients with opioid use disorder. In the UG3 phase, the study team put in place the infrastructure of a pragmatic, multicenter, parallel, group-randomized health IT intervention. EMBED recently transitioned to the UH3 implementation phase and plans to launch the intervention at 20 sites across 5 healthcare systems in August 2019.

“With EMBED, we’re trying to take evidence-based research and implement it to improve practice. EMBED is both a research and patient care project.”

Were there any surprises during the study’s planning phase?

The first surprise came at last year’s Steering Committee meeting, when we met with the Biostatistics and Study Design Core. They encouraged us to change our original study design from stepped-wedge to group-randomized, which we did. We think this advice led to a stronger study. The main reason for this is the group-randomized design’s ability to better account for temporal changes. Since our intervention is being conducted in the middle of an opioid crisis, there are potentially other concurrent interventions that could make it difficult to determine the effect of our intervention. The group-randomized design should give us better insight into whether our intervention is driving behavior change in treating patients with opioid use disorder.

What is an example of a challenge that you were able to overcome with the help of a Core Working Group?

In addition to design advice from the Biostatistics Core, we received expert guidance from the Ethics and Regulatory Core, who helped us prepare for the central IRB process. The Core’s input was essential to how we developed our protocol’s waiver of informed consent, data handling, and protection of patient privacy. We were able to demonstrate to the IRB that our approach was logical and informed. We think this helped the IRB “get it” and allowed us to more efficiently address patient privacy issues in a vulnerable population across multiple healthcare systems.

What other key challenges have you faced?

One challenge was on the IT side with electronic health record (EHR) integration, which required more customization than we initially planned. How we work with EHR vendors is evolving, and we’ve found good partners so that we can integrate across different systems. This has strengthened our intervention so that it is perceived as more universal than one designed only for a specific EHR system.

Another challenge is the general under-resourcing of healthcare delivery systems for pragmatic research. We found that, regardless of budget, getting approval from system leadership for an IT change is often not enough—what is needed is figuring out who is going to make the change, how much time is involved, and whether the team has the bandwidth to complete the task. You should not underestimate the degree of difficulty a change poses to a health system that is still struggling to get the clinical side of things right.

The way a study is framed to leadership is important—understand what’s motivating them to participate and move a project forward. With EMBED, we’re trying to take evidence-based research and implement it to improve practice. EMBED is both a research and patient care project. We need to impress upon leadership that we can improve patient outcomes and we’ll pay for it, but we need their help and support in navigating the process through the institution.

What words of advice do you have for investigators conducting their first embedded PCT?

  • Study teams should think about potential barriers from the beginning and find solutions quickly.
  • Make sure that health system leadership discusses your project with those on the ground.
  • Enlist the experts your study needs for each site. In our case, we needed both an IT expert for the operational side and a clinical expert, or we couldn’t have moved the project forward.
  • Recognize that there are trade-offs in pragmatic design and remember that you’re working with health systems in which your intervention will need to be replicated.
  • Make your intervention sustainable and easily usable by the clinician, without the need for research or other additional staff.

EMBED is supported within the NIH Collaboratory by a cooperative agreement from the National Institute on Drug Abuse and receives logistical and technical support from the NIH Collaboratory Coordinating Center. Read more about EMBED in the Living Textbook, and learn more about the NIH Collaboratory Demonstration Projects.

June 7, 2019: Meeting Materials from the 2019 NIH Collaboratory Steering Committee Meeting

The NIH Collaboratory has made available all of the presentations from its recent Steering Committee meeting, held in Bethesda on May 1-2, 2019.

Highlights of Day 1 included updates on the progress and sustainability of the NIH Collaboratory, perspectives on the landscape of embedded PCTs (ePCTs) and the need for real-world evidence, challenges and lessons learned from the UH3 Demonstration Projects, updates on progress and transition plans from the UG3 Demonstration Projects, and discussions on data sharing policy and planning. Day 2 featured an intensive workshop hosted by the NIH with the goal of starting discussions on statistical issues with ePCTs.

View or download the meeting materials on the website.