Podcast October 1, 2018: Returning Individual Research Results to Participants: Guidance for a New Research Paradigm

Listen to the episode here:

At least once a month, we will release interviews with Grand Rounds speakers that delve into their topic of interest and give listeners bonus time with these featured experts.

Please let us know what you think by providing your feedback through the podcast page. We also encourage you to listen and share the recordings with your colleagues!

August 28, 2018: ADAPTABLE Patient-Reported Health Data Codes Now Available

The ADAPTABLE pragmatic trial relies on patients to report key information at baseline and throughout follow-up. To capture these data, ADAPTABLE investigators developed a LOINC (Logical Observation Identifiers Names and Codes) patient-reported item set, which is now publicly available.

The development of the item set is part of the ADAPTABLE Supplement, an initiative funded by the Office of the Assistant Secretary for Planning and Evaluation to develop best practices for capturing patient-reported outcome data and optimal analytic approaches for using the data in a pragmatic clinical trial. Additional reference material can be found in the ADAPTABLE Supplement Roundtable Meeting summary, in a report describing the results of a literature review of data standards and metadata standards for variables of interest, and on GitHub. The project is expected to inform future efforts to integrate patient-reported data in the electronic health record and provide opportunities to streamline data for use in pragmatic trials. Information from the project is being added to the Living Textbook as it accumulates; learn more in the chapters on Using Electronic Health Record Data and Choosing and Specifying End Points and Outcomes.
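For readers unfamiliar with LOINC-coded patient-reported items, the Python sketch below shows one plausible way such an item could be represented as an HL7 FHIR Observation. The item code is a placeholder rather than an actual ADAPTABLE code (the real codes are in the published item set), and LA33-6 is the standard LOINC answer code for "Yes."

    # Hypothetical representation of a LOINC-coded patient-reported item as a
    # FHIR Observation. "XXXXX-X" is a placeholder, NOT an ADAPTABLE code.
    import json

    observation = {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "survey"}]}],
        "code": {"coding": [{
            "system": "http://loinc.org",
            "code": "XXXXX-X",  # placeholder item code
            "display": "Example patient-reported item"}]},
        "subject": {"reference": "Patient/example"},
        "effectiveDateTime": "2018-08-28",
        "valueCodeableConcept": {"coding": [{
            "system": "http://loinc.org",
            "code": "LA33-6",  # LOINC answer code for "Yes"
            "display": "Yes"}]},
    }
    print(json.dumps(observation, indent=2))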

ADAPTABLE (Aspirin Dosing: A Patient-Centric Trial Assessing Benefits and Long-Term Effectiveness) aims to identify the optimal dose of aspirin therapy for secondary prevention in atherosclerotic cardiovascular disease and is the first major randomized comparative effectiveness trial to be conducted by the National Patient-Centered Clinical Research Network (PCORnet).

June 22, 2018: Pragmatic Trial Design to Study Health Policy Interventions: Lessons Learned from ARTEMIS (Tracy Wang, MD, MHS, MSc)

Speaker

Tracy Y. Wang, MD, MHS, MSc
Director, DCRI Health Services Research
Fellowship Associate Program Director
Associate Professor of Medicine, Cardiology
Duke University Medical Center

Topic

Pragmatic Trial Design to Study Health Policy Interventions: Lessons Learned from ARTEMIS

Keywords

Clinical research; Pragmatic clinical trial; Pragmatic trial design; Health policy; ARTEMIS; Health system; Cost-sharing models

Key Points

  • The ARTEMIS trial aims to improve patient outcomes by simulating health system and payer consideration of novel cost-sharing models.
  • Health policy and implementation studies require pragmatic trial design.
  • The ARTEMIS trial consists of 301 sites across the United States, with 23% classified as teaching hospitals.
  • Design and execution of the ARTEMIS trial prompted many questions, such as “Can we innovate the design of pragmatic health policy trials?”

Discussion Themes

As with most pragmatic clinical trials, the ARTEMIS trial is aimed at decision makers. Unusually for a pragmatic clinical trial, however, the “decision makers” in ARTEMIS are healthcare systems and payers.

Variability in payment coverage contributed to the design of the ARTEMIS trial.

While the patient population in the ARTEMIS trial is similar across sites, hospitals are the unit of randomization, and how each hospital conducts the work varies widely. Data linkage and collection helped contribute to the ARTEMIS findings.

 

Tags

@Collaboratory1, @TYWangMD, #pragmatictrial, #clinicaltrials, #pctGR, #EHR, @DCRnews

June 4, 2018: New Article Explores Misleading Use of the Label “Pragmatic” for Some Randomized Clinical Trials

A recent study published in BMC Medicine found that many randomized controlled trials (RCTs) self-labeled as “pragmatic” were actually explanatory in nature, in that they assessed investigational medicines compared with placebo to test efficacy before licensing. Of the RCTs studied, one-third were pre-licensing, single-center, or placebo-controlled trials and thus not appropriately described as pragmatic.

Appropriately describing the design and characteristics of a pragmatic trial helps readers understand the trial’s relevance for real-world practice. The authors explain that RCTs suitably termed pragmatic compare the effectiveness of 2 available medicines or interventions prescribed in routine clinical care. The purpose of such pragmatic RCTs is to provide real-world evidence for which interventions should be recommended or prioritized.

The authors recommend that investigators use a standard tool, such as the CONSORT Pragmatic Trials extension or the PRECIS-2 tool, to prospectively evaluate the pragmatic characteristics of their RCTs. Use of these tools can also assist funders, ethics committees, and journal editors in determining whether an RCT has been accurately labeled as pragmatic.
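For illustration, a PRECIS-2 assessment amounts to scoring nine design domains from 1 (very explanatory) to 5 (very pragmatic). The Python sketch below records such scores and computes a crude mean; note that PRECIS-2 results are normally displayed per domain (the "PRECIS wheel") rather than averaged, so the one-line summary here is illustrative only.

    # Sketch of recording a PRECIS-2 assessment (9 domains, each scored 1-5).
    PRECIS2_DOMAINS = [
        "eligibility", "recruitment", "setting", "organization",
        "flexibility_delivery", "flexibility_adherence",
        "follow_up", "primary_outcome", "primary_analysis",
    ]

    def summarize_precis2(scores: dict) -> str:
        """Validate domain scores and return a crude one-line summary."""
        missing = [d for d in PRECIS2_DOMAINS if d not in scores]
        if missing:
            raise ValueError(f"missing domain scores: {missing}")
        if any(not 1 <= s <= 5 for s in scores.values()):
            raise ValueError("each domain score must be between 1 and 5")
        mean = sum(scores.values()) / len(scores)
        return f"mean score {mean:.1f} across {len(scores)} domains"

    # Example: pragmatic in most domains, explanatory in eligibility.
    print(summarize_precis2({
        "eligibility": 2, "recruitment": 4, "setting": 5, "organization": 4,
        "flexibility_delivery": 5, "flexibility_adherence": 4,
        "follow_up": 4, "primary_outcome": 5, "primary_analysis": 5,
    }))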

The BMC Medicine article cites NIH Collaboratory publications by Ali et al. and Johnson et al., as well as the Living Textbook, in its discussion of pragmatic RCTs and the tools available to assess their relevance for real-world practice.

“Submissions of RCTs to funders, research ethics committees, and peer-reviewed journals should include a PRECIS-2 tool assessment done by the trial investigators. Clarity and accuracy on the extent to which an RCT is pragmatic will help [to] understand how much it is relevant to real-world practice.” (Dal-Ré et al. 2018)

December 15, 2017: Does Machine Learning Have a Place in a Learning Health System?

Speaker

Michael Pencina, PhD
Professor of Biostatistics and Bioinformatics, Duke University
Director of Biostatistics
Duke Clinical Research Institute

Topic

Does Machine Learning Have a Place in a Learning Health System?

Keywords

Machine Learning; Artificial Intelligence; AI; Learning Health Systems

Key Points

  • Machine learning has many different applications for generating evidence in meaningful ways in a learning health system (LHS).
  • Although other industries are using machine learning, the health care industry has been slow to adopt artificial intelligence (AI) methodologies.
  • The Duke Forge center was formed under the leadership of Dr. Robert Califf and uses team science: biostatisticians, engineers, computer scientists, informaticists, clinicians, and patients collaborate to develop machine learning solutions and prototypes to improve health.
  • In a learning health system, the process is to identify the problem, formulate steps to solve it, find the right data and perform the analysis, test the proposed solution (by embedding randomized experiments in the LHS), and implement or modify the solution.
  • Machine learning is a small but important piece of an LHS; its methods are characterized by complex mathematical algorithms trained and optimized on large amounts of data (a minimal illustration follows this list).
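As a concrete and deliberately simplified illustration of "algorithms trained and optimized on large amounts of data," the Python sketch below fits a model to synthetic data. It is not a Duke Forge method; the features and outcome are invented for illustration.

    # Minimal sketch: train and evaluate a predictive model on synthetic data.
    # The 10 "EHR-derived" features and the outcome are simulated, not real.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 10))  # hypothetical EHR-derived features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))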

Discussion Themes

Demonstrating enhanced value of machine learning over existing algorithms will be an important next step. An ongoing question is how models get translated into clinical decision making. Machine learning is a tool to develop a model, but implementation of the findings will require team science.

Prediction models can be calibrated to work across health systems to an extent, but individual health systems have many unique features, so large health systems should use their own data to optimize the information and learning in their specific setting.
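One standard way to use local data to adapt an externally developed model is logistic recalibration: refit an intercept and slope on the original model's linear predictor using locally observed outcomes. The sketch below illustrates this general technique with simulated data; it is not a method described in the talk.

    # Logistic recalibration of an external risk model on local data (sketch).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    lp = rng.normal(size=1000)  # external model's linear predictor, local patients
    # Simulated local outcomes whose true relationship to lp is miscalibrated:
    y_local = (rng.random(1000) < 1 / (1 + np.exp(-(0.5 * lp - 0.2)))).astype(int)

    # Refit intercept and slope on the linear predictor alone.
    cal = LogisticRegression().fit(lp.reshape(-1, 1), y_local)
    print("calibration intercept:", cal.intercept_[0])
    print("calibration slope:", cal.coef_[0, 0])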

There are key issues related to accurate ascertainment of data, especially with relation to completeness. For example, inpatient data collected during a hospital stay are likely to yield models that have value. If data rely on events that happen outside the system, it can be harder to get the complete picture.

For More Information

For information on #MachineLearning in a #LearningHealthSystem, visit http://bit.ly/2D5ATEG

Tags

@PCTGrandRounds, @Collaboratory1, @DukeForge, @Califf001, #MachineLearning, #LearningHealthSystem, #pctGR

December 1, 2017: Providing a Shared Repository of Detailed Clinical Models for All of Health and Healthcare

Speakers

Stanley M. Huff, MD
Chief Medical Informatics Officer, Intermountain Healthcare
Professor (Clinical) of Biomedical Informatics, University of Utah

W. Ed Hammond, PhD
Duke Center for Health Informatics
Clinical & Translational Science Institute
Duke University

Topic

Providing a Shared Repository of Detailed Clinical Models for All of Health and Healthcare

Keywords

Pragmatic clinical trial; Clinical research; Clinical Information Interoperability Council; Repository; Learning Health System; Patient care; Data collection; Data dissemination

Key Points

  • The Clinical Information Interoperability Council (CIIC) was created to address the lack of standardized data definitions and to increase the ability to share data for improved patient care and research.
  • Accurate computable data should be the foundation of a Learning Health System (LHS), which will lead to better patient care through executable clinical decision-support modules.
  • The ultimate goal of the CIIC is to create ubiquitous sharing of data across medicine, including patient care, clinical research, device data, and billing and administration.
  • The three most important questions for the CIIC are: What data should be collected? How should the data be modeled? What are computable definitions of the data?

Discussion Themes

All stakeholders need to agree to work together and to allow practicing front-line clinicians to direct the work.

Stakeholders should use and share common tools to create models, and share the models through an open, internet-accessible repository.

The goal of the repository is a common digital representation of what happened in the real world, created through agreed-upon names and definitions for a common data set.
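As a rough illustration of what an agreed-upon name and definition might look like once made computable, the sketch below defines one observation type with fixed units and a terminology binding. The field names are invented for illustration and are not CIIC definitions; 8480-6 is the LOINC code for systolic blood pressure.

    # Illustrative detailed clinical model for one observation type.
    from dataclasses import dataclass

    @dataclass
    class SystolicBloodPressure:
        """Hypothetical model: agreed-upon fields, units, and code binding."""
        value_mm_hg: float          # numeric value with fixed units (mm Hg)
        body_site: str              # e.g., "right arm" (ideally a coded value)
        patient_position: str       # e.g., "sitting"
        measured_at: str            # ISO 8601 timestamp
        loinc_code: str = "8480-6"  # LOINC: systolic blood pressure

    bp = SystolicBloodPressure(value_mm_hg=128.0, body_site="right arm",
                               patient_position="sitting",
                               measured_at="2017-12-01T09:30:00Z")
    print(bp)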

What level of vetting is appropriate for data definitions? This should not be a popularity contest for data, but rather a decision made by expert judges.

For More Information

For information on dissemination approaches for different healthcare stakeholders, visit the Living Textbook http://bit.ly/2kcSqGb

Tags

@PCTGrandRounds, @Collaboratory1, @UUtah, @DukeHealth, #Healthcare, #ClinicalDecisionSupport, #LearningHealthSystem, #ClinicalResearch, #PatientCare, #pctGR

Podcast November 14, 2017: Moderators’ Edition: Discussion of the Future of the NIH Collaboratory

Dr. Adrian Hernandez

Dr. Kevin Weinfurt

In this episode of the NIH Collaboratory Grand Rounds podcast, Drs. Adrian Hernandez and Kevin Weinfurt, two NIH Collaboratory Co-PIs and podcast moderators, discuss their predictions and hopes for the NIH Collaboratory in the next year.

Click on the recording below to listen to the podcast.

 

We encourage you to share this podcast with your colleagues and tune in for our next edition with Dr. Susan Ellenberg on “Data and Safety Monitoring in Pragmatic Clinical Trials.”

Read the blog and the full transcript here.

 


October 20, 2017: Automated Public Health Surveillance Using Electronic Health Record Data

Speaker

Michael Klompas, MD, MPH, FIDSA, FSHEA
Associate Professor
Department of Population Medicine
Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA

Topic

Automated Public Health Surveillance Using Electronic Health Record Data

Keywords

Pragmatic clinical trial; Clinical research; Electronic health record; EHR; Health surveillance; Harvard Pilgrim

Key Points

  • Electronic health record (EHR) systems are a rich potential source for detailed, timely, and efficient surveillance of large populations.
  • The Department of Population Medicine at Harvard Medical School and the Harvard Pilgrim Health Care Institute created an EHR-based platform for public health surveillance, the Electronic Medical Record Support for Public Health (ESP) platform.
  • Data from electronic health records can help providers to find disparities in care patterns and outcomes, and to inform interventions for vulnerable members of the population.
  • Interactive visualization software can unlock the power of EHR data to track disease incidence rates, characteristics, and trends.

Discussion Themes

Electronic health record data allow for more sensitive and specific disease detection than insurance claims data.
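Part of the reason is that an EHR-based case definition can require clinical corroboration (laboratory values, prescriptions) rather than diagnosis codes alone. The sketch below shows a generic rule of this kind for diabetes; the threshold and logic are a common illustrative pattern, not the actual ESP algorithm.

    # Generic EHR-based case-detection rule (illustrative; not ESP's algorithm).
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class PatientRecord:
        diagnosis_codes: set = field(default_factory=set)  # e.g., ICD-10 codes
        max_hba1c_pct: Optional[float] = None              # highest HbA1c on file
        medications: set = field(default_factory=set)

    def meets_diabetes_definition(p: PatientRecord) -> bool:
        has_dx = any(code.startswith("E11") for code in p.diagnosis_codes)
        has_lab = p.max_hba1c_pct is not None and p.max_hba1c_pct >= 6.5
        has_rx = "metformin" in p.medications
        # Require lab or medication evidence, not a diagnosis code alone.
        return has_dx and (has_lab or has_rx)

    print(meets_diabetes_definition(PatientRecord(
        diagnosis_codes={"E11.9"}, max_hba1c_pct=7.1)))  # True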

The Electronic Medical Record Support for Public Health (ESP) platform allows clinical practice groups to participate in public health surveillance while retaining ownership and control of their data.

The ESP platform is also a vehicle for finding and tracking patients who are prescribed opioids in real time, which is critical amid the nationwide opioid abuse epidemic.

Tools similar to RiskScape in other states offer comparable functionality for finding and organizing patient data to track and monitor public health trends.

For More Information

For more information, visit the Living Textbook http://bit.ly/2ym4R79 #pctGR

Tags

@Collaboratory1, @HarvardPilgrim, #EHRs, #healthsurveillance, #clinicalresearch, #depression, #diabetes, #obesity, #pctGR

August 18, 2017: Behavioral Economics: A Versatile Tool for Clinical Research – From Interventions to Participant Engagement

Speaker

Charlene Wong, MD
Assistant Professor
Department of Pediatrics
Duke Clinical Research Institute
Margolis Center for Health Policy
Duke University

Topic

Behavioral Economics: A Versatile Tool for Clinical Research – From Interventions to Participant Engagement

Keywords

Pragmatic clinical trial; Clinical research; Behavioral economics

Key Points

  • Behavioral economics can be used to motivate behavior change in lifestyle, medication adherence, and adherence to clinical recommendations.
  • Behavioral economics can inform intervention design for motivating behavior change, and inform strategies for increasing enrollment and retention.
  • Types of incentives in behavioral economics include monetary, nonmonetary, privileges, and informational incentives.
  • In behavioral economics, incentive delivery and choice environment are critical.

Discussion Themes

When incentives are taken away at the end of a study, the desired behavior trails off. Social networks and support can help sustain behavior changes.

The design and delivery of an incentive are important in order to avoid the “undue influence” concern outlined in the Belmont Report.

Further research is needed to determine how to best tailor financial incentives for young people.

For More Information

For more information on behavioral economics, follow @DrCharleneWong #pctGR

Tags

@Collaboratory1, @DrCharleneWong, #BehavioralEconomics, #pctGR

June 30, 2017: The Yale Open Data Access (YODA) Project: Lessons Learned in Data Sharing

Speaker

Joseph Ross, MD, MHS
Section of General Internal Medicine, Yale School of Medicine
Center for Outcomes Research and Evaluation, Yale-New Haven Hospital

Topic

The Yale Open Data Access (YODA) Project: Lessons Learned in Data Sharing

Keywords

Pragmatic clinical trial; Clinical research; Health data; YODA; Yale; Open access

Key Points

  • Yale’s Open Data Access (YODA) Project is committed to transparency and good stewardship of data usage.
  • The YODA Project tenet that “Underreporting is Scientific Misconduct” emphasizes the importance of open data sharing.
  • About 50% of clinical trials are never published, and many are only partially reported.
  • The YODA Project is maximizing the value of collected data while minimizing duplication of data.
  • An application process and a secure platform help the YODA Project team prevent unauthorized distribution and protect patient privacy.
  • YODA’s third-party approach removes influence over data access and is in the best interest of all stakeholders.

Discussion Themes

No requests have ever been rejected, but there have been cases where YODA could not provide the data needed (e.g., CT scans from a trial, because of de-identification challenges and cost).

YODA initially published requests online right away but received pushback from investigators, who felt it was unfair for their requests to be published before they could begin work. Requests are now published when the requesters gain access to the data, which still maximizes transparency.

The YODA Project is expensive, but funding has come from industry (e.g., Johnson & Johnson), so users continue to have access without a fee.

Incentivizing academicians to move their data to a common platform is a continuing challenge, because many feel they do not need SAS software and prefer to disseminate data themselves; YODA will keep working toward its goal of open access.

Should journal editors have authors clearly state their relationship to the data? Several authors published analyses of the SPRINT data release, yet nothing in the articles stated that those authors had no involvement in the original trial.

For More Information

Read more about the YODA Project at http://yoda.yale.edu/

Tags

#pctGR, @YODAProject, @Yale, @PCTGrandRounds, @Collaboratory1, #HealthData