In this episode of the NIH Collaboratory Grand Rounds podcast, Dr. Jeffrey Botkin and Dr. Consuelo Wilkins discuss returning individual research results to participants. In their conversation with Dr. Adrian Hernandez, Drs. Botkin and Wilkins emphasize the importance of returning results to participants, the current challenges, and their hopes for the process over the next five years.
The ADAPTABLE pragmatic trial relies on patients to report key information at baseline and throughout follow-up. To capture these data, ADAPTABLE investigators developed a LOINC (Logical Observation Identifiers Names and Codes) patient-reported item set, which is now publicly available.
ADAPTABLE (Aspirin Dosing: A Patient-Centric Trial Assessing Benefits and Long-Term Effectiveness) aims to identify the optimal dose of aspirin therapy for secondary prevention in atherosclerotic cardiovascular disease and is the first major randomized comparative effectiveness trial to be conducted by the National Patient-Centered Clinical Research Network (PCORnet).
Tracy Y. Wang, MD, MHS, MSc
Director, DCRI Health Services Research
Associate Program Director, DCRI Fellowship Program
Associate Professor of Medicine, Cardiology
Duke University Medical Center
Pragmatic Trial Design to Study Health Policy Interventions: Lessons Learned from ARTEMIS
Clinical research; Pragmatic clinical trial; Pragmatic trial design; Health policy; ARTEMIS; Health system; Cost-sharing models
The ARTEMIS trial aims to improve patient outcomes by simulating health system and payer consideration of novel cost-sharing models.
Health policy and implementation studies require pragmatic trial design.
The ARTEMIS trial consists of 301 sites across the United States, with 23% classified as teaching hospitals.
Design and execution of the ARTEMIS trial prompted many questions, such as “Can we innovate the design of pragmatic health policy trials?”
As with most pragmatic clinical trials, the ARTEMIS trial is aimed at decision makers. However, in ARTEMIS the “decision makers” are the healthcare systems and the payers, which is unusual compared with most pragmatic clinical trials.
Variability in payment coverage contributed to the design of the ARTEMIS trial.
While the patient population in the ARTEMIS trial is similar across sites, the randomization of hospitals and the way the hospitals conduct their work are highly variable. Data linkage and collection helped contribute to the ARTEMIS findings.
A recent study published in BMC Medicine found that many randomized controlled trials (RCTs) self-labeled as “pragmatic” were actually explanatory in nature, in that they assessed investigational medicines compared with placebo to test efficacy before licensing. Of the RCTs studied, one-third were pre-licensing, single-center, or placebo-controlled trials and thus not appropriately described as pragmatic.
Appropriately describing the design and characteristics of a pragmatic trial helps readers understand the trial’s relevance for real-world practice. The authors explain that RCTs suitably termed pragmatic compare the effectiveness of 2 available medicines or interventions prescribed in routine clinical care. The purpose of such pragmatic RCTs is to provide real-world evidence for which interventions should be recommended or prioritized.
The authors recommend that investigators use a standard tool, such as the CONSORT Pragmatic Trials extension or the PRECIS-2 tool, to prospectively evaluate the pragmatic characteristics of their RCTs. Use of these tools can also assist funders, ethics committees, and journal editors in determining whether an RCT has been accurately labeled as pragmatic.
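PRECIS-2, for example, scores a trial on nine domains, each rated from 1 (very explanatory) to 5 (very pragmatic), with results typically displayed as a wheel diagram. As a minimal illustration, the Python sketch below records an assessment as structured data and reports a simple mean; the function name and the mean summary are illustrative, not part of the tool itself.

```python
# Hypothetical sketch of recording a PRECIS-2 assessment as structured data.
# The nine domain names follow the PRECIS-2 tool; scoring is 1-5 per domain.
PRECIS2_DOMAINS = [
    "eligibility", "recruitment", "setting", "organization",
    "flexibility_delivery", "flexibility_adherence",
    "follow_up", "primary_outcome", "primary_analysis",
]

def summarize_precis2(scores):
    """Validate a PRECIS-2 assessment and return the mean domain score."""
    missing = set(PRECIS2_DOMAINS) - set(scores)
    if missing:
        raise ValueError(f"missing domain scores: {sorted(missing)}")
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("each domain is scored from 1 to 5")
    return sum(scores.values()) / len(scores)

# Example: a trial that is mostly pragmatic, but with restrictive
# eligibility criteria and a tightly controlled primary analysis.
scores = dict.fromkeys(PRECIS2_DOMAINS, 5)
scores.update(eligibility=2, primary_analysis=3)
print(round(summarize_precis2(scores), 2))  # 4.44
```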
The BMC Medicine article cites NIH Collaboratory publications by Ali et al. and Johnson et al., as well as the Living Textbook, in its discussion of pragmatic RCTs and the tools available to assess their relevance for real-world practice.
“Submissions of RCTs to funders, research ethics committees, and peer-reviewed journals should include a PRECIS-2 tool assessment done by the trial investigators. Clarity and accuracy on the extent to which an RCT is pragmatic will help [to] understand how much it is relevant to real-world practice.” (Dal-Ré et al. 2018)
Michael Pencina, PhD
Professor of Biostatistics and Bioinformatics, Duke University
Director of Biostatistics
Duke Clinical Research Institute
Does Machine Learning Have a Place in a Learning Health System?
Machine Learning; Artificial Intelligence; AI; Learning Health Systems
Machine learning has many different applications for generating evidence in meaningful ways in a learning health system (LHS).
Although other industries are using machine learning, the health care industry has been slow to adopt artificial intelligence (AI) methodologies.
The Forge Center was formed under the leadership of Dr. Robert Califf and uses team science: biostatisticians, engineers, computer scientists, informaticists, clinicians, and patients collaborate to develop machine learning solutions and prototypes to improve health.
In a learning health system, the process is to identify the problem, formulate steps to solve it, find the right data and perform analysis, test the proposed solution (by embedding randomized experiments in a LHS), and implement or modify the solution.
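As a minimal sketch of the “test” step in that cycle, the hypothetical Python below randomizes eligible patients between usual care and a proposed solution and compares outcome event rates; all names and data are illustrative, not from an actual LHS.

```python
import random

def randomize(patient_ids, seed=42):
    """Assign each eligible patient to usual care or the proposed solution."""
    rng = random.Random(seed)
    arms = {"usual_care": [], "proposed_solution": []}
    for pid in patient_ids:
        arms[rng.choice(list(arms))].append(pid)
    return arms

def event_rate(outcomes, patients):
    """Proportion of patients in an arm who had the outcome event."""
    return sum(outcomes[p] for p in patients) / len(patients)

# Outcomes would come from the health system's own records (e.g., the EHR);
# they are simulated here for illustration only.
patients = list(range(1000))
outcomes = {p: random.Random(p).random() < 0.10 for p in patients}
for arm, ids in randomize(patients).items():
    print(arm, round(event_rate(outcomes, ids), 3))
```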
Machine learning is a small but important piece of a LHS; its methods are characterized by complex mathematical algorithms trained and optimized on large amounts of data.
Demonstrating the enhanced value of machine learning over existing algorithms will be an important next step. An ongoing question is how models get translated into clinical decision making: machine learning is a tool for developing a model, but implementing the findings will require team science.
Prediction models can be calibrated to work across health systems to an extent, but there are many unique features of individual health systems, so large health systems should use their own data to optimize the information and learning in a specific setting.
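One common way to do this local optimization (shown below as a sketch, not necessarily the approach described in the session) is logistic recalibration: keep the external model’s linear predictor but refit the intercept and calibration slope on the local system’s own data. The example uses scikit-learn and simulated data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Linear predictor from an external prediction model, applied to local patients.
lp = rng.normal(-2.0, 1.0, size=(5000, 1))
# Simulated local outcomes: events are more common locally than the external
# model expects, so the model is miscalibrated in this setting.
p_true = 1 / (1 + np.exp(-(0.5 + 0.8 * lp[:, 0])))
y = rng.binomial(1, p_true)

# Refit the intercept and calibration slope on local data only.
recal = LogisticRegression().fit(lp, y)
print("local intercept:", round(recal.intercept_[0], 2))
print("calibration slope:", round(recal.coef_[0, 0], 2))

# Recalibrated risk for a patient whose external linear predictor is -1.5.
print("recalibrated risk:", round(recal.predict_proba([[-1.5]])[0, 1], 3))
```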
There are key issues related to accurate ascertainment of data, especially with respect to completeness. For example, inpatient data collected during a hospital stay are likely to yield models that have value, but when data depend on events that happen outside the system, it can be harder to get the complete picture.
Stanley M. Huff, MD
Chief Medical Informatics Officer
Intermountain Healthcare and Professor (Clinical) of Biomedical Informatics
University of Utah
W. Ed Hammond, PhD
Duke Center for Health Informatics
Clinical & Translational Science Institute
Providing a Shared Repository of Detailed Clinical Models for All of Health and Healthcare
Pragmatic clinical trial; Clinical research; Clinical Information Interoperability Council; Repository; Learning Health System; Patient care; Data collection; Data dissemination
The Clinical Information Interoperability Council (CIIC) was created to address the lack of standardized data definitions and to increase the ability to share data for improved patient care and research.
Accurate computable data should be the foundation of a Learning Health System (LHS), which will lead to better patient care through executable clinical decision-support modules.
The ultimate goal of the CIIC is to create ubiquitous sharing of data across medicine, including patient care, clinical research, device data, and billing and administration.
The three most important questions for the CIIC are what data to collect, how the data should be modeled, and how to create computable definitions of the data.
All stakeholders need to agree to work together and to allow practicing front-line clinicians to direct the work.
Stakeholders should use and share common tools to create models and share the models through an open, internet-accessible repository.
The goal of the repository is to have a common digital representation of what happened in the real world, created through agreed-upon names and definitions for a common data set.
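As an illustration only, and not the CIIC’s actual modeling formalism, the Python sketch below shows how such a detailed clinical model might pair an agreed-upon name with standard codes (LOINC), units (UCUM), and value constraints so that every system records the same observation the same way.

```python
from dataclasses import dataclass

@dataclass
class DetailedClinicalModel:
    name: str          # agreed-upon human-readable name
    loinc_code: str    # standard LOINC code identifying the observation
    ucum_unit: str     # standard UCUM unit the value must be reported in
    min_value: float   # plausible-range constraints on the value
    max_value: float

    def validate(self, value, unit):
        """Check a recorded value against the computable definition."""
        return unit == self.ucum_unit and self.min_value <= value <= self.max_value

# Example model: systolic blood pressure (LOINC 8480-6, reported in mm[Hg]).
SYSTOLIC_BP = DetailedClinicalModel(
    name="Systolic blood pressure",
    loinc_code="8480-6",
    ucum_unit="mm[Hg]",
    min_value=40.0,
    max_value=300.0,
)

print(SYSTOLIC_BP.validate(128.0, "mm[Hg]"))  # True
print(SYSTOLIC_BP.validate(128.0, "kPa"))     # False: wrong unit
```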
What level of vetting is appropriate for data definitions? This should not be a popularity contest for data, but rather a decision made by expert judges.
For More Information
For information on dissemination approaches for different healthcare stakeholders, visit the Living Textbook http://bit.ly/2kcSqGb
In this episode of the NIH Collaboratory Grand Rounds podcast, Drs. Adrian Hernandez and Kevin Weinfurt, two NIH Collaboratory Co-PIs and podcast moderators, discuss their predictions and hopes for the NIH Collaboratory in the next year.
We encourage you to share this podcast with your colleagues and tune in for our next edition with Dr. Susan Ellenberg on “Data and Safety Monitoring in Pragmatic Clinical Trials.”
Joseph Ross, MD, MHS, Section of General Internal Medicine, School of Medicine, Center for Outcomes Research and Evaluation, Yale-New Haven Hospital
The Yale Open Data Access (YODA) Project: Lessons Learned in Data Sharing
Pragmatic clinical trial; Clinical research; Health data; YODA; Yale; Open access
Yale’s Open Data Access (YODA) Project is committed to transparency and good stewardship of data usage.
The YODA Project tenet that “underreporting is scientific misconduct” emphasizes the importance of open data sharing.
About 50% of clinical trials are never published, and many are only partially reported.
The YODA Project is maximizing the value of collected data while minimizing duplication of data.
An application process and a secure platform help the YODA Project team prevent further distribution of data and protect patient privacy.
YODA’s third-party approach removes influence over access and is in the best interest of all stakeholders.
No requests have ever been rejected, but there have been cases where YODA could not provide the data needed (e.g., CT scans from a trial, owing to de-identification challenges and cost).
YODA initially published requests online right away but received pushback from investigators who thought it unfair for their requests to be published before they could begin work. Requests are now published only when the requesters gain access to the data, which still maximizes transparency.
The YODA Project is expensive to run, but funding has come from industry (e.g., Johnson & Johnson), so users continue to have access without a fee.
Incentivizing academicians to move their data to a common platform is a challenge going forward, because many feel they do not need the platform’s SAS software and prefer to disseminate data themselves; YODA will keep working toward its goal of open access.
Should journal editors require authors to clearly state their relationship to the data? Several authors published analyses of the SPRINT data release, and nothing in the articles stated that these authors had no involvement in the trial.