June 22, 2018: Pragmatic Trial Design to Study Health Policy Interventions: Lessons Learned from ARTEMIS (Tracy Wang, MD, MHS, MSc)

Speaker

Tracy Y. Wang, MD, MHS, MSc
Director, DCRI Health Services Research
Fellowship Associate Program Director
Associate Professor of Medicine, Cardiology
Duke University Medical Center

Topic

Pragmatic Trial Design to Study Health Policy Interventions: Lessons Learned from ARTEMIS

Keywords

Clinical research; Pragmatic clinical trial; Pragmatic trial design; Health policy; ARTEMIS; Health system; Cost-sharing models

Key Points

  • The ARTEMIS trial aims to improve patient outcomes by simulating health system and payer consideration of novel cost-sharing models.
  • Health policy and implementation studies require pragmatic trial design.
  • The ARTEMIS trial includes 301 sites across the United States, 23% of which are classified as teaching hospitals.
  • Design and execution of the ARTEMIS trial prompted many questions, such as “Can we innovate the design of pragmatic health policy trials?”

Discussion Themes

As with most pragmatic clinical trials, the ARTEMIS trial is aimed at decision makers. In ARTEMIS, however, the decision makers are healthcare systems and payers, which is unusual compared with most pragmatic clinical trials.

Variability in payment coverage contributed to the design of the ARTEMIS trial.

While the patient population in the ARTEMIS trial is similar across sites, randomization occurred at the hospital level, and the way hospitals conduct the work is highly variable. Data linkage and collection contributed to the ARTEMIS findings.
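Hospital-level (cluster) randomization like this can be sketched in a few lines. The following Python snippet is a hypothetical illustration only, not the ARTEMIS randomization code; the site identifiers, teaching-hospital proportion, and stratification scheme are all assumptions for demonstration.

```python
import random

def randomize_sites(sites, seed=2018):
    """Illustrative stratified cluster randomization: whole hospitals,
    not individual patients, are assigned to intervention or usual care,
    stratified by teaching status to keep the arms balanced."""
    rng = random.Random(seed)
    assignments = {}
    for stratum in ("teaching", "non-teaching"):
        stratum_sites = [s for s in sites if s["type"] == stratum]
        rng.shuffle(stratum_sites)
        for i, site in enumerate(stratum_sites):
            assignments[site["id"]] = "intervention" if i % 2 == 0 else "usual care"
    return assignments

# Hypothetical roster: 301 sites, roughly a quarter of them teaching hospitals.
sites = [{"id": f"site{i:03d}", "type": "teaching" if i % 4 == 0 else "non-teaching"}
         for i in range(301)]
arms = randomize_sites(sites)
```

Randomizing whole hospitals rather than individual patients suits interventions, such as cost-sharing policies, that operate at the health system rather than the patient level.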

 

Tags

@Collaboratory1, @TYWangMD, #pragmatictrial, #clinicaltrials, #pctGR, #EHR, @DCRnews

June 4, 2018: New Article Explores Misleading Use of the Label “Pragmatic” for Some Randomized Clinical Trials

A recent study published in BMC Medicine found that many randomized controlled trials (RCTs) self-labeled as “pragmatic” were actually explanatory in nature, in that they assessed investigational medicines compared with placebo to test efficacy before licensing. Of the RCTs studied, one-third were pre-licensing, single-center, or placebo-controlled trials and thus not appropriately described as pragmatic.

Appropriately describing the design and characteristics of a pragmatic trial helps readers understand the trial’s relevance for real-world practice. The authors explain that RCTs suitably termed pragmatic compare the effectiveness of 2 available medicines or interventions prescribed in routine clinical care. The purpose of such pragmatic RCTs is to provide real-world evidence for which interventions should be recommended or prioritized.

The authors recommend that investigators use a standard tool, such as the CONSORT Pragmatic Trials extension or the PRECIS-2 tool, to prospectively evaluate the pragmatic characteristics of their RCTs. Use of these tools can also assist funders, ethics committees, and journal editors in determining whether an RCT has been accurately labeled as pragmatic.
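As a concrete sketch of how such a prospective assessment might be recorded, the snippet below tallies ratings across the nine PRECIS-2 domains, each scored from 1 (very explanatory) to 5 (very pragmatic). The domain names follow the published PRECIS-2 tool; the scores themselves are invented for illustration.

```python
# Hypothetical PRECIS-2 self-assessment: nine domains, each rated from
# 1 (very explanatory) to 5 (very pragmatic). Scores are illustrative.
precis2_scores = {
    "eligibility": 4,
    "recruitment": 5,
    "setting": 4,
    "organization": 3,
    "flexibility (delivery)": 4,
    "flexibility (adherence)": 5,
    "follow-up": 4,
    "primary outcome": 5,
    "primary analysis": 5,
}

assert all(1 <= score <= 5 for score in precis2_scores.values())
mean_score = sum(precis2_scores.values()) / len(precis2_scores)
print(f"Mean PRECIS-2 score: {mean_score:.1f} (higher = more pragmatic)")
```

Note that PRECIS-2 results are usually displayed per domain on a wheel diagram rather than collapsed into one number; the mean here is only a crude summary.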

The BMC Medicine article cites NIH Collaboratory publications by Ali et al. and Johnson et al., as well as the Living Textbook, in its discussion of pragmatic RCTs and the tools available to assess their relevance for real-world practice.

“Submissions of RCTs to funders, research ethics committees, and peer-reviewed journals should include a PRECIS-2 tool assessment done by the trial investigators. Clarity and accuracy on the extent to which an RCT is pragmatic will help [to] understand how much it is relevant to real-world practice.” (Dal-Ré et al. 2018)

December 15, 2017: Does Machine Learning Have a Place in a Learning Health System?

Speaker

Michael Pencina, PhD
Professor of Biostatistics and Bioinformatics, Duke University
Director of Biostatistics
Duke Clinical Research Institute

Topic

Does Machine Learning Have a Place in a Learning Health System?

Keywords

Machine Learning; Artificial Intelligence; AI; Learning Health Systems

Key Points

  • Machine learning has many different applications for generating evidence in meaningful ways in a learning health system (LHS).
  • Although other industries are using machine learning, the health care industry has been slow to adopt artificial intelligence (AI) methodologies.
  • The Duke Forge center was formed under the leadership of Dr. Robert Califf and uses team science: biostatisticians, engineers, computer scientists, informaticists, clinicians, and patients collaborate to develop machine learning solutions and prototypes to improve health.
  • In a learning health system, the process is to identify the problem, formulate steps to solve it, find the right data and perform the analysis, test the proposed solution (by embedding randomized experiments in an LHS), and implement or modify the solution.
  • Machine learning is a small but important piece of an LHS; its methods are characterized by complex mathematical algorithms trained and optimized on large amounts of data.

Discussion Themes

Demonstrating enhanced value of machine learning over existing algorithms will be an important next step. An ongoing question is how models get translated into clinical decision making. Machine learning is a tool to develop a model, but implementation of the findings will require team science.

Prediction models can be calibrated to work across health systems to an extent, but individual health systems have many unique features, so large health systems should use their own data to recalibrate and optimize models for their specific setting.
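One common way to adapt an externally developed risk model to a local setting is logistic recalibration: keep the model's predictors and coefficients fixed, and refit only an intercept and slope on local outcomes. The sketch below illustrates this with scikit-learn on simulated data; it is a generic example under those assumptions, not any particular health system's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def recalibrate(external_risk, local_outcomes):
    """Logistic recalibration: regress locally observed outcomes on the
    external model's log-odds, updating intercept and slope only."""
    eps = 1e-6
    p = np.clip(external_risk, eps, 1 - eps)
    log_odds = np.log(p / (1 - p)).reshape(-1, 1)
    model = LogisticRegression().fit(log_odds, local_outcomes)

    def apply(risk):
        q = np.clip(risk, eps, 1 - eps)
        return model.predict_proba(np.log(q / (1 - q)).reshape(-1, 1))[:, 1]

    return apply

# Simulated example: the external score runs systematically high locally.
rng = np.random.default_rng(0)
external = rng.uniform(0.05, 0.9, size=500)
observed = (rng.uniform(size=500) < 0.6 * external).astype(int)
calibrated = recalibrate(external, observed)
```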

There are key issues related to accurate ascertainment of data, especially with relation to completeness. For example, inpatient data collected during a hospital stay are likely to yield models that have value. If data rely on events that happen outside the system, it can be harder to get the complete picture.

For More Information

For information on #MachineLearning in a #LearningHealthSystem visit http://bit.ly/2D5ATEG

Tags

@PCTGrandRounds, @Collaboratory1, @DukeForge, @Califf001, #MachineLearning, #LearningHealthSystem, #pctGR

December 1, 2017: Providing a Shared Repository of Detailed Clinical Models for All of Health and Healthcare

Speakers

Stanley M. Huff, MD
Chief Medical Informatics Officer, Intermountain Healthcare
Professor (Clinical) of Biomedical Informatics, University of Utah

W. Ed Hammond, PhD
Duke Center for Health Informatics
Clinical & Translational Science Institute
Duke University

Topic

Providing a Shared Repository of Detailed Clinical Models for All of Health and Healthcare

Keywords

Pragmatic clinical trial; Clinical research; Clinical Information Interoperability Council; Repository; Learning Health System; Patient care; Data collection; Data dissemination

Key Points

  • The Clinical Information Interoperability Council (CIIC) was created to address the lack of standardized data definitions and to increase the ability to share data for improved patient care and research.
  • Accurate computable data should be the foundation of a Learning Health System (LHS), which will lead to better patient care through executable clinical decision-support modules.
  • The ultimate goal of the CIIC is to create ubiquitous sharing of data across medicine including patient care, clinical research, device data, and billing and administration.
  • The three most important questions for the CIIC are what data to collect, how the data should be modeled, and how to create computable definitions of the data.

Discussion Themes

All stakeholders need to agree to work together and to allow practicing front-line clinicians to direct the work.

Stakeholders should use and share common tools to create models, and share the models through an open, internet-accessible repository.

The goal of the repository is to have a common digital representation of what happened in the real world, achieved by creating agreed-upon names and definitions for a common data set.
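To make "agreed-upon names and definitions" concrete, here is a minimal sketch of what one entry in such a repository might look like. The structure and field names are hypothetical; the LOINC code (8480-6, systolic blood pressure) and UCUM unit (mm[Hg]) are real standard codes, while the plausibility bounds are invented.

```python
from dataclasses import dataclass

@dataclass
class DetailedClinicalModel:
    """Hypothetical shape of a shared detailed clinical model entry:
    an agreed-upon name, a standard code binding, and a computable
    definition of the allowed values."""
    name: str
    coding_system: str
    code: str
    value_type: str
    units: str
    valid_range: tuple

systolic_bp = DetailedClinicalModel(
    name="Systolic blood pressure",
    coding_system="LOINC",
    code="8480-6",          # LOINC code for systolic blood pressure
    value_type="Quantity",
    units="mm[Hg]",         # UCUM unit for millimeters of mercury
    valid_range=(0, 300),   # plausibility bounds, illustrative only
)
```

A shared repository of such entries would let any system check incoming data against the same computable definition.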

What level of vetting is appropriate for data definitions? This should not be a popularity contest for data, but rather a decision made by expert judges.

For More Information

For information on dissemination approaches for different healthcare stakeholders, visit the Living Textbook http://bit.ly/2kcSqGb

Tags

@PCTGrandRounds, @Collaboratory1, @UUtah, @DukeHealth, #Healthcare, #ClinicalDecisionSupport, #LearningHealthSystem, #ClinicalResearch, #PatientCare, #pctGR

Podcast November 14, 2017: Moderators’ Edition: Discussion of the Future of the NIH Collaboratory

Dr. Adrian Hernandez

Dr. Kevin Weinfurt

In this episode of the NIH Collaboratory Grand Rounds podcast, Drs. Adrian Hernandez and Kevin Weinfurt, two NIH Collaboratory Co-PIs and podcast moderators, discuss their predictions and hopes for the NIH Collaboratory in the next year.


 

We encourage you to share this podcast with your colleagues and tune in for our next edition with Dr. Susan Ellenberg on “Data and Safety Monitoring in Pragmatic Clinical Trials.”

Read the blog and the full transcript here.

 


October 20, 2017: Automated Public Health Surveillance Using Electronic Health Record Data

Speaker

Michael Klompas, MD, MPH, FIDSA, FSHEA
Associate Professor
Department of Population Medicine
Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA

Topic

Automated Public Health Surveillance Using Electronic Health Record Data

Keywords

Pragmatic clinical trial; Clinical research; Electronic health record; EHR; Health surveillance; Harvard Pilgrim

Key Points

  • Electronic health record (EHR) systems are a rich potential source for detailed, timely, and efficient surveillance of large populations.
  • Researchers at Harvard Medical School and the Harvard Pilgrim Health Care Institute created the Electronic Medical Record Support for Public Health (ESP) platform, which uses EHR data for public health surveillance.
  • Data from electronic health records can help providers to find disparities in care patterns and outcomes, and to inform interventions for vulnerable members of the population.
  • Interactive visualization software can unlock the power of EHR data to track disease incidence rates, characteristics, and trends.

Discussion Themes

Electronic health record data allow for more sensitive and specific disease detection than insurance claims data.
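As a toy illustration of that sensitivity gain, a case-finding rule can combine laboratory results and medication orders instead of relying on billing codes alone. The rule below is a hypothetical sketch, not ESP's actual logic; the HbA1c (≥6.5%) and fasting glucose (≥126 mg/dL) thresholds follow standard diabetes diagnostic criteria, while the field names and medication list are invented.

```python
def flag_possible_diabetes(patient):
    """Hypothetical EHR-based case detection: combine labs and
    medication orders rather than relying on claims codes alone."""
    labs = patient.get("labs", {})
    meds = set(patient.get("medications", []))
    high_a1c = labs.get("hba1c_pct", 0) >= 6.5             # diagnostic threshold
    high_glucose = labs.get("fasting_glucose_mgdl", 0) >= 126
    on_treatment = bool(meds & {"metformin", "insulin", "glipizide"})
    return high_a1c or high_glucose or on_treatment

patient = {"labs": {"hba1c_pct": 7.1}, "medications": []}
print(flag_possible_diabetes(patient))  # True
```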

The Electronic Medical Record Support for Public Health (ESP) platform allows clinical practice groups to participate in public health surveillance while retaining ownership and control of their data.

The ESP platform is also a vehicle to find and track patients who are prescribed opioids in real time, which is critical in the climate of a nationwide opioid epidemic.

Other states have tools similar to RiskScape, the interactive visualization tool used with ESP, offering comparable functionality for finding and organizing patient data to track and monitor public health trends.

For More Information

For more information, visit the Living Textbook http://bit.ly/2ym4R79 #pctGR

Tags

@Collaboratory1, @HarvardPilgrim, #EHRs, #healthsurveillance, #clinicalresearch, #depression, #diabetes, #obesity, #pctGR

August 18, 2017: Behavioral Economics: A Versatile Tool for Clinical Research – From Interventions to Participant Engagement

Speaker

Charlene Wong, MD
Assistant Professor
Department of Pediatrics
Duke Clinical Research Institute
Margolis Center for Health Policy
Duke University

Topic

Behavioral Economics: A Versatile Tool for Clinical Research – From Interventions to Participant Engagement

Keywords

Pragmatic clinical trial; Clinical research; Behavioral economics

Key Points

  • Behavioral economics can be used to motivate behavior change in lifestyle, medicine adherence, and clinical recommendations.
  • Behavioral economics can inform intervention design for motivating behavior change, and inform strategies for increasing enrollment and retention.
  • Types of incentives in behavioral economics include monetary, nonmonetary, privileges, and informational incentives.
  • In behavioral economics, incentive delivery and choice environment are critical.

Discussion Themes

When incentives are taken away at the end of a study, the desired behavior trails off. Social networks and support can help sustain behavior changes.

The design and delivery of an incentive is important to avoid the “undue influence” concern outlined in the Belmont Report.

Further research is needed to determine how to best tailor financial incentives for young people.

For More Information

For more information on behavioral economics, follow @DrCharleneWong #pctGR

Tags

@Collaboratory1, @DrCharleneWong, #BehavioralEconomics, #pctGR

June 30, 2017: The Yale Open Data Access (YODA) Project: Lessons Learned in Data Sharing

Speaker

Joseph Ross, MD, MHS
Section of General Internal Medicine, Yale School of Medicine
Center for Outcomes Research and Evaluation, Yale-New Haven Hospital

Topic

The Yale Open Data Access (YODA) Project: Lessons Learned in Data Sharing

Keywords

Pragmatic clinical trial; Clinical research; Health data; YODA; Yale; Open access

Key Points

  • The Yale Open Data Access (YODA) Project is committed to transparency and good stewardship of data usage.
  • The YODA Project tenet that “underreporting is scientific misconduct” emphasizes the importance of open data sharing.
  • About 50% of clinical trials are never published, and many are only partially reported.
  • The YODA Project maximizes the value of collected data while minimizing duplication of effort.
  • An application process and a secure platform help the YODA Project team prevent inappropriate distribution and protect patient privacy.
  • YODA’s third-party approach removes influence over access decisions and is in the best interest of all stakeholders.

Discussion Themes

No requests have ever been rejected, but there have been cases where YODA could not provide the data needed (e.g., CT scans from a trial, because of de-identification requirements and cost).

YODA initially published requests online right away but received pushback from investigators, who felt it was unfair for their requests to be made public before they could begin work. Requests are now published when investigators gain access to the data, which still preserves transparency.

The YODA Project is expensive, but funding has come from industry (e.g., Johnson & Johnson), so users continue to have access without a fee.

Incentivizing academicians to move data to a common platform is an ongoing challenge, because many feel they do not need SAS software and prefer to disseminate the data themselves; YODA will keep working toward its goal of open access.

Should journal editors require authors to clearly state their relationship to the data? Several authors published analyses from the SPRINT data release, yet nothing in the articles stated that those authors had no involvement in the trial.

For More Information

Read more about the YODA Project at http://yoda.yale.edu/

Tags
#pctGR, @YODAProject, @Yale, @PCTGrandRounds, @Collaboratory1, #HealthData

June 16, 2017: Project Baseline: Rationale, Design, and Progress

Speakers

Jessica Mega, MD, MPH, Verily

Ken Mahaffey, MD, Stanford Medicine

Topic

Project Baseline: Rationale, Design, and Progress

Keywords

Patient engagement; Patient-centric research; Outcomes research; Patient recruitment; Pragmatic clinical trial; Clinical research; Digital trials

Key Points

  • Project Baseline is a coalition to develop gold-standard data, tools, and technologies to create a holistic, aggregated view of health.
  • Verily, Duke University School of Medicine, Stanford Medicine, and Google will work together to acquire, organize, and analyze multidimensional types of data.
  • Health-related data will be collected via in-person visits, mobile apps, deep molecular profiling, sensors, and wearables.
  • Project Baseline will engage participants not as patients but as partners in planning and conducting the research.

Discussion Themes

Since the diagnostic testing will include incidental findings, who will be financially responsible for the downstream utilization?

Is there monetary compensation (and if so, how much), or compensation in other forms (e.g., devices to keep, services, or credits)?

What is the plan to encourage diversity among participants, so that the cohort is not composed mostly of young, well-educated white people?

Persons of low socioeconomic status and members of underserved minority groups are the highest priority for reducing population risk and health disparities. How can these populations be reached?

What is the plan for intellectual property that will be generated?

For More Information

Read more about Project Baseline at https://www.projectbaseline.com/. You can also watch the Overview Video on YouTube: https://youtu.be/ufOORB6ZNaA.

Tags
#pctGR, #ProjectBaseline, #PatientEngagement
@PCTGrandRounds, @Collaboratory1

 

Journal Editors Propose New Requirements for Data Sharing

On January 20, 2016, the International Committee of Medical Journal Editors (ICMJE) published an editorial in 14 major medical journals proposing that clinical researchers must agree to share the deidentified data set used to generate results (including tables, figures, and appendices or supplementary material) as a condition of publication in one of their member journals, no later than six months after publication. By changing the requirements for manuscripts they will consider for publication, the editors aim to ensure reproducibility (independent confirmation of results), foster data sharing, and enhance transparency. To meet the new requirements, authors will need to include a plan for data sharing as a component of clinical trial registration, including where the data will be stored and a mechanism for sharing them.

Evolving Standards for Data Reporting and Sharing

As early as 2003, the National Institutes of Health published a data sharing policy for research funded through the agency, stipulating that “Data should be made as widely and freely available as possible while safeguarding the privacy of participants, and protecting confidential and proprietary data.” Under this policy, federally funded studies receiving over $500,000 per year were required to have a data sharing plan that describes how data will be shared, ensures that shared data are available in a usable form for an extended period of time, and uses the least restrictive method for sharing research data.

In 2007, Congress enacted the Food and Drug Administration Amendments Act (FDAAA). Section 801 of the Act requires study sponsors to report certain kinds of clinical trial data within a specified interval to the ClinicalTrials.gov registry, where the information is made available to the public. Importantly, this requirement applies to any study classified as an “applicable clinical trial” (typically, an interventional clinical trial), regardless of whether it is conducted with NIH or other federal funding or supported by industry or academic funding. However, recent academic and journalistic investigations have demonstrated that overall compliance with FDAAA requirements is relatively poor.

In 2015, the Institute of Medicine (now the National Academy of Medicine) published a report that advocates for responsible sharing of clinical trial data to strengthen the evidence base, allow for replication of findings, and enable additional analyses. In addition, these efforts are being complemented by ongoing initiatives aimed at widening access to clinical trial data and improving results reporting, including the Yale University Open Data Access project (YODA), the joint Duke Clinical Research Institute/Bristol-Myers Squibb Supporting Open Access to clinical trials data for Researchers initiative (SOAR), and the international AllTrials project.

Responses to the Draft ICMJE Policy

The ICMJE recommendations arrive amid a growing focus on the integrity of clinical research, including reproducibility of results, transparent and timely reporting of trial results, and widespread data sharing. The release of the draft policy is amplifying ongoing national and international conversations taking place on social media and in prominent journals. Although many researchers and patient advocates have hailed the policy as timely and needed, others have expressed concerns, including questions about implementation and possible unforeseen consequences.

The ICMJE is welcoming feedback from the public regarding the draft policy at www.icmje.org and will continue to collect comments through April 18, 2016.

Resources

Journal editors publish editorial in 14 major medical journals stipulating that clinical researchers must agree to share a deidentified data set: Sharing clinical trial data: A proposal from the International Committee of Medical Journal Editors (Annals of Internal Medicine version). January 20, 2016.

A New England Journal of Medicine editorial in which deputy editor Dan Longo and editor-in-chief Jeffrey Drazen discuss details of the ICMJE proposal: Data sharing. January 21, 2016.

A follow-up editorial in the New England Journal of Medicine by Jeffrey Drazen: Data sharing and the Journal. January 25, 2016.

Editorial in the British Medical Journal: Researchers must share data to ensure publication in top journals. January 22, 2016.

Commentary in Nature from Stephan Lewandowsky and Dorothy Bishop: Research integrity: Don’t let transparency damage science. January 25, 2016.

National Public Radio interview on Morning Edition: Journal editors to researchers: Show everyone your clinical data with Harlan Krumholz. January 27, 2016.

Institute of Medicine (now the National Academy of Medicine) report advocating for responsible sharing of clinical trial data: Sharing clinical trial data: maximizing benefits, minimizing risk. National Academies Press, 2015.

Rethinking Clinical Trials Living Textbook Chapter, Acquiring and using electronic health record data, which describes the use of data collected in clinical practice for research and the complexities involved in sharing data. November 3, 2015.

NIH Health Care Systems Research Collaboratory data sharing policy. June 23, 2014.

List of International Committee of Medical Journal Editors (ICMJE) member journals.