February 22, 2024: Updated Template Provides Guidance for Reporting of Pragmatic Trial Results

An updated template from the NIH Pragmatic Trials Collaboratory provides guidance for the transparent reporting of the primary results of pragmatic clinical trials.

The template includes elements from the Consolidated Standards of Reporting Trials (CONSORT) statement and its extensions. It also addresses secondary use of electronic health record data, involvement of research partners and healthcare systems in the conduct of pragmatic trials, and special ethical and regulatory considerations.

Download the template.

The updated template is organized by the recommended reporting elements presented in the CONSORT checklist and draws on recent experiences and lessons learned from the NIH Collaboratory Trials. Appendices include links to CONSORT and its relevant extensions, the Pragmatic-Explanatory Continuum Indicator Summary (PRECIS-2) tools and resources, and examples of figures to include in pragmatic trial reports.

January 22, 2021: Is It Time to Embrace Preprints? A Conversation About the First 18 Months of medRxiv (Harlan Krumholz, MD, SM; Joseph Ross, MD, MHS)

Speakers

Harlan M. Krumholz, MD, SM
Harold H. Hines, Jr. Professor of Medicine and Public Health
Yale University

Joseph S. Ross, MD, MHS
Professor of Medicine and Public Health
Yale University

Topic

Is It Time to Embrace Preprints? A Conversation About the First 18 Months of medRxiv

Keywords

Preprints; Preprint server; medRxiv; Open science; Health science research; Research transparency; Preliminary research reports

Key Points

  • A preprint is a research manuscript yet to be certified by peer review and accepted for publication by a journal. A preprint server, like medRxiv, is an online platform dedicated to the distribution of preprints.

  • medRxiv is publisher-neutral. It is operated by Cold Spring Harbor Laboratory and managed in partnership with BMJ and Yale University.

  • Server submission requirements for authors help to mitigate concerns about preprints. These include clear posting criteria (ie, original research articles only), an established screening process, and a caution to users of preprints, including researchers, journalists, and the public, that states: “Preprints are preliminary reports of work that have not been peer-reviewed. They should not be relied on to guide clinical practice or health-related behaviors and should not be reported in news media as established information.”

  • Not allowed are commentaries, editorials, opinion pieces or essays, letters to editors, narrative reviews, medical-legal research, and case reports.

Discussion Themes

The concept of “living data” or “living analyses” has grown out of the pandemic crisis and could stay on as a feature of scientific communication.

How do you think the public conversation around a preprint may positively or negatively impact the peer review process itself?

How are academic institutions acknowledging preprints in the sense of “evidence of productivity” (as for academic promotion)?

Preprints can serve as a teaching opportunity not only for reminding scientists to be discerning readers of reported science, but also for reminding the media.

Learn more about medRxiv.

Tags

#pctGR, @Collaboratory1

 

November 3, 2020: Disseminating Trial Results: We Can Have Faster and Better

NIH Collaboratory investigators Drs. Greg Simon, Rachel Richesson, and Adrian Hernandez published an opinion piece in Healthcare arguing that clinical trial investigators should align their dissemination processes with those of industry-sponsored trials to favor speed, and that years-long delays in dissemination reduce the relevance of clinical research.

“Delays reduce the ability for researchers to apply trial findings to new research questions, impede clinicians from having the most up-to-date information, and perhaps most importantly, are a disservice to patients who could benefit from the information.”

The authors use experiences with pragmatic trials supported by the NIH Collaboratory to explore faster dissemination of results, and suggest the following solutions:

  • Real-time access to outcome data
  • Continuous data curation and cleaning
  • Immediate data analysis
  • Rapid reporting of trial results

Much change is needed to reach these goals. The authors suggest that by modeling processes after industry-sponsored trials, researchers may be able to improve the speed and quality of results reporting.

“Cultural incentives are aligned in industry sponsored trials to favor speed: readiness for generalizing topline results is considered valuable to shareholders, and the culture encourages a system where data are liquid, available, and continuously cleaned and curated, such that topline results can be reported within a timespan of two weeks rather than two years.”

As part of the NIH Collaboratory’s commitment to dissemination and sharing, all NIH Collaboratory Trials are expected to share data and resources, and topline results are reported in our weekly Grand Rounds Webinars.

 

October 25, 2019: Real-World Evidence for Drug Effectiveness Evaluation: Addressing the Credibility Gap (Richard Willke, PhD)

Speaker

Richard Willke, PhD
Chief Science Officer
ISPOR

Topic

Real-World Evidence for Drug Effectiveness Evaluation: Addressing the Credibility Gap

Keywords

Real-world evidence; Non-interventional studies; Health economics; ISPOR; Transparency; Reproducibility

Key Points

  • ISPOR is an international, multistakeholder nonprofit dedicated to advancing health economics and outcomes research excellence to improve decision making for health globally.
  • Key characteristics of credible and useful real-world evidence include:
    • Careful data collection or curation
    • Appropriate analytic methods
    • Good procedural practices for transparent study process
    • Replicability and reproducibility
    • Informed interpretation
    • Fit-for-purpose application
  • For transparency, it is recommended that researchers declare whether their study is a hypothesis-evaluating treatment effectiveness study or an exploratory study, and post the study protocol and analysis plan on a public study registration site prior to conducting the study analysis.

Discussion Themes

A draft white paper, Improving Transparency in Non-Interventional Research, is available for comment until November 15, 2019.

Sharing all study implementation parameters and definitions provides clarity on what was actually done and enables reproduction with confidence.

Potential registries for non-interventional real-world evidence studies were also discussed.

Read more about ISPOR.

Tags
#pctGR, @Collaboratory1, @ISPORorg

August 9, 2019: Open Science: Are we there yet? (Adrian Hernandez, MD)

Speaker

Adrian Hernandez, MD
Professor of Medicine
Vice Dean for Clinical Research
Duke University, School of Medicine

Topic

Open Science: Are We There Yet?

Keywords

Open science; Data sharing; Secondary analyses; Research collaboration

Key Points

  • Open science involves the responsible sharing of research data for the purpose of scientific advancement, integrity, and transparency.
  • Various stakeholders have made progress toward sharing clinical trial data.
  • Guiding principles of open science include appropriate access to research information; proper oversight with minimum barriers to data access; maintaining utility of data; an expectation that results of shared data will similarly be shared; and acknowledgment of those who contribute original data.
  • Despite efforts to support open science, no academic institution has yet adopted an open science policy.

Discussion Themes

Open science remains an important goal to build trust and expand knowledge.

Data sharing is not a traditional measure of academic success. What incentives would need to change in order to support open science?

Tags

#OpenScience, #DataSharing, #pctGR, @Collaboratory1, @texhern

July 12, 2019: medRxiv: A Paradigm Shift in Disseminating Clinical and Public Health Research (Harlan Krumholz, MD, SM, Joseph Ross, MD, MHS)

Speakers

Harlan M. Krumholz, MD, SM
Harold H. Hines, Jr. Professor of Medicine and Public Health
Yale University

Joseph S. Ross, MD, MHS
Associate Professor of Medicine and Public Health
Yale University

Topic

medRxiv: A Paradigm Shift in Disseminating Clinical and Public Health Research

Keywords

Open science; Clinical research dissemination; Preprints; medRxiv preprint server

Key Points

  • medRxiv (med archive) is a server for health science preprints. It is a free service to the research community, managed in partnership with BMJ and Yale.
  • Benefits of preprints in medicine include early sharing of new information; enabling less “publishable” studies to be more readily available; and facilitating replication and reproducibility studies.
  • medRxiv submissions require:
    • Following ICMJE guidance, including author names, contact info, affiliation
    • Funding and competing interests statements
    • Statement of IRB or ethics committee approval
    • Study registration (ClinicalTrials.gov or other ICMJE approved registry for trials, PROSPERO for reviews) or link to protocol
    • Data sharing availability statement
    • EQUATOR Network reporting guidelines checklists
  • The medRxiv preprint server urges caution in using and reporting preprints, and includes language explaining that preprints are preliminary reports of work that have not been peer-reviewed, should not be relied on to guide clinical practice or health-related behaviors, and should not be reported in news media as established information.

Discussion Themes

Preprint servers do not replace, but rather complement, peer review.

Preprints have the potential to serve as a vehicle for high-quality but “negative” results. If we teach students that a negative result is also a good result, preprints provide an avenue to walk the talk via open communication, which seems largely positive despite the limitations.

Read more about medRxiv.

Tags

#pctGR, @Collaboratory1, @jsross119, @hmkyale

June 4, 2018: New Article Explores Misleading Use of the Label “Pragmatic” for Some Randomized Clinical Trials

A recent study published in BMC Medicine found that many randomized controlled trials (RCTs) self-labeled as “pragmatic” were actually explanatory in nature, in that they assessed investigational medicines compared with placebo to test efficacy before licensing. Of the RCTs studied, one-third were pre-licensing, single-center, or placebo-controlled trials and thus not appropriately described as pragmatic.

Appropriately describing the design and characteristics of a pragmatic trial helps readers understand the trial’s relevance for real-world practice. The authors explain that RCTs suitably termed pragmatic compare the effectiveness of 2 available medicines or interventions prescribed in routine clinical care. The purpose of such pragmatic RCTs is to provide real-world evidence for which interventions should be recommended or prioritized.

The authors recommend that investigators use a standard tool, such as the CONSORT Pragmatic Trials extension or the PRECIS-2 tool, to prospectively evaluate the pragmatic characteristics of their RCTs. Use of these tools can also assist funders, ethics committees, and journal editors in determining whether an RCT has been accurately labeled as pragmatic.

The BMC Medicine article cites NIH Collaboratory publications by Ali et al. and Johnson et al., as well as the Living Textbook, in its discussion of pragmatic RCTs and the tools available to assess their relevance for real-world practice.

“Submissions of RCTs to funders, research ethics committees, and peer-reviewed journals should include a PRECIS-2 tool assessment done by the trial investigators. Clarity and accuracy on the extent to which an RCT is pragmatic will help [to] understand how much it is relevant to real-world practice.” (Dal-Ré et al. 2018)

Journal Editors Propose New Requirements for Data Sharing

On January 20, 2016, the International Committee of Medical Journal Editors (ICMJE) published an editorial in 14 major medical journals proposing that clinical researchers must agree to share the deidentified data set used to generate results (including tables, figures, and appendices or supplementary material) as a condition of publication in one of their member journals, no later than six months after publication. By changing the requirements for manuscripts they will consider for publication, the editors aim to ensure reproducibility (independent confirmation of results), foster data sharing, and enhance transparency. To meet the new requirements, authors will need to include a plan for data sharing as a component of clinical trial registration, specifying where the data will be stored and the mechanism for sharing them.

Evolving Standards for Data Reporting and Sharing

As early as 2003, the National Institutes of Health published a data sharing policy for research funded through the agency, stipulating that “Data should be made as widely and freely available as possible while safeguarding the privacy of participants, and protecting confidential and proprietary data.” Under this policy, federally funded studies receiving over $500,000 per year were required to have a data sharing plan that describes how data will be shared, ensures that shared data remain available in a usable form for an extended period of time, and uses the least restrictive method for sharing the research data.

In 2007, Congress enacted the Food and Drug Administration Amendments Act. Section 801 of the Act requires study sponsors to report certain kinds of clinical trial data within a specified interval to the ClinicalTrials.gov registry, where it is made available to the public. Importantly, this requirement applied to any study classified as an “applicable clinical trial” (typically, an interventional clinical trial), regardless of whether it was conducted with NIH or other federal funding or supported by industry or academic funding. However, recent academic and journalistic investigations have demonstrated that overall compliance with FDAAA requirements is relatively poor.

In 2015, the Institute of Medicine (now the National Academy of Medicine) published a report that advocates for responsible sharing of clinical trial data to strengthen the evidence base, allow for replication of findings, and enable additional analyses. In addition, these efforts are being complemented by ongoing initiatives aimed at widening access to clinical trial data and improving results reporting, including the Yale University Open Data Access project (YODA), the joint Duke Clinical Research Institute/Bristol-Myers Squibb Supporting Open Access to clinical trials data for Researchers initiative (SOAR), and the international AllTrials project.

Responses to the Draft ICMJE Policy

The ICMJE recommendations arrive amid a growing focus on the integrity of clinical research, including reproducibility of results, transparent and timely reporting of trial results, and widespread data sharing. The release of the draft policy is amplifying ongoing national and international conversations taking place on social media and in prominent journals. Although many researchers and patient advocates have hailed the policy as timely and needed, others have expressed concerns, including questions about implementation and possible unforeseen consequences.

The ICMJE is welcoming feedback from the public regarding the draft policy at www.icmje.org and will continue to collect comments through April 18, 2016.

Resources

Journal editors publish editorial in 14 major medical journals stipulating that clinical researchers must agree to share a deidentified data set: Sharing clinical trial data: A proposal from the International Committee of Medical Journal Editors (Annals of Internal Medicine version). January 20, 2016.

A New England Journal of Medicine editorial in which deputy editor Dan Longo and editor-in-chief Jeffrey Drazen discuss details of the ICMJE proposal: Data sharing. January 21, 2016.

A follow-up editorial in the New England Journal of Medicine by Jeffrey Drazen: Data sharing and the Journal. January 25, 2016.

Editorial in the British Medical Journal: Researchers must share data to ensure publication in top journals. January 22, 2016.

Commentary in Nature from Stephan Lewandowsky and Dorothy Bishop: Research integrity: Don’t let transparency damage science. January 25, 2016.

National Public Radio interview on Morning Edition: Journal editors to researchers: Show everyone your clinical data with Harlan Krumholz. January 27, 2016.

Institute of Medicine (now the National Academy of Medicine) report advocating for responsible sharing of clinical trial data: Sharing clinical trial data: maximizing benefits, minimizing risk. National Academies Press, 2015.

Rethinking Clinical Trials Living Textbook Chapter, Acquiring and using electronic health record data, which describes the use of data collected in clinical practice for research and the complexities involved in sharing data. November 3, 2015.

NIH Health Care Systems Research Collaboratory data sharing policy. June 23, 2014.

List of International Committee of Medical Journal Editors (ICMJE) member journals.

ClinicalTrials.gov Analysis Dataset Available from CTTI

As part of a project that examined the degree to which sponsors of clinical research are complying with federal requirements for the reporting of clinical trial results, the Clinical Trials Transformation Initiative (CTTI) and the authors of the study are making the primary dataset used in the analysis available to the public. The full analysis dataset, study variables, and data definitions are available as Excel worksheets from the CTTI website and on the Living Textbook’s Tools for Research page.


Researchers Find Poor Compliance with Clinical Trials Reporting Law

A new analysis of data from the ClinicalTrials.gov website shows that despite federal laws requiring the public reporting of results from clinical trials, most research sponsors fail to do so in a timely fashion—or, in many cases, at all. The study, published in the March 12, 2015 issue of the New England Journal of Medicine, was conducted by researchers at Duke University and supported by the NIH Collaboratory and the Clinical Trials Transformation Initiative (CTTI). The study’s authors examined trial results as reported to ClinicalTrials.gov and evaluated the degree to which research sponsors were complying with a federal law that requires public reporting of findings from clinical trials of medical products regulated by the U.S. Food and Drug Administration (FDA).

“We thought it would be a great idea to see how compliant investigators are with results reporting, as mandated by law,” said lead author Dr. Monique Anderson, a cardiologist and assistant professor of medicine at Duke University.

Monique L. Anderson, MD. Photo courtesy of Duke Medicine.

Using a publicly available database developed and maintained at Duke by CTTI, the authors were able to home in on trials registered with ClinicalTrials.gov that were highly likely to have been conducted within a 5-year study window and to be subject to the Food and Drug Administration Amendments Act (FDAAA). This federal law, which was enacted in 2007, includes provisions that obligate sponsors of non-phase 1 clinical trials testing medical products to report study results to ClinicalTrials.gov within 12 months of the trial’s end. It also describes allowable exceptions for failing to meet that timeline.

However, when the authors analyzed the data, they found that relatively few studies overall—just 13 percent—had reported results within the 12-month period prescribed by FDAAA, and less than 40 percent had reported results at any time between the enactment of FDAAA and the 5-year benchmark.

“We were really surprised at how untimely the reporting was—and that more than 66 percent hadn’t reported at all over the 5 years [of the study interval],” said Dr. Anderson, noting that although prior studies have explored the issue of results reporting, they have until now been confined to examinations of reporting rates at 1 year.

Another unexpected result was the finding that industry-sponsored studies were significantly more likely to have reported timely results than were trials sponsored by the National Institutes of Health (NIH) or by other academic or government funding sources. The authors noted that despite a seemingly widespread lack of compliance with both legal and ethical imperatives for reporting trial results, there has so far been no penalty for failing to meet reporting obligations, even though FDAAA spells out punishments that include fines of up to $10,000 per day and, in the case of NIH-sponsored trials, loss of future funding.

“Academia needs to be educated on FDAAA, because enforcement will happen at some point. There’s maybe a sense that ‘this law is for industry,’ but it applies to everyone,” said Anderson, who points out that this study is being published just as the U.S. Department of Health and Human Services and the NIH are in the process of crafting new rules that deal specifically with ensuring compliance with federal reporting laws.

According to Anderson, increased awareness of the law, coupled with stepped-up enforcement and infrastructure designed to inform researchers about their reporting obligations, have the potential to improve compliance with both the letter and the spirit of the regulations. “I think reporting rates will skyrocket after the rulemaking,” she says.

In the end, Anderson notes, reporting clinical trials results in order to contribute to scientific and medical knowledge is as much an ethical obligation for researchers as a legal one: “It’s something we really promise to every patient when they enroll on a trial.”


Read the full article here:

Anderson ML, Chiswell K, Peterson ED, Tasneem A, Topping J, Califf RM. Compliance with results reporting at ClinicalTrials.gov. N Engl J Med. 2015;372:1031-9. DOI: 10.1056/NEJMsa1409364.
Additional reading:

"Results of many clinical trials not being reported" (NPR)

"Clinical trial sponsors fail to publicly disclose report results, research shows" (Forbes.com)