Grand Rounds March 8, 2024: Public-Private Partnerships in the Trustworthy Health AI Ecosystem (Michael Pencina, PhD; Brian Anderson, MD)

Speakers

Michael Pencina, PhD
Chief Data Scientist, Duke Health
Duke University School of Medicine

Brian Anderson, MD
CEO, Coalition for Health AI

Keywords

Artificial Intelligence, AI, Coalition for Health AI (CHAI), Duke ABCDS, Governance, Regulation

Key Points

  • The regulatory landscape for AI is changing rapidly. A number of government agencies are working to establish their positions on AI, including the FDA and the U.S. Department of Health and Human Services Office for Civil Rights. The Office of the National Coordinator for Health Information Technology (ONC) published its final rule on AI in December.
  • We are living in a “wild west” of algorithms. There is so much focus on development and technological progress, and not enough attention to value, quality, ethical principles, or health equity implications.
  • There are principles we can apply for the responsible development and use of AI, such as ensuring that AI technology serves humans; defining the task we want the AI tool to accomplish; describing what the successful use of the AI tool looks like; and creating transparent systems for continuously testing and monitoring AI tools.
  • Three years ago, Duke created an Algorithm-Based Clinical Decision Support (ABCDS) oversight function that governs and monitors the application of algorithms and AI in the health system. That governance is now changing as AI moves from predictive models to large language models.
  • This is a complex environment in which not everyone speaks the same language: clinicians, developers, patients, informaticists, statisticians, and more. It is very important that development teams stay connected with what is happening at the health system level and with the clinical teams on everything they build.
  • Model evaluation follows the FDA process, moving from model development through silent evaluation and effectiveness evaluation to general deployment, with evaluations at regular intervals (see the sketch after this list).
  • The Coalition for Health AI (CHAI) is a nonprofit that brings together more than 1,400 private sector organizations, including health care systems, payors, device manufacturers, technology companies, and patient advocates, plus U.S. government partners, to provide a framework for the landscape of health AI tools to ensure high quality care, increase trust among users, and meet health care needs.
  • Generative AI will become more impactful and ubiquitous. There is an urgent need to define what is safe and effective in this space and to rethink how we regulate and align generative AI.
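
As a rough illustration of that stage-gated lifecycle, the Python sketch below models a registry entry that moves from development through silent and effectiveness evaluation to general deployment, and flags when the next scheduled evaluation is due. The stage names, the 90-day review cadence, and the Stage/ModelRecord types are illustrative assumptions only, not part of Duke's actual ABCDS tooling.

# Minimal sketch (hypothetical, not Duke's ABCDS implementation): a registry entry
# that tracks where a model sits in the stage-gated lifecycle and when its next
# scheduled evaluation is due. Stage names and the review interval are assumptions.

from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum


class Stage(Enum):
    DEVELOPMENT = "model development"
    SILENT_EVALUATION = "silent evaluation"                  # runs on live data, output hidden from clinicians
    EFFECTIVENESS_EVALUATION = "effectiveness evaluation"    # limited clinical use with outcome tracking
    GENERAL_DEPLOYMENT = "general deployment"                # broad use with ongoing monitoring


# Promotion moves exactly one stage forward, mirroring a stage-gated review process.
NEXT_STAGE = {
    Stage.DEVELOPMENT: Stage.SILENT_EVALUATION,
    Stage.SILENT_EVALUATION: Stage.EFFECTIVENESS_EVALUATION,
    Stage.EFFECTIVENESS_EVALUATION: Stage.GENERAL_DEPLOYMENT,
}


@dataclass
class ModelRecord:
    name: str
    stage: Stage = Stage.DEVELOPMENT
    review_interval: timedelta = timedelta(days=90)   # assumed re-evaluation cadence
    last_review: date = field(default_factory=date.today)

    def promote(self, approved_by_governance: bool) -> None:
        """Advance one stage, but only with a documented governance approval."""
        if not approved_by_governance:
            raise PermissionError("Promotion requires governance committee approval.")
        if self.stage is Stage.GENERAL_DEPLOYMENT:
            return  # already deployed; remains under continuous monitoring
        self.stage = NEXT_STAGE[self.stage]
        self.last_review = date.today()

    def evaluation_due(self, today: date | None = None) -> bool:
        """True when the model is overdue for its next scheduled evaluation."""
        today = today or date.today()
        return today - self.last_review >= self.review_interval


# Example: register a hypothetical risk model and walk it through the first gate.
record = ModelRecord(name="example-risk-model-v1")
record.promote(approved_by_governance=True)   # development -> silent evaluation
print(record.stage.value, "| evaluation due:", record.evaluation_due())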


Discussion Themes

-The field is really exploding. How are you forecasting how the federation will come together to address the potential volume of algorithms? Part of the challenge is that it is very important to build consensus quickly and include as many voices as we can. We are identifying as many use cases as we can, building consensus around a set of shared definitions and frameworks, and then fleshing those out in the nuances of specific use cases. The other challenge is that, once we have that defined, we need to share what we learn freely for wide adoption. We cannot do this in silos.

-How are you approaching ethics in AI? How do you envision including an ethical component in algorithm development? For our ABCDS team at Duke, we have an ethicist on the team to make sure our principles are applied. This is not enough; we will need more as the field develops.

Tags

#pctGR, @Collaboratory1