Ethics for Artificial Intelligence and Machine Learning in Pragmatic Clinical Trials
Section 2
Institutional Review Board Approval
One challenge to the responsible conduct of digital PCT research involving AI/ML is that only studies meeting the definition of human subjects research under the US Human Subjects Research Regulations (including the Common Rule and the FDA's commensurate rules) are obligated to seek approval from an appropriately constituted institutional review board (IRB) (see Living Textbook chapter Identifying Those Engaged in Research). Under these regulations, a human subject is a living individual about whom an investigator obtains information through an intervention or interaction under study, or whose identifiable private information or biospecimens are obtained or used. Informed consent is required for research the IRB deems to pose more than minimal risk. If the research poses no more than minimal risk, the IRB will determine whether consent can be waived or altered.
PCTs that test or use AI/ML systems sometimes involve only the collection or secondary analysis of de-identified data and may therefore be exempt from IRB review. These trials can also qualify for a waiver of consent for the use of identifiable health data when, among other criteria, the research poses no more than minimal risk and could not practicably be carried out without the waiver. For further discussion, see the Waivers and Alterations section of the Consent, Waivers of Consent, and Regulatory Notification chapter. A full description of HIPAA can also be found in the Gaining Permission to Use Real-World Data section of the Acquiring Real-World Data chapter.
Beyond pernicious bias experienced at the individual level, AI/ML can also engender harms to communities and wider social groups (Doerr and Meeder 2022). However, the Common Rule's criteria for IRB approval of research (§46.111) state that: “The IRB should not consider possible long-range effects of applying knowledge gained in the research (e.g., the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility.” Although different IRBs might interpret this exclusion differently, it may limit opportunities for ethical reflection on, and protection against, the algorithmic injustices that AI/ML systems are prone to perpetuate absent ongoing oversight. Some authors therefore argue that IRBs might not be the most appropriate oversight body for digital PCTs involving AI/ML, owing to IRBs' limited scope and individual-specific assessment of benefits (Spector-Bagdady et al. 2022).
Investigator tip: Consistent with recent recommendations for research funders (Bernstein et al. 2021), digital PCT investigators could prospectively assess their proposed research and development of an AI/ML system to describe potential risks to society, identify subgroups within society that might be particularly affected, and commit to risk mitigation strategies.
REFERENCES
Bernstein MS, Levi M, Magnus D, Rajala BA, Satz D, Waeiss Q. 2021. Ethics and society review: Ethics reflection as a precondition to research funding. Proc Natl Acad Sci. 118(52):e2117261118. doi:10.1073/pnas.2117261118. PMID: 34934006.
Doerr M, Meeder S. 2022. Big health data research and group harm: the scope of IRB review. Ethics Hum Res. 44(4):34-38. PMID: 35802789.
Spector-Bagdady K, Rahimzadeh V, Jaffe K, Moreno J. 2022. Promoting ethical deployment of artificial intelligence and machine learning in healthcare. Am J Bioeth. 22(5):4-7. doi:10.1080/15265161.2022.2059206. PMID: 35499568.