Giant leaps in technology always bring questions and consternation, particularly in health care, where physicians have been fooled before by the promise of so-called revolutionary new digital tools. Many of us are so scarred by our frustrations with poorly designed electronic health records and other technology that we greet the coming age of AI with skepticism and concern.
The truth is, generative AI and AI-enabled tools represent a sea change in our ability to process information and understand complex datasets, making transformative change in society and medicine inevitable. In the same way that life—and medical practice—has been forever changed by the internet and smartphones, so too will it be radically changed by augmented intelligence (AI), often called artificial intelligence.
And so we have to get it right.
When the AMA released its “Principles for AI Development, Deployment and Use” (PDF) last fall, it was immediately followed by an AMA survey showing that four in 10 physicians are equally excited and concerned about AI applications in health care and how they may affect the patient-physician relationship. While 70% of respondents recognized AI’s potential to support diagnoses and improve workflow efficiency, large shares worry about patient privacy and about depersonalizing the human interactions that have always been at the center of health care.
Bridge the confidence gap
The AMA’s principles on AI were created to begin closing this confidence gap, providing a clear road map for the administration, Congress and industry stakeholders to follow as discussions heat up around AI governance, regulation and appropriate use in the health care setting.
The principles—created with insights from AMA subject-matter experts, AMA physician members, informaticists and national medical specialty organizations with expertise in AI—build on existing AMA policies, first adopted in 2018, that encourage a comprehensive government approach to AI governance to mitigate risks to patients and liability concerns for physicians.
The principles state that, above all else, health care AI must be designed, developed and deployed in a manner that is ethical, equitable, responsible and transparent. They also state that AI use requires a risk-based approach in which the level of scrutiny, validation and oversight is proportionate to the potential overall or disparate harm and the consequences the AI system might introduce.
The AMA spends a lot of time engaging with physicians about technology trends, and, truthfully, physicians’ priorities for digital health adoption—whether it’s AI, telehealth or some other form of digitally enabled care—are simple. We need to know: Does it work? Will it work in my practice? Will insurance cover its use? And, importantly, who is accountable if something goes wrong?
Liability concern is real
As with all digital health tools, liability is a potential barrier to AI implementation and uptake. If a patient has an adverse reaction to a treatment because an AI tool or algorithm recommended a certain prescription based on the patient’s data, who bears the burden of responsibility: the physician, the company that owns the AI algorithm, or the individual or team that built and trained it?
The U.S. Department of Health and Human Services’ Office for Civil Rights recently issued its long-awaited nondiscrimination rule, which included a problematic provision creating new liability for physicians who use AI-enabled technologies and other clinical algorithms that could result in discriminatory harms. While the final rule is significantly more permissive than an earlier proposal, it remains concerning: it places new duties on physicians and creates the risk of penalties should they rely on algorithm-enabled tools that produce discriminatory harms.
The AMA continues to urge physicians to carefully consider these new requirements when deciding whether to incorporate AI tools into their practices. Because transparency requirements for these tools are lacking, physicians must be diligent in their selections and have proper policies in place to guide use within a practice. Transparency and explainability regarding the design, development and deployment processes should be mandated by law where possible, including disclosure of potential sources of inequity in problem formulation, inputs and implementation.
Physicians should understand that when they use AI-enabled tools and systems without transparency from the AI developer, their risk of liability for relying on that AI will likely increase. The need for full transparency is greatest where AI-enabled systems have the greatest impact on direct patient care, such as AI-enabled medical devices, clinical decision support and AI-driven chatbots that interact with patients.
AI and AI-enabled tools already in wide use, including large language models such as OpenAI’s ChatGPT, provide a tantalizing glimpse into the future of medical practice. I’m optimistic about a future in which technology eliminates, or greatly reduces, the mountain of administrative hassles and tedious tasks that too often fill our days. The promise of technology is that it will free us from these burdens—and major sources of burnout—so that we can dedicate more time to our patients. That’s good for physicians, good for patients and good for the health of our country.
But for that to happen, there must be trust on the part of doctors and patients. And we need the right regulatory environment to build that trust. AI is too powerful and too revolutionary to leave questions about liability and governance unanswered.