Success with health care AI comes down to teamwork

By Timothy M. Smith, Contributing News Writer

Augmented intelligence (AI) in health care isn’t a monolith. There are countless organizations working on ways to improve screening, diagnosis and treatment using AI.

But for health care AI to rightly earn the trust of patients and physicians, this multitude has to come together. Developers, deployers and end users of AI—often called artificial intelligence—all need to embrace some core ethical responsibilities.

An open-access, peer-reviewed essay published in the Journal of Medical Systems summarizes these crosscutting responsibilities. And though they don’t all require physicians to take the foremost role, each can make or break the patient-physician encounter.

Learn more about artificial intelligence versus augmented intelligence and the AMA’s other research and advocacy in this vital and emerging area of medical innovation.

“Physicians have an ethical responsibility to place patient welfare above their own self-interest or obligations to others, to use sound medical judgment on patients’ behalf and to advocate for patients’ welfare,” wrote the authors, who developed this framework during their tenure at the AMA.

“Successfully integrating AI into health care requires collaboration, and engaging stakeholders early to address these issues is critical,” they wrote.

Learn about three questions that must be answered to identify health care AI that physicians can trust.

The essay summarizes the responsibilities of developers, deployers and end users in planning and developing AI systems, as well as in implementing and monitoring them.

“Most of these responsibilities have more than one stakeholder,” said Kathleen Blake, MD, MPH, one of the essay’s authors and a senior adviser at the AMA. “This is a team sport.”

In planning and development, stakeholders should:

Make sure the AI system addresses a meaningful clinical goal. “There are a lot of bright, shiny objects out there,” Dr. Blake said. “A meaningful goal is something that you, your organization and your patients agree is important to address.”

Ensure it works as intended. “You need to be sure what it does, as well as what it doesn’t do.”

Explore and resolve legal implications prior to implementation, and agree on oversight for safe and fair use and access. Pay particular attention to liability and intellectual property.

Develop a clear protocol to identify and correct for potential bias. “People don’t get up in the morning trying to create biased products,” Dr. Blake said. “But deployers and physicians should always be asking developers what they did to test their products for potential bias.”

Ensure appropriate patient safeguards are in place for direct-to-consumer tools that lack physician oversight. As with dietary supplements, physicians should ask patients, “Are you using any direct-to-consumer products I should be aware of?”

In implementation and monitoring, stakeholders should:

Make clinical decisions, such as diagnosis and treatment. “You need to be very certain whether a tool is for screening, risk assessment, diagnosis or treatment,” Dr. Blake said.

Have the authority and ability to override the AI system. For example, there may be something you know about a patient that causes you to question the system’s diagnosis or treatment.

Ensure meaningful oversight is in place for ongoing monitoring. “You want to be sure its performance over time is at least as good as it was when it was introduced.”

See to it that the AI system continues to perform as intended. Do this through performance monitoring and maintenance.

Make sure ethical issues identified at the time of purchase and during use have been addressed. These include safeguarding privacy, securing patient consent and providing patients access to their records.

Establish clear protocols for enforcement and accountability, including one that ensures equitable implementation. “For example, what if an AI product improved care but was only deployed at a clinic in the suburbs, where there was a high rate of insured individuals? Could inequitable care across a health system or population result?” Dr. Blake asked.

A companion AMA webpage features additional highlights from the essay, as well as links to relevant opinions in the AMA Code of Medical Ethics.

Learn more about the AMA's commitment to helping physicians harness health care AI in ways that safely and effectively improve patient care.
