Augmented intelligence (AI), often called artificial intelligence, holds huge potential to provide clinical decision support. But there is also serious concern that machine learning and clinical algorithms can introduce bias and perpetuate health inequity.
A new topic in this discussion is ChatGPT (generative pretrained transformer), a large language model developed by OpenAI that is trained on huge amounts of data to mimic human conversation.
Researchers measured ChatGPT’s ability to perform clinical reasoning by testing it on the United States Medical Licensing Examination (USMLE) and found that it performed at or near the exam’s 60% passing threshold without any specialized training or reinforcement.
But the large language model’s success on a written exam must be put in proper context, said John Halamka, MD, MS, president of the Mayo Clinic Platform.
“Generative AI is not thought, it's not sentience,” Dr. Halamka said during an episode of “AMA Update.”
According to a ChatGPT-generated description of the “AMA Update” interview based on a full transcript, the episode’s “discussion touches on the importance of credible sources and the potential for misinformation, as well as the need to balance the reduction of human burden with the risk of harm from incorrect diagnoses.”
AI could cut clerical burdens
Dr. Halamka described how he used ChatGPT to help him write a news release. The first draft was “perfect, eloquent and compelling,” but “totally wrong,” requiring him to correct all of the material facts.
Even so, Dr. Halamka noted that he finished the task in five minutes rather than the hour it would normally have taken without assistance.
“In the short term, you'll see it used for administrative purposes, for generating text that humans then edit to correct the facts, and the result is reduction of human burden,” he said.
Given the Great Resignation in medicine and what Dr. Halamka calls a “crisis of staffing” in health care, “burden reduction is actually a huge win.”
Learn more from an article—“Artificial Intelligence in Medicine & ChatGPT: De-Tether the Physician”—that was co-written by AMA President-elect Jesse Ehrenfeld, MD, MPH, and published in the Journal of Medical Systems.
How tomorrow’s doctors may use AI
But, he warned, using ChatGPT now to diagnose a complex medical condition holds “high potential for harm,” because most generative models are trained on popular material that may contain misinformation or deliberate disinformation rather than on rigorously peer-reviewed scientific literature.
“I would love to be able to say: We programmed ChatGPT with 10 million deidentified patient charts, and patients like the one in front of you had the following treatments from other clinicians in the past,” Dr. Halamka said. “That would be lovely. We're not there yet.”
Dr. Halamka added that if this generation of physicians can learn how to use AI to cut administrative burdens, tomorrow’s physicians will have more time to spend with patients.
“The next generation of physicians will be less memorizers and more knowledge navigators,” Dr. Halamka said. “If, in my generation, we can take out 50% of the burden, the next generation will have more joy in practice.”
Find out about augmented intelligence versus artificial intelligence in medicine and AMA policy on health care AI.
Also, read the JAMA editorial, “Nonhuman ‘Authors’ and Implications for the Integrity of Scientific Publication and Medical Knowledge.”
“AMA Update” covers health care topics affecting the lives of physicians and patients. Hear from physicians and experts on public health, advocacy issues, scope of practice and more. Catch every episode by subscribing to the AMA’s YouTube channel or the audio podcast version, which features educational presentations and in-depth discussions.