“We're a long way from robots running off with patient care,” said Richard Frank, MD, PharmD, a member of the AMA Current Procedural Terminology (CPT®) Editorial Panel.
Dr. Frank, the former chief medical officer of Siemens Healthineers, made that remark at the conclusion of the AMA CPT & RBRVS 2025 Annual Symposium, held virtually this year.
The event gives speakers a forum to explain the coming year’s changes to the CPT code set, often referred to as the “language of medicine,” and to describe how the codes have been updated to adapt to new technology, innovation and developments in the practice of medicine.
Naturally, augmented intelligence (AI)—often called artificial intelligence—was the subject of much discussion during this year’s event.
There are more than 11,000 CPT codes in use, and the 2025 code set brings 420 updates, including seven for services driven by AI.
Dr. Frank noted that, although the use of AI in medical decision-making is expected to boom, “these autonomous devices are still going to be under the judgment of the prudent physician making those decisions for their individual patient.”
Dr. Frank should know. He co-chaired the AI working group of the AMA-convened Digital Medicine Payment Advisory Group, which generated the content for the CPT code set’s Appendix S, where the CPT AI Taxonomy is housed.
The taxonomy was introduced in 2022 and has been implemented in CPT codes to classify AI medical services and procedures as assistive, augmentative or autonomous. That classification is based on the work performed by the AI application on behalf of the physician or other qualified health care professional.
The new codes added for 2025 are for augmentative AI data analysis involved in:
- Electrocardiogram measurements, 0902T and 0932T.
- Medical chest imaging, 0877T–0880T.
- Image-guided prostate biopsy, 0898T.
They are all Category III CPT codes, which means they are temporary and used for emerging technologies, services and procedures. In all, 30% of the additions to the 2025 code set were Category III codes.
“Augmentative” refers to an AI level that analyzes or quantifies data in a clinically meaningful way, but still requires interpretation by a physician or other qualified health care professional.
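For readers who want to see the taxonomy’s structure at a glance, here is a minimal sketch in Python of how the Appendix S levels and the seven new 2025 codes named above might be modeled. The data structure, names and one-line level summaries are illustrative assumptions for this sketch, not part of the CPT code set itself.

```python
from enum import Enum

class TaxonomyLevel(Enum):
    """The three CPT AI Taxonomy levels from Appendix S, classified by the
    work the AI application performs on behalf of the physician or other
    qualified health care professional (summaries paraphrased here)."""
    ASSISTIVE = "assistive"        # detects clinically relevant data; no analysis
    AUGMENTATIVE = "augmentative"  # analyzes/quantifies data; physician interprets
    AUTONOMOUS = "autonomous"      # interprets data and draws its own conclusions

# The seven AI-related Category III additions for 2025 named above,
# all classified as augmentative data analysis.
NEW_2025_AI_CODES: dict[str, tuple[str, TaxonomyLevel]] = {
    "0902T": ("electrocardiogram measurements", TaxonomyLevel.AUGMENTATIVE),
    "0932T": ("electrocardiogram measurements", TaxonomyLevel.AUGMENTATIVE),
    "0877T": ("medical chest imaging", TaxonomyLevel.AUGMENTATIVE),
    "0878T": ("medical chest imaging", TaxonomyLevel.AUGMENTATIVE),
    "0879T": ("medical chest imaging", TaxonomyLevel.AUGMENTATIVE),
    "0880T": ("medical chest imaging", TaxonomyLevel.AUGMENTATIVE),
    "0898T": ("image-guided prostate biopsy", TaxonomyLevel.AUGMENTATIVE),
}

def taxonomy_level(code: str) -> TaxonomyLevel:
    """Return the Appendix S level for one of the new 2025 codes."""
    _service, level = NEW_2025_AI_CODES[code]
    return level

print(taxonomy_level("0902T"))  # TaxonomyLevel.AUGMENTATIVE
```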
Many more services driven by AI and other digital health tools are anticipated, prompting the CPT Editorial Panel to convene the AMA Digital Medicine Coding Committee, led by Dr. Frank, an internist, and his co-chair, Mark Synovec, MD, a pathologist and former chair of the CPT Editorial Panel.
“It clearly became evident, with the advent of more and more codes related to AI, that we really needed to bring that expertise closer in-house and so, hence, the formation of the Digital Medicine Coding Committee,” said Dr. Synovec, president of the independent Topeka Pathology Group, during the symposium.
In general, the committee will provide advisory input to the CPT Editorial Panel, and seek to “create a consistency and predictability in the code set” regarding AI and digital medicine, Dr. Synovec explained.
Specifically, the committee will:
- Frame goals of AI and digital medicine codification and payment in terms of harmonization, synchronization, and alignment with the Food and Drug Administration (FDA) and Centers for Medicare & Medicaid Services (CMS).
- Perform critical review of digital health code-change applications.
- Work on the Horizon 2030 project seeking to ensure that the AI definitions in Appendix S remain contemporary through the end of this decade.
- Consider coding solutions for autonomous AI and population-health services.
The committee includes representatives from several physician specialties as well as a seat formally reserved for an FDA representative, plus informal representation from CMS.
“They have been sitting in and listening and providing input where necessary,” Dr. Synovec said of CMS.
CPT refers to doctors’ work, not AI
In a separate presentation, Dr. Frank explained how AI-driven services are described in the CPT code set “by the impact that the technology has on the work of the physician or qualified health professional, both in the amount of work and in the changes in the way the service is performed.”
He quoted from the Appendix S introduction, which explains why the term “AI” remains undefined in both the taxonomy and the code set.
“In health care, there is no single product, procedure or service for which the term ‘AI’ is sufficient— or even necessary—to describe its intended clinical use or utility,” Dr. Frank said.
The taxonomy is intended to describe the work done by algorithms or software, he explained.
“The use of the term ‘AI’ in the CPT context is not intended to encompass nor constrain the full scope of innovations characterized as the work done by machines, and it is not intended to convey that the physician is being replaced,” he added. “In fact, it is quite the opposite—the function of a CPT code is to describe the physician work necessary to perform a medical service.”
During an audience Q&A session, Dr. Frank noted that physicians and others who use digital medicine and AI tools in their everyday practice are being asked to identify coding gaps and solutions for the CPT Editorial Panel to consider.
From AI implementation to EHR adoption and usability, the AMA is fighting to make technology work for physicians, ensuring that it is an asset to doctors—not a burden.
Feds not offering much guidance
The importance of the work of the CPT Editorial Panel to provide some structure around AI use and payment in health care was underscored in another session by Shannon Curtis, the AMA’s assistant director for federal affairs, who gave an overview of government AI regulations and policies—such as they are.
“The payment landscape for AI is unsettled—if somewhat nonexistent—and a challenge that everybody's trying to deal with,” Curtis said.
The main government agency working on AI is the FDA, which has the authority to regulate products for safety and efficacy, but its scope is limited to instances in which the AI application qualifies as a medical device.
She noted that the FDA has not developed a system specifically designed for regulating AI-enabled devices, so these products fall under the same paradigm as any other medical device. But even within these limits, the FDA has already authorized for market around 1,000 AI-enabled devices.
AMA advocacy has focused on requiring transparency about where AI is being used and on the ability to explain how it is being used. However, the FDA has not done much work in these areas yet, nor has it updated its labeling requirements, Curtis said. This is “something we're watching very closely,” she added.
The Federal Trade Commission, the agency charged with consumer protection, has broader and more general authority over AI. This includes the power to take action against deceptive practices, misleading claims and questionable data integrity, but it has only used these enforcement powers occasionally.
“None of it has been health care-focused yet,” Curtis said. “They do have authority to go after bad actors on some elements going forward, so we'll be watching very closely.”
The Department of Health and Human Services’ Office for Civil Rights, under the Affordable Care Act’s Section 1557 nondiscrimination provision, did issue a rule prohibiting algorithms—including algorithms powered by AI—whose use results in discrimination.
“This new final rule is going to require physicians and others to use reasonable efforts to identify and mitigate risks from algorithms including AI,” Curtis said. “Which means that physicians and other [HIPAA]-covered entities under that rule do have some new obligations to ensure that there's not discriminatory harms from AI and they could face some liability under this section going forward.”
Lastly, the Office of the Assistant Secretary for Technology Policy (formerly the Office of the National Coordinator for Health IT) has issued an AI-transparency regulation as part of its EHR-certification process.
The AMA would like to see this limited regulation expanded.
“We are very pleased to see some of the first true AI-transparency regulations,” Curtis said, adding that the AMA will be working with the assistant secretary’s office to ensure enforcement.
Otherwise, however, there are not going to be any assurances from the federal government that AI-enabled tools—at least those that are not medical devices—work, perform well, or won’t have an output that is “riddled with errors,” Curtis said.
“We really do need more federal action to ensure the safety and performance of these AI tools going forward,” she added. “The AMA does have concerns with the current oversight and structure that we've seen and the lack of action that we've seen.”
Other AMA concerns include the lack of consensus standards for AI design, development and deployment, and even the lack of agreement on what “good AI looks like,” Curtis said, adding that there also need to be standards for data validation and for determining who is liable when the use of AI leads to patient harm.
Without standards, it’s a “Wild West” environment, with vendors “doing whatever they want to try to sell a product,” she added.
“We've done a lot of advocacy in this space with the goal of ensuring the design, development and deployment of health care AI is transparent, ethical, equitable and responsible,” Curtis said.