Clinical algorithms can help guide clinical decision-making, but if they are not developed carefully they can also introduce bias and racism that threaten health and perpetuate inequities already experienced by historically marginalized communities.
At the request of Congress, the U.S. Agency for Healthcare Research and Quality (AHRQ) is examining how clinical algorithms may introduce bias into clinical decision-making, and the AMA has provided information to aid the effort.
AHRQ, an agency of the Department of Health and Human Services, issued a request for information on how algorithms can introduce bias and subsequently “influence access to care, quality of care, or health outcomes for racial and ethnic minorities and people who are socioeconomically disadvantaged.”
It defined clinical algorithms as "a set of steps that clinicians use to guide decision-making in preventive services (such as screening), diagnosis, clinical management, or other assessments to improve a patient's health," noting that they may be informed by patient-specific characteristics, including sociodemographic and physiological factors.
“The use of race or ethnicity in clinical algorithms used in cardiology, nephrology, obstetrics, and urology, among others, has been questioned and subjected to close scrutiny,” AMA Executive Vice President and CEO James L. Madara, MD, wrote in a letter to David Meyers, MD, AHRQ’s acting director.
“It is clear that a comprehensive assessment of the use of race and ethnicity data in clinical algorithms is vital to understand the extent of current use and ensure that their inclusion does not reinforce pre-existing inequities in care,” the letter adds.
Dr. Madara noted that AHRQ is “ideally situated” to conduct and fund research on the use of race and ethnicity data in clinical algorithms and their potential for contributing to “medical racism and bias in clinical decision-making.”
Examples of bias cited
The letter references a commercially used algorithm that used health costs as a proxy for health in order to target patients for high-risk care-management programs. Researchers found that Black patients were consistently assigned lower risk scores than similarly situated white patients, as the algorithm failed to account for systemic and long-standing inequities in spending on Black patients.
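A minimal, self-contained sketch (hypothetical data and thresholds, not drawn from the letter or from the published study) can show how that proxy choice plays out: if spending is systematically lower for one group at the same level of need, ranking patients by spending screens out equally sick members of that group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)        # 1 = group with historically lower spending
need = rng.normal(size=n)                 # true health need, same distribution in both groups
# Observed spending tracks need, but is systematically lower for group 1.
cost = 5_000 + 2_000 * need - 1_500 * group + rng.normal(0, 500, size=n)

# "Risk score" = spending (the proxy); the top 10% are referred to care management.
referred = cost >= np.quantile(cost, 0.90)
for g in (0, 1):
    print(f"group {g}: mean need of referred patients = {need[referred & (group == g)].mean():.2f}")
# Referred patients from group 1 are sicker on average, meaning equally sick
# patients in that group received lower scores and were screened out.
```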
It also notes the initiative of the American Society of Nephrology and the National Kidney Foundation to review and reconsider the inclusion of race in a key measure for calculating estimated kidney function, the estimated glomerular filtration rate (eGFR), which for decades has been automatically adjusted to give a higher number for Black patients. This measurement may suggest that Black patients’ kidneys are healthier than they actually are, which can hinder their ability to receive appropriate health care, including a potentially lifesaving transplant.
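The letter does not name a specific equation, but one widely used formula, the 2009 CKD-EPI creatinine equation, illustrates the kind of race-based correction factor under review: it multiplies the result by a fixed coefficient of 1.159 when a patient is recorded as Black. The sketch below is a minimal Python rendering of that published equation, showing how that single coefficient changes the reported value for otherwise identical inputs.

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool) -> float:
    """2009 CKD-EPI creatinine equation, in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race-based correction factor now under review
    return egfr

# Same creatinine, age and sex; only the recorded race differs.
print(egfr_ckd_epi_2009(1.4, 60, female=False, black=False))  # ≈ 54
print(egfr_ckd_epi_2009(1.4, 60, female=False, black=True))   # ≈ 63
```

The higher number on the second line is what can make a Black patient's kidney disease appear less advanced than an identical white patient's, delaying referral or transplant eligibility.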
“Efforts like this one, involving stakeholders from medical societies, patient organizations, and related specialists, can provide insights and a potential framework for the meaningful review of clinical algorithms and their potential for perpetuating medical racism and bias in clinical decision-making,” Dr. Madara wrote.
Policy directing efforts
The AMA is in the process of convening organizations to gather information about the use of clinical algorithms and to create an action plan to address problems that are uncovered.
This effort stems from policy adopted at the November 2020 AMA Special Meeting, which directed the AMA to:
- Collaborate with appropriate stakeholders and content experts to develop recommendations on how to interpret or improve clinical algorithms that currently include race-based correction factors.
- Support research that promotes antiracist strategies to mitigate algorithmic bias in medicine.
Dr. Madara added that patients have “a fundamental right to know” the risks, benefits and alternatives of any health care intervention they are considering, including whether an algorithm was used to inform clinical decision-making. Patients also have the right to know whether their data will be used to develop an algorithm, and any such use should occur only on an opt-in basis.
“The AMA believes it is vital that all providers understand how the clinical algorithms they rely on to provide appropriate and equitable care in practice are developed,” the letter says. “Over-reliance on any algorithm, particularly without an understanding of what its most effective uses are, can create a risk for amplifying and perpetuating biases that are present in the data, including any bias based in race or ethnicity.”