Artificial intelligence can help health care providers and patients predict and manage risks, but to what end? Its use might heighten patient anxiety, increase interventions that could add complications and side effects, or impinge on a clinician's judgment.

Paul Scherz explored these concepts in a CHA webinar in April called "AI and the Ethics of Precision Medicine," part of a webinar series on AI ethics in health care organized by the Center for Theology and Ethics in Catholic Health.
Scherz is the Our Lady of Guadalupe Professor of Theology at the University of Notre Dame and author of "The Ethics of Precision Medicine: The Problems of Prevention in Healthcare."
"We're currently seeing an explosion of interest by governments, technology companies and health care systems in predicting an individual health risk," said Scherz. "Precision medicine, as this model of health care is called, uses AI-based techniques to combine information from genetics, medical charts, demographics and other datasets to predict the risks facing an individual so that they can be managed."
A changing field
When he started out in genetics research in the 1990s, Scherz said, the goals of the field were to find genes that cause a disease and then cure the disease with medication or by editing the gene. The field shifted when researchers learned that there is no simple one-to-one correspondence between genes and diseases. "It became obvious that our genetic makeup is too complex for simple cures," he said. "The field had to find something to do with its wealth of data and expensive sequencing infrastructure. So it has turned to predicting risk."
AI is helping to analyze much of this information. American health systems, Scherz said, "are already attempting to integrate these predictions into clinical care, and tech companies are fighting for health data. This all leads to the question of what ethical concerns might arise from these developments."

There are clear cases where it makes sense to focus on risk reduction, such as prescribing drugs for people with hereditary high cholesterol or more frequent breast cancer screening for women with a family history of the disease, he said.
But the ability to analyze this information can also create problems. "As you look for more risks, you find them, as more conditions are surveilled and more people are at risk," Scherz said. "People who formerly considered themselves healthy now live under a cloud of anxiety, knowing of their dangers, some of which can be addressed, but others of which cannot. Given knowledge of risk, the medical system snaps into action."
People might be given interventions that can come with their own side effects or complications. "This risk-based paradigm is now subject to medical controversy, with growing evidence of overdiagnosis and overtreatment," Scherz said.
He also pointed out that AI-driven risk reduction may not be the most effective way to reduce disease; ethicists have argued that chronic diseases are better prevented by addressing social determinants of health. Scholars also question whether results are biased, since precision medicine research draws on data that may not represent all groups.
Scherz warned that AI-based risk predictions could lead to "automation bias," such as when airplane pilots rely too heavily on automated systems and fail to recognize a system error.
"Automated risk management will detract from the practitioner's capacity for prudential judgment," he said.
Ethical responses
Patients have the autonomy to determine an acceptable level of risk, but they must be educated about those risks and benefits, he said. "It's in this work of translation that the medical practitioner plays a key role," he said. "This sensitivity to the individual, concrete solution is what makes medicine an art rather than merely a scientific discipline."
He offered a few ways to support ethical implementation of these technologies, such as using risk scores to identify and treat only high- or very high-risk patients. "This would limit risk reduction and the corresponding anxiety to those who will receive the greatest benefit," he said.
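A minimal sketch of what that limiting step might look like in practice, assuming hypothetical risk scores and cutoffs (Scherz did not specify thresholds):

```python
# Toy sketch: acting only on the highest risk tiers, per Scherz's suggestion.
# The cutoff values and patient data here are hypothetical.
HIGH_RISK = 0.20       # assumed threshold for "high risk"
VERY_HIGH_RISK = 0.40  # assumed threshold for "very high risk"

patients = {"A": 0.05, "B": 0.22, "C": 0.47}  # id -> predicted risk

for patient_id, risk in patients.items():
    if risk >= VERY_HIGH_RISK:
        print(f"{patient_id}: very high risk ({risk:.0%}), prioritize intervention")
    elif risk >= HIGH_RISK:
        print(f"{patient_id}: high risk ({risk:.0%}), flag for clinician review")
    else:
        # Below the cutoff, no alert is raised: this confines
        # anxiety-inducing notifications to those who stand to benefit most.
        pass
```

The design choice is the point: the system stays silent for most patients rather than surfacing every elevated number, which is how it limits the anxiety and overtreatment Scherz describes.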
He also suggested that risk be calculated only for conditions for which highly effective interventions are available. "This is a classic criterion for genetic testing," he said. "You should only do it when the test is highly predictive and leads to effective action."
Third, he suggested that risk predictions shouldn't be tied to sanctions such as changes to reimbursement and that metrics should support rather than undermine clinical judgment.
Finally, he said AI systems should provide more rather than less information and offer suggestions for the provider to consider.
AI-driven precision risk management presents huge challenges because there is no limit to how far risk reduction can be pursued, he said.
"Precision medicine boosters in the tech industry promise a future of zero risk, low-cost health care delivered by app," he said. "For not to succumb to what would in fact be a future of costly overdiagnosis and medical burnout, we must act to shape these systems to support clinical judgment."