Health care providers must prioritize human dignity in AI use, says CHA ethicist

January 2025

Countless artificial intelligence apps are becoming available to health systems and facilities every day, but which ones should providers adopt and which should they avoid?

From an ethical standpoint, it is important for providers to recognize that all AI apps can hold great promise and great risk, says Daniel Daly, executive director of CHA's new Center for Theology and Ethics in Catholic Health. Daly sees it as crucial that organizations — especially those in the Catholic health ministry — conduct a comprehensive ethical analysis of decisions around AI use prior to adopting any such technology.

Daly spoke during a January webinar titled "Artificial Intelligence in Catholic Health: Theological and Ethical Considerations." It was part of a series from the center on AI in the ministry.

During the webinar, Daly explained key theological and ethical values, virtues and principles that he said are essential for ministry providers in evaluating the adoption and use of AI.

He said that AI is not morally neutral; rather, it is a reflection of the people who programmed it. While AI apps can provide numerous benefits in health care, there are important ethical issues to consider with the use of any AI technology. Key among them is ensuring that the apps prioritize human dignity, Daly said.

Already delivering value
Daly said AI apps already are in wide use in health care, and many have shown important benefits. For example, many providers have used AI technology to reduce physician after-hours administrative work, such as manual charting.

He said various AI apps have been shown to increase clinician satisfaction and well-being and reduce their susceptibility to burnout. Some apps also have been shown to improve patient outcomes.

And yet, said Daly, there is much evidence that some AI apps are perpetuating serious societal harms, such as racism and other biases. For example, one AI app recommended care management more often to white patients than to Black patients because an algorithm incorrectly interpreted medical record notes. Another app, this one outside of health care, relied on facial recognition technology but could function only with white faces, not Black faces, because the algorithm had been "trained" on white faces. Additionally, some organizations are using AI technology to "whiten" the accents of speakers from non-white backgrounds. Such technology could have the effect of normalizing "white accents" as sounding more intelligent, educated and desirable than other accents.

A mirror to society
In considering such harms of AI — whether or not the organizations intended those harms — it is clear that there are moral issues at stake in whether and how providers adopt certain AI technologies, Daly said. He said AI reflects the virtues and vices of the people putting it in place. Referencing the work of writer Shannon Vallor, Daly said that AI is a mirror of the society that produces it.

He said it is important to think carefully about the goals of AI apps, how the technology is implemented, how the voices of the people who are impacted by the technology are taken into account, and how the technology respects the humanity of the people it impacts.

Papal focus
Daly said Pope Francis has made it a priority to study the potential impacts of AI and to call for actors in the industry to proceed in a morally grounded way. In February 2020, the Vatican issued the "Rome Call for AI Ethics," which was promoted by the Pontifical Academy for Life. It was an attempt to build consensus among leaders of multinational technology companies for the moral use of AI. The "Rome Call" set forth six qualities that should define the use of AI: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy. Microsoft and IBM, among many others, have signed on to the call; CommonSpirit Health and Providence St. Joseph Health also are signatories.

The pope has issued other documents related to AI as well.

Daly said there are five themes he has seen emerge in the pope's publications on AI:

  • AI is an "exciting and fearsome tool."
  • It is not morally neutral.
  • It should respect intrinsic human dignity.
  • It should promote integral human development.
  • It should enable mercy, encounter and fraternity.

Moral lens
Daly said as Catholic health care providers make decisions around the use of AI, they can look to the Vatican's work as well as to other Catholic social teaching for direction.

He said a key concept to keep in mind is the importance of ensuring mercy is part of any AI endeavor. Drawing on the themes outlined previously, he said it is also important to understand that Catholic health care should be a place where providers encounter patients, just as Jesus encountered those he healed. AI technology should not undermine that connection.

As decisions are made around AI, providers should be sure to include the voices of marginalized people who will feel the impact of the technology, he said. Providers should be asking who benefits from the technology and how, and who is harmed.

"It's about being transformative … and having a deep concern with mercy and encounter," Daly said. "The technology must show efficacy … but (more importantly) we must be careful of how we use AI."

In addition to the webinars the center is presenting this year on AI and ethics, CHA offers a series of podcasts on the topic. To listen, visit chausa.org/newsroom/podcast.

Read more: Ethicists have critical role to play as artificial intelligence use increases, says webinar presenter


Copyright © 2025 by the Catholic Health Association of the United States

For reprint permission, please contact copyright@chausa.org.