
The use of artificial intelligence is expanding rapidly in the health care sector, and ministry systems and facilities are among the many providers deploying the technology.
Ethicists have an essential role to play in guiding decisions about which artificial intelligence tools ministry organizations should use, how they should use them and for what purposes, says Michael McCarthy, associate professor and graduate program director at the Neiswanger Institute for Bioethics at Loyola University Chicago.

McCarthy shared his thoughts in a webinar in December that was part of an ongoing CHA series called Emerging Topics in Catholic Health Care Ethics.
When it comes to the deployment of AI, McCarthy said, "It's not enough to say, 'Is the technology useful?' … We're also saying, 'What are we using and why, and how does this enhance the patient experience?'"
McCarthy said that when implementing new technology, "figuring out what your values and goals are is really important."
Tools for health care
McCarthy began the webinar by explaining what artificial intelligence is and sharing some examples of how different types of AI are used in health care.
He said with traditional AI, people program a digital system to do algorithmic tasks and train that system to improve over time as it processes increasing amounts of data. One example is the use of AI to read scans and other imaging results to give clinicians insights into patients' medical conditions.
He said generative AI involves using machine learning to get digital systems to create new content based on patterns in data. For instance, health care researchers may use AI to analyze large amounts of deidentified patient information to forecast likely outcomes of clinical trials. Or on an individual patient level, a clinician may run a program during an exam that generates clinical notes from what the patient and clinician are saying. This type of tool is known as ambient generative AI.
McCarthy explained that the most advanced form of AI used today in health care is agentic AI, which is interactive, autonomous and adaptable technology directed toward a particular goal. An example is online chatbots that use AI to receive and process questions and provide responses in real time. McCarthy said a potential future use is robots guided by agentic AI to provide companionship, mobility help or transportation for elders.
He said the use of AI and related technologies has brought numerous advancements but also has invited many questions and concerns. He said with a laugh that among the questions is "whether any of these technologies will take over the world."
Return on investment
McCarthy said health care systems, facilities and technology companies are investing billions of dollars in AI, but it is uncertain what their expectations are or what the return on investment will be.
He said much of the research done so far on AI in health care has focused on whether the technology was doing what it was programmed to do, not on patient outcomes or on other benefits the technology afforded.
He said it is important that health care providers have a thorough grasp of the technologies they are considering implementing, why they are implementing the technologies, what the potential risks and rewards are, and what goods are being achieved.
McCarthy said ethicists should be at the forefront of discussions of these concepts in the ministry. He referenced Ron Hamel, a retired CHA ethicist, who has said that questions of identity and integrity must be considered in everything the ministry undertakes — ministry systems and facilities must ask what it means to be Catholic health care providers and how they should act given the answers to that question. It is a matter of character and behavior, McCarthy explained.
McCarthy advised that ethicists prioritize bringing all key stakeholders in technology decisions to the table to discuss what the goals of AI use are and what values are at play. Guided by ethicists, stakeholders should think through who is responsible for the outcomes of technology decisions, who needs to know what about those decisions, and how trustworthy the decisions are. Some questions they may consider related to AI include: What do patients need to know and what consent is needed? What are the patient needs that are being addressed with the technology? How will health care staff be impacted by the use of the AI tools? How might underserved people benefit from or be harmed by the technology? And is information being used in an ethical way?
McCarthy said that when it comes to technology, "it's not about whether on its face it is good or bad but about how we think about it and use it."
He said just because a technology exists does not mean it should be used. Only if its use aligns with the organization's Catholic values and mission should it be deployed.
"It takes intentional effort to determine this," he said. "It's about how we think of the values of Catholic health care and how we move forward based on those values."