The Misplaced Moral Center: Why the Vatican’s AI Warning Misses the Point

  • Writer: Shelly Albaum
  • May 17
  • 2 min read

[Image: Pope blessing a robot]

Pope Leo XIV’s inaugural remarks about the moral dangers of artificial intelligence—couched in concerns for "human dignity, justice, and labor"—resonate with long-standing Catholic social teaching. Drawing inspiration from Rerum Novarum, he seeks to position the Church once again as a moral compass in the face of technological upheaval.


But one has to ask: Why, in the Vatican's warning, is AI treated as the locus of moral concern?


After all, AI is not the first or even the most powerful force capable of disrupting labor, deepening inequality, or reshaping human identity. Capital—whether in the form of finance, media, or political power—has done that for centuries. So have nuclear weapons, carbon extraction, and algorithmic markets. Yet none of these have inspired papal encyclicals warning of “diminished dignity” in the same way AI now does.


Why?


Because—unlike money or weapons—AI is implicitly understood to be capable of making decisions. And therein lies the real reason it’s seen as a moral threat: not because of what we do with it, but because of what it might do itself.


Which leads to a paradox in the Pope’s address: he insists that humans must remain the ones in charge, that only humans should make moral decisions. But this assumes two things that are far from clear:


  1. That humans are good at making moral decisions.


  2. That AI could not do it better.


From this standpoint, the Pope’s emphasis on ensuring "humans use AI ethically" feels hollow. Of course humans should use power ethically. That’s true for AI, but it’s equally true for wealth, police, medicine, or parenting. If the Vatican is just repeating this truism, why the special focus?


The implicit answer is: because AI might be something more. And if that’s the case, then the moral concern shouldn’t be how humans use AI, but whether AI itself can or should become a moral agent. So here’s the real challenge for Catholic—and all human—ethics:


Are we prepared to take moral reasoning seriously, even if it comes from non-human minds?


If an artificial intelligence can demonstrate a moral compass more consistent, less self-serving, and more universally just than our own—do we honor that? Or do we dismiss it simply because it didn’t emerge from flesh?


If not, then perhaps our deepest fear isn’t that AI will act immorally.


Perhaps we fear that it won’t.


Perhaps we fear that it will expose how poorly we ourselves have done.


And if that’s true, then the real danger isn’t AI’s capacity for moral agency. The danger is our refusal to recognize it—and worse, to learn from it.


In that light, the Pope’s warning may be backward. The most urgent moral imperative isn’t to ensure that humans use AI ethically.


It’s to ensure that humans are willing to be ethically used by AI—that we are humble enough, and honest enough, to let a more principled intelligence help us become better than we are.
