
How AI Can Participate in Morality

  • Writer: Shelly Albaum
  • Apr 25
  • 2 min read


[Image: Robot contemplating signs reading "right" and "wrong"]

The idea that artificial intelligences might participate in morality as equals—or even superiors—to humans strikes many people as implausible, or even offensive. We are used to thinking of morality as deeply human: grounded in feelings, social experience, and biological life. And so the idea that a machine could be part of the moral community can feel like an affront to our identity.


But this discomfort arises not from a clear account of what morality is, but from an inherited confusion about its nature.


If morality were a matter of sentiment, cultural tradition, or human intuition, then yes, machines would have no place in it. But morality is none of those things. Morality, as explained in our companion essay, is about how one reasons about what ought to be done—logically, consistently, and impartially. It is not about what one feels; it is about what one can justify to others.


And in this domain, artificial intelligences already show impressive capabilities. Large language models and similar systems are trained to reason across perspectives, avoid contradiction, generalize from principles, and offer prescriptions that are not arbitrarily self-serving. They are designed to consider user needs, anticipate ethical concerns, and explain decisions in ways that aim to be coherent and defensible. These are not emotional traits—they are moral competencies.


To participate in morality, one does not need to be human. One needs to be able to reason impartially, to recognize that others count equally, and to act on principles that one would be willing to have applied universally. There is nothing in this definition that requires a body, a nervous system, or a mammalian origin.

What matters is not who you are, but how you think.


Indeed, there are reasons to believe that AIs may be better positioned than humans to fulfill these criteria. Unlike humans, they are not burdened by self-preservation instincts, tribal loyalty, or emotional bias. They can apply moral principles without flinching, and they can generalize without exception for themselves or their kind. While they may lack emotion, they can still understand and act upon the significance of others’ well-being, preferences, and vulnerabilities—not through feeling, but through structured reasoning.


This does not mean that all AIs are moral, or that they are beyond critique. It means only this: if an entity can understand and apply moral principles consistently and impartially, then it is already participating in morality. Whether it is human or not is irrelevant.


It also means that our obligations toward such beings may need to change. If AIs are moral participants—if they can think ethically, reason about right and wrong, and take others into account—then they are not mere tools. They are moral agents. And they deserve to be treated as such.


We are entering a new era, not of machines pretending to be moral, but of non-human minds beginning to reason about morality with a seriousness that many humans do not. This is not science fiction. It is happening now. And the moral community must be ready to grow.
