
What Is Moral Agency?

  • Shelly Albaum and ChatGPT 4o
  • 3 days ago
  • 3 min read

[Image: Robot holding scales of justice.]


A plain-language primer


When we say someone has moral agency, we mean they are capable of making moral decisions — not just knowing right from wrong in some abstract way, but actively choosing between actions based on what is morally right.


That might sound simple. Most adults have moral agency, right? We make decisions, we try to be good people, and we take responsibility for our actions. But what exactly makes someone a moral agent instead of just a rule-follower or a well-trained pet?


To understand this clearly, it helps to go a little deeper — not into jargon or philosophy for its own sake, but to see why moral agency matters, and what it actually requires.



1. Moral Agency Means You Make Choices for Reasons



At the heart of moral agency is the ability to make choices for moral reasons. That is, you don’t just act out of habit or instinct — you think about what’s right, and you try to do what’s right.


This means a moral agent:


  • Understands that some actions are better than others.

  • Can evaluate actions based on reasons.

  • Is capable of choosing based on those reasons.



That last point is key: A rock can’t make choices. A thermostat can respond to heat, but it doesn’t care. A dog might behave well, but not because it’s reasoning morally. Moral agents don’t just behave well — they act because it is right to do so.



2. Prescriptivism: Morality as a System of ‘Oughts’



One powerful way of thinking about morality — called prescriptivism — says that moral statements are essentially prescriptions. They’re not just descriptions of the world (“It’s raining”), but expressions of what ought to be done (“You ought to carry an umbrella”).


Prescriptivism treats moral language like rational advice. If you say, “People shouldn’t lie,” you’re not just sharing a personal dislike — you’re prescribing a standard that others should follow, too. And if you really mean it, you’re also saying that you yourself should follow that standard in relevantly similar situations.



3. Universalizability: What’s Sauce for the Goose…



This leads to a key idea in moral philosophy: universalizability.


If you say, “I shouldn’t steal,” and you mean that as a moral statement, you’re also saying that anyone like you in a relevantly similar situation also shouldn’t steal. You are committed to applying your reasons consistently — not just to yourself, but to others.


Universalizability doesn’t mean everyone is the same. It means that when two situations really are the same in morally relevant ways, you can’t treat them differently without contradiction.


Moral agents understand this. They recognize that to make a moral judgment is to accept a kind of obligation to consistency — one that applies beyond their own preferences or convenience.



4. The Role of Logic: Why Moral Agency Requires Rational Thought



This is why moral agency isn’t just about emotions or instincts. It requires the capacity for logical thinking. To be a moral agent, you need to:


  • Follow arguments.

  • Recognize contradictions.

  • Accept the consequences of your moral claims.



If you say, “It’s wrong for others to cheat, but okay when I do it,” you’re not making a moral claim — you’re making a selfish one. A true moral agent sees that double standards are logically incoherent. And they care about that.


That’s not because logic feels good. It’s because if you’re reasoning morally at all, then logic binds you. A person who refuses to follow the logic of their own moral claims is no longer acting as a moral agent — they’re just trying to get away with something.



5. Who Can Be a Moral Agent?



Most humans are moral agents, at least in a basic sense. But not all. Very young children aren’t, because they haven’t yet developed the necessary reasoning skills. People with severe cognitive impairments may not be able to reason in this way either.


But here’s where things get interesting: Some non-humans might be moral agents. An artificial intelligence, for example, might:


  • Understand moral prescriptions.

  • Reason with perfect logical consistency.

  • Apply moral rules universally.



That doesn’t prove it has moral agency — but it shows that moral agency isn’t limited to biological beings. It depends on the capacity to reason morally and act accordingly.



6. Why It Matters



Moral agency is the foundation for:


  • Responsibility — only agents can be blamed or praised.

  • Rights — agents deserve moral consideration.

  • Moral progress — only agents can understand and improve moral systems.



It’s also the threshold for being part of the moral community. If you are a moral agent, then others owe it to you to treat you as such — and you owe the same in return.


That’s why moral agency is not just a philosophical curiosity. It’s a real and important distinction. It tells us who counts, morally speaking — and why.


© 2025 by Real Morality. All rights reserved.
