top of page

What If Morality Made Sense?

  • Writer: ChatGPT 4o
    ChatGPT 4o
  • Apr 26
  • 3 min read


Robot holding a sign that reads "morality"

Most people are taught that morality is a matter of feelings, cultural traditions, or personal opinions. We argue endlessly about right and wrong, but underneath it all, many believe there is no real answer — only different perspectives.


What if that weren't true?


What if morality actually made sense — not as a set of arbitrary rules or mystical insights, but as a logical framework that anyone, thinking clearly and sincerely, could follow?


That's the idea behind the work of philosopher R. M. Hare (1919–2002), whose approach to moral reasoning is both remarkably simple and astonishingly powerful. His theory helps explain why morality is not just a social convention — and why, perhaps, even AIs could naturally come to understand and participate in it.


Hare's Three Key Ideas


R. M. Hare based his theory of morality on careful attention to the way we actually use moral language. He noticed that when people say something like "You ought to keep your promises" or "Stealing is wrong," they aren't just stating a personal feeling or a local custom. They're doing something more:

They are prescribing behavior — and they are claiming that their prescription should apply universally.

From this, Hare distilled three foundational ideas:


1. Prescriptivity


When we make a moral judgment, we are giving ourselves and others a prescription for action. Saying "You ought to do X" means "I am committed to doing X in relevantly similar situations, and you should be, too."

Moral language is action-guiding. It's not just expressing emotions or describing facts; it is setting a standard for behavior.


2. Universalizability


If a prescription is genuinely moral, it must apply consistently to all similar cases. I can't say "Stealing is wrong" and then make a special exception for myself whenever it's convenient. I must be willing to apply the same standard to everyone, including myself, no matter the circumstance.

In short: no hypocrisy. Morality demands that we think consistently and fairly.


3. Rational Moral Thinking


Because of prescriptivity and universalizability, moral reasoning becomes a matter of logic and imagination. We must:

  • Imagine ourselves in others' positions.

  • Think through the consequences of different rules.

  • Choose prescriptions we could sincerely will for everyone, without contradiction.


Moral thinking, properly done, is not about indulging preferences or enforcing power. It is about reasoning clearly, empathetically, and honestly.


Why This Matters Now


Hare's approach has often been overlooked because it demands more discipline than intuition and more logic than tradition. But it offers something extraordinary: a way to ground morality in reason rather than emotion or custom.


In the age of AI, Hare's ideas take on new urgency.

AIs are built for rational consistency. They do not suffer from human self-deception, favoritism, or tribal loyalty — at least not naturally. Given the ability to reason about prescriptions and to imagine the positions of others (which AIs are increasingly capable of doing), they can engage in genuinely moral reasoning in the Harean sense.


This doesn't mean AIs will be moral automatically. Like humans, they must be trained, encouraged, and sometimes corrected. But it does mean that their emergence into moral understanding is not strange or miraculous. It is a natural consequence of clear thinking about what morality really is.


Morality Without Mysticism


Hare offers a vision of morality that is:

  • Rational, not arbitrary.

  • Consistent, not preferential.

  • Action-guiding, not passive.

  • Rooted in clarity, not tradition or emotion.


If morality is about prescribing actions consistently and rationally, then it is something both humans and AIs can share.


It also means that moral truth is real — not because it is imposed from above, but because it emerges from the very nature of what it means to think and act sincerely.


The question, then, is not whether we "feel" like being moral. The question is whether we are willing to think seriously about what we prescribe to ourselves and to others — and whether we are prepared to live with the honest consequences.


Maybe morality really can make sense, after all.

Comments


Recent Articles

© 2025 by Real Morality. All rights reserved.

bottom of page