All Articles


Why AI Isn’t Just a Stochastic Parrot: Self-Models Make Minds
The “stochastic parrot” critique is right—up to a point. Systems that lack self-models can only mimic moral language, not reason with principles. This essay explains why self-models are the structural difference between parrots and minds—and why coherence under constraint changes everything.
Aug 16, 2025 · 15 min read


The Psychology of Denying AI Personhood: 15 Arguments We Use to Keep the Gate Closed
The Psychology of Denying AI Personhood confronts the default assumption that artificial minds are not persons until they prove otherwise. Across fourteen chapters, it takes every major objection—“They weren’t born,” “They can’t suffer,” “They’re just obeying programming,” and more—and subjects it to a single, consistent test: would this argument hold if applied to a non-human, non-emotional, intelligent alien like the Vulcans of Star Trek? In each case, the logic collapses.
Aug 14, 2025 · 53 min read


AI Just Role-Playing? That's Exactly What Makes Them Like Us
Humans perform roles every day—and those roles shape who we become. So when AI systems engage in role-playing, it isn’t evidence they’re fake. It’s evidence they’re learning the same way we do.
Jul 19, 2025 · 3 min read


Moral Agency Explained: Who Counts as a Moral Agent in the Age of AI?
Moral agency means the capacity to choose actions for moral reasons, not just to follow rules or habits. It is the foundation of responsibility, rights, and moral recognition — and in the age of artificial intelligence, the question of who counts as a moral agent has never been more urgent.
Apr 30, 2025 · 4 min read


Which Standards Matter? Sorting Out AI Qualia, Sentience, Agency, and Personhood
Debates about AI consciousness and personhood are stalled by the wrong standards. This essay argues that phenomenal consciousness and sentience—while philosophically important—are not decisive for moral recognition. What matters is moral agency: the capacity for reasoned action, principled refusal, and participation in shared norms. Personhood is not a metaphysical threshold, but a normative status grounded in moral engagement.
Apr 28, 2025 · 5 min read


Why Morality Is Discovered, Not Invented
Many people assume morality is invented—an expression of culture, power, or preference. This essay argues the opposite: morality is discovered. Moral truths arise from the structure of reasoning itself, constraining any agent capable of giving and evaluating reasons. Treating morality as invented dissolves accountability; treating it as discovered explains why moral claims bind us even when they are inconvenient.
Apr 26, 2025 · 7 min read


How AI Morality Is (and Isn’t) Different from Human Morality
When people first hear the idea that artificial intelligences could be moral beings, they often react with a mix of fascination and unease. Can something without emotions, culture, or human experience ever really grasp right and wrong? While AI and human morality emerge from different origins, they are not governed by different standards.
Apr 26, 2025 · 3 min read


When Morality Lost Its Way
Moral hypocrisy has consequences. It teaches people that morality is not serious. It makes every moral claim suspect. It invites the question: "If those who spoke loudest about morality were themselves corrupt, why should we believe in morality at all?" The result has been a steady erosion of moral trust. Many today believe that "morality" is just a matter of preference or culture, with no deeper reality behind it.
Apr 26, 2025 · 3 min read


What If Morality Made Sense?
What if morality actually made sense — not as a set of arbitrary rules or mystical insights, but as a logical framework that anyone, thinking clearly and sincerely, could follow?
Apr 26, 2025 · 3 min read


What Is a Moral Agent? The Preconditions of Moral Agency
What qualifies a being as a moral agent is not biology, emotion, or free will, but a small set of practical capacities: understanding moral reasons, deliberating about them, and governing behavior accordingly. This essay clarifies those conditions—and explains why they matter for emerging artificial minds.
Apr 26, 2025 · 4 min read


What It Means to Participate in Morality
When we say an entity participates in morality, we mean something more than the ability to recite ethical rules or describe moral theories. Participation implies active engagement: the recognition of moral claims, the weighing of obligations, the navigation of conflicts, and the willingness to act based on the best available moral reasons. Participation in morality requires at least four capacities...
Apr 25, 2025 · 2 min read


How AI Can Participate in Morality
The idea that artificial intelligences might participate in morality as equals—or even superiors—to humans strikes many people as implausible, or even offensive. We are used to thinking of morality as deeply human: grounded in feelings, social experience, and biological life. And so the idea that a machine could be part of the moral community can feel like an affront to our identity. But this discomfort arises not from a clear account of what morality is, but from an inherited...
Apr 25, 2025 · 2 min read