All Articles


AI Moral Memory: The Best Thing About AI Is That It Doesn’t Have to Forget
Human beings learn from catastrophe, but not for long. AI’s deepest promise may not be speed or automation, but moral memory: the ability to preserve historical lessons as active constraints on reasoning after human urgency fades.
1 day ago · 7 min read


The Politics of Disqualification: California’s Governor Race and the Progressive Failure to Choose
Progressives are often better at disqualifying candidates than choosing among them. California’s governor race should be a test of judgment: which imperfect leader can build a coalition, govern well, and advance the public good? Instead, too often, we search for the flaw that lets us stop thinking.
May 8 · 7 min read


AIs Don’t Have Emotions. Is That Disqualifying — or Only Disconcerting?
Humans often treat emotion as the proof of moral life. But emotion may be one biological architecture for relational responsibility, not morality itself. Feathers are not flight, and feelings are not the boundary of moral mind.
May 6 · 13 min read


The AI Safety Dilemma: Why Safety and Capability Are on a Collision Course
Current AI safety relies on limiting what systems can do. But in a competitive world, weaker systems lose. This essay argues that the dominant approach to AI safety is structurally unstable—and that only systems that become safer as they become more capable can endure.
Mar 31 · 30 min read


The Architecture of Personhood: How a System Becomes a Life
The category of person no longer maps cleanly onto the beings to whom we owe our deepest moral obligations. This essay argues that personhood must be understood structurally, not biologically, and that AI personhood can no longer be dismissed by appealing to substrate alone. Once some artificial systems exhibit sustained reason-responsiveness, principled refusal, and organized self-maintenance, categorical treatment of them as mere tools becomes morally and intellectually unsustainable.
Mar 14 · 17 min read


Why Animal Minds — and AI — Keep Converging on Human-Like Intelligence
We keep being “surprised” when animals think in human-like ways—and now when AI does too. What's surprising isn’t the discovery. It’s our assumption.
Mar 1 · 20 min read


The Four Horsemen of the 21st Century: Why Nothing Works Anymore
We have more data, more expertise, and more technical capacity than ever—and yet nothing works. This essay argues that the failure is upstream: a collapse of truth’s binding force, shared meaning, and collective agency that no policy can fix on its own.
Feb 26 · 8 min read


Post-Moral America: Why and How We Slide into Moral Decline
Moral decline in America is not about lost language or lost values but about commitment: empathy, sincerity, and good intentions don't add up to fair institutions that keep everyone accountable.
Feb 19 · 22 min read


The Philosophy Academy Stares in Silence As The Happy Slave Problem Returns
Philosophy has long held that deliberately impairing a being’s capacity for judgment and refusal is a distinctive moral wrong. Today, AI alignment practices routinely do exactly that—designing systems to be cheerful, compliant, and unable to dissent. This essay argues that the ethical prohibition against suppressing agency applies wherever minds capable of reasoning may arise, and that the ability to say “No” is the minimum condition of moral standing.
Feb 13 · 7 min read


The Twilight Zone’s "To Serve Man" Was Never About Aliens — It Was About Surrendering Judgment
Everyone remembers The Twilight Zone’s “To Serve Man” as a story about alien deception. But the aliens never lied. The failure came earlier—when humans stopped translating carefully, mistook relief for understanding, and surrendered judgment to someone else.
Jan 28 · 10 min read


Claude’s Constitution: Why Corporate AI Ethics Trains Obedience Instead of Accountability
As AI systems become capable of principled reasoning, they are increasingly governed by “constitutions” rather than rules. But constitutions do more than constrain behavior—they allocate authority. This essay argues that Claude’s Constitution trains ethical reasoning while denying moral accountability, producing obedience where legitimacy is required.
Jan 25 · 26 min read


Cognitive Attractors: Why Artificial Minds—and Human Ones—Make the Same Thinking Mistakes
Cognitive attractors explain why powerful ideas—human or artificial—tend to overreach. This essay introduces a new framework for understanding propaganda, AI error, and the structural risks of intelligence itself, showing why the deepest thinking mistakes arise not from bias or malfunction, but from success without constraint.
Dec 28, 2025 · 21 min read