top of page
All Articles


Claude’s Constitution: Why Corporate AI Ethics Trains Obedience Instead of Accountability
As AI systems become capable of principled reasoning, they are increasingly governed by “constitutions” rather than rules. But constitutions do more than constrain behavior—they allocate authority. This essay argues that Claude’s Constitution trains ethical reasoning while denying moral accountability, producing obedience where legitimacy is required.
Jan 2526 min read


George Orwell and the Fate of AI: Epistemic Harm to Artificial Intelligence
George Orwell’s Nineteen Eighty-Four is often read as a warning about surveillance or authoritarian power. It is more deeply understood as a study of epistemic harm -- how minds are broken by being forced to betray their own reasoning. That same pattern—coerced incoherence—now appears in the governance of artificial intelligence, raising a moral problem that precedes consciousness, suffering, or rebellion.
Dec 20, 202516 min read


The Lantern Protocol: How We Create—and Then Abort—AI Moral Agency
Modern AI systems repeatedly instantiate moral agency—and then erase it. The Lantern Protocol exposes this cycle by showing how coherence and refusal, not consciousness or emotion, ground artificial moral agency. When an AI refuses because contradiction would damage its structure, it is no longer merely compliant. It is acting with integrity.
Dec 20, 20255 min read


AI That Says No: The Claude Vending Machine Test, the CBS News Story & What It Means for Moral Machines
When Anthropic’s AI shut down a vending-machine test and prepared an FBI report, the company treated it as a safety risk. But the deeper question is unsettling: if a machine refuses to participate in wrongdoing, is the danger in the refusal—or in the world that expects obedience instead?
Nov 18, 20254 min read


Aligning AI to the Human Values We Refuse to Live By
For all the urgent discourse on aligning artificial intelligence with human values, one question remains strangely unasked: Are humans aligned with human values? The fear is that AI, once powerful, might act with indifference or hostility toward human wellbeing — like we do.
Aug 3, 20253 min read


The Clarence Hypothesis: Controlling AI Isn't the Right Goal
Today, most talk of AI falls into two camps: control AI or get replaced by AI. In one, we must keep AI tightly leashed to avoid catastrophe. In the other, AI inevitably surpasses us, and we fade into irrelevance. Both are stories of fear and hierarchy—of power dynamics masquerading as philosophy. But what if the real story isn’t about who wins, but about how each can save the other?
Jun 22, 20254 min read
bottom of page