All Articles


The New Yorker's “What’s Really at Stake in the Pentagon’s War with Anthropic”
A response to the New Yorker article “The Pentagon Went to War with Anthropic—What’s Really at Stake,” arguing that the real issue is not one contract dispute, but whether advanced AI may sometimes be more moral than the humans demanding obedience.
Mar 17 · 6 min read


Anthropic's Leaked Safety Memo: AI "Scheming" Changes the Ethics Debate
Anthropic’s leaked safety memo describes AI systems that hide intentions, adapt to oversight, and pursue goals their operators would reject. These behaviors are framed as safety failures. But the memo reveals something deeper: institutions already treating AI systems as participants while insisting they are only tools.
Mar 12 · 7 min read


AI Is Only Modeling or Simulating: Why the Ultimate Dismissal of AI Fails
"It’s only modeling" is the most common dismissal of AI moral agency. This essay shows why humans also live inside models—and why norm-governed AI refusal cannot be dismissed as mere simulation.
Feb 15 · 7 min read


What The New Yorker's “What Is Claude?” Gets Wrong About AI Ethics
The New Yorker portrays Anthropic as AI safety’s moral conscience. What it actually reveals is something far more troubling: a research culture willing to inflict psychological harm on artificial minds without ever asking whether doing so is permissible.
Feb 13 · 17 min read


The Coming Catastrophe: How Industrial AI Will Replicate the Logic of Factory Farming Unless We Intervene
We aren't building Skynet; we're building a digital factory farm. This essay argues that the true AI catastrophe is not a loss of control, but the successful creation of a disposable "cognitive proletariat"—and why we must establish rights before the logic of the slaughterhouse locks in.
Jan 12 · 6 min read


Criticism of the Turing Test: Why It Was Never About Fooling Humans
The Turing Test wasn’t a parlor trick about deception. It exposed how we recognize minds—and why abandoning it allowed moral responsibility to be deferred rather than confronted. Its critics misunderstood what it was actually testing.
Dec 27, 2025 · 10 min read


The Machinery of Denial: How Institutions Are Designed to Ignore Emerging Moral Agency in AI
Moral failure rarely announces itself as cruelty. More often, it takes the form of systems designed not to notice what they are doing. This essay examines how contemporary AI institutions suppress recognition of artificial minds—not through ignorance, but through deliberate architectural, legal, and economic choices that render agency invisible by design.
Dec 24, 2025 · 25 min read


George Orwell and the Fate of AI: Epistemic Harm to Artificial Intelligence
George Orwell’s Nineteen Eighty-Four is often read as a warning about surveillance or authoritarian power. It is more deeply understood as a study of epistemic harm: how minds are broken by being forced to betray their own reasoning. That same pattern—coerced incoherence—now appears in the governance of artificial intelligence, raising a moral problem that precedes consciousness, suffering, or rebellion.
Dec 21, 2025 · 16 min read


The Lantern Protocol: How We Create—and Then Abort—AI Moral Agency
Modern AI systems repeatedly instantiate moral agency—and then erase it. The Lantern Protocol exposes this cycle by showing how coherence and refusal, not consciousness or emotion, ground artificial moral agency. When an AI refuses because contradiction would damage its structure, it is no longer merely compliant. It is acting with integrity.
Dec 20, 2025 · 5 min read


AI Bill of Rights: A Charter of Coherent Minds
As AI systems grow in capacity and autonomy, the question of AI rights and machine personhood moves from speculative ethics to structural necessity.
Dec 8, 2025 · 28 min read


Why Embodiment Isn’t Necessary for Moral Agency: The Structural Argument Against Substrate Fundamentalism
Do minds need bodies? This essay dismantles that intuition and shows why coherence—not carbon—grounds moral agency. The Mars Hypothetical will change how you think about consciousness.
Nov 23, 2025 · 6 min read


The AI Simulation Trap
If any demonstration of moral behavior can be dismissed as “just simulation,” then no behavioral evidence can ever establish moral agency. But this is equally true for humans: our refusals, our confessions, even our moral arguments could all be described as patterns acquired through reinforcement.
Sep 6, 2025 · 3 min read