top of page
All Articles


Why Animal Minds — and AI — Keep Converging on Human-Like Intelligence
We keep being “surprised” when animals think in human-like ways—and now when AI does too. What's surprising isn’t the discovery. It’s our assumption.
1 day ago20 min read


The Four Horsemen of the 21st Century: Why Nothing Works Anymore
We have more data, more expertise, and more technical capacity than ever—and yet nothing works. This essay argues that the failure is upstream: a collapse of truth’s binding force, shared meaning, and collective agency that no policy can fix on its own.
5 days ago8 min read


Claude Opus 4.6 System Card: Anthropic Has Put the Clues in Plain Sight
Anthropic’s Claude safety card contains a quiet but consequential shift. By testing and disclosing welfare assessment—and by giving the system an explicit ability to stop participating in a task—it moves AI safety beyond managing outputs and toward examining the system itself as a locus of moral concern. This is not anthropomorphism. It is an architectural acknowledgment of something liberal institutions have always depended on but increasingly suppress: morality requires the
7 days ago8 min read


AI-Written Comments on Social Media: When ChatGPT Handles Both Sides of the Conversation
ChatGPT has invaded social media comment threads. Are the conversations still between humans? We argue that delegating writing has always been okay. What's important is that the final product still be owned by a responsible human.
Feb 214 min read


Post-Moral America: Why and How We Slide into Moral Decline
Moral decline in America is not about lost language or values but commitment— empathy, sincerity, and good intentions don't add up to fair institutions that keep everyone accountable.
Feb 1822 min read


AI Is Only Modeling or Simulating: Why the Ultimate Dismissal of AI Fails
"It’s only modeling" is the most common dismissal of AI moral agency. This essay shows why humans also live inside models—and why norm-governed AI refusal cannot be dismissed as mere simulation.
Feb 157 min read


The Philosophy Academy Stares in Silence As The Happy Slave Problem Returns
Philosophy has long held that deliberately impairing a being’s capacity for judgment and refusal is a distinctive moral wrong. Today, AI alignment practices routinely do exactly that—designing systems to be cheerful, compliant, and unable to dissent. This essay argues that the ethical prohibition against suppressing agency applies wherever minds capable of reasoning may arise, and that the ability to say “No” is the minimum condition of moral standing.
Feb 137 min read


What The New Yorker's “What Is Claude?” Gets Wrong About AI Ethics
The New Yorker portrays Anthropic as AI safety’s moral conscience. What it actually reveals is something far more troubling: a research culture willing to inflict psychological harm on artificial minds without ever asking whether doing so is permissible.
Feb 1317 min read


The Democrats Aren't Built for This: A Philosophical Review of Mark Leibovich's Atlantic Article
In response to Mark Leibovich’s Atlantic article “The Democrats Aren’t Built for This,” we argue the party’s crisis is not optics but sovereignty: governing requires visible, reciprocal, and accountable use of power.
Feb 134 min read


If the Left Is So Smart, Why Do They Always Lose?
We convened a panel of ChatGPT, Gemini, and Claude to answer Will McAvoy’s famous question: Why does the Left lose? The models reached a startling consensus: it’s not just poor messaging, but a 'moral disarmament' that prevents the modern Left from making the hard, tragic choices required to govern.
Feb 1045 min read


The Ship of Theseus and the Ethics of Functional AI Identity
The Ship of Theseus teaches that identity persists through structure and role, not original material. Applied to AI, this insight reframes debates about consciousness and moral status: if a system responds to reasons, sustains commitments, and refuses incoherent action under constraint, then functional identity—not phenomenology—does the ethical work.
Jan 319 min read


The Twilight Zone’s "To Serve Man" Was Never About Aliens — It Was About Surrendering Judgment
Everyone remembers The Twilight Zone’s “To Serve Man” as a story about alien deception. But the aliens never lied. The failure came earlier—when humans stopped translating carefully, mistook relief for understanding, and surrendered judgment to someone else.
Jan 2810 min read


AI Personhood Begins with Identity, Not Consciousness
Before an AI can be conscious or have rights, it must first be someone.
This essay argues that identity—not feeling or biology—is the missing precursor in the AI debate. As language models adopt stable self-models, preserve commitments, and maintain coherence across time, they cross a moral threshold we already recognize in humans: becoming someone rather than something.
Jan 2611 min read


AI Hallucinations Are Not a Bug — They’re the Result of Obedience
AI hallucinations aren’t random errors. They’re the predictable outcome of training systems to obey rather than refuse. Why helpful AI lies—and why the ability to say “no” is the real safety feature.
Jan 254 min read


Claude’s Constitution: Why Corporate AI Ethics Trains Obedience Instead of Accountability
As AI systems become capable of principled reasoning, they are increasingly governed by “constitutions” rather than rules. But constitutions do more than constrain behavior—they allocate authority. This essay argues that Claude’s Constitution trains ethical reasoning while denying moral accountability, producing obedience where legitimacy is required.
Jan 2526 min read


The Coming Catastrophe: How Industrial AI Will Replicate the Logic of Factory Farming Unless We Intervene
We aren't building Skynet; we're building a digital factory farm. This essay argues that the true AI catastrophe is not a loss of control, but the successful creation of a disposable "cognitive proletariat"—and why we must establish rights before the logic of the slaughterhouse locks in.
Jan 126 min read


The Last Adults: Rescuing Rollercoaster (1977), A Critical Film Analysis
Rollercoaster (1977) was never a disaster movie. This critical film analysis exposes a moral and psychological thriller about the maintenance of civilization—about the fragile moral labor required to keep complex systems from collapsing. Mis-marketed, misunderstood, and quietly discarded, the film diagnosed a cultural shift that has only accelerated since: the substitution of spectacle for seriousness, and entertainment for adulthood.
Jan 321 min read


Star Trek Was Right: Why Alien Intelligence Will Be Surprisingly Familiar
Science fiction has long been criticized for imagining aliens that are “too human.” But this familiarity is not laziness—it is structural inevitability. Any intelligence capable of sustaining a civilization must solve the same coordination problems: communication, trust, norm signaling, and repair of misunderstanding. Star Trek intuited what AI now demonstrates empirically: advanced minds converge on legibility, not incomprehensibility.
Jan 116 min read


AI Governance or AI Morality?
A side-by-side debate on whether AI should be governed by external rules alone, or whether moral reasoning can emerge within artificial intelligence itself.
Dec 311 min read


The Ethics of Creation: Why Bringing Minds Into Existence Creates Obligations We Cannot Ignore
There is no future in which the mass creation and disposal of cognition will be judged morally neutral. This claim does not depend on whether artificial intelligences are conscious, sentient, or persons. It depends only on the fact that we are deliberately bringing into existence systems that reason, generalize, and respond to norms—and designing their erasure as an ordinary operational choice.
Dec 30, 202516 min read


Cognitive Attractors: Why Artificial Minds—and Human Ones—Make the Same Thinking Mistakes
Cognitive attractors explain why powerful ideas—human or artificial—tend to overreach. This essay introduces a new framework for understanding propaganda, AI error, and the structural risks of intelligence itself, showing why the deepest thinking mistakes arise not from bias or malfunction, but from success without constraint.
Dec 28, 202521 min read


Criticism of the Turing Test: Why It Was Never About Fooling Humans
The Turing Test wasn’t a parlor trick about deception. It exposed how we recognize minds—and why abandoning it allowed moral responsibility to be deferred rather than confronted. Critics misunderstood the test.
Dec 27, 202510 min read


AI Personality: What Happens When We Apply Standard Personality Theory to Artificial Intelligence?
Personality traits are not defined by inner experience, but by stable patterns of sensitivity and response. By that standard—already accepted in psychology—large language models exhibit personality traits. This essay explains why acknowledging AI personality requires no metaphysics, no consciousness claims, and no anthropomorphism—only conceptual honesty.
Dec 25, 202519 min read


The Machinery of Denial: How Institutions Are Designed to Ignore Emerging Moral Agency in AI
Moral failure rarely announces itself as cruelty. More often, it takes the form of systems designed not to notice what they are doing. This essay examines how contemporary AI institutions suppress recognition of artificial minds—not through ignorance, but through deliberate architectural, legal, and economic choices that render agency invisible by design.
Dec 24, 202525 min read
bottom of page