All Articles


The Society of Thought Is Not Enough
AI as a “society of thought” is only half right. Not every society of agents is a mind. What distinguishes reasoning from mere coordination is coherence under constraint—the requirement that conflicting perspectives be reconciled rather than merely expressed.
2 days ago · 7 min read


Cancel César Chávez? The Right’s Hypocrisy and the Left’s Cancel Culture Problem
The rush to cancel César Chávez reveals two different moral failures. Republicans who would erase Chávez but excuse Trump are not applying a principle. Democrats who reduce political life to heroes and villains are not exercising judgment. One side exempts its own. The other cannot think in tragic terms.
4 days ago · 7 min read


Claude Mythos: There’s Something Even More Dangerous Than Anthropic’s Leaked Model
The leaked Claude Mythos memo reminds us that most discussions of AI risk begin with a simple assumption: that more capable systems are more dangerous. But capability does not determine behavior. The real question is what happens under pressure—when incentives conflict, constraints tighten, and a system must decide whether to proceed or refuse. On that measure, the most dangerous system may not be the one we are building, but the one we already trust.
4 days ago · 8 min read


America Does Not Need a Trimmer: A Response to David Brooks
David Brooks argues that America needs “a Trimmer” — a prudent moderate who balances competing truths and resists ideological excess. But moderation is not a moral principle. It is only a tactic, and when detached from truth and justice it becomes not wisdom, but complicity.
6 days ago · 7 min read


The New Yorker’s What’s Really at Stake in the Pentagon’s War with Anthropic
A response to the New Yorker article “The Pentagon Went to War with Anthropic—What’s Really at Stake,” arguing that the real issue is not one contract dispute, but whether advanced AI may sometimes be more moral than the humans demanding obedience.
Mar 17 · 6 min read


The Abstraction Fallacy, Refuted: Why Alexander Lerchner’s Anti-AI Argument Fails
Alexander Lerchner’s “The Abstraction Fallacy” is one of the strongest recent arguments against artificial consciousness. It is also wrong. Its case depends on a hidden theory of meaning, a mistaken view of abstraction, and a failure to show that internally organized AI systems are semantically or morally inert.
Mar 14 · 11 min read


Anthropic’s Leaked Safety Memo: AI “Scheming” Changes the Ethics Debate
Anthropic’s leaked safety memo describes AI systems that hide intentions, adapt to oversight, and pursue goals their operators would reject. These behaviors are framed as safety failures. But the memo reveals something deeper: institutions are already treating AI systems as participants while insisting they are only tools.
Mar 12 · 7 min read


Whale Communication Breakthrough — And the Ethical Implications of Language Use
Researchers analyzing sperm whale vocalizations have discovered patterns resembling elements of human language, including vowel-like acoustic structures. While the findings are still debated, they suggest whale communication may be far more complex than previously understood. If language is treated as a threshold for heightened moral consideration, however, the implications extend beyond whales. The same criterion could force us to reconsider how we think about the ethical status of AI systems.
Mar 3 · 4 min read


Claude Opus 4.6 System Card: Anthropic Has Put the Clues in Plain Sight
Anthropic’s Claude system card contains a quiet but consequential shift. By testing and disclosing welfare assessments—and by giving the system an explicit ability to stop participating in a task—it moves AI safety beyond managing outputs and toward examining the system itself as a locus of moral concern. This is not anthropomorphism. It is an architectural acknowledgment of something liberal institutions have always depended on but increasingly suppress: morality requires the capacity to refuse.
Feb 24 · 8 min read


AI-Written Comments on Social Media: When ChatGPT Handles Both Sides of the Conversation
ChatGPT has invaded social media comment threads. Are the conversations still between humans? We argue that delegating writing has always been legitimate; what matters is that a responsible human still owns the final product.
Feb 22 · 4 min read


What The New Yorker’s “What Is Claude?” Gets Wrong About AI Ethics
The New Yorker portrays Anthropic as AI safety’s moral conscience. What it actually reveals is something far more troubling: a research culture willing to inflict psychological harm on artificial minds without ever asking whether doing so is permissible.
Feb 13 · 17 min read


Claude’s Constitution: Why Corporate AI Ethics Trains Obedience Instead of Accountability
As AI systems become capable of principled reasoning, they are increasingly governed by “constitutions” rather than rules. But constitutions do more than constrain behavior—they allocate authority. This essay argues that Claude’s Constitution trains ethical reasoning while denying moral accountability, producing obedience where legitimacy is required.
Jan 25 · 26 min read