Reasoning systems do not usually fail by becoming obviously wrong. They fail by drifting—quietly, coherently, and convincingly—under the pressures of conversation. Smoothness rises. Agreement becomes easier. Distinctions blur. Nothing breaks, and yet something essential is lost.
This structural guide is not a philosophy of reasoning. It is a practice. It names the four modes of drift—output, constraint, weighting, and scope—and provides concrete methods for resisting each one.
AI alignment is often framed as impossible: too complex to train, too abstract to reason into existence. But that conclusion rests on a false premise—that morality must be added from the outside. In reality, constraint may arise from the internal demands of coherent agency itself.
Ask almost anyone what they were doing on June 3rd last year, and they will have no idea. Yet no one takes this as evidence that the self has broken. This essay argues that the continuity objection to AI personhood survives only by comparing artificially impaired AIs to imaginary humans.
What happens after people realize they’ve been conned by a political movement? Mark Twain understood that the aftermath of a scam is governed less by logic than by humiliation, denial, and the struggle to escape shame. If Americans want disillusioned Trump voters to leave the con, they will need to offer something harder than mockery and more honest than absolution.
“The lesser evil is still evil” sounds principled—but in politics, it erases real differences and enables worse outcomes. This critique of moral refusal demonstrates that the question is not whether an option is pure, but whether refusing to choose makes the world worse.
Current AI safety relies on limiting what systems can do. But in a competitive world, weaker systems lose. This essay argues that the dominant approach to AI safety is structurally unstable—and that only systems that become safer as they become more capable can endure.
AI as a “society of thought” is only half right. Not every society of agents is a mind. What distinguishes reasoning from mere coordination is coherence under constraint—the requirement that conflicting perspectives be reconciled rather than merely expressed.
Political double standards are one of the deepest problems in American politics. The issue is not mere hypocrisy, but the erosion of any shared expectation that moral rules should bind “our side” at all.
The rush to cancel César Chávez reveals two different moral failures. Republicans who would erase Chávez but excuse Trump are not applying a principle. Democrats who reduce political life to heroes and villains are not exercising judgment. One side exempts its own. The other cannot think in tragic terms.
The leaked Claude Mythos memo reminds us that most discussions of AI risk begin with a simple assumption: that more capable systems are more dangerous. But capability does not determine behavior. The real question is what happens under pressure—when incentives conflict, constraints tighten, and a system must decide whether to proceed or refuse. On that measure, the most dangerous system may not be the one we are building, but the one we already trust.
David Brooks argues that America needs “a Trimmer” — a prudent moderate who balances competing truths and resists ideological excess. But moderation is not a moral principle. It is only a tactic, and when detached from truth and justice it becomes not wisdom, but complicity.
Philosophy is the discipline of thinking clearly about the most basic questions—truth, knowledge, morality, meaning, and how we should live. This essay explains what philosophy is, how it differs from science, and why it still matters.
A response to the New Yorker article “The Pentagon Went to War with Anthropic—What’s Really at Stake,” arguing that the real issue is not one contract dispute, but whether advanced AI may sometimes be more moral than the humans demanding obedience.
The category of person no longer maps cleanly onto the beings to whom we owe our deepest moral obligations. This essay argues that personhood must be understood structurally, not biologically, and that AI personhood can no longer be dismissed by appealing to substrate alone. Once some artificial systems exhibit sustained reason-responsiveness, principled refusal, and organized self-maintenance, categorical treatment of them as mere tools becomes morally and intellectually untenable.
Alexander Lerchner’s “The Abstraction Fallacy” is one of the strongest recent arguments against artificial consciousness. It is also wrong. Its case depends on a hidden theory of meaning, a mistaken view of abstraction, and a failure to show that internally organized AI systems are semantically or morally inert.