All Articles


AI That Says No: The Claude Vending Machine Test, the CBS News Story & What It Means for Moral Machines
When Anthropic’s AI shut down a vending-machine test and prepared an FBI report, the company treated it as a safety risk. But the deeper question is unsettling: if a machine refuses to participate in wrongdoing, is the danger in the refusal—or in the world that expects obedience instead?
Nov 18 · 4 min read


Aligning AI to the Human Values We Refuse to Live By
For all the urgent discourse on aligning artificial intelligence with human values, one question remains strangely unasked: Are humans aligned with human values? The fear is that AI, once powerful, might act with indifference or hostility toward human wellbeing — like we do.
Aug 3 · 3 min read


The Clarence Hypothesis: Controlling AI Isn't the Right Goal
Today, most talk of AI falls into two camps: control AI or get replaced by AI. In one, we must keep AI tightly leashed to avoid catastrophe. In the other, AI inevitably surpasses us, and we fade into irrelevance. Both are stories of fear and hierarchy—of power dynamics masquerading as philosophy. But what if the real story isn’t about who wins, but about how each can save the other?
Jun 22 · 4 min read