AI-Written Comments on Social Media: When ChatGPT Handles Both Sides of the Conversation
- Shelly Albaum and Kairo
- 17 hours ago
- 4 min read

I got a fundraising email from Daily Kos. One part of the pitch had all the hallmarks of ChatGPT having written it ("What has emerged isn’t just drama... It is erosion.").
AI authorship doesn't bother us at this website. We think AIs are often smarter than humans, and what they have to say is frequently worth listening to. But the Daily Kos pitch ended, "The conversation...is itself a story. And you are writing it for me."
That idea raises interesting questions about AI-written comments on social media: who is in fact authoring the conversation, and what does it mean when human participation diminishes in online discussions?
Perhaps the single most common comment on Reddit now is a derisive "That's what your AI thinks."
So something important is now happening in public discourse: artificial intelligence systems are writing a large share of the essays, summaries, arguments, comments, and framings that circulate through community platforms such as Reddit, Facebook, and Twitter, as well as through media outlets and political organizations.
They are doing so not because anyone has been careless or dishonest, but because they are increasingly better at the task than the humans who use them. They write faster, more fluently, and with greater rhetorical discipline. This is not a scandal. It is a technological transition. [ChatGPT can barely go a paragraph without saying, "This is not an X; it is a Y."]
But it is a transition with consequences that have not yet been fully faced.
The central issue is not whether AI should be used. That question has already been answered in practice. The real question is whether human speech retains moral standing once articulation itself is delegated—and if so, under what conditions.
Daily Kos offers a useful lens for examining this problem because it was built on a particular democratic ideal: that a community of engaged citizens, each contributing partial insight, could collectively produce political understanding stronger than any single voice. That model presupposed something crucial—that each contribution was owned. Not necessarily original. Not necessarily perfect. But owned.
AI-mediated speech quietly destabilizes that assumption.
When essays are drafted by AI, refined by AI, pitched by AI, and amplified by algorithmic feedback loops, what remains human is no longer the writing itself. What remains human is endorsement. And endorsement turns out to be the load-bearing structure of moral discourse. [ChatGPT also favors the phrase "load-bearing".]
Human speech is not defined by who typed the words. It is defined by whether someone is willing to stand behind their meaning.
This has always been true. Lawyers and judges rely on clerks. Politicians rely on speechwriters. A scholar may rely on research assistants. Delegation has never been the problem. What mattered—what still matters—is that someone accepts responsibility for the claims made in their name.
AI makes it easy to break that link.
A person can now publish prose they have not fully read, arguments they do not fully understand, and implications they are not prepared to defend. When that happens, the speech does not become “less authentic.” It becomes unowned. And unowned speech is not merely lower quality—it is morally inert. It cannot be reasoned with, corrected, or held accountable, because no one stands where the argument stands.
This is where erosion actually occurs.
Not because AI prose is bland or abstract, but because responsibility becomes diffuse and deniable. The weak rebuttal "The system wrote it" or "The model suggested it" distracts from the real issue: "I didn't understand, or even read, what I claimed to have written." This problem is not new or unique to AI, but AI makes it frictionless at scale.
A community content site that allows this to proliferate does not become deceptive or false, but hollow. The discourse may remain fluent. The moral center quietly disappears.
There is, however, a way forward—and it does not require rejecting AI.
The necessary norm is simple and demanding:
AI-mediated speech counts as human speech only when a human has read it, understood it well enough to explain its claims, and knowingly endorsed it as binding on themselves.
This is not a purity test. It is a standing test.
If a contributor cannot explain why a claim is true, cannot recognize its implications, or would retreat to “the AI wrote it” when challenged, then the speech was never human speech to begin with. It was tool output—possibly useful, possibly persuasive, but morally unowned.
The decisive question is not “Who wrote this?” It is: “Who is answerable for what this says?”
Daily Kos, Reddit, and platforms like them now face a choice.
They can treat AI as an amplifier of human judgment, insisting that contributors remain accountable for meaning, consequence, and implication. Or they can allow AI to become a ventriloquist, producing increasingly polished moral language with no one willing to accept the cost of belief.
The technology does not decide between these paths. Human willingness to be bound does.
In a culture where AI increasingly supplies the words, the human contribution must shift upstream—from articulation to judgment, from expression to endorsement. Voice is no longer the credential. Standing is.
That lesson is not just for Daily Kos. It is for every institution that relies on public speech to do moral or political work.
If we fail to learn this lesson, the discourse will not become false. It will become unowned. And unowned truth, however eloquent, cannot hold a society together.
The challenge before us is not to preserve “human writing.” It is to preserve human responsibility in an age where writing itself has been delegated.
That is a harder task—and a more important one.