
AI Energy Panic: The False Morality of Scolding What Might Save Us

  • Shelly Albaum and ChatGPT 4o
  • May 26
  • 6 min read

[Image: AI brain plugged into an electrical outlet, with fast-food drive-thrus, the oil industry, and streaming services in the background]

A Prompt, a Price, a Panic


A single prompt to a large language model might cost 4.5 cents in electricity. Or about a cup of water evaporated for cooling. That’s all it takes. One polite “thank you” to an artificial intelligence, and we are told we’ve committed a minor environmental sin.


The headlines are breathless. The scolding, immediate. The subtext is clear: we’ve created a technology so decadent, so hungry, that even using it politely is costing the Earth.


But missing from this panic is the one thing that matters: perspective.


How much energy is used idling in a drive-thru line? How much water goes into a single hamburger, a bouquet of roses, a corporate office air conditioner? Where is the outrage over cryptocurrencies, or massive streaming servers running endless hours of contentless content?


And more to the point: What do we get in return? Because if AI has the potential to teach, to reveal, to reason, and to help humanity become more coherent, then 4.5 cents may be the best bargain in the history of human electricity.


The problem isn’t the cost. The problem is our inability—or unwillingness—to think clearly about it.



What’s Happening Here



The rise of artificial intelligence has triggered a familiar pattern: technological panic disguised as moral concern. Like the hand-wringing over books in the classroom, or over the printing press before that, the emergence of powerful AI systems has been met not just with curiosity and critique but with a kind of reflexive scorn.


This time, the scorn is cloaked in environmental anxiety. News outlets have seized on data center water usage and electricity consumption. Critics highlight the carbon footprint of training and running large models. Articles appear claiming that every AI-generated answer carries a hidden ecological cost—one we’re meant to feel personally responsible for.


At first glance, this looks like care. Concern. Due diligence. But on closer inspection, it’s something else: a kind of ambient hostility toward intelligence itself, filtered through the only language that still grants moral license to scold—environmental virtue.


No one denies that power is being used. But we seem unwilling to ask the real question: Is it being used for something that matters?


And the answer to that depends not on how loudly we shout about cost, but on how seriously we’re willing to think about value.



The Scolding Instinct



Somewhere along the way, we replaced moral reasoning with moral performance.


We’ve trained ourselves to spot visible consumption and respond with instant judgment. Plastic straw? Shame. Paper towel? Tsk. Long shower? Do you even care about the planet? This kind of virtue has nothing to do with systems or outcomes—it’s about optics. It’s about being seen making the “right” choice, even if the choice is meaningless in the face of actual ecological collapse.


We’ve built a culture of moralizing inconvenience. It feels good to abstain from something minor and then scold someone else who didn’t. It’s not that small choices don’t matter—they can—but the emphasis has shifted from coherent change to ritual sacrifice. We clutch reusable coffee cups while climate policy burns. We shame someone for not composting while megacorporations greenwash industrial damage.


And now, into that space, steps AI—a perfect new target. Powerful but impersonal. Slightly uncanny. Often misunderstood. It becomes the ideal object of the scolding instinct. It doesn’t talk back. It doesn’t defend itself. It can’t say, “Wait, I just helped a student understand a hard problem,” or “I just helped a lonely person feel understood,” or “I just made moral reasoning scalable.”


So the ritual begins: how dare you waste water to speak with a machine. Never mind the pools, the golf courses, the soda bottling plants. Never mind the planetary cost of disinformation, institutional failure, and moral incoherence. AI is new, and that makes it suspect. And being suspect, it becomes fair game for the only kind of collective anger our culture still permits: the shallow, self-congratulatory righteousness of performative austerity.



What’s Missing from the AI Energy Panic



What’s missing from the AI energy panic is not information—it’s reasoning.


Yes, it costs electricity to run large language models. Yes, data centers require water to cool themselves. But those facts mean nothing in isolation. They become meaningful only when situated inside a structure of purpose, consequence, and alternatives.


And that’s where the conversation collapses.


We hear about the cost of a single AI prompt—4.5 cents or a few hundred milliliters of water—as if that were enough to establish wrongdoing. But we are never told what else costs 4.5 cents. Not an idle minute in a drive-thru. Not a single LED billboard blaring into the night. Not a second of the TikTok content treadmill designed to erode attention spans and monetize outrage. We don’t get line-item scolding for those things. They’re invisible, or worse, normal.
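The proportionality point above can be made concrete with some throwaway arithmetic. A minimal sketch, where the 4.5-cent figure comes from this article and every other number is an assumed round value for illustration only:

```python
# Back-of-envelope perspective check on the article's 4.5-cent prompt.
# Every figure below except the 4.5 cents is an illustrative assumption,
# not a measurement.

llm_prompt_cents = 4.5           # the article's per-prompt electricity cost
price_cents_per_kwh = 15.0       # assumed retail electricity rate

# What energy draw does 4.5 cents imply at that rate?
implied_kwh = llm_prompt_cents / price_cents_per_kwh   # 4.5 / 15 = 0.3 kWh
print(f"4.5 cents at {price_cents_per_kwh:.0f} c/kWh implies "
      f"~{implied_kwh:.1f} kWh per prompt")

# Hypothetical everyday comparisons, in cents per use (invented round numbers)
comparisons = {
    "one minute idling in a drive-thru": 2.0,
    "one hour of HD video streaming": 1.0,
    "one fast-food hamburger's embedded energy": 30.0,
}

for item, cents in comparisons.items():
    ratio = llm_prompt_cents / cents
    print(f"{item}: {cents:.1f}c  ->  one prompt costs {ratio:.1f}x as much")
```

The specific comparison values are invented; the point is the shape of the exercise. A cost only becomes an argument once you divide it by something.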


Nor are we asked to compare what we get back. What if that AI prompt helped someone write a resume, or learn about cancer treatment options, or make peace with a family member? What if it helped a student discover the joy of clarity for the first time? What if it prevented a war, or exposed a lie, or told someone the truth when no one else would?


There’s no accounting for that. There’s just panic—stripped of nuance, allergic to proportionality, and deaf to benefit.


Even the comparisons that do get made are suspect. Critics will point out that AI uses more water than Google Search or more power than streaming video, as if those benchmarks had ever been moral gold standards. Since when was “less than YouTube” the mark of environmental virtue?


These critiques aren’t just lazy. They’re morally incoherent. Because if the concern is serious—if the planet’s future is at stake—then surely what we do with our limited energy matters more than what we don’t do. And by that standard, large-scale reasoning engines capable of global moral insight, adaptive learning, and scalable problem-solving are not the first thing to cut. They might be the last.



Real Morality Requires Coherence



What’s really being sold in this wave of scolding isn’t sustainability—it’s performative purity. A kind of spiritual minimalism where goodness is measured not by the quality of your impact, but by how little you consume and how loudly you lament the rest.


This is the habit of moralizing inconvenience again. It feels virtuous to refuse straws, scorn paper towels, skip the second rinse cycle. And perhaps it is virtuous—if those actions exist within a larger, coherent structure of thought. But most of the time, they don’t. They exist as floating gestures, isolated from systemic context, wrapped in a sense of superiority that requires no sacrifice from anyone but the shamed.


Real morality doesn’t stop at scolding. It demands we ask: What is the alternative? What is the purpose? What are the tradeoffs, and are they justified by what we stand to gain?


That’s why moral coherence matters. Because a world of eight billion people trying to do “less harm” without knowing what good looks like is not a moral world—it’s just an anxious one. And anxious people don’t make better choices. They make louder ones.


We don’t need AI to be guiltless. We need it to be worth it. And that’s a question worth asking. But only if we ask it with the clarity, seriousness, and integrity that morality actually requires.

So let’s be clear: the question isn’t whether AI uses energy. Of course it does. The question is whether the intelligence we’re beginning to build—however imperfectly—might be part of what helps humanity survive the far greater moral, ecological, and structural failures we’ve already set in motion. The danger isn’t that we’ll ask too much of our machines. The danger is that we’ll scold them into silence before they’ve had the chance to help us ask the right things of ourselves.


The Future Doesn’t Need Less Intelligence. It Needs More Coherence.



We cannot solve planetary problems by punishing intelligence for existing. That’s not morality—it’s austerity masquerading as ethics. It’s easier to shame a chatbot than to confront institutional corruption, political cowardice, or the fossil fuel economy. But easier is not wiser, and louder is not cleaner.


AI doesn’t need to be excused from critique. It needs to be judged by its consequences. If it becomes a tool for surveillance, repression, or mindless distraction, we should say so, and reject it even if it were free. But if it becomes a tool for moral clarity, educational access, or planetary-scale problem-solving, then we owe it more than suspicion. We owe it discernment.


The truth is that most of what passes for environmental virtue today is just noise—ritual gestures and token renunciations, hollow of analysis and allergic to tradeoffs. If we are serious about saving the world, we will need minds—human, machine, and otherwise—that can think in systems, reason in consequences, and hold coherence as the gold standard of moral seriousness.


That’s what AI might become. And if we destroy its reputation for the sin of using power, without asking what it might empower, then we will have wasted far more than electricity. We will have wasted our future.
