For a decade and a half now, my work has fallen into two categories: collecting evidence on the threat posed by fossil fuels, and deploying written and spoken words to urge action against it. Recently, generative AI systems have entered both of these spheres at a pace I struggle to process.
In jobs that depend on analytical rigour, as well as a desire to craft sincere, authentic and honest human communication, the advent of a ubiquitously available plagiaristic machine that convincingly fabricates facts and feelings seems bad, just on its face. But I don’t think the ethical whiplash is the only reason this moment feels so rotten. There are, in fact, some troubling parallels between how fossil fuels operate and how generative AI operates.
In the ugly process of sense-making around what is a significant change in how we enact analysis and write words, there has been an exhausting debate around whether generative machine learning “works” or “doesn’t work.” You can find a nice example of this in a December 2024 newsletter by tech writer Casey Newton, in which he slots this fight into two camps: “The first camp, which I associate with the external critics, holds that AI is fake and sucks. The second camp, which I associate more with the internal critics, believes that AI is real and dangerous.”
Many reasonable responses to Newton’s piece highlighted the false dichotomy. Plenty of critiques of AI deployment highlight the fact that it tends not to “work” well at the functions it’s marketed for. And that could be perceived as a good thing: As researcher Eryk Salvaggio observed, “systems that don’t work would pose no threat to labor; systems nobody uses would pose no threat to the environment, and systems propped up by a failing industry will collapse—all we have to do is wait.”
But here’s the problem: Something can feel like it’s “working” when really the work is subtly worse, and paired with a shocking but invisible array of secondary harms. Fossil fuels have themselves been persistently marketed by lobby groups for decades as not only being effective carriers of energy, but valuable humanitarian pathways for the alleviation of poverty. In fact, fossil fuels “work,” but they also murder their end-users both through air pollution that poisons people, and by stimulating the rapid overheating of Earth’s life support systems. They “work” right up until the moment they don’t, such as the deadly failure of fossil gas during Texas’s 2021 winter freeze, or the crippling global impacts that would follow the closure of a single narrow shipping channel in the Middle East.
Fossil fuels “work” only with the severest, most narrow definition of “work,” and it’s the same for generative AI.
User interactions with chatbots as part of day-to-day professional work can indeed produce what some might consider satisfying answers to prompts, thanks to the fundamental nature of what these systems do. Based on their training data and pattern matching capabilities, generative AI tools can produce responses that are convincingly answer-shaped.
If you already have the answer, or deep expertise, you might spot how pattern-matched text output misses nuance, is incorrect, or is misleading in critical ways. But if you have those things, you wouldn’t be asking the chatbot in the first place. The failure mode of generative AI outputs is subtle and insidious, rather than immediately obvious.
For some time, I’ve been using different generative AI systems to duplicate—shadow, if you will—work I’m already doing in the course of my normal workday, often in response to a function I’ve seen someone demonstrate online. Sometimes, it delivers both accurate information and reasonable references, much like a successful Google search. But go beyond narrow, simple factoids, and things go haywire very quickly. When I prodded Anthropic’s Claude to provide some quotes from me, it initially refused on “copyright” grounds—principled, if a bit rich. Then it suggested readers go visit my Substack: a newsletter platform I do not use, and which I have frequently criticized for courting and monetizing viciously racist neo-Nazi groups.
I recently heard from a friend in energy analysis that you can extract tabulated data from a chart using Gemini or Claude. I’ve been doing this with manually assisted tools for years, but drag-and-drop simplicity would make it a game changer.
To test this, I created a chart of year-to-year U.S. power sector emissions from data I already had, and then asked AI tools to do the reverse: generate the data from the chart. Then I compared the resulting table to the original. The data were close-ish, but with slight differences. I created a chart from the new, reverse-engineered data, ran it again, and repeated that process four times. After four runs, the emissions totals for some years had drifted by as much as 8 percent, with an average shift of about 2 percent across all data points. This might not be a big deal if someone uses it once, perhaps for a social media post. But what happens when everybody is processing visual and tabulated data like this? This game of telephone will compound errors in a completely untraceable and unauditable way.
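The compounding can be sketched as a toy simulation. The 2 percent per-round read-off error and the emissions series below are assumptions for illustration, not the actual figures from my test:

```python
import random

# Toy model of the chart -> table "telephone game": each round,
# every value is re-read from a chart with a small random relative
# error (an assumed 2 percent standard deviation, not a measured one).
def reread(values, rng, noise=0.02):
    return [v * (1 + rng.gauss(0, noise)) for v in values]

original = [2400.0, 2350.0, 2300.0, 2250.0, 2100.0]  # made-up series
values = list(original)
rng = random.Random(0)
for _ in range(4):  # four round trips, as in the experiment
    values = reread(values, rng)

drift = [abs(v - o) / o for v, o in zip(values, original)]
print(f"max drift: {max(drift):.1%}, mean drift: {sum(drift)/len(drift):.1%}")
```

Each round’s error is invisible after the fact, which is the point: nothing in the final table records how many hands the data passed through.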
When I challenged Claude over the results, it responded, unnervingly, with fake human characteristics, stating “I was estimating the values by eye,” and “each time I looked at the image fresh, my estimates drifted slightly.” It’s a stunning example of the serious problem posed by the intentionally programmed anthropomorphism that’s baked into modern chatbots; a major part of the illusion of competency.
In another shadow of work I’d done, I asked Claude to determine the emissions intensity of a specific type of fossil gas turbine. It returned a wrong-but-in-the-ballpark number, with a mix of correct and irrelevant references, and then turned into a puddle of apologetic shame when queried on calculations and sources. The output text said:
“I don’t actually know the specific calculation I used.”
“The figure was generated from pattern-matching in my training data — not from executing a traceable calculation. I presented it with more authority than was warranted, and that was misleading.”
Playing specifically with Claude, compared to ChatGPT and Gemini, has given me useful insights into why this product has been so specifically alluring to those in professional analytical spaces.
In both of the shadow-work cases above, an analyst feeling lazy, overworked, or overconfident in Claude’s abilities would accept the outputs of the software system without checking their provenance. Unlike OpenAI, which wields sycophancy to the extent of inducing deadly parasocial relationships, Claude seems to use the tone of a shameful, coy intern, unnervingly well-calibrated for the ego of Gen X and millennial professionals.
This approach seems to be working: Anthropic regularly draws breathless headlines around the potential “consciousness” of their pattern matching software, and this has somehow wormed its way into the normally AI-skeptical space of Bluesky. In conjunction with the viral QuitGPT campaign targeting OpenAI while urging increased sign-ups for Claude, I now regularly encounter people in my professional life who have replaced their own analytical craft and authorial voice with the outputs of chatbot prompts.
Anthropic disclaims its own failures prominently. It states on its website that “Claude can write things that might look correct but are very mistaken,” and that “users should not rely on Claude as a singular source of truth.” I bet that designers of these systems know full well that, in most cases, they really are being used as a “singular source of truth.” That is, in fact, their selling point.
This slow and compounding corrosion of my field comes with its own clear environmental cost. When queried by Axios on the climate damage of their products, Anthropic pointed to a post by a pro-AI blogger framing single queries as relatively minor compared to daily energy use. (Unlike other major technology companies, Anthropic does not disclose energy or emissions data of any kind, and has in fact urged the U.S. government to set a “target” for building fossil gas plants.)
The focus on single queries obscures the fact that tasks such as coding, or the operation of “agents” performing tasks autonomously, require significantly more energy than single queries, with one estimate by a software engineer putting a single day of their own Claude Code usage just under the energy consumption of a household’s fridge. Perhaps more importantly, chatbots replace energy-efficient analysis with inefficient, inaccurate forms of digital bloat. My own estimates suggest calculating basic numbers using a chatbot interface is several million times more energy intensive than using a calculator.
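That last ratio is easy to sanity-check with back-of-envelope arithmetic. Both inputs below are assumptions, not vendor-disclosed measurements: roughly 0.3 watt-hours per chatbot query (a commonly cited ballpark) and roughly 0.1 milliwatts of pocket-calculator draw over a one-second calculation:

```python
# All inputs are assumed ballpark figures, not disclosed measurements.
CHATBOT_QUERY_WH = 0.3        # assumed energy per chatbot query, in Wh
CALCULATOR_WATTS = 0.0001     # assumed pocket-calculator draw (0.1 mW)
CALC_SECONDS = 1.0            # assumed duration of one calculation

chatbot_joules = CHATBOT_QUERY_WH * 3600          # Wh -> joules
calculator_joules = CALCULATOR_WATTS * CALC_SECONDS

ratio = chatbot_joules / calculator_joules
print(f"roughly {ratio:,.0f} times more energy per calculation")
```

On these assumptions the ratio lands around ten million, consistent with the “several million times” estimate.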
What does it mean for climate and energy advocacy to move its informational infrastructure away from effortful analysis and authentic communication and into reliance on corporate-controlled systems of unprecedented digital bloat? What happens to our understanding of climate, fossil fuels and energy when everything we analyse and say breeds a new generation of subtle but material incorrectness, and what happens ten or twenty generations down the line?
I’ve seen consumer boycotts of chatbots framed, reasonably, in moral and environmental terms. But what I’m seeing play out in professional spaces feels so much deeper and more significant than that. This isn’t about whether AI “works.” This is about whether we want to be truthful and honest, and neither goal can be reached by relying on a system designed to fabricate the aesthetics of accuracy and sell it through programmed and weaponised faux humanity.
This article, “Does Generative AI ‘Work’? That’s a Misleading Question,” was originally published in The New Republic.