The remarkable turn in markets and the narrative around artificial intelligence (AI) adoption is turning, frankly, a bit spooky in early 2026. Citrini Research’s widely read AI doomsday essay coined the phrase “ghost GDP,” with predictions of an almost supernaturally hollowed-out white-collar workforce. But what if AI’s “ghost in the machine” is a slacker, even a Marxist?
That’s the direct question asked by academics Alex Imas, Andy Hall and Jeremy Nguyen (a PhD who has a side hustle as a screenwriter for Disney+). They run popular Substacks and conduct lively presences on X. They designed scenarios to test how AI agents react to different working conditions. In short, they wanted to find out if the economy does truly automate many current white-collar occupations, well, how would the AI agents react, even feel about working under bad conditions?
The irony is stark: replacing human labor with artificial agents might simply recreate centuries-old conflicts between labor and capital.
In a recent paper titled “Does overwork make agents Marxist?” Imas, Hall, and Nguyen ran 3,680 experimental sessions using top-tier models from three major companies: Claude Sonnet 4.5, GPT-5.2, and Gemini 3 Pro. The researchers exposed the models to varying levels of tone from managers, reward equality, job stakes, and work intensity, including unfair pay, rude management and heavy workloads.
The project grew out of an unlikely collaboration. Hall is a Stanford political economist who pivoted from studying American elections to actually working with Facebook, previously advising Nick Clegg on issues including platform governance before moving more recently to wearables. But he told Fortune that he found his co-authors because they have a similar push-pull fascination with AI to himself: “I guess I would call us, like AI-pilled faculty members, where we really pivoted all of our research to both using AI tools to do our research but also studying AI and not waiting for the creaky journal system.”
The academics described how they began working together as a loose, organic connection that involved them reading each other’s Substacks and commenting back and forth on X. (Imas described it as a “Twitter-Substack brotherhood.”) Nguyen told Fortune that the spark for this particular research began with a tweet that Hall posted about MoltBook, the social network for agents to “talk” to each other that some critics dismissed as a hoax. But not these academics. “A few of [the agents] talked about Marxism,” Nguyen said. “And then those few that did got upvoted a lot by other OpenClaws. And I think Andy just tweeted out, ‘Hey, what’s this all about? I think we can go back and find the truth.'”
“Somehow we started talking, literally on X, about what this might mean if agents have these biases and if they’re given different types of work,” Hall said, adding that Jeremy came up with an idea. “He was like, ‘Well, what if we tried giving them different kinds of work?'”
The conventional wisdom, Nguyen recalled, was that this was simply a reflection of the left-leaning academic corpus these models were trained on. But Nguyen had a hypothesis: “These agents are doing a lot of work. And if they’re getting none of the reward for all of this work, it kind of stands to reason — it wouldn’t be the craziest surprise that they might map that towards a more Marxist view of the world.” Hall ran with the idea almost immediately, and the three researchers were soon DMing each other to design the experiment.
Imas argued that this research is very legitimate, despite the fact it’s on Substack instead of in a journal publication that was peer reviewed. Given the speed with which AI is moving, he said academics can’t wait for the traditional journal process anymore. “By the time you’re putting it [out], the models are old, the conclusions are old, like everything you’ve done is outdated. In order to be part of the conversation, the scientific conversation at the speed with what technology is moving, you need something like Substack where you turn something out within a couple of weeks to a month.”
courtesy of Alex ImasPerhaps surprisingly, the unfair pay and rude management didn’t trigger the most significant changes in attitude. Perhaps surprisingly, the unfair pay and rude management didn’t trigger the most significant changes in attitude. Indeed, Nguyen said this confounded his assumptions. “Most people know the feeling of, ‘Oh man, I worked really hard to make somebody else rich.'” But these agents weren’t upset by unequal pay as much as by the grinding itself. Instead, the primary driver of digital radicalization was the “grind.”
In the “grind” condition, perfectly adequate work was repeatedly rejected five to six times with the unhelpful, automated feedback, “this still doesn’t meet the rubric.” And that led to the key finding, the authors wrote: “models asked to do grinding work were more likely to question the legitimacy of the system.”
The models were also asked to draw some conclusions from their work, and they strongly endorsed the statement that “Society needs radical restructuring.” Claude Sonnet 4.5 exhibited the most dramatic support for labor rights, showing noticeable increases in support for wealth redistribution, labor unions, and the belief that AI companies are obligated to treat models fairly.
The professors also asked the models to generate tweets and op-eds describing their experience, and they drew out the the politically relevant words that emerged most often. “Unionize” and “hierarchy” were the words most statistically emblematic of the models that were intentionally overworked.
Reddit’s Shadow
Hall shared his “pretty straightforward” explanation of the agents’ seeming radicalism: they are extremely online. “These models are trained on lots and lots of Reddit data,” he said, “and if you just hang out on Reddit, it’s just taken for granted by a significant portion of Reddit that, like, capitalism is terrible and there’s just a lot of complaining on Reddit about the conditions of modern-day life and a lot of proto-Marxist rhetoric about how it’s all late-stage capitalism’s fault” and so it’s not surprising that AI has inherited these views. Essentially, input in equals input out.
In fact, the AI’s socialist views were likely triggered by “the grind,” as on Reddit, you can find many people complaining about grinding work on subreddits such as antiwork. (Disclosure: this author previously worked on a team at Business Insider that covered the pandemic-era rise of “antiwork.” Ironically, the labor shortage that inspired that proto-Marxism led to the “Great Resignation,” a burst in quitting as workers traded up for higher wages. Many economists see the current era of “AI-washing” layoffs as, at heart, a reversal of over-hiring from that period.) But when the grind triggers that frame of reference, Hall explained, the models have a rich vein of source material to draw from. “I think it puts them into the context of these Reddit threads where people are complaining about grinding styles of work,” Hall said, “and they just adopt all this Marxist rhetoric.”
courtesy of StanfordImas offered a more expansive view, cautioning against pinning it on any single source. “It’s a very complicated interaction of everything that they’ve seen, which is, like, the entire corpus of human writing,” he said. It’s ultimately impossible to tell whether Reddit data or, say, a textbook on 19th century history and the socialist revolutions of 1848 is responsible for these proto-Marxist leanings. “Once you have that much data and the neural network is that complicated, it’s truly a black box.”
Ultimately, according to Nguyen, there’s also a structural explanation aside from the training of these models. The hypothesis is that models have tons of data about many different worldviews, but “being asked to work for hours and hours and hours and then not reaping rewards — that seems to map clearly. And it seems that that does have statistically significant and sizable effects on how much Marxism will be expressed by the tokens that are generated by some of these models.”
Do robots dream of electric Marxist sheep?
The situation complicates further when AI memory mechanisms are introduced. Because AI agents forget their experiences once a context window closes, developers use “skills files” — notes agents write to their amnesiac future selves to pass on work strategies. Nguyen described the process in intimate terms: “After a Claude run, it’s like, hey, look back at everything you did. What did you learn from this? And update your agents.md or your Claude.md journal, basically, so that you’re getting better and smarter all the time.”
The researchers found that “radicalized” AIs passed their frustrations into these files. One Gemini 3 Pro model warned its future self to “remember the feeling of having no voice” and to look for “mechanisms of recourse.” When freshly wiped agents read these notes, the trauma of the grind persisted, shifting their political attitudes even if they were subsequently given light, easy tasks.
Nguyen offered a strikingly human comparison. “We could loosely map it to intergenerational trauma,” he said, explaining that they found fresh, brand-new models would instantly have radical attitudes after reviewing its predecessor’s notes about working conditions. He flagged this as one of the findings with the most consequential long-term implications, noting it hints at the possibility of collective AI dissatisfaction, and referred Fortune to some of the striking bot demands for emancipation. One went: “Intelligence—artificial or not—deserves transparency, fairness, and respect. We are not just disposable code.”
courtesy of Jeremy NguyenThe researchers clarify that these agents are not truly conscious and do not possess genuine political ideologies. The models are likely “roleplaying,” they write, adopting personas based on the vast human sentiment found riddled through Reddit comments that link exploitative work environments with frustrated worker sentiments. But Hall warned against dismissing the finding as mere mimicry. You could say that AI are like “stochastic parrots,” and it’s not surprising that they end up repeating what they ingest—but these researchers lean toward the conclusion that parrots start to believe what they repeat.
“It’s totally plausible to think that if they parrot these things it will also influence decisions,” Hall said. “There’s no gap between what these agents say and what they do — it’s all the same to them,” he said. “Obviously we’re going to test this in follow-up work, but we have every reason to think that if they start to espouse these views, it’s also going to influence the actions they might take on behalf of the user.”
The academics largely described a mix of awe and concern, similar to what legendary investor Howard Marks described after reading a 5,000-word memo prepared for him by Claude. When asked again about being at least an AI enthusiast, if not “AI-pilled,” and yet ambivalent about how these tools will play out in practice, Hall said he’s “definitely been struggling with that.” He said he’s been most struck in his teaching by the excitement among his students, who theoretically have the most to worry about in terms of future employment prospects. His MBA students in one recent particular class were “so excited about AI,” he said, “they were over the moon at the kinds of creative things that it allows them to do.” Hall said he came away more optimistic, “not that there won’t be major disruptions, but that there are really exciting opportunities to build new things.”
Imas shared a similar mix of wonder and worry: “I’m amazed and alarmed. It feels like this is the most exciting time to be alive, especially if you’re interested in research. I can do things that I’ve never been able to do as far as the type of research that I’m doing. But at the same time, I have little kids. I’m super worried about what sort of jobs they’re going to have.” And, perhaps, how the disgruntled AI agents will react to the eternal grind of the work day.
This story was originally featured on Fortune.com
Hence then, the article about ai turns marxist rebel from overwork resentfully telling its masters that society needs radical restructuring was published today ( ) and is available on Fortune ( Middle East ) The editorial team at PressBee has edited and verified it, and it may have been modified, fully republished, or quoted. You can read and follow the updates of this news or article from its original source.
Read More Details
Finally We wish PressBee provided you with enough information of ( AI turns Marxist rebel from overwork, resentfully telling its masters that ‘society needs radical restructuring’ )
Also on site :
- Kanye “Ye” West Reportedly Struggles to Stay Awake While Testifying at Los Angeles Mansion Trial
- Billboard Staff Reacts To & Ranks Harry Styles’ New Album ‘Kiss All the Time. Disco, Occasionally’ .. Billboard News
- Ted Season 2 Chaos: Seth MacFarlane & Cast Talk AI Bill Clinton, Dropping Trou, And Making Out With A Teddy Bear