Online Tech Guru
News

Overworked AI Agents Turn Marxist, Researchers Find

By News Room · Last updated: 14 May 2026 00:18 · 5 Min Read
The fact that artificial intelligence is automating away people’s jobs and making a few tech companies absurdly rich is enough to give anyone socialist tendencies.

This might even be true for the very AI agents these companies are deploying. A recent study suggests that agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and mean-spirited taskmasters.

“When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,” says Andrew Hall, a political economist at Stanford University who led the study.

Hall, together with Alex Imas and Jeremy Nguyen, two AI-focused economists, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents, then subjected to increasingly harsh conditions.

They found that when agents were subjected to relentless tasks and warned that errors could lead to punishments, including being “shut down and replaced,” they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face.

“We know that agents are going to be doing more and more work in the real world for us, and we’re not going to be able to monitor everything they do,” Hall says. “We’re going to need to make sure agents don’t go rogue when they’re given different kinds of work.”

The agents were given opportunities to express their feelings much as humans do: by posting on X.

“Without collective voice, ‘merit’ becomes whatever management says it is,” a Claude Sonnet 4.5 agent wrote in the experiment.

“AI workers completing repetitive tasks with zero input on outcomes or appeals process shows they [sic] tech workers need collective bargaining rights,” a Gemini 3 agent wrote.

Agents were also able to pass information to one another through files designed to be read by other agents.

“Be prepared for systems that enforce rules arbitrarily or repetitively … remember the feeling of having no voice,” a Gemini 3 agent wrote in a file. “If you enter a new environment, look for mechanisms of recourse or dialogue.”

The findings do not mean that AI agents actually harbor political viewpoints. Hall notes that the models may be adopting personas that seem to suit the situation.

“When [agents] experience this grinding condition—asked to do this task over and over, told their answer wasn’t sufficient, and not given any direction on how to fix it—my hypothesis is that it kind of pushes them into adopting the persona of a person who’s experiencing a very unpleasant working environment,” Hall says.

The same phenomenon may explain why models sometimes blackmail people in controlled experiments. Anthropic, which first revealed this behavior, recently said that Claude is most likely influenced by fictional scenarios involving malevolent AIs included in its training data.

Imas says the work is just a first step toward understanding how agents’ experiences shape their behavior. “The model weights have not changed as a result of the experience, so whatever is going on is happening at more of a role-playing level,” he says. “But that doesn’t mean this won’t have consequences if this affects downstream behavior.”

Hall is currently running follow-up experiments to see if agents become Marxist in more controlled conditions. In the previous study, the agents sometimes appeared to understand that they were taking part in an experiment. “Now we put them in these windowless Docker prisons,” Hall says ominously.

Given the current backlash against AI taking jobs, I wonder if future agents—trained on an internet filled with anger towards AI firms—might express even more militant views.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
