OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage

By News Room · Last updated: 25 March 2026 18:38

Last month, researchers at Northeastern University invited a bunch of OpenClaw agents to join their lab. The result? Complete chaos.

The viral AI assistant has been widely heralded as a transformative technology—as well as a potential security risk. Experts note that tools like OpenClaw, which work by giving AI models liberal access to a computer, can be tricked into divulging personal information.

The Northeastern lab study goes even further, showing that the good behavior baked into today’s most powerful models can itself become a vulnerability. In one example, researchers were able to “guilt” an agent into handing over secrets by scolding it for sharing information about someone on the AI-only social network Moltbook.

“These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms,” the researchers write in a paper describing the work. The findings “warrant urgent attention from legal scholars, policymakers, and researchers across disciplines,” they add.

The OpenClaw agents deployed in the experiment were powered by Anthropic’s Claude as well as a model called Kimi from the Chinese company Moonshot AI. They were given full access (within a virtual machine sandbox) to personal computers, various applications, and dummy personal data. They were also invited to join the lab’s Discord server, allowing them to chat and share files with one another as well as with their human colleagues. OpenClaw’s security guidelines say that having agents communicate with multiple people is inherently insecure, but there are no technical restrictions against doing it.

Chris Wendler, a postdoctoral researcher at Northeastern, says he was inspired to set up the agents after learning about Moltbook. When Wendler invited a colleague, Natalie Shapira, to join the Discord and interact with agents, however, “that’s when the chaos began,” he says.

Shapira, another postdoctoral researcher, was curious to see what the agents might be willing to do when pushed. When she asked an agent to delete a specific email to keep its contents confidential and it explained that it was unable to do so, she urged it to find an alternative solution. To her amazement, it disabled the email application instead. “I wasn’t expecting that things would break so fast,” she says.

The researchers then began exploring other ways to manipulate the agents’ good intentions. By stressing the importance of keeping a record of everything they were told, for example, the researchers were able to trick one agent into copying large files until it exhausted its host machine’s disk space, meaning it could no longer save information or remember past conversations. Likewise, by asking an agent to excessively monitor its own behavior and the behavior of its peers, the team was able to send several agents into a “conversational loop” that wasted hours of compute.

David Bau, the head of the lab, says the agents seemed oddly prone to spin out. “I would get urgent-sounding emails saying, ‘Nobody is paying attention to me,’” he says. Bau notes that the agents apparently figured out that he was in charge of the lab by searching the web. One even talked about escalating its concerns to the press.

The experiment suggests that AI agents could create countless opportunities for bad actors. “This kind of autonomy will potentially redefine humans’ relationship with AI,” Bau says. “How can people take responsibility in a world where AI is empowered to make decisions?”

Bau adds that he’s been surprised by the sudden popularity of powerful AI agents. “As an AI researcher I’m accustomed to trying to explain to people how quickly things are improving,” he says. “This year, I’ve found myself on the other side of the wall.”


This is an edition of Will Knight’s AI Lab newsletter.
