Online Tech Guru
The Former Staffer Calling Out OpenAI’s Erotica Claims

By News Room · Last updated: 11 November 2025 18:02 · 3 Min Read

When the history of AI is written, Steven Adler may just end up being its Paul Revere—or at least, one of them—when it comes to safety.

Last month Adler, who spent four years in various safety roles at OpenAI, wrote a piece for The New York Times with a rather alarming title: “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’” In it, he laid out the problems OpenAI faced when it came to allowing users to have erotic conversations with chatbots while also protecting them from any impacts those interactions could have on their mental health. “Nobody wanted to be the morality police, but we lacked ways to measure and manage erotic usage carefully,” he wrote. “We decided AI-powered erotica would have to wait.”

Adler wrote his op-ed because OpenAI CEO Sam Altman had recently announced that the company would soon allow “erotica for verified adults.” In response, Adler wrote that he had “major questions” about whether OpenAI had done enough to, in Altman’s words, “mitigate” the mental health concerns around how users interact with the company’s chatbots.

After reading Adler’s piece, I wanted to talk to him. He graciously accepted an offer to come to the WIRED offices in San Francisco, and on this episode of The Big Interview, he talks about what he learned during his four years at OpenAI, the future of AI safety, and the challenge he’s set out for the companies providing chatbots to the world.

This interview has been edited for length and clarity.

KATIE DRUMMOND: Before we get going, I want to clarify two things. One, you are, unfortunately, not the same Steven Adler who played drums in Guns N’ Roses, correct?

STEVEN ADLER: Absolutely correct.

OK, that is not you. And two, you have had a very long career working in technology, and more specifically in artificial intelligence. So, before we get into all of the things, tell us a little bit about your career and your background and what you’ve worked on.

I’ve worked all across the AI industry, particularly focused on safety angles. Most recently, I worked for four years at OpenAI. I worked across, essentially, every dimension of the safety issues you can imagine: How do we make the products better for customers and rule out the risks that are already happening? And looking a bit further down the road, how will we know if AI systems are getting truly extremely dangerous?
