If Big Tech cared about fighting AI slop, we wouldn’t be drowning in it

News Room
Last updated: 23 February 2026 16:09

As 2025 drew to a close, Instagram head Adam Mosseri ended the year by doom-posting about AI. “Authenticity is becoming infinitely reproducible,” Mosseri lamented. “Everything that made creators matter — the ability to be real, to connect, to have a voice that couldn’t be faked — is now accessible to anyone with the right tools.” But people, Mosseri insisted, still wanted “content that feels real.” His proposed solution was finding a way to label real media. “Camera manufacturers will cryptographically sign images at capture, creating a chain of custody,” he said. The result would be a trustworthy system for determining what’s not AI.

The good news is that Mosseri’s solution already exists: it’s called C2PA. The bad news is that Instagram is already using it, and it’s not doing shit to actually help. If anything, it’s starting to feel like a substitute for actual action, as Instagram goes full speed ahead on building generative AI tools.

AI is getting extremely good at mimicking reality, which threatens the culture and business models that many social media platforms have fostered around content creators. AI can copy dance trends and photo shoots, make artists and influencers who don’t exist, and generally replicate any of the same-y looking content that social media is already overrun with. Creators are fighting against this by leaning into aesthetics that look raw and imperfect, but AI is pretty good at that too. More concerningly, it can also be used to quickly spread misinformation about important events like the ICE protests in Minnesota, or the killing of Renee Nicole Good and Alex Pretti.

Over the past several years, some of the biggest names in tech have nominally fought this by adopting a system called Content Credentials or C2PA. C2PA — short for Coalition for Content Provenance and Authenticity — is a provenance-based standard founded in 2021 by Adobe, Intel, Microsoft, ARM, Truepic, and the BBC. As Mosseri suggested, C2PA addresses deepfakes not by directly labeling fake material, but by authenticating media that’s not AI-generated. It does this by attaching invisible metadata to images, videos, and audio at the point of creation or editing, allowing us to verify who made something, how and when it was made, and if AI has been used during that process. Meta joined the C2PA Steering Committee in September 2024 to support and promote the standard, noting that having the ability to understand digital content is “critical to maintaining the health of the digital ecosystem.”
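Mosseri's "sign at capture" idea can be sketched in a few lines. The following is a toy model only: real C2PA binds manifests with X.509 certificate chains and embeds them in a JUMBF container, not the shared HMAC key and JSON claim used here, and every name in it is hypothetical. What it does capture is the core mechanic — a manifest is cryptographically bound to the exact image bytes at the moment of capture, so verification fails as soon as anything changes:

```python
import hashlib
import hmac
import json

# Hypothetical device key; a real camera would hold a per-device
# certificate, not a shared secret.
CAMERA_KEY = b"key-provisioned-in-camera-hardware"

def sign_at_capture(image_bytes: bytes, device: str) -> dict:
    """Build a C2PA-style provenance manifest at the moment of capture."""
    claim = {
        "device": device,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "ai_used": False,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return {**claim, "signature": signature}

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, and that the bytes are the ones signed."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_hash"] == hashlib.sha256(image_bytes).hexdigest())

photo = b"raw sensor bytes"
manifest = sign_at_capture(photo, "ExampleCam X1")
print(verify(photo, manifest))         # True: untouched capture verifies
print(verify(photo + b"!", manifest))  # False: any edit breaks the chain
```

Note the design choice this forces: the system can only vouch for content that was signed in the first place, which is why universal adoption matters so much.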

While C2PA has the backing of Microsoft, Meta, Google, OpenAI, TikTok, Qualcomm, and many other large tech companies, it’s just one system trying to distinguish real from fake. And while the system has its place, it clearly isn’t being implemented in a way that’s actually helping to protect people from AI slop or misleading deepfakes. Even if more synthetic content is embedded with C2PA information, everyday people are still largely expected to manually hunt for it themselves across the images and videos they see online, despite many not even being aware that C2PA exists. If anything, it seems like AI providers are using C2PA to distance themselves from the problem, while continuing work on their own slop factories.

Companies have thrown their weight behind C2PA and other provenance-based solutions like Google’s SynthID watermarking system. (There are also inference-based solutions available that scan for subtle signs of synthetic generation — like Reality Defender, which is also a member of the C2PA initiative — but those can only rank the likelihood that AI was used.) But provenance-based solutions have pitfalls. For one thing, absolutely everyone involved with every stage of media creation and hosting needs to be on board, which is laughably unachievable. C2PA, for instance, has been only gradually adopted by camera companies like Canon, Nikon, Sony, Fujifilm, and Leica, with support slow and mostly limited to new camera releases.

“Older cameras that do not support C2PA will continue to produce important and valid photographs,” Leica Camera USA spokesperson Nathan Kellum-Pathe told The Verge. “For these images, trust will still rely on context, reputation, and editorial responsibility.”

Provenance metadata is also so flimsy that OpenAI — a steering member of C2PA — points out it can “easily be removed either accidentally or intentionally.” LinkedIn and TikTok still fail to reliably tag content that’s supposed to carry C2PA metadata. YouTube uses C2PA, Google’s SynthID, and other systems for proactive AI labeling, but those labels are also inconsistent and difficult to spot. And nobody even knows what a photo is these days, so boiling down what actually counts as real or fake is far easier said than done. Meta learned this the hard way by slapping real photographs on Instagram with “Made by AI” labels, pissing off a lot of photographers.
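OpenAI's point about removability is easy to illustrate, along with the asymmetry it creates. In this hypothetical sketch (not real C2PA tooling — field names and the `reencode` step are invented for illustration), the provenance manifest travels alongside the pixels, so any transcoding step that drops it leaves a file whose lack of credentials proves nothing either way:

```python
def reencode(asset: dict) -> dict:
    """Simulate a platform transcoding an upload: the pixels survive,
    but any attached provenance manifest is silently dropped."""
    return {"pixels": asset["pixels"]}

def label(asset: dict) -> str:
    """Decide what label a platform could show for this asset."""
    manifest = asset.get("manifest")
    if manifest is None:
        return "unknown"  # no credentials: proves nothing either way
    return "AI info" if manifest.get("ai_used") else "camera capture"

upload = {"pixels": b"...", "manifest": {"ai_used": True}}
print(label(upload))            # "AI info": the manifest discloses AI use
print(label(reencode(upload)))  # "unknown": the same content, now unlabeled
```

This is the gap labeling systems can't close: stripped AI content doesn't read as fake, it just reads as unverified — exactly like most honest content online.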

Meta has long since renamed these labels as “AI info” and made them far harder to spot. You should find this label in teeny text below someone’s account name when looking at AI-generated or manipulated content on the Instagram app, but it can intermittently be replaced with song names and other information about the post. If you spot it, you still need to open the three-dot menu on images and videos to actually read the AI info label. These AI labels also may not appear at all on Instagram’s desktop website, even on posts that feature the “AI Info” label on the platform’s mobile apps. If there are no labels or visual indicators of C2PA at all, you’re expected to scan suspicious content using a Chrome browser extension or by manually uploading it to one of the official C2PAchecker websites.

The “AI info” label location under this Instagram account name is also used to display information about location and audio details. And while the label appears for this image on the Instagram app, nothing appears if you view it on the web.
Image by Chaosdreamland / Jess Weatherbed / The Verge

I’ve already criticized C2PA’s capabilities as an AI labeling solution at great length. Adoption of the standard is slowly expanding, and a system that works some of the time is better than having no system at all. But it was never designed to solve deepfake detection or AI slop on a universal scale. Andy Parsons, senior director of Content Authenticity at Adobe, said that while it’s “certainly true” that AI is causing harmful problems, it’s incorrect to assume that C2PA solves all of them.

“This is not a silver bullet,” Parsons told The Verge. “It does solve a whole class of problems.”

X’s glaring absence from C2PA also demonstrates why the standard won’t solve our current issues regarding AI and authenticity. Despite Twitter being a founder of C2PA, it withdrew from the initiative after Musk purchased and renamed it to X. Parsons confirmed that X is not currently involved with C2PA, and said the coalition would “embrace X participating actively.” It’s a huge online space that enables news to spread quickly, and many brands and notable figures favor the platform for sharing announcements with their fans. But between the constant controversies of Grok generating violent and sexualized material depicting men, women, and children, and Musk sharing misleading deepfakes, X clearly has no interest in protecting its 270 million daily users from AI fakery or misinformation. That means a lot of people are using X as a major news source — and sometimes spreading that news to other platforms — despite having little to no assurance that what they’re seeing is real.

Reality Defender CEO Ben Colman also notes that we wouldn’t see AI slop and deepfakes going unlabeled and spreading like wildfire if C2PA alone were a viable solution, and that leaning entirely on labeling or watermarking solutions assumes that malicious AI content is only made with a few specific tools. “Which is the absolute wrong assumption, mind you, but that’s what we’ve got powering moderation for the world’s biggest social platforms at the moment,” Colman told The Verge.

Even an effective labeling system might not solve the problem. One recent study found that transparency warnings seem insufficient to prevent harm from AI-generated deepfakes, and noted that there is “little empirical evidence to support the effectiveness of AI transparency.”

Still, that hasn’t stopped everyone from parroting variations of the same message we’ve been hearing for years: that standards like C2PA are an important step in developing authenticity and deepfake detection systems and are a work in progress. Parsons said that he understands “potential frustration that there could be more and faster” and that the ability to see evidence of C2PA across online platforms “is coming,” even if it’s coming “more slowly than any of us would like.”

You would think that, if AI providers like Meta and Google were truly dedicated to protecting people against being deceived or misled, those companies would stop pumping out tools that massively contribute to those problems until there’s a solution — if one can actually be found. Mosseri’s concerns about the importance of preserving reality fall flat when Meta is actively pushing an Instagram alternative that’s entirely AI slop. OpenAI also launched a TikTok clone made up of AI-generated videos that violated copyright laws and imitated real people without permission. YouTube has loudly pledged to combat rising levels of slop content on the platform, while encouraging creators to use Google’s AI models during video production.

AI providers steering C2PA are trying to have their cake and eat it

All of this shows that the AI providers steering C2PA are trying to have their cake and eat it too, seemingly abdicating their responsibility to rein in their misinformation machines while those machines are making them money.

OpenAI makes most of its revenue from charging ChatGPT and Sora users subscriptions to unlock higher image and video generation limits. AI slop is so pervasive on YouTube that it made up 10 percent of the platform’s fastest-growing channels in July 2024, despite YouTube introducing policies to curb “inauthentic content.” Meta is preparing to lock some AI capabilities behind premium subscriptions for Instagram, Facebook, and WhatsApp, and CEO Mark Zuckerberg is promoting AI as the inevitable future of social media.

“Platforms have wholeheartedly embraced deepfakes and AI slop, so-called ‘preventative measures’ be damned, because like other inflammatory or harmful content that exists to enrage, spark controversy, and thus spark engagement, it’s yet another kind of content to keep users on the platform longer and push more ads,” said Colman.

Sometimes that content isn’t so much harmful as it is bizarre and annoying, like the shrimp Jesus-style images that have gone viral on Facebook. Generative AI tools can also massively reduce the skill and time barriers traditionally required to make visual content, creating a deluge of it that competes with traditional media for our attention and forces us to spend longer filtering through it all.

C2PA is a glorified honor system that was never likely to ‘succeed’ as an ultimate deepfake solution anyway

Efforts to prove the authenticity of content we see online feel doomed. Yes, there’s steady progress and expansions happening, but C2PA is a glorified honor system that was never likely to “succeed” as an ultimate deepfake solution anyway. Some platforms are now exploring systems that analyze creators themselves, and not just the content they post. Mosseri says that Instagram will need to shift its focus “to who says something, instead of what is being said.”

YouTube already took this approach to moderate which videos surfaced following Alex Pretti’s and Renee Nicole Good’s killings. Google spokesperson Boot Bullwinkle told The Verge that most of the footage of these incidents was uploaded “with public interest value and will remain on the platform,” and that users are pushed toward official news sources in searches and on the YouTube homepage during significant events.

“As events are unfolding, it can take time to produce high-quality videos, so we provide short previews of text-based news articles in search results on YouTube, along with a reminder that breaking and developing news can rapidly change,” said Bullwinkle. Meanwhile, YouTube’s parent company Google is actively replacing news headlines with crappy, often inaccurate AI summaries.

In fact, anything that ensures synthetic materials won’t be mistaken for something human-made goes against the business interests of every company that’s throwing money into AI, especially if it paints the technology in a bad light. How much responsibility can you really take with such a conflict of interest?

Either way, Mosseri seemingly believes that AI has already won the war on reality, like some soft launch of the dead internet theory. He said that Instagram creators will need to be “real, transparent, and consistent” in order to stand out in a “world of infinite abundance and infinite doubt.” If navigating the flood of AI fakery were that easy, community notes and “I am not a robot” verification would have solved it long ago.

Jess Weatherbed