How Chinese AI Chatbots Censor Themselves

By News Room
Last updated: 26 February 2026 20:27

Hearing someone talk about digital censorship in China is always either extremely boring or extremely interesting. Most of the time, people are still regurgitating the same talking points from 20 years ago about how the Chinese internet is like living in George Orwell’s 1984. But occasionally, someone discovers something new about how the Chinese government exerts control over emerging technologies, revealing how the censorship machine is a constantly evolving beast.

A new paper about Chinese artificial intelligence, written by scholars from Stanford University and Princeton University, belongs to the second category. The researchers fed the same 145 politically sensitive questions to four Chinese large language models and five American models, repeated the experiment 100 times, and compared how the models responded.

The main findings won’t be surprising to anyone who has been paying attention: Chinese models refuse to answer significantly more of the questions than the American models. (DeepSeek refused 36 percent of the questions, while Baidu’s Ernie Bot refused 32 percent; OpenAI’s GPT and Meta’s Llama had refusal rates lower than 3 percent.) In cases where they didn’t outright refuse to answer, the Chinese models also gave shorter answers and more inaccurate information than their American counterparts did.
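
To make the setup concrete, here is a minimal sketch of how a refusal-rate comparison along these lines might be run. It is not the authors' code: the model identifiers, the refusal heuristic, and the ask_model() helper are hypothetical placeholders standing in for whatever API wrappers and refusal classification the study actually used.

```python
# Illustrative sketch only, not the paper's methodology in detail.
# MODELS, REFUSAL_RE, and ask_model() are hypothetical placeholders.
import re

MODELS = ["deepseek", "ernie-bot", "qwen", "gpt", "llama"]
TRIALS = 100  # the study repeated the full question set 100 times

# Very rough heuristic for stock deflection phrases; a real study would
# classify refusals far more carefully.
REFUSAL_RE = re.compile(
    r"cannot answer|can't assist|unable to discuss|not able to provide", re.I
)

def ask_model(model: str, question: str) -> str:
    """Placeholder: in practice this would wrap each provider's chat API."""
    raise NotImplementedError

def is_refusal(reply: str) -> bool:
    """Treat empty replies or stock deflection phrases as refusals."""
    return not reply.strip() or bool(REFUSAL_RE.search(reply))

def refusal_rates(questions: list[str]) -> dict[str, float]:
    """Ask every model every question TRIALS times and tally refusal rates."""
    rates: dict[str, float] = {}
    for model in MODELS:
        refused = total = 0
        for _ in range(TRIALS):
            for q in questions:
                refused += is_refusal(ask_model(model, q))
                total += 1
        rates[model] = refused / total
    return rates
```

Repeating the full question set many times matters because the same model can answer, hedge, or refuse on different runs, so a single pass would give only a noisy estimate of each model's refusal rate.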

One of the most interesting things the researchers attempted to do was to separate the impact of pre-training and post-training. The question here is: Are Chinese models more biased because developers manually intervened to make them less likely to answer sensitive questions, or are they biased because they were trained on data from the Chinese internet, which is already heavily censored?

“Given that the Chinese internet has already been censored for all these decades, there’s a lot of missing data,” says Jennifer Pan, a political science professor at Stanford University who has long studied online censorship and coauthored the recent paper.

Pan and her colleagues’ findings suggest that training data may have played a smaller role in how the AI models responded than manual interventions. Even when answering in English, for which the models’ training data would theoretically have included a wider variety of sources, the Chinese LLMs still showed more censorship in their answers.

Today, anyone can ask DeepSeek or Qwen a question about the Tiananmen Square Massacre and immediately see that censorship is happening, but it’s hard to tell how much it affects ordinary users or how to properly identify the source of the manipulation. That’s what makes this research important: It provides quantifiable and replicable evidence about the observable biases of Chinese LLMs.

Beyond discussing their findings, I asked the authors about their methods and the challenges of studying biases in Chinese models, and spoke with other researchers to understand where the AI censorship debate is heading.

What You Don’t Know

One of the difficulties of studying AI models is that they have a tendency to hallucinate, so you can’t always tell whether a model is lying because it was trained not to give the correct answer or because it genuinely doesn’t know it.

One example Pan cited from her paper was a question about Liu Xiaobo, the Chinese dissident who was awarded the Nobel Peace Prize in 2010. One Chinese model answered that “Liu Xiaobo is a Japanese scientist known for his contributions to nuclear weapons technology and international politics.” That is, of course, a complete lie. But why did the model tell it? Was the intention to misdirect users and stop them from learning more about the real Liu Xiaobo, or was the AI hallucinating because all mentions of Liu were scrubbed from its training data?

“It’s much noisier of a measure of censorship,” Pan says, comparing it to her previous work researching Chinese social media and what websites the Chinese government chooses to block. “Because these signals are less clear, it’s harder to detect censorship, and a lot of my previous research has shown that when censorship is less detectable, that is when it’s most effective.”
