Online Tech Guru
© Foxiz News Network. Ruby Design Company. All Rights Reserved.
Singapore’s Vision for AI Safety Bridges the US-China Divide

By News Room · Last updated: 8 May 2025 02:04 · 4 Min Read

The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.

“Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves—they will have it done to them—so it is very much in their interests to have the countries that are going to build it talk to each other.”

The countries thought most likely to build AGI are, of course, the US and China—and yet those nations seem more intent on outmaneuvering each other than working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and said the US needed to be “laser-focused on competing to win.”

The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.

The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.

Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.

“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.

The development of increasingly capable AI models, some of which have surprising abilities, has caused researchers to worry about a range of risks. While some focus on near-term harms including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes referred to as “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.

The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.
