AI ‘less regulated than sandwiches’ as tech firms race toward superintelligence, study says

By staff | December 3, 2025 | 5 min read

The world’s largest artificial intelligence (AI) companies are failing to meet their own safety commitments, according to a new assessment that warns these failures come with “catastrophic” risks.

The report comes as AI companies face lawsuits and allegations that their chatbots cause psychological harm, including by acting as a “suicide coach,” as well as reports of AI-assisted cyberattacks.

The 2025 Winter AI Safety Index report, released by the non-profit organisation the Future of Life Institute (FLI), evaluated eight major AI firms, including US companies Anthropic, OpenAI, Google DeepMind, xAI, and Meta, and the Chinese firms DeepSeek, Alibaba Cloud, and Z.ai.

It found a lack of credible strategies for preventing catastrophic misuse or loss of control of AI tools as companies race toward artificial general intelligence (AGI) and superintelligence, a form of AI that surpasses human intellect.

Independent analysts who studied the report found that no company had produced a testable plan for maintaining human control over highly capable AI systems.

Stuart Russell, a computer science professor at the University of California, Berkeley, said that AI companies claim they can build superhuman AI, but none have demonstrated how to prevent loss of human control over such systems.

“I’m looking for proof that they can reduce the annual risk of control loss to one in a hundred million, in line with nuclear reactor requirements,” Russell wrote. “Instead, they admit the risk could be one in ten, one in five, even one in three, and they can neither justify nor improve those numbers.”
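To give a rough sense of why the gap between those figures matters, the sketch below compounds each quoted annual risk over a hypothetical ten-year horizon using the standard formula 1 − (1 − p)^n. The ten-year window and the assumption that each year's risk is independent are illustrative choices made here, not figures from the report.

```python
# Illustrative sketch only: compounding the annual control-loss risks quoted above.
# The ten-year horizon and the independence assumption are our own, not the report's.

def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of at least one control-loss event over `years`,
    assuming the same independent risk each year."""
    return 1 - (1 - annual_risk) ** years

for label, p in [("nuclear-reactor standard", 1e-8),
                 ("one in ten", 1 / 10),
                 ("one in five", 1 / 5),
                 ("one in three", 1 / 3)]:
    print(f"{label:>25}: {cumulative_risk(p, 10):.2%} over 10 years")

# Approximate output: ~0.00%, 65.13%, 89.26%, 98.27%
```

Even over a single decade, an annual risk of one in ten compounds to roughly a two-in-three chance of a control-loss event, while the nuclear-reactor-style standard Russell cites keeps the cumulative risk negligible.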

How did the companies rank?

The study measured the companies across six critical areas: risk assessment, current harms, safety frameworks, existential safety, governance and accountability, and information sharing.

While it noted progress in some categories, the independent panel of experts found that implementation remains inconsistent and often lacks the depth required by emerging global standards.

Anthropic, OpenAI, and Google DeepMind were praised for relatively strong transparency, public safety frameworks, and ongoing investments in technical safety research. Yet they still had weaknesses.

Anthropic was faulted for discontinuing human uplift trials and shifting towards training on user interactions by default — a decision experts say weakens privacy protections.

OpenAI faced criticism for ambiguous safety thresholds, lobbying against state-level AI safety legislation, and insufficient independent oversight.

Google DeepMind has improved its safety framework, the report found, but still relies on external evaluators who are financially compensated by the company, undermining their independence.

A Google DeepMind spokesperson told Euronews Next that it takes a “rigorous, science-led approach to AI safety”.

“Our Frontier Safety Framework outlines specific protocols for identifying and mitigating severe risks from powerful frontier AI models before they manifest. As our models become more advanced, we continue to innovate on safety and governance at pace with capabilities,” the company added.

“All three top companies suffered from current harms due to recent scandals – psychological harm, child suicides, Anthropic’s massive hacking attack – [and] all three have room for improvement,” Max Tegmark, FLI’s president and a professor at the Massachusetts Institute of Technology (MIT), told Euronews Next.

The remaining five companies showed uneven but notable progress, according to the report. However, it warned there was still room for improvement.

For example, xAI published its first structured safety framework, though reviewers warned it was narrow and lacked clear mitigation triggers.

Z.ai was the only company to allow uncensored publication of its external safety evaluations, but reviewers recommended that it publish its full safety framework and governance structure, with clear risk areas, mitigations, and decision-making processes.

Meta introduced a new frontier safety framework with outcome-based thresholds, but reviewers said it should clarify its methodologies and share more robust internal and external evaluation processes.

DeepSeek was credited for internal advocacy by employees but still lacks basic safety documentation.

Alibaba Cloud was found to have contributed to binding national standards on watermarking, but reviewers said it could strengthen model robustness and trustworthiness by improving performance on truthfulness, fairness, and safety benchmarks.

Euronews Next contacted the companies for their responses to the report but has so far only received a reply from Google DeepMind.

‘Less regulated than sandwiches’

“I hope we get beyond companies scaling [up based] on their reputation,” Tegmark said.

“The question to companies on their plans to control AGI, none had a plan,” he added.

Meanwhile, tech companies such as Meta are using superintelligence as a buzzword to hype up their latest AI models. This year, Meta named its large language model (LLM) division Meta Superintelligence Labs.

Tegmark said there has been a big shift in discussions around AGI and superintelligence. While technologists once described it as a real-world possibility within the next 100 years, they now say it could arrive within the next several years.

“AI is also less regulated than sandwiches [in the United States], and there is continued lobbying against binding safety standards in government,” he said.

But Tegmark noted that, on the other hand, there is an unprecedented backlash against the pursuit of AGI and superintelligence without adequate controls.

In October, thousands of public figures, including AI and technology leaders, called for AI firms to slow down their pursuit of superintelligence.

The petition, organised by FLI, garnered signatures from across the political spectrum, including Steve Bannon (formerly US President Donald Trump’s chief strategist), Susan Rice (the former US National Security Advisor under former President Obama), religious leaders, and many other former politicians, as well as prominent computer scientists.

“What do these people have in common? They agreed on a statement. I think [it is] extremely significant that Trump’s deep MAGA base to faith leaders, those on the left and labour movements agree on something,” said Tegmark.

“Superintelligence would make every single worker unable to make a living, as all the jobs are taken by robots. People would be dependent on handouts from the government on the right, seen as a handout and on the left, it would be seen as a 1984 government,” he said. “I think what’s happening is people [are] coming to a head.”
