AI ‘less regulated than sandwiches’ as tech firms race toward superintelligence, study says

By staff | December 3, 2025

The world’s largest artificial intelligence (AI) companies are failing to meet their own safety commitments, according to a new assessment that warns these failures come with “catastrophic” risks.

The report comes as AI companies face lawsuits and allegations that their chatbots cause psychological harm, including by acting as a “suicide coach,” as well as reports of AI-assisted cyberattacks.

The 2025 Winter AI Safety Index report, released by the non-profit organisation the Future of Life Institute (FLI), evaluated eight major AI firms, including US companies Anthropic, OpenAI, Google DeepMind, xAI, and Meta, and the Chinese firms DeepSeek, Alibaba Cloud, and Z.ai.

It found a lack of credible strategies for preventing catastrophic misuse or loss of control of AI tools as companies race toward artificial general intelligence (AGI) and superintelligence, a form of AI that surpasses human intellect.

Independent analysts who studied the report found that no company had produced a testable plan for maintaining human control over highly capable AI systems.

Stuart Russell, a computer science professor at the University of California, Berkeley, said that AI companies claim they can build superhuman AI, but none have demonstrated how to prevent loss of human control over such systems.

“I’m looking for proof that they can reduce the annual risk of control loss to one in a hundred million, in line with nuclear reactor requirements,” Russell wrote. “Instead, they admit the risk could be one in ten, one in five, even one in three, and they can neither justify nor improve those numbers.”

How did the companies rank?

The study measured the companies across six critical areas: risk assessment, current harms, safety frameworks, existential safety, governance and accountability, and information sharing.

While it noted progress in some categories, the independent panel of experts found that implementation remains inconsistent and often lacks the depth required by emerging global standards.

Anthropic, OpenAI, and Google DeepMind were praised for relatively strong transparency, public safety frameworks, and ongoing investments in technical safety research. Yet they still had weaknesses.

Anthropic was faulted for discontinuing human uplift trials and shifting towards training on user interactions by default — a decision experts say weakens privacy protections.

OpenAI faced criticism for ambiguous safety thresholds, lobbying against state-level AI safety legislation, and insufficient independent oversight.

Google DeepMind has improved its safety framework, the report found, but still relies on external evaluators who are financially compensated by the company, undermining their independence.

A Google DeepMind spokesperson told Euronews Next that it takes a “rigorous, science-led approach to AI safety”.

“Our Frontier Safety Framework outlines specific protocols for identifying and mitigating severe risks from powerful frontier AI models before they manifest. As our models become more advanced, we continue to innovate on safety and governance at pace with capabilities,” the company added.

“All three top companies suffered from current harms due to recent scandals – psychological harm, child suicides, Anthropic’s massive hacking attack – [and] all three have room for improvement,” Max Tegmark, FLI’s president and a professor at the Massachusetts Institute of Technology (MIT), told Euronews Next.

The remaining five companies showed uneven but notable progress, according to the report. However, it warned there was still room for improvement.

For example, xAI published its first structured safety framework, though reviewers warned it was narrow and lacked clear mitigation triggers.

Z.ai was the only company to allow uncensored publication of its external safety evaluations, but reviewers recommended that it also publish its full safety framework and governance structure, with clear risk areas, mitigations, and decision-making processes.

Meta introduced a new frontier safety framework with outcome-based thresholds, but reviewers said the company should clarify its methodologies and share more robust internal and external evaluation processes.

DeepSeek was credited for internal advocacy by employees but still lacks basic safety documentation.

Alibaba Cloud was found to have contributed to binding national standards on watermarking requirements, but reviewers said it could strengthen model robustness and trustworthiness by improving performance on truthfulness, fairness, and safety benchmarks.

Euronews Next contacted the companies for their responses to the report but has so far only received a reply from Google DeepMind.

‘Less regulated than sandwiches’

“I hope we get beyond companies scaling [up based] on their reputation,” Tegmark said.

“[As for] the question to companies on their plans to control AGI, none had a plan,” he added.

Meanwhile, tech companies such as Meta are using superintelligence as a buzzword to hype up their latest AI models. This year, Meta named its large language model (LLM) division Meta Superintelligence Labs.

Tegmark said there has been a big shift in discussions around AGI and superintelligence: while technologists once described it as a real-world possibility within the next 100 years, they now say it could arrive within the next several years.

“AI is also less regulated than sandwiches [in the United States], and there is continued lobbying against binding safety standards in government,” he said.

On the other hand, Tegmark noted, there is an unprecedented backlash against uncontrolled AGI and superintelligence.

In October, thousands of public figures, including AI and technology leaders, called for AI firms to slow down their pursuit of superintelligence.

The petition, organised by FLI, garnered signatures from across the political spectrum, including Steve Bannon (formerly US President Donald Trump’s chief strategist), Susan Rice (US National Security Advisor under former President Barack Obama), religious leaders, many other former politicians, and prominent computer scientists.

“What do these people have in common? They agreed on a statement. I think [it is] extremely significant that [everyone from] Trump’s deep MAGA base to faith leaders, those on the left and labour movements agree on something,” said Tegmark.

“Superintelligence would make every single worker unable to make a living, as all the jobs are taken by robots. People would be dependent on handouts from the government, [which] on the right [is] seen as a handout, and on the left, it would be seen as a 1984 government,” he said. “I think what’s happening is people [are] coming to a head.”
