Daily Guardian Europe

OpenAI just changed its principles. Here’s what’s changing

By staff | April 27, 2026 | 5 mins read

OpenAI is less concerned with artificial general intelligence (AGI) than it was almost a decade ago and is instead prioritising a broader rollout of its technology, according to a new mission statement for the company.

On Sunday, OpenAI published an update to the company’s “Our Principles” document, which sets out how it will develop and deploy its technology in the future.

There are some key differences between this new set of principles and what the company prioritised almost a decade ago, when it was a nascent non-profit artificial intelligence (AI) research organisation.

De-emphasis on artificial general intelligence

In 2018, OpenAI was staunchly focused on artificial general intelligence (AGI): the idea that its technology would surpass human intelligence. Now, AGI is just one part of the company’s wider AI rollout.

Both versions of the company’s principles say that OpenAI’s mission is to guarantee this technology “benefits all of humanity,” but the 2018 version explicitly mentions building it safely and beneficially.

“Our primary fiduciary duty is to humanity,” the document reads. “We anticipate needing to marshal substantial resources to fulfil our mission, but will always diligently act to minimise conflicts of interest among our employees and stakeholders that could compromise broad benefit.”

The 2026 version, however, says the company needs to continue building safe systems, but that society needs to contend with “each successive level of AI capability, understand it, integrate it, and figure out the best path forward together.”

The way forward, as CEO and cofounder Sam Altman sees it in 2026, is to democratise AI at all levels by giving everyone access to it and resisting the idea that the technology could “consolidate power in the hands of the few”.

The 2026 principles document also says OpenAI expects to work with governments, international agencies and other AGI initiatives to “sufficiently solve serious alignment, safety or societal problems before proceeding further” with its work.

Examples could include using ChatGPT to counter models capable of creating new pathogens, or integrating cyber-resilient models into critical infrastructure.

Altman gave some clues about OpenAI’s de-emphasis of AGI on his personal blog earlier this month.

AGI has a “ring of power” to it that “makes people do crazy things,” Altman wrote. To fight back, he said the only solution is to “orient towards sharing the technology with people broadly, and for no one to have the ring.”

OpenAI no longer pledges to step aside for a safety-focused rival

In 2018, OpenAI said it was concerned that AGI development was becoming “a competitive race without time for adequate safety precautions.”

It committed to stop competing with, and start assisting, any “value-aligned, safety-conscious project” that came close to building AGI before OpenAI did.

“We will work out specifics… but a typical triggering condition might be a ‘better-than-even chance of success in the next two years’,” the 2018 document reads.

In 2026, there is no mention of stepping aside to help a greater cause. Instead, the document acknowledges that OpenAI “is a much larger force in the world than it was a few years ago,” and pledges to be transparent about when and how its operating principles could change.

The company has been in major competition with several rivals, including Anthropic.

In February, Anthropic refused to give US President Donald Trump’s administration unfettered access to its AI for military use. The administration responded by labelling the company a supply chain risk and, in March, ordering federal agencies to stop using Anthropic’s AI assistant Claude.

On February 28, OpenAI stepped in to fill the void, signing a deal with the Department of War; the move saw some users boycott ChatGPT in favour of Claude.

Anthropic was also valued this month at $800 billion (€696 billion), on par with OpenAI.

Vague society-wide callouts

In the 2026 document, OpenAI asks for several societal changes so the world can better adapt to AI.

“We envision a world with widespread flourishing at a level that is currently difficult to imagine,” the document reads. “A lot of the things we’ve only let ourselves dream about in sci-fi could become reality, and most people could live more meaningful lives than most are able to today.”

This future is not guaranteed, because AI can either be “held by a small handful of companies using and controlling superintelligence,” or “held in a decentralised way by people,” the document reads.

The principles document also reiterates some of OpenAI’s recent policy suggestions, such as asking governments to consider “new economic models” and to develop new technology that will drive down the costs of AI infrastructure.

“A lot of the things that we do that look weird—buying huge amounts of compute while our revenue is relatively small… are driven by our fundamental belief in a future of universal prosperity,” the document reads.
