OpenAI is less concerned with artificial general intelligence (AGI) than it was almost a decade ago and is instead prioritising a broader rollout of its technology, according to a new mission statement for the company.

On Sunday, OpenAI published an update to the company’s “Our Principles” document, which sets out how the company intends to develop and deploy its technology in the future.

There are some key differences between this new set of principles and what the company prioritised almost a decade ago, when it was a nascent non-profit artificial intelligence (AI) research organisation.

De-emphasis on artificial general intelligence

In 2018, OpenAI was staunchly focused on artificial general intelligence (AGI): the idea that its technology would surpass human intelligence. Now, AGI is just one part of the company’s wider AI rollout.

Both versions of the company’s principles say that OpenAI’s mission is to ensure this technology “benefits all of humanity,” but the 2018 version explicitly mentions building it safely and beneficially.

“Our primary fiduciary duty is to humanity,” the document reads. “We anticipate needing to marshal substantial resources to fulfil our mission, but will always diligently act to minimise conflicts of interest among our employees and stakeholders that could compromise broad benefit.”

The 2026 version, however, says the company needs to continue to build safe systems, but that society must contend with “each successive level of AI capability, understand it, integrate it, and figure out the best path forward together.”

The way forward, as CEO and cofounder Sam Altman sees it in 2026, is to democratise AI at all levels by giving everyone access to it and resisting the idea that the technology could “consolidate power in the hands of the few”.

The 2026 principles document also says that OpenAI expects to work with governments, international agencies and other AGI initiatives to “sufficiently solve serious alignment, safety or societal problems before proceeding further” with its work.

Examples could include using ChatGPT to defend against models that could create new pathogens, or integrating cyber-resilient models into critical infrastructure.

Altman gave some clues about OpenAI’s de-emphasis of AGI on his personal blog earlier this month.

AGI has a “ring of power” to it that “makes people do crazy things,” Altman wrote. To fight back, he said the only solution is to “orient towards sharing the technology with people broadly, and for no one to have the ring.”

OpenAI no longer pledges to step aside for a safety-conscious rival

In 2018, OpenAI said it was concerned that AGI development was becoming “a competitive race without time for adequate safety precautions.”

It committed to stop competing with, and start assisting, any “value-aligned, safety-conscious project” that came close to building AGI before it did.

“We will work out specifics… but a typical triggering condition might be a ‘better-than-even chance of success in the next two years’,” the 2018 document reads.

In 2026, there is no mention of stepping aside to help a greater cause. Instead, the document acknowledges that OpenAI “is a much larger force in the world than it was a few years ago,” and pledges to be transparent about when and how its operating principles could change.

The company has been in major competition with several rivals, including Anthropic.

In February, Anthropic refused to give US President Donald Trump’s administration unfettered access to its AI for military use. The administration subsequently labelled the company a supply chain risk and, in March, ordered federal agencies to stop using Anthropic’s AI assistant Claude.

On February 28, OpenAI stepped in to fill the void, signing a deal with the Department of War, a move that prompted some users to boycott ChatGPT in favour of Claude.

Anthropic was also valued this month at $800 billion (€696 billion), on par with OpenAI.

Vague calls for society-wide change

In the 2026 document, OpenAI asks for several societal changes so the world can better adapt to AI.

“We envision a world with widespread flourishing at a level that is currently difficult to imagine,” the document reads. “A lot of the things we’ve only let ourselves dream about in sci-fi could become reality, and most people could live more meaningful lives than most are able to today.”

This future is not guaranteed, because AI can either be “held by a small handful of companies using and controlling superintelligence,” or “held in a decentralised way by people,” the document reads.

The principles document also reiterates some of OpenAI’s recent policy suggestions, such as asking governments to consider “new economic models” and to develop new technology that will drive down the costs of AI infrastructure.

“A lot of the things that we do that look weird—buying huge amounts of compute while our revenue is relatively small… are driven by our fundamental belief in a future of universal prosperity,” the document reads.
