From repealing Biden’s executive order to appointing people with varying views on AI, Euronews Next takes a look at what Trump 2.0 could mean for the emerging technology.

Throughout his 2024 presidential campaign, Donald Trump didn’t say much about his plans for the artificial intelligence (AI) industry in the United States. 

The Republican Party platform’s only commitment on AI was to repeal a Biden-era executive order because it “hinders AI innovation,” and “imposes radical left-wing ideas on the development of this technology”. 

Beyond that, Trump’s incoming administration says it will support “AI development rooted in free speech and human flourishing”.

Since his win, the president-elect has surrounded himself with people on both sides of the AI regulation debate, with reports suggesting his allies are exploring “Manhattan Project”-style efforts to develop AI military technology. He also wants to deploy AI throughout the US government. 

So what could a second Trump administration mean in practice for AI development in the US? Euronews Next breaks it down as a clearer picture of his future presidency emerges. 

Repealing Biden’s AI executive order

President Biden’s 2023 executive order served two purposes, according to Andrew Strait, associate director of the UK’s Ada Lovelace Institute, an AI think tank. 

The first was to create a framework for how the technology would be used in matters of national security. 

The other was to encourage a “set of strategies” and a “roadmap” for how the US government might address the ways AI algorithms could affect people’s access to government services, such as welfare, housing, healthcare, or education, Strait continued.

If Trump decides to get rid of Biden’s executive order, he will have to repeal it in full and could then reintroduce parts of it in his own version, according to Susan Ariel Aaronson, a research professor of international affairs at George Washington University in the US.

In 2020, Trump signed an executive order that defined trustworthy AI, requiring that models be valid, accountable, transparent, and reliable, Aaronson said. 

His administration also released what it claimed was the world’s first AI regulatory guidance. 

“Many of these [Trump] building blocks cannot be achieved without some form of AI or business regulation,” she said. 

Trump could also face pushback from tech companies if he repeals Biden’s executive order, because it “provided clarity regarding the use of AI and its development,” Aaronson continued. 

Future of AI risk research could be in jeopardy

One of the key creations under Biden’s executive order was the US AI Safety Institute, an independent research body studying the risks associated with AI and how to safely adopt it, according to Strait. 

The US AI Safety Institute is responsible for “advancing research and measurement science for AI safety,” according to its strategic vision. 

In the last few months, the Institute has pushed all of the major AI companies to make statements about how they evaluate AI risks and to disclose the types of frameworks they are adopting, Strait continued. 

The body has also started pre-release testing with AI companies to determine whether a system is safe. 

For example, the institute works with Big Tech firms to see whether their AI models generate racist or toxic content, and measures how often AI chatbots “hallucinate”, making up misleading answers to questions they don’t know the answers to, Strait continued. 

If Trump axes Biden’s executive order, US allies with their own AI Safety Institutes, such as Canada, the UK, and Australia, could help fill the research gap a US closure would leave, Strait continued. 

President Biden could still protect the AI Safety Institute by asking Congress to entrench it in legislation before the end of his mandate, Aaronson said, but she does not know whether he will do so. 

Euronews Next reached out to the National Institute of Standards and Technology (NIST), the government agency that houses the US AI Safety Institute, for its reaction to the recent US election results, but it said it “could not speculate on the matter”. 

Influence of Trump’s inner circle on (de)regulation

Trump is also surrounding himself with people with different positions on AI regulation, Strait said. 

One high-profile Trump supporter is billionaire Elon Musk, a strong proponent of regulating AI to address the potentially catastrophic risks of the new technology. 

Musk has already been tapped by Trump to co-lead a new “Department of Government Efficiency” (DOGE) that will significantly cut down government spending, according to a Wall Street Journal opinion piece Musk co-authored. 

Musk also supported California’s Senate Bill 1047 (SB 1047), which would have created a “duty of care” for developers to mitigate such catastrophic risks, for example from systems that could become uncontrollable. The bill was ultimately killed by Governor Gavin Newsom’s veto in September. 

Musk’s co-lead on DOGE, Vivek Ramaswamy, has also called for more regulation of AI. 

“Just like you can’t dump your chemicals, if you’re a chemical company, in somebody else’s river, well if you’re developing an AI algorithm today that has a negative impact on other people, you bear the liability for it,” Ramaswamy, a former Republican presidential candidate, said during a televised debate in Iowa last year. 

On the anti-regulation side is Vice-President-elect JD Vance, who spent just under five years as a venture capitalist and biotech executive in Silicon Valley.

Vance told a Senate committee in July that companies are too focused on regulating AI because of the threats the technology poses. These regulations could make it harder for new companies to innovate, he continued. 

Marc Andreessen, the head of influential Silicon Valley venture capital firm Andreessen Horowitz (a16z), is also opposed to AI regulation. 

Andreessen wrote in his ‘Techno-Optimist Manifesto’ last year that “regulatory capture” is the enemy.
