LONDON — The U.K. government is using artificial intelligence to help put together its landmark review of Britain’s armed forces in what amounts to a radical shakeup of how Whitehall functions.
Britain’s Ministry of Defence is deploying a custom-built AI program to sift through submissions for a comprehensive review of the nation’s defense capabilities, the department confirmed to POLITICO.
The AI model was created by Palantir, a U.S. tech firm, which won the government contract over the summer.
It’s the first time the British government has used AI to help produce a major review, and it comes as part of a broader push to integrate new tech into Whitehall departments.
The review is being hailed by government officials as an example of technological innovation driving greater public sector efficiency.
There are concerns, however, from some in Britain’s defense industry — one of Europe’s largest — that using AI for such a crucial review could lead to key submissions being overlooked.
One defense expert also said AI could be a “Trojan horse” for rogue actors attempting to hack into the British government.
“It’s going to be a glorified word cloud,” another defense industry figure said. “This isn’t the review to be trialing this stuff on.”
New era for defense
Defense Secretary John Healey announced the Strategic Defense Review within days of Labour’s July election victory, saying “we need a new era for defense” and suggesting the U.K.’s military capabilities had been “hollowed out” under the previous Conservative government.
The review — led by former NATO Secretary General George Robertson along with Fiona Hill, a former foreign policy aide to ex-U.S. President Donald Trump — will examine aspects of Britain’s defense capabilities including military recruitment, new weapons procurement and the future of Trident, Britain’s nuclear deterrent.
The government is also under pressure to say when it intends to hike spending on defense to 2.5 percent of GDP, up from the current 2.32 percent, after Labour made a vague promise to that effect ahead of the July election.
Three people with knowledge of the process, granted anonymity because they were not authorized to discuss the matter on the record, told POLITICO the review would be published by March, and that the government would outline its policy response by summer.
Healey’s department has opted to use AI to sift through the thousands of submissions it has received from the different strands of Britain’s military, along with other stakeholders such as arms manufacturers and think tanks.
The model will look for key words and themes in submissions and provide a summary that MoD officials can examine later this month, in what is being called a “review and challenge” phase.
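The MoD has not published technical details of the Palantir-built model, but the basic approach officials describe, scanning submissions for recurring words and themes and rolling them up into a summary for human reviewers to challenge, can be illustrated with a minimal sketch. Everything below, from the theme list to the function names, is an assumption for illustration only, not a description of the actual system.

```python
# Illustrative sketch only -- not the MoD/Palantir system. It shows the general
# idea described in the review process: scan each submission for key words and
# themes, then produce an aggregate summary that human officials can challenge.
import re
from collections import Counter

# Hypothetical theme keywords; the real review's themes are not public.
THEMES = {
    "recruitment": {"recruitment", "retention", "personnel"},
    "procurement": {"procurement", "acquisition", "contract"},
    "nuclear deterrent": {"trident", "nuclear", "deterrent"},
}

def tag_submission(text: str) -> Counter:
    """Count how often each theme's keywords appear in one submission."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for theme, keywords in THEMES.items():
        counts[theme] = sum(1 for word in words if word in keywords)
    return counts

def summarize(submissions: list[str]) -> dict[str, int]:
    """Aggregate theme counts across all submissions for human review."""
    total = Counter()
    for text in submissions:
        total += tag_submission(text)
    return dict(total.most_common())

if __name__ == "__main__":
    sample = [
        "Recruitment and retention remain the biggest personnel challenge.",
        "Procurement reform should precede any new acquisition programme.",
        "The future of Trident and the nuclear deterrent needs clarity.",
    ]
    print(summarize(sample))
```

The real system is presumably far more sophisticated, but the human “review and challenge” step plays the same role in both cases: officials, not the model, decide what the summary means.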
Trojan horse
Professor Mariarosaria Taddeo, an Oxford University academic specializing in the ethics of defense technologies, said the use of AI in this way was “not surprising” and noted that “a proper transformation is happening.”
Taddeo, who also serves as an independent adviser on a number of MoD boards, said: “It’s not a problem if AI is used to support decision-making, but the question is if it is used to support decision-making or if the AI is adopted without any critical thinking.
“And what AI are we talking about here? How has it been delivered, who’s involved, what type of testing has been thought through to stop inherent biases?”
Taddeo added that using AI in this manner could pose a major cybersecurity risk.
“Even if the AI is internal to an organization, are we basically creating a huge Trojan horse? AI is really fragile, it can be attacked,” she said.
But government officials stress that Whitehall civil servants will oversee the AI-produced work throughout the process.
One senior MoD insider close to the project, also granted anonymity in order to speak frankly, said it “will enable you to actually solicit people’s views and sort through them.
“It’s not just some poor bloke in the back room of the MoD, looking through postcards that have been sent in from across the country or something,” they added.
Gaming the system
A second defense industry figure warned, however, that some firms were trying to game the system by repeating words that would likely make it through the AI program’s filter.
They added that cutting the human element out of parts of the sifting process could result in crucial information from the armed forces and the private sector being overlooked, while other submissions could be unfairly prioritized.
“Given the nature and importance of this topic, and the work that needs to be done, are they really confident they are going to pick up all the salient points?” they asked.
A Ministry of Defence spokesperson said: “We have been transparent about our ambition to use AI for a wide range of defense applications, and the team are utilizing this technology to help review and analyze the high volume of submissions received to the Strategic Defense Review (SDR).
“This is Britain’s review — not just the government’s. We have consulted serving military, veterans, MPs of all parties, industry, academia and the wider public, whose submissions will be a key feature of the SDR.”