The pitch: Kyle’s message has been that the U.K. is “open for investment” and ready to help western democracies stay ahead in the AI race. “We’ve got a great story on safety and exploiting the opportunities of AI,” he told reporters on Monday. And he said he isn’t looking to upset the status quo when it comes to drafting an AI bill.
Huddle around: “What I’m not going to be doing is disrupting the regulatory or the voluntary settlement [reached at Bletchley Park],” Kyle said. “But I do want to put a bit more strength into the way that it’s underpinned. That’s in our manifesto, and we’re just looking at the right legislative landing zone and the way that we consult leading up to it.”
On the fence: Asked whether the U.K. would sign the summit’s declaration today, after POLITICO scooped that it was wavering amid a lack of U.S. support, Kyle didn’t commit. A government source struck a hawkish tone last night, telling the Guardian the declaration had to be “squarely in British interests”.
Not just Britain: The French will not mind annoying the U.S. — but the absence of the country that started this whole summit process from the list of signatories will look odd. Another French ally and EU member state is also on the fence and looking for safety in numbers, according to an official. Nonetheless, dozens of countries are expected to sign, including much of the “global south” and the EU.
China’s win? Anne Bouverot, Macron’s envoy for the summit, opened proceedings yesterday by emphasizing the need to ensure that AI leads to “shared progress.” By not signing the declaration, the U.S. risks allowing China to present itself as a more reliable multilateral partner — though Beijing, too, had red lines over the draft. Its support for open-source AI, however, aligns with co-hosts France and India.
Bye to Bletchley: The draft declaration has also disappointed those concerned about AI risks. Max Tegmark, president of the Future of Life Institute, said it “ignores the science” and the legacy of the U.K.’s Bletchley summit, and urged countries not to sign. Gaia Marcus, director of the Ada Lovelace Institute, said the leaked draft “fails to build on the mission of making AI safe and trustworthy, and the safety commitments of previous Summits.”