The aggressively lobbied code of practice was released last week and is the latest step by the European Commission to limit risks posed by AI models like OpenAI’s ChatGPT or X’s Grok. It comes as Grok is under fire for spewing Hitler-praising comments and other harmful responses.
At its core, the EU’s code of practice is an attempt by officials to get AI firms to follow the bloc’s rules without regulators having to launch full-fledged investigations.
It is designed to instruct companies on how to comply with the bloc’s Artificial Intelligence Act, a binding EU law. Companies that decline to sign face closer scrutiny from the Commission when it enforces the AI Act.
But the code has faced months of fierce lobbying from the tech industry.
Kaplan on Friday pointed to a letter signed in early July by more than 40 top European companies, including Bosch and SAP, calling on the Commission to pause implementation of the AI Act.
Meta “shares concerns raised by these businesses that this overreach will throttle the development and deployment of frontier AI models in Europe and stunt European companies looking to build businesses on top of them,” Kaplan said.