The rulebook will ban some AI applications such as algorithmic social scoring and exploitative practices, and establish strict rules for other AI systems used in situations considered “high-risk,” such as education, work or critical infrastructure management.
The most advanced AI models, labeled “general-purpose,” will have to comply with a specific set of testing and risk-mitigation obligations; compliance will be enforced by a newly created unit within the European Commission: the AI Office.
The law was first conceived in 2021, before the explosion of generative AI tools like OpenAI’s ChatGPT. Its drafting was marred by simmering tensions between European lawmakers and EU governments, including France and Germany, over fears that heavy-handed regulation could stymie Europe’s progress in the global race to become an AI leader. Other points of contention included the possibility of a full-blown ban on facial-recognition technology (which the final text restricts but does not outlaw) and the structure of the AI Office itself.
Thierry Breton, the European Commission’s internal market chief and point man during late-night rounds of three-way negotiations among governments and parliamentarians, said in the wake of the vote that “Europe is now a global standard-setter in trustworthy AI.”
The Council of the EU is expected to formally adopt the text in April, with its prohibitions becoming applicable in late 2024 and the rules on general-purpose AI taking effect in early 2025. The remainder of the law will start applying in 2026.
In the interim, a Breton-sponsored “AI Pact” will help companies brace for compliance via voluntary commitments and pledges.