The EU rules around nudification tech come after Elon Musk’s AI tool Grok was heavily criticized earlier this year for allowing users to generate images of real people in bikinis or fully nude without their consent.

The chatbot may have generated as many as 3 million non-consensual sexual images and 20,000 child sexual abuse images in the 11 days before changes were made to stop the spread of such photos, according to an estimate by the Center for Countering Digital Hate, a non-profit focused on online safety. X has since stepped in to restrict the feature, but the uproar prompted EU officials to start crafting a full ban on AI nudification apps.

Artificial intelligence systems that can generate sexualized deepfakes of real people would be banned in the EU under the proposals, first seen by POLITICO in March and now under discussion.

Both the European Parliament and the European Council supported a ban on AI systems capable of generating images or videos of an “identifiable natural person’s intimate parts.”

The talks, however unusual for Brussels policymakers, get to a core problem of EU tech law. The European Union has been criticized in the past for being overly prescriptive in its digital regulations. The AI Act reform, known as the AI omnibus, was drafted precisely to simplify the rules governing artificial intelligence models.

Defining intimate parts too precisely could produce arbitrary results, with some equally harmful content banned and other content left untouched, one of the people involved in the negotiations warned. Without a definition, however, it would fall to judges and regulators to decide what counts as intimate, with the answer varying depending on the context, the person said.

Negotiation teams from the European Commission, EU countries and the European Parliament are in the final stretch of reforming the bloc’s AI Act. Crunch-time negotiations are planned for April 28, where the EU could strike a deal on the so-called AI omnibus legislation.
