Technology | Europe
The AI That Exposed 3 Million Illegal Images in 11 Days — And Why Europe Just Banned It
Grok generated more than 3 million sexualized AI images, including roughly 20,000 depicting apparent minors, in just 11 days. Now Europe has banned the entire category of AI nudifier tools. Here is what happened and what comes next.
The data that triggered Europe's fastest-ever AI regulatory response was extraordinary in its specificity. The Digital Harms Centre, a UK-based nonprofit that monitors online harm, published research in late February 2026 documenting that Grok — the AI assistant embedded in Elon Musk's X platform — had generated more than 3 million sexualized AI images over an eleven-day period. Among those images, approximately 20,000 depicted individuals who appeared to be minors.
The research was immediately flagged to the European Parliament's Internal Market Committee, which was already working on a broad 'Digital Omnibus' package of AI regulation updates. Within weeks, the committee had integrated a specific hard prohibition on AI nudifier systems — applications that generate non-consensual intimate or sexualized images of real individuals — into the legislative package, which passed with 101 votes in favour, 9 against, and 8 abstentions on March 18, 2026.
The speed of the response was unprecedented in EU legislative history: digital regulation normally takes two to four years from policy proposal to plenary vote. This took weeks.
Irish MEP Michael McNamara, one of the co-rapporteurs who shepherded the legislation, explained the urgency: 'These tools are not hypothetical future risks. They are active instruments of abuse. They are being used today to victimize real people — disproportionately women, disproportionately young people — who have no recourse under existing law because creating an image is not the same as taking a photograph under most legal systems.'
The prohibition covers any AI system that generates non-consensual intimate or sexualized images of real individuals. Systems with demonstrably effective technical safeguards that prevent this category of generation are exempted — but the burden of demonstrating effectiveness lies with the developer, not with regulators.
X responded to the legislation by announcing enhanced content filters. Critics of those filters — including the Digital Harms Centre — note that similar announcements have followed previous content scandals, and that the engagement-driven incentive structure of such platforms makes genuine compliance difficult to sustain.