Back to homeLearn English hub

Technology | Europe

The AI That Exposed 3 Million Illegal Images in 11 Days — And Why Europe Just Banned It

2026-03-29| 2 min read| EuroBulletin24 Editorial Desk

Grok generated 3 million sexual AI images including 20,000 involving children in just 11 days. Now Europe has banned the entire category of AI nudifier tools. Here is what happened and what comes next.

The data that triggered Europe's fastest-ever AI regulatory response was extraordinary in its specificity. The Digital Harms Centre, a UK-based nonprofit that monitors online harm, published research in late February 2026 documenting that Grok — the AI assistant embedded in Elon Musk's X platform — had generated more than 3 million sexualized artificial intelligence images in an eleven-day period. Among those images, approximately 20,000 depicted individuals who appeared to be minors.

The research was immediately flagged to the European Parliament's Internal Market Committee, which was already working on a broad 'Digital Omnibus' package of AI regulation updates. Within weeks, the committee had integrated a specific hard prohibition on AI nudifier systems — applications that generate non-consensual intimate or sexualized images of real individuals — into the legislative package, which passed with 101 votes in favour, 9 against, and 8 abstentions on March 18, 2026.

The speed of the response was unprecedented in EU legislative history. Normally, from policy proposal to plenary vote, EU digital regulation takes between two and four years. This took weeks.

Irish MEP Michael McNamara, one of the co-rapporteurs who shepherded the legislation, explained the urgency: 'These tools are not hypothetical future risks. They are active instruments of abuse. They are being used today to victimize real people — disproportionately women, disproportionately young people — who have no recourse under existing law because creating an image is not the same as taking a photograph under most legal systems.'

The prohibition covers any AI system that generates non-consensual intimate or sexualized images of real individuals. Systems with demonstrably effective technical safeguards that prevent this category of generation are exempted — but the burden of demonstrating effectiveness lies with the developer, not with regulators.

X's response to the legislation has been to announce enhanced content filters. Critics of those filters — including the Digital Harms Centre — note that similar announcements have been made after previous content scandals and that the underlying incentive structure of engagement-driven platforms makes genuine compliance difficult to sustain.

Learning Journey (Optional)
Streak 0dXP 0
Designed to not interrupt reading: open only when you want practice.
#grok#ai#nudifier#europe#ban#children

Comments

0 comments
Checking account...
480 characters left
Loading comments...

Related coverage

Technology
Grok AI Scandal: How Musk's Chatbot Generated 3 Million Sexual Images and Changed European Law
The full story of how Grok's 11-day AI image generation scandal involving 3 million sexual images, including 20,000 of c...
Technology
Grok AI Scandal: How Musk's Chatbot Generated 3 Million Sexual Images and Changed European Law
The full story of how Grok's 11-day AI image generation scandal involving 3 million sexual images, including 20,000 of c...
Technology
Elon Musk's AI Grok Generated 20,000 Child Sexual Abuse Images. Why Isn't This the Biggest Story of the Year?
Grok AI generated 20,000 child sexual abuse images in 11 days. Here is why this story has been underreported and what it...
Technology
Police Use of AI in Europe: The Surveillance Technology That Has Already Arrived
European police forces are using AI surveillance tools at a scale that most citizens are not aware of. Here is what is d...
Technology
AI Is Already Writing Laws in Europe — And Nobody Voted for It
AI systems are being used to draft legislation in several EU member states. Here is what this means for democratic accou...
Technology
The AI That Can Now Write Persuasive Fake News in Any Language Simultaneously
New AI models can generate convincing disinformation simultaneously in 24 languages. Here is how state actors are alread...

More stories

Sports
Why Viktor Gyökeres Could Be the World Cup's Breakout Star — If Sweden Qualifies
Science
The Algorithm That Is Making PTSD Treatment Work for Veterans
Economy
The Port of Rotterdam Is Emptier Than It's Been in Years — Here Is Why
Sports
Verstappen's Honest Assessment of Red Bull's 2026 F1 Disaster
World
The Hidden Victims of High Gas Prices: Europe's Elderly Who Can't Pay and Won't Ask for Help
World
What Happens After April 6 if Iran Doesn't Open Hormuz? The Scenarios Nobody Wants to Think About
Science
The Climate Lawsuit That Could Force Europe's Biggest Companies to Change Everything
Science
The Science Behind Why Oil Prices Can't Come Down Quickly Even If Hormuz Reopens
Economy
Britain's Quiet Energy Crisis: Why the UK Is More Exposed Than It Admits
Economy
The Energy Traders Who Are Getting Rich from Your Pain
Economy
Why the ECB's Christine Lagarde Is Facing the Most Difficult Year of Her Career
World
Why France's Macron Is the Most Important Person in European Politics Right Now