How European Disinformation Agencies Are Failing Against AI-Generated Content
EU disinformation monitoring agencies are struggling to keep pace with AI-generated influence operations. Here is the specific technology gap that's making European fact-checkers obsolete.
The European External Action Service's East StratCom Task Force, which was established in 2015 to identify and counter Russian disinformation directed at European audiences, operates with a staff of approximately 50 analysts covering disinformation across 24 EU languages plus several non-EU target languages. It publishes regular Disinformation Review reports and works with national fact-checking organizations across member states.
In March 2026, this apparatus confronts disinformation produced at a speed and scale that its designers could not have anticipated when the team was established. The specific challenge is not that the disinformation is harder to identify as false — the factual content is often straightforwardly incorrect in ways that well-resourced analysts can verify. The challenge is that the production and distribution of disinformation using AI tools now happen faster than any human-staffed organization can respond.
A language model capable of generating persuasive political content in 24 languages simultaneously, with domain-specific knowledge of each country's political culture, can produce a week's worth of coordinated disinformation content in minutes and distribute it through a network of automated social media accounts before any detection system can flag the first piece. By the time the EEAS publishes its Disinformation Review, the content has already been amplified, shared, and partially embedded in public discourse.
The technical solution to this problem requires AI detection of AI-generated content — essentially a counter-AI capability that can identify machine-generated text with sufficient reliability and speed to support timely response. The current state of AI detection technology is genuinely limited: the best available detectors can identify AI-generated text with approximately 75-80 percent accuracy in controlled conditions, but perform significantly worse on naturally varied and adversarially optimized content.
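A back-of-envelope calculation shows why 75-80 percent accuracy falls short at platform scale. The figures below are illustrative assumptions, not measurements from any deployed detector: if AI-generated posts are a small fraction of the overall stream, even a detector that is right most of the time will bury analysts in false positives.

```python
# Illustrative sketch: precision of an AI-text detector at scale.
# All numbers are assumptions for the sake of the example, not
# measured figures from any real system.

def flag_precision(sensitivity, specificity, base_rate):
    """Share of flagged posts that are actually AI-generated."""
    true_pos = sensitivity * base_rate            # AI posts correctly flagged
    false_pos = (1 - specificity) * (1 - base_rate)  # human posts wrongly flagged
    return true_pos / (true_pos + false_pos)

# Assume the detector catches 78% of AI text, correctly passes 78%
# of human text, and that 1% of monitored posts are AI-generated.
p = flag_precision(sensitivity=0.78, specificity=0.78, base_rate=0.01)
print(f"{p:.1%} of flagged posts would actually be AI-generated")
# → 3.5% of flagged posts would actually be AI-generated
```

Under these assumed numbers, roughly 97 of every 100 flags would be human-written posts, which is why headline accuracy figures overstate how useful current detectors are for triaging a high-volume monitoring queue.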
The French local elections in June 2026 are the most immediate high-stakes test of whether EU and member-state detection and response capabilities have improved enough to matter.