Technology | Europe
The AI That Can Now Write Persuasive Fake News in Any Language Simultaneously
New AI models can generate convincing disinformation simultaneously in 24 languages. Here is how state actors are already using this capability and what Europe is doing about it.
The European External Action Service (EEAS), the EU's diplomatic arm, published its quarterly disinformation monitoring report in March 2026 with language that was, by the standards of diplomatic communication, unusually direct: "We are observing the use of large language models by state-affiliated actors to generate and distribute coordinated disinformation campaigns in multiple languages simultaneously, at a scale and speed that human-produced content cannot match."
The specific capability being described is straightforward in principle and alarming in practice. A large language model trained on news content from multiple countries can generate articles that look and read like genuine news reporting in any of its training languages — including language-specific journalistic conventions, local cultural references, and the particular register of political commentary that differs significantly between, say, Polish and Italian political culture.
For state actors seeking to influence European public opinion — and the EEAS report names Russia as the primary actor in the current operational environment, with Chinese influence operations described as distinct in method and more limited in European targeting — this capability represents a fundamental change from previous disinformation operations. The previous model required either native-language operators to produce content, or machine translation that retained telltale signs of non-native authorship. The new model eliminates both constraints.
The practical implications for the 2026 French local elections, the highest-stakes European political event of the coming months, are significant enough that French intelligence services have been specifically briefed on this capability and have begun coordinating with the social media platforms that would serve as distribution channels. Whether that coordination will be effective is an open question. AI-generated disinformation can be produced and distributed far faster than human fact-checkers and platform moderators can identify and remove it, and that gap is substantial and growing.