Technology | Europe
AI Is Already Writing Laws in Europe — And Nobody Voted for It
AI systems are being used to draft legislation in several EU member states. Here is what this means for democratic accountability, who is responsible when AI-drafted laws have errors, and why it matters.
In a committee room of the Bundestag, a legislative assistant using an AI tool drafts a preliminary version of a proposed amendment to energy efficiency regulation. The draft is reviewed, modified, and debated by elected representatives before it becomes law. In Brussels, the European Commission's legal service uses AI-powered document analysis to identify inconsistencies between draft legislation and existing EU law. And in Estonia, widely regarded as Europe's leader in digital governance, the parliament uses AI tools to model the likely impacts of proposed legislation before it is voted on.
None of this is secret. None of it is, in isolation, alarming. AI assistance in complex technical legislative work is a logical extension of the research, drafting, and analytical tools that legislative bodies have always used. The concern, articulated with increasing urgency by legal scholars, democratic theorists, and civil society organizations, is about scale, opacity, and accountability.
As AI systems become more capable and their use in legislative drafting becomes more extensive, several questions that were once theoretical become practical: Who is responsible when an AI-drafted provision contains an error that causes real harm? How do legislators maintain genuine understanding of legislation they are asked to vote on if the drafting process is increasingly automated? And, the deepest question: when AI systems trained on previous legislation generate new legislation that is conceptually consistent with their training data, does that produce incremental refinement, or a path dependence that forecloses genuinely innovative policy approaches?
The EU's AI Act, which entered enforcement in 2026, does not directly address the use of AI in legislative drafting — an oversight that AI governance scholars have identified as significant. The regulation covers AI systems used in employment, education, law enforcement, and several other high-risk categories, but legislative drafting does not appear in the list of regulated applications.
This gap reflects a broader pattern in technology governance: regulation typically addresses uses of technology that are already causing visible harm, rather than uses whose harms emerge later, once the technology is deeply embedded in critical processes.