Lawyers Are Using AI to File Fake Briefs — And Courts Are Sanctioning Them
AI hallucinations in legal briefs have become a serious judicial crisis. Here are the court sanctions being handed out, the cases behind them, and why law schools are scrambling to respond.
The Problem That Embarrassed the Legal Profession
In 2023, two lawyers submitted a brief in a federal case in New York, Mata v. Avianca, containing citations to cases that didn't exist — precedents entirely invented by ChatGPT, cited without the lawyers checking whether the courts and decisions the AI described were real. The sanction from the federal judge was pointed: $5,000 in fines and a formal reprimand. The case received enormous media attention because it was the first major public instance of a failure mode AI companies had been warning about: hallucination, the tendency of large language models to generate confident-sounding false information that is indistinguishable from accurate information.
NPR's April 2026 reporting confirmed that the problem has not resolved — it has expanded: "Early scandals have not slowed lawyers' adoption of AI tools, even as court sanctions over fake legal briefs continue to rise." The rising cadence of sanctions reflects two things at once: the lagging consequences of early adoption without verification protocols, and mounting judicial frustration with practitioners who are either unaware of the warnings or choosing to ignore them.
Carla Wale, director of the Gallagher Law Library at the University of Washington School of Law, is developing optional AI ethics training for law students — one part of an active curriculum-development effort at multiple American law schools, prompted by the sanctions practicing lawyers have received.
The training framework that Wale and similar curriculum developers are building addresses the hallucination risk in legal contexts: the cite-checking requirements that any AI-assisted brief preparation must include, and the verification procedures that separate responsible AI use from the negligence of citing non-existent cases to a federal court.
Why Legal AI Hallucinations Are Particularly Dangerous
AI hallucinations are a known limitation of current large language model architectures, but the stakes vary by use case. In some uses — creative writing, brainstorming, general knowledge queries — a hallucination produces unexpected but low-stakes errors, because accuracy isn't critical to the outcome the user seeks. In legal practice, the situation is categorically different.
Legal arguments depend on precedents whose existence and holdings supply the authority courts rely on when deciding cases. Citing a case that doesn't exist is not merely inaccurate — it is a misrepresentation of authority, with no real record against which the claim can be confirmed. The harm is concrete: a court may decide a matter under the impression that legal authority supports an argument when no such authority exists.
Detection is also more robust in legal contexts than elsewhere: opposing counsel checks citations, clerks verify references, and judges who know their jurisdictions notice cases they don't recognize. The legal system's accountability infrastructure gives AI hallucinations a far higher probability of discovery than general information contexts do — which is why the sanctions being handed out are building a cautionary record.
The technical explanation for why legal citation hallucinations are so frequent: large language models trained on corpora that include legal opinions, law review articles, and legal journalism learn to generate plausible-seeming citations whose format (case name, volume, reporter, court, year) matches real citations closely enough to pass superficial inspection. The hallucinated cases often have the right general topic area, the right approximate time period, and the right court level — making them look like real citations that simply weren't checked.
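To see why superficial inspection fails, consider a minimal sketch (illustrative only, not a real verification tool): a simplified regular expression for the surface pattern of a US case citation accepts a genuine citation and a fabricated one equally well. The second example below is one of the citations reported as invented by ChatGPT in the 2023 episode — formally, nothing distinguishes it from the real thing without a lookup in an actual reporter or legal database.

```python
import re

# Simplified surface pattern of a US case citation:
#   "Name v. Name, volume Reporter page (Court Year)"
# This checks FORMAT only -- it cannot tell a real case from an invented one.
CITATION_RE = re.compile(
    r".+? v\. .+?, "            # party names ("Smith v. Jones,")
    r"\d+ "                     # reporter volume
    r"[A-Za-z0-9. ]+? "         # reporter abbreviation ("U.S.", "F.3d")
    r"\d+ "                     # first page
    r"\([A-Za-z0-9. ]*\d{4}\)$"  # court (optional) and year
)

real = "Brown v. Board of Education, 347 U.S. 483 (1954)"
fake = "Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019)"

for cite in (real, fake):
    verdict = "matches format" if CITATION_RE.match(cite) else "malformed"
    print(f"{cite} -> {verdict}")
# Both match the format. Only a database lookup reveals that the
# second case does not exist.
```

The point of the sketch is negative: format checking is exactly the level of scrutiny a hallucinated citation is built to survive, which is why cite-checking must mean retrieving the actual opinion.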
The Courts' Response and What Law Schools Are Doing
Federal courts have been developing AI disclosure requirements whose form varies by jurisdiction but whose shared purpose is accountability for AI-assisted legal work. Several district courts now require declarations stating whether AI tools were used in brief preparation, which tools were used, and what verification procedures were employed to confirm the accuracy of AI-generated content.
The American Bar Association is working toward formal ethics guidance — a development process that combines technical understanding of AI capabilities with the ethical framework of legal professional responsibility. Once formally adopted, the guidance would establish professional standards that state bars can enforce through their disciplinary processes.
For law schools, the challenge is developing curricula that reflect both the genuine utility of AI tools in legal research and drafting — the efficiency benefits are real and substantial — and the risks, whose management requires exactly the verification skills that AI tools make faster to apply but do not eliminate. The 'trust but verify' standard that Wale's training program presumably embodies is the profession's adaptation to tools that have become embedded in legal practice whether institutional guidance exists or not.