AI IN THE COURTROOM

Fifth Circuit Fines Lawyer $2,500 for AI-Hallucinated Citations in Legal Brief

Attorney filed reply brief with 16 fabricated quotations and 5 serious misrepresentations of law, then gave evasive answers when the court demanded an explanation.

February 21, 2026

A Federal Appeals Court Says Ignorance About AI Is No Longer an Excuse

On February 18, 2026, the U.S. Court of Appeals for the Fifth Circuit published an opinion that should serve as a warning to every attorney who has ever copied and pasted output from ChatGPT into a legal document without reading it first. Attorney Heather Hersh of Jaffer & Associates PLLC was ordered to pay $2,500 in sanctions for filing an appellate brief riddled with AI-generated fabrications, and the court made it clear that the era of blaming the machine is over.

The case, Fletcher v. Experian Information Solutions, Inc. (No. 25-20086), involved a client named Robert Fletcher who sued Experian Information Solutions Inc. and Bridgecrest Credit Company LLC over identity theft claims under the Fair Credit Reporting Act. That underlying dispute, however, is not what made this case notable. What made it notable is that the reply brief Hersh filed in support of her client was, in the court's assessment, substantially or entirely composed by artificial intelligence, and she never bothered to verify whether any of it was true.

Chief Judge Jennifer Walker Elrod, joined by Judges Jerry Smith and Cory Wilson, did not mince words. The three-judge panel issued a show-cause order after identifying glaring problems in the brief, and what they found was worse than sloppy work. It was fiction presented as law.

21 Material Errors: 16 Fabricated Quotations and 5 Misrepresentations


The Fifth Circuit's show-cause order identified 21 distinct material problems in Hersh's reply brief. Sixteen of those were fabricated quotations, meaning the brief cited language that was attributed to real court opinions but simply did not exist in those opinions. The remaining five were serious misrepresentations of law or fact, instances where the brief described legal holdings or factual conclusions that bore no relationship to what the cited cases actually said.

Several of the errors involved repeated, erroneous citations to the same two cases. This pattern is a hallmark of AI hallucination: the model latches onto case names that sound relevant, then invents quotations and holdings to fill in the gaps. The result is a brief that looks professionally formatted and reads with apparent authority, but is built on a foundation of fabricated legal scholarship that crumbles the moment anyone actually checks the citations.

For anyone who has watched this problem unfold across the legal profession since 2023, the pattern is painfully familiar. But the Fifth Circuit made clear that familiarity with the problem is precisely why the excuses no longer work.

When the Court Asked What Happened, the Answers Made Things Worse

If the fabricated citations were bad, Hersh's response to the court's inquiry made everything worse. When initially confronted with the problems in her brief, Hersh claimed she "relied on publicly available versions of the cases, which she believed were accurate." In other words, she tried to blame her legal research sources rather than acknowledge what had actually happened.

It was only under further questioning that Hersh admitted using generative AI to "help organize and structure" her argument. The court found this progression of explanations to be misleading and evasive. Judge Elrod characterized the response to the show-cause order as "disappointing," which, in the measured language of federal appellate opinions, is about as close to expressing open frustration as judges typically get.

The Court's Warning to Every Lawyer

The Fifth Circuit panel stated that if claiming ignorance of the risks of using generative AI to draft a brief without checking the citations was ever an excuse, it "certainly" is not anymore.

The court also noted that it likely would have imposed a lesser penalty had Hersh "accepted responsibility and been more forthcoming" from the start. The evasiveness was treated as an aggravating factor.

This point extends beyond this single case. The ruling signals that using AI irresponsibly to generate legal filings is sanctionable, and that providing evasive or misleading explanations when confronted will increase the penalty. Accepting responsibility may result in a lighter sanction, but it does not eliminate consequences.

239 Cases and Counting: AI Hallucinations in the Legal System Show No Sign of Stopping

The Fifth Circuit's opinion referenced a database tracking AI hallucination incidents in U.S. litigation. As of the ruling date, that database had documented 239 cases where AI-generated fabrications had contaminated court filings. The court observed that the continuing appearance of AI-driven mistakes in litigation "shows no sign of abating," despite nearly three years of high-profile warnings and incidents stretching back to the now-infamous Mata v. Avianca case in 2023.

And the Fifth Circuit case is far from an isolated incident in early 2026 alone. Just days before the Hersh ruling, Senior U.S. District Judge Julie Robinson in Kansas imposed a combined $12,000 in sanctions on four attorneys over hallucinated materials in a patent dispute. The lead attorney was fined $5,000 for using ChatGPT to find case law without verifying the output, while three co-counsel received fines ranging from $1,000 to $3,000 for signing off on content they had not checked.

In Wisconsin, Kenosha County District Attorney Xavier Solis was sanctioned in a February 6 hearing for undisclosed use of AI in a court filing that contained false legal citations. And the Am Law 100 firm Gordon Rees has now been accused twice of filing briefs with AI-hallucinated citations, most recently in a case called Huynh v. Redis Labs, where opposing counsel identified misrepresented case citations and fabricated quotations.

The problem is systemic. Lawyers across the country, from solo practitioners to partners at major firms, continue to feed their legal arguments into general-purpose AI chatbots and submit whatever comes out without reading the citations, let alone checking whether the cases exist.

What the Fifth Circuit Recommends Lawyers Actually Do Instead

One of the more unusual aspects of the Fifth Circuit's opinion is that it did not simply impose sanctions and move on. The panel took the time to offer three practical recommendations for attorneys who want to use AI without ending up in the same position as Hersh. These recommendations effectively function as a roadmap for responsible AI use in legal practice.

Recommendation 1: Use the right tools. The court explicitly warned against using "off-the-shelf, general purpose large language models such as ChatGPT" for legal research. It pointed to products like Westlaw's AI-powered research tools, which limit the model's focus to its database of actual case authorities and provide hyperlinks to every case cited, making verification straightforward.
Recommendation 2: Watch for red flags. When citations repeat or seem "unusually helpful," that is a sign something may be fabricated. The court urged lawyers to verify suspicious citations immediately rather than waiting for a final review that may never happen. (The first pass of that check can even be automated; see the sketch after these recommendations.)
Recommendation 3: Take responsibility. When something goes wrong, own it. The court made clear that Hersh's evasiveness directly increased her sanction. Transparency and accountability are not a get-out-of-jail-free card, but they are the only approach that might result in a lighter penalty.
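To make Recommendation 2 concrete, here is a minimal sketch, not anything the court prescribed: it assumes the draft lives in a plain-text file (the reply_brief.txt name is hypothetical) and uses a deliberately simplified pattern for reporter-style citations, so plenty of real Bluebook forms will slip past it. All it does is surface repeated citations, the exact red flag the panel described, for a human to check first.

```python
import re
from collections import Counter

# Matches common reporter-style citations such as "598 F.3d 1240" or
# "141 S. Ct. 792". Deliberately simplified: real Bluebook citation
# grammar is far richer, so this will miss many legitimate forms.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.\s?Supp\.\s?(?:2d|3d)?|F\.(?:2d|3d|4th)?)\s+\d{1,5}\b"
)

def flag_repeated_citations(brief_text: str, threshold: int = 2) -> list[tuple[str, int]]:
    """Return citations appearing `threshold` or more times, most frequent first.

    Repetition proves nothing by itself, but per the court's warning it is
    the red flag worth hand-checking before anything else.
    """
    counts = Counter(m.group(0) for m in CITATION_RE.finditer(brief_text))
    return [(cite, n) for cite, n in counts.most_common() if n >= threshold]

if __name__ == "__main__":
    # "reply_brief.txt" is a hypothetical filename for the draft being checked.
    draft = open("reply_brief.txt", encoding="utf-8").read()
    for cite, n in flag_repeated_citations(draft):
        print(f"VERIFY FIRST: {cite} appears {n} times")
```

Note what such a script cannot do: it cannot tell a real case from a fabricated one. It only ranks where human verification should start.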

These recommendations amount to a simple message: AI is not banned from legal work, but the responsibility for accuracy rests entirely on the human being who signs the filing. The machine does not have a law license. The lawyer does. And it is the lawyer who faces consequences when the machine hallucinates.

Why Lawyers Keep Making the Same Mistake Three Years After Mata v. Avianca

The most baffling aspect of this entire phenomenon is not that AI hallucinates. Large language models generate plausible-sounding text based on statistical patterns, and they have no mechanism for verifying whether the text they produce corresponds to reality. That is a well-documented limitation. The baffling part is that lawyers, who are trained to verify their sources, who are ethically obligated to ensure the accuracy of their court filings, who have watched dozens of their colleagues get sanctioned for this exact mistake, keep doing it anyway.

Part of the explanation is economic and psychological. Legal research through traditional databases is time-consuming, and ChatGPT produces a fully formatted brief with citations in minutes. The temptation to trust the output is compounded by the fact that fabricated citations look identical to real ones: proper format, plausible party names, and holdings that conveniently support whatever argument the user was making. The model does not flag its inventions. It presents fabricated case law with the same authoritative tone it uses for everything else, and it takes deliberate effort to be skeptical of output that looks this polished.

And part of it is that the penalties, so far, have been relatively modest. A $2,500 fine is not going to bankrupt anyone. Neither is a $5,000 fine or even a $12,000 fine split among multiple attorneys. The question is whether these sanctions will escalate as courts grow increasingly frustrated with a problem that, as the Fifth Circuit noted, shows no sign of slowing down.

What This Means for Everyone Who Uses AI, Not Just Lawyers

The legal profession is a useful canary in the coal mine for the broader challenge of AI reliability. Lawyers file documents that have real consequences for real people. When a brief contains fabricated citations, it does not just embarrass the attorney. It wastes judicial resources. It potentially harms the opposing party, who has to spend time and money debunking phantom authorities. And it undermines public trust in a system that depends on the integrity of the advocates who participate in it.

But the same fundamental problem exists everywhere AI is being deployed. Medical professionals relying on AI-generated summaries that cite studies that do not exist. Journalists publishing articles with AI-fabricated quotes. Students submitting papers with hallucinated references. Business analysts making recommendations based on data that a model invented. The format changes, but the underlying failure is identical: humans trusting AI output without verification, because the output looks too good to question.

The Fifth Circuit has drawn a clear line for the legal profession. If you use AI, you are responsible for what it produces. Ignorance is not a defense. Evasion makes things worse. And the penalties will continue to come.

For the rest of us, the lesson is the same, even if no court has formalized it yet. Every AI output is a draft. Every citation, statistic, and factual claim it generates requires human verification before it can be treated as reliable.
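That verification can be given a programmatic head start. As a hedged sketch: the Free Law Project's CourtListener service exposes a citation-lookup API, and the code below assumes an endpoint path, request field, and response shape that should be confirmed against the current API documentation before use. The brief filename is again hypothetical.

```python
import requests  # third-party: pip install requests

# Assumed endpoint: CourtListener's citation-lookup API (Free Law Project).
# Verify the path, request fields, and response shape against the current
# docs before relying on this; they are assumptions here, not guarantees.
LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def check_citations(brief_text: str) -> None:
    """POST the brief text and report citations the database cannot find.

    A miss is not proof of fabrication (coverage gaps exist), and a hit is
    not proof the quoted language is real. The output is a to-read list
    for a human, not a substitute for reading the cited opinions.
    """
    resp = requests.post(LOOKUP_URL, data={"text": brief_text}, timeout=30)
    resp.raise_for_status()
    for result in resp.json():  # assumed: one record per citation found in the text
        if result.get("status") != 200:  # assumed: 200 means a match was found
            print(f"NOT FOUND, verify by hand: {result.get('citation')}")

if __name__ == "__main__":
    # "reply_brief.txt" is a hypothetical filename for the draft being checked.
    check_citations(open("reply_brief.txt", encoding="utf-8").read())
```

Even when every citation resolves, the job is not done: a real case can still be paired with an invented quotation, as the sixteen fabricated quotations in Hersh's brief show. A lookup shortens the to-read list; it does not replace reading.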

Frequently Asked Questions About AI Hallucinated Legal Citations

What is an AI hallucinated citation?

An AI hallucinated citation is a legal reference generated by a large language model, such as ChatGPT, that appears to cite a real court opinion but does not correspond to any actual case. The AI fabricates the case name, citation format, and quoted language based on statistical patterns rather than a verified database of authorities. These citations often look properly formatted and contain plausible legal reasoning, making them difficult to identify without verification through an official legal research platform such as Westlaw or LexisNexis.

Can lawyers use ChatGPT in court filings?

Courts have not banned the use of AI in legal work. The Fifth Circuit's opinion in Fletcher v. Experian noted that AI tools can assist attorneys when used responsibly, and recommended legal-specific AI products that draw from verified case databases rather than general-purpose chatbots. However, every citation and legal assertion in a court filing remains the responsibility of the attorney who signs it. Several federal courts and state bar associations now require attorneys to disclose whether AI was used in the preparation of filings.

What happens if a lawyer files fabricated citations?

Lawyers who file briefs containing AI-hallucinated citations face sanctions under Federal Rule of Appellate Procedure 38 or Federal Rule of Civil Procedure 11, the latter of which requires attorneys to certify that their legal contentions are warranted by existing law. Penalties have ranged from monetary fines, such as the $2,500 imposed in the Fletcher case, to revocation of pro hac vice admission and mandatory self-reporting to state bar disciplinary authorities. In the Kenosha County case involving DA Xavier Solis, a prosecutor's use of AI-generated false citations contributed to the dismissal of 74 criminal charges.

How many lawyers have been sanctioned for AI hallucinations in court filings?

As of February 2026, a database maintained by legal researcher Damien Charlotin has documented 239 cases involving AI-generated fabrications in court filings, a number the Fifth Circuit noted "shows no sign of abating." The documented cases span federal and state courts across the United States and include solo practitioners, mid-size firms, and Am Law 100 firms. The actual number is likely higher, as many incidents may go undetected when opposing counsel or judges do not independently verify the cited authorities.
