LEGAL AI FAILURE

Lawyer Ordered to Pay $2,500 After AI-Written Brief Contains 21 Fabricated Quotations

The Fifth Circuit says the flood of AI hallucinations in legal filings "shows no sign of abating" as courts across the country run out of patience with lawyers who trust chatbots to do their homework.

February 20, 2026

The Fifth Circuit Has Had Enough

It's been nearly three years since the first lawyer made international headlines for submitting ChatGPT-fabricated case citations to a federal court. You'd think the legal profession would've gotten the message by now. Yeah, you'd be wrong.

On February 18, 2026, a three-judge panel of the U.S. Court of Appeals for the Fifth Circuit ordered attorney Heather Hersh to pay $2,500 in sanctions after concluding she used artificial intelligence to draft much of an appellate brief and never bothered to check whether the output was accurate. The court identified 21 instances of fabricated quotations or serious misrepresentations of law or fact in her filing. Let that sink in. Twenty-one. In a single brief.

The case, Fletcher v. Experian Information Solutions, stemmed from a Fair Credit Reporting Act lawsuit. Hersh, affiliated with FCRA Attorneys and the firm formerly known as Jaffer & Associates, submitted the brief on behalf of Robert Fletcher, who sued Experian Information Solutions Inc. and Bridgecrest Credit Company LLC over identity theft claims. And here's the thing: the brief she filed was riddled with quotations that didn't exist in the cases she cited, legal principles that were flat-out misstated, and references to authority that simply weren't real.


When Confronted, She Made It Worse

Look, the hallucinated content itself is bad enough. But what really makes this case stand out is what happened when the court started asking questions. Judge Jennifer Walker Elrod described Hersh's response to the court's show-cause order as "disappointing." And honestly, that's judicial understatement at its absolute finest.

Here's what happened: Hersh initially tried to deflect blame. She suggested she'd relied on publicly available versions of cases and pointed to errors allegedly originating from major legal databases. The court found those explanations "not credible." She only admitted to using AI after being directly asked, and the panel made clear it would've imposed a lesser penalty if she'd accepted responsibility sooner and been more forthcoming. In other words, she had a chance to come clean. She didn't take it.

When confronted with her ethical lapse, Hersh "misled, evaded, and violated her duties as an officer of this court," Judge Jennifer Walker Elrod wrote.

The pattern is becoming disturbingly familiar, and I'm getting tired of writing about it. A lawyer submits AI-generated work without checking it. The court discovers obvious problems. The lawyer blames the technology, blames a database, blames anything other than their own decision not to read the document they signed and filed. And the court, having seen this exact playbook a dozen times before, is less and less inclined to show mercy. Can you blame them?

Three Years of Warnings, and the Problem Keeps Growing

Here's where it gets really alarming. The Fifth Circuit panel didn't limit its commentary to the facts of this one case. The opinion includes a broader observation that should concern every legal professional in the country: the continuing appearance of AI-driven mistakes in litigation "shows no sign of abating." The court cited a database maintained by researcher Damien Charlotin at HEC Paris that tracks AI hallucinations in court filings, which listed 239 such cases at the time of the ruling. Two hundred and thirty-nine.

And even that number understates the scale of the problem. The same database and other tracking efforts have documented an estimated 712 legal decisions worldwide that address hallucinated content, with roughly 90% of those decisions issued in 2025 alone. These aren't hypothetical risks. These aren't theoretical concerns. This is an ongoing, accelerating crisis in the administration of justice, and it's getting worse by the month.

The ruling also makes clear that claiming ignorance about AI risks isn't going to fly anymore. We're nearly three years into high-profile incidents, sanctions, and public embarrassments. Every practicing attorney should know by now that generative AI can and does fabricate case law, quotations, and legal principles. The Fifth Circuit's message is blunt: if you use these tools and don't verify the output, you own the consequences. Period.

A Growing Hall of Shame: Lawyers Caught Filing AI Fabrications

The Hersh case is just the latest entry in what's become an embarrassingly long list of attorneys sanctioned for trusting AI to do their legal research. We've been tracking these cases since the beginning, and let me tell you, the pattern is maddening. Here's a look at the greatest hits.

Steven Schwartz, 2023: The One That Started It All

New York attorney Steven Schwartz of Levidow, Levidow & Oberman kicked off this whole mess by submitting a brief in Mata v. Avianca containing at least six completely fabricated case citations generated by ChatGPT, including fake cases like "Varghese v. China Southern Airlines" and "Martinez v. Delta Airlines." Schwartz's defense? He claimed he was "unaware of the possibility that its content could be false." Incredible. Judge Kevin Castel of the Southern District of New York ordered $5,000 in sanctions, noting the lawyers had exhibited "bad faith" by making false and misleading statements about the brief.

MyPillow Attorneys, July 2025: $3,000 Each

You can't make this stuff up. Two attorneys representing MyPillow CEO Mike Lindell in a Colorado defamation case were ordered to pay $3,000 each after submitting a filing containing nearly 30 defective citations, including nonexistent cases and misquotations of case law. Judge Nina Y. Wang of the U.S. District Court in Denver imposed the sanctions. One of the attorneys, Christopher Kachouroff, initially claimed he'd written the briefing himself and then "ran it through AI." He eventually admitted that neither he nor his co-counsel had verified the AI-generated version. Just ran it through the chatbot and hit submit. That's it. That was the process.

Amir Mostafavi, September 2025: $10,000 in California

This one's truly special. The 2nd District Court of Appeal in California fined Los Angeles attorney Amir Mostafavi $10,000 after finding that 21 of 23 case citations in his opening brief were fabricated by ChatGPT. Think about that ratio. Twenty-one out of twenty-three. Mostafavi told the court he'd drafted the appeal on his own but then ran it through ChatGPT "in hopes of improving the writing." He admitted he didn't review the AI-generated output before filing and said he was unaware the tool might insert case citations or fabricate material. The sanction was reportedly the largest of its kind in California at the time. And honestly? It still feels light.

Steven Feldman, February 2026: Default Judgment

This is the one that should scare every lawyer in America. In perhaps the most severe consequence yet, U.S. District Judge Katherine Polk Failla entered default judgment against Affable Avenue LLC in Flycatcher Corp. v. Affable Avenue on February 5, 2026. Attorney Steven Feldman repeatedly filed documents containing hallucinated case citations. When the court ordered him to explain, he used AI to draft his response to the show-cause order, which itself contained another fake citation. You can't be serious. He then spontaneously submitted a proposed reply brief with yet another nonexistent case. The client lost the case entirely, not because the merits were weak, but because its lawyer literally could not stop filing fabricated law. The man got caught, was told to explain himself, and used the same broken tool to do it. It's almost impressive in its absurdity.

The Same Story on Repeat: Why Lawyers Keep Getting Caught

Every one of these cases follows a nearly identical script, beat for beat. A lawyer uses a generative AI tool, usually ChatGPT, to research or draft a legal filing. The AI produces text that looks professional, cites cases that sound real, and quotes legal principles that feel right. The lawyer, whether through laziness, time pressure, or misplaced trust in the technology, submits the filing without verifying a single citation. A judge, opposing counsel, or a law clerk notices that the cases don't exist. Sanctions follow. Rinse and repeat.

Here's the thing that makes this cycle so frustrating: the fix isn't complicated. It doesn't require new technology or new regulations. It requires lawyers to do what they were already supposed to be doing. Read what you file. Check the citations. Verify the quotations. These aren't exotic professional obligations. They're the absolute bare minimum of legal practice. We're talking about lawyers, people whose entire job is reading documents carefully.
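To make that concrete, here is roughly what the bare-minimum check could look like if you wanted to automate the checklist part of it. This is a minimal, hypothetical Python sketch: it pulls reporter-style case citations out of a draft with a deliberately rough regular expression and prints them as a to-verify list. The sample text, the NAME and CITATION patterns, and the output format are all illustrative assumptions, not a production citation parser; real Bluebook citation grammar is far messier, and nothing here replaces actually looking each case up.

```python
import re

# Hypothetical draft text; in practice the brief would be read from a file.
draft = """
As held in Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019),
and reaffirmed in Brown v. Board of Education, 347 U.S. 483 (1954), ...
"""

# A party name: a capitalized word, optionally followed by more capitalized
# words or common lowercase connectors. Deliberately rough.
NAME = r"[A-Z][\w.'\-]*(?:\s+(?:[A-Z][\w.'\-]*|of|the|for))*"

# Matches "Name v. Name, Vol Reporter Page (Court Year)" style citations.
CITATION = re.compile(
    rf"({NAME})\s+v\.\s+({NAME}),\s+"   # party names
    r"(\d+\s+[A-Za-z.0-9]+\s+\d+)\s+"   # volume, reporter, first page
    r"\(([^)]+)\)"                      # court and year
)

# Emit a manual-verification checklist. The script only finds citations;
# a human still has to confirm each one exists in Westlaw, Lexis, or the
# court's own records before anything gets filed.
for m in CITATION.finditer(draft):
    plaintiff, defendant, reporter, court = m.groups()
    print(f"[ ] Verify: {plaintiff} v. {defendant}, {reporter} ({court})")
```

Run against the sample draft, this prints two checklist lines, one real case and one fabricated one, and the script has no way to tell them apart. That's the whole point: extraction is easy, verification is the human's job.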

And yet here we are. It's February 2026. We've had three full years of cautionary tales, bar association warnings, continuing legal education seminars, and viral news stories about lawyers getting sanctioned for AI hallucinations. It keeps happening. The Hersh case proves that awareness campaigns and public shaming haven't been sufficient deterrents. Not even close.

A court in one recent AI hallucination case went so far as to declare that monetary sanctions are proving ineffective at deterring false, AI-generated statements of law in legal pleadings. Sit with that for a second. If fines don't work, the profession is going to have to find something that does, because judges are running out of patience. And the Feldman case shows they're willing to let clients lose their entire case over it. That's a real person who lost a real case because their lawyer treated ChatGPT like a paralegal.

The Bigger Picture: AI Confidence Meets Human Negligence

Let me be clear about something: these cases aren't really about AI being bad at legal research, though it certainly is. They're about the collision between two forces: AI systems that generate text with absolute confidence regardless of accuracy, and human professionals who mistake that confidence for competence. It's a deadly combination.

ChatGPT doesn't hedge when it invents a case citation. It doesn't flag the fabrication with a disclaimer or a footnote. It presents "Varghese v. China Southern Airlines" with the same syntactic authority as "Brown v. Board of Education." And when a lawyer reads that output under deadline pressure, the smooth, professional formatting creates a false sense of reliability. It looks real. It sounds real. It isn't.

This is the confidence-versus-accuracy problem that runs through every AI failure we document on this site. Whether it's fake citations corrupting academic research, medical misinformation endangering patients, or fabricated case law wasting court resources, the root cause is always the same: AI systems that'll say anything with perfect confidence, and humans who don't verify what they're told.

The legal profession is supposed to be adversarial by design. Opposing counsel checks your work. Judges scrutinize your arguments. The entire system is built around the assumption that people will try to get away with things and that other people will catch them. What nobody anticipated, what nobody even imagined, is that the thing people would try to get away with is outsourcing their thinking to a machine that can't think.

The Uncomfortable Truth

Twenty-one fabricated quotations in a single brief. Nearly 30 defective citations in a MyPillow filing. Twenty-one of 23 citations faked in a California appeal. A client that lost its entire case because its lawyer submitted AI-generated filings four separate times without checking any of them. These aren't outliers. They're not flukes. They're the natural, predictable result of a profession that adopted a powerful, unreliable tool and skipped the part where you learn how it actually works. And until something fundamentally changes, we'll be writing about the next one in a few weeks.

