When Lawyers Let AI Do Their Writing: “Hallucinations” Lead to Real Consequences

Let’s talk about what happens when legal minds decide to let artificial intelligence write their briefs without double-checking the work. Spoiler alert: it doesn’t end well.

We’re witnessing something almost surreal in courtrooms across the United States. Lawyers—people whose entire profession revolves around precision, verification, and getting the facts right—are getting caught red-handed submitting legal briefs filled with completely made-up case citations. The culprit? AI tools that “hallucinate” fake legal precedents convincing enough to pass for the real thing until someone tries to verify them.

The Numbers Don’t Lie (Unlike the AI)

Here’s a sobering statistic: courts have already disciplined lawyers in at least seven cases over AI-generated fiction in the past two years. And that’s just what we know about—most of these embarrassing moments probably never make it into published court decisions.

The problem has become so severe that one of the country’s largest personal injury firms recently sent an urgent email to its 1,000+ lawyers, essentially cautioning that any improper use of AI could result in termination.

When Big Law Firms Make Big Mistakes

In May 2025, two prestigious law firms learned an expensive lesson about AI oversight. As detailed in this LawSites report, attorneys used multiple AI tools, including CoCounsel, Westlaw Precision, and Google Gemini, to assist in drafting a brief. The result? Nine out of 27 citations were wrong, including two cases that simply didn’t exist.

The special master overseeing the case wasn’t having it. He found their conduct to be “tantamount to bad faith” and imposed sanctions totaling $31,100. Ouch. That’s what happens when you pass around AI-generated content without mentioning its origins or bothering to verify whether those impressive-sounding cases exist.

The Case That Started It All

We can trace this mess back to Steven Schwartz, a New York lawyer who, in 2023, became the poster child for AI gone wrong in legal practice. He used ChatGPT to research a brief against Avianca Airlines and submitted six entirely fictional case citations. When the judge questioned the citations, Schwartz went back to ChatGPT and asked if the cases were real. ChatGPT confidently assured him they were legitimate and could be found in “reputable legal databases such as LexisNexis and Westlaw.” The judge called it “an unprecedented circumstance” and fined the firm $5,000.

Why Smart People Keep Making Dumb Mistakes

Here’s what’s fascinating, and terrifying, about this pattern: these aren’t lazy lawyers or legal novices. These are experienced attorneys at major firms who should know better. So why does this keep happening?

MIT Technology Review hit the nail on the head: “The fact that high-powered lawyers, whose very profession it is to scrutinize language, keep getting caught making mistakes introduced by AI says something about how most of us treat the technology right now.” AI tools have this magical quality—you ask a complex question and get back what sounds like a thoughtful, authoritative answer. Over time, these tools develop a “veneer of authority” that makes us trust them more than we should.

Legal expert Maura Grossman from the University of Waterloo observes that lawyers fall into two camps: those who are “scared to death” of AI and won’t touch it, and the early adopters who are “tight on time” and eager for anything that can help them meet their deadlines. Which group isn’t checking their work carefully?

The Technical Reality: AI Doesn’t Actually “Know” Anything

Here’s the thing everyone needs to understand: AI hallucinations aren’t a bug—they’re a feature of how these systems work. Large language models generate responses based on statistical patterns in their training data, rather than verifying facts. They’re essentially very sophisticated prediction engines that guess what words should come next.
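That prediction-engine behavior can be sketched in a few lines. The toy bigram model below is nothing like a real LLM in scale, but it illustrates the same mechanism: it learns only which token tends to follow which in a handful of real-looking citation strings, then samples new ones. The output has the *shape* of a citation, with no guarantee the case exists.

```python
import random

# Toy illustration (not a real LLM): a bigram model "trained" on a few
# citation-shaped strings. It has no notion of truth -- it only learns
# which token tends to follow which, then samples accordingly.
corpus = [
    "Smith v. Jones , 123 F.3d 456 ( 2d Cir. 1999 )",
    "Brown v. Board , 347 U.S. 483 ( 1954 )",
    "Roe v. Wade , 410 U.S. 113 ( 1973 )",
]

# Count, for each token, which tokens have followed it.
model = {}
for line in corpus:
    tokens = ["<s>"] + line.split() + ["</s>"]
    for cur, nxt in zip(tokens, tokens[1:]):
        model.setdefault(cur, []).append(nxt)

def generate(seed=0):
    """Sample a 'citation' one token at a time, the way an LLM samples text."""
    rng = random.Random(seed)
    token, out = "<s>", []
    while True:
        token = rng.choice(model[token])
        if token == "</s>":
            return " ".join(out)
        out.append(token)

# The result looks like a citation because the pattern is right,
# but the model never checked whether the case it names is real.
print(generate())
```

Because the model can recombine pieces of different citations, it will happily emit a party name from one case stitched to the reporter and year of another: a fluent, authoritative-looking fabrication, produced by design rather than by malfunction.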

Yet companies are marketing AI tools to lawyers with promises like “Feel confident your research is accurate and complete” (Westlaw Precision) and AI that’s “backed by authoritative content” (CoCounsel). Those marketing claims didn’t prevent Ellis George from getting slapped with a $31,100 sanction for trusting the technology.

Professional Responsibility in the Age of AI

The American Bar Association saw this train wreck coming and, in July 2024, issued its first formal ethics opinion on AI. The bottom line? Lawyers must “fully consider” their ethical obligations when using AI, including duties related to competence, confidentiality, communication, and fees.

The ABA made it crystal clear: attorney ethics rules require lawyers to verify their work, and that responsibility extends to “even an unintentional misstatement” produced through AI. In other words, “the AI made me do it” isn’t a valid defense.

When Courts Fight Back

Judges are taking these violations seriously. Sanctions range from $1,000 to $5,000 in typical cases, though as we’ve seen, they can go much higher. Courts are also imposing referrals for attorney discipline, mandatory AI training, and other professional consequences.

Some judges have gotten proactive. U.S. District Judge Brantley Starr now requires lawyers to certify whether they used AI in their filings and confirm that a human reviewed the work for accuracy. This certification requirement has spread to other Texas courts, creating a new layer of professional accountability.

What Needs to Happen Now

The path forward requires a reality check across the legal profession:

Stop Treating AI Like Google: Only 10% of law firms have AI use policies. Firms need comprehensive training programs that explain what AI does versus what people think it does.

Make the Consequences Real: As Santa Clara University law professor Edward Lee puts it, monetary sanctions alone won’t stop this practice. State bars should treat submitting AI-generated fake citations as grounds for disciplinary action, including potential license suspension or revocation.

Embrace Verification as a Core Skill: The ABA advises that lawyers don’t need to become AI experts, but they must understand the capabilities and limitations of the AI tools they use. That includes understanding that verification isn’t optional—it’s fundamental.

The Bigger Picture

Here’s what keeps us up at night: MIT Technology Review warns that “those mistakes are getting caught (for now), but it’s not a stretch to imagine that at some point soon, a judge’s decision will be influenced by something that AI makes up, and no one will catch it.”

That’s the real danger. We’re not just talking about professional embarrassment or financial sanctions anymore. We’re talking about the integrity of our legal system.

Harvard Law School’s David Wilkins frames it perfectly: the legal profession needs to figure out how to train lawyers properly while incorporating these powerful new tools. AI isn’t going anywhere, and it genuinely can make legal practice more efficient and accessible. But the fundamental obligation to verify facts and get the law right? That’s not negotiable.

The Bottom Line

AI is a tool, not a replacement for professional judgment. When lawyers forget that distinction, they end up as cautionary tales in articles like this one. The technology will continue to improve, but human oversight will always be essential.

For now, the lesson is simple: if you’re going to use AI in your legal practice, treat it like you would any other research tool that requires verification. Because the alternative—becoming the next lawyer in the headlines for all the wrong reasons—isn’t a risk worth taking.


Disclaimer: The information on this website is provided solely as a service to interested Internet users. While the information on this site is about legal issues, it is not legal advice. Moreover, due to the rapidly changing nature of the law and our reliance on information provided by outside sources, we make no warranty or guarantee concerning the accuracy or reliability of the content at this site or at other sites to which we link.
