Busted: Lawyers’ AI Excuses

In what’s becoming an increasingly common occurrence in courtrooms across the nation, lawyers are facing sanctions for submitting legal briefs containing fake citations generated by artificial intelligence tools. What’s perhaps even more astonishing than the AI’s ability to fabricate legal precedents is the creative range of excuses attorneys are offering to explain their mistakes.

The Epidemic of AI-Generated Fake Citations

Judges across the country have described the proliferation of fake AI-generated case citations as an “epidemic” that’s bogging down court proceedings and wasting valuable judicial resources. In a comprehensive review of 20 cases compiled by French lawyer and AI researcher Damien Charlotin, Ars Technica found a disturbing pattern: rather than simply admitting their reliance on AI tools, lawyers are offering increasingly implausible excuses for their failure to properly vet their filings.

A Pattern of Absurd Excuses

The excuses lawyers offer when caught using AI tools that generated fake citations range from the merely implausible to the outright bizarre:

Claiming Ignorance of AI Use

Perhaps the most common excuse is that lawyers simply didn’t realize they were using AI. Some attorneys have argued they mistook AI-generated content for standard search results. In one notable case, a California lawyer claimed he thought Google’s AI Overviews were regular search results. More often, however, lawyers blame underlings or even clients for incorporating AI-generated content without proper oversight.

In a particularly convoluted case from Texas, a lawyer was sanctioned after deflecting blame so persistently that the court eventually put his client on the stand, once he revealed she had played a significant role in drafting the aberrant filing. When the court asked directly, “Is your client an attorney?” the lawyer was forced to respond, “No, not at all your Honor, just was essentially helping me with the theories of the case.”

Technical Difficulties as a Defense

When the ignorance defense fails, some lawyers turn to technical excuses:

  • A New York lawyer blamed malware and hackers for adding fake citations to his filing after initially admitting he used Microsoft Copilot
  • An Alabama attorney claimed that “toggling windows on a laptop is hard,” explaining why he chose Ghostwriter Legal over established research tools
  • Several lawyers have cited login issues with established legal databases like Westlaw as justification for turning to AI tools

The “Rough Draft” Defense

Some attorneys have claimed they accidentally filed draft versions of documents rather than final versions, though judges have noted that this excuse is particularly weak when the fake citations appear throughout the document in a coherent pattern.

Sanctions for AI Misuse

Courts have responded to these incidents with varying degrees of sanctions, from minimal fines to substantial monetary penalties:

  1. Minimal sanctions ($150-$1,000) for attorneys who immediately admit their AI use and show remorse
  2. Moderate penalties ($5,000-$10,000) for those who offer weak excuses or show negligence
  3. Substantial sanctions (up to $85,000) for attorneys who lie about their AI use or show a pattern of abuse

In addition to monetary penalties, some lawyers have faced more severe consequences including referral to grievance committees and requirements to disgorge fees collected for work involving AI misconduct. Texas US District Judge Marina Garcia Marmolejo highlighted the resource strain, noting that “at one of the busiest court dockets in the nation, there are scant resources to spare ferreting out erroneous AI citations in the first place, let alone surveying the burgeoning caselaw on this subject.”

Failure to Verify AI Output: The Core Problem

Judges have been remarkably consistent in their assessment of these cases: regardless of the excuse offered, the fundamental problem is lawyers’ failure to independently verify information generated by AI tools. As US District Judge Terry F. Moorer noted in an October sanctions order, “basic reprimands and small fines are not sufficient to deter this type of misconduct because if it were, we would not be here.”

The duty to verify legal research and citations rests squarely on the shoulders of the attorney signing the filing. As Judge Nancy Miller emphasized, “the responsibility for correcting erroneous and fake citations never shifts to opposing counsel or the court, even if they are the first to notice the errors.” The path to reduced sanctions, judges consistently note, is to admit AI use as soon as it’s detected, act humbly, self-report the error to relevant legal associations, and voluntarily take classes on AI and law.

Specific AI Tools Implicated

Several AI tools have been specifically identified in these sanction orders:

Popular Generative AI Systems

  • ChatGPT: The most commonly cited AI tool, often used through third-party legal applications
  • Microsoft Copilot: Integrated into Microsoft Office products, making it easily accessible to lawyers working in Word

Specialized Legal AI Tools

  • Ghostwriter Legal: A Microsoft Word plug-in that appeared automatically in the sidebar, leading one Alabama lawyer to describe it as “the allure of a new program that was open and available”

These tools are contrasted with established legal research databases:

  • Westlaw
  • LexisNexis
  • Fastcase

Judges have consistently noted that while AI tools may be faster or more convenient, they simply cannot be relied upon for accurate legal research without independent verification. This point is underscored by the American Bar Association’s Formal Opinion 512, which outlines the ethical responsibilities lawyers face when incorporating generative AI into their practice.

High-Stakes Intersection of Technology and Legal Ethics

These incidents highlight serious concerns about the intersection of technology and legal ethics. The American Bar Association has reiterated that existing professional conduct rules apply to AI use by lawyers, and several state bar associations have issued formal guidance on the ethical use of generative AI in legal practice.

The New York City Bar Association’s Professional Ethics Committee recently issued Formal Opinion 2024-5 providing comprehensive guidance on navigating the ethical use of generative AI. The opinion emphasizes that lawyers must maintain competence when using any technology, including AI tools.

As US bankruptcy judge Michael B. Slade bluntly stated, “At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud.” This sentiment is echoed in the ABA’s Formal Opinion 512, which provides essential guidance for lawyers and law firms using Generative AI tools, focusing on key areas of concern including competence, confidentiality, communication, and candor.

Consequences for the Legal Profession

Beyond individual sanctions, these incidents raise broader questions about how the legal profession adapts to new technologies. Judges are spending increasing amounts of time identifying and addressing fake AI citations, time that could be better spent on substantive legal issues.

The pattern of excuses also raises concerns about honesty and professional integrity. When a lawyer changes his story multiple times, as the New York attorney did when he pivoted from admitting Copilot use to blaming malware before retreating to his original account, courts are forced to question not just competence but character. As Judge Thermos noted in that case, the excuse was an “incredible and unsupported statement,” particularly since there was no evidence that the prior draft ever existed.

Conclusion

While artificial intelligence tools have the potential to revolutionize legal research and practice, these incidents demonstrate that lawyers must approach these technologies with caution, competence, and rigorous verification practices. The excuses offered by sanctioned attorneys may be creative, but they’re ultimately ineffective in addressing the core issue: that lawyers remain responsible for the accuracy and integrity of their legal filings, regardless of the tools they use to draft them.

As bar associations continue to develop guidance on AI use in legal practice and courts issue increasingly detailed opinions on professional responsibilities in the age of AI, one thing is clear: ignorance, whether willful or not, is no longer an acceptable defense when it comes to legal research and citation accuracy. The legal profession’s embrace of AI tools must be accompanied by an equally strong commitment to verification and professional responsibility.
