Why Do Lawyers Rely on ChatGPT Despite Risks and Ethical Concerns

Why Do Lawyers Keep Using ChatGPT?

Lawyers keep using ChatGPT primarily because it offers perceived efficiency and helps manage tight deadlines, despite its known limitations and risks. AI can serve as a useful support tool for legal tasks, but legal professionals must still verify everything it generates.

Efficiency Amid Time Constraints

Lawyers often face significant time pressure, handling multiple cases simultaneously. ChatGPT and similar AI tools provide a way to quickly draft documents, summarize cases, and find sample legal language. This perceived time-saving benefit makes AI attractive in fast-paced legal environments.

Modern legal research platforms, such as LexisNexis and Westlaw, also incorporate AI. These integrations streamline access to relevant case law and statutes, enhancing productivity. For many lawyers, AI serves as a helpful assistant that can draw on vast amounts of data instantly.

Widespread AI Adoption in Law

Recent surveys by Thomson Reuters reveal that 63% of lawyers have used AI tools at some point. About 12% say they use AI regularly. Lawyers commonly rely on ChatGPT to generate summaries of case law, explore statutes, and find sample legal language for orders and motions.

  • AI assists in drafting preliminary documents to save research time.
  • Generative AI models help organize legal citations and references.
  • Many lawyers prioritize exploring AI implementation to increase efficiency.

Risks of Hallucinations and Errors

The reliance on AI comes with significant risks. ChatGPT can produce “hallucinations”: fabricated information presented confidently. Lawyers have gotten into trouble for submitting filings with false citations and incorrect legal claims.

Notable cases highlight this problem:

  • In 2024, a filing concerning journalist Tim Burke contained major misquotations related to First Amendment law, leading the court to strike the motion.
  • Lawyers in a copyright case admitted to using Anthropic’s Claude AI, which generated inaccurate citation details in key declarations.
  • Misinformation expert Jeff Hancock submitted a court declaration with citation errors caused by ChatGPT’s hallucinations.
  • A California judge discovered that a brief contained entirely fictitious case law, undermining its credibility.

Responsible AI Use Requires Verification

Legal professionals must treat AI outputs cautiously. Experts advise viewing ChatGPT as a junior associate: a tool that requires close supervision and thorough fact-checking before submission.

Attorneys should never outsource research and legal writing to AI without verifying citations and content validity. Understanding AI’s limitations is essential to prevent ethical lapses and disciplinary actions.

ABA Guidance on AI in Legal Practice

The American Bar Association emphasizes that lawyers have a duty to maintain technological competence. This includes understanding generative AI’s evolving role and the risks it entails.

Key ABA recommendations include:

  • Gain a general understanding of AI benefits and potential harms.
  • Evaluate confidentiality risks when entering client information into AI systems.
  • Consider transparency with clients regarding the use of AI tools.

Summary of Key Points

  • Lawyers use ChatGPT mainly for time efficiency and ease of legal research.
  • Many do not fully understand AI’s strengths and weaknesses, leading to errors.
  • AI-generated hallucinations can cause serious mistakes in legal filings.
  • Careful verification of AI outputs is critical to responsible use.
  • ABA guidance calls for lawyers to develop technological competence and manage confidentiality.

Why Do Lawyers Keep Using ChatGPT?

Lawyers continue to use ChatGPT mainly because it seems to save them time in their hectic workflow, despite the risks and misunderstandings surrounding AI’s true capabilities. This is the quick answer, but the real story is more nuanced and quite captivating. Let’s take a walk through the legal world’s romance with AI and the lessons lurking behind the courtroom drama.

Picture a lawyer with a towering caseload, deadlines lurking like sharks, and mountains of case law to scan. Enter ChatGPT and AI-powered tools embedded in familiar databases like LexisNexis and Westlaw. These AI assistants promise to speed up research and drafting, offering a shortcut in a notoriously slow and meticulous profession.

The reality? Many lawyers don’t really get what ChatGPT is or how it works. One lawyer famously treated ChatGPT as a “super search engine” before learning the hard way that it operates more like a highly persuasive parrot — it can produce either helpful insights or complete nonsense (“hallucinations” in AI-speak). Some even submitted legal filings stuffed with fabricated citations. Oops.

The Time Crunch and Illusion of Efficiency

Lawyers live under relentless time constraints. Handling dozens of cases simultaneously means every minute counts. In a 2024 Thomson Reuters survey, 63% of lawyers said they had tried AI, and 12% had integrated it into their regular work. They mainly use it to write case law summaries or to find sample forms and statutes.

Imagine getting a fast draft of relevant precedents or a rough outline of a memo, freeing up mentally taxing hours. It’s no surprise then that half of the surveyed attorneys ranked exploring AI’s potential as a top workplace priority. AI looks like a magical time saver.

And speaking of magic, the integration of AI into powerhouse legal research platforms like LexisNexis or Westlaw has made AI less of a novelty and more of a daily tool. This normalization fast-tracks AI adoption, but it also blurs the line between a reliable assistant and an overconfident guesser.

The Hall of AI Hallucinations: When AI Gets Too Creative

Here’s where the story turns from “cool helper” to something more cautionary. AI “hallucinations,” as lawyers call them, are fabricated case citations or outright false information dressed up in fancy legal language. These errors have real consequences.

One high-profile example involves Tim Burke, a journalist caught up in a legal drama over Fox News footage. His lawyers filed a motion filled with misquoted case law. The judge promptly struck it, calling out its “significant misrepresentations.” Another case saw lawyers submit a witness declaration citing a nonexistent paper with inaccurately listed authors. Even respected misinformation experts have stumbled by trusting ChatGPT too much.

Judges, the ultimate gatekeepers, aren’t fooled for long. Judge Michael Wilner’s story is telling: he read a brief, found it persuasive, then looked up the cited cases — only to discover they didn’t exist. That’s a lesson from the courtroom trenches: AI can sound confident but still mislead.

Treating ChatGPT Like a Junior Associate: Checks Required

Some lawyers, like Arizona election lawyer Alexander Kolodin, recommend treating AI like a junior associate. You wouldn’t accept a fresh junior’s work without reviewing it, would you? The same applies. AI can draft, brainstorm, or organize citations, but its output needs human judgment.


Perlman, an expert voice in this field, emphasizes verification. You can’t just trust AI blindly. The American Bar Association echoes this by urging lawyers to maintain “technological competence” and understand AI’s evolving landscape. The tech is a tool, not a replacement for careful legal reasoning.

Confidentiality and Ethical Considerations

Using ChatGPT isn’t just about accuracy. Attorneys must also wrestle with confidentiality. Feeding sensitive client information into cloud-based AI tools demands sharp ethical navigation. The ABA guidance warns lawyers to weigh these risks and decide whether clients should be informed when AI tools are in play.

It’s not just about who gets to use the tool, but how it’s used. Responsible use involves transparency, understanding both benefits and risks, and applying care at every step.

Why Do Lawyers Keep Using ChatGPT? A Final Perspective

To sum up, lawyers’ continued use of ChatGPT boils down to a blend of pressure and promise:

  • Time-saving: AI helps tackle the massive workload faster.
  • Normalization: AI tools are integrated into trusted platforms, making them part of daily work.
  • Misunderstanding: Some lawyers don’t fully grasp AI’s limitations, seeing it as a flawless oracle.
  • Risks: AI can hallucinate, producing erroneous, even dangerous legal citations.
  • Best practices: Treating AI like a junior associate and verifying output can mitigate errors.
  • Ethics: Confidentiality and client communication require serious consideration.

Lawyers are not blindly rushing; rather, they’re navigating unfamiliar terrain. ChatGPT is a fascinating, flawed tool that requires respect, caution, and clear-eyed understanding.

Practical Tips for Lawyers Using ChatGPT

  1. Double-check citations: Always verify case law, statutes, and quotations produced by AI (see the sketch after this list).
  2. Keep it confidential: Avoid inputting sensitive client details into public AI platforms.
  3. Prefer AI features in trusted platforms: AI integrated within tools like Westlaw or LexisNexis is often tailored for legal reliability.
  4. Treat AI as a junior colleague: Review and refine all AI outputs before submission.
  5. Stay informed: Keep up with ABA guidelines and tech updates to maintain competence.
  6. Be transparent with clients: Consider discussing AI use proactively with clients.
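
None of these tips requires special tooling, but the first one, citation checking, is easy to make systematic. Below is a minimal, illustrative Python sketch, not taken from any tool mentioned above: the regex covers only a few common U.S. reporter formats (an assumption of this example), and its only job is to surface citation-like strings so a human can look up each one before filing.

```python
import re

# Rough pattern for common U.S. reporter citations such as "410 U.S. 113"
# or "598 F.3d 1336". Deliberately simplified: statutes, pin cites, and
# short-form citations are not covered, so treat this as a checklist
# generator, never as an authority check.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                        # volume number
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)"
    r"\s+\d{1,4}\b"                                        # first page
)

def extract_citations(draft: str) -> list[str]:
    """Return the unique citation-like strings found in a draft."""
    return sorted(set(CITATION_PATTERN.findall(draft)))

if __name__ == "__main__":
    # Hypothetical AI-generated passage; the case names and cites are invented.
    draft = (
        "Under Smith v. Jones, 123 F.3d 456 (9th Cir. 1997), and "
        "Doe v. Roe, 410 U.S. 113 (1973), the motion should be granted."
    )
    for cite in extract_citations(draft):
        print(f"[ ] verify in Westlaw or LexisNexis: {cite}")
```

The checkbox output is the point: the script never decides whether a citation is real; it only produces a list for a lawyer to confirm in a trusted research platform before anything is filed.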

The Bottom Line

ChatGPT holds serious allure for legal professionals. Its seductive promise to ease workloads is real, but it comes with pitfalls. The last thing anyone wants is a judge discovering your brief cites fictional law. The savvy lawyer uses AI like a cautious pilot: appreciating the power of the tool, but steering with steady hands, sharp eyes, and a clear strategy.

If you’re a lawyer or in a legal field, the question isn’t just “Why do lawyers keep using ChatGPT?” but also “How do we keep using it responsibly and well?” Now that’s a conversation worth having.


Why do lawyers rely on ChatGPT despite its known inaccuracies?

Lawyers use ChatGPT mainly to save time on research and drafting. Many see it as a helpful starting point but still verify the output carefully. They treat it like a junior associate who needs review before final use.

How common is AI use among lawyers in their daily work?

About 63% of lawyers have used AI tools, and 12% use them regularly. Many apply AI to draft case law summaries and research statutes, forms, or sample legal language.

What risks do lawyers face when submitting AI-generated legal documents?

AI can produce incorrect or fabricated citations, called hallucinations. Submitting such errors can lead to sanctions, stricken filings, and damaged credibility with judges.

How do lawyers ensure responsible use of ChatGPT and similar tools?

They double-check AI-generated content and treat it as a draft requiring human review. Awareness of AI limits and verifying citations helps mitigate risks before submitting documents.

What guidance does the American Bar Association give about lawyers using AI?

The ABA advises lawyers to maintain technological competence, understand AI’s risks and benefits, protect client confidentiality, and consider informing clients when they use AI tools.
