
Why Our Law Firm Bans Generative AI for Research and Writing

Our law firm has a policy prohibiting our lawyers from using generative artificial intelligence to produce legal work product such as briefs, motion arguments, and researched opinions.

Here are some of the reasons we think this is the only right course.

Large language model generative AI digests a huge amount of literature and then predicts what a human would most likely say in response to a question, constrained by how the question was presented. But generative AI sometimes “hallucinates,” a clever term describing what the tool has done when caught in an obvious lie. Media coverage of this phenomenon usually includes a quote along the lines of: “We don’t understand why it does that, and we are working on ways to control it.”

If the tool is working as described, predicting the most likely human response based on an enormous library of human works, then all it does is “hallucinate.” It predicts what a human would say, barely tethered by fact. At some point, such fabrications will be obvious, but at no point can the output of a product that works this way be trusted unless the user knows immediately whether each statement in it is true or false.

Apologists have suggested that generative AI could help summarize a lawyer’s own work, using a prompt that asks “digest the briefs and write an oral argument.” Generative AI certainly can do that, and it would be readily verifiable. But every appeals judge or justice since John Jay has told us oral arguments shouldn’t regurgitate the briefs, and oral argument is (or should be) a carefully crafted standalone art form.

Generative AI’s adherents in the legal profession, and even its skeptics, including some courts, warn that all citations must be checked and certified as accurate. That misses the point.

None of this “generative” output is based on thought, analysis, understanding, or a truly insightful examination of the case law, the statutes, the facts of the case, and the policy reasons underlying cases, statutes, and analogous sources of law. Generative AI doesn’t think. “Remove the demonstrable lies” doesn’t cure the problems, including the problem of undetected lies.

Take the infamous New York case in which two unfortunate lawyers submitted the ChatGPT brief containing citations to non-existent cases, quotations from other non-existent cases, and other “hallucinatory” problems. The legal community’s reaction, in the main, was that the lawyers should have double-checked those citations.

Think about it. Remove the fake cases and the misrepresented quotes. Would that result in a good brief? Would the court have had the benefit of a professional’s careful reasoning and analysis? Would the client have been well-served? Would any of the purposes of a brief (other than getting something timely filed) have been met?

“Take out the provable fakes” is an unacceptable solution to the problem of a generative AI-assisted brief. How does a lawyer responsibly check the rest of it? By starting over and doing the work, then comparing that work product to the fabricative AI product. The exercise saves no work at all if a thoughtful brief is the goal. Even then, the risk of being lulled or misled by the defective starting point should be self-evident.

This isn’t like when a partner has to confirm that the cases a first-year associate cites are accurate. Associates are taught to think like lawyers, their drafts have thought and judgment behind them, and they’re presumably not compulsive liars.

What about a generative AI program trained only on a large library of reliable material, such as the West system or the preserved research and output of a large law firm? This might reduce the risk that the generative AI tool has scraped rubbish off the internet at large, but it will do nothing to stop the tool from fabricating cases.

We have no way to determine what the tool saw, what it missed, what it overlooked, what it misunderstood, the analogies it failed to make, what authorities were truly supportive or adverse based on a nuanced understanding of the facts of our own case and the facts of decided cases, what policies animated these cases, or what lines of analysis might have been left out of the query used for the search.

A generative AI tool can’t do any of these things because it can’t think. The narrative it generates is still a made-up answer created with algorithms, not with thought and judgment.

I hope no court we appear before is tempted to use generative AI to write its opinion. That might be even worse.

This invention may be useful for some purposes: those where the user immediately knows whether the output is accurate, or where accuracy isn’t important to the user, and where zero understanding or analysis of sophisticated concepts is required. But it simply can’t be used to produce competent legal work product intended to aid a client or assist a court in the important work of fairly deciding disputes and developing the law.


Reproduced with permission. Published February 28, 2024. Copyright 2024 Bloomberg Industry Group 800-372-1033. For further use please visit https://www.bloombergindustry.com/copyright-and-usage-guidelines-copyright/

 
