For Now, Generative AI Is Risky for Class Action Counsel
Our firm recently conducted a survey that reveals conflicting views among companies about whether their outside counsel should use generative artificial intelligence.
The survey, based on interviews with general counsel or senior legal officers at more than 300 Fortune 1000 and other large companies across a variety of industries, found that 61.8% of respondents believe their outside counsel should be using generative AI "in some way."
At the same time, over 25% believe their outside counsel should not use generative AI at all, citing unknown risks and unproven results.
Although generative AI has made a media splash in recent years, many lawyers remain unfamiliar with the concept. This article uses the class action survey results as a springboard to explore several topics of increasing importance to class action lawyers:
- What is generative AI? What are its benefits? What are its limitations?
- What risks — specifically, what class action risks — do companies that use it face?
- Why do clients believe class action defense litigators can and should use generative AI to produce better case insights, lower fees and, ideally, better case outcomes for clients — and is that really possible at this time?
Perceived Benefits of Generative AI
Generative AI is a form of artificial intelligence that generates text, images and other content based on specific data on which the model was trained. Its models differ from earlier forms of AI in that they do not simply function as a retrieval service. Rather, they use algorithms that seek to predict how humans would respond to specific questions put to them.
Proponents of generative AI believe it has the potential for good. Meta Platforms Inc.'s chief AI scientist Yann LeCun opined in a recent interview that modern AI will bring a lot of benefits to the world, and that chatbots will "democratize creativity to some extent."[1]
In touting the benefits of generative AI, perhaps large companies' chief legal officers are following the lead of their business executives. An MIT Technology Review survey of 600 senior technology executives in large enterprises or public sector organizations reports that companies are sharply focused on retooling for a data and AI-driven future.
Every organization surveyed "will boost its spending on modernizing data infrastructure and adopting AI during the next year, and for nearly half, 46%, the increase will exceed 25%."[2] Eighty-one percent of the largest organizations — with annual revenue of more than $10 billion — already operate 10 or more AI systems, and 28% use more than 20.[3] Eighty-eight percent of surveyed companies already use generative AI.[4]
Generative AI Limitations and Risks
Despite the possibilities, many commentators urge caution about the use of generative AI. Gartner Inc. analyst Avivah Litan identifies five risks of generative AI.[5]
The first is the generation of errors called "hallucinations." Simply put, the information ChatGPT produces is sometimes simply wrong, and not always conspicuously so.
We must remember that generative AI cannot think. It has no actual understanding of the data it surveys, and the user has no idea what the tool reviewed, or ignored, or what insights a human might discern from the raw materials.
When generative AI produces a response to an inquiry, the output does not reflect judgment or insight. Much like the autocomplete feature on our smartphones, the tool merely predicts what sequence of words might plausibly follow from some selected starting point.
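To make the autocomplete analogy concrete, consider the deliberately oversimplified Python sketch below. It is a toy bigram counter, offered purely as an illustration and nothing like the scale or architecture of a real large language model, but it shows how a purely statistical predictor can emit a plausible next word while understanding nothing:

```python
# A toy illustration of "autocomplete"-style prediction: choose the next
# word by frequency, given only the previous word. Real generative AI
# models are vastly larger, but the core task -- prediction, not
# understanding -- is the same in kind.
from collections import Counter, defaultdict

training_text = (
    "the court granted the motion the court denied the motion "
    "the court granted the petition"
).split()

# Count how often each word follows each other word in the training data.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen in training, not the 'right' one."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("court"))  # prints "granted" -- most common, not most true
print(predict_next("the"))    # prints "court"
```

The predictor answers confidently either way; whether the court in any real case granted or denied anything never enters the computation.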
Sometimes this output will be noticeably wrong to the user even without cite checking, or it will prove wrong only after cite checking. But absent such conspicuous errors, we are still left to trust a predictive modeling tool, not a thoughtful, insightful adviser who has been taught to "think like a lawyer" and who actually possesses that aptitude.
It is possible, of course, that the model may contain accurate information up to a point, but then it can and admittedly will make up other things to fill in the gaps. We also know that the information it provides is sometimes biased. The outcomes can reflect the biases, whether racial, gender or otherwise, of the society from which it draws its answers.
A recent McKinsey & Co. article said the risks of error and bias can be mitigated. It suggests "it's crucial to carefully select the initial data used to train these models to avoid including toxic or biased content."[6]
On the other hand, who decides what is "toxic" or "biased"? Already we are seeing complaints that different AI language models are skewed to reflect distinct political biases; ChatGPT, for example, is viewed by some to be "left-wing libertarian."[7]
Google LLC recently had to pause its language model Gemini's ability to generate images after significant backlash when the model depicted historical figures in a variety of ethnicities. Gemini also made headlines for refusing to answer whether Adolf Hitler harmed society more than current cultural figures such as Elon Musk.[8]
The second risk identified by Litan involves creating "deepfakes," or using generative AI to create fake videos, photos and voice recordings that use the image and likeness of another person.
A humorous example is the widely distributed AI-generated photo of Pope Francis in a puffer jacket.[9]
More ominously, however, a U.S. Court of Appeals for the Ninth Circuit judge observed last year in Project Veritas v. Schmidt that "anyone can access and learn how to use AI-powered generative adversarial networks to create convincing audio or video 'deepfakes' that make people appear to say or do things they never actually did."[10]
Sexually explicit deepfakes of pop icon Taylor Swift making the rounds on the internet illustrate the point.[11]
The third risk involves data privacy concerns. Are users' input data being collected? How are they stored and reviewed? Privacy concerns led Italy to ban ChatGPT altogether.[12]
Fourth, "the advanced capabilities of generative AI models, such as coding, can also fall into the wrong hands, causing cybersecurity concerns."[13]
Finally, copyright is a concern because generative AI models that draw on massive quantities of data don't always differentiate between protected and unprotected source material.
Based on these and other concerns, Chief Justice John Roberts said in his 2023 report on the judiciary that the use of AI in law "requires caution and humility." He explained that at the trial court level:
Machines cannot fully replace key actors in court. Judges, for example, measure the sincerity of a defendant's allocution at sentencing. Nuance matters: Much can turn on a shaking hand, a quivering voice, a change of inflection, a bead of sweat, a moment's hesitation, a fleeting break in eye contact. And most people still trust humans more than machines to perceive and draw the right inferences from these clues.[14]
In appeals as well, judges perform "quintessentially human functions." AI "is based largely on existing information, which can inform but not make such decisions."[15]
My law partner and firm general counsel Peter Winders expresses deeper criticism of generative AI. He notes that:
We have no way to determine what the tool saw, what it missed, what it overlooked, what it misunderstood, the analogies it failed to make, what authorities were truly supportive or adverse based on a nuanced understanding of the facts of our own case and the facts of decided cases, what policies animated these cases, or what lines of analysis might have been left out of the query used for the research.
Generative AI "can't do any of these things because it can't think."[16]
Perhaps the biggest risk of generative AI is what former Bush administration Defense Secretary Donald Rumsfeld called the "unknown unknown" — the unknowns we don't know we don't know.[17] Simply put, because generative AI is so new, we don't know the extent of the risks of its use.
Class Actions Involving Generative AI
The survey shows that companies are concerned that the use of generative AI could generate class actions. One vice president and associate general counsel of a Fortune 500 retailer predicted, "It's not here yet, but they are coming."
Actually, class actions over generative AI already have arrived. In June and July 2023, for example, class actions were filed in the U.S. District Court for the Northern District of California against OpenAI and Alphabet Inc., alleging their generative AI tools violate privacy and property rights.[18]
Respondents to our survey reported that data privacy dominates the expected class actions arising from the use of generative AI.
They see the greatest risks as flowing from the unintended release of sensitive data, the misuse of AI, limited controls over access and phishing using chatbots — 44% of respondents listed privacy and data security as their biggest class action concern from generative AI.
One deputy general counsel for a regional bank said, "I suspect the use of generative AI could result in data leaks and privacy issues if confidential information is released."
An assistant general counsel for a large health care company said, "I know some attorneys are starting to bring lawsuits against the use of these bots because the conversations and exchanges are being recorded."
In fact, some software developers filed a class action against GitHub Inc.'s development of two AI coding tools, Copilot and Codex, in the Northern District of California. Among other things, they alleged that GitHub "improperly used" their "sensitive personal data" by incorporating it into Copilot "and thereby selling and exposing it to third parties."[19]
The court dismissed this claim last year, however, because the plaintiffs failed to allege any disclosure of personal information and therefore failed to allege an actual or imminent injury sufficient to confer standing.[20]
Almost 12% of corporate counsel also suggest that the use of generative AI could lead to class actions in the form of discrimination claims. One general counsel of a large insurance company predicted discrimination suits will come "because much of the information already has an inherent bias built-in."
About 6% thought using generative AI could result in intellectual property class actions. For example, because generative AI models mine their input data from a vast number of sources, they may not distinguish between data protected by intellectual property rights and data in the public domain. Several such lawsuits already have been filed.
In the GitHub class action, the plaintiffs alleged that Copilot, an AI coding tool trained on open-source code, "reproduces licensed code used in training data as output with missing or incorrect attribution, copyright notices, and license terms."[21]
The court dismissed a damages claim because, while the complaint identified several instances in which Copilot's output matched licensed code written by a GitHub user, none of these instances involved licensed code published to GitHub by plaintiffs.[22]
Nonetheless, the court found standing to exist for injunctive relief claims because the plaintiffs asserted in the complaint that the number of times users used Copilot made it a virtual certainty that a plaintiff's code would be displayed with copyright notices removed or in violation of the plaintiffs' open-source licenses for profit.[23]
In another class action, Andersen v. Stability AI Ltd.,[24] three artists alleged in the Northern District of California that Stability AI scraped copyrighted images from the internet without permission to train its Stable Diffusion product to produce "output images" in the "style of" particular artists without attribution to the original source.
The court dismissed most of the claims last year with leave to amend, in part because the images produced by the models were not substantially similar to the plaintiffs' art and because the images were derived from "five billion images," making it implausible that the plaintiffs' works were involved. Notably, the court refused to strike the class allegations at the pleading stage.[25]
Similarly, a generative AI art model may create a new image from existing art without the original artist's knowledge or approval.[26] A general counsel for a large manufacturer said copyright infringement could be an issue. This same in-house attorney also thought if generative AI is used for design interface in the manufacturing process, "you could have potential product infringement for product liability class actions."
Other risks mentioned by respondents to our survey included insurance claims, defamation claims and securities fraud claims. A general counsel of a private university expressed concern that "putting out wrong and harmful information to the public" could lead to defamation suits.
Perceived Possibilities for Using Generative AI in Managing Class Action Litigation
Legal technology commentators predict that AI will cause the day-to-day role of an attorney two years from now to look very different than it looks today.[27]
Whether those predictions turn out to be true or not, the majority of respondents to our survey said they believe that outside law firms they work with in the class action space should start making use of generative AI in their class action defense work now to lower defense costs.
These respondents believe it is beneficial in performing repetitive and lower-level work such as routine correspondence and draft memos. They tout the promise of saving time and money and freeing up more lawyer time for strategic thinking.
It is possible that generative AI may be useful to accomplish simple tasks where no reasoning ability is required and accuracy can be immediately verified. Some class action practitioners already are using generative AI in this way.
Objectors to a class action settlement apparently used it in preparing objection forms — albeit unsuccessfully.[28] But it is important to keep in mind that generative AI does not simply fetch and retrieve documents or other information of interest.
It purports to review such material and then generates a description of what it supposedly saw. But this output is fraught with the perils described above.
Unlike Google or traditional e-discovery tools, generative AI is not mechanically conducting word searches. So we cannot be certain that the narrative it provides factually describes the contents of the database.
This tool is not simply a refinement of more conventional artificial intelligence apps that we have used for decades. Currently, our law firm, for example, employs over 72 such conventional AI apps to improve the efficiency and to lower the costs of our client services. Generative AI is an entirely different kind of tech tool, and many who are studying it believe it is "not ready for prime time" for the work that law firms perform.
The use of generative AI in drafting legal memos and briefs has proven particularly problematic.
In Mata v. Avianca Inc., a U.S. District Court for the Southern District of New York judge sanctioned two New York lawyers in June 2023 for submitting a legal brief generated by ChatGPT. The brief contained citations to six fictitious cases.[29]
The judge said: "Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool" (such as Westlaw or LexisNexis) "for assistance."
Nonetheless, "existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings," and the lawyers "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question."[30]
Ironically, opposing counsel cited three of the fake cases in a table of authorities in one of its briefs. The court called this an "innocent mistake" that submitting counsel "promptly caught and corrected on its own."[31]
In June 2023, a Colorado lawyer confessed that he used ChatGPT to draft a motion for summary judgment; several of the cited cases were made-up.[32] He was suspended from the bar as a result.[33] The suspension order said he violated Colorado's Rule of Professional Conduct 1.1, Rule 1.3, Rule 3.3(a)(1) and Rule 8.4(c).
A Texas appellate court also heavily criticized a lawyer last year for filing a brief in a criminal appeal that cited three cases that didn't exist.[34]
Although the government suggested the appellant's brief was prepared using generative AI, the court declined to report the appellant's attorney to the state bar because it had "no information regarding why the briefing is illogical." The court did dismiss the appeal because of the appellant's "failure to adequately brief an issue."
In Pegnatori v. Pure Sports Technologies LLC, in October 2023, a U.S. District Court for the District of South Carolina judge, on a motion for preliminary injunction in a patent dispute, refused to credit the defendant's use of ChatGPT to define "foam" as used in a patent.
The court agreed with the plaintiffs that the defendant's use of ChatGPT was "defective" because ChatGPT didn't exist when the patent was issued and it "has recently been found to be an unreliable source of information, especially in legal proceedings." It stated it would be "taking its eye off the ball if it applied the ChatGPT definition in its review of extrinsic evidence."[35]
Also in October 2023, the U.S. District Court for the District of New Mexico dismissed a pro se plaintiff's public records inspection claims in Morgan v. Community Against Violence, where several of the plaintiff's case citations were "fake."[36] The court warned the plaintiff that it would make no allowances for citations to fake, nonexistent, or misleading authorities.[37]
A judge in the U.S. Bankruptcy Court for the Southern District of Florida noted in In re Vital Pharmaceuticals last year that, in preparing the introduction to an opinion, he prompted ChatGPT to prepare an essay about the evolution of social media and its impact on creating personas and marketing products.
The essay relied on five sources, none of which existed. The judge said he "discarded the information entirely," but added the cautionary note that "[r]eliance on AI in its present development is fraught with ethical dangers."[38]
State bar rules also prohibit lawyers from revealing confidential client information to third parties such as ChatGPT without consent.[39] ChatGPT recently added pop-up disclaimers that caution users about inputting sensitive information, although OpenAI deleted a similar caution from its FAQs.
ChatGPT also added an opt-out form that allows users to opt out of sharing information and storing chat histories. Nonetheless, chat histories remain reviewable.[40]
Thus, one commentator recommends opting out of sharing information using the opt-out form, turning off chat history, and avoiding sharing sensitive information — especially privileged information — when using ChatGPT.[41]
More broadly, privacy concerns and limitations imposed by bar rules should be top of mind when lawyers use any generative AI models, even for limited purposes involving routine actions. As Comment 8 to ABA Model Rule 1.1 explains, attorneys must keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.
In the survey, 14.7% of respondents saw a more limited role for generative AI in reviewing documents. They saw a lower risk here because there are proven, nongenerative versions of AI technology already in use in this area.
Lawyers have used technology-assisted review to identify and tag potentially discoverable documents for years.[42] Sidley Austin LLP teamed up recently with Relativity to conduct an experiment on a closed case file and evaluate how well the generative AI program GPT-4 would perform in coding documents for responsiveness.
GPT-4 correctly coded approximately seven out of 10 documents on average and caught most of the responsive documents. Many of the errors were attributed to ambiguities in the review instructions and to the fact that additional information provided to attorneys during their review was not part of the initial review instructions provided to GPT-4.
The report concludes: "For now, GPT-4 may be best suited at paring down the universe of documents that could then be reviewed using traditional tools and manual human review."[43] The errors that occurred in this test case may inhere in the use of generative AI. We can only speculate about why the tool was wrong 30% of the time.
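For readers curious what such an experiment involves mechanically, here is a minimal Python sketch of LLM-assisted responsiveness coding. It is hypothetical, not Sidley's or Relativity's actual workflow; the model name, review instructions, and prompt wording are placeholder assumptions, and it presumes the OpenAI Python SDK with an API key in the environment:

```python
# Hypothetical sketch of first-pass responsiveness coding with an LLM.
# Assumes: `pip install openai` and an OPENAI_API_KEY environment variable.
# The instructions and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

REVIEW_INSTRUCTIONS = (
    "You are assisting with discovery document review. The request for "
    "production seeks documents concerning Product X pricing decisions. "
    "Answer with exactly one word: RESPONSIVE or NOT_RESPONSIVE."
)

def code_document(text: str) -> str:
    """Ask the model to tag one document; a human reviewer must verify."""
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": REVIEW_INSTRUCTIONS},
            {"role": "user", "content": text[:8000]},  # truncate long documents
        ],
        temperature=0,  # reduce run-to-run variation
    )
    return reply.choices[0].message.content.strip()

# Example: print(code_document("Email: let's raise Product X list price in Q3."))
```

Even in this toy form, the sketch makes the report's point visible: the review instructions are the entire protocol the model sees, so any ambiguity or omitted context in them translates directly into miscoded documents, which is why human verification of the output remains essential.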
A small but strategic 11.8% of respondents said they want outside counsel to use generative AI to come up with new insights from analyzing large data sets. Their goal is to find new data to support their defense.
This benefit is theoretically possible, but users simply cannot take as true the output of generative AI tools at this time. And we must remember that generative AI is incapable of having any insights into anything. It simply cannot think. So we rely on these suggested insights at our peril.
Clients come to lawyers because of our deep experience in tackling knotty legal and factual problems. We cannot simply default to a robotic tool that was not developed, in the first instance, for use in legal work and certainly cannot "think like a lawyer," no matter what.
Finally, almost a quarter of respondents to the survey said they are skeptical of any noticeable changes to defense strategies or overall costs of litigation because of outside law firms using generative AI.
Perhaps these in-house attorneys have been around long enough to have seen other technologies applied to class actions without noticing any measurable change in case outcomes or attorney fees.
Because of the problems with generative AI outlined above, some law firms, including my own, prohibit the use of generative AI in producing any legal work product.[44]
Obviously included in this prohibition would be the preparation of class certification briefs and Rule 23(f) petitions for permission to appeal. But also included would be preparing class notice and class settlement agreements, both of which, though form-based, require significant thought and human reasoning ability beyond the current capabilities of generative AI models.
Conclusion
Arthur C. Clarke said in 1962, "Any sufficiently advanced technology is indistinguishable from magic."[45] Generative AI isn't magic, but its long-term implications for the practice of law, in the class action space and otherwise, are unknown.
When asked to write about the "legal implications of artificial intelligence in courts," ChatGPT responded:
In conclusion, while AI has the potential to improve efficiency and accuracy in court cases, its integration into the legal system requires careful consideration of these legal implications to maintain fairness, justice, and the protection of individual rights.[46]
This understates the risk. Perhaps one day generative AI will bear the weight of the breathless predictions made for it and become an agent of fundamental change in the legal industry, just as the printing press changed book publishing in the 15th century and computers changed how companies did business in the 20th.
For now, class action litigators should view the marketing hype in the same way we view the hype for driverless cars. In other words, generative AI simply is not ready for prime time. At present, generative AI is more a target for class actions than it is a tool to be used in class action practice.
Reprinted with permission from Law360.
[1] Steven Levy, "How Not To Be Stupid About AI, With Yann LeCun," Wired (Dec. 22, 2023), https://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview/.
[2] MIT Technology Review Insights, "Laying the Foundation for Data- and AI-Led Growth: A Global Survey of C-Suite Executives, Chief Architects and Data Scientists," https://www.databricks.com/sites/default/files/2023-11/mittr-x-databricks_survey-report_final_06nov2023.pdf, at 4.
[3] Id. at 5.
[4] Id.
[5] https://www.zdnet.com/article/the-5-biggest-risks-of-generative-ai-according-to-an-expert/.
[6] https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai.
[7] Melissa Heikkilä, "AI language models are rife with different political biases," MIT Technology Review (Aug. 7, 2023), https://www.technologyreview.com/2023/08/07/1077324/ai-language-models-are-rife-with-political-biases/.
[8] Dan Milmo and Alex Hern, "Google chief admits 'biased' AI tool's photo diversity offended users," The Guardian (Feb. 28, 2024), https://www.theguardian.com/technology/2024/feb/28/google-chief-ai-tools-photo-diversity-offended-users.
[9] https://www.cbsnews.com/news/pope-francis-puffer-jacket-fake-photos-deepfake-power-peril-of-ai/.
[10] Project Veritas v. Schmidt, 72 F.4th 1043, 1075 (9th Cir. 2023) (Christen, J., dissenting).
[11] See Laurie Segall, "Opinion: The Taylor Swift AI photos offer a terrifying warning," CNN (Jan. 31, 2024), https://www.cnn.com/2024/01/31/opinions/taylor-swift-deepfakes-ai-segall/index.html.
[12] https://www.zdnet.com/article/italy-just-banned-chatgpt-could-the-us-be-next/.
[13] https://www.zdnet.com/article/the-5-biggest-risks-of-generative-ai-according-to-an-expert/.
[14] Chief Justice John Roberts, 2023 Year-End Report on the Federal Judiciary, at 5-6, https://www.supremecourt.gov/publicinfo/year-end/2023year-endreport.pdf.
[15] Id.
[16] Peter Winders, Why Our Law Firm Bans Generative AI for Research and Writing, Bloomberg Law News (Feb. 28, 2024).
[17] https://en.wikipedia.org/wiki/There_are_unknown_unknowns.
[18] P.M. v. OpenAI LP, 2023 WL 4335507 (N.D. Cal.) (class action complaint); J.L. v. Alphabet Inc., 2023 WL 4491393 (N.D. Cal.) (class action complaint). The plaintiffs voluntarily dismissed the P.M. case on September 15, 2023. Case 3:23-cv-03199 (Doc. 38).
[19] Doe 1 v. GitHub Inc., 2023 WL 3449131, at *2, -- F. Supp. 3d -- (N.D. Cal. May 11, 2023).
[20] Id. at *4.
[21] Doe 1 v. GitHub Inc., 2023 WL 3449131, at *2.
[22] Id. at *5.
[23] Id. at *6.
[24] Andersen v. Stability AI Ltd., 2023 WL 7132064, -- F. Supp. 3d -- (N.D. Cal. Oct. 30, 2023).
[25] Id. at *16-17.
[26] Although not a class action, the Supreme Court's decision in Andy Warhol Foundation for the Visual Arts Inc. v. Goldsmith, 598 U.S. 508 (2023), will likely inform any class actions filed along these lines. The Supreme Court held that Andy Warhol's use of a rock-and-roll photographer's iconic photographs of Prince to create a silkscreen portrait of Prince was not protected from copyright infringement by "fair use" because it shared the "purpose and character" of the original work.
[27] Dorna Moini, "Navigating the AI Frontier: Empowering Attorneys Is the Key to Responsible Transformation, Law.com (December 18, 2023), https://www.law.com/legaltechnews/2023/12/18/navigating-the-ai-frontier-empowering-attorneys-is-the-key-to-responsible-transformation/.
[28] See In re AMC Entertainment Holdings Inc. Stockholder Litig., 299 A.3d 501, 539 n.215 (Del. Ch. July 21, 2023).
[29] Mata v. Avianca Inc., 2023 WL 4114965, -- F. Supp. 3d -- (S.D.N.Y. June 22, 2023).
[30] Id. at *1.
[31] Id. at *1 n.1.
[32] https://krdo.com/news/2023/06/13/colorado-springs-attorney-says-chatgpt-created-fake-cases-he-cited-in-court-documents/.
[33] See People v. Crabill, 2023 WL 811898 (Colo. Nov. 22, 2023).
[34] Ex parte Lee, 673 S.W.3d 755 (Tex. App. 2023).
[35] Pegnatori v. Pure Sports Technologies LLC, 2023 WL 6626159, at *5-6 (D.S.C. Oct. 11, 2023).
[36] Morgan v. Community Against Violence, 2023 WL 6976510 (D.N.M. Oct. 23, 2023).
[37] Id. at *7-8.
[38] In re Vital Pharmaceuticals, 652 B.R. 392, 398 n.12 (Bankr. S.D. Fla. June 16, 2023).
[39] See, e.g., Rules Regulating the Florida Bar 4-1.6.
[40] Foster Sayers, "Legal Ethics and ChatGPT: Is OpenAI Listening to (Us)ers?" Contract Nerds (Aug. 9, 2023), https://contractnerds.com/legal-ethics-and-chatgpt-is-openai-listening-to-users/; Mark C. Palmer, "The Rise of ChatGPT: Ethical Considerations for Legal Professionals," 2Civility (May 12, 2023), https://www.2civility.org/ethical-considerations-for-chat-gpt-for-legal-professionals/.
[41] Sayers, "Legal Ethics and ChatGPT."
[42] See Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125 (S.D.N.Y. Mar. 2, 2015) (approving parties' agreement to use TAR for document review); In re Valsartan, Losartan, and Irbesartan Prods. Liab. Litig., 337 F.R.D. 610, 616 (D.N.J. 2020) ("We are past the time when parties and courts view TAR as an outlier").
[43] Colleen Kennedy, Matt Jackson, and Robert Keeling, "Replacing Attorney Review? Sidley's Experimental Assessment of GPT-4's Performance in Document Review," Law.com (December 13, 2023), https://www.law.com/americanlawyer/2023/12/13/replacing-attorney-review-sidleys-experimental-assessment-of-gpt-4s-performance-in-document-review/.
[44] Peter Winders, Why Our Law Firm Bans Generative AI for Research and Writing, Bloomberg Law News (Feb. 28, 2024).
[45] Arthur C. Clarke, Profiles of the Future: An Inquiry Into the Limits of the Possible, quoted in Vala Afshar, "Measuring trust: Why every AI model needs a FICO score," ZDNET (Aug. 22, 2023), https://www.zdnet.com/article/measuring-trust-why-every-ai-model-needs-a-fico-score/.
[46] Quoted in Erin McGroarty, "ChatGPT has fabricated legal cases: can lawyers use AI ethically?" The Cap Times, https://captimes.com/news/chatgpt-has-fabricated-legal-cases-can-lawyers-use-ai-ethically/article_8afd8705-831b-5f51-835d-2e8be4b234a3.html.