Lawyer’s Use Of OpenAI’s ChatGPT Backfires As AI Bot Fabricates Cases

A lawyer used an AI chatbot in a case, leading to fabricated legal citations and possible sanctions.

In a startling turn of events, a lawyer’s use of OpenAI’s ChatGPT for legal assistance has backfired as the AI-powered chatbot fabricated nonexistent cases.

What Happened: In a bizarre development in Mata v. Avianca, a case in which a customer sued the airline over a knee injury caused by a serving cart, the use of ChatGPT took a surprising turn.

Mata’s lawyers objected to Avianca’s attempt to dismiss the case and submitted a brief containing numerous purported court decisions generated by the AI-powered chatbot. 

A Star Alliance-branded Avianca Airbus A330-243 arrives at Los Angeles International Airport on July 30, 2022 in Los Angeles, California. (Photo by AaronP/Bauer-Griffin/GC Images)

At first glance, this may seem like a funny incident, an honest mistake, or both, but what transpired next was nothing short of outlandish.

When pressed to produce the actual cases in question, the plaintiff’s lawyer once again sought assistance from ChatGPT, leading to the AI inventing elaborate details of those made-up cases, which were then captured as screenshots and incorporated into the legal filings. 

Adding to the astonishing sequence, the lawyer went as far as asking ChatGPT to verify the authenticity of one of the cases, to which the AI responded affirmatively, resulting in the inclusion of the AI’s confirmation screenshots in yet another filing. 

In response to this extraordinary situation, the judge has scheduled a hearing for next month to consider imposing sanctions on the lawyers, recognizing the need to address the consequences of this unprecedented circumstance.

It is pertinent to note that last month it was reported that an Indian judge, unable to decide whether bail should be granted to a murder suspect, turned to ChatGPT for assistance.

The Bing app is seen in the Apple App Store in this photo illustration on 30 May, 2023 in Warsaw, Poland. (Photo by Jaap Arriens/NurPhoto via Getty Images)

Why It’s Important: What makes the entire case so bizarre is that it is well known that OpenAI’s ChatGPT and other generative AI models, like Microsoft Corp’s (NASDAQ: MSFT) Bing AI and Alphabet Inc.’s (NASDAQ: GOOG) (NASDAQ: GOOGL) Google Bard, tend to hallucinate, presenting made-up facts with utmost conviction.

Previously, a Reddit user highlighted similar issues, noting that problems arise when people become excessively self-assured about an AI’s abilities (in this case, ChatGPT’s) and disregard the potential dangers of seeking its advice in sensitive fields like medicine or law.

The worst part is that even tech-savvy individuals can fall prey to the well-documented hallucinations that ChatGPT is notorious for. 

© 2023 Zenger Zenger News does not provide investment advice. All rights reserved.

Produced in association with Benzinga

Edited by Alberto Arellano and Sterling Creighton Beard