ChatGPT Lawyers Sanctioned
The judge in the infamous ChatGPT lawyer case (Mata v Avianca) has now issued his judgment. He has sanctioned the ChatGPT lawyers.
Judge P. Kevin Castel issued a 34-page Opinion [PDF]. It concludes that a penalty of $5,000 is jointly and severally imposed on the lawyers in question. In addition, they must send a letter, individually addressed to each judge falsely identified as the author of the fake “Varghese”, “Shaboon”, “Petersen”, “Martinez”, “Durden” and “Miller” opinions, identifying and attaching the court’s Opinion and Order, together with a transcript of the hearing of June 8, 2023 and a copy of the April 25 Affirmation, including the fake “opinion” attributed to the recipient judge.
Good Lawyers (no sanctions) v ChatGPT Lawyers (sanctioned)
The Opinion opens:
In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings. Rule 11, Fed. R. Civ. P. Peter LoDuca, Steven A. Schwartz and the law firm of Levidow, Levidow & Oberman P.C. (the “Levidow Firm”) (collectively, “Respondents”) abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.
Fake Submissions cause Problems
Judge Castel went on to explain the problems with fake submissions:
Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.
It is a statement that could easily be modified to describe the problems of using ChatGPT for legal research:
Many harms flow from the use of ChatGPT in legal research. The party using it wastes time and money in detecting the deceptions that it produces. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of the law firm using ChatGPT. It promotes cynicism about the legal profession.
ChatGPT Lawyer Sanctions show ChatGPT does not Outperform Junior Lawyers
These are all harms that, only a month or two ago, the Legal Techerati appeared oblivious to as they lauded ChatGPT as “mindblowing” and “cataclysmic” at the British Legal Technology Forum 2023.
Richard Susskind, in an interview with Law.com published on 1 March 2023, said:
ChatGPT is the most remarkable system I’ve seen in over 40 years of working on AI.
Susskind went on to say:
I think it’s fair and not insulting to say that ChatGPT probably outperforms a lot of junior lawyers in some tasks already.
Clearly, as we now know following the sanctioning of the ChatGPT lawyers, not when it comes to legal research! That would be insulting to say.
Avoiding ChatGPT Hallucination citations in Court
The sanctions imposed on the ChatGPT lawyers are “limited to what suffices to deter repetition of the conduct or comparable conduct by others similarly situated.”
So it is clear that, understandably, the court does not want to see a repetition of lawyers using ChatGPT and relying on its output in the way that the lawyers in Mata v Avianca did. Some courts have already issued guidance on Generative AI usage.
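The practical safeguard is straightforward: treat any citation produced by ChatGPT as unverified until it has been traced back to an authoritative source, and never ask ChatGPT itself to do the verifying. By way of illustration only, here is a minimal Python sketch that cross-checks a citation against CourtListener, a free public database of US case law. The endpoint, query parameters and response fields below are my assumptions based on CourtListener’s published REST API and may well change, so treat this as the idea rather than a finished tool:

import requests

def citation_found(citation: str) -> bool:
    # Search CourtListener's case law database for the exact citation.
    # NOTE: endpoint and parameters are assumptions based on the
    # published REST API; type="o" restricts results to judicial opinions.
    response = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": f'"{citation}"', "type": "o"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("count", 0) > 0

# One of the six fake citations submitted in Mata v Avianca:
citation = "Varghese v. China Southern Airlines Co., 925 F.3d 1339"
if not citation_found(citation):
    print(citation + ": NOT FOUND - verify by hand before citing")

Even a rough check like this would have flagged “Varghese” as non-existent. And a match is only the start: the lawyer still has to read the actual opinion to confirm that it says what ChatGPT claims it says.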
As I have said before:
If anything, Mr. Schwartz [the main ChatGPT lawyer in Mata v Avianca] has done the legal profession a big favour by bringing this fact [that ChatGPT hallucinates] mainstream. Lawyers will now approach ChatGPT with the health warning in mind that has not been amplified enough to date. Similar to lawyers being more conscious about how to use Zoom after the ‘cat lawyer’ incident.
The sanctions issued by Judge Castel against the ChatGPT lawyers assist in highlighting the dangers of ChatGPT hallucinations and in dampening the early hype created by the Legal Techerati. They will hopefully also bring the deficiencies of ChatGPT for legal research to the attention of litigants in person.
The embarrassment from the widespread news coverage of the case, coupled with the fines, should be enough to deter the lawyers from again falling prey to ChatGPT’s hallucinations, according to legal ethics experts.
Definitely the ChatGPT lawyers; they said so themselves. But I reckon it will also deter any lawyer who has heard of this case, which by now must be most lawyers on the planet.
More ChatGPT Lawyers
And there have been other ChatGPT lawyers. Reuters reported just last week that a Colorado:
attorney cited nonexistent cases generated by OpenAI’s ChatGPT in a court filing in a case over contested car payments.
The young lawyer, Zachariah Crabill, said he unknowingly included the “fictitious” cases in his client’s motion, according to court documents. He apologized in a May 11 affidavit and said ChatGPT accurately answered previous inquiries, so it “never even dawned on me that this technology could be deceptive.”
Crabill, who did not respond to a request for comment, moved to withdraw from the case last week. His firm, Baker Law Group, declined to comment and did not respond to follow-up questions.
Tools or Lawyers?
There remain murmurings that it is not the tools that are at fault but the lawyers who cannot use them properly. However, it is very clear that the shiny new tool in question is currently not fit for purpose when it comes to legal research. And that is a message that the sanctioning of the ChatGPT lawyers has hopefully finally got across to the masses.
Image information: The image was created using DALL-E2, a Generative AI program that creates images based on prompts. My prompt was “A judge hitting a lawyer over the head with a gavel in court.” DALL-E2, like ChatGPT, also hallucinates as can be seen by the three ‘arms’ at play!
Other Posts on Legal Chatbots
For other posts about Legal Chatbots and ChatGPT on this blog see: Chatbots
Reactions on Social Media to ChatGPT Lawyers Sanctioned
On LinkedIn the following comments have been made:-
Andrew King (MD of Lennons):
Great post, Brian.
I agree with your sentiments entirely about the positive impact of the judgment in essentially sending a message to the legal profession about the reliability of relying on ChatGPT.
It’s easy to get carried away with something that is gaining popularity but the judgement should, if nothing else, convince users that one cannot simply plug and play with ChatGPT and place full reliance on it.
Me:
Thanks Andrew. It has been interesting to watch some of those who did get carried away about it in the beginning gradually (or in some cases swiftly) change tack!
-+-+-+-+-+
Clare Fraser (Access to justice | AI product developer | legaltech | NP member Law Soc Scot):
So no misconduct findings then? If not, I reckon the financial sanction could have been worse. More a rap on the knuckles really.
At least they didn’t hit him on the head with a hammer like your pic. Small mercies.
Me:
As Judge Castel observes in his opinion: “In considering the need for specific deterrence, the Court has weighed the significant publicity generated by Respondents’ actions. (See, e.g., Alger Decl. Ex. E.) The Court credits the sincerity of Respondents when they described their embarrassment and remorse. The fake cases were not submitted for any respondent’s financial gain and were not done out of personal animus. Respondents do not have a history of disciplinary violations and there is a low likelihood that they will repeat the actions described herein.”
-+-+-+-+-+
Mark Elliott (CivTech Programme Director, Scottish Government):
Let’s be fair to ChatGPT. Today I questioned a citation it gave me, and the response was…
‘I apologize for the confusion, but as an AI language model, I don’t have real-time access to external sources or the ability to provide specific citation details.’
So we all know this (at least now).
So did the lawyers in question not think to double check? I thought due diligence has ‘check and check again’ at its heart.
‘Theya culpa’, to mangle a phrase.
Ann Packard FRSA Hon (Elected Member, RSA Fellowship Council. Private, public & third sector career. Past active member of IPR, BAIE and for 30 years NUJ member):
Great new phrase!
Me:
I wouldn’t ask ChatGPT to reconfirm a citation it gave you! You were lucky that you got the response that you did.
It is well known for often doubling down on its previous responses and confirming them as correct.
Indeed, the lawyers in the case in question did just that. As I reported in an earlier blog post:
“Mr. Schwartz, apparently believing the magic of ChatGPT, asked it to verify that the cases were real and it told him that they were!”
The double checking should not happen via ChatGPT but via more established and reliable fact checking means.
‘Theya culpa’ for not doing so hence the sanctions handed down to them by the Court.
The hype surrounding ChatGPT has led many (including obviously these lawyers) to think that it is magical, can do anything and will always be correct.
Thankfully for all of us (although unfortunately for Mr. Schwartz, the other lawyer involved and their law firm) this case has highlighted the severe drawbacks of ChatGPT if it is being used as a research tool.
Why use it at all if you are going to have to spend considerable time using other fact checking tools to simply discover that most of what it has told you is gibberish (as Judge Castel called its output)?
Mark Elliott:
To be honest, having played around with it for half a year and found what it’s useful for, and its limits, I was just curious as to what it would reply with.
Me:
Another interesting thing is that the six fake cases are now being reaffirmed as authority for the fake decisions they referred to. If you use Bing Chat and ask it what those cases are authority for, it will tell you (as though they are real) and give you links to back this up. But the links are all to reports of the cases being exposed as fake. So it cannot, even from that information, see that they are fake and tell you so. And so the problem worsens!
-+-+-+-+-+
Sue Ingram (Helping Managers Handle Difficult Conversations in the Workplace – And Avoiding The Need For Them In The First Place):
Public embarrassment and humiliation have been deemed sufficient punishment – one would certainly expect anyone to think twice before hiring such a ‘shortcut’ legal team!
In this world it is not what you ‘know’ that is relevant but where you know it from – and ChatGPT is not the place.
Me:
Indeed Sue. The magic it was supposed to weave when we all first heard about it is waning by the day.
-+-+-+-+-+
Amit Sharma (High Speed Solicitor acting for u/hnw clients who value their time and want 100% attention on their prime deals, IG @bettercallsolve):
Law school 101: provenance is and remains pervasive.
Me:
Indeed Amit!
-+-+-+-+-+
Hannah Beko (Helping law firms, lawyers & consultants increase productivity & profitability, reduce burnout & attrition by providing transformational impactful training lawyers rave about | Training | Workshops | Coaching | Speaking):
👍 It’s great for a bit of advertising or marketing blurb, but for legal research or advice – that’s crazy!
-+-+-+-+-+
Ian Lawton (Growth & Venture Capital / M&A / PE / Corporate / Senior Associate):
Having looked at the case in some detail, the health warning should apply to all tech and not just ChatGPT. It also reminds lawyers not to be lazy. The lawyer in question did not double check the cases; he did not initially read the judgements that were referenced, which were littered with mistakes and were obviously fake; he did not read the disclaimers given by ChatGPT around the accuracy of information; and, probably worse, he was solely looking for cases that supported what he was hoping to argue rather than considering what the settled legal position was. However, really interesting posts and good to spread awareness of the dangers of lazily relying on tech and not double checking your or your colleagues’ work.
Me:
Indeed and agreed Ian. But now at least everyone knows the specific danger of ChatGPT hallucinating. Hopefully, as a result, lawyers will realise that it is not about to replace them anytime soon (as the original hype surrounding it suggested) and that it is best avoided altogether when it comes to doing legal research.
-+-+-+-+-+
Basil Manoussos (Manager @ The Cyber Academy | Forensic Scientist | Expert Witness in Digital Evidence @ Strathclyde Forensics Ltd | BCS Influence Board | Visiting lecturer, Sheffield Hallam Uni):
Got to love cases like this!
-+-+-+-+-+
Elaine Pasini MCIM (ILFM Head of Communications):
Wow – excellent update on the lawyer using ChatGPT case (ie what was Mr Schwartz thinking – blows my mind).
Thanks to Brian Inkster for writing this blog.
-+-+-+-+-+
Nela Gosine LLB (Hons), MBA (Work Package Manager at 4EQM Services):
The judge went lightly in this matter, but I don’t think it will remain so much longer.
-+-+-+-+-+
Danish Riyaz (Proposal Manager at iQuasar, LLC):
AI can never replace HI.