ChatGPT and Legal Gibberish

The judge in the infamous ChatGPT lawyer case (Mata v Avianca) this week suggested that what ChatGPT produced was “legal gibberish”.

Possibly a better and more realistic term than legal hallucinations!

Judge Castel had called a hearing for 8 June with a view to considering whether any sanctions were appropriate for the lawyers who used ChatGPT to produce fake citations and then lodged them with the court without checking them for accuracy.

Truly Mortified by ChatGPT’s Legal Gibberish

In advance of the hearing, lawyers acting on behalf of Steven Schwartz lodged a Memorandum of Law. The Preliminary Statement begins:

In the Order, the Court describes this situation as “unprecedented.” We agree. We can find no case where, as here, a lawyer using a new, highly-touted research tool obtained cases that the research tool itself completely made up. The lawyer, Mr. Schwartz, had no idea this was happening, even when opposing counsel brought their inability to locate the cases to his attention. ChatGPT even assured him the cases were real and could be found on Westlaw and LexisNexis, and continued to provide extended excerpts and favorable quotations. Now that Mr. Schwartz and the Firm know ChatGPT was simply making up cases, they are truly mortified; they had no intention of defrauding the Court, and the mere accusation – repeated in hundreds (if not thousands) of articles and online posts – has irreparably damaged their reputations. They have apologized to the Court in earlier submissions and do so again here.

Schwartz’s lawyers do not think he should be sanctioned by the court. That should happen “only if they acted with subjective bad faith, that is, if they actually knew the case law was false and provided it in order to defraud the Court.” However, in this case Mr. Schwartz “turned to ChatGPT, which he understood to be a highly-touted research tool that utilizes artificial intelligence (AI).”

Poster Children for the Perils of Dabbling with new Technology

They end the Preliminary Statement:

Finally, sanctions would serve no useful purpose. Mr. Schwartz and the Firm have already become the poster children for the perils of dabbling with new technology; their lesson has been learned. The Firm has taken and is taking a series of remedial steps: obtaining better research tools for its lawyers; implementing firm-wide CLEs in technology; imposing policies against using AI tools without checking, and more. At this point, any additional sanctions the Court imposes will be merely, and unnecessarily, punitive.

The Legal Press are to Blame for the ChatGPT Legal Gibberish

In setting out the facts and background, the lawyers state that:

Mr. Schwartz did not fully understand ChatGPT. Mr. Schwartz had never before used ChatGPT for legal research; he knew of the technology from his college-aged children. Mr. Schwartz also remembered reading several articles touting the benefits of artificial intelligence tools and their use in professional settings, including at law firms. Indeed, as discussed below, during the early part of 2023 when Mr. Schwartz conducted his research, such articles were ubiquitous in the legal and public press, with some predicting that AI tools such as ChatGPT could make legal research obsolete.

Productions lodged with the court backing up this statement include articles in the legal press/blogs hyping LawDroid and Harvey.AI. No articles from The Time Blawg were cited 😉

Complete Fiction

He had no idea at the time, however, that in response to a question ChatGPT could generate an answer that, while intended to converse with and inform its user, was complete fiction – and a cleverly disguised fiction to boot.

Examination

Then on 8 June Judge Castel had the opportunity to question Mr. Schwartz on how and why he lodged the fake citations generated by ChatGPT with the court.

Inner City Press live tweeted proceedings.

I reproduce here some of the more interesting interactions between Judge Castel and Mr. Schwartz / Counsel to Mr. Schwartz as live tweeted by Inner City Press:-

ChatGPT produces “Legal Gibberish”

Schwartz: Yes. I used Fast Case. But it did not have Federal cases that I needed to find. I tried Google. I had heard of Chat GPT…
Judge Castel: Alright – what did it produce for you?
Schwartz: I asked it questions
Judge Castel: About the Montreal Convention? Or the position you wanted to take?
Schwartz: Yes, for our position.
Judge Castel: You were not asked for an objective view, but cases to support your position?
Schwartz: I asked it for its analysis
Judge Castel: Did you ask Chat GPT what the law was, or only for a case to support you? It wrote a case for you. Do you cite cases without reading them?
Schwartz: No.
Judge Castel: What caused your departure here?
Schwartz: I thought Chat GPT was a search engine
Judge Castel: Did you look for the Varghese case?
Schwartz: Yes. I couldn’t find it.
Judge Castel: And yet you cited it in your filing.
Schwartz: I had no idea Chat GPT made up cases. I was operating under a misperception.
Judge Castel: Mr. Schwartz, I think you are selling yourself short. You say you verify cases.
Schwartz: I, I, I thought there were cases that could not be found on Google.
Judge Castel: Six cases, none found on Google. This non existent case Varghese, the excerpt you had was inconsistent, even on the first page. Can we agree that’s legal gibberish?
Schwartz: I see that now. I just thought it was excerpts

Playing with Live Ammo

Counsel to Schwartz: There used to be only Lexis, Westlaw and the books. Now there are many, many more. There are 100s of AI vendors that law firms use. Many lawyers have been burned. My client was playing with live ammo, which made up caselaw. He had no idea.

The Public Needs a Stronger Warning

Counsel to Schwartz: He was conducting research; he thought Chat GPT was collecting information from actual sources. But it was not wilful.
Judge Castel: Do you have something new to say?
Counsel to Schwartz: Yes. The public needs a stronger warning.

Guidance from the Courts

The need for more awareness around ChatGPT producing legal gibberish has resulted in some courts issuing guidance for the benefit of lawyers (and litigants in person) on the use of generative artificial intelligence in conducting legal research.

Avizandum

Judge Castel has taken the case to, what we call in Scotland, Avizandum. He will take time to consider his decision and provide it in writing at a later date. I am sure when he does it will not be capable of being called “legal gibberish”!

We will bring you Judge Castel’s decision on The Time Blawg when we see it.

Update – 24 June 2023

We now have Judge Castel’s decision. See: ChatGPT Lawyers Sanctioned

Other Posts on Legal Chatbots

For other posts about Legal Chatbots and ChatGPT on this blog see: Chatbots

Reactions on Social Media to ChatGPT and Legal Gibberish

On LinkedIn the following comments have been made:-

Steven Mather (Solicitor. Remarkablaw®️ Business Law advice. Buying and selling businesses, shares, contracts, people, legal issues, trade marks and just about everything legal thrown at your business I can help with):

Potential strike off here for misleading the court? He should have known to check. The SRA wouldn’t go easy on him.

Me:

Schwartz’s lawyers do not think he should be sanctioned by the court. That should happen “only if they acted with subjective bad faith, that is, if they actually knew the case law was false and provided it in order to defraud the Court.” However, in this case Mr. Schwartz “turned to ChatGPT, which he understood to be a highly-touted research tool that utilizes artificial intelligence (AI).”

Steven Mather:

Perhaps their regulation is different, but here it would be whether he “ought to” have known or checked.

1.4 You do not mislead or attempt to mislead your clients, the court or others, either by your own acts or omissions or allowing or being complicit in the acts or omissions of others (including your client).

2.2 You do not seek to influence the substance of evidence, including generating false evidence or persuading witnesses to change their evidence.

2.4 You only make assertions or put forward statements, representations or submissions to the court or others which are properly arguable.

2.5 You do not waste the court’s time.

2.6 You draw the court’s attention to relevant cases and statutory provisions, or procedural irregularities of which you are aware, and which are likely to have a material effect on the outcome of the proceedings.

SRA would effectively lock him up and throw away the key 😳😁

Me:

I wonder if “being complicit in the acts or omissions of others (including your client)” also includes robots?!

If Mr. Schwartz is sanctioned maybe he will sue Open AI like a radio host is: https://www.theverge.com/2023/6/9/23755057/openai-chatgpt-false-information-defamation-lawsuit

-+-+-+-+-+

Clare Fraser (Access to justice | AI product developer | legaltech | NP member Law Soc Scot):

Am softie, I’d let him off. There’s plenty partners who haven’t checked trainees research. The issue here for me is lack of case law in the public domain.

Me:

I think I would be inclined to let him off this time too (but not if he did it again!). He was daft (incompetent) for not double and triple checking its output. But he had been led to believe ChatGPT was magic. His stupidity has raised awareness of Generative AI’s limitations in a way the legal press and LegalTech conferences have failed to do to date. That will benefit the legal profession and the court process overall in the long term. So maybe he should be applauded?!

-+-+-+-+-+

John Craske (Director of Innovation at International Law Firm CMS):

The hype might be annoying, even misleading or misrepresentative, but surely we are all used to that from every corner of life?

What’s the expression – “a bad workman blames his tools”. Like any other tool, those using ChatGPT et al need to understand it and use it properly.

But I think he’s had his sanction too! Infamy.

Me:

The hype seems particularly bad in #LegalTech. Always has been e.g. AI five + years ago. Not to mention blockchain and the metaverse!

You do need to know what tool to use for the job in hand. It is now fairly clear that tool is not ChatGPT when legal research is the job in hand.

I’ve commented today on the sanction: ChatGPT Lawyers Sanctioned


One Comment

  1. It seems to me that this is a classic example of the absence of common sense. Relying on ChatGPT is pretty much the same as doing a Google search and relying on the first result irrespective of the source. At least with Google you can check the source – currently not possible with ChatGPT.

    However I think that future versions of ChatGPT will become far more accurate and these hallucinations/gibberish will be largely ironed out. But for the foreseeable future anything generated by ChatGPT needs to be checked by a human against genuine sources.
