Litigants in Person and ChatGPT

The Law Society Gazette reported at the beginning of the week that a litigant in person in the UK had used ChatGPT to research the law, resulting in false citations being put before the court.

This comes hot on the heels of a lawyer in the USA doing exactly the same thing. I blogged about that last weekend: The Legal Hallucinatory Detectorist

The Litigant in Person and ChatGPT Story

The litigant in person and ChatGPT story as reported by John Hyde in the Law Society Gazette:-

A litigant in person tried to present fictitious submissions in court based on answers provided by the ChatGPT chatbot, the Gazette has learned.

The civil case, heard in Manchester, involved one represented party and one unrepresented: proceedings ended for the day with the barrister for one side arguing there was no precedent for the case being advanced.

The Gazette understands that the following day, the LiP returned to court with four case citations, each backing the point they were trying to make.

On closer inspection by the barrister, it transpired that one case name had simply been fabricated, while the other three were real case names but with the relevant cited passages being completely different to the judgment in each. For all four citations, the paragraphs quoted were completely fictitious, though appearing completely legitimate.

It is understood that the judge quizzed the litigant in person, who admitted they had asked the AI tool ChatGPT to find cases that could prove their argument.

The chatbot appears then to have delved into a bank of case names and created excerpts purportedly from these cases which responded to the question asked of it. The judge accepted the misleading submissions were inadvertent and did not penalise the litigant.

Not much noise about the Litigant in Person and ChatGPT incident

This UK case did not get the coverage or comment that the USA one did. No doubt that is because this one didn’t involve a lawyer, whereas the other one did.

Incompetence

What is interesting, though, is the noise on social media about the lawyer, Mr. Schwartz, being incompetent for not checking the citations that ChatGPT spat out. It was the lawyer’s fault, not the fault of the Generative AI program. Everyone apparently knows that you don’t use ChatGPT for legal research because it hallucinates (i.e. makes things up that are not true, also known as telling lies). No one told us that when it supposedly passed the Bar Exam 😉

ChatGPT passed the bar exam but don't use it for legal research

Blame the Lawyer not ChatGPT

As a result, the lawyers involved face possible sanctions at a court hearing on 8 June. I agree wholeheartedly that the lawyer’s failure to check the citations produced by ChatGPT was negligent. But I would also question why he was using ChatGPT in the first place and lay the blame for that at the door of the hype created around it in the press (including the legal press). As I said in my last post:

If anything, Mr. Schwartz has done the legal profession a big favour by bringing this fact mainstream. Lawyers will now approach ChatGPT with the health warning in mind that has not been amplified enough to date. Similar to lawyers being more conscious about how to use Zoom after the ‘cat lawyer’ incident.

Blame ChatGPT not the Litigant in Person

In the UK case, by contrast, because it was a litigant in person and not a lawyer who used ChatGPT, there was no penalty: the judge accepted that the misleading submissions were inadvertent.

So where a litigant in person is involved in court proceedings, you can firmly blame ChatGPT for getting it wrong. Maybe that is why this particular case has not had much coverage in the legal tech press 😉

Reactions on Social Media to Litigants in Person and ChatGPT

On LinkedIn the following comments have been made:-

Michael Hinchliffe (Top 1% Legal Copywriter on LinkedIn helping law firms with their LinkedIn accounts & posts, sales copy, legal articles and website content to generate more leads & business.):

How can we address the responsibility of users when utilising AI tools like ChatGPT to avoid similar mishaps?

Me:

We need to educate them about the dangers (hallucinations) of using these particular tools. Hopefully the publicity surrounding these early incidents (although the litigant in person one has so far been downplayed) will help.

Laura Jeffords Greenberg (I help optimize legal counsel and legal team performance | Experienced Legal Executive | ChatGPT User for Legal Work):

I believe the problem is that people do not understand how ChatGPT works or how to use it. It is not a replacement for a search engine like Google. It predicts what is going to come next in the data pattern of the text and creates new content in the process.

Me:

Indeed. It is clearly not suitable for legal research. However, as soon as it appeared on the scene, it was supposedly going to replace paralegals and maybe even lawyers. As recently as 10 May, the British Legal Technology Forum were still hyping it up with little mention of the drawbacks involved: British Legal Technology Forum 2023: The one with a Dragon and Chat GPT. This is perhaps why people do not understand how ChatGPT works or how to use it. We need #LegalTech conferences and publications to actually spell this out in no uncertain terms.

Laura Jeffords Greenberg:

That is an excellent point. We need to educate the legal industry on AI and LLMs like ChatGPT and how we can leverage them to benefit legal work.

-+-+-+-+-+

On Twitter the following comments have been made:-

Jack Shepherd (@jackwshepherd):

Great piece from @TheTimeBlawg today on litigants in person using ChatGPT and facing the well known issues.

-+-+-+-+-+

Danae Shell (@danaeshell):

Ooh thank you! Just did a warning to our users about this issue with some tips!
