Judges and ChatGPT

Judges have entered the ChatGPT fray and are giving their views on the use of Generative AI in Court.

This is perhaps not surprising following recent incidents of lawyers and litigants in person producing fake case citations to courts, all generated by the master hallucinator that is ChatGPT.

Judge says a Human Being must check ChatGPT output for accuracy

Dazza Greenwood highlighted the Judge’s ChatGPT intervention on LinkedIn:

A federal judge in the Northern District of Texas has now provided a very constructive certification process for litigants to follow in his court to ensure any material inputs to filings have been competently fact-checked, including verification of citations.

UPDATE: The essence of the certification is “…any language drafted by generative artificial intelligence…will be checked for accuracy, using [authoritative legal sources], by a human being before it is submitted to the Court.”

The rule and certification form are in the linked site below. Here is the relevant text of the rule:

“All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being. These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why. Accordingly, the Court will strike any filing from an attorney who fails to file a certificate on the docket attesting that the attorney has read the Court’s judge-specific requirements and understands that he or she will be held responsible under Rule 11 for the contents of any filing that he or she signs and submits to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing.”

It is interesting that the examples of Generative AI given by the Judge include, along with ChatGPT and Google Bard, Harvey.AI. No one seems to know what Harvey actually does. Perhaps all the more reason for singling it out!

Judges should not target ChatGPT

Carolyn Elefant was not happy. She expressed her anger on LinkedIn:

I completely disagree with this judge’s ridiculous requirement that lawyers certify that they either didn’t use generative AI to draft a brief or that the results were checked by a human.

Why single out Chat GPT or other AI for this treatment?

Does the certification apply if you use Casetext, LawDroid – Generative AI Legal Technology, or a Westlaw product built on GPT? Looks like lawyers are safe only if they haul out the print reporters that the judge referenced in his order.

Let’s say it again.

The issue is not Chat GPT or AI or tech but failure to master legal research 101.

You can cite a case from a print reporter but if you only read the head notes or fail to shepardize it, your cite will still be inaccurate.

It’s also a huge mistake to target a specific technology – particularly one that empowers solo and small law firms and can lower the cost of legal services.

Ten years ago, I wrote about the folly of trying to regulate specific types of social media because they change so quickly that the rules are outdated by the time they’re issued. https://lnkd.in/gjKmqbEs. The same is true here. ChatGPT is already improving and this problem may no longer be an issue six months from now.

This order does nothing that Rule 11 doesn’t already address. But the order isn’t just duplicative but dangerous because it will stymie innovation and scare lawyers from using a powerful tool.

We should not be applauding this judge, but unfortunately, many ethics regulators will probably follow his lead.

I think the Judge’s edict on ChatGPT will include Casetext, LawDroid (which I reviewed before the days of ChatGPT) and any Westlaw product built on GPT if indeed Generative AI is involved in the mix. Legal Tech vendors who do AI washing beware!

Carolyn thinks “the issue is not Chat GPT or AI or tech but failure to master legal research 101”. This is, of course, true to an extent. But it is clear that ChatGPT produces bullshit and Courts should not have their time wasted on dealing with bullshit. It does not seem unreasonable for a known technology that has started to cause chaos in the courts to be singled out and highlighted to lawyers (and hopefully litigants in person too) as something to be very careful of.

Lawyers are supposed to be good at reading and thinking

As Tom Goodwin commented:

“Or that the results are checked by a human”
Perfectly reasonable. I thought lawyers were supposed to be good at reading and thinking

ChatGPT and “Bullshit Jobs”

My reference to bullshit and ChatGPT comes from this article by Madhumita Murgia in the Financial Times: Sci-fi writer Ted Chiang: ‘The machines we have now are not conscious’. In that article Murgia states that:

Chiang’s view is that large language models (or LLMs), the technology underlying chatbots such as ChatGPT and Google’s Bard, are useful mostly for producing filler text that no one necessarily wants to read or write, tasks that anthropologist David Graeber called “bullshit jobs”. AI-generated text is not delightful, but it could perhaps be useful in those certain areas, he concedes.

“But the fact that LLMs are able to do some of that — that’s not exactly a resounding endorsement of their abilities,” he says. “That’s more a statement about how much bullshit we are required to generate and deal with in our daily lives.”

Thanks to Thomas Hirschmann for highlighting that article via LinkedIn.

A lawyer appearing in Court is, I would proffer, not doing a “bullshit job”. Lawyers, therefore, should not be using a program that generates bullshit to do their job. Aron Solomon had a badge for this back in 2019. I would put it on ChatGPT today where legal research is involved:

Legal Tech 100 per cent BS Logo

We need to accept that ChatGPT is in no way a substitute, never mind a replacement, for many of the tasks that a lawyer does in a day. This image, which has been doing the rounds on social media over the past few days, is very apt:

ChatGPT - finish this building

The Practical Reasons why the Judge wants to control ChatGPT use in Court

Aleksandr Tiulkanov gave his thoughts on LinkedIn on Carolyn Elefant’s views:

What Carolyn Elefant is saying regarding the legal research 101 (you must actually read and verify your sources) is not wrong.

And I agree with what Robert Ambrogi is saying regarding the Avianca case (the lawyer’s technological incompetence is key, and it is no defence).

But I also understand the practical reasons which led the US judge critiqued by Carolyn to propose what he did (requiring the lawyers to certify they have verified any generative AI outputs used):

1) Lexis, Westlaw and other similarly constructed legal information systems are rules-based heavily expert-curated domain-specific systems and are thus designed for trustworthy legal information retrieval.

2) Generic generative AI technology systems based on currently available autoregressive large language models (LLMs) and trained to a large extent on webscraped garbage are not designed for that!

Searching something with them by means of open-ended traditional search queries is actually an abuse of technology. They are not intended for such searches. Large language models work best when you provide sufficient context as prompt, such as the draft you want to rewrite in a certain style. See Joanna Bryson’s post on this one. Ask her, Yann LeCun, or other trusted expert people in your network if you doubt what I am saying.

3) As practice has already shown, some lawyers, even or especially with decades of practice, fail to distinguish the specifics of generative AI.

They are used to trusting heavily expert-curated traditional legal information retrieval systems and they extend this trust unreasonably to unintended uses of generative AI systems.

They are fooled by the persuasiveness of outputs and they think you can ask these systems to explain their reasoning and receive a valid answer (you can’t!).

I foresaw this kind of confusion. In January, I provided a commonsense reminder of how to approach generative AI, using ChatGPT as the most popular example.

Since April, I have also been explaining how generative AI is different, and what it is and is not good for, during my Responsible AI webinars.

Unfortunately this is still not common knowledge, which makes it reasonable for the judge to do what he did as a stopgap, until such time as the above becomes common knowledge among lawyers and the mass LLM delusion recedes. The judge has basically made it mandatory to follow the process described in my flowchart:

Is it safe to use ChatGPT for your task? (flowchart - Aleksandr Tiulkanov)

A flowchart by Aleksandr Tiulkanov that I also referenced in my post: The Legal Hallucinatory Detectorist
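For anyone who prefers the decision path written out, here is a minimal sketch of that flowchart’s logic in code. The questions are my own paraphrase of Tiulkanov’s chart rather than its exact wording, and the function and parameter names are purely illustrative:

```python
def is_it_safe_to_use_chatgpt(output_must_be_true: bool,
                              can_verify_accuracy: bool,
                              will_take_full_responsibility: bool) -> str:
    """A rough paraphrase of Aleksandr Tiulkanov's flowchart, not his exact wording."""
    # If it does not matter whether the output is true, ChatGPT is fine to use.
    if not output_must_be_true:
        return "Safe to use ChatGPT"
    # If accuracy matters but you lack the expertise to verify the output, do not use it.
    if not can_verify_accuracy:
        return "Unsafe to use ChatGPT"
    # If you can verify but will not take responsibility for missed inaccuracies, do not use it.
    if not will_take_full_responsibility:
        return "Unsafe to use ChatGPT"
    # Otherwise it is possible to use it, provided every sentence of the output is checked.
    return "Possible to use ChatGPT, but verify every sentence of the output"


# Example: legal briefing, where accuracy clearly matters and the lawyer must verify.
print(is_it_safe_to_use_chatgpt(output_must_be_true=True,
                                can_verify_accuracy=True,
                                will_take_full_responsibility=True))
```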

Other Technology and the Courts

I recall appearing in a case before the Scottish Land Court where opposing Counsel (a junior Advocate) had an iPad to refer to and little else. I was there with ring binders and marked up hard copy cases/notes.

The Advocate took much longer than me to locate information via his iPad and at one point quoted from a document that was not the same as the one before me and the court. The document had been adjusted and the Advocate was referencing an earlier draft he had on his iPad.

The Chairman of the Court referred to the fact that the Advocate was new to this court but that I was well known to the court. He pointed out that I was also very well known for my use of technology but clearly knew its limitations. The implication for the Advocate was that if he appeared in the Scottish Land Court again he should take a leaf out of my book and not rely so heavily on technology that may not actually be fit for purpose.

Disillusionment in the Hype Cycle

Not every lawyer is as tech savvy as Carolyn Elefant or I might be. It does not seem unreasonable to me for Judges to issue guidance on the use of ChatGPT in their courts. That guidance is clearly needed, especially given how, as I have said before, the legal tech press and legal tech conferences have been hyping up ChatGPT technology rather than giving lawyers a dose of reality about it. That, thankfully, is now starting to happen.

With ChatGPT (at least in its use as a legal research tool) we are perhaps now, on the Gartner Hype Cycle, sliding down from the much-hyped peak of inflated expectations towards the trough of disillusionment.

Gartner Hype Cycle Curve

Reactions on Social Media to Judges and ChatGPT

On LinkedIn the following comments have been made:-

Dazza Greenwood (Legal Tech Consultancy and Research. Deep diver into generative AI for law and legal processes. Founder of law.MIT.edu (research) and CIVICS.com (consultancy).):

From law.mit.edu/ai Task Force:

“Recently, a federal judge on the Northern District of Texas promulgated a rule requiring more explicit and specific certification than what Rule 11 provides to confirm clearly “…any language drafted by generative artificial intelligence…will be checked for accuracy, using [authoritative legal sources], by a human being before it is submitted to the Court.” At this point in history, we think it’s appropriate to encourage the experimentation and use of generative AI as part of law practice, but caution is clearly needed given the limits and flaws inherent with current widely deployed implementations. Eventually, we suspect every lawyer will be well aware of the beneficial uses and also the limitations of this technology, but today it is still new. We would like to see an end date attached to technology-specific rules such as the certification mentioned above, but for the present moment, it does appear reasonable and proportional to ensure attorneys practicing before this court are explicitly and specifically aware of and attest to the best practice of human review and approval for contents sourcing from generative AI.”

Me:

Thanks. That is interesting and seems a sensible approach by MIT. But I’m not sure we can put an end date on such rules, as they suggest, just now when the technology is so new and we don’t really know where it is going.

Dazza Greenwood:

I’m not speaking for the task force here, but personally my hope is that the duplicative, more general existing rules, such as in this case Rule 11, will simply be understood to apply w/r/t these specific technologies, and the need for additional technology-specific rules will eventually lessen and disappear as the technologies become better understood in the mainstream.

For instance, looking at the example you provided in your post of the attorney reading an earlier version of a document from an iPad, is it better for the court to make a new rule about using iPads, or simply to apply the existing rules regarding use of documents that are the authoritative version?

To the extent these technologies present a need for new rules, those new needs should be identified narrowly and clearly shown to fall outside of existing rules.

It requires some creativity and presence of mind to understand how to correctly apply existing rules to new circumstances, and we are going to need to be doing that a lot with these new technologies. Luckily, the common law is very well tuned for this exercise.

Me:

Thanks. Agree with all of that.

The main issue as I highlight in my blog post and previous ones I have written is the general lack of knowledge amongst lawyers (and indeed the public at large) of the limitations of LLMs. The legal press and legal conferences have initially hyped the technology and given the impression that it is magical. It clearly isn’t. Until the reality about the actual limitations of the technology becomes mainstream these court rules are probably very necessary simply as a red flag. Agree that through time they can probably be relaxed.

Gerald Garcia (CEO at Business Advocacy):

I agree on the need and urgency of training the legal sector, greetings dear Dazza!

-+-+-+-+-+

Tommy Butler (Digital Tourism, Hotels, Holidays, Over 400 Travel Sites Online):

Maybe one question they should ask is why they can’t be sued…

Me:

I think we are set to see a lot of litigation against the makers of LLMs, probably mostly around copyright and privacy issues rather than the hallucinations. Lawyers need not worry about being put out of work by AI. On the contrary, a new stream of work is being created for them!

Tara TAUBMAN-BASSIRIAN LL.M (🌈Advocating privacy beyond compliance🌏 promoting GDPR, Data Protection and IP as an asset 🌏Voted Privacy Hero of the Year🌈):

Brian Inkster what about the client of that lawyer submitting fake case law?

Me:

If the client suffered loss as a result that would be a professional negligence claim against their lawyer.

-+-+-+-+-+

Monika Wendleby (Partner at Passacon AB / Author / Lecturer / Provides support on questions about GDPR, NIS, open data and more!):

Intressant om AI i juridiskt analysarbete. Lite förvånande att en domare måste ta fram tydliga regler hur AI-texter får användas, borde ju vara en självklarhet. Är det inte lite av kejsarens nya kläder att så blint anamma en ny teknik okritiskt? Vi vet ju inte heller vad som händer om personuppgifter ingår i frågan, har hört att en del pratar om AI i HR. Hoppas att vi snart får AI Act (även om GDPR, säkerhetsskyddslagen och NIS redan nu reglerar en hel del)!

Translated by LinkedIn to English as:

Interesting about AI in legal analysis work. A little surprising that a judge has to develop clear rules on how AI texts may be used, should be a matter of course. Isn’t it a bit of the emperor’s new clothes to so blindly embrace a new technology uncritically? We also do not know what happens if personal data is included in the issue, have heard that some talk about AI in HR. Hope that we soon get the AI Act (although GDPR, the Protective Security Act and NIS already regulate a lot)!
