Mickey Mouse in The Sorcerer’s Apprentice

LLMs in Law: Hype v Magic

I had an exchange about Large Language Models (LLMs) in law on Twitter (now known as X) a couple of days ago with Zach Abramowitz:

GenAI in Law - Hype v Magic

LLMs in Law – The Hype

I do think GenAI, in its current form, is overhyped, and especially so when it comes to using it for the provision of legal services. I can explain, with a little help from Bing Chat with GPT-4, how LLMs work:

A Large Language Model (LLM) is a type of language model characterised by its large size, which is enabled by AI accelerators that can process vast amounts of text data, mostly scraped from the Internet.

The artificial neural networks built for LLMs can contain tens of millions to billions of weights and are trained using self-supervised learning and semi-supervised learning. LLMs work by taking an input text and repeatedly predicting the next token or word. The Transformer architecture has contributed to faster training of LLMs.
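To make “repeatedly predicting the next token” concrete, here is a toy sketch (my own illustration, not anything Bing produced): a bigram model that simply looks up which word most often followed the last one in its training text. Real LLMs use neural networks and attention rather than lookup tables, but the generation loop has the same shape.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def generate(follows, prompt, max_tokens=5):
    """Repeatedly predict the most likely next word and append it."""
    out = prompt.split()
    for _ in range(max_tokens):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])  # greedy next-token choice
    return " ".join(out)

corpus = "the court held that the contract was void and the appeal was dismissed"
model = train_bigram(corpus)
print(generate(model, "the court", max_tokens=4))
```

Because it only ever predicts the statistically likely continuation, a model like this can produce fluent-looking text while having no notion of whether it is true — which is the hallucination problem in miniature.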

Notable examples of LLMs include OpenAI’s GPT models, Google’s PaLM, and Meta’s LLaMa.


It also has to be appreciated that LLMs have their limitations:

LLMs have showcased impressive capabilities, but they do have limitations. One significant limitation is the need for massive computational resources to train and fine-tune these models effectively. The computational demands can strain IT infrastructure and may not be feasible for smaller organizations with limited resources.

Another limitation is the tendency of LLMs to make mistakes when they encounter out-of-domain data, and the possibility of catastrophic forgetting. LLMs can also be biased based on the data they are trained on, which can lead to biased results and perpetuate existing social and cultural biases.

Lack of transparency is another limitation of LLMs, as they are highly complex and difficult to interpret.

Finally, LLMs may have a limited understanding of the context and meaning of the language they process, which can lead to inaccurate or inappropriate responses to certain inputs.

Inability to Think

Most importantly LLMs cannot think:

While LLMs have demonstrated impressive capabilities, they are not capable of thinking in the same way humans do. LLMs are trained using massive datasets, which enables them to recognise, translate, predict, or generate text or other content. They are simply tools designed to process and generate text based on patterns learned from vast amounts of data.

Bullshit Jobs not Legal Jobs

Ted Chiang put it more directly when interviewed by the Financial Times for an article on ‘The machines we have now are not conscious’:

Chiang’s view is that large language models (or LLMs), the technology underlying chatbots such as ChatGPT and Google’s Bard, are useful mostly for producing filler text that no one necessarily wants to read or write, tasks that anthropologist David Graeber called “bullshit jobs”. AI-generated text is not delightful, but it could perhaps be useful in those certain areas, he concedes.

“But the fact that LLMs are able to do some of that — that’s not exactly a resounding endorsement of their abilities,” he says. “That’s more a statement about how much bullshit we are required to generate and deal with in our daily lives.”

Parrots and ChatGPT

For a good article about what LLMs can and cannot do see: ‘You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this’. And whilst on the topic of parrots and ChatGPT:

ChatGPT v Parrot

Appreciating how LLMs actually work, and what their limitations are, does of course help in recognising the hype created around them, especially when considering the use of LLMs in law.

LLMs in Law – The Magic

Since the buzz first started about ChatGPT, and got louder with the launch of GPT-4, there have been overinflated expectations as to the effect of LLMs in law.

In May at the British Legal Technology Forum 2023, which appeared to be almost entirely devoted to LLMs in law, we heard words such as “mind-blowing”, “cataclysmic” and “life changing” used to describe the technology.

Richard Susskind, who chaired that Forum, told Law.com:

Chat-GPT is the most remarkable system I’ve seen in over 40 years of working on AI

He went on to herald the ability of ChatGPT to “outperform” junior lawyers:

I think it’s fair and not insulting to say that ChatGPT probably outperforms a lot of junior lawyers in some tasks already.

Law.com reports:

He pointed out that no junior lawyer could generate a speaker agreement in five seconds, but the generative AI, when asked, could produce an “extremely serviceable” one.

A statement that completely ignores the fact that law firms have style banks to which any junior lawyer would be expected to turn in the first instance rather than creating something from scratch.

Susskind was committing the third deadly sin of the Seven Deadly Sins of Legal Tech Predictions: Performance versus Competence. As Rodney Brooks pointed out:

People hear that some robot or some AI system has performed some task. They then generalize from that performance to a competence that a person performing the same task could be expected to have. And they apply that generalization to the robot or AI system.

Today’s robots and AI systems are incredibly narrow in what they can do. Human-style generalizations do not apply.


That initial buzz got a bit of a knock when the hallucinatory abilities of LLMs in law became apparent with lawyers being sanctioned by a court in the USA for using ChatGPT to produce fake case citations.

However, we seem to be back on the magic track again with Legal Tech commentators on the whole still hopeful that LLMs in law will be “mind-blowing”, “cataclysmic” and “life changing”. It certainly still appeared to be a hot topic at the ILTA conference in Orlando the other week.

Imagining Magic

Imagining magic is, of course, the second deadly sin of the Seven Deadly Sins of Legal Tech Predictions. As Rodney Brooks cautions:

Watch out for arguments about future technology that is magical. Such an argument can never be refuted. It is a faith-based argument, not a scientific argument.

Overestimating in the Short Term

And the first deadly sin of Legal Tech predictions, invariably also committed when speaking about LLMs in law, is overestimating and underestimating. This is a reference to Amara’s Law whereby:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

The effect of LLMs in law is definitely being overestimated in the short term.

It is also clear to me that those imagining magic and overestimating the effect of LLMs in law in the short term, have very little idea about how LLMs might work in an actual real law firm.

Use cases for LLMs in Law

I have been working in law as a practising solicitor for over 30 years and running my own law firm, Inksters, for over 24 years, always with a passion for using technology to provide legal services more effectively and efficiently. At Inksters we appointed the first ‘Legal Process Engineer’ to work in any law firm under that title. We have done a huge amount of work implementing document automation, an area I believe many law firms would be better spending their energy and time on before being distracted by LLMs in law. I will do a blog post on Document Automation v GenAI at a future date.

I am not a luddite but I am a realist.

I am constantly looking at how we can use technology at Inksters to improve the provision of legal services. Unfortunately, so far, I just don’t see it with generative AI.

When I ask for use cases the tumbleweed usually drifts by.

GenAI to transform our entire business

Déjà Vu

5 years ago we had the same issue. AI was the talk of the town at Legal Tech conferences. At the beginning of 2018 I made a prediction about AI in law:

I predict that in 2018 AI will continue to be a de rigueur slot in legal technology conferences. But delegates will continue to leave none the wiser as to what they are actually supposed to do with AI in their own legal practices or how much it might cost them.

Despite this more law firms will be boasting in 2018 that they have adopted AI but the reality will be that their actual adoption will be no better than what they have done to date with document automation.

Fast forward a few weeks and that prediction was coming true at Legal Tech conferences. From the Legal Practice Management Conference 2018 in London I reported:

We never found out who the law firm was, who they were paying or what they were getting for £200,000 on research to find out what they could be using AI for in their firm.

The three delegates who claimed their law firms were using AI gave examples from the floor that sounded like basic process automation to me and not necessarily something that would actually involve or have to involve AI. Automation and AI often appear to be one and the same in the eyes of lawyers. They are not.

Yet again we didn’t have real life, properly explained, use cases of AI in practice in law firms. Use case examples are needed that delegates can assimilate and decide from whether or not such use would be beneficial to them and their law firms. Delegates need to know how much it will cost. Being told they might need to spend £200,000 just to find out what AI they might need will just frighten them.

I have yet to attend a legal technology conference where this has happened. I will let you know when I do. If, in the meantime, you have any real life AI in law firm stories to share please do so in the comments section below. If you think AI in law is over hyped let us know that too.

No real life use cases were forthcoming back then. I think my 2018 prediction still holds good in 2023.

AI and First Year Associates

Josh Kubicki recently placed a bet on LinkedIn that:

Artificial intelligence will fundamentally alter the traditional pathway for first-year associates in the next 5 to 8 years.

He saw a future of LLMs in law, where

One AI-enabled associate can achieve what multiple traditional associates can. Result? The once-bustling corridors of fresh first-year associates begin to look sparse.

I commented:

I guess first-year associates in big law firms (not a world I have ever inhabited) must do very mundane work compared to their peers in small law firms.

Hugh Angus, Solicitor in the Scottish Government Legal Directorate, commented in response to me:

In the US that is a first year trainee, and I think your point still stands – too many firms, especially very large ones, don’t stretch trainees enough as there is a reluctance to spend the time on proper supervision, and give them work that could be better done by a paralegal. I know I did, until I learnt the value of good paralegals.

The discussion moved over to Twitter (no I am not going to call it X) where Josh Kubicki said:

Agree with this but this was a social post not a thesis/case study where those nuances can be clearly explained. Perhaps that’s next. 👀

I do believe what I spelled out will not be confined to BigLaw. It will not be a universal development however.

But where are the use cases?

ChatGPT implementation asap

Looking for Use Cases for LLMs in Law

Interestingly, Josh Kubicki has gone looking on Twitter for use cases for LLMs in law, with this post:

AI and small law practices and SMBs. I am looking for great products that use generative AI for:
– customer/client (c/c) segmenting
– c/c direct outreach
– c/c marketing
– c/c onboarding
– inbound interest automation
– email management (tagging, auto replies)
– Conflicts
– proposal writing and management
– invoice and billing

So far just tumbleweed in response.

Imaginary AI?

But it is not even clear what you can actually do with the products that apparently do use LLMs in law.

Harvey is a good example of this. No one seems to know what it actually does even although 3,500 lawyers at Allen & Overy have apparently tested it. They must all have signed NDAs! The Harvey website gives no clues.

A recent article in The New York Times [paywall], ‘In Reversal Because of A.I., Office Jobs Are Now More at Risk’, gave a little hint at what it might do – looks like the imaginary rabbit is just a search engine!

Harvey - The AI Rabbit

LLMs and Charging by the Hour in Law

Jordan Furlong recently posted on Twitter that in his view:

The single most difficult habit Generative AI will force law firms to break is charging clients by the hour for the on-the-job training of their new lawyers. It’s astonishing law firms have pulled it off for as long as they have.

I responded that I didn’t think that would happen at all:

They will continue to pull it off as they have with every other tech advancement that has increased efficiencies over the years. Don’t think GenAI is any more special (especially in law) than any that have gone before it.

Jordan replied:

The smart money is always on lawyers continuing to practise law and run their businesses the way they like, no matter the circumstance. My money is not that smart.

I came back:

As someone running a law firm very differently from the norm I should perhaps have a different perspective on it. However, either for my own law firm model or the traditional model (in particular #BigLaw model) I just don’t see GenAI having, in the short term, much real impact.

Automating 20% of Legal Work Tasks with LLMs in Law?

Jordan Furlong also posted, a few weeks ago, on Substack an article on ‘Leading your law firm into the Gen AI Era’. In that article he says of LLMs in law:

Using anonymized benchmark data from law firm time entries supplied by LexisNexis CounselLink, Toby Brown and Greg Lambert calculated that about 20% of legal “work tasks” could potentially be automated by Generative AI, resulting in “a revenue reduction of about 23.5% to law firms in the very near future.” Reasoning that associate work is more vulnerable to Gen AI than partner work, Toby and Greg went on to estimate that in a conservative scenario (partner billable hours down 5%, non-partner hours by 20%), revenue on a sample commercial matter would fall by 13% and profit by 11% — but in a more dramatic scenario (partner hours down 20%, non-partner hours by 40%), revenue plummets by 30% and profits by 28%.
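For what it is worth, the blended arithmetic behind those scenario figures can be sketched as follows. The percentage cuts to billable hours are the ones quoted; the assumption that partners account for roughly 47% of fees on the sample matter is mine, chosen for illustration because it roughly reproduces the reported 13% and 30% revenue outcomes.

```python
def revenue_drop(partner_share, partner_cut, nonpartner_cut):
    """Blended revenue reduction when partner and non-partner billable
    hours fall by different percentages, weighted by fee share."""
    return partner_share * partner_cut + (1 - partner_share) * nonpartner_cut

# Hypothetical fee mix: partners generate ~47% of fees on the sample matter.
PARTNER_SHARE = 0.47

conservative = revenue_drop(PARTNER_SHARE, 0.05, 0.20)  # partners -5%, others -20%
dramatic = revenue_drop(PARTNER_SHARE, 0.20, 0.40)      # partners -20%, others -40%
print(f"conservative: {conservative:.1%}, dramatic: {dramatic:.1%}")
```

The point of laying it out is that the headline percentages are entirely driven by the assumed fee mix and the assumed hours reductions; change either input and the "revenue plummets" figure changes with it.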

I pondered on LinkedIn:

As I have said elsewhere did anyone ever do the same calculation on document automation back in the day? Might have ended up with something higher than 20%? How did that go? Jumping over and ignoring document automation in favour of GenAI is not a good solution. #BringBackBoring with document automation with added GenAI, if there is a good use case, might be?

But what is the good use case for LLMs in law?

Jack Shepherd, responding to the same article by Jordan Furlong on LinkedIn, said:

What I don’t get about this is where the analysis is around the *specific* workflows that will be affected by new technology developments. I see a lot of people saying numbers of associates will be reduced, I don’t see anybody talking about the specific tasks generative AI will replace, the redefined process and the roles that are therefore not needed as a result…as we are learning about AI, we also need to make sure we still know about what employees actually do.

Indeed, the magicians don’t seem to know what the employees in an actual real law firm do, or which of their tasks LLMs in law might actually be capable of replacing.

AI will replace ... really means ....

The Speed of 10 Associates?

A few weeks ago Alex Su asked on LinkedIn:

Why does this wave of generative AI seem to be moving through the legal industry more quickly than previous technologies? Part of it—as I wrote in my Substack last weekend—is that generative AI impacts the core of what a lawyer does. But there are other reasons, too. The legal landscape looks very different today than it did just a few years ago.

Josh Bobrowsky said in response:

I think part of the reason is that Generative AI is helpful to more people in the legal field than almost any other tech at a fairly broad range of uses

Most lawyers I speak with are using it and tend to say it is like a fairly smart first year associate. It does good work, but you have to check up on it because it will almost certainly have mistakes

The difference is that what it does well it can do with the speed of 10 associates.

I asked Josh Bobrowsky:

What is it doing that a smart first year associate would normally do? I’ve not seen any use cases clearly explained yet to justify associate replacement.

Josh answered:

Reviewing contracts for issues, looking for typos or errors, helping draft initial documents…

What are some things you think a first year associate can do that AI cannot?

I replied:

Think Word already had typos sorted a while back. Very few initial documents need drafted from scratch. Can’t think of anything a good document automation system can do that AI would be able to add much if anything to.

I’d like to see a demonstration of the contract review systems and the issues it might identify as that is not something I have any experience of.

AI cannot, as far as I am aware, operate online court systems, land registration systems, land tax systems, lender portals, legal aid systems, power of attorney systems, have meetings with and take instructions from clients, appear in court or at tribunals, witness signatures, communicate without prompts and hallucinations, make phone calls to clients or third parties to ascertain information required, understand and apply legal reasoning to particular situations, make a cup of tea, be human, etc.

Josh came back with:

good points, I think i was referring more to what can be done from a reading and writing perspective.

I concluded:

But this immediately raises the actual limited role of GenAI when it is suggested that they can replace Associates. They can’t. Associates do a lot more than read and write. And often when reading and writing it is within online portals or other realms where AI has no fit at present.

So what are the use cases for LLMs in law? Answers in the comments section please! A magic wand as a prize for any actual use cases produced.

Image credits: The Sorcerer’s Apprentice – Fantasia © Disney; Parrot and ChatGPT © r/ProgrammerHumor; GenAI/ChatGPT Cartoons © Tom Fishburne – Marketoonist; Harvey (1950 film) © Universal Pictures

Reactions on LLMs in Law: Hype v Magic

On LinkedIn the following comments have been made:-

Zach Abramowitz (Killer Whale Strategies Founder, Legal Startup Investor):

I am certainly not saying you don’t understand how LLMs work, but your post in the link doesn’t reflect a detailed in the weeds understanding. It’s exactly the kind of explanation I’m talking about in the tweet. It fails to address vector embedding, dimensions, weights (these are basics) and the concept of the attention mechanism and why that’s so critical.
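For readers wondering what the attention mechanism Zach mentions actually is, here is a minimal sketch (my own illustration) of scaled dot-product attention, the core operation of the Transformer: each value vector is weighted by how closely its key matches the query. The vectors below are made up purely for demonstration.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d = len(query)
    # Similarity of the query to each key, scaled by sqrt(dimension)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

q = [1.0, 0.0]                      # query closely matches the first key
ks = [[10.0, 0.0], [0.0, 10.0]]     # keys
vs = [[1.0, 0.0], [0.0, 1.0]]       # values
print(attention(q, ks, vs))
```

The output is dominated by the first value vector because the query aligns with the first key — which is all "attention" means: a learned, soft lookup over the input.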


As I say in the post, ChatGPT-4 produced the details of how it works! I guessed it would know better than me. Now if you are saying it doesn’t, that surely demonstrates once more how poor it actually is? But to get to the nub of the issue: if you can name some actual use cases for GenAI in a real law firm I’ll send you a magic wand.

Zach Abramowitz:

I’ve gotten much more detailed descriptions. I suppose you need to know the right prompts


I’ve been honing my skills as a Legal Hallucinatory Detectorist rather than as a Legal Prompt Engineer 😉 https://thetimeblawg.com/2023/05/29/the-legal-hallucinatory-detectorist/

Julie Schroeder (Generalist GC/CLO, former trial lawyer, AI/ML thought leader. Over eighteen years of experience as a tech savvy CLO, GC, Co-Founder, and Privacy Officer for SAAS IT in regulated industries with big data sets and AI/ ML.):

Zach Abramowitz – great points- but you are speaking in data science specifics like “vector embedding” and it may be hard for people with other specialities to understand and apply your points (which I agree with.)

Zach Abramowitz:

Julie Schroeder – right so the key isn’t to have a simplistic understanding of a new technology and then make absurd predictions about its non-efficacy. The goal is to actually work hard to understand how the technology works and how it’s different from what came before. My experience is those that take this step come out the other side going “wow, this is going to impact everything.”

Julie Schroeder:

Zach Abramowitz – well, at least that is my strategy! I am a bit of an odd duck in that I have 20 years of experience which included co founding a NLP company in 2016 with a human in the loop and sentiment analysis. So I have had to learn how to speak different languages- to continue to beat this dead horse 😉

I would love to talk more offline. DM me if that also works for you? Thanks!


Zach Abramowitz – Most lawyers (and indeed people in general) don’t operate like that. They want technology that is intuitive and easy to operate and allows them to do their job efficiently and effectively. They just need to know it works and will give them the desired result.

How it works is irrelevant. They will need training on how to use it and if that becomes over complicated (e.g. requiring a degree in prompt engineering) then you will not get very far. Indeed you will often struggle to get them to properly use a reasonably straightforward case management system.

Having to explain vector embedding, dimensions, weights and the concept of the attention mechanism is going to lose their attention very fast!

As Jack Shepherd commented on Twitter today: “I was speaking to a friend today who said they hates speaking to tech vendors because they literally do not understand the words they say. Because they don’t speak the language of end users. You’ve got to make this stuff relevant to them.”

So unless you can make selling ChatGPT to lawyers a whole lot easier to explain and understand (with use cases of course) you are probably flogging a dead horse.

Zach Abramowitz:

Brian Inkster – you find ChatGPT not intuitive and easy to operate?


If you need to know all about vector embedding, dimensions, weights and the concept of the attention mechanism (as you say you do) before using it then no – as I am clearly using it wrongly given your dissatisfaction with the answer it gave to my prompt asking it how it worked.

Zach Abramowitz:

Brian Inkster – You don’t need to know that before using it. But if you’re going to develop a strategy, make bold predictions and begin building solutions using LLMs then I certainly recommend you know how the technology works. If you just want to get going and use ChatGPT, Bard, Claude or any of the others you could probably just read a 1 pager best practices and you’re mostly set to go.


Gary Thompson FCII Chartered Insurance Practitioner (Insurance Professional providing Expert Witness Opinion | Insurance Claims Audits | Solicitors CPD Training and Risk Management Advice | Insurance Dispute Resolution Solutions):

A short reflection. A few days ago I was at home working on my laptop. My wife asked what I was doing and I thought I was being clever by answering that I was using AI to organise my work flow for the day. In response she took my laptop and said go mow the lawn!


Can’t you get robots to do that for you these days 😅

Alan Harrison (Intellectual Property Attorney | Mechanical and Nuclear Engineer):

Great wives think alike.

Alex Smith (Global Search & AI Product Lead (Senior Director) at iManage):

Brian Inkster – surely robot lawn mowing is a solved problem. Using driverless car AI.


Or using robot vacuum technology? Oh… wait… maybe not…

iRobot shop with cleaner

Alex Smith:

That photo is one of the greatest of all time.

Antti Innanen:

iRobot was founded by the same guy that did the Seven Deadly Sins article, Rodney Brooks. This was news to me. Maybe there is some irony there too…


Clare Fraser (Turning Open Domain Data into Accessible Rights Guidance | Bridging Legal Tech & Gov Tech):

I have a use case in law but not a law firm, does that count for half a wand? Also re thinking, LLMs aren’t designed to think, they predict.


It would at least be a start! So yes half a wand or a very small one could be yours!

If they can predict but not think that explains why their output is as bad as most legal futurists 🤣

Clare Fraser:

might a prediction be of more value than thought? Do you ever know the answer to a legal problem and work backwards to articulate the rationale ie why it is right? To me that’s prediction.


Now I don’t think you are going to be able to get LLMs to go through that process (not yet anyway)!

Clare Fraser:

How do you know when something is right?


When the Supreme Court rules on it 😉

Clare Fraser:

Which helps you predict the right answer 😉


Only if they have ruled on the question.


Seán Fobbe (Human Rights Lawyer | Legal Data Scientist | Open Data Advocate):

Good roundup. I discussed a couple of GenAI use cases in a blog post recently: https://seanfobbe.com/posts/2023-07-27_some-thoughts-on-large-language-models-in-the-legal-domain/ However, I’m as skeptical as you are that LLMs are the right fit for these problems.


Thanks Seán. Nice article and I am with you on your conclusion: “LLMs and transformer technology are here to stay, although I have serious doubts as to whether they will improve the practice or business of law as much as their proponents claim. I do see some modest opportunities, but nowhere near the potential that evangelists preach. Mostly, I think, people will be disappointed.”


Kyle Bahr (Legal Ops & Tech Innovator | Fortune 200 Commercial Disputes & Contracts Attorney):

Hi Brian Inkster, in-house litigator here.

I recently used Claude 2 and Chat GPT Code Interpreter to prep for oral argument on a motion for summary judgement in U.S. federal court.

Specifically, the AI analyzed case briefs and record evidence, and generated possible arguments, counterarguments, and questions from the bench (with suggested best answers). I did this in several separate threads, and the most valuable one was when the AI judge ruled against my side, and I got to dive into its reasoning.

As a junior associate 15 years ago, I was given the same types of papers and told to spend billable hours coming up with all sorts of questions and arguments, so the partners would be “prepared for anything.” But what value did those hours really provide to the client?

I found the AI — especially Claude 2 (with its 100k token context window) — to be an efficient and effective substitute for this junior associate work. It helped me and my outside counsel feel more confident that we weren’t missing something. Claude 2 also accurately predicted several of the judge’s questions.

This is one of the ways I use AI daily in my in-house practice.


Thanks Kyle Bahr. You win the first magic wand. DM your postal address to me please.

Kyle Bahr:

Harry Potter's address

Clare Fraser (Turning Open Domain Data into Accessible Rights Guidance | Bridging Legal Tech & Gov Tech):

*coughs* this is technically second magic wand 😉


First full one – you are only on a half 😅


Michael Robbins (Builder of human+digital learning ecosystems | RSA Fellow):

Law-specific and firm-based Community Language Models #CLMs will rectify the shortcomings of LLMs for legal use.

These will be business intelligence neural networks that are trained on content that you control aggregated into a Community Knowledge Graph #CKG.

It will require ongoing RLHF (Reinforcement Learning from Human Feedback) and HITL (Humans In The Loop).

Existing LLMs will also be used to help train and improve these. (Working through that)

This is a longer-term journey. AI won’t replace lawyers but lawyers using AI will replace those who don’t.


Thanks Michael. As I have seen debated elsewhere this is going to involve law firms getting their data in order. For some (most?) that in itself is going to be a huge task.

Lawyers who use document automation don’t appear to have replaced those that don’t. I expect the same to be true of AI.

Katherine Thomas (Growth and strategy for legal businesses. Adviser, consultant, speaker and lecturer. Owner – Free Range Lawyers):

Brian Inkster – I’m not sure they need to get their data in to use Gen AI. One of the benefits is it works with unstructured data. So I’m wondering where this leaves knowledge curation.


Katherine Thomas – According to others, better placed than me to comment on it, structure is important. At the British Legal Technology Forum 2023 Jack Shepherd stressed that metadata is so important. With it you end up with a usable database. But you do need to curate and contextualise documents to achieve that.

Jack Shepherd (Specialist in legal tech and knowledge management):

Brian Inkster it’s fundamental. Take a use case of Q&A on a legal issue. If you are using RAG for this, you want it pointing at trusted good quality data. If you are doing anything with contracts you want to be pointing at good quality examples and the right type of contract (if you can’t use a template). Both rely on metadata. Think also about retrieval based methods for obtaining knowledge rather than generative – these also rely on metadata.


Thanks Jack. As an aside I couldn’t easily find a template for something the other day. I asked Bing ChatGPT-4. It gave me a link to the template I was looking for but it was behind a Thomson Reuters’ Practical Law paywall! I then asked a colleague who e-mailed me what I was needing – they had done one before. I think this demonstrates that: (a) ChatGPT is not a substitute for style banks; (b) to find a style that has not been indexed in your own style bank you would need structured data within your own document management system to enable ChatGPT, applied to that database, to find it easily (although would a search function applied to that not be more useful, as you are simply attempting to locate an existing document rather than create a new one?); and (c) phone a friend might be more effective than ChatGPT.
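The point about metadata can be made concrete with a toy sketch (my own illustration; a real RAG pipeline would rank with vector embeddings rather than keyword overlap, but the metadata-filtering point is the same): filter the document store by its metadata first, then rank what remains against the query.

```python
def retrieve(docs, query, doc_type, top_k=1):
    """Return the best-matching documents of the requested type."""
    query_terms = set(query.lower().split())
    # Metadata filter: only consider documents of the right type
    candidates = [d for d in docs if d["type"] == doc_type]
    # Rank the survivors by crude keyword overlap with the query
    scored = sorted(
        candidates,
        key=lambda d: len(query_terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

# Hypothetical document store with type metadata attached
docs = [
    {"id": 1, "type": "template", "text": "share purchase agreement template"},
    {"id": 2, "type": "advice note", "text": "note on share purchase warranties"},
    {"id": 3, "type": "template", "text": "lease assignment template"},
]

best = retrieve(docs, "share purchase agreement", doc_type="template")
print(best[0]["id"])
```

Without the `type` metadata, the advice note would compete with the templates on text overlap alone — which is exactly why curation and contextualisation of documents matter before any retrieval, generative or otherwise, is layered on top.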


Sebastian Soltau (Bridging the gap between legal and the business in LatAm | Legal Innovation | Legal Ops | Legal Tech):

Will read this tomorrow, but I have one immediate reaction: should we be talking about bull**** jobs or bull**** tasks? I’m pretty sure legal jobs have tons of the latter.

Also, here’s a long and detailed explanation of how LLMs work. Helped me a lot to understand this technology’s potential: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/


Thanks Sebastian. I now know what I wondered – that Voodoo is at play when you use ChatGPT!

I look forward to hearing your further thoughts tomorrow.

In the meantime on bull**** tasks there will always be those in most lines of work and law will not be an exception. But when you read my post tomorrow do see my exchange of views with Josh Bobrowsky under the sub-heading ‘The Speed of 10 Associates?’. There I struggle to find even bull**** tasks in a law office that an LLM could usefully handle. I am of course open to be corrected and my call for use cases in exchange for a magic wand is there for that reason.


Antti Innanen:

Our small company frequently employs GenAI for various purposes, including brainstorming, creative writing, and correcting errors.

Furthermore, we use Midjourney for visualisations, Eleven Labs for audio and Synthesia for presentations. I think that we would make a great use case!


Thanks Antti Innanen. But is that really doing the legal tasks of or replacing a junior associate?

Antti Innanen:

You would have to define ”legal tasks” and perhaps also elaborate why it would be more valuable if it would do ”legal tasks” than other tasks.

We are happy to talk more about how we use AI in our daily work. ”Legal” for me is not that important.

And why is it always the junior associate that gets replaced? Lol it is not the senior partners that are going to ”replace” any juniors by using AI.


Maybe anything that is not marketing? That is where the use cases mostly seem to be? The legal futurists seem to like replacement from the bottom of the ladder. I guess they see that as an easier thing to predict. But, as I say in my blog post, I’m not sure they know the tasks that those associates actually do.

Antti Innanen:

Associates should be alright. Hopefully, big law firms and their partners will undergo some heavy changes.

Happy to tell more in the DMs.


Please do so.


Debbie Reynolds (The Data Diva | Global Data Privacy & Protection Expert Strategist | Technologist | Keynote Speaker | Author | Educator | Cyber | Advisor | Futurist | Internet of Things (IoT) | #1 Data Privacy Podcast Host | Polymath):

Brian Inkster, thank you for your excellent article and your uncommon common sense about Generative AI uses in law.

When clients find ways to embrace new technologies, it puts pressure on law firms to be “in the know” about new technologies so they try to find small ways to participate in the hype.

The buying frenzy is upon us, but soon, these firms will find out they are not equipped to implement these innovations because they may not fit or properly augment their existing low-tech processes.

Generative AI is best for low-stakes use cases and will not be a way to replace people because it cannot do what a human can do.


Thanks Debbie. I am in agreement with you.

Clare Fraser (Turning Open Domain Data into Accessible Rights Guidance | Bridging Legal Tech & Gov Tech):

I agree with you as well Debbie Reynolds re not doing what humans can do but I don’t see that as the aim, the aims for me are stuff like scalability, efficiency, consistency, cost reduction and availability.

Debbie Reynolds:

Clare Fraser – thank you. Having advised lawyers and law firms on digital transformation for longer than I would like to mention, some will go too far in imagining that this technology can do more than it can do. No technology is an easy 🔵 button but this will not stop the spin and efforts to make it seem so.


Agne Zasinaite (Intellectual Property | Sports, Media & Entertainment Law | Commercial | Dentons, Helix):

I have used it as a more efficient Google – if I needed to understand the industry area for a contract I’m working on, it is helpful. I could find the same information on Google most likely but it would take longer. Next attempt will be to analyse a contract received from a third party, but I think this might need extra training before it’s fully correct.


I think we are all probably using it from time to time as that. Although often in search I find that many short sharp responses that you can then delve into are better than one ChatGPT essay. Bing Chat GPT-4 also effectively limits the sources of information cited in its answers in a way that wouldn’t happen if you were using a traditional Google search. Both have their pros and cons and I think it is a case of being aware why you are using one over the other and which is best for the purpose in hand.


Denis Potemkin (Founder of Majoto | contracts that people understand and love | faster deals more trust through design and automation):

Always good to see someone calling bullshit on hype, Brian. Would love to see some of those points expanded on and debated! I’m pretty sure there are exciting things that people are working on, which will have profound long term impact. But I suspect genAI will only be a cog in a bigger machine.

As for current use cases: there must be some use cases surely?! People talk about them on LinkedIn. Including in house lawyers. Let’s hear about it !


Thanks Denis. In the long term I am sure there will be useful developments that we can trace back to ChatGPT. But as I say, let’s not overestimate its effects in the short term.

You will note that I have resorted to giving away prizes of magic wands to get these use cases. I truly want to hear about them.

Denis Potemkin:

Maybe you should offer a 50 pound Amazon voucher?


What! You think no one wants a magic wand?!

Denis Potemkin:

Amazon IS a magic wand. People might believe you more LOLs.


The magic wands will probably be ordered from Amazon in any event!

Julie Schroeder (Generalist GC/CLO, former trial lawyer, AI/ML thought leader. Over eighteen years of experience as a tech savvy CLO, GC, Co-Founder, and Privacy Officer for SAAS IT in regulated industries with big data sets and AI/ ML.):

Brian Inkster do loop😉


Erik Williams (🚀 #Philanthropreneur | #Technologist | #TechnologyCommercialization | #Startup and Growth-Stage Advisor | Because getting to market isn’t enough to dominate it 🚀):

L. Thorne McCarty, it sounds like you and Brian Inkster should get together and have a chat.

L. Thorne McCarty (Professor of Computer Science and Law, Emeritus, at Rutgers University):

I am surprised that no one in the comments has mentioned CoCounsel by Casetext. I am an academic (now an Emeritus Professor), not a practicing lawyer, but I have been experimenting with their Research Memorandum skill. It does a very good job surfacing the arguments, pro and con, on any arbitrary legal proposition in any jurisdiction, with supporting cases and statutes.


I think you win the second magic wand! DM me your postal address please 🙂

L. Thorne McCarty:

The magic wand should really go to Pablo Arredondo and his colleagues at Casetext. They built the system!


Ah, but I was simply looking for people to alert me to use cases and you, not they, did that. So the offer of a magic wand still stands if you want to DM me your address 🙂


Josh Kubicki (Business Designer | Entrepreneur | Professor):

I couldn’t reply in full here, so I did so on Twitter. Check it out.

But I like the challenge you put out there. I do respect what you have built and I respect your POV. Perhaps consider mine.

As Tweeted:

Thanks for the shoutouts/callouts @TheTimeBlawg. Good vigorous debate here.

A few reactions and thoughts to further this:

1. Is “legal” work somehow extraordinary? Or perhaps, is all legal work extraordinary? I often see this mindset that devalues work that lawyers don’t do themselves. It is a slippery slope as we often define “legal work” in a manner that is self-referential – “It is the work lawyers do.” That in and of itself does not make it extraordinary. The category of bullsh_t, as you say, surprised me a bit. You are successful and I admire that which is why I thought you would know the value of this type of so-called non-legal work in running a legal services business. And if it can be done better, that is likely a good thing. Help me understand this better as I might be missing the distinction.

2. I’ve been fortunate to dedicate time writing on the practical use of generative AI in the legal field. My consistent writing and personal usage has allowed me to discern fact from fiction and gain insights from peers. Here are some real-world applications. In considering them, I cannot tell you if they are bullsh_t work or legal work (most likely both).

a. Let’s start with your referenced template banks. You are lucky to have them or have built them. Many small firms and solos do not have them for every use case. So, they are taking what they have and then using GenAI to fit them for purpose. They are doing this in order to save time, catch every nuance, as a second set of eyes, and to actually do the typing of the words. Yes, they then review and edit as appropriate but what initially took say 30 minutes now takes 20. Maybe 15. Not a huge gain but it can add up.

b. An in-house counsel used these tools recently to do the following. They constantly heard that the business units disliked how the legal function explained things to them. Either it came off as legal jargon or patriarchal explaining. This was not intentional but was the result of the two sides simply not being more aware of what everyone did and needed. Now this could have been addressed in meetings, workshops, or training. But that takes time and in-house lawyers don’t have that time to give. So instead GenAI was used to explain the business to the lawyers in a way they could more easily understand. Then they took that and applied their typical legal-centric way of communicating and produced content that blended both legal and business-friendly language in a manner that lawyers and business people found satisfying and helpful. Is this bullsh_t? I don’t think it is.

c. Maybe you are looking for something purely “lawyerly” to satisfy your demand. How about writing consent notices? Or drafting up a preliminary response with a draft rider to a proposed MSA? Sure, one can argue that any good lawyer can do these things easily and quickly, but the world is not one where all lawyers are good and well-practiced within all areas of law.

3. Ok so even if this is all bullsh_t in the eyes of some folks, it is nevertheless work. Work that either gets ignored or is done poorly. This creates unhealthy businesses from which lawyers and employees suffer, directly and indirectly. Stunted growth. Artificially shrunk profits. Overwork. Redundant task blindness. And so on. These tools do excel at some of this work and that should be cheered and applauded because it can help lawyers operate their businesses better. That should be a goal we can all get behind.

4. My tweet looking for solutions was not because there are none that address those. I am looking for specific tools for each of those categories in the legal markets. Some already exist outside legal. The crickets you hear from my post are more about its lack of reach (I am not an influencer) than lack of solutions.

5. My prediction on the future of associates is firm. I was a young lawyer in-house when I left law school and I reviewed the bills and had many contacts and friends that were 1st and 2nd year associates. It was no secret that their work wasn’t necessarily extraordinary or “legal” just because there were lawyers doing it. It was work that had to get done in a legal context. Business people review more contracts in a day than lawyers do. Is that legal work, bullsh_t work, or just work?


Thanks Josh. How times have changed that you can do a longer reply on Twitter than you can on LinkedIn! I’ll respond in full later today.

Isabel Parker (Partner, Deloitte Legal):

Josh Kubicki – a good and thoughtful response. Thank you for taking the time. I have found your work on GenAI very useful and (following your advice) now use it daily. I am learning a lot and it is helping my colleagues in our delivery and our clients too.


Isabel Parker – Can you explain how you are using it in your delivery to your clients? You could be in the running for a magic wand!

Isabel Parker:

Brian Inkster – not traditional legal delivery (I am non practicing now) so might fall within your bs category! Good use cases we have found are in regulatory gap analysis, contract simplification, provision extraction. I don’t think it’s magic. I do think it’s transformative for businesses, including legal services.


Josh Kubicki – Now replying in full here rather than on Twitter as unlike you I don’t have a blue tick that allows me long tweets. Isn’t technology wonderful!

However, I will still have to split my response up here but not so much as I would have had to do on Twitter!

Thanks for spending the time on giving some comprehensive thoughts in response to my blog post.

My further thoughts in response:

1. Some “legal” work is I think quite extraordinary. Other “legal” work can be quite mundane. Often lawyers are doing mundane work that they could and should be delegating elsewhere (including potentially by using tech) to free up their time for the extraordinary work. At Inksters we use existing (not GenAI) tech to do that as much as we possibly can. See my exchange of views with Josh Bobrowsky under the sub-heading ‘The Speed of 10 Associates?’. There I struggle to find even bull**** work in a law office that an LLM could usefully handle. I am of course open to being corrected and my call for use cases in exchange for a magic wand is there for that reason.

2a. I had assumed most law firms would have templates to hand, even small firms and solos. You would usually have someone you know who could supply one if needed. But if that is something lawyers are finding a use for then all good and well for them. My view would remain that GenAI (that might generate a different template each time – and not always an accurate one for what is required) would be a poor substitute for using a properly configured document automation system.

2b. I think all lawyers should be trying harder to be less legalistic in their communications with their clients for obvious reasons without having to resort to GenAI. I know I have certainly done so over time. However, if some lawyers are feeling the benefit of a tech layer to do that for them then all good and well for them, if a little bizarre.

2c. Again I would have thought lawyers would have a style handy for such things (although the specifics you mention I am not familiar with).

3. This is what I have successfully used document automation for at Inksters. I think most law firms could gain more from implementing that than from being distracted by GenAI. There might well be a use case for mixing the two for certain tasks.

4. There is tech that does most if not all of this but not necessarily with GenAI. A GenAI layer may be unnecessary. It may in some cases assist. We are probably at too early a stage to see the integration of GenAI into existing tools that currently perform tasks without it.

5. See 1. above.

Kyle Bahr (Legal Ops & Tech Innovator | Fortune 200 Commercial Disputes & Contracts Attorney):

Hi Josh, for 2a in your Tweet, the template bank example, maybe lawyers could also have Gen AI create templates for them by feeding many examples of the same type of document into the tool. It could analyze, synthesize, and generate a template with blanks or brackets saying what info should be filled in.

(I let some judges in my circle know about your free-to-the-judiciary AI course, hopefully they’ll enroll!)
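Kyle’s suggestion need not even involve GenAI in the simplest case: comparing two example documents of the same type and bracketing whatever differs already yields a rough template with blanks. A minimal sketch of that idea, with made-up example text (illustrative only, not a production tool):

```python
from difflib import SequenceMatcher

def make_template(doc_a: str, doc_b: str, placeholder: str = "[FILL IN]") -> str:
    """Build a rough template from two example documents of the same type:
    keep the text the documents share, and replace the parts that differ
    with a placeholder for the drafter to complete."""
    a_words, b_words = doc_a.split(), doc_b.split()
    matcher = SequenceMatcher(None, a_words, b_words)
    parts = []
    for op, a0, a1, b0, b1 in matcher.get_opcodes():
        if op == "equal":
            parts.extend(a_words[a0:a1])
        else:
            # The documents diverge here: likely a party name, date or amount.
            parts.append(placeholder)
    return " ".join(parts)

lease_1 = "This lease is between Acme Ltd and John Smith for 12 months ."
lease_2 = "This lease is between Beta LLP and Jane Doe for 6 months ."
print(make_template(lease_1, lease_2))
# This lease is between [FILL IN] and [FILL IN] for [FILL IN] months .
```

A GenAI tool doing the same job from many examples would, in effect, be a fuzzier version of this: spot the shared boilerplate, flag the variable parts.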

Josh Kubicki:

Brian Inkster – I think we win (or lose) for fragmenting this discussion over multi-response cross-platform threads!


Scott Simmons (Turning Lawyers Into Rainmakers | Business Development Coach & Trainer for Lawyers I Legal BD Consultant):

The first one that sprung to mind was CoCounsel by Casetext https://casetext.com – then I saw that someone else has already mentioned it.

I haven’t seen it in action (your post reminded me to get on one of their demos), but I’ve read a bit about it and it sounds intriguing.

I love your article – it shows how much time you’ve spent learning about LLMs.

Here’s my concern: you’re asking about current use cases in law and the short term impact. Those are very limited, as the responses have shown (though Kyle Bahr’s use case is superb! 👏).

My fear is that this conversation will give too many lawyers a lot of comfort to continue going on without learning what LLMs are, what they can do and what they will likely do in the future.

While there aren’t many use cases now, there will be – and there will be in the relatively near future, it’s just how progress works.

Yes, we have to put it in perspective; but we also have to work on the basis that there’s a very good chance that AI will do the drafting, research, risk assessment, onboarding, data collation, and much more in the future.

I know of an AI tool that is used to read the tone of a prospect’s conversation to assess the likelihood of them buying a product!

Possibilities are endless.


Yes, they, not you, get a magic wand!

The fact that current use cases in law are very limited begs the question why?

Surely it is because LLMs are not currently up to the tasks that need to be done in an actual law firm?

If that is currently the case then lawyers don’t need to waste a lot of time learning about something now that is going to be of little use to them in real life. Like they were encouraged to learn about blockchain, NFTs, Clubhouse and the Metaverse 😉

If in the future LLMs can do more wonderful things in law than they can now then that will be the time for lawyers to concern themselves about them.

However, many probably won’t regardless. That is what we have unfortunately seen in law with document automation. That is proven technology that has been around for many years that can create huge efficiencies for lawyers. Yet adoption has been very poor. Catherine Bamford is speaking about that at LegalGeek next month.

AI (probably not LLMs) will no doubt do much more for us in the future. Much of what you list is already done by #LegalTech not necessarily AI. Many lawyers would do well dusting down the tech they already have and seeing what they can do with it that they are not currently doing.

Scott Simmons:

You’re right, I don’t deserve the magic wand! 😂

I agree with everything you’ve written (except the bit about blockchain etc. – if you’re a lawyer that knows about them, you’re in super niches that are worth A LOT of fee income!). There’s legaltech out there doing the things I’ve listed and lawyers should be using it to improve the customer experience.

That said, AI will take all those things to a different level. To wait until AI can do it before you learn about it will be too late. Look at what’s going on with the new Fixed Recoverable Costs regime coming in – lawyers are waiting and waiting, when they need to be preparing for it. Waiting for it to come in will be too late to make it work properly.


Being in super niches where you are litigating the topic is very different from using the tech in law. If you look back on my past musings on that topic I have always said that lawyers will do very well out of simply just doing their job to unravel the mess that some of this tech causes. We are seeing the same with lawsuits arising from GenAI at the moment.

As I said in my blog post we were told AI would take things to a new level 5 years ago. We are still waiting. This latest buzz gives some a little hope but my own gut feeling is that we still have a long way to go.

If you read my ‘Seven Deadly Sins of Legal Tech Predictions’ article then do take particular note of sin numbers 5 and 7 in this regard.

As I say, when it comes to tech I don’t think lawyers should be waiting. They should be implementing proven tech that will do wonders for them now if implemented properly and effectively. That is what I have been doing since I formed Inksters over 24 years ago. However, I know that remains unusual in legal practice.

Kyle Bahr (Legal Ops & Tech Innovator | Fortune 200 Commercial Disputes & Contracts Attorney):

Scott, thanks for the kind words, and I just want to add an observation to this discussion with Brian.

I think we’re not seeing many use cases because lawyers are afraid of using the currently available Gen AI tools, since those tools train their models on user input. The reason: privilege and confidentiality.

That concern is certainly valid, but probably being applied too broadly. Not everything in a legal professional’s day is privileged.

My use case, for example, used only publicly available records. I use AI daily, too, for small administrative tasks. I can imagine many other applications with legal documents, especially contracts, if identifying information is removed before inputting.
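The “remove identifying information before inputting” step Kyle mentions can, for the most obvious identifiers, be sketched in a few lines. This is an illustration only: the patterns below are my own assumptions, personal names would need proper entity recognition, and real redaction of privileged material needs vetted tooling rather than a handful of regexes:

```python
import re

# Illustrative patterns only; a real redaction pipeline needs far more.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "[AMOUNT]": re.compile(r"[£$€]\s?\d[\d,]*(?:\.\d+)?"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text
    is sent to any external GenAI service."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

clause = ("Contact John at john.smith@example.com or "
          "+44 20 7946 0958 re the £1,250.00 fee.")
print(redact(clause))
# Contact John at [EMAIL] or [PHONE] re the [AMOUNT] fee.
```

Note that “John” survives untouched: names are exactly the part that simple patterns miss, which is why the confidentiality concern remains valid even with a redaction step.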


Stephan Breidenbach (Hochschullehrer, Mediator und Unternehmer – entwickelt ein 1,5-Grad-Gesetzespaket @GermanZero):

Watch Legal OS. They have an awesome use case


jon LABAN (Talks about #opensource, #datacenters, #esg , #climatechange , #energy , #emissions , #heatreuse , #circulareconomy and #sustainability):


How do generative language models work?


Larry Bridgesmith J.D. (“The opposite of a fact is a falsehood. The opposite of a truth may be another profound truth”):

Reducing unintelligible lawyer speak to plain language and identifying hubris?


Is that a client use case for running the e-mail they receive from their lawyer through ChatGPT to understand it?!


Dror Futter (Corporate Lawyer Focusing on Venture Formation and Financing, Mergers and Acquisitions and Commercial and IP Agreements | External GC):

Brian Inkster – you totally nail it. Seems like every new technology brings with it the predicted demise of first year associates and a drop in the cost of legal services. Just seems like no one told the law firms.

Wrote about this at the end of last year.

“What Does the Most Dangerous Street Corner in 1990 NYC Have to Do With ChatGPT and Its Impact on the Cost of Legal Services?”



Thanks Dror. Good post with good points. I remember when I started practising law 30+ years ago contracts were much shorter than they are today. Lawyers could finish most of their work by the afternoon knowing they had nothing more to do until the post arrived in the morning. Technology has created (and will continue to create) work for us. For example, ChatGPT hallucinations create the need to double and triple check its output!

Dror Futter:

When I went back to law firm practice after a lengthy time in house, I purposely sought out low overhead options that would allow me to offer lower rates. My surprise was that for all the talk about legal fees, for most clients it is like the weather – something that everyone complains about but is reconciled to. How else, for example, can you explain first year rates – when the client pays for the privilege of giving attorneys the practical training their law schools did not?

Without client pushback, as you suggest, I suspect AI will be a net profit center for law firms for the foreseeable future.

Julie Schroeder (Generalist GC/CLO, former trial lawyer, AI/ML thought leader. Over eighteen years of experience as a tech savvy CLO, GC, Co-Founder, and Privacy Officer for SAAS IT in regulated industries with big data sets and AI/ ML):

Dror Futter – I used the “Robot apocalypse” in my most recent post so I completely understand!


Daniel Rizzi 🦉 (Tech Scaleup Anti-Lawyer | Co-founder @ We-Watch | Helping Founders Need Less Lawyers as They Scale | Sharing My Startup Journey | Business Fundamentals):

This is a good take! As a sole practitioner, I couldn’t think of an obvious use case for this. It’s extremely useful if, as a lawyer, you don’t have access to any precedents. The reality is most lawyers do.


Thanks Daniel. Prerequisite knowledge and risk avoidance mean we are not going to be replaced by it anytime soon!

Generative AI could speed up drafting if you need to draft a clause you’ve never written before. It does a decent job of summarizing contracts (with errors, so potentially useful for a junior associate). You still need prerequisite knowledge for this to be useful.

The elephant in the room is that if it can generate good enough contracts from scratch (and iterate on them), clients no longer need lawyers who just pump out templates. It’s still not good enough to do this in a low-risk way yet.

Daniel Rizzi:

Personally I think the doctors will be hit first!


I will prefer a human in the mix on that front for now I think.


Eugénie Michalopoulos (Lawyer and Data Protection Officer | Commercial, Technology, IP, Data Protection, Cybersecurity and Risk | International Speaker):

I think the other thing we need to discuss is: who will bear the risk that we as lawyers bear? Can an insurance company insure us ‘fully’ for PII if we are using LLMs to produce 30-40% of our work? What happens with the emerging lawyers of today; would you say they are as knowledgeable and skilled as the lawyers before them? LLMs can’t replace the critical thinking analysis we must use every single day. What about the test for whether we are of ‘good character’ before we can even be considered a lawyer, take an oath to then become admitted? What happens there? I am a relatively ‘young’ lawyer, have been in the legal and technology industries for almost a decade and of course I am an advocate for the revolution of such a dogmatic industry as the legal industry. However, whilst my generation would err on the side of groundbreaking technologies to optimise efficiency, I’m afraid that using an LLM specifically in the ways you describe could be detrimental to the sanctity of the profession.
I, like all lawyers around the world, have worked my ass off to get to where I am, such a tool does not actually solve the problems in the industry. It’s like putting a band-aid on rather than actually addressing the root cause. I digress.

Katherine Thomas (Growth and strategy for legal businesses. Adviser, consultant, speaker and lecturer. Owner – Free Range Lawyers):

I reckon the PII question is answered simply: responsibility for the advice stays with the lawyer providing it – so – just as partners need to check the work of a junior, they need to check the output from an LLM. It seems straightforward to me but I am very open to being told that there are complexities I have missed!


Katherine Thomas – I think that is correct. See this article on Legal Futures: Law firms risk claims without safeguards for generative AI


Nam Truong (Human Centred Technologist | Digital + Innovation at Law Squared | MBA Candidate):

There’s definitely a lot of hype with GenAI but I think it’s best to keep one eye open. We are only at the “Blackberry” of GenAI. Wait til we get to the iPhone X!

AIs work by ingesting data and making predictions based on that data. GPT, which is the model that these GenAI tools are built on, is really just enabling AI to ingest more data and make better predictions than ever. Like going from a horse-drawn cart to a V8 engine. The most advanced tech companies are adopting GPT as the gold standard for building AI.

Reality is, legal needs to walk before it can run with technology. To your point, existing tech is under-adopted. However, my opinion is that this technology will just keep getting better and thus more relevant…
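Nam’s summary – ingest data, then predict what comes next – can be reduced to a toy example. A bigram model “trained” on three sentences is a drastic simplification of an LLM, but it shows the same ingest-then-predict loop:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """'Ingest data': count which word follows which across the corpus."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """'Make a prediction': return the most frequent follower seen in training."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else "<unknown>"

corpus = ("the lawyer drafted the contract . "
          "the lawyer reviewed the contract . "
          "the client signed the contract .")
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "contract" – its most common follower
print(predict_next(model, "lawyer"))
```

An LLM does the same job with billions of weights and whole-context attention rather than a lookup table, which is where the “horse-drawn cart to V8 engine” leap comes in.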


Thanks Nam Truong. However, there is another school of thought that it might just get worse not better as it trains on its own poor output:

Researchers caution against AI model collapse caused by self-generated data overshadowing human-integrated internet data

There are already signs of it getting dumber:

GPT-4 is getting significantly dumber over time, according to a study

Is Chat GPT Getting Dumber?

And is the AI boom already over?


Katherine Thomas (Growth and strategy for legal businesses. Adviser, consultant, speaker and lecturer. Owner – Free Range Lawyers):

Great challenge article Brian Inkster, although I am left wondering if a lot of legal documentation also falls into the BS categorization of “filler text that no one necessarily wants to read or write”. (Reaches for tin hat…).

Alex Smith (Global Search & AI Product Lead (Senior Director) at iManage):

Katherine Thomas – so 70% of every contract judging by all the Gen AI start ups. 😭


Katherine Thomas – Indeed this is probably true but do we want that text if ever read (maybe only when it becomes the subject matter of a dispute!) to turn out to be gibberish of the type that not even a lawyer would come up with?


Jamie Wodetzki (Chief Product Officer at Catylex, co-founder of Exari (acquired by Coupa), investor, recovering lawyer, obsessed with AI):

Brian, it will amuse you no end that Catylex uses a magic wand to indicate high-confidence machine results in data extraction… so I can dish out magic wands much quicker than you 🤣

Catylex magic wands


Murray Gottheil (Speaking up from the Safety of Retirement):

Brian Inkster – Thank you! This is excellent and well worth the read.


Mikhail Garber (Principal Software Developer | Lead | Ex-Amazon):

Informative summary, thx


Sean McWeeney Jr., LLM (Associate at Graham Thompson | Focused on Cyber Law, Assisting Clients in Navigating the Bahamian Digital Legal Landscape):

Thank you for sharing!


Andrew Smales (UK and Zug based – Author of BITCOIN AND BLOCKCHAIN WITHOUT THE BULLSHIT):

Ah Fantasia. Amazing on Laser Disc back in the day 😃

