What is the future of legal services?
That is a question that Professor Richard Susskind attempted to answer at the University of Strathclyde on 19 October 2023. Then again, Professor Susskind has, of course, spent the past 40 years trying to answer that very question.
The Future of Legal Services: Drills and Holes?
Before the talk started I chatted with a legal academic sitting in front of me. I mentioned the likelihood of hearing nothing new. He pointed out that I didn’t always do new talks myself. I countered that I perhaps renewed mine a little more often than Susskind did. Furthermore, if the Professor went on about drills and holes again, I just might have to leave.
It wasn’t long into the talk before Susskind mentioned drills and holes. The legal academic turned around to see if I was going to leave. I thought it would perhaps be rude to do so at such an early stage in the lecture. Furthermore, dear reader, I did not want to deprive you of this write-up!
For the uninitiated, Susskind uses the drills and holes analogy whenever he speaks about the future of legal services: no one wants a drill; what they want is the hole.
Whilst some in legal circles may be forgiven for thinking that Susskind came up with this analogy, I am told by Google that it actually predates his birth. The earliest strong match for the adage known to ‘Quote Investigator’ occurred in an advertisement in a Somerset, Pennsylvania newspaper in 1942:
Hardware stores report that over one million men bought one-quarter inch drills in one year. Not one of those million men wanted the drills. They wanted quarter inch holes in metal or wood.
People who buy life insurance don’t want life insurance; they want monthly income for their families.
The advertisement was run by agent C. C. Wagner of the Provident Mutual Life Insurance Company of Philadelphia. Yet Quote Investigator conjectures that the drill adage was already in circulation.
How does the drill and hole analogy fit with lawyers? As Bob Ambrogi, reporting on the same talk (minus the GenAI) by Susskind in 2009 (14 years ago), put it:-
Lawyers need to look for the hole in the wall. Susskind relates how power tool manufacturer Black & Decker orients new executives. They are shown a power drill and asked, “This is what we sell, isn’t it?” When the new recruits answer, “Of course,” they are shown a hole drilled in a wall. “No, this is what our customers want. Your job is to find ever more creative ways to give our customers what they want.” Lawyers have to stop thinking about the power drill and start thinking about new ways to deliver the hole.
Other Commentators on Drills and Holes
I don’t think it works like that at all. You see an electric drill in a shop and decide you want it. Then you take it home and wander around your house looking for excuses to drill holes in things.
So is GenAI a new way for lawyers to deliver the hole that their clients are looking for? Is it the future of legal services? Or is it just the latest shiny new toy for lawyers to wander around their offices looking for excuses to do anything with but actually help their clients?
Jason Plant, writing nearly a decade ago, said of the Black & Decker story as told by Susskind:
It’s a great story, but not if you stop to think about it some more. I mean I still use a drill, just like my dad did and his dad before him. Why? Because it’s the best way of getting a hole! And I think that is the case in corporate law: the client is getting satisfactory holes. Sure they’d like it cheaper, but this is happening, costs are going down (see last post).
But to back up my previous post. I suspect, like when I’m shopping for a drill, there is a point where a price gets so low it starts alarm bells ringing. “Is this cheap thing going to last?”, “I’d be better getting the Bosch wouldn’t I, not sure that cheap store brand will drill through what I want”. At this point the client will start to accept the costs.
And as the Jolly Contrarian says:
To be sure, in the meantime the legal market has incrementally changed: it has absorbed every innovation — fax, email, internet, mobile telephony, mobile internet, cloud computing, offshoring, outsourcing, and it is currently embedding what it can of neural networks and natural language processing — but not really in the ways that the thought leaders had in mind.
But all this change notwithstanding, it is an ongoing source of frustration for the legal imagineers that the fundamental structures of the legal profession haven’t been revolutionised. They’ve rolled with the punches. Clifford Chance seems still to be in rude health, as far as anyone can tell.
Futurologists had in mind flying robo-taxis and hoverboards. What we got was Uber and electric scooters. You might say that is no bad thing: thought leaders say, “yes, yes, yes: but the revolution is yet to happen, and happen it surely must.” See, for example, this curious piece from A&O.
Yet, we have had all manner of changed circumstances thrown at us since the developed world lost its major stabilising influence in 2016: political insurrection. Disease. Dislocation. War. The Pentaverate. Prince died. All of these things, you would think, would accelerate the rate of change. But the only constant since then has been the ongoing good health of the traditional legal industry.
So what is going on? Où est la révolution?
The Future of Legal Services: GenAI?
So is GenAI the future of legal services delivery? Will it replace our current metaphorical drills in creating a better hole for our clients?
We were told by Susskind that:
systems are increasingly capable and are taking on more and more tasks.
No mention though of the legal tasks that GenAI might actually be capable of doing and be sensible to use it for. As I pointed out in my last post, and the one before that, actual use cases in law are few and far between.
Susskind discussed exponential growth. Something amazing is happening, he told us: neural networks are doubling in power every 3.5 months. What ChatGPT does is amazing, but the underpinning technology will increase its power 3,000 times over the next decade. We are witnessing greater advancement at the moment than we have ever witnessed before. By 2030 we will have systems operating at remarkable levels. This, we were told, is the decade in which it will really take hold.
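For the numerically inclined, the figures quoted above can be sanity-checked with a few lines of arithmetic. This is a rough sketch in Python, taking the reported “3.5 months” and “3,000 times” claims at face value (the figures are Susskind’s, as reported, not mine):

```python
import math

# Figures as reported from the talk (taken at face value):
MONTHS_PER_DOUBLING = 3.5   # "doubling every 3.5 months"
DECADE_CLAIM = 3_000        # "3,000 times over the next decade"
DECADE_MONTHS = 120

# Doublings a decade would contain at the quoted rate
doublings = DECADE_MONTHS / MONTHS_PER_DOUBLING        # about 34.3
growth_factor = 2 ** doublings                         # about 2.1e10

# Doubling time that a 3,000-fold decade actually implies
implied_months = DECADE_MONTHS / math.log2(DECADE_CLAIM)  # about 10.4

print(f"Doubling every {MONTHS_PER_DOUBLING} months gives {growth_factor:.2e}x in a decade")
print(f"A {DECADE_CLAIM}x decade implies doubling every {implied_months:.1f} months")
```

Taken literally, doubling every 3.5 months compounds to roughly twenty billion times in a decade, while a 3,000-fold increase implies a doubling time of about ten months, so the two quoted figures cannot both be describing the same curve.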
Contrast that with what roboticist Rodney Brooks has written about exponential arguments:

Everyone has some idea about Moore’s Law, which suggests that computers get better and better on a clockwork-like schedule. What Gordon Moore actually said was that the number of components that could fit on a microchip would double every year. That held true for 50 years, although the time constant for doubling gradually lengthened from one year to over two years, and the pattern is coming to an end….
Similarly, we have seen a sudden increase in performance of AI systems thanks to the success of deep learning. Many people seem to think that means we will continue to see AI performance increase by equal multiples on a regular basis. But the deep-learning success was 30 years in the making, and it was an isolated event.
That does not mean there will not be more isolated events, where work from the backwaters of AI research suddenly fuels a rapid-step increase in the performance of many AI applications. But there is no “law” that says how often they will happen.
Susskind went on to say that the best is yet to come for the future of legal services. By 2030, apparently, our lives will be transformed by things we have yet to see. Systems such as ChatGPT are already unfathomable. This will only become more so.
We were given an example of systems that can tell whether you are happy or not. HuggieBot will scan your face to gauge your current feelings and give you the hug you need. That may be HuggieBot’s future, but current versions (as far as Google could tell me) appear to rely on the human initiating the hug in order to respond to it. Although when I saw an image of HuggieBot I was rather alarmed by the thought of it coming anywhere near me, let alone actually hugging me. It seemed to me like a rather overblown and expensive drill to create a largely unnecessary hole. And I cannot see HuggieBotLawyer being the future of legal services!
Susskind told us that we often think that AI has no empathy. It has no emotions. However, Susskind asked ChatGPT 3.5 to write a sympathy e-mail and the result brought tears to his eyes. It was better than any condolence letter Susskind had ever seen:
That is, of course, not GenAI having empathy or emotion. GenAI just puts together a document from information scraped by it from the data it has to hand. If you simply Google “write me a sympathy letter” you will find lots of examples on websites just like the one that Susskind asked ChatGPT to create. All ChatGPT did was feed off those pre-existing examples.
This was Susskind committing the second deadly sin of ‘The Seven Deadly Sins of Legal Tech Predictions‘ by imagining magic. No magic (and certainly no empathy or emotion) involved if you just look behind the curtain.
In the question and answer session at the end someone (who declared themselves not to be a lawyer but a web pioneer) pointed out that ChatGPT was not proactive. It just mimicked others. When compiling the sympathy letter it was not, for example, offering to bring the grieving child soup. Susskind retorted that no one had ever offered him soup when he was grieving!
Soup or not, what the ChatGPT sympathy letter lacks is any personal knowledge of the deceased father or the child who is suffering from their loss. Only a true friend could add that detail from their personal knowledge. Without that it is rather bland and lacking in any true connection.
Applying that to legal tasks and the future of legal services: ChatGPT cannot replace the personal knowledge a lawyer has about a specific legal case or transaction: factual knowledge that varies from case to case and is essential when applying the law or legal process to it. We are a long way from recreating that (no, a carefully worded prompt won’t wash), and certainly not just seven years away.
I thought we were all finished by now with the metaverse in legal technology talks. It is surely so 2022? But not Susskind.
In the Q&A session at the end a lawyer who went to university with Susskind asked: if the routine work of junior lawyers is to be taken on by machines, how do you then train those junior lawyers? Susskind’s answer was: using the metaverse!
Susskind said that if we can create simulators for space travel we can do the same for lawyers. The metaverse is important. Susskind thinks that Mark Zuckerberg will be proven right, just ahead of his time. We can do much in the next 5 years. How we teach needs to change, said Susskind.
I have previously commented on Susskind’s analogy between space travel and/or surgical simulators and learning the law using simulators:
I’m not so sure that is a good comparison. A flight simulator allows you to travel through virtual space. An operation simulator allows you to dissect virtual human tissue. This is very different from the actual tasks of solicitors where much is reading, thinking, analysing and writing. Not sure why or how a virtual reality simulator is going to help me draft a conveyancing contract or a set of court pleadings.
Oh, and Mark Zuckerberg didn’t invent the metaverse. Keanu Reeves pointed this little fact out in an interview in 2021, to promote The Matrix Resurrections, when he said:-
Can we just not have metaverse be like invented like by Facebook… The concept of metaverse is like way older than that… It’s like way older… So for that moment to get to… I’m just like, come on man.
The Evolution of Chatbots?
Also in the Q&A session came a question from a ‘Lawyer Technologist’ who told us that he had developed a chatbot with a law firm before the days of GenAI. He told us that it failed because clients wanted the human touch. Susskind replied that in those days the technology was very different from ChatGPT. Now with ChatGPT you have more confidence that you are in safe hands.
Hmm… as I have stated before about pre and post ChatGPT Chatbots:
Before ChatGPT (BCGPT)
BCGPT Chatbots were just plain stupid and incompetent. But at least, usually, if they didn’t know the answer they stopped functioning or told you they didn’t know the answer or offered you a human replacement.
After ChatGPT (ACGPT)
ACGPT Chatbots hallucinated, so you had no idea whether what they were telling you was true or made-up gibberish (as one Judge called it) without double- or triple-checking the output against more reliable traditional sources.
Hallucinations or Usefully Wrong?
Susskind did, at least, admit that ChatGPT “makes mistakes” but quickly moved on from that fact by stating that it was just “early days”. The suggestion was that later versions of ChatGPT might solve that issue.
Not dwelling on this massive problem is a huge mistake for any legal futurist to make. But it obviously interferes with the now old and carefully crafted narrative on drills and holes and the future of legal services.
Not once did Susskind use the word “hallucinates”, the now well-accepted term for ChatGPT simply making things up or creating gibberish rather than making a genuine mistake. Neither did Susskind use the term “usefully wrong”, one favoured by Microsoft for downplaying the drawbacks of ChatGPT.
It is remiss (negligent?), I would suggest, for anyone giving a talk on GenAI not to cover this fact fully and frankly. It was, as I reported earlier this year, a topic noticeably lacking at The British Legal Technology Forum 2023 chaired by Professor Susskind.
There is a school of thought that this is a problem unlikely to go away; one that could in fact get much worse before it gets better as GenAI trains on its own hallucinations! Hallucinations that Google is picking up on and treating as authoritative rather than recognising them as pure gibberish.
The Future of Legal Services: Polyfilla?
In summary, I believe that Susskind’s views on the future of legal services are too high level. They ignore completely:
- The detail of what most lawyers do on a day to day basis.
- What lawyers are likely to continue doing on a day to day basis for the foreseeable future.
- The limitations of GenAI when applied to actual legal tasks.
GenAI is not about to change that anytime soon. It certainly won’t in the way imagined by Susskind by the year 2030.
If you hear Susskind on the future of legal services do have a pot of Polyfilla to hand. You can then fill the cracks and holes in the arguments presented.
Image credits: Robot images created using Microsoft Bing Image Creator powered by DALL-E 3 and prompted by Brian Inkster; HuggieBot via Techxplore; The Wizard of Oz © MGM; Susskind’s Metaverse created historically using DALL-E 2 by Brian Inkster; and all other images taken by Brian Inkster at the University of Strathclyde on 19 October 2023.
Reactions on The Future of Legal Services: GenAI, Drills and Holes?
On LinkedIn the following comments have been made:-
Gary Thompson FCII Chartered Insurance Practitioner (Insurance Professional providing Expert Witness Opinion | Insurance Claims Audits | Solicitors CPD Training and Risk Management Advice | Insurance Dispute Resolution Solutions):
Having survived Y2K (remember the hype) I’m now preparing for Y3K!
Susskind did mention his worries about autonomous weapons!
Jane Clemetson (Lawyer: media, IP, commercial, tech and data protection | European Woman of Legal Tech | NED | Bit of a #girlyswot | Resourceful | FRSA):
Brian Inkster – apparently autonomous weapons weren’t a thing with governments and defence procurement people etc until recently because of concerns about them being “hacked” and attacking the people who fired them but now that is “less of an issue” – Lord knows why.
Nancy Myrland (LinkedIn™️ Coach For Lawyers | Legal Marketing & Business Development | Content, Social Media, Podcasting, Video & Virtual Presentation Consultant | Individual & Group | Speaker, Trainer & Advisor for Lawyers & Law Firms):
Brian Inkster, as in combat drones?
Nancy Myrland – Are they autonomous?
Only as much as autonomous vehicles are that are programmed with a destination. I do suspect there are bad actors developing them, though, because they have nothing better to do with their time.
Philip Hannay (Managing Director at Cloch® | lawyer to those with profitable ideas.):
Keep on keeping on!
I shall Philip!
Jane Clemetson (Lawyer: media, IP, commercial, tech and data protection | European Woman of Legal Tech | NED | Bit of a #girlyswot | Resourceful | FRSA):
In relation to your last two questions, it depends on the soup and depends on the robot. And also, although it may be heretical to say so, I think empathy can be an over-rated quality for lawyers, whilst lawyering.
Tomato or Parsnip? C3P0 or Twiki? If ChatGPT is as empathetic as Susskind thinks it is, clients will be able to detect that it is not real live lawyers doing the work for them then 😉
Tomato, obvs and Heinz Cream of Tomato at that. C3PO possibly a bit angular and I’m currently Googling Twiki, so will have to get back to you on that. Make no comment on empathy – except to say that maybe it’s like sincerity ie once you’ve got it faked, you’re home and dry.
I hope your search revealed this Twiki?
Michael Burne (Rebel with a cause. Changing the way the legal profession works one step at a time. Like taking photos too):
Where do I sign for the Power BI book and lecture tour please? People will pay real folding money to hear all kinds of things – even common sense that rarest of beasts!
Unfortunately Michael there does not appear to be a big demand from #LawTech Conference organisers to introduce common sense to their programmes, especially in the UK. They appear more liberated abroad having spoken more freely/honestly about #LegalIT at conferences/events in the Netherlands, Cyprus, Ireland and Canada. Other countries are welcome to contact me if they think their conference delegates could do with a dose of #LegalTech reality over hype!
How about the #LawTech equivalent of a televised presidential debate? 👏🏻
Like the one I did last month with Denis Potemkin and Nicola Shaver?!
Yup. Will Prof Susskind join you?
Like the Musk v Zuckerberg cage fight I fear that may never happen 🙂 But I’m up for the live debate (not cage fight for the avoidance of any doubt!)
Brian Inkster the debater not the fighter 😂
Deborah Parry do Carmo (Lawyer-Linguist • CIoL-certified • Legal and financial translator: Dutch and Portuguese to English • Now upskilling: Specialist Paralegal Qualification in Wills, Trusts & Executries (Scotland) • Triple national: UK/SA/PT):
“Pay no attention to that man behind the curtain” 😉
GenAI will train on its own hallucinations, the same as what already happens in adaptive neural machine translation systems. And then instead of doing the work we’re trained to do, we’ll be fixing its dietary indiscretions (think of “garbage gut” in dogs: gag, heave, slurp).
If we don’t remain in control of our own workflows, our jobs will be fixing mistakes we’d never make ourselves.
Instead of taking a step back and a bit of time to assess what something can actually do, and whether it adds value to a workflow, hype merchants are climbing over themselves to be the first to announce that next shiny toy. In practice, they often disappoint.
I just showed HuggieBot a middle finger, couldn’t help myself.
Thanks Deborah. This is why I have been saying for some time that a Legal Hallucinatory Detectorist is far more important than a Legal Prompt Engineer 😉
Deborah Parry do Carmo:
Haha, we’re already doing it with machine translation; I’ll have to reference this when I get back to posting!
Paolo Tonelli (Founder and CEO at Codify Legal Publishing Inc.):
Some good points, under the sardonicism. Perhaps GPT could help you to edit that out! 😉
Paolo Tonelli It wouldn’t be a Time Blawg post without a heavy dose of sardonicism 😉 I asked ChatGPT to summarise it – very bland and boring and it hallucinated along the way. This is why ‘The Time Blawg’ will always remain free of ChatGPT but might use GenAI from time to time to create images to complement the writing 🙂
Olly Buxton (Argumentative boffin):
Brian Inkster +1 for sardonicism
Colin Levy (Chief Lawyer @ Malbek | Legal Tech Author and Commentator | Fastcase 50 2022 Honoree | Legal Tech Startup Advisor and Investor):
Agreed Paolo Tonelli!
Graeme Johnston (Software to map work – before that a lawyer):
To lay out my premises — I see slightly more potential for genAI than you, Brian, but way less than what RS seems to, essentially for the reasons you give.
As to RS’s speech –
When I first read his work some 20+ years ago, I thought it interesting albeit with major weaknesses. Since then, I’ve never heard him say anything else that really made me sit up. Just a tired formula into which the latest ‘new thing’ is inserted.
Ironies and projections abound. For example, in one talk c5 years ago, he accused the legal profession of deploying Maslow’s hammer; and this speech sounds like a generic, genAI effort. There’s some unintentional revelation in his lame training use case for the metaverse, and the fact he thought the letter of sympathy was good.
But, more seriously, thanks for writing it up. I increasingly fear that the high-level plausibility combined with flawed understanding of legal and human realities, risk misleading people who don’t understand much about those realities, or software, or both. I tend to stay away from public comment on what I see as bs, preferring a positive and fairly light tone. But the time may have come. I’ll ponder and maybe write something substantive.
I believe there may be more potential for AI that is not necessarily GenAI. We seem to have gotten caught up in a whirlwind around ChatGPT. Building on AI advancements that were already in place rather than a ChatGPT tangent might be a better route for legal to take. But I profess no expertise in that regard.
I think the BS needs calling out for the very fact you point out: the misleading of most lawyers and their clients.
Aron King Solomon had a badge for that back in 2019. It is still very apt (maybe even more so) for use today:
Thanks Brian. Yes, agreed re AI.
One thing that brought home to me how harmful the hype can be was talking to some law undergraduates earlier this year. A couple of them seemed to be in a rather fatalistic ‘what’s the point?’ mood based on what they’d heard about the future adjudication claims made by the promoter of the infamous DoNotPay. It made me realise how even students of law may take such claims at face value.
Indeed. And this applies to associates in BigLaw as much as to law undergraduates as I pointed out under Sin No. 4 (Suitcase Words) in the Seven Deadly Sins of Legal Tech Predictions post.
Scott Simmons (Turning Lawyers Into Rainmakers | Business Development Coach & Trainer for Lawyers | Legal BD Consultant):
I notice you’ve moved from there being no use cases to acknowledging there are at least some – albeit ‘few and far between’.
Isn’t this how it works? And won’t the use cases keep growing and improving?
Finding reasons to resist it isn’t going to move the profession forward. Yes, we have to proceed with caution; and yes, there are problems – there will be, it’s still at such an early stage.
But GenAI is here and it has the potential to improve how we provide our services and advice.
We should embrace that.
Not sure I’ve ever said there were none?!
I put out a search for them (due to my difficulty in finding any in my own day to day practice of law) and got only 2.5 use cases from that search.
At Legal Geek, just the other week, a Big Law panel on use cases told us: “There are no actual use cases of GenAI in real live practice of law taking place in law firms at the moment. If anyone tells you there are, then they are lying.”
Certainly not a case of resisting but being curious and realistic!
If you knew about me, technology and the practice of law you would know that I am a big advocate of using technology to make legal processes more efficient. I have done more than most on that front especially using the cloud, document automation and legal process engineering. I have always been an early pioneer of adopting legal technology that I considered could benefit me and my law firm, Inksters, and where there was a clear use case for so doing.
Just because GenAI is here doesn’t mean we have to embrace it if there is no actual use case for it in what we do on a day to day basis.
Pizza cutters are here. I bought one this afternoon because I cook and eat pizza. It would be daft for someone who doesn’t to buy one.
Blockchain, NFTs, Clubhouse and the metaverse have all come and gone in recent years. We were all told we must embrace them. Many did and now look back with regret. None of those moved the profession forward.
I do believe, however, that unlike those examples GenAI has some use for lawyers outside of actual legal tasks. I have, for example, embraced it in generating images for my blog posts as today’s post and last week’s one demonstrate. It is great for that.
A few people – myself included – suggested looking at CoCounsel as a use case in law.
It doesn’t surprise me that a BigLaw panel said there are no use cases – AI will eventually destroy their business model. Even the World Economic Forum says legal businesses will gain back around 50% of time because of AI – what’s that going to do to the billable hour?
And even if you stand on the side of there being no use cases right now, we should still be exploring it with open minds because it will impact the law in a big way.
CoCounsel was, of course, one of the 2.5 use cases referred to.
Don’t think there is any fear of AI destroying the BigLaw business model any day soon! Susskind’s quote about the fact that they don’t need to change will probably hold true for some time yet. That is a point that I can agree with him on.
These percentage figures are, as we all know, based on thin air and no substance.
BigLaw are experimenting with it. They have the pockets to do so.
Small law firms don’t. They are better advised to allow the Legal Tech suppliers to come up with the tech and, if it is proven to work in law on defined use cases, then use it.
That exploration is happening but not every lawyer in the country needs to drop their day job to play with it.
Not sure if this will interest you, but here’s a viewpoint on why waiting isn’t a good idea: https://www.linkedin.com/posts/dcaseyflaherty_genai-showerthoughts-activity-7122023199097159680-KMU_?utm_source=share&utm_medium=member_ios
Olly Buxton (Argumentative boffin):
Scott Simmons the point is Gen AI is basically free help, widely available, so however good it is, and whatever it brings, its commercial value will quickly fall. It doesn’t change the basic laws of supply/demand. If it costs nothing to make and supply is infinite, the price at which it intersects with the demand curve is nil.
You could argue that any software technically costs nothing to make.
Some parts of GenAI are currently free to use because each time we use it we are feeding it more data. It won’t be free forever, that’s just capitalism.
As with all new things on the market, there will be a flood of new entrants, most of which will fall away, leaving those few that are left to dominate the market and set the price.
And the reality is that anyone who uses AI to get legal answers without the support of a lawyer will be bearing the risk of the outcome. And anyone who wants to take the risk of getting the law wrong isn’t a client worth having anyway.
Scott Simmons – Does that not therefore mean that the reality is that the Big Law model is not about to be destroyed anytime soon?
Adam Manning (Notary Public, Senior Civil Litigation Solicitor and Associate at Biscoes Solicitors | Business Networker | Proud member of BNI!):
The future of legal services – trying your very hardest to do the best job you can professionally, ethically and empathically for your client.
Best response so far!
Antti Innanen (⚫ Ⓜ️ 🍄):
I believe “use cases” will become irrelevant once Copilot is integrated with Microsoft 365 programs. It’ll be like discussing use cases for Word or PowerPoint.
It might be challenging to pinpoint specific “cases,” given that AI usage isn’t a simple on-or-off matter. You can use it sparingly (perhaps for minor corrections like I’ve done here) or more extensively.
ChatGPT receives 60 million website visits daily and recorded 1.43 billion monthly visits in August 2023. Legal use cases exist there too.
We use GPT and other AI tools (Midjourney, Eleven Labs, Synthesia, AI translation tools) daily, but given our small size, we might only count as a 0.5 case example. That brings the total to three!
Once Copilot is integrated with Microsoft 365 the use case for building your own will certainly vanish fairly quickly 😉 Mitch Kowalski made that prediction some time ago.
Susskind at his talk thought that tomorrow’s lawyers should be building AI systems. It sounded like there would be no lawyers doing any lawyering. That is as necessary as lawyers learning to code: https://thetimeblawg.com/2018/02/24/lawyers-and-coding/
But that Copilot integration won’t make actual legal use cases irrelevant. It does not follow that legal tasks will be capable of being done by GenAI just because you have Copilot on hand. That is like saying lawyers use the WordArt function in Word for legal tasks 😉 We will still be searching for the Copilot use cases or maybe will have given up by then – in which case the irrelevance will be real 🙂
Building software is hard for software companies and nearly impossible for law firms.
There are excellent products available, and the likelihood of law firms or lawyers creating superior products is close to zero in my opinion.
There are numerous effective ways to prime or fine-tune existing tools for legal work. Even with the right custom instructions, it’s possible to significantly enhance the output of these tools. This should be our focus.
Coding is a valuable skill, but it’s worth considering why lawyers should invest in it. Coders are good at coding. Maybe we could collaborate with them.
Regarding use cases, I think you make it sound more dramatic than it is.
When Netflix recommends a series, you don’t think of it as “using AI.” Similarly, when I correct mistakes in this reply, it’s not about “using AI for legal work.”
Many use cases will be ordinary, boring, everyday actions.
Unusually I find myself agreeing with you on the build point 😉
On use cases it is not me that is being dramatic but those that are saying all associate lawyers are going to be made redundant as a result of GenAI or that the Big Law model is about to be turned on its head.
However, again, I find myself agreeing with your punchline. The use cases are going to be “ordinary, boring, everyday actions” rather than performing actual legal tasks. It is not the End of Lawyers 😉
Nathan J. Corr (Trainee Solicitor – Addleshaw Goddard | Co-Founder – Law School 2.0):
The 7 Deadly Sins of Legal Tech Predictions in case anyone is looking for the full list:
1. Overestimating and Underestimating
2. Imagining Magic
3. Performance versus Competence
4. Suitcase Words (words that carry a variety of meanings)
5. Exponentials
6. Hollywood Scenarios
7. Speed of Development
and Brian’s talk from Lexpo’19 (as Lexpo Bot) in quite possibly the best Legal Tech event outfit ever (yes, even rivalling some of the Legal Geek outfits): https://vimeo.com/345675570
Thanks Nathan J. Corr for highlighting all 7. Probably all 7 were committed on Thursday! I maybe need to do a new post that just looks at these sins in relation to current GenAI hype. Thanks for linking to the video. The original blog post by me on the 7 deadly sins is here: Travels through the Blawgosphere #2 : Artificial Intelligence and Law ~ Robots replacing Lawyers?
Nathan J. Corr:
Thanks for linking the original blog post Brian – shall keep an eye out for the post on the 7DS of GenAI hype!
Kyle Bahr (Legal Ops & Tech Innovator | Former Fortune 200 Legal Operations, Commercial Disputes & Contracts Attorney, Federal Law Clerk, and International Law Firm Attorney & Paralegal):
This next RoboCop movie looks AWESOME, Brian! Can’t help but notice it takes place here in the States 🇺🇲
I used a hole-driller to jazz up the movie poster. LinkedIn won’t let me post a video here in a comment, but you can hop over to YouTube to see it:
Just using the smallest of my magic wands for this. Animation by LeiaPix: https://convert.leiapix.com/
Now Kyle, that’s a use of AI that I can get on board with!
David Huston (Head of Business Process Improvement at DLA Piper):
I remember being super sceptical of the smartphone when it came out and I swore it would never work for banking or any trustworthy tasks!
I remember being the same of blockchain, NFTs and the metaverse but never of the smartphone 😉
That was exactly my point! We should catch up!
Stefan Eder (do ut des):
🙌➡️ food for thought on a number of key topics
🚦 if you want to use #AI tools in legal practice start with doing your homework
🚦 which #AI technologies can really be used, how do they work, and where are there precedents you can build on (there are ample)
🚦 what are use cases that work and scale
🛑 do not follow a dream as #AI is about mathematics, statistics and coding – there is no “artificial intelligence” – at least not as far as one can see
🛑 be aware of hallucination and bias
🛑 do not rely on data you don’t understand
🛑 be aware of the risk from manipulated data or malicious models and in particular #AI browser extensions
🚀but there is a lot of potential for the legal sector in traditional #AI and possibly hybrid applications based on #LargeLanguageModels
🚀 start with working on your data with NLP and Knowledge Graphs – you will be surprised at what you can uncover and achieve
🚀 think in processes built on data you understand and you will find there is a wealth of potential use cases that can scale in the industry
🚀 look out for the contributions of leading experts who share lots of their expertise
#technology #future #responsibility
Thanks Stefan. All good points to take cognisance of.
David Gilroy (I lead a marketing agency who specialise in helping law firms make more money from their website. I am looking for M&A (to acquire) opportunities in the agency space):
This post and the blog post are worth a quick, and then a slow, read. But to answer Brian’s questions:
Q: What do you think? Is GenAI the future of legal services?
A: Mostly no, but in some places it will speed up what certain sections of the legal sector do.
Q: Will legal services be transformed by GenAI by 2030?
Q: Is the drills and holes analogy past its sell-by date?
A: No. It’s a simple metaphor that everyone understands.
Q: Will Mark Zuckerberg be proven to be right, but before his time, about the metaverse?
A: No. I was using the Metaverse on CompuServe in 1995, it was called WorldsAway. It was pants then and it’s pants now.
Q: Should we simply ignore hallucinations?
A: No, we need to find a way of ‘teaching’ the AI. Who’d have thought!
Q: Does GenAI have empathy? Do you want to be brought soup when grieving?
A: No it does not, it’s just churning out words in a predictable order. Words alone are not empathy.
Q: Do you want a hug from a robot?
A: Sure, it would be fun. I wonder if the ones from Boston Dynamics can do that yet rather than just jump around and do somersaults?
Thanks David. You forgot to answer the question about the soup 🙂
Bugger, thought that was all one question, and a weird one at that!
Andrew Neill (Senior Manager @ Addleshaw Goddard | AI, Product Management, SaaS):
+10 for the robot drilling hole image.
Richard Bisiker (Global Executive Coach & Leadership Change Transformation ► Chartered Director ► ILM Approved Centre & Assessor ► Helping People Reach their Full Potential):
GenAI in our courts? Fantastic balanced arguments and views from Brian Inkster, yet again challenging the runaway predictions. Worth a read…
Olly Buxton (Argumentative boffin):
A great post, from Brian Inkster: on the old drills and holes chestnut.
Another way of looking at this is that the “hole in the wall” that everyone wants is not the one Professor Susskind *assumes* it is. Rational, logical and sensible processes are not always the prime motivating force when you deal with another human. Are we after cold, clear, clinical, comprehensive legal advice? Or something nice to sit on?
It calls to mind that famous exchange from Silence of the Lambs.
“First principles, Clarice: Simplicity. Read Marcus Aurelius. “Of each particular thing ask: what is it in itself? What is its nature?” What does she need, this legal eagle of yours?
Clarice: To innovate!
Lecter: No! That is incidental. What is the first and principal thing a legal eagle does? What needs does she serve by “innovating”? …
Read on …
Anthony Kearns (Chief Client Experience Officer and Consultant to General Counsel):
Love this Olly and Brian!! Law is an interpersonal profession that meets largely emotional needs. A better, faster, “righter” answer to a technical question is only useful if it serves this purpose.
Denis Potemkin (Founder of Majoto | contracts that people understand and love | faster deals more trust through design and automation):
OML too brilliant, stop it.
Warren Simmons (Legal and policy principal, immigration and nationality):
Susskind was boring 25 years ago so at least in that sense he is consistent.
Alex Smith (Global Search & AI Product Lead (Senior Director) at iManage):
Paul Marshall (Barrister, Cambridge, LSE):
I wish Susskind was.