Way back in 2012 I announced I was doing a new series (Travels through the Blawgosphere) that would be an occasional review of blog posts I had come across relative to the past, present and future practice of law (which is what this blog is all about).
I did say at the time:-
How regular the series becomes remains to be seen. I am not going to promise anything. It will very much depend on what other blawgers produce and I find from time to time.
Well, the first episode in the series was posted on 6 May 2012 (Innovation and Law) and there hasn’t been one since! But it is interesting to look back at that post and note that six years later similar debates about innovation in law firms still ensue. If there have been any changes it might be that there are now a lot more law firms with a ‘Director of Innovation’ or the like than there were back then.
It is also interesting to see mention, in my 2012 post, of the faux-law firm Robot Robot & Hwang, founded in 2010 as “a legal startup” with a single human partner. Eight years on from the founding of that ‘firm’, robots taking over from lawyers is perhaps bigger news than it was back then, but probably still as far-fetched as it was then.
It is the current hype about robots and the law, fuelled by apparent advances in Artificial Intelligence, together with blog posts I have seen over the past week on this topic, that has prompted this review of them.
Of great significance in debunking the Legal AI hype is the post The Seven Deadly Sins of AI Predictions by robotics pioneer Rodney Brooks. This is not specifically about Legal AI but about AI in general. It is, however, very relevant to the hype generated around Legal AI. It actually dates from a whole year ago, but it has not dated in any way in that year. I came across it as a result of a tweet concerning a recent interview with Rodney Brooks (which is well worth listening to) that has as its background his post from last year. That tweet came from Tim Harford (whose own views on technology I will also look at in this post).
Brooks thinks the claims that robots will be taking over our jobs anytime soon (particularly in the volumes often cited) are “ludicrous”.
Considering claims that the jobs of grounds and maintenance workers in the U.S. are under threat, Brooks points out:-
How many robots are currently operational in those jobs? Zero. How many realistic demonstrations have there been of robots working in this arena? Zero.
He goes on to say:-
Similar stories apply to all the other categories where it is suggested that we will see the end of more than 90 percent of jobs that currently require physical presence at some particular site.
This will be just as true of law.
We also all know what happened when a security robot was introduced in a D.C. office building:-
Our D.C. office building got a security robot. It drowned itself.
We were promised flying cars, instead we got suicidal robots. pic.twitter.com/rGLTAWZMjn
— Bilal Farooqui (@bilalfarooqui) July 17, 2017
The first deadly sin is overestimating and underestimating. Brooks makes reference to Amara’s Law whereby:-
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
This can be seen in countless examples in articles and blog posts concerning AI and law where Robot Lawyers are apparently just around the corner (or already here) and we need to hide behind our sofas quickly, such is the fear they should instil in us. A quick Google search returns headlines such as:-
The first two of these articles (the first from the BBC no less) reference Case Cruncher (a ‘robot lawyer’) beating human lawyers in a competition about PPI (payment protection insurance) claims. I have previously debunked this ‘competition’ on the basis that the real lawyers utilised knew nothing about PPI claims!: CaseCrunch v Lawyers NOT Deep Blue v Kasparov
A year later and no real sign of law firms actually using this technology. The CaseCrunch website is all about the competition and press coverage with nothing on there about clients and testimonials.
The other articles reference ROSS Intelligence (an AI research tool), LexMachina (legal analytics software), DoNotPay (a Chatbot that compiles a parking ticket response letter for you) and Xiaofa (a physical robot that is 1.46 metres tall, can move its head and hands and speaks in a child’s voice, answering over 40,000 litigation questions and 30,000 legal issues).
The first two of these are tools that assist lawyers in their work and are unlikely to replace them. The third fulfils a function that most people using it would never actually contact a lawyer for (the cost of the lawyer would outweigh the cost of the parking ticket). The fourth is just bonkers.
The last headline did not accurately reflect what David Green, director of the Serious Fraud Office, actually predicted within the article itself. He was referring to AI being used to sift through evidence in preparing a case, not to replace lawyer representation in the court room. So, again, the actual position was over-sensationalised with attention-grabbing headlines.
What these articles are actually saying (if you ignore the hype that surrounds them) is that AI will assist lawyers with tasks that AI can usefully assist them with. It is another tool in the lawyer’s arsenal, along with the many other (non-AI empowered) technologies that they currently employ and will in the future employ.
The second deadly sin is imagining magic. As Rodney Brooks cautions:-
Watch out for arguments about future technology that is magical. Such an argument can never be refuted. It is a faith-based argument, not a scientific argument.
Much of the current Legal AI hype falls into the ‘magical’ pot.
Alex Hamilton in his tongue-in-cheek open source glossary for the terminology in LawTech refers to magic in his definition of AI:-
Artificial Intelligence: A term for when a computer system does magic. “General” artificial intelligence refers to thinking computers, a concept that for the foreseeable future exists only in science fiction and lawtech talks. “Narrow” artificial intelligence refers to a limited capability (albeit one that may be very useful) such as classifying text or pictures, or expert systems. Discussions of AI that blur general and narrow AI are a good indication that you are dealing with bullshit.
Also, technology that has been with us for years is suddenly, magically, AI driven when it does not need to be and often probably isn’t.
Document Automation falls into that category. Lawyers have had at their disposal for 20+ years fairly sophisticated document automation technology (no AI required). My generation, of course, apparently knew nothing about legal technology.
The fact is that these systems have been woefully under utilised by most law firms.
However, all of a sudden we now apparently have AI enabled Document Automation. Which won’t in any way increase law firm utilisation but may sell some licences to the uninitiated.
But many of the new kids on the block, who are really just reinventing the wheel, are professing to use AI when they patently are not doing so, e.g.:-
[N.B. Originally I had an embedded link to this LinkedIn post but unfortunately that post was, within 24 hours, deleted from LinkedIn. That is a great pity because the numerous comments were very relevant and spot on to this debate. They have now been lost. One comment was that this LinkedIn post should be printed out and framed – I have, sort of, now done so albeit online. Oh… the particular “legal AI” thing referred to in the LinkedIn post and the link within that was to Robot Lawyer Lisa.]
This is known as AI washing:-
A marketing effort designed to imply that a company’s brands and products involve artificial intelligence technologies, even though the connection may be tenuous or non-existent.
AI washing appears to be fairly prevalent in the legal technology world. The same problem exists with blockchain:-
Blockwashing (v. int.): The practice of touting something that was entirely possible without a blockchain as particularly special because you are using a blockchain.
— Zach Smolinski (@ZachSmolinski) September 25, 2018
I highlighted the practice of Blockwashing in the legal world earlier this year but unfortunately did not know of this definition at that time: Hack the Law to Reinvent the Wheel? and London Legal Hackathon bend the rules?
And if there was ever AI washing going on it must be the AI-brewed Paisley Snail Pale Ale produced by Thomson Reuters.
Rodney Brooks moves onto the third deadly sin, performance versus competence, and points out:-
People hear that some robot or some AI system has performed some task. They then generalize from that performance to a competence that a person performing the same task could be expected to have. And they apply that generalization to the robot or AI system.
Today’s robots and AI systems are incredibly narrow in what they can do. Human-style generalizations do not apply.
This is another mistake that those reporting on AI in law make over and over again, and by doing so they simply fuel the hype.
Chatbots probably fall into this category. We hear that a legal Chatbot has performed a simple task and the next thing we know they are replacing lawyers. The fact is that Chatbots are currently very limited in what they can do, and if AI is supposedly attached to a Chatbot then it is probably in a washing machine set to full spin. For more on that do follow my series on Chats with Legal Chatbots.
The fourth deadly sin is suitcase words (words that carry a variety of meanings):-
“Learning” is a powerful suitcase word; it can refer to so many different types of experience. Learning to use chopsticks is a very different experience from learning the tune of a new song. And learning to write code is a very different experience from learning your way around a city.
When people hear that machine learning is making great strides in some new domain, they tend to use as a mental model the way in which a person would learn that new domain. However, machine learning is very brittle, and it requires lots of preparation by human researchers or engineers, special-purpose coding, special-purpose sets of training data, and a custom learning structure for each new problem domain. Today’s machine learning is not at all the sponge-like learning that humans engage in, making rapid progress in a new domain without having to be surgically altered or purpose-built.
A good example of this could be seen in a recent report from Legal Cheek on an event at Clifford Chance’s London office to explore the impact artificial intelligence (AI) is having on the legal profession.
A senior associate in Clifford Chance’s Intellectual Property team explained that AI at its core “is software that is capable of writing its own software”!
Another Clifford Chance associate (a technology, media & telecommunications specialist) said at the same event “Fundamentally, AI is great, it’s cheaper, quicker, better, less troublesome… and you might get better results since AI directors don’t care about bonuses”!
Legal Cheek might want to invite Rodney Brooks along to speak at their next AI and Law event.
The fifth deadly sin is exponentials, where Rodney Brooks also debunks Moore’s Law when it comes to robots taking over. Brooks says:-
Everyone has some idea about Moore’s Law, which suggests that computers get better and better on a clockwork-like schedule. What Gordon Moore actually said was that the number of components that could fit on a microchip would double every year. That held true for 50 years, although the time constant for doubling gradually lengthened from one year to over two years, and the pattern is coming to an end….
Similarly, we have seen a sudden increase in performance of AI systems thanks to the success of deep learning. Many people seem to think that means we will continue to see AI performance increase by equal multiples on a regular basis. But the deep-learning success was 30 years in the making, and it was an isolated event.
That does not mean there will not be more isolated events, where work from the backwaters of AI research suddenly fuels a rapid-step increase in the performance of many AI applications. But there is no “law” that says how often they will happen.
A similar point was made in a very recent post by Mark Husband on Jason Holt Insights (without perhaps appreciating the point by Rodney Brooks about Moore’s Law coming to an end):-
Moore’s law, conceived in the 1970’s continues to accurately predict the doubling of processing power every two years; if computing power was, in any way connected to or causative of the “End of Lawyers” it would have done so by now!
Moore’s Law has been a constant reference point for legal futurists at legal technology conferences for many years now. They just might have to revisit their PowerPoint slides on this one.
Next up from Rodney Brooks is the sixth deadly sin of Hollywood scenarios:-
The plot for many Hollywood science fiction movies is that the world is just as it is today, except for one new twist.
In Bicentennial Man, Richard Martin, played by Sam Neill, sits down to breakfast and is waited upon by a walking, talking humanoid robot, played by Robin Williams. Richard picks up a newspaper to read over breakfast. A newspaper! Printed on paper. Not a tablet computer, not a podcast coming from an Amazon Echo–like device, not a direct neural connection to the Internet.
It turns out that many AI researchers and AI pundits, especially those pessimists who indulge in predictions about AI getting out of control and killing people, are similarly imagination-challenged. They ignore the fact that if we are able to eventually build such smart devices, the world will have changed significantly by then. We will not suddenly be surprised by the existence of such super-intelligences. They will evolve technologically over time, and our world will come to be populated by many other intelligences, and we will have lots of experience already.
Yet again, this fits the world of AI and law. Look again at the headlines and articles I highlighted earlier. We are given a picture of law firms today with no particular changes other than the Robot Lawyer suddenly descending amongst them.
In the interview with Rodney Brooks I mentioned earlier he also draws out analogies using Star Trek and 2001: A Space Odyssey.
Whilst on the topic of Hollywood scenarios, in a post on Lawyers Weekly in Australia, Giri Sivaraman uses Blade Runner analogies to conclude why he is not scared of robot lawyers.
Likewise, in his post What We Get Wrong About Technology, Tim Harford notes that in Blade Runner Rick Deckard uses a payphone, rather than a smartphone, to call the replicant Rachael — much as Rodney Brooks points to Richard Martin in Bicentennial Man reading a newspaper rather than reading from a tablet or listening to a podcast.
Harford points out that:-
Forecasting the future of technology has always been an entertaining but fruitless game. Nothing looks more dated than yesterday’s edition of Tomorrow’s World. But history can teach us something useful: not to fixate on the idea of the next big thing, the isolated technological miracle that utterly transforms some part of economic life with barely a ripple elsewhere. Instead, when we try to imagine the future, the past offers two lessons. First, the most influential new technologies are often humble and cheap. Mere affordability often counts for more than the beguiling complexity of an organic robot such as Rachael. Second, new inventions do not appear in isolation, as Rachael and her fellow androids did. Instead, as we struggle to use them to their best advantage, they profoundly reshape the societies around us.
Harford, like Brooks, concludes that Robots will not be taking away our jobs:-
A task-based analysis of labour and automation suggests that jobs themselves aren’t going away any time soon — and that distinctively human skills will be at a premium. When humans and computers work together, says Autor, the computers handle the “routine, codifiable tasks” while amplifying the capabilities of the humans, such as “problem-solving skills, adaptability and creativity”.
The last of the seven deadly sins of AI predictions from Rodney Brooks is speed of deployment. He says:-
A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products.
Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine.
This will be so true in law firms. I have written before about BigLaw having to sweat their legacy technology systems. Whilst some will be dabbling in AI (and I understand this includes some buying an AI license from a Legal AI vendor just to say they have AI even if they are not using it!) it will be a long time before it is embraced to a universal extent. DLA Piper have admitted that they are only using “about 1%” of the overall potential for legal AI technology.
Much of the actual AI use in law firms today centres around e-discovery. Most law firms (outside BigLaw) are simply not involved in that area. The use case for AI across most legal services that most law firms provide has yet to be demonstrated or proven in any meaningful way. I have previously highlighted this point around AI presentations at legal technology conferences. Such conferences really should have keynotes from the likes of Rodney Brooks.
On the general question about robots taking away our jobs from us (legal or otherwise) it is brought out in the post Every study we could find on what automation will do to jobs, in one chart that:-
In short, although these predictions are made by dozens of global experts in economics and technology, no one seems to be on the same page. There is really only one meaningful conclusion: we have no idea how many jobs will actually be lost to the march of technological progress.
Indeed, where do these predictions come from and what are they actually based on?
So to sum up. There is much hype out there about robots taking over the work of lawyers. Apply the seven deadly sins that Rodney Brooks has so well enunciated to any articles you read on Legal AI to debunk the hype and see the wood for the trees. AI is a tool that lawyers can and will usefully employ, and one they should only use when it is relevant, necessary and cost effective to do so. There is also much technology currently in their arsenal that they could be using to greater advantage before they even consider looking at AI.
Legal AI hypesters may consider me and “a smattering of militants with a more conservative nature” to be spoilsports “writing off the whole thing as hype”, though we apparently “seem to have been left adrift by reality”. I think Rodney Brooks has hit the nail on the head about the reality, and I will happily stick with his viewpoint for now and the foreseeable future.
What do you think?
Image Credits: The Daleks from Dr Who © BBC; C3PO and R2D2 from Star Wars © Lucasfilm; Harry Potter © Warner Bros; and Blade Runner Replicants © Warner Bros