Roman Kubiak looks at how artificial intelligence is likely to reshape the legal landscape

Just as many of us have begun to get our heads around digital assets and cryptocurrencies, in sweeps artificial intelligence (AI), specifically generative AI.

It has been simultaneously hailed as the tool with which to drive labour costs down while raising productivity, and one of the biggest existential crises to face humanity. Many of our social media and news feeds have been awash with stories, targeted ads and posts about both the wonders and horrors of AI.

Perhaps the biggest catalyst for the recent discussions was the highly publicised launch of GPT-4, the latest iteration of the large language model behind OpenAI’s chatbot ChatGPT, which can generate text responses remarkably similar to human speech. Alongside demonstrating that ability at the launch event, OpenAI also spread fear among those in the professional services industry by having GPT-4 solve a complex tax calculation.

ChatGPT is just one of a number of generative AI systems capable of producing text, images, audio, video and other media, usually in response to user prompts. But what does all of this mean for us?

Friend or foe?

These generative AI systems have led many in the legal sector and beyond to voice concerns that AI will soon replace our jobs. On 26 March 2023, Joseph Briggs and Devesh Kodnani at Goldman Sachs released a report, The Potentially Large Effects of Artificial Intelligence on Economic Growth, which postulated that “roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work.”

Based on their estimates, some 44% of current work tasks undertaken in the legal industry are highly exposed to automation by AI.

And just a day after GPT-4’s launch, PwC announced a strategic alliance with AI startup Harvey to “help generate insights and recommendations based on large volumes of data”, which it hopes will help “identify solutions faster”.

PwC is keen to stress that outputs will be “overseen and reviewed by PwC professionals”, and its press release even ends with what some may see as a note of assurance to staff and clients: “AI will not be used to provide legal advice to clients and AI will not replace lawyers, nor be a substitute for professional legal services.”

It’s life, Jim, but not as we know it

So what exactly do we mean when we talk about AI? In its strictest sense, the term ‘artificial intelligence’ means a system or systems which mimic human cognitive behaviour and functions.

In reality, AI is already playing a significant role in our day-to-day lives.

Whether you’re asking Amazon’s Alexa or Google Home to recommend a song, browsing through Netflix or posting an update on LinkedIn, chances are that AI is involved in some way, constantly gathering data about your preferences and habits to enhance the user experience and, frankly, boost sales and revenue.

Ever since Alan Turing introduced the Turing test in his 1950 paper Computing Machinery and Intelligence, society has raced to develop AI while at the same time prophesying our downfall as a result of it.

And this is where generative AI has been hitting the headlines. Its ability to produce remarkably human responses and engage in conversation (one Google engineer was put on leave after declaring his belief that Google’s LaMDA chatbot system was “sentient”), to render frighteningly realistic images (a so-called photograph of the Pope looking rather dapper in a full-length puffer jacket was found to have been generated using the AI tool Midjourney) and even to produce songs in the style of well-known artists (TikTok user ghostwriter977’s “heart on my sleeve”, since removed from Spotify and other streaming platforms, offered an impressive rendition of a duet between Drake and The Weeknd) has forced us to question how much of what we see, hear and even think is real and how much is the product of AI.

Judgment day

The view of Ge Wang, associate professor at Stanford University, is that “the cat is not going back in the bag”.

So have we crossed the Rubicon? Are we headed, inevitably, on a one-way trip towards our own self-destruction, the architects of our own downfall?

Or does this new brand of AI present a real opportunity for the legal sector?

In May 2018, the Law Society published its report on the issue as part of its “Horizon Scanning” series – Artificial Intelligence and the Legal Profession. That report, itself citing an earlier Law Society report – Capturing Technological Innovation in Legal Services (Chittenden, 2017) – noted a number of areas in which AI was already being applied, including:

  • document analysis, including due diligence, real estate transactions and GDPR compliance
  • contract analysis and risk reporting
  • real-time privacy policy drafting
  • legal advice resource based on natural language input
  • a clinical negligence decision support system
  • European Court of Human Rights case outcome prediction, in which the AI algorithm achieved a 79% accuracy rate
  • public legal education.

The report also considered the likely implications of AI for the legal profession, including:

  • changes in the nature of legal jobs
  • changes to organisational structures and business models
  • lower costs and alternative fee structures
  • in the shorter term, a reduction in lower-skilled legal jobs, known as “technological unemployment”
  • over time, the replacement of higher skilled roles. 

The last two have understandably caused the most concern among many in the sector. The report cites a LawGeex study which compared the results of a review of non-disclosure agreements by AI on the one hand and by 20 top US lawyers on the other. The AI system took 26 seconds to complete the review with a 94% accuracy rate in assessing risks, compared with 92 minutes and an 85% accuracy rate for the lawyers, placing us mere mortals rather firmly in the proverbial dark ages.

One step back…

However, if history has taught us anything, it’s that, over time, technological and human advances have the potential to benefit society as a whole. While there is a case to say that such advances have broadened the gulf between the haves and have-nots (not a debate for this article), many argue that they have brought a wealth of benefits.

With the industrialisation of the 1800s and mass production, goods became more affordable, specialist professions proliferated and quality of life improved for many. With this came trade unions, voting rights and higher wages.

In its June 2018 follow-up report for the Horizon Scanning series, Future Skills for Law, the Law Society observes that the lawyer of the future will need new skills. While recognising that some 800,000 jobs may have been lost over the past 15 years due to technology, it also cites figures from big four accountancy practice Deloitte that nearly 3.5 million new jobs have been created in the same period, bringing a wealth of opportunities for the current and, in particular, the next generation of lawyers.

The team at Goldman Sachs agree, anticipating that many of those whose jobs may be taken over by AI will eventually find new opportunities which emerge either directly from AI or in response to the higher labour demand generated by the productivity boost among employees supported by AI. The example they give is the creation of new jobs as a direct result of the dotcom revolution, such as webpage designers, software developers and digital marketing professionals, now a staple of many law firms. They suggest that 85% of employment growth over the past 80 years can be explained by advances in technology and their collateral benefits.

There are also benefits for those in senior roles, with AI having the potential to help businesses with scenario-based projections, to strategise and, ultimately, to make better decisions.

High risk, high gain

Generally, then, many commentators are optimistic, citing the boost which AI can bring to productivity and, along with it, to global output.

And if recent academic studies are to be believed, those firms which have embraced AI, the so-called ‘early adopters’, have seen an average 3.1% increase in annual worker productivity growth.

Of course, AI, and how quickly we adopt it, raises questions over ethics. Even the controversial tech billionaire Elon Musk, along with others, recently warned of the risks of embracing AI too quickly. However, given Mr Musk’s propensity for unpredictability (is that an oxymoron?), the fact that he co-founded OpenAI but resigned from its board some years ago, and reports that he is looking to launch a rival company to compete with OpenAI, even the non-sceptics may question his motives.

Nonetheless, it does raise questions around what it takes to be a lawyer. Many of those reading this will be regulated, will owe clear and strict duties to act in the best interests of their clients, will be officers of the Senior Courts of England and Wales and will be bound not to bring the profession into disrepute.

While there are those who sadly do not adhere to those duties and values, the vast majority do. 

In his 1942 short story Runaround, science fiction writer Isaac Asimov famously created his three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

He later added a fourth or, as he called it, zeroth law, which superseded the others:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Whether or not those rules are fit for purpose is something which is already being hotly debated. For the modern lawyer in the modern law firm, however, the key will be to ensure not only that our IT systems and infrastructure are both robust and secure enough to take on this technology but – just as, if not more, importantly – that we as a profession are ready to embrace this technology in a meaningful, positive, measured and responsible way.