
Is AI really here yet? Have we missed the boat? Are we struggling to keep up? 3Kites will help you

Kemp IT Law (Richard Kemp) and 3Kites (Jenni Tellyn and Paul Longhurst) provide a timely update on AI in legal.

Against the backdrop of Rishi Sunak’s AI Safety Summit and the seemingly daily pronouncements introducing AI-enabled everything, Paul starts by contrasting the current situation with our reaction to other significant technological shifts in the last 200 years.

Naysayers warned early train passengers of potential risks to the human body if propelled at 30, 40 or 50 MPH.  When, later in the 19th century, cars liberated travellers from railway tracks, the British government required that they be preceded by someone waving a red flag.  Jump forward a hundred years or so and Airbus introduced planes with three independently programmed computer systems, whereby decisions were put to all three and, if one disagreed, the other two overruled it.  These systems used code generated by computers, raising concerns that people might not understand how they worked … although the safety record of Airbus has proved this approach, which has evolved over time, to be spectacularly successful.

This is not to ignore the many accidents that pioneering travel has brought with it.  However, it would be hard to imagine life without trains, planes or automobiles today or to ignore the impact that removing them would have on economies across the globe.  My point here is that humans, often with good reason, are cautious of technological leaps and yet we have also learned to embrace these advances over time to the benefit of humankind.  I believe that we will look back on AI in the same way but that, as of today in 2023, we should be concerned about how these tools are used until we are sufficiently aware of their impact and how best to control them.


To date, we have seen law firms working on innovative solutions whilst, at the same time, banning their lawyers from using ChatGPT and other generative AI tools for client-facing work, and being wary of approaching clients for consent to do so for fear of opening cans of worms.  Clients, on the other hand, have sometimes been more than happy to use the likes of ChatGPT before approaching their law firms to check that the generated opinion or document is legally sound.  Whether this approach would in fact result in a cost or time saving is highly debatable, but client demand may be pushing firms to adapt, even where such firms aren’t yet fully embracing AI internally.  The dynamic of how clients may require firms to flex their legal services pricing to reflect the use of time-saving AI tools will be interesting to monitor as everyone becomes more savvy about what the tools can be deployed on and how much of a game changer they really are in practice.  So what’s the answer?  Jenni looks at what we are actually seeing law firms doing and what we might reasonably anticipate them implementing soon.

Law firms are experimenting.  Cautiously.  Many have gone out to their businesses (after their Managing Partners got excited about whether AI will be transformative) to ask for ideas on use cases to explore.  I’d imagine that over half the suggestions gleaned through these discovery exercises are for tools or functionality that already exist in the firm’s tech stack but haven’t been fully adopted, or that would be better addressed by tweaking human-led processes than by implementing an AI tool.  Some of the use cases which we will look back on as transformative in years to come may not have been thought of yet as firms continue to experiment.  And the experimentation extends beyond use cases into firms developing their own AI tools, either in-house or in partnership with vendors, trained on their own content in order to let users explore both capabilities and risks.

Given the innate conservatism of most lawyers, the risks of using AI tools are at the forefront of firms’ discussions, combined with a certain amount of cynicism.  Risks include fears that an AI tool outside the firm will use sensitive data, or that the tool will get something wrong which will be relied upon… embarrassingly.  Scepticism abounds – having to fact-check and source each statement that an AI tool, optimised for plausibility rather than accuracy, comes up with won’t in fact save a junior lawyer much time on the task they have set it.  Added to this scepticism is the concern that we might deskill our lawyers and business services teams, or strip them of their creative powers, by overusing AI assistants for their day-to-day work (like teenagers using ChatGPT to do their homework for them and compromising their critical thinking in the process).

The risks around the content which the AI tools crawl in order to generate their responses are real.  Copyright infringement cases are already emerging where websites have been scraped of copyrighted content without the authors’ permission.  And there are real concerns that businesses which build their own models, confined to scraping only their internal content, simply do not have enough volume of content in any given practice area for the tool to learn enough to become the powerful aid they would like it to be.

The golden rules firms seem to be adopting so far are, first, to think carefully about what goes into and comes out of the tools (what/whose data is being ingested, and ensuring that the outputs are carefully vetted before they are used in anger) and, secondly, to treat the AI tool like an enthusiastic junior who needs close supervision!  The use cases proving the most promising in trials are those where a more experienced lawyer already knows the answer they expect the tool to deliver (whether a summary of a long document or a précis of a meeting they actually attended) and can then use the tool to verify their thought process or to save a little time in pulling the draft together.  Though whether the tool can detect sarcasm or irony might be a limitation for meeting summaries!  Firms are very cautious about using the tools for legal research given the scope for catastrophe and time-wasting if they get things wrong.  That might be an area where firms leave the likes of the e-resources vendors to develop their own AI-enabled bolt-ons to their products and bear the potentially eye-watering attempts to increase subscription costs for these tools.

The extractive use cases are proving more fruitful than the creative/generative ones.  So, for example, using the tools to quickly pull title information out of large numbers of leases into a report format in a real estate context, or using AI tools to generate ideas for thought leadership opportunities rather than to draft the articles themselves, feels like safer territory for law firms.  Large language models work by predicting what the next word in the output should be, based on the training dataset the model has ingested.  The potential for superficially plausible gibberish to be created by this mechanism is currently too great.  Pure “creativity” from AI tools makes lawyers nervous!  And most clients don’t want a “creative” Facility Agreement, they want a short and accurate one!
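For readers who want a feel for why that mechanism can produce fluent nonsense, the toy sketch below (an illustrative assumption on our part, nothing like a real large language model or any legal AI product) picks each next word purely from the probabilities observed in a tiny “training” text, with no notion of whether the result is true:

```python
# Toy next-word prediction: a minimal sketch of the idea only,
# not how any real large language model or legal AI tool is built.
import random
from collections import defaultdict

training_text = (
    "the tenant shall pay the rent on the first day of each month "
    "the tenant shall keep the premises in good repair"
)

# Record, for each word, which words follow it and how often.
following = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    following[current_word].append(next_word)

def generate(start, length=10):
    """Build a sentence by repeatedly sampling a plausible next word."""
    output = [start]
    for _ in range(length):
        candidates = following.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))  # plausible, not necessarily accurate
    return " ".join(output)

print(generate("the"))
# Might print, say, "the tenant shall pay the premises in good repair" –
# it reads fluently, but it is not what the lease actually says.
```

The point of the sketch is simply that each word is chosen because it is statistically likely to follow the last one, not because it is correct – which is exactly why extractive tasks, where the source material constrains the output, feel safer than open-ended drafting.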

At base, we see firms looking at enhancing their services rather than replacing them so far, but it is early days in the evolution of the revolution.  Richard picks up the thread here, looking in particular at how to manage the firm’s risk with clients.

For law businesses not at the commodity services end, the risks with generative AI are evolutionary not revolutionary, to borrow Jenni’s phrase.  At the moment it’s cyber security, cloud and data protection that are keeping your insurer up at night, not ChatGPT… yet.  The US has always been the crucible for litigation of new tech, and generative AI is no different – for example, the current wave of copyright and IP infringement disputes around large language models based on wholesale scraping of the internet.

The narrative here at the moment is a familiar one:

  • make sure the firm’s engagement terms clear the lines with the client to use AI in the first place;
  • check there are no insurance ‘funnies’, eg specific terms in the firm’s insurance arrangements around use of AI;
  • assess in the engagement terms themselves any ‘compliance with law’ obligations, third-party breach and intellectual property terms particularly where imposed by the client;
  • articulate clearly and transparently what is happening and who’s doing what with client data – for other than ‘normal’ legal work, firms are increasingly using Statements of Work like any other professional services provider to set out the detail;
  • think about your client-facing service documentation – we haven’t yet got to the stage of detailed Product Descriptions and DPAs (Data Processing Addenda) at the other end of links nested in the firm’s documentation (like software providers), but it’s coming;
  • set expectations around service levels and performance – at this early stage of AI adoption, it’s fairly standard market practice to articulate that the AI is being used in beta or on a trial basis and the firm accepts no liability for data ingestion, use or output;
  • make sure the firm follows basic AI hygiene, eg avoiding bias and discrimination and ensuring reproducibility of results – whether AI is being used in beta or production, clients are likely to insist on this.

Independent consultants to the professional services sector.