Robot mores by Damien Behan, Brodies

This article was also featured as an opinion piece in the December 2016 issue of Briefing. To read the issue in full, download Briefing. 

Inspired by the Fringe Festival that lands here in Edinburgh every summer, I was tempted to call this column “Aye, Robot” – concerned, as it is, with the rise of artificial intelligence (AI) in law firms. I think you’ll agree it’s best to leave jokes to the professionals.

AI is getting serious, though, and is in vogue – from spooky TV drama Humans, depicting a world where humanoid robots become conscious and run amok, to the likes of Stephen Hawking, Elon Musk and Bill Gates warning that uncontrolled AI could wipe out humanity. Closer to home, here in the legal sector, predictions that it will revolutionise the practice of law are on the rise. And we’ve seen some real-world, practical examples emerge.

But let’s step back and reflect on what AI truly means in this context. AI may be weak or strong. Strong AI tries to replicate human intelligence, and is the branch Hawking et al are concerned about. But it’s weak (or narrow) AI we’re seeing in the context of legal work – the application of computing resources and machine learning to the narrow focus of ‘understanding’ large bodies of content. It helps to think of AI intelligence not in the sense of our human intelligence, but the intelligence the security services rely on – information gathered, sifted and analysed to help guide decision-making.

Whereas ‘big data’ helps us to make sense of structured information, the legal AI being deployed now is about processing and analysing unstructured data in the form of written words, to derive meaning from it and to query it using natural language. The technologies behind Siri, Amazon Echo and Google’s search interface all use AI to better ‘understand’ the questions we ask – and to determine answers by analysis of vast oceans of content from multiple sources.

Machine learning is adept at understanding what ‘normal’ looks like, and then identifying variations – so it’s ideal for analysing, for example, a large volume of contracts and identifying which are non-standard and could therefore cause problems. As the name suggests, the machine learns what the documents contain, and extracts entities and concepts, without having to be told what the data is, or exactly what to do.
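To make that idea concrete, here is a minimal sketch of the ‘learn what normal looks like’ principle – not any vendor’s actual product, just an illustration. It builds an average word-frequency profile from a collection of contracts and flags documents whose vocabulary deviates sharply from it (the function names and the 0.7 threshold are assumptions for the example):

```python
from collections import Counter
from math import sqrt

def vectorise(text):
    """Bag-of-words frequency vector for a document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_non_standard(contracts, threshold=0.7):
    """Learn what 'normal' looks like (the average vocabulary of the
    collection), then flag documents that deviate from it."""
    centroid = Counter()
    for doc in contracts:
        centroid.update(vectorise(doc))
    return [doc for doc in contracts
            if cosine(vectorise(doc), centroid) < threshold]

standard = "the supplier shall deliver the goods within thirty days"
contracts = [standard, standard,
             "the parties waive all liability for wilful default"]
print(flag_non_standard(contracts))  # flags only the unusual clause
```

Real legal AI tools use far richer representations than raw word counts, of course, but the underlying move is the same: model the typical, then surface the atypical for a lawyer to review.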

It may feel new, but the use of such technology to understand large bodies of content has been evolving for many years in the e-discovery world – in technology assisted review. So what we’re seeing today is evolution rather than revolution.

In its current incarnations, at least, we’ve little to fear. And if a single lawyer can use AI to review, in a single day, documents that would take a team of 20 a month, it will free up time for work that computers can’t currently do: understanding the nuance of client problems, and resolving complex issues that aren’t well defined. So the rise of the robots is, for now, a positive move – even if, like me, they aren’t great at writing jokes … yet.
