AI literacy is the new standard. Are you meeting it? – LexisNexis
Do you understand the tool you’re using?
In a High Court case in 2025, 18 of 45 submitted case citations turned out not to exist. The AI had invented them, and nobody had checked.
The claimant’s solicitor said she hadn’t read the cases herself; she’d relied on AI to do it for her. This isn’t just a high-profile one-off. It’s a clear example of what happens when AI output enters the workflow without a verification step.
A High Court judge issued a formal warning about the integrity of AI-generated legal work, with implications for the administration of justice and public confidence.
These concerns are reflected in our January 2026 survey of UK-based legal professionals, where 69% said they’re worried new lawyers lack verification and source-checking skills.

AI literacy isn’t knowing what the tool can do. It’s knowing what you still need to verify.
The risk isn’t in using the tool. It sits in the workflow, in what gets skipped when people are moving fast.
The literacy gap: when adoption outpaces verification
This is the literacy gap: adoption has outpaced understanding. The tools arrived faster than the training. Many legal professionals are now using AI regularly without a clear framework for when to trust it, when to question it, and when to put it down entirely.
In short: AI is accelerating the workflow, but it isn’t replacing the checking standard.
That’s not a criticism. It’s where the profession is. The question is whether it stays there. And the gap is a professional risk, not a matter of personal preference. AI doesn’t transfer liability. It just creates new places for errors to hide.
UK institutions are addressing this directly. The Courts and Tribunals Judiciary has published guidance on AI use. In practice, it reinforces a simple point: AI use still requires verification and professional judgement. What changes with AI isn’t the existence of mistakes. It’s the shape of them. They can look confident. They can look complete. They can be delivered at speed, and repeated at scale. So the margin for unchecked error shrinks at the same time as the volume of work produced increases.
That’s why AI literacy isn’t an optional extra. It’s becoming the standard of competent legal work. AI literacy isn’t learned through a policy document. It’s learned in supervision: what gets checked, what gets sent back, and what gets allowed through.
If this is showing up in your team’s workflow, The mentorship gap report goes deeper on what good verification looks like in practice.
A three-question standard for competent AI use in legal work
Building AI literacy starts with three questions. Ask them every time.
Where does this come from? Not all AI is built the same way. A tool grounded in authoritative legal sources is a different thing entirely to one drawing on the open internet. Knowing which you’re using is the starting point.
When was it last updated? Law moves. If what you’re relying on isn’t current, your advice can drift without you noticing. That’s true with human memory too. AI can make drift harder to spot, because the output arrives confident and complete.
Does this citation actually exist? Yes, this still needs saying. Check it. Every time.
The legal professionals pulling ahead aren’t avoiding AI out of caution or adopting it without thinking. They’re building judgement alongside it. They understand it well enough to know when to trust it, and when to push back. That’s a different skill set to simply knowing how to use the tools. It takes longer to build. It’s also significantly harder to replicate.
That’s the standard. Are you meeting it? If AI use is increasing in your team, the question is whether verification is increasing with it.
The mentorship gap report goes deeper on how legal teams are building judgement alongside AI, including the behaviours and supervision habits that reduce avoidable error. Download the report and explore the full insights today.



