
AI is becoming an increasingly powerful tool in legal practice, offering opportunities to streamline research, draft documents and manage litigation more efficiently. However, as recent cases have demonstrated, reliance on generative AI (“<span class="news-text_medium">GenAI</span>”) tools without adequate verification can lead to serious professional failings. The English courts have now issued one of their clearest warnings yet: misuse of AI in legal proceedings poses significant risks to the administration of justice and will not be tolerated.
In <span class="news-text_italic-underline">R (Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin)</span>, the Divisional Court raised serious concerns about the use of GenAI tools in litigation. The court considered two cases under the “Hamid” jurisdiction where lawyers had presented material containing non-existent or inaccurate authorities, apparently generated through AI tools. Although contempt proceedings were not pursued, the court referred the lawyers involved to their regulators and issued a wide-ranging warning that misuse of AI threatens the integrity of the justice system.
The Hamid jurisdiction, established in <span class="news-text_italic-underline">R (Hamid) v Secretary of State for the Home Department [2012] EWHC 3070 (Admin)</span>, allows the court to regulate its own procedures and to ensure that lawyers comply with their duties to the court. Both matters involved the suspected use of GenAI to produce legal arguments or witness statements containing false citations.
In <span class="news-text_italic-underline">R (Ayinde) v Haringey LBC</span>, a barrister and a solicitor from Haringey Law Centre filed judicial review grounds citing five non-existent cases. When challenged, they gave inconsistent explanations and failed to address the issue adequately. Costs were awarded against them, and the matter was referred to the Bar Standards Board (“<span class="news-text_medium">BSB</span>”) and the Solicitors Regulation Authority (“<span class="news-text_medium">SRA</span>”).
In <span class="news-text_italic-underline">Al-Haroun v Qatar National Bank QPSC</span>, a solicitor relied on 45 authorities, 18 of which did not exist, in witness statements drafted with input from his client. He admitted failing to verify the sources, apologised and reported himself to the SRA.
In both matters, the court noted the serious professional lapses involved but, given the circumstances, declined to commence contempt proceedings.
The Divisional Court, comprising Dame Victoria Sharp P and Johnson J, emphasised the professional and ethical duties of lawyers. These include the obligation not to mislead the court, to verify authorities and to present arguments responsibly. The court noted that GenAI tools can produce outputs that appear authoritative but may include fabricated case law, inaccurate quotations, or false assertions.
The judges stressed that lawyers who use AI tools for research or drafting have a professional duty to check accuracy against authoritative sources such as the National Archives, official Law Reports and databases from reputable publishers. Reliance on clients, trainees, or unsupervised AI-generated material does not absolve practitioners of responsibility.
The court also highlighted its powers under the Hamid jurisdiction: admonition, referral to regulators, costs sanctions, striking out, or contempt proceedings. In serious cases, misuse of AI could even constitute perverting the course of justice.
The court warned that misuse of GenAI poses a direct risk to <span class="news-text_medium">public confidence in the justice system</span>. It called on heads of chambers, managing partners and regulators to take urgent action to ensure practitioners are properly trained, supervised and equipped to use AI responsibly. The judgment will be circulated to the Bar Council, the Law Society and the Council of the Inns of Court, with an invitation to consider further safeguards.
Importantly, the court underlined that freely available GenAI tools are not reliable for legal research. Their plausible but inaccurate outputs make them unsuitable without rigorous verification. Guidance from the Bar Council, BSB, SRA and judiciary has already cautioned against uncritical reliance on AI, but the court found that guidance alone is insufficient and must be backed by enforceable standards.
This judgment represents one of the most direct judicial interventions to date on the risks of AI misuse in litigation. While contempt proceedings were not pursued, the court left no doubt that future cases may lead to severe sanctions. Law firms and chambers must act now to implement policies on AI use, provide training and ensure robust supervision.
For practitioners, the lesson is clear: AI can assist but never replace legal judgment, and unverified AI outputs must not be placed before the court.
<span class="news-text_medium">Case:</span> <span class="news-text_italic-underline">R (Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin)</span> (6 June 2025, Dame Victoria Sharp P and Johnson J).