
AI has transformed the legal profession by improving productivity, efficiency and the delivery of legal services. Yet its misuse has attracted the close attention of the courts, particularly where lawyers have relied on authorities that later proved to be fictitious. In recent months, both solicitors and barristers have been criticised for submitting pleadings, witness statements and applications that referred to cases which did not exist. These authorities appear to have been generated by AI tools.
In <span class="news-text_italic-underline">R (Ayinde) v London Borough of Haringey [2025] EWHC 1040 (Admin)</span>, the legal team relied on five non-existent cases and the barrister involved was unable to provide a satisfactory explanation when questioned. In <span class="news-text_italic-underline">Bandla v SRA [2025] EWHC 1167 (Admin)</span>, a former solicitor submitted numerous fabricated authorities when appealing against their strike-off. The Court struck out the appeal as an abuse of process, stressing the need for “decisive action to protect the integrity of the court’s processes”.
This trend is not entirely new. In <span class="news-text_italic-underline">SW Harber v HMRC [2023] UKFTT 1007 (TC)</span>, a litigant in person used AI-generated citations in a tax appeal. While that incident was initially regarded as exceptional, the rise in similar cases suggests a wider and rapidly growing problem. The Ayinde matter, together with a second case involving fabricated authorities, Al-Haroun v Qatar National Bank, prompted urgent intervention. The President of the King’s Bench Division invoked the Hamid jurisdiction, which allows the court to call lawyers to account for serious procedural or ethical failures. The judgment issued under this jurisdiction (<span class="news-text_italic-underline">Ayinde, R (On the Application Of) v Qatar National Bank QPSC & Anor [2025] EWHC 1383 (Admin)</span>) offers significant guidance on the role of AI in litigation and the professional obligations that apply.
The court emphasised that freely available generative AI tools are not capable of conducting reliable legal research: their output may appear coherent and plausible while citing authorities that do not exist. Lawyers who use such tools therefore have a professional duty to check any material generated against authoritative sources before relying on it in court.
The court also observed that, although regulators have issued guidance on the use of AI, current measures are inadequate, and further regulatory intervention is expected.