
On 22 December 2025, the Law Society of England and Wales provided <a href="https://www.lawsociety.org.uk/topics/blogs/westminster-update-law-society-gives-evidence-on-ai-and-human-rights" target="_blank" class="news-text_link">evidence</a> to the Joint Committee on Human Rights as part of its inquiry into AI and human rights. The session formed part of a broader series of Westminster discussions addressing AI regulation, access to justice and justice system reform.
The Law Society’s data and technology policy adviser, Dr Janis Wong, appeared before the Committee alongside Ellen Lefley of JUSTICE and Louise Hooper of Garden Court Chambers. Responding to questions from Lord Alton of Liverpool, Dr Wong highlighted ongoing regulatory challenges, including fragmentation, definitional uncertainty and the need to strike an appropriate balance between innovation and protection from harm.
Dr Wong emphasised that AI regulation should be grounded in clear principles to ensure that future frameworks are capable of addressing emerging harms. In response to questions from Baroness Lawrence, she noted that the UK could draw lessons from the <span class="news-text_italic-underline">EU Artificial Intelligence Act</span>, particularly its risk-based approach, prohibitions on certain uses and baseline transparency standards. She also observed that the Act applies extraterritorially to both developers and users.
Questions on transparency mechanisms prompted discussion of potential statutory tools, including registers of algorithms and mandatory labelling of AI-generated content. On data protection, Dr Wong expressed concern that the <span class="news-text_italic-underline">Data (Use and Access) Act 2025</span> may not offer sufficient safeguards in relation to automated decision-making, particularly given limited individual consent protections.
The panel also addressed liability across the AI lifecycle. It was noted that insurers are struggling to quantify AI-related harms and that governments should take a proactive approach to clarifying responsibility and ensuring that AI involvement can be identified when harm occurs.
The update also covered the appearance of the Lord Chancellor and Deputy Prime Minister David Lammy MP before the Justice Select Committee. He addressed proposals to restrict jury trials, citing the need to reduce delays for victims and modernise a system largely unchanged since the 1970s. He confirmed that a courts bill will be introduced in spring 2026.
Members raised concerns about criminal legal aid and workforce shortages, with Lammy pointing to increased funding for legal aid advocates, supported pupillages and rising court maintenance budgets. He also outlined plans for greater digitisation of the civil courts and further support for mediation in family law.
During separate parliamentary sessions, questions were raised about the impact of AI on human rights, including privacy, discrimination and transparency. While acknowledging the risks, the Prime Minister, Sir Keir Starmer MP, emphasised the potential benefits of AI in public services, including criminal justice, provided that safeguards and human oversight are maintained.
The discussions underline growing political focus on AI governance, transparency and accountability. For policymakers and practitioners, the emphasis is increasingly on principled regulation that protects rights while supporting innovation and ensuring the justice system remains effective and accessible.