
The debate around artificial intelligence (“<span class="news-text_medium">AI</span>”) in international arbitration has, to date, centred largely on its use by legal practitioners. Less attention has been given to how arbitrators themselves could (or should) deploy AI and Generative AI (“<span class="news-text_medium">GenAI</span>”) tools in performing judicial functions. The rapid progress of machine learning and predictive analytics presents opportunities to increase efficiency, but also raises legal and ethical challenges for those entrusted with adjudicative responsibility.
In England and Wales, no legislation expressly regulates the use of AI in arbitral proceedings. The <span class="news-text_italic-underline">Arbitration Act 1996</span>, recently amended by the <span class="news-text_italic-underline">Arbitration Act 2025</span>, remains silent on the subject. Instead, parties and arbitrators must look to soft-law instruments and institutional rules for guidance.
This regulatory gap creates a flexible but uncertain environment. It leaves room for arbitrators to experiment with AI tools in areas such as tribunal constitution, decision-making support, legal research and administrative tasks, though always within the limits imposed by fairness, impartiality and procedural integrity.
The threshold question is whether an AI system could serve as an arbitrator. The answer varies by jurisdiction. Scotland and France, for example, explicitly require arbitrators to be natural persons. Germany’s legislation implies the same, though not in direct terms.
England and Wales falls into this latter category. While the <span class="news-text_italic-underline">Arbitration Act</span> does not expressly state that arbitrators must be human, provisions such as section 24 (removal of arbitrators for incapacity or lack of qualifications) clearly presuppose human qualities. This makes it difficult to argue that an AI arbitrator could presently be appointed without legislative reform.
Even if legally permissible, there remain ethical questions. Could an AI arbitrator satisfy the duty of fairness and impartiality that underpins the Act? The current consensus suggests not. For now, human judgment remains a statutory and normative requirement.
AI tools may have a more immediate application in arbitrator selection. Platforms already exist that predict candidates’ likely leanings based on past decisions. In investor-state arbitration, where concerns about systemic bias are well-documented, such tools could promote greater diversity and balance in tribunal appointments, provided they are carefully designed and responsibly deployed.
There is broader acceptance that arbitrators can use AI and GenAI to assist with specific legal and administrative tasks. Guidance from courts in England and Wales, including the rollout of Microsoft’s Copilot Chat for judicial office holders, signals that AI-assisted drafting and document management are becoming part of judicial life.
Institutional rules also support this approach. Under the LCIA Rules 2020, tribunals may make procedural orders regarding technology use (Article 14.6). By analogy to the delegation of tasks to tribunal secretaries (Article 14A), AI could be used for:
Caution, however, is critical. The CIArb’s 2025 Guideline warns against delegating tasks that influence procedural or substantive decisions. If a tribunal relied on AI output containing inaccuracies without verification, it could expose its award to challenge under sections 68 and 69 of the <span class="news-text_italic-underline">Arbitration Act</span>. For now, the safer ground is using AI for purely administrative functions such as managing correspondence, organising case files, or assisting with logistical tasks.
Ultimately, accountability lies with the tribunal. Section 33 of the <span class="news-text_italic-underline">Arbitration Act</span> imposes a duty to adopt fair procedures, and section 1 makes party autonomy subject only to such safeguards as are necessary in the public interest. These principles mean that arbitrators cannot delegate their adjudicative responsibility to AI.
To manage risk, arbitrators are encouraged to:
AI and GenAI tools hold promise for arbitration, particularly in streamlining tribunal administration and supporting legal analysis. However, their role in decision-making remains constrained by statutory implications, ethical concerns and the “black box” problem of AI reasoning.
In England and Wales, the message is clear: AI can support but not supplant the arbitrator. By embedding transparency, supervision and party consent into their practices, tribunals can explore the benefits of these technologies while safeguarding the integrity of the arbitral process.