
On 18 July 2025, CIETAC unveiled provisional Guidelines on the Use of Artificial Intelligence in Arbitration (the “<span class="news-text_medium">AI Guidelines</span>”). These guidelines represent the first normative framework on AI in arbitration published by an arbitral institution in both China and the Asia-Pacific. While not incorporated into the CIETAC Arbitration Rules, the AI Guidelines provide an important reference point for practitioners, arbitrators and parties as AI becomes increasingly prevalent in dispute resolution.
The AI Guidelines acknowledge various possible uses for AI in arbitral proceedings. They note that AI tools may assist with proofreading, translation and transcription of documents and hearing records, thereby improving efficiency in managing case materials. They may also provide drafting support for procedural orders and selected portions of arbitral awards, streamlining the work of tribunals. Beyond drafting, AI can be employed in managing documents and case files, helping to organise, retrieve and systematise large volumes of material in complex disputes.
The Guidelines further recognise that AI has potential in the collection, review and analysis of evidence, particularly in cases involving substantial technical or factual records. They also identify a role for AI in preparing witness statements and formulating cross-examination questions, supporting parties in hearing preparation. Finally, AI may serve as an auxiliary tool for conducting legal research and assisting with the selection of arbitrators, drawing on data-driven analysis to inform party or tribunal decisions. These examples underscore that AI is being positioned as an <span class="news-text_medium">auxiliary tool</span>, rather than a decision-maker, in the arbitral process.
The AI Guidelines adopt a balanced approach, recognising both the potential advantages of artificial intelligence and the risks associated with its application in arbitration. On the one hand, AI promises to increase efficiency by automating time-consuming tasks, enhancing the quality of drafting, improving the accuracy of data analysis and accelerating the review of large volumes of evidence. These benefits contribute not only to faster proceedings but also to a reduction in overall costs for the parties involved.
On the other hand, the Guidelines caution that the use of AI carries significant risks. Concerns include breaches of confidentiality and data security, inaccuracies or biases embedded in algorithms, and the so-called “black box” problem, where the inner workings of AI models remain opaque and difficult to explain. These issues raise questions about transparency and accountability. In addition, there is the possibility of enforcement challenges, as courts or regulators may be hesitant to uphold arbitral awards where reliance on AI undermines confidence in the integrity of the decision-making process.
The Guidelines rest on three overarching principles:
Before adopting AI tools, tribunals are advised to:
Importantly, tribunals must not allow AI use to compromise a party’s right to be heard and must independently analyse facts, apply the law and provide reasoning in their awards.
The AI Guidelines suggest a number of practical steps that parties, tribunals and institutions can adopt to minimise the risks associated with the use of artificial intelligence in arbitration. Parties are encouraged to address the issue at the contract stage by including express provisions in their arbitration agreements on whether AI may be used and, if so, under what conditions. Tribunals, in turn, are advised to seek the parties’ views on AI through procedural orders or at pre-hearing conferences, ensuring that its use is transparent and agreed upon from the outset of the proceedings.
The Guidelines also recommend reliance on CIETAC’s own secure platforms, particularly for services such as electronic filing and AI-assisted transcription, to safeguard confidentiality and data integrity. In addition, they underline the importance of ongoing training and education for arbitrators, counsel and parties, aimed at building a realistic understanding of AI’s capabilities, limitations and the broader regulatory framework governing its deployment.
Taken together, these measures are designed to establish confidence that while AI may be employed to enhance efficiency, its use remains controlled, proportionate and consistent with the principles of fairness and due process.
The AI Guidelines reflect a wider international trend: while AI is increasingly relied upon for research, document review and factual analysis, it should not replace arbitral discretion or legal reasoning. The Guidelines serve as a starting framework rather than a rigid rulebook, recognising that the rapid pace of AI development makes detailed prescriptive rules premature.
As China positions itself as a leader in both AI technology and international arbitration, CIETAC’s move may signal further institutional initiatives, both within China and across Asia, to regulate AI in dispute resolution.
The release of CIETAC’s AI Guidelines in July 2025 represents a landmark development at the intersection of AI regulation and arbitration practice. While provisional and non-binding, the Guidelines offer practical direction to parties, counsel and tribunals navigating the opportunities and risks of AI in arbitral proceedings. For foreign companies, the message is clear: AI will play a growing role in arbitration in China, but only within carefully drawn boundaries that preserve fairness, transparency and party autonomy.