September 24, 2025

CIETAC Issues China’s First AI Guidelines for Arbitration

CIETAC’s 2025 AI Guidelines mark a first in Asia, outlining principles for responsible AI use in arbitration while preserving fairness and autonomy.

On 18 July 2025, CIETAC unveiled provisional Guidelines on the Use of Artificial Intelligence in Arbitration (the “<span class="news-text_medium">AI Guidelines</span>”). These guidelines represent the first normative framework on AI in arbitration published by an arbitral institution in either China or the wider Asia-Pacific region. While not incorporated into the CIETAC Arbitration Rules, the AI Guidelines provide an important reference point for practitioners, arbitrators and parties as AI becomes increasingly prevalent in dispute resolution.

Potential Applications of AI in Arbitration

The AI Guidelines acknowledge various possible uses for AI in arbitral proceedings. They note that AI tools may assist with proofreading, translation and transcription of documents and hearing records, thereby improving efficiency in managing case materials. They may also provide drafting support for procedural orders and selected portions of arbitral awards, streamlining the work of tribunals. Beyond drafting, AI can be employed in managing documents and case files, helping to organise, retrieve and systematise large volumes of material in complex disputes.

The Guidelines further recognise that AI has potential in the collection, review and analysis of evidence, particularly in cases involving substantial technical or factual records. They also identify a role for AI in preparing witness statements and formulating cross-examination questions, supporting parties in hearing preparation. Finally, AI may serve as an auxiliary tool for conducting legal research and assisting with the selection of arbitrators, drawing on data-driven analysis to inform party or tribunal decisions. These examples underscore that AI is being positioned as an <span class="news-text_medium">auxiliary tool</span>, rather than a decision-maker, in the arbitral process.

Benefits and Risks of AI Use

The AI Guidelines adopt a balanced approach, recognising both the potential advantages of artificial intelligence and the risks associated with its application in arbitration. On the one hand, AI promises to increase efficiency by automating time-consuming tasks, enhancing the quality of drafting, improving the accuracy of data analysis and accelerating the review of large volumes of evidence. These benefits contribute not only to faster proceedings but also to a reduction in overall costs for the parties involved.

On the other hand, the Guidelines caution that the use of AI carries significant risks. Concerns include breaches of confidentiality and data security, inaccuracies or biases embedded in algorithms, and the so-called “black box” problem, whereby the inner workings of AI models remain opaque and difficult to explain. These issues raise questions about transparency and accountability. In addition, there is the possibility of enforcement challenges, as courts or regulators may be hesitant to uphold arbitral awards where reliance on AI undermines confidence in the integrity of the decision-making process.

Foundational Principles

The guidelines rest on three overarching principles:

  1. <span class="news-text_medium">Party Autonomy:</span> Parties are free to determine the extent to which AI may be used, whether to allow or prohibit it, and how disclosure of AI use should be handled.
  2. <span class="news-text_medium">Auxiliary Role:</span> AI may assist with procedural or technical tasks, but arbitral tribunals remain responsible for legal reasoning and the final award.
  3. <span class="news-text_medium">Good Faith:</span> Parties retain responsibility for the truthfulness and legality of submissions, regardless of whether AI tools have been employed.

Guidance for Arbitral Tribunals

Before adopting AI tools, tribunals are advised to:

  • assess the necessity and proportionality of AI use;
  • weigh efficiency gains against risks;
  • evaluate the accuracy and security of the tool; and
  • consider the broader regulatory framework in the relevant jurisdiction.

Importantly, tribunals must not allow AI use to compromise a party’s right to be heard and must independently analyse facts, apply the law and provide reasoning in their awards.

Recommended Risk Mitigation Measures

The AI Guidelines suggest a number of practical steps that parties, tribunals and institutions can adopt to minimise the risks associated with the use of artificial intelligence in arbitration. Parties are encouraged to address the issue at the contract stage by including express provisions in their arbitration agreements on whether AI may be used and, if so, under what conditions. Tribunals, in turn, are advised to seek the parties’ views on AI in procedural orders or at pre-hearing conferences, ensuring that its use is transparent and agreed upon from the outset of the proceedings.

The Guidelines also recommend reliance on CIETAC’s own secure platforms, particularly for services such as electronic filing and AI-assisted transcription, to safeguard confidentiality and data integrity. In addition, they underline the importance of ongoing training and education for arbitrators, counsel and parties, aimed at building a realistic understanding of AI’s capabilities, limitations and the broader regulatory framework governing its deployment.

Taken together, these measures are designed to establish confidence that while AI may be employed to enhance efficiency, its use remains controlled, proportionate and consistent with the principles of fairness and due process.

Broader Context and Outlook

The AI Guidelines reflect a wider international trend: while AI is increasingly relied upon for research, document review and factual analysis, it should not replace arbitral discretion or legal reasoning. The guidelines serve as a starting framework rather than a rigid rulebook, recognising that the rapid pace of AI development makes detailed prescriptive rules premature.

As China positions itself as a leader in both AI technology and international arbitration, CIETAC’s move may signal further institutional initiatives, both within China and across Asia, to regulate AI in dispute resolution.

Key Takeaways for Foreign Parties

  • Foreign companies involved in China-related arbitration now have initial guidance on how AI may be incorporated into proceedings.
  • Awards remain the product of tribunal reasoning; AI cannot substitute for arbitral discretion.
  • Parties drafting arbitration clauses should consider expressly addressing AI use to avoid uncertainty.
  • Foreign parties must be alert to China’s stringent data protection standards when deploying AI tools.
  • CIETAC’s guidelines mirror global arbitration debates and will likely influence similar initiatives elsewhere.

Conclusion

The release of CIETAC’s AI Guidelines in July 2025 represents a landmark development at the intersection of AI regulation and arbitration practice. While provisional and non-binding, the Guidelines offer practical direction to parties, counsel and tribunals navigating the opportunities and risks of AI in arbitral proceedings. For foreign companies, the message is clear: AI will play a growing role in arbitration in China, but only within carefully drawn boundaries that preserve fairness, transparency and party autonomy.
