
AI and Chinese Legal Landscape

December 28, 2025

China’s Multi-Layered Framework for Regulating Generative AI

China’s generative AI regulation explained: core rules, labelling requirements, technical standards and sector-specific compliance risks.

China has taken a structured but flexible approach to regulating generative AI. Its framework is built on multiple regulatory instruments operating at different levels, including national legislation, administrative regulations, mandatory and recommended technical standards and sector-specific rules. Together, these instruments form a dynamic system that allows regulators to respond quickly to technological developments without unduly constraining innovation.

This multi-faceted model aims to ensure that generative AI services are developed, deployed and made available to the public in a lawful, secure and socially responsible manner within mainland China.

Core Regulatory Instruments

At the heart of China’s generative AI framework are two key instruments:

  • The Interim Measures for the Management of Generative Artificial Intelligence Services, effective from August 2023, set out the principal regulatory obligations for providers of generative AI services and, in certain respects, their users. These measures establish baseline requirements covering security, content governance and service management.
  • Complementing the Interim Measures are the Basic Security Requirements for Generative Artificial Intelligence Services, issued by the National Cybersecurity Standardization Technical Committee. First published in February 2024, these requirements provide detailed guidance on security controls, risk management and compliance practices expected of generative AI service providers.

Together, these instruments define the core compliance architecture for AI-related business activities in China.

Content Governance and Labelling Obligations

China has placed particular emphasis on transparency and content governance in relation to AI-generated outputs. The Measures for Labelling AI-Generated and Synthesised Content, published in March 2025 and effective from 1 September 2025, impose clear labelling obligations on both generative AI service providers and users. These measures are intended to reduce the risks of misinformation, manipulation and public confusion arising from AI-generated content.

The Labelling Measures operate alongside earlier rules, including the Administrative Provisions on Deep Synthesis in Internet-Based Information Services (effective January 2023) and the Administrative Provisions on Recommendation Algorithms in Internet-Based Information Services (effective March 2022). Together, these instruments address the ethical and social risks associated with AI-driven content creation, modification and dissemination.

Supporting Technical Standards and Guidelines

A number of supplementary standards and technical guidelines give practical effect to the core regulations by specifying detailed compliance requirements.

Key examples include:

  • Cybersecurity Technology – Labelling Method for Content Generated by AI, published in February 2025 and effective from 1 September 2025. This is a mandatory national standard setting out technical requirements for labelling AI-generated content.
  • Cybersecurity Technology – Basic Security Requirements for Generative Artificial Intelligence Services, published in April 2025 and effective from 1 November 2025. Although formally recommended rather than mandatory, these requirements are widely treated as a benchmark for compliance in practice.

These standards provide more prescriptive technical direction and are routinely relied upon by regulators and industry stakeholders when assessing compliance.
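By way of illustration only, the sketch below shows one way a provider's output pipeline might combine an explicit, user-visible notice with an implicit, machine-readable provenance record attached to generated content. The notice wording, metadata field names and helper function are hypothetical assumptions made for this example and are not drawn from the labelling standard; the binding technical specifications are those set out in the standard and the Labelling Measures themselves.

```python
from dataclasses import dataclass, field
import json

# Hypothetical illustration only: the notice text and metadata keys below are
# assumptions made for this sketch, not the wording or schema prescribed by
# the national labelling standard.
AI_CONTENT_NOTICE = "This content was generated by artificial intelligence."

@dataclass
class GeneratedOutput:
    text: str                     # the model's generated text
    provider: str                 # service provider identifier
    model: str                    # model identifier
    metadata: dict = field(default_factory=dict)

def apply_labels(output: GeneratedOutput) -> GeneratedOutput:
    """Attach both an explicit (user-visible) and an implicit
    (machine-readable) label to a piece of generated content."""
    # Explicit label: prepend a visible notice so end users can see that the
    # content is AI-generated.
    output.text = f"{AI_CONTENT_NOTICE}\n\n{output.text}"
    # Implicit label: record provenance information as metadata that travels
    # with the content.
    output.metadata.update({
        "ai_generated": True,
        "provider": output.provider,
        "model": output.model,
    })
    return output

if __name__ == "__main__":
    sample = GeneratedOutput(
        text="Example answer produced by a generative AI service.",
        provider="ExampleCo",
        model="example-model-v1",
    )
    labelled = apply_labels(sample)
    print(labelled.text)
    print(json.dumps(labelled.metadata, indent=2))
```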

Key Compliance Obligations for Generative AI Providers

Under the regulatory framework, generative AI service providers are subject to a range of obligations across the AI lifecycle. At the input stage, providers must ensure the legality of training data and models. This may require conducting security assessments in accordance with the Basic Security Requirements and filing relevant algorithms where services have public opinion attributes or social mobilisation capabilities.

Providers are also required to adopt effective measures to improve data quality, including ensuring accuracy, truthfulness, objectivity and diversity of training data. At the output stage, transparency and public interest safeguards are central. Obligations include informing users about the nature of the AI services provided, implementing anti-addiction measures where relevant and establishing accessible complaint and reporting mechanisms.

To prevent misuse, providers must ensure that AI-generated content is properly labelled and that unlawful content is promptly removed and reported in accordance with applicable rules.

Sector-Specific AI Rules

In addition to cross-cutting regulations, sector regulators in areas such as healthcare, automotive, finance and education have issued targeted measures, guiding principles and draft rules for consultation.

These sector-specific instruments address the heightened risks associated with AI deployment in areas that directly affect public safety and fundamental rights. They typically impose additional obligations relating to privacy protection, data security, transparency and accountability, reflecting the particular sensitivities of each sector.

Key Takeaway

China’s approach to generative AI regulation is characterised by depth, adaptability and sectoral differentiation. By combining high-level rules with detailed technical standards and industry-specific guidance, the regulatory framework seeks to balance innovation with robust risk management. Businesses developing or deploying generative AI in China should closely monitor regulatory developments and ensure that governance, data management and content controls are aligned with this evolving compliance landscape.
