A brave new world – China formulates a new AI global governance action plan, and issues draft ethics rules and AI labelling rules

Gabriela Kennedy, Partner, and Joanna KC Wong, Associate, Mayer Brown Hong Kong, explore the latest updates on AI governance in the Chinese mainland, and assess their implications for the national and global deployment of ethical AI technologies.

Highlights

  • China’s Global AI Governance Action Plan outlines priorities in innovation, data quality, open-source collaboration and AI safety, reflecting its ambition to influence international standards
  • the Draft Administrative Measures for the Ethical Management of AI Technology introduce formal ethics review requirements for AI projects that may pose risks to human, environmental or societal wellbeing
  • the Labelling Measures for AI-Generated Content establish explicit and implicit labelling obligations to ensure transparency and traceability of AI-generated materials

Artificial Intelligence (AI) development and deployment have been at the forefront of the national agenda in the Chinese mainland for quite some time, with the government identifying AI as a key driver for economic growth and technological advancement. Over the last couple of years, significant investment, research and widespread industry adoption have positioned the mainland at the forefront of global AI innovation.

The AI Plus initiative implementation guideline, issued in August 2025, sets ambitious goals for the country – the penetration rate of new-generation intelligent terminals and AI agents is targeted to exceed 70% by 2027 and 90% by 2030. This dynamic growth has prompted the introduction of AI-related regulations aimed at ensuring the safe, ethical and inclusive development and deployment of AI technologies nationwide. China has recently announced three significant developments in its rapidly evolving AI regulatory landscape with the release of:

  • the Global AI Governance Action Plan (AI Action Plan)
  • the Draft Administrative Measures for the Ethical Management of Artificial Intelligence Technology (Trial) (Draft Measures), and
  • the Measures for Labelling of AI-Generated Synthetic Content (Labelling Measures).

These initiatives reflect the country’s ambition to shape and influence international AI governance, while also establishing robust domestic safeguards for AI research, development and deployment. We discuss these recent updates and their potential impact on businesses operating in or engaging with the Chinese mainland across the AI value chain.

Global AI Governance Action Plan

On 26 July 2025, China issued its AI Action Plan at the World Artificial Intelligence Conference 2025. The AI Action Plan sets out intended action areas spanning innovation, infrastructure, open ecosystems, high‑quality data, green AI, standards and multi‑stakeholder governance. We highlight the key elements particularly relevant for businesses.

  • Innovation and industry adoption. Innovation and experimentation, international collaboration and the transformation of research outcomes into real-world applications are encouraged. Businesses are urged to participate in cross-border technological cooperation, adopt AI in various sectors (such as industrial manufacturing, healthcare, education and smart cities) and share best practices. This opens up opportunities for partnerships, technology transfer and market expansion.
  • Open-source and data sharing. The AI Action Plan calls for the development of secure, cross-border open-source communities and platforms to promote the sharing of resources, lower barriers to innovation and improve accessibility. It highlights the importance of open-source compliance and technical safety guidelines, and promotes the open sharing of non-sensitive development resources (such as technical documentation and API documentation). It also promotes enhanced compatibility between upstream and downstream products to foster a more inclusive and efficient AI environment.
  • High-quality data and privacy. The AI Action Plan promotes the lawful, orderly and free flow of data, supports the establishment of global data-sharing mechanisms and platforms, and encourages the creation of high-quality datasets. It places strong emphasis on safeguarding privacy and data security, while also prioritising the diversification of data to help eliminate discrimination and bias in AI systems.
  • Governance of AI safety. The AI Action Plan prioritises robust AI safety governance by calling for regular risk assessments, targeted prevention measures and the development of a widely recognised safety framework. It advocates for tiered management approaches, risk testing systems, and improved data security and personal information protection. The AI Action Plan also encourages stakeholders to explore the implementation of traceability management systems to prevent misuse of AI technologies.
  • Capacity building and inclusion. The AI Action Plan calls for AI capacity building through initiatives such as infrastructure development, joint laboratories, safety assessment platforms, education and training programmes, and the joint development of high-quality datasets. It also emphasises improving public AI literacy and skills to help bridge the digital divide.

The AI Action Plan focuses on new opportunities for industry collaboration, innovation and access to shared resources, while encouraging companies to adopt sustainable and inclusive AI development models. It also sets overarching standards for businesses to comply with in areas such as AI safety, governance, data protection and ethical practices, with a view to fostering a business environment that prioritises the responsible use of AI.

Draft Administrative Measures for the Ethical Management of AI Technology (Trial)

On 22 August 2025, the Chinese mainland’s Ministry of Industry and Information Technology, together with nine other central regulators and two national associations, issued the Draft Measures for public comment. The Draft Measures focus on fostering responsible AI innovation, enhancing ethical oversight and protecting the public interest in the development and use of AI.

Broad application

The Draft Measures apply to AI research, development and application within China that may pose ethical risks to life and health, human dignity, the environment, public order or sustainable development, as well as other AI activities subject to an ethics review under Chinese laws.

Organisations involved in regulated AI activities, including tertiary education institutions, research institutes, medical institutions and enterprises, are designated as ‘responsible entities’. Where feasible, these organisations shall establish independent AI technology ethics committees (Ethics Committees) that are adequately resourced and composed of experts in technology, ethics and law. Local or sectoral authorities may establish specialised AI ethics service centres (Ethics Service Centres), which are responsible for offering ethics review, training and advisory services.

Procedures and timeframe

AI projects governed by the Draft Measures will need to undertake an ethics review. This may be conducted either by the organisation’s own Ethics Committee or by a qualified Ethics Service Centre.

To initiate an ethics review, an application must be submitted with a detailed activity plan (including the research background and purpose, implementation plan, algorithmic mechanism, types and sources of the data involved, testing and evaluation methodology, intended outcomes and products, and the intended use case and target users). Applicants are also required to submit an ethics risk assessment and risk mitigation plan for the intended use, details of the potential risks of misuse or abuse of the AI technology, and a compliance undertaking.

Ethics Committees or Ethics Service Centres will determine whether to accept an application for an ethics review and, if accepted, shall conduct the review in accordance with applicable procedures. A decision in an ethics review shall be issued within 30 days. Once approval is granted, the responsible person for an AI project is required to promptly report any changes in ethical risks to the Ethics Committee or the Ethics Service Centre. The Ethics Committee or Ethics Service Centre will have ongoing oversight over approved AI projects, including follow-up reviews at intervals generally not exceeding 12 months, and will have the power to suspend or terminate AI projects if significant ethical risks arise.

The key focus areas for the AI ethics review include:

  • fairness and non-discrimination
  • robust and controllable system design
  • transparency and explainability of the algorithms, and
  • clear accountability through traceable processes.

The ethics review also examines the qualifications of project personnel, the scientific and social value of the research, the balance between risks and benefits, and the adequacy of risk controls and emergency response plans.

The Draft Measures introduce a ‘list of AI technology activities requiring expert second review’, which designates certain high-risk AI activities for mandatory expert re-examination following an initial review by the Ethics Committee or an Ethics Service Centre. Currently, the list includes human–machine integration systems that significantly affect human behaviour, emotions or health, algorithmic applications with the capacity to mobilise public opinion or shape social consciousness, and highly autonomous decision-making systems deployed in high-risk scenarios, such as those involving human health and safety. This list may be updated as regulatory needs evolve.

Businesses involved in AI research, development or services in the Chinese mainland should proactively evaluate their activities for potential ethical risks and determine whether their projects fall within the scope of the Draft Measures. Companies contemplating AI projects subject to the Draft Measures should begin to establish an Ethics Committee where feasible, compile thorough documentation for an ethics review, implement a robust risk assessment and mitigation framework, and maintain ongoing oversight and reporting mechanisms to address ethical issues as they arise throughout the life cycle of their AI initiatives.

Measures for Labelling of AI-Generated Synthetic Content

The Labelling Measures, released by the Cyberspace Administration of China and other authorities on 14 March 2025, took effect on 1 September 2025. The technical standard on AI content labelling, Cybersecurity Technology – Labelling Method for Content Generated by Artificial Intelligence (GB 45438‑2025) (Labelling Standard), also became effective on the same date. Together, the Labelling Measures and the Labelling Standard provide much-needed clarity on the content labelling requirements under the Interim Measures for the Administration of Generative Artificial Intelligence Services (GenAI Interim Measures).

Scope of application

The Labelling Measures apply to internet information service providers that use AI to generate text, images, audio, video, virtual scenes or other content – these service providers are already subject to the following existing regulations:

  • the Internet Information Service Algorithmic Recommendation Management Provisions (Algorithmic Provisions, in force March 2022), which impose obligations on algorithm transparency, fairness, content moderation and algorithm filing with the regulatory authority
  • the Internet Information Service Deep Synthesis Management Provisions (Deep Synthesis Provisions, in force January 2023), which regulate the use of deep synthesis technologies for internet information services, and
  • the GenAI Interim Measures (in force August 2023), which provide baseline obligations concerning training data legitimacy, personal information protection, algorithmic transparency, security assessments and model filing.

Key requirements

The Labelling Measures require both explicit and implicit labelling of AI-generated content (AIGC). Explicit labelling refers to labels that are added to AIGC or interactive scenario interfaces and are presented in a manner (such as through text, audio, graphics or other means) that can be clearly perceived by users. Service providers may refer to the Labelling Measures and the Labelling Standard for the specific operational and technical requirements for labelling each type of AIGC (text, audio, images, videos and virtual scenarios). Where AIGC can be downloaded, reproduced or exported, explicit labels must remain embedded within the file.

Implicit labelling refers to labels that are added to the data files of AIGC through technical means and are not easily perceived by users. An implicit label should be added to the metadata of the AIGC file and should include key information such as content attributes, the name or code/identifier of the service provider and a content reference number.
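
To illustrate, the short Python sketch below assembles these key fields and attaches them to a file’s metadata block. It is a minimal sketch only: the field names and helper functions are assumptions made for this article, and the authoritative field names, encoding and placement are those prescribed by the Labelling Standard (GB 45438‑2025).

```python
import json
import uuid

def build_implicit_label(provider_code: str) -> dict:
    """Assemble the key information the Labelling Measures require in an
    implicit label. Field names here are illustrative only; the
    authoritative names and encoding are defined in GB 45438-2025."""
    return {
        "content_attribute": "AI-generated",    # marks the file as AIGC
        "service_provider": provider_code,      # provider name or code/identifier
        "content_reference": uuid.uuid4().hex,  # unique content reference number
    }

def embed_label_in_metadata(file_metadata: dict, provider_code: str) -> dict:
    """Attach the implicit label to a file's metadata so that it travels
    with the file but is not easily perceived by users."""
    file_metadata["aigc_label"] = build_implicit_label(provider_code)
    return file_metadata

# Example: label the metadata of a generated image before it is served.
metadata = {"format": "png", "width": 1024, "height": 1024}
print(json.dumps(embed_label_in_metadata(metadata, "PROVIDER-001"), indent=2))
```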

The Labelling Measures also outline the obligations of service providers that offer online content distribution services with respect to AIGC. Specifically, they require these providers to verify whether implicit labels are present in the file metadata. If implicit labels are detected, the provider must add prominent explicit labels around the published content to clearly inform the public that the content is AI-generated. If no implicit label is found, but the user declares the content as AI-generated, the provider should still add an explicit label to alert the public that the content may be AI-generated. In cases where neither implicit labels nor user declarations are present, but the provider detects explicit labels or other signs of AI generation, the content should be identified as suspected AI-generated and labelled accordingly. For all such scenarios, the provider must also add relevant key information – such as content attributes, platform name or code and content reference number – into the file metadata. Additionally, providers are required to offer necessary labelling functions and to prompt users to declare whether their content includes AI-generated material.
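
The verification cascade described above reduces to a simple decision sequence, sketched in Python below. This is a simplified illustration rather than an official implementation: the function name, inputs and notice wording are assumptions for this article, and the detection and metadata write-back steps would in practice follow the Labelling Measures and the Labelling Standard.

```python
def explicit_notice(has_implicit_label: bool,
                    user_declared_aigc: bool,
                    other_signs_of_aigc: bool) -> str | None:
    """Return the explicit notice a content distribution platform should
    attach to published content, following the cascade in the Labelling
    Measures (simplified sketch; notice wording is illustrative)."""
    if has_implicit_label:
        # Implicit label found in the file metadata: content is AI-generated.
        return "AI-generated content"
    if user_declared_aigc:
        # No implicit label, but the uploader declared the content as AIGC.
        return "Content may be AI-generated"
    if other_signs_of_aigc:
        # Explicit labels or other indicators of AI generation detected.
        return "Suspected AI-generated content"
    # No labelling trigger present; no explicit notice required.
    return None

# In each triggered case the platform must also write the key information
# (content attributes, platform name or code, content reference number)
# back into the file metadata, as described above.
print(explicit_notice(False, True, False))  # -> "Content may be AI-generated"
```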

The Labelling Measures require all service providers to clearly specify in their user service agreements the methods, formats and standards for labelling AIGC, and to remind users to carefully read and understand the relevant content labelling requirements. Where a user requests the provision of AIGC without explicit labelling, the service provider may do so only after clearly outlining the user’s labelling obligations and other responsibilities in the user service agreement, and must retain relevant logs and information about the recipients of such content for no less than six months (see Articles 8 and 9 of the Labelling Measures). Users who disseminate AIGC through online platforms shall declare and use the labelling functions provided by the service provider. The Labelling Measures also prohibit any organisation or individual from maliciously deleting, altering, forging or concealing the required labels, or from providing tools or services to facilitate such actions, and from using improper labelling methods to infringe upon the lawful rights and interests of others.

Where the provisions of the Labelling Measures are violated, relevant regulatory departments such as the departments for internet information, telecommunications, public security and broadcasting may address such violations in accordance with the relevant laws, administrative regulations and departmental rules. In particular, overseas GenAI providers should be aware that the GenAI Interim Measures expressly empower the regulators to take technical measures (such as shutting down network access) against companies providing GenAI services to the Chinese mainland from overseas that have violated Chinese laws and regulations.

Takeaways

The recent updates on AI governance and content labelling in China mark a significant step toward fostering responsible, transparent and ethical AI development. Businesses developing or adopting AI technologies in the mainland should proactively review and update their internal policies, technical processes and product designs to prepare for compliance with the new requirements on ethical risk management and content labelling. Establishing dedicated AI governance committees, investing in staff training and integrating robust labelling and traceability mechanisms will be essential to mitigating risks and building trust with regulators and users. As the AI regulatory landscape in the Chinese mainland continues to evolve, businesses should keep an eye out for policy and regulatory developments and should proactively align their policies with emerging standards to effectively manage compliance risks.

Gabriela Kennedy, Partner, and Joanna KC Wong, Associate

Mayer Brown Hong Kong LLP

© Copyright Mayer Brown Hong Kong LLP, October 2025

The authors would like to thank Roslie Liu, Legal Practice Assistant at Mayer Brown Hong Kong LLP, for her assistance with this article.
