Ling Ho, Partner of the APAC IP practice, Clifford Chance, provides an overview of legislative developments in the Chinese mainland and Hong Kong relating to artificial intelligence (AI), and discusses the differing strategic approaches to addressing AI-related challenges in the two jurisdictions.


  • in the fast-evolving AI landscape, governments globally are considering whether to adapt existing legal frameworks to better address AI, or to develop AI-centric legislative and regulatory frameworks
  • the PRC, which is a first mover in relation to AI regulation, has been steadily putting in place targeted rules and regulations to ensure responsible use of AI
  • Hong Kong has up to now relied on existing laws, together with sectoral and subject area guidance from key regulators, to deal with AI, but there are signs that the government is starting to take a more proactive approach to the issue

The People’s Republic of China (PRC, the Chinese mainland) is a first mover in relation to AI regulation. It has had city or regional regulations in place for some time and, more recently, has enacted national AI regulations targeted at particular types of AI services or use. This article explores AI-related legislative developments in Greater China.

The global AI regulatory landscape

We are seeing powerful advances in AI and machine learning – including the introduction of generative AI with an apparent ability to create and personalise. Synergies with other developing technologies, such as neurotechnology and quantum computing, are expected to further expedite AI developments. This presents both vast opportunities and a range of legal, ethical and practical challenges. Organisations exploring AI opportunities are navigating a patchwork of overlapping laws and, in some cases, sector-specific regulation as they develop their AI strategies within their broader ethical, compliance and risk frameworks.

This landscape is evolving as governments globally consider whether to adapt existing legal frameworks to better address AI, with some countries developing AI-centric legislative and regulatory frameworks. A watershed moment will be the promulgation of the European Union's (EU) AI Act, which is expected shortly. It will introduce a risk-based framework for AI governance across the AI supply chain, with application beyond the EU and serious penalties for non-compliance.

Within this global landscape of evolving AI legal frameworks, the developments across Greater China demonstrate differing strategic approaches in addressing AI-related challenges. The approach varies from steadily putting in place targeted rules and regulations for AI in the Chinese mainland to, at the other end of the spectrum, reliance on existing laws overlaid with sectoral and subject area guidance from key regulators in Hong Kong. The common theme appears to be that governments are closely monitoring the fast-evolving developments in AI and maintaining an agile approach. In this article, we examine in more detail the AI-related legislative developments in the Chinese mainland and Hong Kong.

Emergence of the legal architecture for AI in the Chinese mainland

The PRC has been steadily putting in place rules and regulations to ensure responsible use of AI. Currently, the regulatory approach is agile and targets specific areas or uses of AI where lawmakers consider this to be necessary. This approach also means that the legislative landscape tends to be fragmented and overlapping, although the concepts underlying the regulation may be similar.

Generative AI

The GenAI Measures. In August 2023, the Cyberspace Administration of China (CAC) – the PRC’s cyberspace security and internet content regulator – released provisional measures targeting content generation (including the generation of text, images, and audio and video content) using generative AI (the GenAI Measures). The GenAI Measures apply to any person that utilises generative AI technology to provide services to the public in the Chinese mainland, including those providing services indirectly through application programming interfaces (APIs). When considering the potential impact of the GenAI Measures, businesses should also be aware of the Personal Information Protection Law, the extraterritorial effect of which is triggered if the behaviour of individuals in the Chinese mainland is being analysed and assessed.

Key provisions of the GenAI Measures include the following. Generative AI service providers must optimise algorithms to prevent the generation of inappropriate content (for example, content that is discriminatory or inaccurate), and must suspend or terminate services if such content or other improper use of the technology is discovered. Service providers capable of mobilising or influencing social viewpoints or public opinion are also required to complete a CAC security assessment and be ready to respond to relevant regulators in relation to the source of the training data used, as well as the algorithms and technical systems adopted. A service agreement must be put in place between the providers and users of generative AI services, and the generative AI service provider must establish a complaints-handling procedure.

The NISSTC labelling and security requirements for generative AI service providers. Aimed at facilitating the practical implementation of the GenAI Measures, the PRC’s National Information Security Standardization Technical Committee (NISSTC) – a government standards-setting body, of which one supervisor is the CAC – released practical guidelines for tagging or labelling of generative AI-created content (as required by the GenAI Measures) in August 2023, followed by security requirements in February 2024. The security requirements deal with, among other things, the source of training data, AI model security or safety (specifically, the accuracy and reliability of content generated and model transparency), wider security or safety measures (this encompasses various aspects of safety such as protection of minors and AI-generated content labelling and moderation) and security assessment.

The emerging AI framework 

Other limbs of the PRC’s emerging AI framework include:

Regulating deepfakes. The PRC has introduced regulation on deep synthesis data and technology – defined in the provisions as technology using generative and/or synthetic algorithms such as deep learning or virtual reality to produce text, graphics, audio, video or virtual scenes – which took effect in January 2023. The relevant provisions target illegal activity, such as the production and dissemination of false news, which endangers national security or infringes others’ rights. They regulate deep synthesis service providers, providers of technical support and users.

Rules on recommendation algorithms. Provisions on managing recommendation algorithms came into effect in the Chinese mainland in March 2022. The provisions apply to any entity that uses algorithm recommendation technologies to provide internet information services within the PRC. Service providers are required to ensure the fair and ethical use of such technology.

Ethical principles and related measures. The PRC introduced national-level guidance in the form of Opinions on Strengthening the Governance of Scientific and Technological Ethics in March 2022. The guiding tenet is enhancement of human well-being, with the opinions requiring the establishment of an ethics review committee for organisations engaged in certain activities, specifically in the areas of AI, as well as life sciences and medicine. Specific to financial institutions, the People’s Bank of China correspondingly issued its Guidelines for Science and Technology Ethics in the Financial Sector in order to steer ethical governance in the sector. In September 2021, the National New Generation Artificial Intelligence Governance Professional Committee issued a model code of ethics, intended as an example to be adapted by organisations in light of their actual needs.

The AI position in Hong Kong 

To date, Hong Kong has relied on existing laws, together with sectoral and subject area guidance from key regulators, to deal with AI, with the government closely monitoring evolving developments. There are signs, however, that the government is starting to take a more proactive approach. In his response to a question regarding AI posed in Hong Kong’s Legislative Council in January 2024, Professor Dong Sun, JP, the Secretary for Innovation, Technology and Industry, highlighted two areas the government is currently exploring:

  1. the government has commissioned the InnoHK research centre to study and suggest appropriate rules and guidelines covering the accuracy, responsibility and information security aspects of AI technology and its application, and
  2. the government is studying the copyright issues arising from the development of AI technology, such as infringement issues stemming from the use of others’ copyright for training purposes, and will conduct a consultation in 2024 to further explore enhancement of the existing protection provided by the Copyright Ordinance.

While we await these developments, we will discuss the AI guidance from key regulators currently applicable in Hong Kong.


Guidance from the data privacy regulator

The Office of the Privacy Commissioner for Personal Data (PCPD) calls for companies to review and critically assess the implications of any AI system on data privacy and ethics and, in particular, to follow the Guidance on the Ethical Development and Use of Artificial Intelligence, issued in August 2021. This guidance refers to internationally recognised AI ethics principles, including accountability, human oversight, transparency and interpretability, fairness and data privacy, as well as reliability, robustness and security. In February 2024, the PCPD published the results of its review of 28 organisations to understand their collection and processing of personal data in the use of AI and their AI governance structures. Of these, 21 organisations were found to use AI in their day-to-day operations, with 19 having established AI governance frameworks. Interestingly, only 10 organisations collected personal data through AI products and services. The PCPD reminded organisations of their responsibility to ensure data security in the development and use of AI systems.

Guidance from the securities regulator

In the financial services sector, a speech by the Head of Intermediaries of the Securities and Futures Commission (SFC) at the Web3 Festival in April 2023 emphasised that generative AI, as a novel technology, has its own limitations and flaws, and that it is therefore vital to harness its benefits in a responsible way. The CEO of the SFC had this to say regarding generative AI at the Hong Kong Investment Funds Association annual conference in June 2023: ‘As a regulator, the SFC is guided by our philosophy to promote the responsible deployment of technology… firms must… make sure clients are treated fairly. We expect licensed corporations to thoroughly test AI to address any potential issues before deployment and keep a close watch on the quality of data used by the AI. Firms should also have qualified staff managing their AI tools, as well as proper senior management oversight and a robust governance framework for AI applications. For any conduct breaches, the SFC would look to hold the licensed firm responsible – not the AI.’

These same themes are contained in earlier SFC guidance, including the Guidelines on Online Distribution and Advisory Platforms published in July 2019, dealing with the use of AI in the context of online distribution of investment products and ‘robo-advice’ (namely, automated investment advice), and the circular on algorithmic trading published in December 2016.

Guidance from the banking regulator 

In November 2019, the Hong Kong Monetary Authority (HKMA) published one circular on High-level Principles on Artificial Intelligence and another on Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions. The principles set out are consistent with global themes for responsible use of AI, including that boards and senior management are accountable for AI-related outcomes, that banks must ensure the explainability and ongoing monitoring of AI applications so as to produce fair and ethical outcomes, and that good quality data must be used and personal data safeguarded. Relatedly, after a thematic examination of algorithmic trading (which may or may not involve AI), the HKMA published guidance in March 2020, which, in addition to reiterating the need for proper governance and regular review of algorithms, also discussed requirements for robust pre-trade controls such as risk limits and tolerance, proper ‘kill’ functionality to suspend trading, business continuity and incident handling, and proper documentation.

Guidance from the insurance regulator 

The Insurance Authority (IA) considered how the current regulatory framework applies to AI chatbots in its periodical newsletter, Conduct in Focus (May 2023 issue). In terms of licensing the use of a chatbot in the insurance process, the IA cited the potential application of its Guideline on Enterprise Risk Management (GL 21), Guideline on Cybersecurity (GL 20) and Guideline on Outsourcing (GL 14). The IA emphasised the need for comprehensive testing under tight governance controls before deployment; clear disclosure of the chatbot’s limitations, use and training dataset, as well as the use, storage and retention of data; and reporting controls and contingency plans. The IA emphasised an insurer or insurance intermediary’s responsibility for a chatbot’s output, as well as their overarching conduct and ethics requirements (including treating customers fairly and corporate governance requirements in the Code of Conduct).

Practical steps for AI strategy development or enhancement

When identifying and exploring opportunities for the use of AI, having multidisciplinary teams involved to ask the right questions to support responsible, informed decision-making is crucial. The starting point is to map the use of AI, understand the legal frameworks and risks, and develop appropriate oversight principles and robust governance programmes to mitigate those risks. Organisations will also need to identify appropriate decision-makers, look at their wider governance structures and processes, and consider their AI-related communications. Although the legal landscape for AI is evolving – across Greater China, the Asia Pacific region and globally – now is the time to develop AI legal and ethical strategies and risk-management frameworks.


Ling Ho, Partner, APAC IP practice

Clifford Chance

This article was adapted from ‘AI: the evolving legal landscape in APAC’. The full article can be found on the Clifford Chance website.