Glenn Haley, Partner, and Sharon Chan, Associate, Bryan Cave Leighton Paisner, summarise key highlights from the recent guidance issued by the Office of the Privacy Commissioner for Personal Data (PCPD) on the use of artificial intelligence (AI).

Hong Kong now has its own set of recommended best practices for the development and use of AI, published in a guidance note issued by the PCPD. Businesses that intend to or have begun to use AI in their operations are advised to consider the risk levels of their respective AI systems and to implement the suggested measures for better protection of individual consumers.

Background

An AI system is a machine-based system that, for a given set of human-defined objectives, makes predictions, recommendations or decisions influencing real or virtual environments. The potential social and economic benefits of AI are significant: used responsibly, AI can drive innovation and improve efficiency across a wide range of fields.

However, given the nature of what an AI system does, it also comes with potential risks that no business should ignore. AI systems often involve the profiling of individuals and the making of automated decisions that have real impact on human beings, posing risks to data privacy and other human rights.

Against this background, calls for accountable and ethical use of AI have been on the rise in recent years.

In October 2018, the Global Privacy Assembly (GPA) – a leading international forum for over 130 data protection regulators from around the globe to discuss and exchange views on privacy issues and the latest international developments – adopted a Declaration on Ethics and Data Protection in Artificial Intelligence, endorsing six guiding principles to preserve human rights in the development of AI. Two years later, the GPA adopted a resolution sponsored by the PCPD of Hong Kong to encourage greater accountability in the development and use of AI. Various countries and international organisations have published their respective guidance notes to encourage organisations to embrace good data ethics in their operation and use of AI.

The PCPD published its Guidance on the Ethical Development and Use of Artificial Intelligence (the Guidance) in August 2021. This article summarises the key highlights from the Guidance.

From high-level values to ground-level practices

The Guidance sets out three broad Data Stewardship Values, which are translated into ethical principles and, in turn, into specific practical guidance.

The three Data Stewardship Values were first put forward by the PCPD in October 2018 in the Ethical Accountability Framework for Hong Kong. Businesses should recognise and embrace these core ethical values. These values should define how businesses carry out their activities and achieve their missions or visions.  

The three Data Stewardship Values entail the following:

  1. Be respectful of the dignity, autonomy, rights, interests and reasonable expectations of individuals.
  2. Be beneficial to stakeholders and to the wider community in the use of AI.
  3. Be fair in both processes and results:
  • ensure that decisions are made reasonably, without unjust bias or unlawful discrimination
  • establish accessible and effective avenues for individuals to seek redress for unfair treatment, and
  • treat like cases alike, justifying any differential treatment with sound reasons.

These three core values are linked to commonly accepted principles, such as accountability, transparency, fairness, data privacy and human oversight, which also appear in guidance notes published in other countries and by international organisations. The PCPD has fleshed these principles out into recommended ground-level practices (see Figure 1).

AI strategy and governance

The Guidance recommends that organisations that use or intend to use AI technologies formulate an AI strategy and set up internal policies and procedures specific to the ethical design, development and use of AI.

First and foremost, to steer the development and use of AI, organisations should establish an internal governance structure that comprises both an organisational-level AI strategy and an AI governance committee (or a similar body). The AI governance committee should oversee the entire life cycle of the AI system, from development and use through to termination. It should include a C-level executive to oversee the AI operation, as well as members from different disciplines and departments who collaborate in the development and use of AI.

Internal governance policies should clearly spell out the roles and responsibilities of personnel involved in the development and use of AI, and adequate financial and human resources should be set aside for the development and implementation of AI systems. Since human involvement is key, the Guidance also recommends providing relevant training to, and arranging regular awareness-raising exercises for, all personnel involved in the development and use of AI.

Risk assessment and human oversight

The Guidance stresses human oversight and the principle that human actors should ultimately be held accountable for the use of AI and for the decisions it makes. An appropriate level of human oversight and supervision, commensurate with the level of risk, should be put in place: an AI system that is likely to have a significant impact on stakeholders is considered high risk, and the higher an AI system's risk profile, the greater the level of human oversight it requires.

To determine the risk level, organisations should conduct risk assessments, taking into account the personal data privacy risks and other ethical impacts of the prospective AI system, before its development and use. The results should be reviewed and endorsed by the organisation's AI governance committee or body, which should then determine and put in place an appropriate level of human oversight and other mitigation measures for the AI system.
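
The Guidance leaves the mechanics of risk tiering to each organisation. Purely as an illustration of how an assessed risk level might be mapped to a commensurate level of human oversight, the following Python sketch scores risk from two assumed factors (impact on individuals and probability of harm) and assigns one of three commonly discussed oversight modes. The scoring formula, thresholds and mode names are illustrative assumptions, not requirements of the PCPD.

```python
from dataclasses import dataclass
from enum import Enum

class Oversight(Enum):
    # Commonly discussed oversight modes; labels are illustrative.
    HUMAN_IN_THE_LOOP = "a human approves every decision before it takes effect"
    HUMAN_OVER_THE_LOOP = "a human monitors and can intervene or override"
    HUMAN_OUT_OF_THE_LOOP = "fully automated, subject to periodic review"

@dataclass
class RiskAssessment:
    # Hypothetical factors; the Guidance leaves the exact criteria
    # to each organisation's own assessment process.
    impact_on_individuals: int  # 1 (minimal) .. 5 (severe, e.g. legal effects)
    probability_of_harm: int    # 1 (remote) .. 5 (likely)

    @property
    def score(self) -> int:
        return self.impact_on_individuals * self.probability_of_harm

def required_oversight(assessment: RiskAssessment) -> Oversight:
    """The higher the risk profile, the more human involvement."""
    if assessment.score >= 15:
        return Oversight.HUMAN_IN_THE_LOOP
    if assessment.score >= 6:
        return Oversight.HUMAN_OVER_THE_LOOP
    return Oversight.HUMAN_OUT_OF_THE_LOOP

# Example: a credit-scoring system with severe potential impact would
# require a human to review each decision before it takes effect.
print(required_oversight(RiskAssessment(impact_on_individuals=5,
                                        probability_of_harm=4)))
```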

Development of AI models and management of AI systems

To better protect data privacy, the Guidance recommends that organisations take steps to prepare the datasets that will be fed to their AI systems. Where possible, organisations should consider using anonymised or synthetic data, which carries no personal data privacy risk. The Guidance further recommends minimising the amount of personal data used: organisations should collect only the data relevant to the particular purposes of the AI in question, and strip away individual traits or characteristics that are irrelevant to those purposes.
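
To make the data-minimisation step concrete, here is a minimal Python sketch. The record fields, the list of purpose-relevant attributes and the choice of a salted SHA-256 hash are all illustrative assumptions; the Guidance does not prescribe any particular technique, and pseudonymised data of this kind may still constitute personal data, unlike fully anonymised or synthetic data.

```python
import hashlib

# Hypothetical raw record; field names are illustrative only.
record = {
    "customer_id": "C-10442",
    "name": "Chan Tai Man",
    "age": 34,
    "district": "Kowloon City",
    "marital_status": "married",  # assumed irrelevant to the stated purpose
    "purchase_history": ["A12", "B07"],
}

# Attributes assumed relevant to the AI system's stated purpose,
# e.g. a product-recommendation model.
RELEVANT_FIELDS = {"age", "district", "purchase_history"}

def minimise(record: dict, salt: bytes = b"rotate-me") -> dict:
    """Keep only purpose-relevant attributes and replace the direct
    identifier with a salted one-way hash (pseudonymisation)."""
    minimised = {k: v for k, v in record.items() if k in RELEVANT_FIELDS}
    minimised["pseudo_id"] = hashlib.sha256(
        salt + record["customer_id"].encode()).hexdigest()[:16]
    return minimised

print(minimise(record))  # name and marital_status are stripped away
```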

The quality of the data used should also be monitored and managed. 'Quality data' is reliable, accurate, complete, relevant, consistent, properly sourced and free from unjust bias or unlawful discrimination. Organisations should take appropriate measures to ensure the quality of their data and compliance with the requirements of the Personal Data (Privacy) Ordinance.
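
As a rough sketch of how some of these quality criteria might be screened in practice, the function below computes missing-value rates (completeness) and the distribution of a protected attribute (a crude proxy for unjust bias in a training set). All names and fields are hypothetical, and real deployments would apply far more sophisticated fairness and consistency diagnostics.

```python
def data_quality_report(rows: list[dict], required: set[str],
                        group_field: str) -> dict:
    """Report missing-value rates for required fields and the
    distribution of a protected attribute across the dataset."""
    total = len(rows)
    missing = {f: sum(1 for r in rows if r.get(f) in (None, ""))
               for f in required}
    groups: dict = {}
    for r in rows:
        groups[r.get(group_field)] = groups.get(r.get(group_field), 0) + 1
    return {
        "rows": total,
        "missing_rate": {f: m / total for f, m in missing.items()},
        "group_distribution": {g: n / total for g, n in groups.items()},
    }

rows = [
    {"age": 34, "district": "Kowloon City", "gender": "F"},
    {"age": None, "district": "Sha Tin", "gender": "M"},
    {"age": 52, "district": "Central", "gender": "F"},
]
print(data_quality_report(rows, required={"age", "district"},
                          group_field="gender"))
```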

Once the datasets are prepared, organisations will have to evaluate, select and apply (or design) appropriate machine learning algorithms to analyse the training data. The Guidance recommends rigorous testing of the AI models, together with measures that mitigate the risk of malicious input, so as to improve the AI system. It is also important to have mechanisms that allow human intervention and fallback solutions to kick in when necessary.
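
One common way to implement such a human-intervention mechanism, although not one specifically mandated by the Guidance, is to let the model decide only when it is sufficiently confident and to route everything else to a human reviewer. A minimal sketch, assuming a model that returns a label together with a confidence score:

```python
from typing import Callable

CONFIDENCE_THRESHOLD = 0.85  # illustrative; set according to the risk assessment

def decide(features: dict,
           model: Callable[[dict], tuple[str, float]],
           human_review_queue: list) -> str:
    """Return the model's decision only when it is sufficiently
    confident; otherwise fall back to human review."""
    label, confidence = model(features)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    human_review_queue.append(features)  # a human makes the final call
    return "PENDING_HUMAN_REVIEW"

# Stand-in model for demonstration purposes only.
def toy_model(features: dict) -> tuple[str, float]:
    return ("approve", 0.62) if features.get("edge_case") else ("approve", 0.97)

queue: list = []
print(decide({"edge_case": True}, toy_model, queue))   # PENDING_HUMAN_REVIEW
print(decide({"edge_case": False}, toy_model, queue))  # approve
```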

Like any automated system, an AI system always carries some chance (however slight) of malfunctioning or failing. It is therefore important that AI systems be subject to continuous human review and monitoring, with the approach varying according to the risk level. Measures proposed in the Guidance include keeping proper documentation, implementing security measures throughout the AI system's life cycle, periodically reassessing risks and retraining AI models, and establishing feedback channels for users of the AI system.
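
As an illustration of what continuous monitoring might look like in code, the sketch below flags a model for reassessment when the average confidence of live predictions drifts away from the training baseline. The drift measure and tolerance are simplified assumptions; production systems would use dedicated drift tests and would log every check to support the documentation trail the Guidance expects.

```python
import statistics

def drift_alert(training_scores: list[float],
                live_scores: list[float],
                tolerance: float = 0.1) -> bool:
    """Flag the model for reassessment when mean live confidence
    drifts from the mean confidence observed during training."""
    drift = abs(statistics.mean(live_scores) - statistics.mean(training_scores))
    return drift > tolerance

training = [0.91, 0.88, 0.93, 0.90]
live = [0.74, 0.71, 0.78, 0.69]
if drift_alert(training, live):
    print("Drift detected: trigger risk reassessment and retraining review")
```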

Stepping back from individual AI systems, organisations are also encouraged to conduct regular internal audits and evaluations of the wider technological landscape to identify gaps or deficiencies in their existing technological ecosystems.

Communication and engagement with stakeholders

Organisations that serve individual consumers using AI should ensure that the use of AI is communicated to the consumers in a clear and prominent manner, and in layman’s terms. Many of the recommendations set out in the Guidance mirror the data protection principles in place for the collection and use of personal data. For example, consumers should be informed of the purposes, benefits and effects of using the particular AI system. Consumers should also be allowed to correct any inaccuracies, provide feedback, seek explanation, request human intervention and opt out of the use of AI where possible. Where appropriate, results of risk assessments and reassessments also should be disclosed to consumers.

Concluding remarks

The Guidance contains detailed and practical guidelines, developed after careful consideration of relevant international agreements and practices. It provides useful direction to businesses that intend to adopt AI, as well as to those seeking to ensure that their current AI systems comply with the best practices endorsed by the Hong Kong government.

Although compliance with the Guidance is not mandatory, prudent businesses and organisations should implement its recommended measures to the extent possible, especially if their use or intended use of AI carries high data privacy or security risks.

The topic of AI has attracted increasing attention in the international arena in recent years. General artificial intelligence bills or resolutions were introduced, and in some cases enacted, in a number of US states in 2021, and in April 2021 the European Commission proposed regulating AI by legislative means. In addition to implementing the recommended measures, multinational companies should keep a close watch on the development of the law in this area in each of their relevant jurisdictions.

Glenn Haley, Partner, and Sharon Chan, Associate 
Bryan Cave Leighton Paisner LLP 

© 2021 Bryan Cave Leighton Paisner LLP