Dr Angus Young, Senior Lecturer and Deputy Programme Director of LLM (CFL), Faculty of Law, The University of Hong Kong, provides an overview of the PCPD’s recent guidance on addressing AI risks for personal data privacy, and examines the issues of accountability, human oversight and the prevention of harm from a governance perspective.

Highlights

  • the PCPD has recently published guidance on a model framework to address the challenges posed by AI to personal data privacy
  • AI risks are particularly acute in relation to privacy and data protection as AI and machine learning functions are dependent on data
  • whether an AI system causes harm, and what constitutes harm, are important issues for the governance professional to address, as are the issues of human oversight and accountability

Even though there are no artificial intelligence (AI) laws or regulations in Hong Kong, this does not mean that AI risks are remote. It is evident from the recent CrowdStrike outage that technological malfunctions can cost businesses tens of billions of dollars in liability, as we saw at the airports, banks, financial institutions and hospitals that were paralysed by blue screens across millions of computers. Whilst this episode was not caused by AI per se, it highlights the consequences of a single point of failure arising from our growing technological dependency. Such vulnerabilities and risks are magnified with the use of AI due to its high levels of automation and autonomous decision-making. This is particularly acute in relation to privacy and data protection as AI and machine learning functions are dependent on data. As such, how data is used, collected and shared by AI must be in line not only with legal obligations, but also with ethical principles to safeguard against possible harm to an individual or organisation.

how data is used, collected and shared by AI must be in line not only with legal obligations, but also with ethical principles to safeguard against possible harm to an individual or organisation

PCPD’s guidance on addressing AI risks

In June this year, the Office of the Privacy Commissioner for Personal Data, Hong Kong (PCPD) released its Model Personal Data Protection Framework to address the challenges posed by AI to personal data privacy.

This 56-page model framework is comprehensive in scope. It is divided into four parts – AI strategy and governance, risk assessment and human oversight, customisation of AI models and implementation and management of AI systems, and communication and engagement with stakeholders. In the foreword, Professor Wong states that, ‘Adopting a risk-based approach, the Framework provides a set of practical and detailed recommendations for local enterprises intending to procure, implement and use AI systems.’ Risk is clearly central to this framework, and governance will inevitably be focused on managing AI risk. However, certain aspects raise open-ended questions to do with human judgement.

Before this article delves into the complications associated with a risk-based approach to governing AI when it comes to data protection, it would be helpful to examine in greater detail the principle of accountability as delineated in the PCPD’s model framework. At the outset, the PCPD recognises that AI adoption is an increasing trend and that businesses are more likely to purchase AI solutions from vendors, rather than developing in-house solutions, for cost reasons. This translates into a set of recommendations on best practices for businesses procuring, implementing and using ‘off the shelf’ AI solutions. In part one of the model framework, it is recommended that organisations formulate an AI strategy to demonstrate the commitment of top management to accountability. The framework stipulates that this is linked to the adoption of ethical principles regarding the procurement, implementation and use of AI. Even if top management sets out such requirements in relation to vendors, it might not be easy for management with non-technology backgrounds to select from the many vendors, some of which could make inflated assurances or, worse, engage in AI washing. Thus, being accountable for third-party products could expose directors to greater uncertainty if the AI system becomes a single point of vulnerability.

Whilst the PCPD recommends that an internal governance structure with sufficient resources, expertise and authority – such as an AI governance committee or similar body – should be established to ensure effective human oversight, in practice the cost and human resource implications could be considerable. The PCPD’s guidance goes on to state that, ‘Human oversight is a key measure for mitigating the risks of using AI.’ Moreover, it advises that, ‘A high-risk AI system should take a “human-in-the-loop” approach, where human actors retain control of the decision-making process to prevent and/or mitigate errors or improper output and/or decisions made by AI.’

At the other end of the spectrum, it also notes that, ‘An AI system with minimal or low risks may take a “human-out-of-the-loop” approach, whereby the AI system is given the capability to adopt output and/or make decisions without human intervention to achieve full automation or fully automated decision-making.’ For AI systems in the middle of the risk spectrum, organisations ‘may consider a “human-in-command” approach, where human actors make use of the output of the AI system, and oversee the operation of the AI system and intervene whenever necessary’. Such a risk-based approach sounds neat and manageable – however, the assumptions might be overly simplistic.

AI risks and the prevention of harm

A recent article by a group of 25 scientists, published in May in the authoritative journal Science, argues that, ‘Rigorous risk assessment for frontier AI systems remains an open challenge owing to their broad capabilities and pervasive deployment across diverse application areas’ (Y Bengio et al, 2024). What they recommend in terms of mitigating risk would require regulators to clarify the ‘legal responsibilities that arise from existing liability frameworks, and to hold frontier AI developers and owners legally accountable for harms from their models that can be reasonably expected to arise from deploying powerful AI systems whose behaviour they cannot predict. Liability, together with consequential evaluations and safety cases, can prevent harm and create much-needed incentives to invest in safety.’ Therefore, an important part of risk management is the prevention of harm.

Looking at the European Union (EU)’s Artificial Intelligence Act 2024 (AI Act), the notion of harm is regulated under Chapter 2, Article 5, which sets out a number of prohibitions against certain specified AI practices. AI practices that could cause, or are reasonably likely to cause, significant harm to a person or a group of people are banned. As for high-risk AI systems, the AI Act imposes certain requirements on their providers under Chapter 3, Articles 8 to 17. This echoes the prescriptions of the PCPD’s model framework, with additional requirements such as technical documentation, record-keeping and putting in place a quality management system. Accordingly, the AI Act frames its prohibition in terms of practices that cause or result in significant harm.

Earlier, in 2019, the European Commission’s independent High-Level Expert Group on AI published a guideline to promote trustworthy AI, with a focus on securing ethical and robust AI. Under section 2.2 of Chapter 1, this guideline lists the four ethical principles in the context of AI systems, the second of which is the prevention of harm. The relevant text is as follows: ‘AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity, as well as mental and physical integrity. AI systems and the environments in which they operate must be safe and secure. They must be technically robust and it should be ensured that they are not open to malicious use. Vulnerable persons should receive greater attention and be included in the development, deployment and use of AI systems. Particular attention must also be paid to situations where AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information, such as between employers and employees, businesses and consumers, or governments and citizens. Preventing harm also entails consideration of the natural environment and all living beings.’

Therefore, the objective of ‘prevention of harm’ in this guideline is to ensure that AI operates safely and securely, and that it does not cause or exacerbate harm, whether to individuals or collectively, including tangible harm to social, cultural, political and natural environments.

In 2021, the PCPD also released its guidance on the ethical development and use of AI. One of the seven ethical principles outlined in this guidance is ‘beneficial AI’, under which ‘AI should provide benefits to human beings, businesses and the wider community. Provision of benefits encompasses prevention of harm. Where the use of AI may cause harm to stakeholders, measures should be taken to minimise the probability and severity of the harm.’ It is clear from this text that prevention of harm is important, but the guidance also acknowledges that harm is not always totally preventable. In such cases, minimising the severity of the harm would be the more appropriate course.

More importantly, to better appreciate the notion of harm prevention and minimisation, as well as its impact on governance – especially for directors – one would have to explore the roots of harm in law. Interestingly, from a legal perspective, harm is closely associated with the philosophy of criminal law. According to Simester and von Hirsch (2011), the harm principle refers to conduct involving damage to another person’s interests. Furthermore, the analysis should not neglect to demonstrate why one person’s conduct against another is wrongful. The concept of ‘wrongful’ draws its foundations from moral norms of right and wrong. A counterargument holds that harm by itself is insufficient to constitute a crime; rather, it is ‘wrongs’ that deserve punishment (Alexander and Ferzan, 2011). Nevertheless, the state has a moral and legal obligation to prevent harm from being inflicted on individuals (Ashworth and Zedner, 2014). In essence, prevention and even minimisation of harm is a positive duty that the state cannot ignore, and, as it is morally reinforced, the expectations of society cannot be disregarded either.

Implications for governance

From a governance viewpoint, does a director under the duty of care as set out in section 465 of the Companies Ordinance (Cap 622) have a positive duty to prevent and minimise harm when it comes to the use and application of AI? The answer is not straightforward. There are a few questions to consider.

  • First, appraise the mitigation of risks under the PCPD model framework, where human oversight might be required. To determine whether such oversight is needed, one must first decide what level of risk the AI system poses. Whilst the model framework implies that risk can be quantified and measured, risk is a dynamic concept, since it can emerge unexpectedly, giving rise to the question of whether the unexpected is unforeseeable. If we assume that the unexpected is probable, it can be argued that the probable is foreseeable – and foreseeability is a vital element of the duty of care in tort.
  • Second, to what extent can prevention be realised? A key part of this is to demonstrate that reasonable steps have been taken to prevent harm and, if harm is discovered, to take active steps to minimise the extent of the harm inflicted, as expected mitigatory actions.
  • Third, the skills of individual directors need to be assessed. The level of technical knowledge or competence becomes a material consideration.
  • Fourth, consider delegation to, and reliance on, the expertise of key staff with technical skills. The issue here is whether directors are adequately monitoring the work of those key staff and whether the reliance on those employees is reasonable.

Ultimately, the four questions outlined above are intended as reference points for governance professionals to consider as good governance practices, because there is thus far no authoritative local case law on the interpretation of section 465 in relation to AI.

Conclusion

In its model framework guidance, the PCPD states that, ‘Organisations should ensure compliance with the requirements under the Personal Data (Privacy) Ordinance (PDPO), including the six Data Protection Principles in Schedule 1 thereto, when handling personal data in the process of procuring, implementing and using AI solutions.’ It further notes that, ‘Good data governance goes hand in hand with governance for trustworthy AI. By incorporating the principles of AI governance and “privacy-by-design” into their existing Personal Data Privacy Management Programme and/or data management practices, organisations can reinforce their commitment to personal data privacy protection and demonstrate their accountability.’

In short, good AI practice and accountability means ensuring compliance with the PDPO. There is also an implied expectation to make certain that steps to prevent and minimise harm from AI to individuals and organisations are constantly observed and reassessed, as a matter of good governance practice.

good AI practice and accountability means ensuring compliance with the PDPO

Dr Angus Young, Senior Lecturer and Deputy Programme Director of LLM (CFL), Faculty of Law

The University of Hong Kong
