AI in corporate decision-making

Danny Kan, Corporate Partner, Stephenson Harwood, and Adjunct Assistant Professor, The Chinese University of Hong Kong, and Michael Mok, Associate, Stephenson Harwood, look at the benefits and challenges of AI to corporate governance, as well as the related responsibilities of directors.

Highlights

  • there is currently no legislation in Hong Kong regulating AI or the use of AI applications, although guidance does exist, primarily built upon ethical principles
  • AI is capable of performing certain human functions at a fraction of the time and cost, presenting an enticing prospect for companies in terms of efficiency gains and cost-saving opportunities
  • AI has the potential to significantly improve the overall corporate governance system and empower directors to make more informed decisions, especially when large data sets are handled, for the benefit of the company and its shareholders 

Artificial intelligence (AI) refers to a family of technologies that use computer programmes and machines to mimic the problem-solving and decision-making capabilities of humans. This is achieved through data analytics and automated decision-making or recommendations, thanks to AI’s self-learning capability. With the proliferation of data and increased computing power, AI has become capable of performing certain human functions at a fraction of the time and cost. Such efficiency gains and cost-saving opportunities present an enticing prospect for companies.

While companies enjoy the benefits AI brings, its use is not without risks – a wrong decision made by AI might have a profound impact on a company and on the persons to whom the decision relates. The consequences of such wrong decisions vary, but are the company’s directors responsible for them? Managing and mitigating those risks may require companies to adapt their corporate governance frameworks by implementing appropriate governance structures, processes and systems. Does AI itself have a role to play in that adaptation? Some companies have adopted AI to enhance corporate governance by empowering directors to make more informed decisions for the benefit of the company and its shareholders.

Corporate governance

‘Corporate governance’ refers broadly to the governance of companies. In many cases, the shareholders of a company are different from its directors, who run and manage the company, and accordingly their respective motivations and interests may be different. For this reason, the law has recognised a need to protect the shareholders and their interests. This protection is achieved through the imposition of duties and obligations on the directors, which all strive to ensure that directors act in a manner that does not prejudice the company or its shareholders. This also means that if things go wrong, it is the directors who will have liability.

Practical business uses of AI include banks making loan decisions and preparing loan documentation, healthcare companies making diagnoses and formulating treatment plans, and shops creating personalised shopping experiences for consumers. What if, in reliance on a decision made by AI, a company makes a wrong decision, for example, rejecting a loan application based on an algorithm whose logic is flawed, making a wrong medical diagnosis based on an AI engine trained on biased data, or recommending products and services based on a wrong analysis of consumer preference? Would the company and its directors be held responsible for the consequences of such decisions?

What is the relevant legal framework?

From a legal point of view, AI is no different from other technological developments. However, given the breadth of AI applications and the potential of AI systems to mimic human decision-making, the societal implications of AI are far more wide-ranging than those of previous technological advances.

The European Union’s Artificial Intelligence Act came into force on 1 August 2024, with the implementation of specific rules subject to a phased approach spanning beyond 2026. The Act is the first comprehensive regulation of AI by a major regulator anywhere in the world. It operates by assigning risk categories to AI applications and regulating their use accordingly – applications categorised as posing an ‘unacceptable risk’ are prohibited, while those categorised as ‘high-risk’ are subject to a more stringent risk management system and human oversight.

At present, Hong Kong does not have any legislation specifically regulating AI or the use of AI applications. The government launched a two-month public consultation in July 2024 on enhancing the existing copyright law to address potential infringement liability for certain AI-generated works, as well as the need for responsible and trustworthy AI systems, but these proposals touch only tangentially upon the governance of AI systems. This does not mean, however, that there is no guidance at all. The Office of the Privacy Commissioner for Personal Data, Hong Kong has issued two sets of useful guidance – in August 2021 and June 2024 – containing practical tips for companies in their use of AI applications. These tips are primarily built upon ethical principles, including transparency, interpretability, accountability, fairness, reliability, data privacy and human oversight. Other regulatory codes and publications also provide guidance on corporate governance generally, and the use of AI should be considered in the context of such guidance.

Can directors delegate their powers to AI?

Under common law and the Companies Ordinance, directors are charged with the duty to exercise reasonable care, skill and diligence, the duty not to delegate powers except with proper authorisation and the duty to exercise independent judgement. Directors are ultimately responsible for the affairs of the company. Even if directors delegate or rely on AI in the exercise of certain powers and functions, they remain responsible and must exercise independent judgement. Arguably, directors may not completely delegate their decision-making power to AI applications.

To fulfil their duties when using AI, directors would need to put in place structures, controls and systems. These would vary from company to company based on the company’s nature, size, industry and other factors. Companies in regulated industries, such as financial services and banking, will have additional requirements imposed on them by their regulators – the Securities and Futures Commission and the Hong Kong Monetary Authority hold directors and senior management accountable for autonomous decisions made by AI, while listed companies have additional requirements imposed on them by the Hong Kong Listing Rules.

How can the risks arising from the use of AI be minimised?

AI strategy and governance. At the highest level, a company should establish an AI strategy and formulate governance considerations for procuring AI solutions, set up an AI governance committee (or other form of governing body) and provide employees with AI-related training. A good governance structure would encompass the following:

  • all personnel involved in the use of AI should have clear roles and responsibilities in this connection
  • people with the right expertise should carry out the review functions described in this article
  • training in the use and purpose of AI should be provided to the people who use the AI applications, as well as to the people who are involved in monitoring its use, so that all relevant persons understand the use, capacity and limitations of – and the risks associated with – the AI applications
  • the security of the AI applications and the relevant data should be protected, such as from external hacking, and
  • access to the relevant data should be restricted to those who need such access (for example, where sensitive personal data or other sensitive information is involved) and the AI applications should not be used in a manner, or for a purpose, other than as intended.

Underpinning the governance structure should be the adoption of policies and practice manuals, as well as overall oversight by management.

Risk assessment and human oversight. A company should identify and assess the risks of each AI application it uses. This involves understanding the applications, including their uses and limitations. A risk assessment should then be carried out to determine the extent of human oversight required. In very general terms, risks to companies of the use of AI applications may include the risk that the AI application makes a wrong decision or generates inappropriate output, the risk arising from the use of personal data in AI applications and the risk of abuse of AI applications. Risks may also arise in relation to the security of the AI applications and, in turn, the data contained in the applications, as these may use online facilities.

Factors to consider in such an assessment include the potential impact on affected persons and the wider community should the identified risks materialise, the probability, severity and duration of that impact, and the adequacy of risk mitigation measures. Where the risks are assessed as high, the company may consider taking the decision-making out of the AI application and retaining human control over that decision-making (a ‘human in the loop’ approach). Where the risk is low, there may not be a need for human oversight (a ‘human out of the loop’ approach), while for medium-risk applications, a combination of the two approaches might be considered, with humans overseeing the operation of the AI application and intervening where necessary (a ‘human in command’ approach).
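To make the tiering concrete, the following is a minimal sketch in Python of how a company might map an assessed risk level to an oversight approach. The RiskAssessment fields, the scoring rule and the thresholds are assumptions made for the purposes of illustration; they are not criteria prescribed by any regulator or by the guidance discussed above.

```python
from dataclasses import dataclass
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "humans retain decision-making control"
    HUMAN_IN_COMMAND = "humans oversee and intervene where necessary"
    HUMAN_OUT_OF_THE_LOOP = "no routine human oversight required"

@dataclass
class RiskAssessment:
    """Hypothetical record of an AI application's assessed risk."""
    impact_severity: int        # 1 (negligible) to 5 (severe)
    probability: float          # estimated likelihood of occurrence, 0 to 1
    mitigations_adequate: bool  # adequacy of existing mitigation measures

def required_oversight(assessment: RiskAssessment) -> Oversight:
    """Map an assessed risk to the high/medium/low oversight tiers
    described in the text. The scoring rule is illustrative only."""
    score = assessment.impact_severity * assessment.probability
    if not assessment.mitigations_adequate:
        score += 1  # inadequate mitigation pushes the application up a tier
    if score >= 3:
        return Oversight.HUMAN_IN_THE_LOOP
    if score >= 1:
        return Oversight.HUMAN_IN_COMMAND
    return Oversight.HUMAN_OUT_OF_THE_LOOP
```

Encoding the mapping in this way, even where the underlying judgements remain qualitative, makes the oversight decision explicit and auditable.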

The occurrence of such risks could adversely affect the persons in connection with whom the output is generated. It could also expose the company to liability towards those persons, or for any resulting breaches of law, and expose directors to liability towards shareholders for breaches of their duties to the company and its shareholders.

What actions can the company’s directors take?

Companies should have in place corporate governance frameworks that ensure that directors, an AI governance committee (or an equivalent body) or senior management take the following actions.

System design and testing. It is imperative that directors understand the underlying logic of an AI algorithm, instead of treating it as a black box. They should avoid relying completely on the decision logic of AI applications and should be able to demonstrate that they have taken all reasonable steps to understand the potential biases underlying that logic.

An AI application is only as good as the data provided to it. If the data on which an AI application relies is itself unreliable, the output will in turn be unreliable – the phrase ‘garbage in, garbage out’, coined in the 1960s in the context of computer science, seems especially apt in relation to AI, given the vast amounts of data involved. At the algorithm design stage, therefore, if the underlying data contains biases towards certain characteristics, the outputs generated by the AI application may be skewed by those biases, and those biases can potentially be exacerbated by the machine learning process used by the AI application. The AI application should be tested to ensure it performs as designed, and records of the relevant designs and tests should be maintained for future audit.
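As a minimal illustration of a design-stage data check, the Python sketch below flags training data in which any group defined by a sensitive attribute is badly under-represented, one simple source of the skew described above. It assumes the pandas library is available; the 10 per cent threshold and the column names in the usage comment are hypothetical choices for the example, not recognised standards.

```python
import pandas as pd

def check_representation(df: pd.DataFrame, attribute: str,
                         tolerance: float = 0.10) -> bool:
    """Warn if any group defined by a sensitive attribute (e.g. a gender
    or age-band column) makes up less than `tolerance` of the training
    data, since under-represented groups are a common source of skewed
    model output."""
    shares = df[attribute].value_counts(normalize=True)
    under_represented = shares[shares < tolerance]
    if not under_represented.empty:
        print(f"Warning: under-represented groups in '{attribute}': "
              f"{under_represented.to_dict()}")
        return False
    return True

# Hypothetical usage with loan-application training data:
# applications = pd.read_csv("loan_applications.csv")
# check_representation(applications, "age_band")
```

A check of this kind, run and recorded at the design stage, also produces exactly the sort of audit trail the text recommends maintaining.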

System implementation. Before AI-generated output is used, it should be reviewed to ensure that it is in line with expectations. If any defects are found, or if the directors consider any output to be inconsistent with their expectations, they should take the necessary steps to rectify this. Rectification may be as simple as rerunning the data through the AI application, or adjusting the way the application is operated.

The reason for taking such steps is to demonstrate the exercise of independent judgement and due care and skill: the directors are not relying solely or excessively on AI, but are themselves taking steps to ensure that the AI applications are operating and performing as intended. There is no obligation to ensure that each and every decision is correct; however, directors who fail to exercise due care and skill or independent judgement may be regarded as having breached their duties.
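One simple way to operationalise such a review is to route a portion of AI-generated decisions to a human before they take effect. The Python sketch below is illustrative only: it assumes each output carries a 'confidence' score supplied by the application, and the 5 per cent sampling rate and 0.7 confidence floor are arbitrary values chosen for the example.

```python
import random

def select_for_review(outputs: list[dict], sample_rate: float = 0.05,
                      confidence_floor: float = 0.7) -> list[dict]:
    """Return the AI-generated decisions that a human should check before
    they are acted on: every output the application itself marks as
    low-confidence, plus a random sample of the remainder."""
    low_confidence = [o for o in outputs
                      if o.get("confidence", 1.0) < confidence_floor]
    rest = [o for o in outputs
            if o.get("confidence", 1.0) >= confidence_floor]
    sample_size = (min(len(rest), max(1, round(len(rest) * sample_rate)))
                   if rest else 0)
    return low_confidence + random.sample(rest, sample_size)
```

The random sample matters: reviewing only the decisions the application flags as uncertain would leave confidently wrong output unchecked.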

Continuous monitoring and training. AI technologies are constantly evolving and the risk factors regarding the AI applications being used, as well as the reliability of AI models and data used for such a purpose, will inevitably change over time. Such changes will affect the reliability, robustness and security of AI applications. To guard against such impacts, companies should periodically review and test the AI applications to ensure they are operating and performing as intended. If necessary, retrain the AI applications with new data. It is also recommended that fresh risk assessments are regularly conducted and, if necessary, adjustments are made to governance structures.
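A common technical expression of such periodic review is drift monitoring, in which the data an AI application currently receives is compared with the data on which it was trained. The sketch below computes the population stability index (PSI), one widely used drift metric, using NumPy; the 0.2 alert threshold in the usage comment is a common rule of thumb rather than a regulatory requirement, and the function name it calls is hypothetical.

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """Compare the distribution of a variable at training time (expected)
    with its live distribution (actual). Larger values indicate greater
    drift; by rule of thumb, a PSI above 0.2 suggests significant change."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) in sparsely populated bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Hypothetical usage:
# if population_stability_index(training_scores, live_scores) > 0.2:
#     trigger_fresh_risk_assessment()  # and retrain if necessary
```

A drift alert of this kind gives the board a concrete trigger for the fresh risk assessments recommended above.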

Training in AI and AI ethics can empower directors with AI governance expertise. The Corporate Governance Guide for Boards and Directors issued by the Hong Kong Stock Exchange in 2021 lists technology know-how as one of the desirable skills the nomination committee of a listed company might consider when looking at a director candidate – and such technology know-how would conceivably include AI and AI ethics.

Can AI enhance corporate governance?

At the board level, AI may contribute to more informed decision-making by taking larger data sets into account in its evaluations. Theoretically, AI empowers human decision-making and board deliberations. AI could potentially be a positive disruptor of boardroom dynamics, enabling companies to make more objective and independent operational and strategic decisions, since it minimises the influence of unconscious human bias. AI can also assist with the setting and achievement of strategic goals and with investment decisions. However, at present, Hong Kong does not have any legal framework for appointing an AI system as a director of a company. The law provides that unlisted companies may appoint natural persons and corporations as directors, while listed companies may only appoint natural persons.

Certain listed companies have reported applying AI to risk management to ensure all risks are effectively identified and managed on a timely basis. In a more extreme case, in 2014 a Hong Kong–based venture capital management fund, Deep Knowledge Ventures, announced that it had appointed a machine learning algorithm called Vital (Validating Investment Tool for Advancing Life Sciences) to its board of directors. Vital was to be consulted and its views on potential investments were to carry equal weight to those of the fund’s human directors. Although Vital did not have the legal status of a director, the board of Deep Knowledge Ventures used Vital to make purportedly more logical decisions, instead of investing in overhyped projects.

Concluding remarks

The use of AI can potentially be of great benefit to companies, but at the same time it presents challenges for corporate governance. While we await laws and regulations directly related to the use of AI, company directors should remain alert to the risks of AI and their potential consequences. Despite the challenges it brings to corporate governance, AI has the potential to significantly improve the overall corporate governance system and empower directors to make more informed decisions, especially where large data sets are handled. Directors and AI, it seems, will go hand in hand, imparting new dynamics to corporate governance practices.

Danny Kan, Corporate Partner, Stephenson Harwood, and Adjunct Assistant Professor, The Chinese University of Hong Kong, and Michael Mok, Associate, Stephenson Harwood