The potential of artificial intelligence is enormous, but Dr Jag Kundi, a Hong Kong-based scholar-practitioner active in the FinTech space, warns that without thorough ethical oversight, this new tool could have unintended, disastrous consequences.
Artificial intelligence (AI) is concerned with logic and rationality. Ethics would appear to be its antithesis, as it is concerned with abstract notions of fairness, equality and doing the right thing. Is the title of this article therefore an oxymoron?
As the potential risks of AI become more evident, it is becoming clearer that AI and ethics need to be inextricably linked.
The benefits of AI
AI is a topical subject given its expected role in nearly everything we do or will do. It is based on a collection of technologies that combine data, algorithms and computing power, and its progress has been driven by advances in processing capability and the increasing availability of data, which improves the predictive accuracy of AI models. Our global digital datasphere is growing rapidly. The International Data Corporation predicts that it will have grown from around 33 zettabytes in 2018 to an expected 175 zettabytes by 2025. One zettabyte is equivalent to a billion terabytes.
The decision-making process in AI is driven by algorithms – predefined sets of complex mathematical rules and relationships. AI represents a foundational shift in computing – a shift from encoding human knowledge and instructions to learning how to solve problems by inferring and inducing from information.
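To make that shift concrete, the minimal Python sketch below (purely illustrative, with invented data and function names) contrasts a hand-written spam rule with one inferred from labelled examples.

```python
# Illustrative sketch only: contrasting encoded rules with rules learned from data.

# 1. Encoding human knowledge: the rule is written by hand.
def is_spam_rule_based(message: str) -> bool:
    banned_words = {"winner", "free", "prize"}
    return any(word in message.lower() for word in banned_words)

# 2. Learning from information: the rule is inferred from labelled examples.
def learn_spam_words(examples: list[tuple[str, bool]]) -> set[str]:
    spam_words, ham_words = set(), set()
    for message, is_spam in examples:
        (spam_words if is_spam else ham_words).update(message.lower().split())
    return spam_words - ham_words  # keep only words seen in spam

examples = [("win a free prize now", True), ("meeting moved to friday", False)]
learned_words = learn_spam_words(examples)

def is_spam_learned(message: str) -> bool:
    return any(word in learned_words for word in message.lower().split())

print(is_spam_rule_based("You are a winner!"))   # True – the hand-written rule fires
print(is_spam_learned("claim your free prize"))  # True – a rule induced from the examples
```

The point is not the code itself but where the ‘knowledge’ comes from: in the first function it is typed in by a person; in the second it is induced from data.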
The benefits of AI in improving our lives are already apparent. It can:
- improve the accuracy of healthcare diagnoses and enable more preventative care, for example, the rise of new pharma-tech companies such as AliHealth and JD Health
- play a role in creating safer and cleaner transport systems, for example, AI technology combined with the Internet of Things (IoT) has been used to create ‘smart cities’
- increase the efficiency of farming, for example, the use of AI, IoT, data mining and weather prediction has improved farming practices and yields, and
- help mitigate climate change through more advanced climate modelling and predictive analysis, for example, it has given us a deeper understanding of weather patterns, greenhouse gas modelling and the energy requirements needed to reduce harmful pollution.
In humans we talk of ‘natural intelligence’ as using the brain to solve problems by learning and reasoning from information. Similarly, in machines, AI can be thought of as using algorithms to solve problems by learning and reasoning from information. There are two types of AI:
- Narrow Artificial Intelligence (weak AI) – which usually covers a single domain and is not easily transferable, for example, self-driving cars, Apple’s Siri and spam filters, and
- Artificial General Intelligence (strong AI) – which can learn anything since it learns how to learn, although examples of this type of AI are mainly confined to science fiction movies at the moment.
Most of today’s AI uses machine learning techniques such as supervised learning, unsupervised learning and reinforcement learning. We are still in the early days of AI development, but initial results show that it can be very effective for certain functions, such as providing medical diagnoses and improving web security through malware and spam detection.
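As a simple illustration of supervised learning, the hedged sketch below trains a tiny spam classifier from labelled messages; it assumes the open-source scikit-learn library and uses invented example data.

```python
# Illustrative sketch of supervised learning for spam detection (assumes scikit-learn is installed).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labelled training data: the "supervision" is the human-provided spam/ham label.
messages = [
    "win a free prize today", "claim your reward now",
    "team meeting at 3pm", "please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]

# The model learns which word patterns are associated with each label.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free reward, claim now"]))       # likely ['spam']
print(model.predict(["report for the 3pm meeting"]))   # likely ['ham']
```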
The risks of AI
AI transcends markets, culture and geography, and is applicable everywhere. It is already having a significant impact on society and humanity in multiple ways. As such, fundamental questions have been raised about how we should manage and control these systems and better understand the risks involved. Thus, it makes sense to think about what we want these systems to do and make sure we build them with the common good of humanity in mind.
Left unchecked, the threats from AI are increasing and could include the following:
- elimination of 40–80% of all jobs (think of AI not only as a job-killer but as a job-category killer)
- restricting or eliminating basic human rights (for example privacy rights)
- accelerating income inequality
- increasing the risks arising from biased data sets and algorithms
- increasing the risks arising from cyber attacks and bad agents
- creating a substantial shift in what it means to be human and live in society
- increasing uncertainty around the legal implications of interactions with AI-based machines
- increasing autonomous weaponry risks, and
- increasing the existential risk that machines will eliminate humans.
This is where ethics is needed to temper the headlong rush into AI. Ethics is fundamentally about protecting the public interest and the collective well-being of the community and all stakeholders – an overriding responsibility that underpins all professional duties and obligations. Many professional bodies – governance, accountancy, legal and financial institutes among them – place ethics at the heart of their professional programmes.
Given that AI powers many of the technologies that are central to our modern lives, we should be able to have a high degree of trust in its underlying operations. However, for many of us AI is like a ‘black box’: we know it is there, but what it does, how it does it and when it does it are beyond our comprehension. This raises the question of who AI serves – the public, government, society or the corporates? Keep in mind that the corporates have invested resources in developing it and ultimately have shareholders expecting a return on that investment. Clearly, conflicts can arise.
Autonomous vehicles (AVs) provide a classic example of the ethical dilemmas that AI raises. An AV is capable of sensing its environment and moving with little or no human involvement. For the AV to move safely and to understand its driving environment, an enormous amount of data needs to be captured by myriad different sensors across the car at all times and then processed by the vehicle’s computer system.
Before AV technology is released onto the market, the AV must also undergo a considerable amount of training (machine learning) in order to understand the data it is collecting and to be able to make the right decision in any imaginable traffic situation. Moral decisions are made by drivers daily. Imagine a situation in which an AV with broken brakes is travelling at full speed towards a grandmother and a child. By deviating a little, one life can be saved. In an AV, it is not a human driver who takes that decision, but the car’s algorithm. Who would you choose, the grandmother or the child? Or, to up the ante, what if the two people were of different genders or ethnicities – who gets hit?
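The dilemma can be seen in miniature once the decision is written as code. The hypothetical sketch below (not drawn from any real AV system; all names and numbers are invented) shows that whichever cost function the developer chooses, a value judgement has been hard-coded.

```python
# Hypothetical sketch: any AV planner must ultimately rank outcomes, which forces
# value judgements into code. Names and weights here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_casualties: float      # estimated from sensor data and physics models
    pedestrians_at_risk: list[str]

def choose_manoeuvre(outcomes: list[Outcome]) -> Outcome:
    # The "ethics" lives in this single line: the cost function.
    # Minimising expected casualties sounds neutral, but a tie between the
    # grandmother and the child still has to be broken somehow.
    return min(outcomes, key=lambda o: o.expected_casualties)

options = [
    Outcome("brake and continue straight", 0.9, ["grandmother", "child"]),
    Outcome("swerve towards the kerb", 0.5, ["grandmother"]),
]
print(choose_manoeuvre(options).description)  # 'swerve towards the kerb'
```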
The legal issues around AVs will also be a minefield once such systems become mainstream. Questions have already arisen around legal liability in cases of accidents. When AI is in control, where does responsibility lie – with the driver, the car manufacturer or the AI systems developer? Moreover, what about insurance for AVs? The way premiums are calculated will certainly need to be revisited. These are typical ethical dilemmas that show the importance of ethics in the development of AI technologies.
The governance implications
Companies are not immune to these ethical dilemmas and are beginning to realise that failing to recognise ethical issues in operationalising their AI systems can impact their bottom line. A 2018 survey by Deloitte of 1,400 US executives knowledgeable about AI found that 32% ranked ethical issues as one of the top three risks of AI. However, most organisations don’t yet have specific approaches to deal with AI ethics.
The impact of AI failures can materialise as reputational, regulatory and legal risks. These are the external ethical impacts, but internal impacts also arise through the misallocation of resources, and inefficiencies in product development and commercialisation. A recent article in the MIT Technology Review – The Way We Train AI is Fundamentally Flawed – highlights the mismatch between the data that AI is trained and tested on, often under near-perfect laboratory conditions, and the data it encounters in the real world. AI trained to spot signs of disease using high-quality medical images, for example, struggles to find such signs when given low-quality, blurry pictures captured by a cheap camera or smartphone in a busy clinic.
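The mismatch is easy to reproduce in a hedged sketch. The example below uses scikit-learn’s digits dataset as a stand-in for medical images (an assumption made purely for illustration): a model trained on clean data performs well on clean test images but degrades sharply when the same images are corrupted to simulate a cheap camera.

```python
# Hedged sketch of the train/test mismatch: a model trained on clean images
# is evaluated on degraded copies of the same test set (assumes scikit-learn and numpy).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Trained on clean "laboratory" data.
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Simulate cheap-camera conditions: heavy pixel noise on the deployment data.
rng = np.random.default_rng(0)
X_test_noisy = X_test + rng.normal(scale=8.0, size=X_test.shape)

print("accuracy on clean data:   ", model.score(X_test, y_test))        # typically high
print("accuracy on degraded data:", model.score(X_test_noisy, y_test))  # noticeably lower
```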
For corporates this threat is real and already becoming a problem, particularly where corporate systems are dependent on AI and machine learning technologies. For example, the Ada Lovelace Institute, an independent global research body that monitors AI and data ethics, has highlighted how companies have developed facial recognition systems using flawed or limited data sets. Most training data sets use commonly available photos that are not representative of the diverse ethnicities, genders or social classes of a typical population. These biases are then passed on in the data that informs the system. The empirical evidence is stark and compelling. A study published in December 2019 by the US-based National Institute of Standards and Technology, which analysed 189 software algorithms from 99 developers – the majority of the industry – found higher rates of inaccuracy for Asian and African American faces relative to images of Caucasians, often by a factor of 10 to 100.
This research built on an earlier study in 2018 by the MIT Media Lab, a research laboratory at the Massachusetts Institute of Technology, which found that leading facial recognition systems by Microsoft, IBM and Megvii of China performed at a 0.8% error rate on images of white men, but at a rate of 34.7% when tested on images of dark-skinned women. The MIT researchers pointed to the fact that the image data sets used in pattern-learning to develop these facial recognition technologies were 77% male and 83% white-skinned as the primary reason for the disparity in performance.
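One practical lesson from these studies is to report performance disaggregated by group rather than as a single headline figure. The sketch below shows the calculation on invented records; it is illustrative only and does not reproduce either study’s data.

```python
# Illustrative sketch of a disaggregated evaluation: instead of one overall
# accuracy figure, report the error rate for each demographic group separately.
# The records below are invented purely to show the calculation.
from collections import defaultdict

records = [  # (group, prediction_correct)
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in records:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
# lighter-skinned men: error rate 0%
# darker-skinned women: error rate 67%
```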
As the technology continues to become more powerful, AI also has the ability to cause severe damage if used maliciously. The onus will be on companies to know their customers better, particularly where they provide an on-demand, cloud-based platform or service. If criminal players use their platform to engage in AI-enabled attacks or other criminal acts then, as with financial institutions, regulators will have to respond and may impose ‘know-your-customer’ (KYC) regulations. To stay ahead of the curve, it may be prudent for companies to make their own enhanced efforts to know who their customers are and what they are doing on their platforms before such regulations arrive. Consider the big social media platforms that, after the recent debacle on Capitol Hill in the US, are now actively taking an ethical stance in monitoring their platforms and even banning or suspending individuals and corporates they identify as promoting fake news.
Preparing for the age of AI
The path dependency looks to be set as we move towards the most significant technological transformation in human history, one that has immense implications for the economy, politics and law, as well as for the social, moral and ethical underpinnings of modern society. As we approach the age of AI, with all the corresponding risks and benefits that it will unleash, we need to be mindful that we are still operating within a 20th-century economic, political and moral framework.
Should we embrace AI and allow the technology to permeate our lives fully, or should we set boundaries on how pervasive this technology can be? When AI can determine our credit score (via analysis of a data set that includes both online and offline activity) and consequently qualify us instantly for loans without any human intervention in the process, the economic, speed and efficiency benefits are clear. However, if you are Asian, black or female, such AI systems can carry a built-in bias in assessing your creditworthiness. Is that fair? Is that ethical? Where are the checks and balances that should level out these biases?
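One possible check, sketched below with invented data and an arbitrary tolerance, is to compare approval rates across groups before an automated credit model is allowed to make decisions; a large gap is a signal for human and ethical review, not proof of bias on its own.

```python
# Hedged sketch of one possible "check and balance": comparing approval rates
# across groups before an automated credit model goes live. Data and threshold
# are invented for illustration; real fairness audits use several such metrics.
from collections import defaultdict

applicants = [  # (group, model_score); approve if score >= 0.5
    ("group_a", 0.72), ("group_a", 0.61), ("group_a", 0.40),
    ("group_b", 0.55), ("group_b", 0.31), ("group_b", 0.28),
]
THRESHOLD = 0.5

approved, totals = defaultdict(int), defaultdict(int)
for group, score in applicants:
    totals[group] += 1
    if score >= THRESHOLD:
        approved[group] += 1

rates = {g: approved[g] / totals[g] for g in totals}
print(rates)  # approval rate per group

# A large gap between groups is a warning sign that the model (or its training
# data) may be encoding bias and needs human review before deployment.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Disparity exceeds tolerance: flag for ethical review")
```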
Taking an even deeper dive into the AI rabbit hole, should we grant rights to AI-enhanced robots that are capable of rudimentary decision-making? These are deep and searching questions that society and businesses will need to wrestle with. A new moral and political framework will need to be developed to address such issues; otherwise the consequences could include a further concentration of power in a few hands and increasing wealth inequality in society.
A big challenge for corporations is the speed of technology development versus regulatory catch-up. Until the regulatory environment catches up with technology (does it ever?), corporate leaders will remain accountable for making ethical decisions about the use of AI applications and products. Look at Uber and regulations around taxis, Airbnb and the letting of private accommodation, and social media platforms and responsibility for posted content, to name a few examples.
There’s a moment in the original Jurassic Park film when Ian Malcolm, played by Jeff Goldblum, says emphatically, ‘Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should’. We are at a similar crossroads with AI technology today, and to some extent with the ethics that surround it – just because we can use AI doesn’t mean we should, unless there is a compelling reason to do so. Sadly, as we replace more and more human work (and human salaries) with AI-based technology and replace human judgement with algorithms, commercial considerations may dominate and the migration to AI will be sealed with or without ethical oversight.
Dr Jag Kundi
Hong Kong-based scholar-practitioner active in the FinTech space
The author can be contacted at: dr.kundi@live.com, or via LinkedIn: www.linkedin.com/in/jagkundi.
SIDEBAR: Building Ethical AI – a seven-step plan
A recent article in the Harvard Business Review – A Practical Guide to Building Ethical AI – advised that companies need a clear plan to deal with the ethical issues that AI is introducing. It sets out a seven-step plan to help managers operationalise ethics and integrate it into their AI work. The seven steps are:
- identify existing infrastructure that a data and AI ethics programme can leverage
- create a data and AI ethical risk framework that is tailored to your specific industry
- change how you think about ethics by taking cues from successes in the healthcare industry
- optimise guidance and tools for product managers
- build organisational awareness of ethical impacts due to AI
- formally and informally incentivise employees to play a proactive role in identifying AI ethical risks, and
- monitor AI-based ethical impacts and engage with key stakeholders.