Managing AI along ethical lines

Sophia smiles, bats her eyelashes and tells a joke. Were it not for the mess of cables at the back of her head, you could almost mistake her for a human. The humanoid robot, created by Hanson Robotics, is the first robot in the world to receive citizenship, and she is also the first non-human to be given a United Nations title.

Sophia is a great example of artificial intelligence[1] (AI): a computer system that understands and learns from observations without needing to be explicitly programmed. The outputs of these systems are continuously optimised through learning from data. Whilst this concept may sound like science fiction, AI is already deeply embedded in our lives. Our smartphones, cars, banks and home devices all use artificial intelligence on a daily basis; sometimes it's obvious what it's doing, sometimes it's not.
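To make the idea of learning from data concrete, here is a minimal sketch (our own illustration, not drawn from any particular system; it assumes the Python scikit-learn library and invented toy data) of a classifier that learns a decision rule from labelled examples instead of having that rule written out by a programmer.

```python
# Illustrative sketch: a system that learns a decision rule from data,
# rather than being explicitly programmed with one. Data are invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled examples: transaction amounts (in pounds) and
# whether each was later confirmed as fraudulent (1) or legitimate (0).
amounts = [[5], [20], [35], [50], [900], [1200], [2500], [4000]]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

model = LogisticRegression()
model.fit(amounts, labels)  # the rule is learned, not hand-coded

# The fitted model generalises to transactions it has never seen,
# and its behaviour shifts as it is retrained on new data.
print(model.predict([[40], [3000]]))    # expected output: [0 1]
print(model.predict_proba([[3000]]))    # a probability, not a certainty
```

The point of the sketch is that nobody wrote an "if the amount exceeds X" rule; the boundary is inferred from the examples, which is precisely why such systems can be hard to inspect after the fact.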

As AI becomes increasingly sophisticated, it has the potential to revolutionise whole industries and deliver significant benefits for us as individuals. AI is already being used to improve medical diagnoses, combat cybercrime and carry out dangerous and repetitive tasks. This month, one of the UK's biggest hospitals announced plans to use AI for tasks such as diagnosing cancer on CT scans and deciding which A&E patients should be seen first.

As intelligent machines transform our lives and make our world more efficient[2], they also raise a number of ethical issues. For example, the complexity of computer systems makes it difficult to work out why an AI system does what it does. If AI makes mistakes that cause harm, who should be held responsible? AI also poses challenges in relation to data protection: AI systems learn through access to new data, so what does that mean for our right to privacy? And who, if anyone, should be responsible for the output of AI 'thought' processes?

AI is clearly posing as many questions as it is answering. According to a recent House of Lords report, AI in the UK: ready, willing and able?, there isn't even a widely accepted definition of AI, let alone an ethical framework to govern its evolution.[3]

Work on this is beginning at an international level. The European Commission has announced a three-pronged approach to AI, encompassing the development of an ethical and legal framework, increased public and private investment, and measures to ensure that EU member states are prepared for the resulting socio-economic changes. Twenty-four member states, including the UK[4], have signed the EU Declaration on Artificial Intelligence, committing to work together on the issues raised by AI.

In October 2017, the OECD held a conference on AI, Intelligent machines, smart policies, which examined whether existing regulatory and liability regimes will be adequate. It explored the balance between leaving citizens exposed to risk through inadequate regulation and stifling innovation through unnecessary or intrusive intervention.

At a governmental level, it is clear that any regulatory programme for AI is at an embryonic stage. Nonetheless, businesses developing or utilising AI technologies should be taking a responsible and ethical approach, just as they would with any other initiative.

How businesses utilise AI will have a significant impact on how far they are trusted by society, and any long-term or significant loss of trust can damage both business viability and progress in society. The key is transparency, and a number of companies, such as Google, Facebook and Amazon, have already established ethics boards to monitor the development and deployment of AI.

GoodCorporation is exploring the ethical considerations that should govern the evolution of AI technologies, and we will be publishing our thoughts on this blog and in our quarterly newsletter. Clear definitions need to be adopted. The development of a robust code of ethics will be complex, but the guiding principles are perhaps simple ones: can AI/ML performance be measured and controlled? What is its impact, and how does this fit with our ethical values?
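As a small illustration of what "measured and controlled" might look like in practice, the sketch below is hypothetical: the data are synthetic and the 90% threshold is an invented stand-in for a standard agreed with stakeholders. It measures a model's accuracy on held-out data and applies a simple deployment gate.

```python
# Illustrative sketch: measure a model's performance on data it has not
# seen, then apply a simple control: a pre-agreed accuracy threshold
# that the model must clear before it is put into use.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real labelled data.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Measure: performance on held-out data.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {accuracy:.2f}")

# Control: a deployment gate agreed in advance (threshold is invented).
THRESHOLD = 0.90
if accuracy >= THRESHOLD:
    print("Model meets the agreed standard and may be deployed.")
else:
    print("Model withheld pending review.")
```

A real code of ethics would go far beyond accuracy, covering fairness, privacy and accountability, but the shape is the same: define what acceptable performance means, measure it, and act on the measurement.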

Building a strong ethical code to govern AI will require a multi-stakeholder approach. We would value any thoughts, either on our blog or sent directly to our inboxes.

[1] In this article we use AI to refer to rules-based as well as machine-learning technologies; some define these technologies as fundamentally different.

[2] The World Economic Forum is exploring the potential economic benefits of AI. See: http://reports.weforum.org/digital-transformation/artificial-intelligence-improving-man-with-machine/

[3] https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf

[4] The UK Government also announced late last year the creation of a Centre for Data Ethics and Innovation, to ensure "safe, ethical and ground-breaking innovation in AI".

Published May 2018