“An air of urgency” – why we need ethical governance of AI

ORBIT’s innovation team writes about a new method to evaluate the use of artificial intelligence and related technologies

ORBIT – the Observatory for Responsible Research and Innovation in ICT – contributed to Elsevier’s new AI report. The team consists of (from left) Margherita Nulli, ORBIT Project Officer; Bernd Stahl, Investigator and Director of the Centre for Computing and Social Responsibility at De Montfort University; Martin De Heaver, Managing Director; Prof. Marina Jirotka; and Carolyn Ten Holter, Marketing Officer.

By the end of 2019, it is estimated that there will be more than 5 billion mobile phone users in the world — more people than have access to a flushing toilet. That figure emphasises the extent to which technology affects humanity and gives us an idea of how many people are interacting with algorithms that fall under the umbrella of artificial intelligence.

While this technology can be valuable and beneficial, increasing and high-profile instances of misuse have seriously undermined public trust. Even permitted uses, such as applying AI and algorithmic decision-making in areas like insurance, immigration and medicine, can lead to highly problematic outcomes.

In 2018, Amazon’s recruitment algorithm — built to ensure that Amazon was finding the people most likely to do well at the company — made headlines for deselecting women’s CVs. The algorithm had been trained on historical data about who already worked at Amazon; because the usual profile of “Amazon employee” was male, it absorbed that bias and amplified it. Because outcomes like this shape people’s daily lives, these technologies and decision-making capabilities, as well as other developing technologies, must be scrutinised.
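To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of how a model trained on biased historical hiring decisions inherits that bias. The data, tokens and scoring scheme are invented for illustration and bear no relation to Amazon’s actual system.

```python
# Minimal illustrative sketch (assumption: invented data and scoring; not
# Amazon's actual system). A naive model scores CV tokens by how often they
# co-occur with past hiring decisions, so historical bias becomes a weight.

from collections import defaultdict

# Historical records: (CV tokens, hired?). Past hires skew male, so tokens
# associated with female applicants co-occur mostly with rejections.
history = [
    (["software", "chess_club"], True),
    (["software", "rowing_team"], True),
    (["software", "womens_chess_club"], False),
    (["software", "womens_rowing_team"], False),
    (["software", "chess_club"], True),
]

def train(records):
    """Score each token: +1 when it appears in a hired CV, -1 otherwise."""
    weights = defaultdict(float)
    for tokens, hired in records:
        for token in tokens:
            weights[token] += 1.0 if hired else -1.0
    return weights

def rank(cv_tokens, weights):
    """Sum token weights: a higher score means 'more like past hires'."""
    return sum(weights.get(token, 0.0) for token in cv_tokens)

weights = train(history)
print(rank(["software", "chess_club"], weights))         # 3.0: like past hires
print(rank(["software", "womens_chess_club"], weights))  # 0.0: penalised token
```

Note that gender is never an explicit input here: any feature that happens to correlate with past rejections, such as the hypothetical “womens_” tokens, ends up penalised anyway.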

The question of how to do this in a meaningful, powerful and comprehensible way is problematic. Technologies such as algorithms, robotics and quantum computing differ radically and therefore present radically different challenges. In addition, many of these fields are developing so rapidly that governance frameworks and regulation must deal with a constantly moving target.

Elsevier’s research in this area, explored in its 2018 report Artificial Intelligence: How knowledge is created, transferred, and used, highlights another problem: the lack of a common understanding, across fields, of some of the most basic terminology. The phrase “artificial intelligence” is often understood differently by researchers, industry professionals, educators and journalists. That terminological mismatch, and the ensuing conceptual confusion, can only lead to misunderstandings and failures of dialogue in an area where it is vital to find common ground on which to conduct public discourse.

There is a distinct air of urgency about the need for governance in AI and technology. In the UK in the last year alone, the Ada Lovelace Institute and the Centre for Data Ethics and Innovation have been created, the House of Lords Select Committee on Artificial Intelligence has reported, and the All-Party Parliamentary Group on Data Analytics has opened its enquiry into tech and data ethics. The topic is being addressed across multiple academic fields, industry, policymaking, education, economics and general public discourse. The most cursory scan of news organisations reveals new issues around AI and tech on an almost daily basis, and the constant drip of scandals and unforeseen poor outcomes serves to undermine public trust.


Download Elsevier’s AI report



As Winfield & Jirotka (2018) point out, for technology to fulfil its promise in terms of supporting economic development, creating opportunities, and improving equality and quality of life, it is vital that it is both trustworthy and perceived to be so. Much of the discourse has focused on ethics, but ethics cannot provide a fixed set of rules that determine good and bad. Ethical norms are thoroughly embedded in, and arise from, social contexts, and those contexts give rise to standards such as equality. But ethics and standards by themselves are insufficient. Ethicists have a voluminous literature and many centuries of discourse to draw on in considering thought experiments such as the infamous trolley problem, but these considerations cannot guide society directly unless they are translated into actual research and practical applications.

Practical application is the bedrock of frameworks such as Responsible Research and Innovation (RRI) (Stahl et al., 2014). In the UK, the Engineering and Physical Sciences Research Council uses the AREA Framework (Anticipate, Reflect, Engage, Act) for RRI (Stilgoe et al., 2013) and is now embedding it into its modus operandi by requiring funding bids to demonstrate how they will use RRI methodologies to enhance their research.

This focus on the real-world application of standards and behaviours, derived from and rooted in ethical positions such as beneficence, nonmaleficence, autonomy and justice, offers a way forward. These principles may need to be extended to cover concerns specific to AI, such as explicability (Floridi et al., 2018). They offer a way to increase trust in technologies whose societal application has proved challenging, by making those technologies work with and for society. To achieve this, researchers must not only understand the principles but also be trained in the new techniques required to apply them. The RRI framework uses anticipatory governance, reflexivity and stakeholder involvement to ameliorate harms. These are not tick-box exercises, nor simple risk management, nor ethical thought experiments. Instead, the aim is to permanently adapt the research-and-development mindset so that researchers can understand alternative positions, assess likely or possible outcomes, and carry out mitigating actions to ensure that society’s needs are considered. In particular, stakeholder involvement is key: working in partnership with numerous communities allows researchers to draw on the widest possible sources of knowledge, expertise and viewpoints. Multiple-stakeholder input helps to address the problem identified earlier of mismatches in language and failures of comprehension.

This set of tools and actions is at the heart of the RRI method being developed by the ORBIT project. The method incorporates a new metric, the RRI Intensity Level, which enables researchers to ascertain how, and how much, Responsible Innovation work needs to be carried out on a project, in a way that can be adapted to the project’s needs. It provides the flexibility to accommodate both projects nearing completion and those at an early or theoretical stage, and it incorporates measures of impact to determine in what way and to what degree society might be affected.
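ORBIT has not published the formula for the RRI Intensity Level in this article, but a toy sketch can illustrate the general idea of scaling RRI effort to a project’s stage and potential impact. Every name, weight and scale below is a hypothetical assumption, not ORBIT’s actual metric.

```python
# Hypothetical sketch only: ORBIT's actual RRI Intensity Level calculation is
# not specified in this article. The names, weights and scales below are
# assumptions, illustrating how RRI effort might scale with project maturity
# and potential societal impact.

from dataclasses import dataclass

STAGE_WEIGHT = {"theoretical": 1, "early": 2, "nearing_completion": 3}

@dataclass
class Project:
    stage: str               # one of STAGE_WEIGHT's keys
    societal_impact: int     # assumed scale: 1 (low) to 5 (high)
    stakeholder_groups: int  # distinct communities affected

def rri_intensity(p: Project) -> int:
    """Toy score: later-stage, higher-impact, broader projects call for more
    anticipatory governance, reflexivity and stakeholder engagement."""
    return STAGE_WEIGHT[p.stage] * p.societal_impact + p.stakeholder_groups

print(rri_intensity(Project("early", societal_impact=4, stakeholder_groups=3)))  # 11
```

A score like this could then be mapped to concrete obligations, with higher intensity levels triggering more extensive stakeholder engagement and anticipation work.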

The evidence from the Elsevier report on AI and ethics is that these ethical topics are not yet making a significant impact in journals, but specialist titles such as Big Data & Society and the Journal of Responsible Innovation, as well as ORBIT’s own ORBIT Journal, are providing outlets for researchers to continue the discourse about the development of responsible computer-enabled technologies. It is to be hoped that the large number of conferences, seminars, summits and other forums discussing ethics and AI during 2018 and 2019 will not only raise awareness of the societal issues that new technology can bring with it but also provide impetus for real-world methods such as RRI to gain more widespread use. This increasing awareness of ethical concerns can provide an avenue for creating safer, better technology that works for all.
