Ethics in AI: Are algorithms suppressing us?

With technology progressing at unprecedented speed, we need to address ethical considerations, experts say

Science & People's panel at Berlin Science Week addressed digital ethics. Panelists were Prof. Dr. Ina Schieferdecker of TU Berlin, Prof. Dr. Katharina Simbeck of the Hochschule für Technik und Wirtschaft Berlin, and Matthias Spielkamp, journalist and Founder of AlgorithmWatch.

Why am I seeing an advertisement for flights to Paris just after looking at the tourist landmarks of the French capital? Who determines the credit conditions for my new laptop? How do social media platforms know what content is relevant for me?

The omnipresence of algorithms in the things we do every day is hard to fully grasp, raising ethical questions about automated decision-making and its effects on people's lives.

During Berlin Science Week 2018, Elsevier co-hosted a Science & People panel discussion titled “Digital ethics – Are algorithms suppressing us?” The question addressed concerns voiced by many when considering the societal aspects of digitization. Yet all too often, the word “algorithm” is only used for lack of a more accurate term. When considering the ethical implications of algorithms, the subjects of the discussion are in fact “decision support systems” (DSS) that flexibly analyze data and provide suggestions for action.

It is important to note that categorizing DSS as artificial intelligence (AI) may currently be appropriate, but as technological development progresses, perceptions are likely to change. As Matthias Spielkamp, journalist and founder of AlgorithmWatch, explained:

What we called AI 20 years ago, today we no longer call AI but software. Fifteen years from now, we will likely redefine what we call AI today.

Furthermore, it is crucial to keep in mind that algorithms are not self-governing entities but tools created by people to serve a certain purpose.

Although they serve as useful tools, algorithms come with an inherent risk of systematically disadvantaging people based on factors such as ethnicity, gender or social status. Yet the cause of this discrimination does not lie with the algorithm itself but with the data used to train it. DSS make their decisions based on the data that is given to them. Therefore, if biased data is fed into the system, the decisions made and the actions suggested by the DSS will be biased as well and might be ethically questionable. Hence, carefully selecting the data samples provided to the DSS is of crucial importance. In this regard, Professor Dr. Ina Schieferdecker, head of the Fraunhofer Institute for Open Communication Systems (FOKUS), stated:

We have to make sure that technology does not restrict people in their freedom or self-determination, interfere with life or even endanger it. I clearly draw the line when it comes to human rights and the rule of law.

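The bias-propagation mechanism described above can be sketched in a few lines of code. The following is a minimal toy illustration with entirely hypothetical data: a naive "credit" model that simply learns approval rates from biased historical decisions will reproduce that bias for two financially identical applicants.

```python
from collections import defaultdict

# Hypothetical historical records: (income, group, approved).
# Group "B" was approved less often at the same income — a purely
# human bias baked into the training data.
history = [
    (50, "A", True), (50, "A", True), (50, "A", True),
    (50, "B", False), (50, "B", False), (50, "B", True),
]

def train(records):
    """Learn the approval rate per (income, group) — a minimal lookup 'model'."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for income, group, approved in records:
        totals[(income, group)] += 1
        approvals[(income, group)] += approved
    return {key: approvals[key] / totals[key] for key in totals}

def predict(model, income, group, threshold=0.5):
    """Approve if the learned approval rate for this profile clears the threshold."""
    return model.get((income, group), 0.0) >= threshold

model = train(history)
# Two applicants identical in every financial respect:
print(predict(model, 50, "A"))  # True  — group A inherits past approvals
print(predict(model, 50, "B"))  # False — group B inherits past rejections
```

Real decision support systems are of course far more complex than this lookup table, but the mechanism is the same: the model never "decides" to discriminate; it faithfully reproduces the pattern present in its training data.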
To address potential risks posed by algorithms, consumers are increasingly demanding transparency to better understand the way the services they use work. Consumer advocates and privacy groups call for governmental regulation – an approach that is viewed somewhat skeptically in the scientific community. Defining the object of regulation is challenging: algorithms consist of programming code, which itself cannot be regulated. Instead, only the respective code-governing entity or the actual result of the code can be regulated. However, the role and responsibility of the individual should not be completely disregarded either. Encouraging the debate in the public sphere is therefore crucial, as Dr. Katharina Simbeck, Professor of Digitization, Marketing Controlling and Analytics at HTW Berlin, concluded:

It is important that we discuss the ethical and social consequences of digitization in the public arena and reflect on how we as society want to deal with them. At the same time, everyone should also be aware of their individual responsibility when using these systems.

Researchers and developers in the field of machine learning and AI are, therefore, increasingly held accountable not only for the factual and methodological accuracy of their research but also for potential ethical ramifications. To raise awareness, some suggest the inclusion of ethical considerations in university education, especially in natural sciences and engineering, and increased interdisciplinary exchange regarding machine learning and AI. These are considered the first steps for researchers to become more aware of the potential ethical implications of their decisions when designing automated DSS. In this regard, Mr. Spielkamp argued:

Automated decision-making systems should be used towards the common good, increase participation and improve justice. These are all large concepts and they are hard to define, but we have to address them because we have to discuss them as a society.

And within society, the panel concluded, researchers can and should do their part.

Science & People

Science & People is an event series that brings together citizens, researchers, political decision-makers, and representatives of Berlin’s bustling tech and startup scene. Science & People seeks to facilitate a conversation between science and society, reaching out to those who might otherwise not be in regular contact with scientific topics, explaining why science matters and creating a hub where the public can actively engage with socially relevant research topics.

Elsevier and project partners Fraunhofer-Verbund IUK-Technologie, Stifterverband für die Deutsche Wissenschaft and Wissenschaft im Dialog (WiD) hosted Science & People for the seventh time in 2018.

Contributors


Written by

Daniel Staemmler, PhD

Dr. Daniel Staemmler is Executive Publisher at Elsevier, managing a portfolio of neurology journals. After finishing his PhD researching cognitive styles and interactive online learning environments at the University of Hamburg, Daniel moved to San Francisco, where he worked for Shanti's LIFE Institute (Learning Immune Function Enhancement) as their Manager of Research and Internet Services. On returning to Germany, Daniel worked for Bertelsmann AG at their educational start-up scoyo, an online learning platform for school children. He went on to work at the German Institute of Continuing Education for Technologists and Analysts in Medicine (DIW-MTA eV) and Quadriga University of Applied Sciences, before joining Elsevier in 2015.

Written by

Eva Podgoršek

Eva Podgoršek joined Elsevier in 2015 to work on the topics of open access and open science. With a background in political science and her expertise in European research policy, she has worked at the European Commission on Open Science Policy and contributed to the developments on FAIR Data and the European Open Science Cloud. Now back with Elsevier and located in Berlin, she is responsible for several academic and government clients in Germany in her role as Consultant for Elsevier’s Research Platforms. Her expertise on policies in the areas of open science and research data management allows her to work closely with the scientific community to better understand the needs and challenges of universities and researchers and develop mutual solutions.

In her spare time, she continues her engagement in European politics through membership in the Berlin-based Think Tank Polis180, does lots of sports and takes photos.
