Why am I seeing an advertisement for flights to Paris just after looking at tourist landmarks of the French capital? Who determines the credit conditions for my new laptop? How do social media platforms know which content is relevant to me?
The omnipresence of algorithms in the things we do every day is hard to overlook, raising ethical questions about automated decision-making and its effects on people's lives.
During Berlin Science Week 2018, Elsevier co-hosted a Science & People panel discussion titled “Digital ethics – Are algorithms suppressing us?” The question addressed concerns voiced by many when considering the societal aspects of digitization. Yet all too often, the word “algorithm” is only used for lack of a more accurate term. When considering the ethical implications of algorithms, the subjects of the discussion are in fact “decision support systems” (DSS) that flexibly analyze data and provide suggestions for action.
It is important to note that categorizing DSS as artificial intelligence (AI) may currently be appropriate, but as technological development progresses, perceptions are likely to change. As Matthias Spielkamp, journalist and founder of AlgorithmWatch, explained:
What we called AI 20 years ago, today we no longer call AI but software. Fifteen years from now, we will likely redefine what we call AI today.
Furthermore, it is crucial to keep in mind that algorithms are not self-governing entities but tools created by people to serve a certain purpose.
Although they serve as useful tools, algorithms come with an inherent risk of systematically disadvantaging people based on factors such as ethnicity, gender or social status. Yet the cause of this discrimination does not lie with the algorithm itself but with the data used to train it. DSS make their decisions based on the data they are given. If biased data is fed into the system, the decisions made and the actions suggested by the DSS will be biased as well and may be ethically questionable. Carefully selecting the data samples provided to the DSS is therefore of crucial importance. In this regard, Professor Dr. Ina Schieferdecker, head of the Fraunhofer Institute for Open Communication Systems (FOKUS), stated:
We have to make sure that technology does not restrict people in their freedom or self-determination, interfere with life or even endanger it. I clearly draw the line when it comes to human rights and the rule of law.
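The mechanism described above can be illustrated with a minimal, purely hypothetical sketch: a toy "credit decision" system that learns per-group approval rates from biased historical records and then reproduces that bias in its suggestions. The group labels, numbers and threshold are invented for illustration and do not come from the panel.

```python
# Illustrative sketch only: a toy DSS that inherits bias from its training data.
# All groups, records and thresholds below are hypothetical.
from collections import defaultdict

# Hypothetical historical decisions: group "A" was approved far more often
# than group "B" for otherwise identical applicants -- the bias is in the data.
historical = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """Learn the per-group approval rate from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][1] += 1
        if approved:
            counts[group][0] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def decide(rates, group, threshold=0.5):
    """The 'DSS' suggests approval if the learned rate clears a threshold."""
    return rates[group] >= threshold

rates = train(historical)
print(decide(rates, "A"))  # True:  group A inherits its favourable history
print(decide(rates, "B"))  # False: group B is disadvantaged by the same data
```

Nothing in the decision rule mentions the group's merits; the system merely extrapolates past outcomes, which is exactly why curating the training data matters.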
To address potential risks posed by algorithms, consumers are increasingly demanding transparency to better understand how the services they use work. Consumer advocates and privacy groups call for governmental regulation – an approach viewed somewhat skeptically in the scientific community. Defining the object of regulation is challenging: algorithms consist of programming code, which itself cannot be regulated. Instead, only the entity governing the code or the actual result of the code can be regulated. However, the role and responsibility of the individual should not be disregarded either. Encouraging debate in the public sphere is therefore crucial, as Dr. Katharina Simbeck, Professor of Digitization, Marketing Controlling and Analytics at HTW Berlin, concluded:
It is important that we discuss the ethical and social consequences of digitization in the public arena and reflect on how we as society want to deal with them. At the same time, everyone should also be aware of their individual responsibility when using these systems.
Researchers and developers in the field of machine learning and AI are, therefore, increasingly held accountable not only for the factual and methodological accuracy of their research but also for potential ethical ramifications. To raise awareness, some suggest the inclusion of ethical considerations in university education, especially in natural sciences and engineering, and increased interdisciplinary exchange regarding machine learning and AI. These are considered the first steps for researchers to become more aware of the potential ethical implications of their decisions when designing automated DSS. In this regard, Mr. Spielkamp argued:
Automated decision-making systems should be used towards the common good, increase participation and improve justice. These are all large concepts and they are hard to define, but we have to address them because we have to discuss them as a society.
And within society, the panel concluded, researchers can and should do their part.
Science & People
Science & People is an event series that brings together citizens, researchers, political decision-makers, and representatives of Berlin’s bustling tech and startup scene. Science & People seeks to facilitate a conversation between science and society, reaching out to those who might otherwise not be in regular contact with scientific topics, explaining why science matters and creating a hub where the public can actively engage with socially relevant research topics.
Elsevier and project partners Fraunhofer-Verbund IUK-Technologie, Stifterverband für die Deutsche Wissenschaft and Wissenschaft im Dialog (WiD) hosted Science & People for the seventh time in 2018.