Racism, bloodshed, speciesism, climate change... Do you want a more ethical world? Dr Thilo Hagendorff, a researcher at the University of Tübingen, says you should abandon patterns of thinking that draw an artificial boundary between 'your own' and 'others' and develop unconditional compassion. Do you want more ethical AI and technologies? Forget the dominant principle-based approach and adopt the virtues-based approach. You should also start working towards an ethical work climate.
-Before we begin, can you quickly explain your expertise covering AI ethics, media ethics, and technology ethics?
These are all vague terms, and they often overlap. I have a background in philosophy, but I now consider that I have moved away from this field, as I find many philosophical discourses somewhat obsolete. I prefer to conduct transdisciplinary research drawing on sociology, Science and Technology Studies and ethics. When I identify interesting research questions, I ask myself whether I can work on them alone or whether I need the expertise of other researchers. For example, I needed help gathering empirical data and conducting the recent study I published on industry involvement in machine learning research. This study empirically analyses the links between public and private institutions in the field of machine learning research. Our analysis compiles papers from the three major conferences in this field of AI over the past five years. I hope that many people will read it so that future arguments can be better differentiated.
-In the face of the multitude of ethical guidelines on AI, how can we verify that current developments are really more ethical?
This is a difficult question and I certainly don't have a definitive answer. I think the same story can be told in two ways: a pessimistic version and an optimistic version.
The pessimistic version says that current developments in AI are less ethical. Indeed, there is a lot of "ethics washing" going on at the moment. Many companies are sending strong and continuous signals to the public and to legislators to make them feel that they are managing AI ethics, in order to avoid binding legal regulation. Their message is: "We have an ethics committee, there is no need for regulation, the influence of non-binding standards is sufficient".
The optimistic version, on the other hand, emphasises that the current abundance of ethical work on AI has the merit of raising awareness, among companies and the public, of the importance of values in AI and, more broadly, in technology.
Which of these views is true? Perhaps both at the same time.
-How can ethical guidelines be concretely translated into technical developments?
In an article I published earlier this year, I compared 20 AI ethical guidelines to analyse which values are most important, which are under-represented and which are omitted.
All these guidelines set out very abstract principles that are difficult to apply. Furthermore, different approaches are derived from each principle. Take the privacy principle as an example: one guideline will encourage 'privacy by design', another will list criteria to be met to guarantee a minimum level of privacy, and so on. This dominant ethical approach relies on practitioners' adherence to these main principles and rules. However, empirical studies show that it does not change their daily practice. I therefore doubt that this now dominant approach is promising. Since it is based on values, which remain intangible mental goals and concepts, we should consider another approach.
This other approach can be based on virtues, which are dispositions that each individual can cultivate and which shape his or her actions, such as the disposition to care for others. It is easy to postulate principles for technologies and to aim for socially acceptable goals, but it is quite difficult to educate practitioners and strengthen their virtues so that, for example, their actions lean more towards justice. Apart from some academic discussions (read Deborah Johnson on whether ethics can be taught to engineers, and this interesting study on the framework needed for practitioners to adapt their daily routines), most of the energy is devoted to the principles-based approach, developing sets of principles and moving from principles to practice, when we should be trying the virtues-based alternative.
-Can practitioners with good virtues produce ethical AI?
Principles alone cannot guarantee ethical AI. We need virtues, but they remain personal dispositions. We therefore also need to create working climates where ethical decisions are rewarded rather than sanctioned. To understand these challenges, I recommend this major 2010 meta-study that drew on more than 30 years of research to identify the factors behind unethical choices, covering individual dispositions, the moral issue at hand and the organisational environment (self-interested or ethical).
It is essential to keep these different factors in mind in order to understand what is at stake: it is not just individual dispositions that matter, but also other factors such as an ethical work climate.
-What are the starting points for thinking about the ethics of non-humans?
At the moment, we have a totally anthropocentric view of these subjects. People have moral status, but animals and plants do not. At the same time, scholarly discourses such as new materialism or actor-network theory say that technical artefacts can act in our society and, in fact, have a kind of agency. This is counter-intuitive. Intuition would sooner grant agency to animals, since they are closer to us!
The problem lies in the drawing of a mental boundary between 'one's own' and 'others'. People draw a circle around themselves: they include humans, while plants, animals and technical beings remain outside. At the moment, we are seeing very strong orientations towards 'one's own'. Psychological studies can measure an individual's 'social dominance orientation', a trait that captures the tendency to think in terms of hierarchical groups. This generates speciesism as well as racial and gender discrimination.
From a pragmatic point of view, boundaries are necessary: human beings learn and use terms in order to grasp reality, and these are the basis of our understanding of the world. Moreover, people are inclined to draw this mental boundary between groups of 'their own' and 'others': it is a legacy of evolution.
Nevertheless, I would argue that we need a more holistic approach to issues such as machine behaviour, climate change, migration, racism, industrial agriculture and many others. It is really important to have unconditional compassion for the members of external groups, the 'others'. Bruno Latour, in his theory of the Parliament of Things, called for a politics of artefacts. Even among human beings, Jacques Derrida, with his deconstructive approach, denounced the many artificial distinctions we draw, which are the source of much bloodshed.
These fascinating discussions unfortunately remain largely academic; there is still a long way to go before they reach the media and the general public. Yet we must include nature and animals in our ethics: the fate of the planet is at stake.
Is it a good idea to also include robots? I don't know. People have compassion for social robots: studies have shown that the same brain areas are activated when we see people and social robots being harmed. Maybe we can harness this sense of compassion.
Interview by Lauriane Gorce