The Search for the Holy Grail of Trust in AI

Today there is a long list of attempts from industry, academia and governments to build the foundations of trust in the technology of artificial intelligence. But in the end, what we want to know is quite simple: can we trust this AI? Getting to this simple answer is a highly complex and dynamic process, and in some ways it remains fuzzy.

Do you trust your mother or your father? The answer to this question might depend on a whole life spent with your parents, thousands of talks, situations and experiences, resulting in a simple yes or no. Seldom do you trust somebody or something just a little bit. Most of the time trust is a yes-or-no decision, yet very often one based on an immense number of details and dynamic evaluations.

Trust in AI is no different. In the end we don’t care about the details of this specific facial recognition application of the Chinese government; we simply want to know whether we can trust that this application will not be deployed to do harm to the Chinese people. How do we get there?

Naturally, this is a complex mix comprising the whole lifecycle of an AI application, from unbiased data gathering and non-discrimination in algorithm building to stability and fault tolerance in deployment. Of course, this is only the tip of the iceberg, and there might be hundreds of domains and indicators that should go into the evaluation of the trustworthiness of an application.

TRUST EVALUATIONS WILL HAVE TO CONSIDER DYNAMIC NETWORKS OF RELATIONSHIPS AND REQUIREMENTS.

Explainability, transparency, fairness and justice – the concepts for evaluating ethical and trustable AI are legion. But just as with personal trust, trustable AI is embedded in a complex, dynamic network of relations between concepts. Justice without fairness is not worth much, and explainability without transparency will fail.

It is the interaction and relationships of different concepts and requirements that form the final decision to trust an AI. A thorough ethical evaluation will have to consider this dynamic network of requirements, and consequently a one-time effort will not do. It is probable that in the future the measurement of ethicality will become a permanent task of overall company management, just as financial and production measurements are today.
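
To make this network idea a bit more tangible, here is a minimal sketch in Python. It is purely illustrative: the requirement names, dependencies, scores and the threshold are invented for this example and are not taken from any guideline. The point is only that a requirement counts as fulfilled when the requirements it depends on are fulfilled as well, and that the resulting yes/no decision has to be recomputed whenever the underlying assessments change.

```python
# Toy sketch of a dynamic network of trust requirements.
# All names, dependencies, scores and the threshold are invented
# for illustration; they are not taken from any actual guideline.

REQUIREMENTS = {
    # requirement: list of requirements it depends on
    "transparency": [],
    "explainability": ["transparency"],   # explainability without transparency fails
    "fairness": [],
    "justice": ["fairness"],              # justice without fairness is not worth much
}

def satisfied(name, scores, threshold=0.7, seen=None):
    """A requirement holds only if its own score passes the threshold
    and every requirement it depends on holds as well."""
    seen = set() if seen is None else seen
    if name in seen:                      # guard against circular dependencies
        return False
    if scores.get(name, 0.0) < threshold:
        return False
    return all(satisfied(dep, scores, threshold, seen | {name})
               for dep in REQUIREMENTS[name])

def trust_decision(scores):
    """The final answer is binary: trust only if every requirement in the
    network holds, however detailed the underlying assessments are."""
    return all(satisfied(name, scores) for name in REQUIREMENTS)

# The scores would be re-measured over the whole lifecycle of the
# application (data gathering, algorithm building, deployment), and the
# decision recomputed each time - a one-time evaluation will not do.
assessment = {"transparency": 0.4, "explainability": 0.9,
              "fairness": 0.8, "justice": 0.8}
print(trust_decision(assessment))  # False: a high explainability score cannot
                                   # compensate for missing transparency
```

In reality such a network would contain far more nodes and relationships, but the structural point stays the same: the yes/no answer emerges from the whole network, not from any single score.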

Ethical high hopes and reality – a look at AI-supported killer drones


Nobody said that ethical and trustable AI would be easy. If we want to have an idea of the difficulties that the realization of ethical AI will encounter now and in the future, we have to look at the continuum of AI applications, ranging from toy robots for kids to the military’s killer drones, and at their relation to ethical AI.

Of course, these are vastly different application domains, but both must be in some way receptive to ethical guidelines if AI ethics is to make sense at all. In the case of an AI integrated in a children’s toy, there are obvious consequences to be avoided at any rate, for example harming the physical wellbeing of the child. Here the direction, the desired outcome of ethics, is straightforward – no child should be harmed by AI in any toy. But is the ethical direction of a „common good“ the same in military applications, for example in AI-supported killer drones?

To illustrate the point, let’s replace the subject „AI application“ in the EU Ethics Guidelines for Trustworthy AI with the subject „AI-supported killer drone“. Without a doubt, both refer to veritable AI applications within their domains.

  • „No AI-supported killer drone should unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans“ (Ethics Guidelines for Trustworthy AI, 2018). I beg your pardon?
  • „AI-supported killer drones and the environment in which they operate must be safe and secure.“ What?

Of course, these adapted quotes are extreme and even paradoxical, but they indicate the space in which AI ethics has to operate. The first response could be to argue that the goal of killer drones is unethical from the very start, so any ethical argument must naturally fail.

One could also argue that there is no single ethics for all AI application domains; ethical guidelines must be adapted to the respective industry or domain. But how far must they be adapted, and how far must ethical requirements obey the internal goals and aspirations of an industry?

What these examples basically illustrate is the trade-off between the goals and effectiveness requirements of an AI application, rooted in its embeddedness in an industry, and the ethical aspirations that possibly stand in confrontation with these fundamental requirements (e.g. the ability to kill people). It is certainly not an overstatement that in most cases today the requirements will trump ethical considerations – given that there is no enforcement mechanism beyond the internal evaluations of the industry itself.

The Pandora’s Box of Human Autonomy in AI

Human autonomy is an expression often used in AI ethics, for example: „Users should be able to make informed autonomous decisions regarding AI systems.“ This statement in the „Ethics Guidelines for Trustworthy AI“ of the EU Commission is as simple as it is far-reaching. The Guidelines further state: „The overall principle of user autonomy must be central to the system’s functionality.“

Human autonomy is often understood as a given fact that is brought into position as a benchmark for the ethicality of a technology. The technology, be it genetics or AI, should not disrupt or endanger human autonomy. Or, to phrase it philosophically, the technology should not curtail human freedom, human free will. Obviously, phrased like this, the notion of autonomy opens a Pandora’s box.

Do we only think that we are autonomous?

Do we have free will? How much autonomy do we have as individuals in a regulated society? Do we only think that we are autonomous, while in fact we are not at all? Since ancient times these questions have filled libraries in philosophy and politics, and yet they are not solved and probably never will be. Still, we can think about what this means for the use of autonomy in the context of AI ethics. Should we abandon the notion altogether, since it seems to present as a proven fact something that is not?

In a practical understanding of AI ethics, we can still propose to use the notion of autonomy because we all have an intuitive understanding of what this notion should ideally mean. If a technology disrespects human autonomy, we know that this is ethically wrong, although we might have trouble saying specifically why. Autonomy, in the same way as free will, is a good approximation of a common human ethical understanding. As the famous dictum from a judge about pornography goes: „I can’t define it, but I know it when I see it“, most of us know what disrespect of human autonomy looks or feels like, even if we can’t define it.

Still, in AI ethics this can only be the first step. In a second step, we really have to think about the meaning of autonomy in a specific case. What does autonomy mean in the evaluation of an AI application? How autonomous is a person who is either developing or using the application? Can a user practically exercise the autonomy she theoretically has? In short: autonomy is a good first approximation in AI ethics, but it will never be enough to get to the root of an AI ethics evaluation.

The ethics of the atomic bomb – and why it matters for AI

Did you ever wonder why there isn’t a field of ethics of the atomic bomb? Why there is a field of ethics for AI today, but there isn’t now, nor has there been at any time in the past, a comparable field for the ethics of the atomic bomb, although the fears about the impact of both technologies are comparable?

In essence, both AI and the atomic bomb are simply technologies like any other technology: electricity, lasers, you name it. One reason for the more complex ethical concern about AI might be that AI is fundamentally „about us“, about one of the most human traits that we can identify: our intelligence. The ultimate threat is that AI will become one of us, or even more than we are. It will become „superintelligent“, as Bostrom’s famous book on superintelligence suggests.

This is a distorted view. The technology of AI isn’t in any fundamental way different from the technology of nuclear explosions. Both may have different devastating consequences for the future of humanity, but from an ethical standpoint they are the same problem: the problem of how we as a society use our technologies, and according to which rules and guidelines.

AI only seems to be different because it can talk, it can (minimally) understand, and if we one day reach general AI, it could become a „person“, a silicon version of ourselves. But apart from the fact that we will not reach general AI in the near future (the short argument: general intelligence is wetware with its full biological, non-abstractive complexity, whereas AI, as programmed, will always rely on abstractions, beginning with 0s and 1s), AI does not pose fundamentally different ethical considerations than an atomic bomb.

Still, although AI is not fundamentally different from other technologies in its ethical deployment, it differs in many details. AI is much more woven into the fabric of our societies; it is harder to capture, even invisible or incomprehensible, which makes ethical considerations much more intricate. It has many more facets than a simple atomic bomb in its silo, so theoretical and practical ethical considerations must be able to handle, and find an argumentation for, complex situations.

In short, be it AI, the atomic bomb or CRISPR – the fundamental ethical question remains how we as a society decide to deploy our technological tools, for what and for whom.