The Search for the Holy Grail of Trust in AI

Today there is a long list of attempts by industry, academia and governments to build the foundations of trust in the technology of artificial intelligence. But in the end, what we want to know is quite simple: can we trust this AI? Getting to this simple answer is a highly complex and dynamic process, and in some ways it always remains fuzzy.

Do you trust your mother or your father? The answer to this question might depend on a whole life spent with your parents, thousands of talks, situations and experiences, resulting in a simple yes or no. Seldom do you trust somebody or something just a little bit. Most of the time trust is a yes-or-no decision, yet very often one based on an immense amount of details and dynamic evaluations.

Trust in AI is no different. In the end we do not care about the details of a specific facial recognition application of the Chinese government; we simply want to know whether we can trust that the application will not be deployed to harm the Chinese people. How do we get there?

Naturally this is a complex mix comprising the whole lifecycle of an AI application, from non-bias in data gathering and non-discrimination in algorithm building to stability and fault tolerance in deployment. Of course, this is only the tip of the iceberg, and there might be hundreds of domains and indicators that should go into the evaluation of the trustworthiness of an application.

TRUST EVALUATIONS WILL HAVE TO CONSIDER DYNAMIC NETWORKS OF RELATIONSHIPS AND REQUIREMENTS.

Explainability, transparency, fairness and justice: the concepts proposed to evaluate ethical and trustworthy AI are legion. But just as with personal trust, trustworthy AI is embedded in a complex, dynamic network of relations between concepts. Justice without fairness is not worth much, and explainability without transparency will fail.

It is the interaction and relationships of the different concepts and requirements that form the final decision to trust an AI. A thorough ethical evaluation will have to consider this dynamic network of requirements, and consequently a one-time effort will not do. It is probable that in the future the measurement of ethicality will become a permanent task of overall company management, just as financial and production measurement is today.
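To make the notion of a requirement network more concrete, here is a minimal sketch in Python of how interdependent requirements could be aggregated into a single yes-or-no trust decision. The requirement names, scores and threshold are hypothetical illustrations, not an established metric; the gating rule simply encodes the observation above that justice contributes little without fairness, and explainability little without transparency.

```python
# Minimal sketch: trust as a yes/no decision over a network of
# interdependent requirements. All names, scores and the threshold
# are hypothetical and chosen purely for illustration.

REQUIREMENTS = {
    # name: (standalone score in [0, 1], requirements it depends on)
    "transparency":   (0.9, []),
    "explainability": (0.8, ["transparency"]),  # fails without transparency
    "fairness":       (0.7, []),
    "justice":        (0.8, ["fairness"]),      # worth little without fairness
}

def effective_score(name: str) -> float:
    """A requirement's score, gated by the weakest requirement it relies on."""
    score, dependencies = REQUIREMENTS[name]
    if not dependencies:
        return score
    return score * min(effective_score(dep) for dep in dependencies)

def trust_decision(threshold: float = 0.6) -> bool:
    """Collapse all effective scores into one simple yes-or-no answer."""
    overall = sum(effective_score(n) for n in REQUIREMENTS) / len(REQUIREMENTS)
    return overall >= threshold

if __name__ == "__main__":
    for name in REQUIREMENTS:
        print(f"{name:>14}: {effective_score(name):.2f}")
    print("Trust this AI?", "yes" if trust_decision() else "no")
```

In a permanent monitoring setting, as suggested above, such scores would be re-evaluated continuously rather than computed once, so the same network could flip the final decision as the application and its context evolve.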

Ethical high hopes and reality – a look at AI-supported killer drones


Nobody said that ethical and trustworthy AI would be easy. If we want to get an idea of the difficulties that the realization of ethical AI will encounter, now and in the future, we have to look at the continuum of AI applications, ranging from toy robots for children to military killer drones, and at their relation to ethical AI.

Of course, these are vastly different application domains, but both must be in some way receptive to ethical guidelines if AI ethics is to make sense at all. In the case of an AI integrated into a children’s toy, there are obvious consequences to be avoided at any rate, for example harming the physical wellbeing of the child. Here the direction, or the desired outcome, of ethics is straightforward: no child should be harmed by the AI in any toy. But is the ethical direction of a “common good” the same in military applications, for example in AI-supported killer drones?

To illustrate the point, let’s replace the subject “AI application” in the EU Ethics Guidelines for Trustworthy AI with the subject “AI-supported killer drone”. Without a doubt, both refer to veritable AI applications within their domains.

  • “No AI-supported killer drone should unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans” (Ethics Guidelines for Trustworthy AI, 2018). I beg your pardon?
  • “AI-supported killer drones and the environment in which they operate must be safe and secure.” What?

Of course, these adapted quotes are extreme and even paradoxical, but they indicate the space in which AI ethics has to operate. A first response could be to argue that the goal of killer drones is unethical from the very start, so any ethical argument must naturally fail.

One could also argue that there is no single ethics for all AI application domains, and that ethical guidelines must be adapted to the respective industry or domain. But how far must they be adapted, and how far must ethical requirements obey the internal goals and aspirations of an industry?

What these examples basically illustrate is the tradeoff between the goals and effectiveness requirements of an AI application, rooted in its embeddedness in an industry, and the ethical aspirations that may stand in confrontation with these fundamental requirements (e.g. the ability to kill people). It is certainly not an overstatement that in most cases today the requirements will trump ethical considerations, given that there is no enforcement mechanism beyond the internal evaluations of the industry itself.