Ethical high hopes and reality – a look at AI supported killer drones

Nobody said that ethical and trustworthy AI will be easy. If we want an idea of the difficulties that the realization of ethical AI will encounter now and in the future, we have to look at the continuum of AI applications, ranging from play robots for kids to military killer drones, and at their relation to ethical AI.

Of course, these are vastly different application domains, but both must be in some way receptive to ethical guidelines if AI ethics is to make sense at all. In the case of an AI integrated in a children's toy, there are obvious consequences to be avoided at all costs, for example harming the physical wellbeing of the child. Here the direction, the desired outcome of ethics, is straightforward: no child should be harmed by AI in any toy. But is the ethical direction of a "common good" the same in military applications, for example in AI-supported killer drones?

To illustrate the point, let's replace the subject "AI application" in the EU Ethics Guidelines for Trustworthy AI with the subject "AI-supported killer drone". Without a doubt, both refer to veritable AI applications within their domain.

  • "No AI-supported killer drone should unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans" (Ethics Guidelines for Trustworthy AI, 2018). I beg your pardon?
  • "AI-supported killer drones and the environment in which they operate must be safe and secure." What?

Of course these adapted quotes are extreme, even paradoxical, but they indicate the space in which AI ethics has to operate. A first response could be to argue that the goal of killer drones is unethical from the very start, so any ethical argument must fail naturally.

One could also argue that there is no single ethics for all AI application domains; ethical guidelines must be adapted to the respective industry or domain. But how far must they be adapted? How far must ethical requirements obey the internal goals and aspirations of an industry?

What these examples basically illustrate is the tradeoff between the goals and effectiveness requirements of an AI application, grounded in its embeddedness in an industry, and the ethical aspirations that potentially stand in conflict with these fundamental requirements (e.g. the ability to kill people). It is certainly not an overstatement that in most cases today such requirements will trump ethical considerations, given that there is no enforcement mechanism beyond the internal evaluations of the industry itself.