Did you ever wonder why there is no field of "atomic bomb ethics"? Why there is a field of AI ethics today, but there is not now, nor has there ever been, a comparable field for the ethics of the atomic bomb, even though the fears about the impact of both technologies are comparable?
In essence, both AI and the atomic bomb are simply technologies like any other technology: electricity, lasers, you name it. One reason for the more complex ethical concern around AI might be that AI is fundamentally "about us", about one of the most human traits we can identify: our intelligence. The ultimate threat is that AI will become one of us, or even more than we are. It will become "superintelligent", as Nick Bostrom's famous book on superintelligence suggests.
This perception is misleading. The technology of AI is not in any fundamental way different from nuclear technology. Each may have its own devastating consequences for the future of humanity, but from an ethical standpoint they pose the same problem: the problem of how we as a society use our technologies, and according to which rules and guidelines.
AI only seems different because it can talk, it can (minimally) understand, and if we were one day to reach general AI, it could become a "person", a silicon version of ourselves. But apart from the fact that we will not reach general AI in the near future (the short argument: general intelligence is wetware with its full biological, non-abstractive complexity, whereas AI, as programmed, will always rely on abstractions, beginning with 0s and 1s), AI does not pose fundamentally different ethical considerations than an atomic bomb.
Still, although AI does not raise fundamentally different ethical questions than other technologies, it differs in many details. AI is far more woven into the fabric of our societies; it is harder to capture, often invisible or even incomprehensible, which makes ethical considerations much more intricate. It presents many more perspectives than an atomic bomb sitting in its silo, so theoretical and practical ethics must be capable of handling, and finding arguments for, complex situations.
In short, be it AI, the atomic bomb, or CRISPR, the fundamental ethical question remains how we as a society decide to deploy our technological tools, for what, and for whom.