Human autonomy is an expression often used in AI ethics, for example: "Users should be able to make informed autonomous decisions regarding AI systems." This statement in the "Ethics Guidelines for Trustworthy AI" of the EU Commission is as simple as it is far-reaching. The Guidelines further state: "The overall principle of user autonomy must be central to the system's functionality."
Human autonomy is often understood as a given fact that serves as a benchmark for the ethicality of a technology. The technology, be it genetics or AI, should not disrupt or endanger human autonomy. Or, to phrase it philosophically, the technology should not curtail human freedom, human free will. Obviously, phrased like this, the notion of autonomy opens a Pandora's box.
Do we only think that we are autonomous?
Do we have free will? How much autonomy do we have as individuals in a regulated society? Do we only think that we are autonomous, when in fact we are not at all? Since ancient times, these questions have filled libraries in philosophy and politics, and yet they remain unsolved and probably always will. Still, we can think about what this means for the use of autonomy in the context of AI ethics. Should we abandon the notion altogether, since it seems to present as a proven fact something that is not?
In a practical understanding of AI ethics, we can still propose to use the notion of autonomy because we all have an intuitive understanding of what it should ideally mean. If a technology disrespects human autonomy, we know that this is ethically wrong, although we might have trouble saying specifically why. Autonomy, much like free will, is a good approximation of a common human ethical understanding. As the famous dictum of a judge about pornography goes, "I can't define it, but I know it when I see it": most of us know what disrespect of human autonomy looks or feels like, even if we can't define it.
Still, in AI ethics this can only be the first step. In a second step, we really have to think about the meaning of autonomy in a specific case. What does autonomy mean in the evaluation of an AI application? How autonomous is a person who is developing or using the application? Can a user practically exercise the autonomy she theoretically has? In short: autonomy is a good first approximation in AI ethics, but it will never be enough to get to the root of an AI ethics evaluation.