A remarkably interesting workshop was recently held at the Data & Society Institute in New York about the relevance and position of human rights in regulating artificial intelligence technologies. One of the main discussion points of the day was whether approaches through the lens of ethics and human rights are compatible, in conflict, or mutually reinforcing. In this post I will react to my colleagues Christiaan van Veen and Corinne Cath’s recent blog, in which they make some astute and high-level remarks about the purpose and value of analyzing these technologies through the lens of human rights.
Human rights and AI systems in societies
Van Veen and Cath position the legal human rights instruments as a source of power due to their moral legitimacy, since the various conventions, declarations and covenants have been ratified or implemented by many nations. Human rights norms and values are more clearly and commonly defined than ethical values because their meaning has been developed and specified through jurisprudence. The universal scope of the legal human rights frameworks is therefore well positioned to address potentially cross-jurisdictional issues that arise from the development and deployment of AI systems in societies. They further state, however, that companies and governments are increasingly imposing on themselves ethical duties and processes that allow them to be perceived as acting in good faith, while remaining unaccountable and free to interpret their own norms and values. In response, Van Veen and Cath call upon the wider human rights community to take responsibility in the rapidly developing field of AI technology and devise mechanisms of accountability.
The ethical lens
While I largely agree with these points, it appears that the nature of ethics as a discipline, and the intent of applying it in technology development, need to be positioned more precisely. Assessing the design and deployment of a technology in society through an ethical lens means that the decisions and technical design are scrutinized, and the reasoning is justified by considering alternative approaches. Such review will not (necessarily) be conducted in a court of law, but can happen internally at a government agency, a research center, or a technology company. While the values used in ethical reasoning and scrutiny may indeed not have as precise a meaning as those provided in law, the multitude and flexibility of concepts allows the designers, as well as the persons scrutinizing decisions, to have a wider debate about the social impacts of technological choices than the mere legal compliance checks encouraged by traditional legal frameworks. If done correctly and systematically, an ethical review procedure is an internal mechanism for steering technological development in line with human values and social norms. Of course, one can only hope that institutions and organizations conduct meaningful ethical review, rather than using the discipline as a public relations tool.
So, while the human rights framework and the ethics discipline share several of the same starting points (e.g. values such as dignity, privacy, autonomy, health), their operation and ‘added value’ are very different. For clarity, and not unlike Van Veen and Cath, I argue that ethics and human rights approaches can be mutually beneficial. Where ethics lacks mechanisms for accountability beyond what is institutionally agreed (and the famous “front page of the New York Times/court of public opinion” argument), the international human rights framework offers a set of relatively effective remedies. Linking the starting-point values of internal ethics review procedures to specific articles in existing human rights frameworks may allow some external leverage, and thus accountability, in the design of AI technologies. How exactly this will be done is a matter for a new research project, and I invite both Van Veen and Cath to explore the options with me.
Bendert Zevenbergen, Academic Liaison at Princeton University