The ethics of artificial intelligence will be the UK’s competitive advantage, as the House of Lords suggests in its recent report “AI in the UK: Ready, Willing and Able?” In summary, the report states that AI should be developed for good and not for bad. Such lofty language about doing good with technology sounds wonderful, but it is quite meaningless if you don’t confront the value tensions that will inevitably arise from the deployment of AI technologies. This is what ethics is all about! While I wholeheartedly agree that Europe can and should take the lead on AI policies grounded in strong ethical reasoning, more in-depth guidance will be needed from governments.
Ethics analyses take many forms. I frequently lecture to computer scientists about the value of ethics reasoning in their work (see a write-up of a lecture at Stanford last year). The common theme in my lectures is that there are many ways of reasoning ethically, but the real value lies in being able to justify one course of action over the alternatives. It is the engagement with other ways of approaching a problem, a solution, and their effects that shows the decision made is the most defensible way to go. For example, claiming that a system, database, or algorithm is fair means nothing until you explain how the system is fair, and to whom. Fairness will mean different things to different people affected by the system. A claim to fairness must be defended, and thus be open to scrutiny. The UK report falls short of such thinking, though its call to action is encouraging.
How to come up with sound policy
In my previous blog post, I briefly outlined a policy and funding approach that could lead to genuinely ethical design of AI technology and policy. I’m convinced that it is only through experimentation with AI approaches on real-world problems that we can come to understand the tensions, trade-offs, and ethical dilemmas that sound policy could resolve. Sound policy begins with an evidence base. In the case of AI policy, that evidence base should be built from a wide variety of use cases in which interdisciplinary teams have considered, debated, and purposefully designed systems not only to test ethical tensions, but also to reflect on their outcomes.
Competing with global superpowers in AI
It’s exciting that a group of European scientists and governments have now proposed ELLIS (“European Lab for Learning and Intelligent Systems”), an ambitious European research institute for artificial intelligence. The aim is to compete with global superpowers in AI, such as China and the US. The competition is not only over knowledge and technology, but also over retaining and attracting talent. If and when ELLIS becomes reality, there’s a genuine chance for Europe’s tradition of ethical and social reflection in technology governance to become part of this project and influence the course of AI’s global development.