The European Commission will announce at the end of April that it is drafting a policy roadmap on artificial intelligence. This document will likely propose an action plan for a policy process, which I predict will consist of many, many consultations. Previously, the European Parliament had called upon the Commission to explore the liability questions posed by innovations in robotics and artificial intelligence, so this Communication is likely a first response in a long policy process. During this process, I will advocate for the importance of ethical analyses of technology and policy, which I will explain briefly in this contribution.
Governments & AI
The Commission’s announcement is timely for many reasons. It will not have escaped the reader that artificial intelligence is currently a very hot topic in research and business, though governments are only beginning to explore the issues. The US had an early start, with two reports exploring policy options from the Obama White House (disclosure: the director of my department, Prof. Ed Felten, worked on these documents). However, the newly inaugurated President Trump chose not to continue this work and moved these documents to the archives. China, of course, has an ambitious AI policy, which the Massachusetts Institute of Technology advises the rest of the world should copy.
Definition of the concept
Before diving deeply into analyses, it is important to understand the object and field of regulation, and how the range of technologies interacts with society. In a recent submission to the UN on AI policy (in this case, with regard to extreme poverty in the US), our department explained some of the fundamentals of AI policy. Importantly, and perhaps confusingly, we state that no commonly agreed definition of artificial intelligence exists and “[t]herefore, the scope of what is and isn’t considered to be AI is flexible.”
Such uncertainty about what should be regulated requires policy makers to make some distinctions. We advise focusing on the concept of narrow AI, as opposed to general AI. We describe these terms as follows:
“Narrow AI is created to address specific application areas, where machines typically outperform humans in terms of speed, accuracy, and efficiency. Successful applications of narrow AI can be found in many parts of society. General AI requires systems to exhibit intelligent behavior that is (at least) as broad, adaptive, and advanced as a person across the full range of cognitive tasks. While it is unlikely that general AI will be achieved within the next few decades, it is expected that specific tasks performed by humans can and will be replaced by narrow AI applications on an ongoing basis.”
While the exact scope of regulation may not be clear, a range of ‘intelligent’ information technologies, decision algorithms, and embodied forms in robotics are already having a noticeable effect on individuals and society. We are currently working with the University Center for Human Values to understand the deeper social issues affected by specific forms of artificial intelligence. To me, it is becoming increasingly clear that meaningful and targeted policy can only be achieved when the causal link between engineering choices and their social impact (at the individual, group, and societal levels) is properly understood through an ethical analysis, whereby some fundamental questions are asked. These types of analyses will likely expose some areas where regulation is needed, or suggest how existing law may be interpreted.
AI in 2018
In the course of 2018, I will follow the developments of the European AI policy process closely through my contributions to this blog, expanding my argument for the ethical analyses of AI policy each time.
Bendert Zevenbergen, Academic Liaison at Princeton University