AI Policy: Making Sense of Science


11 May, 2017

Technology Ethics | Considerati

People enjoy a space of personal freedom, partly because powerful state and private actors do not have the capacity to control every aspect of life. If left unchecked, developments in information technology, artificial intelligence ("AI"), and data collection may extend the capacity of institutions and companies to invade these freedoms. Public policy options must be explored now to understand and effectively govern the relationship between people and machines. A public debate will only be fruitful, however, if there is a common understanding of the underlying technologies, their likely trajectories, and their unintended consequences.


Participants in the policy debates will often lack a grounded understanding of how AI processes work. Typically, a conversation on these topics starts from incompatible assumptions about the capabilities of current systems and the way AI is likely to develop. For example, one participant will argue that machine intelligence cannot surpass human intelligence because the two are not comparable by definition, while another predicts that the moment machines surpass humans, we will all be enslaved to a system we can no longer understand, perhaps by the year 2040. Any policy debate that does not include technologists who are familiar with these systems and the dynamics of the social contexts they influence is doomed to be an argument about science fiction.

Unintended consequences

In her excellent book "Weapons of Math Destruction," author Cathy O'Neil makes the point (among many others) that even well-meaning deployments of algorithms, machine learning, and AI can and will have devastating unintended consequences for those who are governed by the system. We see this in the systematic exclusion of groups from the job market through opaque personality tests, and in the concentration of criminal enforcement in poor areas while more serious financial crimes go untouched. My colleagues at Princeton recently showed how language training of AI systems replicates human biases, making powerful computational decisions reflect our human judgments with all our ingrained prejudices. Such systems may even reinforce bias, rather than replace human decision making with a fairer, more just alternative.

Policy primers

The Obama administration started to explore the social issues raised by AI and suggested some policy solutions. For example, the labor market and resource distribution will be affected, so citizens need to be retrained for a world where their current jobs may be automated. The Artificial Intelligence Now initiative has similarly considered the effects of AI on social inequality, the labor market, health care, and the ethical responsibilities of the proprietors of these systems. The European Parliament recently agreed to a non-legislative report suggesting that liability regimes for autonomous systems (robots or software) may need to be reconsidered, as a causal link between their harmful actions and the intention of the producer or owner may become unprovable.

I look forward to exploring the impact of AI on the concepts of political philosophy (e.g. freedom, justice, power), ethics, and human rights (and vice versa) with the cooperation of many amazing people at Princeton’s Center for Information Technology Policy and the University Center for Human Values.


Bendert Zevenbergen

Academic Liaison at Princeton University

