Towards Meaningful AI Policy


20 September, 2017


The academic year at Princeton is starting, so I have enrolled to audit the computer science course COS 324 – Introduction to Machine Learning. I have no intention of becoming the next bright spark in artificial intelligence. I joined this course because I feel that many of the legal, policy, and philosophical analyses we read about machine learning are poorly informed. While it may be true that one need not know how to operate a tractor to make sensible agricultural policy, I do not think it is healthy to regulate complex algorithms without understanding their potential and limitations. Ian Bogost has argued, rightly in my opinion, that the blind fascination with algorithms has reached a near-theological level and would be better brought back down to Earth. Only then will lawyers, policy makers, and philosophers be able to make meaningful sense of these systems.

Impact of AI engineering

The knowledge gained through this course will also be relevant to our new project, in which we aim to understand how decisions in machine learning engineering affect the social model we are building towards. In other words, we aim to link the engineering ethics of AI with its impacts on political theory. Understanding this link will allow us to develop policies that incentivize engineers, companies, and governments to optimize their machine learning algorithms not just for efficiency and speed, but also for concepts such as justice and fairness, applied to their specific contexts of deployment. Our first publications, in January of next year, will showcase some of this thinking in concrete cases.

Earlier works

Of course, this work will stand on the shoulders of giants, such as MIT and Harvard's Ethics and Governance of Artificial Intelligence Initiative, the Fairness, Accountability, and Transparency in Machine Learning project, the Artificial Intelligence Now initiative, and the efforts of NYU and Data & Society. Interesting reading for now includes Ryan Calo's draft paper "Artificial Intelligence Policy: A Primer and Roadmap" and the RAND Corporation's new report "An Intelligence in Our Image". If you wish to be involved in this work, please join us for a (yet to be formally announced) workshop/unconference on 9 March 2018!

Bendert Zevenbergen

Academic Liaison at Princeton University
