24 March, 2016
The myth that technologies are value-neutral or apolitical artifacts is often backed up by statements like “a knife can be used for cutting vegetables or for hurting people.” While there is some truth to the idea that the morality of a technology depends in part on the ends for which it is used, this holds less true for Internet technologies. Apps, platforms, and network protocols are designed by people who establish a logical container within which others interact, one in which concepts like distance and, to some extent, time are abstracted away. Designers are free to decide how these technologies work, under which technical rules information flows, and what the foreseen effects on users and society are. These are the political and value choices addressed in the special issue on Governing Algorithms of the journal Science, Technology, & Human Values.
Malte Ziewitz opens the analysis by asking what an algorithm actually is, and how algorithms exercise their power and influence. Mike Ananny attempts an answer, focusing on networked information algorithms (NIAs), which he defines broadly as “assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action.” In short, algorithms are not just technical but sociotechnical. A moral assessment of algorithms must therefore focus on the interaction between humans and technology, rather than trying to reverse engineer the technology to find embedded values.
Through a range of examples, Kate Crawford expertly demonstrates how algorithms function to create authority and to shift or solidify power, making them inherently political – a technologically embedded reflection of a particular world view. In agreement with Ananny, she shows that the important unit for study or regulation is not the technology or device itself, but the systems of power that it mobilises. She suggests that designers of algorithms should assess their work through the concept of ‘agonistic pluralism’ – taking into account the social space in which their creations operate, rather than merely focusing on innovative achievements.
Tal Zarsky explores the regulatory options when algorithms cause inequalities or injustices. Commentators often call for mandatory transparency of the ‘black boxes’ that algorithms are perceived to have become. However, he notes that transparency may create other inequalities: special interests or technically savvy persons may be able to game the system, while others lack the time or the skills to assess the now-transparent algorithm. His economic analysis shows that some negative social impacts can be mitigated through regulatory intervention, while other concerns will then either arise or persist through other properties of the algorithm.
Bruno Latour argued that the study of sociotechnical systems must move beyond matters of fact to matters of concern. Herein lies an opportunity for regulators and legislators, through meaningful cooperation, to grasp how the Internet works and – more importantly – what its effects are.
Academic Liaison at Princeton University