15 February, 2017
I’ve just returned from giving a guest lecture at Stanford to computer science students as part of the Computer, Ethics, and Public Policy course. The lectures aim to give students the tools to argue and write persuasively about the benefits and risks of new technologies. Given that these are the people who are being groomed to work for tech giants or to develop startup apps that will mediate our lives in whatever way, I found it important to share the following thoughts.
Most important is creating awareness that several conflicting ways of reasoning exist about the virtues of emerging technologies. Engineers and developers may consider efficiency, scalability, and the size of databases worthwhile Ends that make a system ‘good’. It is a generalization, of course, but in my experience that ‘Ends justify the Means’ approach does drive innovation within computer science and network engineering to some extent. Lawyers, on the other hand, will give their blessing if the Means that the system uses to achieve its Ends (such as the scope of data collection and the sensitivity of data processing) adhere to rules and regulations. The regulatory framework upon which these decisions are based is created and upheld by senior politicians who typically consider the social and economic impact of new technologies as an End, but may not be aware of the technical underpinnings of the environment they regulate (e.g. when parliamentary technology oversight commissioners say things like “I am not a high-tech techie, but I have been told that is not possible”).
The conflicts in reasoning about the moral worth and appropriateness of a system across the various disciplines lead to a power vacuum over new Internet technologies. Engineers dictate the logical framework of new information flows, while judges and politicians have some (limited) power to halt innovations. As Gary Marchant aptly notes in his book, there is a “growing gap between emerging technologies and legal-ethical oversight.” This gap means new technologies may undermine abstract social values, or an outdated legal framework may uphold rules that are themselves no longer ethically justifiable due to technological change. For example, computational machine learning approaches to judicial sentencing may be more efficient, but can they ensure Ends such as social justice and maintain the authority or legitimacy of social institutions? Alternatively, do command-and-control copyright laws that treat works as tradeable units still promote creativity in an age of abundant information flows, or has the opposite become true?
Participants in conflicts about emerging technologies – who find themselves in this power vacuum – typically appeal to ethics to resolve the tensions between disciplines’ lenses. However, what is ‘ethical’ depends on how much value one assigns to the Ends and Means of information systems. Several different methodological approaches to ethics exist, which lead to varying conclusions about what is ‘ethical’. Ethics is not an exact science and should thus not be approached as a mathematical or data problem, as is common in computer science. Ethical dilemmas require more abstract thinking. However, the existing thought experiments in ethics rely on defined physical-world parameters (e.g. the trolley problem). They may therefore lose value when applied to the Internet’s different dimensions of time, space, and scope of information distribution.
During the lecture, we discussed some real-life examples in blockchain development, Internet censorship measurement, and grey hat hacking. Some interesting questions arose about the neutrality of technology, and whether the Ends justify the Means. In blockchain development, for example, some social and institutional ideals of cypherpunks are being hardcoded into technology, while their virtues, such as radical decentralization and trustless systems, can conflict with existing social institutions. To measure Internet censorship in a given region for whichever benevolent reason, data is typically collected through devices on local networks, which may put device owners at risk of being suspected of espionage by authoritarian governments. Similarly, ethical hacking at scale (through self-replicating botnets or otherwise) may breach systems that serve vital functions – such as security doors or supervisory industrial control systems – in order to expose their vulnerabilities. Should one attempt to argue persuasively that in these cases the Ends justify the Means? Which discipline’s power prevails in a public discussion about these systems?
Fortunately, we can draw from Immanuel Kant’s Categorical Imperative when he states that we should “[a]ct in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a Means, but always at the same time as an End.” An update for 2017 would add that a “person” may include their smartphones, IP addresses, browsers, etc. So when developing the next killer data analytics app, think about your Ends and your Means, and how others would reason about those, too. If you find that the underlying regulatory framework is indeed outdated and thus possibly immoral, you may need to cooperate across disciplines to argue persuasively.
Academic Liaison at Princeton University