Responding to Internet creepiness

Internet technologies, services, and platforms seem to be getting more sophisticated by the day. Online data flows of all types are growing in complexity in ways that can be appropriate, but all too often new features on established services strike users as ‘creepy’ (for some background, follow The Guardian‘s new series on the invasive nature of some Internet services). The summer of 2015 may be remembered as the moment when concerns about privacy creepiness started changing things: Microsoft and Apple have released particularly strong statements about privacy, EU courts have been formally advised to reconsider the data-sharing-enabling Safe Harbour provisions with the US, and privacy regulators are asking ethicists to help out. These are clear signs that we’re lost in a privacy policy vacuum.

The phenomenon of ‘creepiness’ was described in a 2013 paper titled “A Theory of Creepy: Technology, Privacy, and Shifting Social Norms” by Omer Tene and Jules Polonetsky. They explain that a creepy activity is not necessarily “[…] exactly harmful, does not circumvent privacy settings, and does not technically exceed the purposes for which data were collected.” Rather, a new technology, device, or information flow is considered creepy when it violates traditional social norms. Laws, however, are not sufficient to guide engineers about expected social norms, or to nudge them to make ethical decisions. According to the authors, companies should engage transparently with their consumers about data practices, and governments must realise that “social norms are rarely established by regulatory fiat, and that laws that fail to reflect techno-social reality may not fare well in the real world.”

Companies such as Apple and Microsoft seem to have picked up on the message about creepiness, publishing very open blog posts about the data they do collect and, more prominently, promise not to collect. Both close with heart-warming sentences: Apple’s “Our commitment to protecting your privacy comes from a deep respect for our customers” and Microsoft’s “We will continue to listen and respond, to earn your trust.” Similarly, on the legislative and policy level, the Advocate-General of the Court of Justice of the EU advised the (sitting) judges that the US’s intelligence gathering is too creepy (note: not his exact words) for Europe to allow an open-ended data-sharing agreement in the form of the Safe Harbour provision with the US. The US Mission to the EU responded, stating that “the United States does not and has not engaged in indiscriminate surveillance of anyone, including ordinary European citizens.” Of course, this statement does not directly cover the data received from the UK’s GCHQ through its recently revealed KARMA POLICE programme, which set out to “correlate every user visible to passive SIGINT [signals intelligence] with every website they visit[…].”

While policy makers around the world are increasingly stuck trying to update privacy laws from previous technological eras in an effort to counter some creepiness, the European Data Protection Supervisor seems to have given up on legal efforts and institutionalised an advisory ethics board instead. While we await conclusions from this board, I invite you to join my workshop on the practical ethics and legal philosophy issues arising in Internet engineering and experimentation. The workshop will take place on Saturday, the 24th of October, between 4pm and 6pm in room C.0.23 at the Amsterdam Privacy Conference.

(Fun fact: if you search for ‘ethics’ on the European Data Protection Supervisor’s website, you are presented with some emails written by people interested in the above-mentioned ethics board. The Supervisor is not quite leading by example, and may be in need of some ethical analysis of its own systems first.)

Bendert Zevenbergen, Academic Liaison at Princeton University

Contact me