Trajectories and consequences of Artificial Intelligence research


14 July, 2017


Today, New Scientist magazine published a fascinating issue in which a variety of scholars explore the ten biggest ethical challenges facing science. Articles of particular interest to you, as a reader of this blog, ask whether we should give up information privacy and whether we should give robots the right to kill humans.

Another particularly intriguing article asks the question “Should we stop doing science?” While I will not give away the content of the piece here, it offers food for thought for information and communication technology research, such as the looming changes that will arise with artificial intelligence.

Consider, for example, the power that Internet users grant private companies by sharing their personal data. The extent of this power became tangible when the Trump and Brexit campaigns were able, at least to some extent, to translate power over people’s attention into political power by using innovative psychometric analytics methods.

Advanced Artificial Intelligence

Personal data will serve as the basis for making the new wave of advanced artificial intelligence systems more precise, so there is an incentive to collect even more of it. The ever-changing default privacy settings of online platforms and apps already set the framework within which people think and act regarding their personal data. Combined with this incentive, our collective conception of information privacy may well shift even further towards openness and personal data sharing in the next few years.

Killer robots

The contribution about killer robots asks whether we may have an ethical responsibility to create machines that make more ‘humane’ decisions than humans would in times of stress. It is often argued that humans may be too corruptible, racist, biased, or lazy to make good decisions about other people’s fate or legal positions, so we should strive to build artificially intelligent systems to replace them. The article concludes that there is value in holding another human accountable when things go wrong, rather than a machine or algorithm. However, given the rapid pace of innovation in artificial intelligence and the increasing pressure to grant autonomous systems some recognition as legal persons, it may be sooner rather than later that we try to hold information systems accountable for the consequences of their statistical and mathematical recklessness.

The benefit of scientific progress

The article questioning the benefit of scientific progress, which typically brings with it new problems and even destruction, offers further parallels to the world of artificial intelligence research and development. For example, just as scientific advancements are often put to military use, algorithms can be deployed in an ongoing cybersecurity arms race. Public funding for artificial intelligence research may thus be prioritized towards offense or defense against other nations and groups, rather than towards solving the problems humanity might overcome with these new technologies.

While this blog offers a simplification of reality, it is still useful to reflect on the priorities we set when developing new technologies such as artificial intelligence and fitting them into our societies. The incentives and trajectory of research and development can be changed at any moment, but doing so requires some reflection by the political class.


Bendert Zevenbergen

Academic Liaison at Princeton University

