Dual-use tech policy for AI R&D

Much enthusiasm exists around the applications of artificial intelligence. Indeed, the vast amounts of data and computational power that now operationalize complex and sophisticated statistical approaches show very promising applications to old and new problems. However, a new publication titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” demonstrates how the general-purpose technologies underpinning positive AI applications can also be used for a wide range of less desirable, or even harmful, purposes.

The authors note that the problem is not new: technologies have always enabled good as well as bad behavior. The European Commission speaks of technologies that can be put to civilian as well as military uses, echoing the language of the Wassenaar Arrangement, which aims to limit the export of such technologies. The authors note, however, that the Wassenaar Arrangement has had only limited success with digital technologies such as cryptography, because software is not exported in the same way as body armor or explosive chemicals.

Threats posed by AI

The report identifies three sets of threats posed by AI technologies that need to be managed. (1) Cybersecurity: threats to digital security will grow in scope as the mundane, time-consuming tasks of hacking are delegated to automated systems, allowing more attack vectors to be explored. (2) Autonomous systems: new threats will arise from the use of autonomous physical systems, such as drones, to carry out attacks in the real world. (3) Fake news: political propaganda will become more targeted and more persuasive, for example by tailoring political messages to people’s currently inferred moods, or by imitating voices and video to attribute false statements to influential figures.

Recommendations

The first high-level recommendation offered by the report is that “Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.” Later in the report, it is suggested that this collaboration be informed in part by lessons learned in the regulation of cybersecurity. I could not agree more that technically informed regulation of AI technology is pertinent, but I would suggest that this collaboration also be informed by the lessons of bioethics.

Questions for regulation

Professors Seumas Miller and Michael J. Selgelid go further than the current state of cybersecurity research in their paper titled “Ethical and Philosophical Consideration of the Dual-use Dilemma in the Biological Sciences” and raise the following questions for regulation (loosely interpreted for the AI field):

  • Who formally decides which types of research are permissible, and which are off limits?
  • What safety precautions should be in place when experimenting on real persons, or with actual personal data?
  • Should there be mandatory licensing of some inventions, to the extent that they are patented or otherwise protected by intellectual property rights? If so, which ones, and why?
  • Should the field of AI engineering require mandatory training of some sort (e.g., in AI and ethics)?
  • Given the potential misuses of the researched technology, should AI research labs require psychological screening and background checks of personnel?
  • When can academic freedom be justifiably limited and research results censored?