02/08/2019 – During an exceptionally hot European summer, policy makers and supervisors are warming up to AI regulation. Setting their holidays aside, Dutch MPs asked parliamentary questions about the export of facial recognition software to Chinese public authorities, and the Dutch Central Bank presented its guidelines on the use of AI in the financial sector. But most importantly, the newly elected European Commission president, Ursula von der Leyen, vowed to put forward legislation for a coordinated European approach to the human and ethical implications of artificial intelligence in her first 100 days in office. Meanwhile, a Dutch business delegation including Considerati travelled with Prime Minister Rutte to the Boston area for a trade mission on AI/robotics. Our visits to AI/robotics startups, accelerators and academic labs highlighted the wide range of technologies that are called AI; the importance of both economic and societal value; and why both innovative businesses and policy makers have to perform a balancing act to introduce AI into our society.
It is deceptively easy to think of AI as a single technology. The organisations we visited during the trade mission showed that this could not be further from the truth. Consider Humatics, a company that provides "microlocation" products enabling 3D positioning at centimeter or even millimeter scale, which can be used, for example, to track tools in a factory at high resolution. We also visited Affectiva, a startup that develops emotion recognition software that (among many other things) can be used to detect drowsy drivers.
These two examples show the variety of technologies that can be considered AI. They also show that the specific application of a technology is the right unit of analysis for judgements about benefits and harms. If I had presented the same technologies as tools to give retail workers feedback on their attitude towards customers, or to track their movements around the workplace, I am sure many would judge their merits differently. This means that an extensive, generic approach to AI regulation is hard to get right without slowing down innovation.
To get to the point where a technology can be used for a specific purpose, there needs to be a certain liberty to disregard the status quo. While visiting the many exciting projects, questions about the 'why' of a project were frequently answered with an optimistic 'because we can!'. This decoupling of innovation from the context in which it will eventually provide economic and societal value seems to foster both the speed and the 'innovativeness' of innovation. For instance, at the Harvard Wyss Institute we were introduced to Kilobots, robots that can operate in an autonomous swarm (much like a school of fish). Such technology could one day serve as an advance party building colonies on Mars, or be put to chilling military purposes.
Losing oneself in dystopian worrying about every potential unwanted use of your innovation will probably kill that innovation. On the other hand, innovative products and services will eventually be deployed in the real world. And that world brings an array of shifting expectations and evolving interests, which lead to new policy questions and to questions about the preservation of existing norms. In other words: if you move fast and break things, people eventually get mad.
The challenge is: if too many premature concerns only hamper innovation, when do you start dealing with issues like acceptability and trust to protect your investment? Our experience helping businesses innovate responsibly shows that these issues are far more manageable when acknowledged and addressed early in the lifecycle of a specific application. That way there are no surprises, internally or externally, when bringing the product or service to market.
The same logic applies to the Commission’s initiative to regulate AI. Policy makers need to perform the balancing act of providing enough space for innovation while reaffirming and re-evaluating the norms that new technologies impact. If they fail, innovation will be stifled, or the public will lose trust and oppose the use of the technology. Policy makers also need to avoid the trap of creating generic regulation that disregards the enormous importance of the context in which AI is deployed. For instance, it might be wise to curtail facial recognition in some contexts (surveillance, marketing) and leave it unrestricted in others (photo applications that recognize who is in a picture).
For now, the Commission’s regulatory efforts seem to focus on a framework for AI with specific provisions on transparency obligations for automated decision-making, and on assessments to ensure that AI systems do not perpetuate discrimination or violate fundamental rights. In themselves these topics are not surprising, but the devil will be in the details. For instance, what will qualify as a violation of a fundamental right? If transparency is required, for what purpose and towards which audience? We will closely follow the regulatory efforts of the European Commission, both EU-wide and in the member states, and keep you updated here.
Are you interested in what AI regulation could look like, and how its requirements will impact your operations? Please contact me for more information.