19/06/2023 - Welcome back to Considerati's comprehensive blog series on the influence of Artificial Intelligence. At Considerati, we are committed to demystifying the intersection between AI and areas such as liability, privacy, and ethics, in light of the EU AI Act moving to the trilogue phase. This blog series was launched ahead of the recently concluded conference Artificial Intelligence, Impact & Update 2023, which Considerati organised in collaboration with Outvie on 16 June. Prof. mr. dr. Bart Schermer, founder of Considerati, spoke on the status and developments of AI regulation. Responsible AI consultants Marianne Schoenmakers and Jord Goudsmit also ran a workshop on how to measure and monitor Responsible AI governance within organisations.
In the previous posts of this AI series, we delved into the proposed liability framework for AI in the European Union (“EU”) and discussed the potential of AI to cause human rights violations. The intersection of privacy and AI offers a mature, established example of both aspects.
This blog explores the privacy implications of AI through the lens of real-life use cases and discusses how the General Data Protection Regulation (“GDPR”) has emerged as, and will remain, a relevant and functional regulatory regime for AI.
One of the most common instances of AI around us:
Virtual assistants: It is becoming increasingly common for people to use assistants like Siri, Alexa, and Google Assistant in their homes and workplaces. These assistants are AI-based tools trained to process commands from the user and respond in a user-friendly manner. They also tend to record and process users' commands to improve the relevance and accuracy of their output. Additionally, recent reports indicate that some virtual assistants record and process conversations around them, whether or not intended as commands, to continuously train their underlying AI models. Given the potentially intimate and private settings in which these assistants are used, it is crucial that their design and functioning be GDPR compliant.
There are several other examples of existing and emerging AI in our everyday lives, where we are not only surrounded by AI in our most private spheres but also subjected to it in highly data-intensive contexts, such as autonomous vehicles, facial recognition, credit checks, content moderation, and healthcare diagnostics. The GDPR must therefore be taken into account to protect the rights and privacy of data subjects, and to ensure that the makers and users/deployers of AI remain compliant and avoid the GDPR's substantial fines.
The penalty imposed on the Dutch Tax and Customs Administration for using its fraud blacklist (FSV) demonstrates how, in the absence of definitive liability laws governing AI, the GDPR has been a reliable anchor for accountability in the use of AI. In brief, the case arose from the Administration's use of a fraud detection algorithm. The Administration was found to have committed several violations of the GDPR, including the processing of inaccurate personal data, which led to the algorithm wrongfully blacklisting individuals. This not only violated those individuals' rights under the GDPR but also had severe financial consequences for them. The Dutch Data Protection Authority consequently imposed significant sanctions on the Administration under the GDPR. The case is an apt illustration of the GDPR's importance in ensuring the responsible use of AI.
Over the past half-decade, the GDPR has established a substantial foundation for the protection of privacy and personal data. Guidance from national Data Protection Authorities and EU bodies such as the European Data Protection Board has helped clarify the scope of “personal data”, the lawfulness of its processing and retention, and the rights of data subjects in these contexts. Given that many real-life uses of AI, as discussed above, rely on personal data from data subjects, it is essential for the makers and deployers/users of such AI to comply with the GDPR, both now and after AI-specific legislation comes into force.
AI, despite its prevalence and hype, is an emerging technology with far-reaching uses and potential. For the same reasons, the regulation and governance of AI is a complex subject requiring ongoing evolution of the law in tandem with the technology, spanning multiple verticals of the legal regime - from human rights, data protection, and intellectual property, to sector-specific laws. With the EU AI Act entering the trilogue phase, and the proposed AI Liability Directive progressing through the legislative process, more targeted regulation of AI will soon materialise. However, alongside these upcoming laws, the importance and relevance of the GDPR as an equally necessary regulatory tool for AI cannot be overstated.
As we wrap up our analysis of AI and privacy in this edition of Considerati's AI blog series, we trust that our exploration has provided valuable insights into the intricate intersections of AI with liability, privacy, and ethics. The complexities of these relationships are pivotal in our digital ecosystem and understanding them is crucial.
In our next post, which will be published on the 26th of June, we will be concluding Considerati’s AI Blog Series by venturing into the realm of privacy engineering and AI. Stay tuned!
Do you have any questions about this subject? Contact Considerati, as we offer specialized advice and tailor-made support.