26/06/2023 - Welcome back to the final entry in Considerati's comprehensive blog series on the influence of Artificial Intelligence. At Considerati, we are committed to demystifying the intersection between AI and other areas such as liability, privacy, and ethics, particularly now that the EU AI Act is moving to the trilogue phase. 

This blog series was launched ahead of the recently concluded conference Artificial Intelligence, Impact & Update 2023, which Considerati organised in collaboration with Outvie on 16 June 2023. Prof. mr. dr. Bart Schermer, founder of Considerati, spoke on the status of and developments in AI regulation. Responsible Tech consultants Marianne Schoenmakers and Jord Goudsmit also organised a workshop on how to measure and monitor Responsible AI Governance within organisations.  

Last week, we looked at the intersection between privacy and artificial intelligence in the blog post titled “Unlocking the Potential: AI, GDPR, and the Future of Privacy”. Today, we take this a step further and consider how data protection engineering is relevant for AI.

Introduction

Data Protection Engineering, or Privacy Engineering, is the discipline of translating the principle of data protection by design and by default into specific technical requirements for systems that process personal data, including AI systems, and into complementary organizational measures. It provides guidance on minimizing privacy risks so that organizations can properly allocate resources and effectively implement technical controls within their systems. Data Protection Engineering is an amalgamation of software engineering, ethical considerations, and legal compliance.

Why is Data Protection Engineering Relevant for AI?

Because AI systems are inherently data-intensive, data protection engineering is of great importance, especially in light of rapid breakthroughs in AI and the growing importance of data privacy. On the one hand, AI systems can facilitate more robust protections, such as advanced intrusion detection systems and automated data anonymization. On the other hand, the large-scale data processing inherent in many AI applications can pose significant privacy risks if not properly managed. From both a legal and an ethical standpoint, it is therefore imperative to ensure that these technologies protect the privacy of data subjects throughout the data lifecycle.

Additionally, users of AI systems must be able to trust the technology. By ensuring that data is handled in a privacy-preserving manner, data protection engineering not only helps train AI models without unnecessarily exposing raw personal data, but also ensures that relevant information is provided to users in a transparent manner.

A key component that effectuates privacy engineering is privacy-enhancing technologies (PETs), such as homomorphic encryption: a technique that can be used to guarantee the confidentiality, integrity, and availability of personal data, while simultaneously preserving accuracy, intelligibility, and operability when processing that data. Lastly, engineering a technically robust AI system reduces the likelihood of data breaches and allows technological innovation to be balanced with the data protection of individuals, which matters all the more given the increased regulatory scrutiny of AI systems.
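To make the idea of homomorphic encryption concrete, the sketch below uses the open-source python-paillier library (the `phe` package) to sum sensitive values without ever decrypting them. The salary scenario and variable names are illustrative assumptions, not a reference implementation.

```python
# A minimal sketch of additively homomorphic encryption with
# python-paillier ("pip install phe"). The salary data is illustrative.
from phe import paillier

# Data subject side: generate a keypair and encrypt the raw values.
public_key, private_key = paillier.generate_paillier_keypair()
salaries = [42_000, 55_500, 61_250]
encrypted_salaries = [public_key.encrypt(s) for s in salaries]

# Processor side: compute on ciphertexts only. Paillier encryption is
# additively homomorphic, so sums and scalar multiplications work
# without access to the plaintext values.
encrypted_total = sum(encrypted_salaries, public_key.encrypt(0))
encrypted_average = encrypted_total * (1 / len(salaries))

# Key holder side: decrypt only the aggregate, never the individual values.
print(private_key.decrypt(encrypted_average))  # ~52916.67
```

The processor in this sketch learns only an encrypted aggregate; the individual values stay confidential end to end, which is precisely the combination of confidentiality and operability that PETs aim for.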

There are numerous industry use cases in which data protection engineering is crucial:

Smart home devices: AI-powered smart home devices like Google Nest or Amazon Echo are becoming increasingly common. These devices continuously process personal data in the background, which raises serious privacy concerns. Privacy engineering can help ensure that these devices process data in a way that respects user privacy, for instance by automatically anonymizing voice data and obtaining user consent for data collection.
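As a simple illustration of that anonymization step, the hypothetical sketch below pseudonymizes a voice-assistant event on the device before transmission: the stable account identifier is replaced with a keyed hash, and the raw transcript is dropped in favour of the derived intent. The event schema and field names are invented for this example, not any vendor's actual format.

```python
# A hypothetical sketch of on-device pseudonymization and minimization
# for a smart-speaker event; the schema is illustrative only.
import hashlib
import hmac

# In practice this key would live in the device's secure element,
# not in source code.
PSEUDONYM_KEY = b"device-local-secret"

def pseudonymize(user_id: str) -> str:
    """Replace a stable identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def minimize_event(event: dict) -> dict:
    """Keep only what the backend needs; drop the raw transcript."""
    return {
        "user": pseudonymize(event["user_id"]),
        "intent": event["intent"],        # e.g. "set_timer"
        "hour_of_day": event["timestamp_hour"],
    }

raw_event = {
    "user_id": "alice@example.com",
    "intent": "set_timer",
    "transcript": "hey, set a timer for ten minutes",  # never leaves the device
    "timestamp_hour": 18,
}
print(minimize_event(raw_event))
```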

Automated decision-making systems: These systems, which include loan approval algorithms, HR recruiting tools, and advertising platforms, often process sensitive personal data. Privacy engineering can ensure that these systems adhere to privacy-by-design principles, including data minimization, purpose limitation, and transparency, for example by explaining outcomes and providing oversight and objection mechanisms.
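As one way to enforce data minimization and purpose limitation in such a pipeline, the hypothetical sketch below passes fields through to a loan-scoring model only when they appear on an explicit allow-list for the stated purpose. The purpose register and field names are invented for illustration.

```python
# A hypothetical sketch of purpose limitation and data minimization in
# an automated decision-making pipeline; the purpose register and
# field names are illustrative.
ALLOWED_FIELDS = {
    # For each registered processing purpose, the fields that may be used.
    "loan_scoring": {"income", "existing_debt", "employment_years"},
    "fraud_check": {"account_age_days", "recent_chargebacks"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Pass through only the fields allowed for the stated purpose,
    failing loudly for purposes that were never registered."""
    if purpose not in ALLOWED_FIELDS:
        raise ValueError(f"no registered purpose: {purpose!r}")
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

applicant = {
    "income": 48_000,
    "existing_debt": 7_200,
    "employment_years": 6,
    "marital_status": "married",  # not needed for scoring: dropped
    "postcode": "1017 AB",        # proxy-discrimination risk: dropped
}
print(minimize(applicant, "loan_scoring"))
# {'income': 48000, 'existing_debt': 7200, 'employment_years': 6}
```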

Healthcare: AI algorithms are increasingly used in healthcare, from diagnosis to treatment recommendation. These algorithms often process sensitive health data, necessitating robust privacy protections. Privacy engineering plays a key role in ensuring these protections are integrated into the design of AI systems, for example by generating synthetic data or using digital twins.
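To hint at what synthetic data generation can look like, the toy sketch below fits a simple statistical model to stand-in patient measurements and samples synthetic records from it, so that the downstream pipeline never touches real patient data. Production systems use far more careful generators, often with formal guarantees such as differential privacy; the numbers here are invented.

```python
# A toy sketch of synthetic data generation for a healthcare pipeline:
# fit a multivariate Gaussian to real measurements, then sample
# synthetic records with the same overall statistics. The data is
# invented and the method deliberately simplistic.
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for real patient data: [systolic_bp, cholesterol_mg_dl].
real_patients = np.array([
    [118.0, 182.0],
    [132.0, 210.0],
    [141.0, 244.0],
    [125.0, 198.0],
    [150.0, 260.0],
])

# Fit per-feature means and the covariance on the real data...
mean = real_patients.mean(axis=0)
cov = np.cov(real_patients, rowvar=False)

# ...and sample synthetic records; the model downstream only ever
# sees these, never the real measurements.
synthetic_patients = rng.multivariate_normal(mean, cov, size=1000)
print(synthetic_patients[:3].round(1))
```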

Conclusion

Just as the GDPR has been instrumental in shaping the privacy landscape, privacy engineering, along with the obligations in the upcoming AI Act and AI Liability Directive, will be key to ensuring that the AI systems of the future are designed and developed with privacy at their core. While the GDPR and the AI Act provide the "what" – the legal requirements for privacy and AI – privacy engineering provides the "how" – the technical means to implement these requirements in AI systems.

While public bodies such as the European Union Agency for Cybersecurity (ENISA) are aiding organizations by setting standards and guiding legislative initiatives, it is crucial that organizations themselves take seriously the technical requirements that protect privacy. AI systems ought to be created and deployed in a manner that is not only efficient and legally compliant, but that also stands as a technical and ethical bastion of an individual's right to privacy. 

This entry concludes our AI Blog Series. We trust that our continued exploration of various topics and their intersection with AI has provided you with valuable insights. 

Do you have any further questions you would like answered about the topics covered in this series? Or do you, as an organization, require practical advice to ensure that your use of AI is responsible? Contact Considerati, as we offer specialized advice and tailor-made support.

Rohit Hebbale, Legal Consultant
