11/05/2023 - This morning (May 11th) the AI Act was adopted at committee level. The plenary vote is scheduled for mid-June, after which the trilogue phase can start. In this blog we present some of the most fundamental and debated topics from these compromise amendments.
Defining AI has proven to be a challenging, yet crucial task. The official definition in the AI Act largely overlaps with the one used by the Organisation for Economic Co-operation and Development (OECD) and reads as follows:
“‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
In comparison to earlier definitions of AI, this one is narrower and focuses on systems based on machine learning and deep learning, which is more in line with the conservative political groupings. In contrast, the liberal-left politicians were pushing for a broader definition of AI, which would also cover automated decision-making.
The committees have chosen to add an extra layer to the classification of high-risk AI systems listed in Annex III. The output of an AI system must pose a 'significant' risk of harm to health, safety or fundamental rights. Though highly debated because the term 'significant' can add extra confusion, it was introduced so that not every use case listed in Annex III will automatically be classified as high-risk.
Deployers of high-risk AI systems are required to perform a fundamental rights impact assessment (article 29a). This should include a detailed plan explaining the mitigating measures. Interestingly, deployers must also notify relevant stakeholders, as well as representatives of groups of persons affected by the high-risk AI system, in order to collect relevant information. An exception applies here for small and medium-sized enterprises (SMEs). Public authorities are required to publish a summary of their fundamental rights impact assessment. If a deployer is also required to perform a data protection impact assessment (DPIA), both assessments should be conducted in conjunction.
The compromise amendments differentiate between foundation models, generative AI and general purpose AI. Starting with foundation models: these are a recent development in which a model is trained on a vast amount of (often unlabeled) data and is designed for general output. As a result, these models can be used in many distinct applications. Applications of foundation models can be AI systems with a specific intended purpose, or general purpose AI systems like Midjourney and ChatGPT. These are general purpose AI systems because they can be used in, and adapted to, a wide range of (general) applications for which they were not intentionally and specifically designed. Midjourney and ChatGPT are also examples of generative AI, as they are systems capable of generating text, images, or other media in response to prompts.
Very large online platforms (VLOPs) can strongly influence the shaping of public opinion and discourse, elections and democratic processes, and societal concerns, according to the European Parliament. Therefore, AI systems intended to be used by social media platforms that qualify as VLOPs, for instance in their recommender systems, will be classified as high-risk AI systems and must comply with the requirements set out in Chapter 2.
Do you want to learn more about the AI Act and anticipate the impact it will have on your organisation? Register for our workshop on the AI Act on the 22nd of June or send me (Jord Goudsmit) a message! In this workshop you will learn what the AI Act entails, the latest state of play, what preparations your organisation should make now, and how it overlaps with related legislation. Register here: https://considerati-academy.nl/opleidingen/aiact/
We at Considerati are thrilled to announce the launch of a comprehensive blog series focused on Artificial Intelligence. As a leading voice in privacy and data protection, we understand the profound importance of demystifying the complex world of AI and how it intersects with other disciplines. Through this series, we will delve into the nuances of AI, its implications for privacy, ethical considerations, and the legal aspects that surround its use. Stay tuned for thoughtful insights, engaging discussions, and expert analysis!
Do you want to know more about the AI Act? Then contact us; we are ready to advise you.