21/12/2023 - After marathon negotiations, a political agreement on the AI Act has been reached. However, with technical committee meetings scheduled for December and January, we will have to wait a little longer before the final text is published, following final adoption by the Member States and the European Parliament. The co-legislators from the European Parliament (IMCO and LIBE) have already presented a list of topics included in the Act. In this AI Act blog series, we will focus on several highlights, starting with the Fundamental Rights Impact Assessment (FRIA).

What is a FRIA?

As part of the provisional agreement, a FRIA is one of the requirements for high-risk AI systems, as can be read in the press release by the European Parliament. For AI systems considered to pose a significant potential for harm, an assessment of their impact on fundamental rights must be carried out before the AI system is placed on the market or put into service.

A Fundamental Rights Impact Assessment is, in essence, a structured way of estimating the risks an AI system poses to fundamental rights, such as the right to due process of law, freedom of expression and the right to privacy. A Q&A published by the European Commission describes what a FRIA shall focus on:

  • the deployer’s processes in which the high-risk AI system will be used; 
  • the period of time and frequency in which the high-risk AI system is intended to be used; 
  • the categories of natural persons and groups likely to be affected by its use in the specific context; 
  • the specific risks of harm likely to impact the affected categories of persons or groups of persons; 
  • a description of the implementation of human oversight measures; and 
  • the measures to be taken if those risks materialize. 

Who is required to perform the FRIA under the AI Act? 

The FRIA is especially relevant to the public sector and high-risk AI. First of all, deployers of AI systems that are governed by public law or that provide public services fall within the scope of this requirement. In addition, operators providing so-called high-risk AI systems are required to perform a FRIA. The national AI supervisory authority must be notified of the results of the FRIA.

How can you comply with this requirement? 

There are several model assessments designed to meet this requirement. One of those is the Fundamental Rights Algorithm Impact Assessment (FRAIA). This model is the English version of the Impact Assessment Mensenrechten en Algoritmen (IAMA), developed in the Netherlands by Utrecht University. The assessment helps to systematically address the mandatory elements of the (provisional) AI Act. For example, a FRAIA helps to assess the reasonably foreseeable impact on fundamental rights and the risks of harm likely to affect marginalized persons or vulnerable groups. One of the main strengths of the FRAIA is that it requires the risks to be assessed in a multidisciplinary setting, which results in a concise and thorough assessment.

FRIA and DPIA 

Since the GDPR, the Data Protection Impact Assessment (DPIA) has become a well-known requirement for assessing the impact of a data processing activity. For many AI systems, a DPIA will be mandatory, given the processing of personal data that will take place in many cases. If both impact assessments are required, it is highly practical to align the two, given their overlap. This will save you time and enhance the quality of the outcomes of both assessments.

Jord Goudsmit, Consultant Responsible AI

Questions?

If you would like to know more about the AI Act or about performing a FRAIA, please contact Judith van Schie or Jord Goudsmit. In addition to a thorough understanding of tech-related legislation, Considerati offers training on how to perform a fundamental rights impact assessment, enabling responsible use of AI with an understanding of business needs.