31/05/2023 - Welcome back to Considerati's comprehensive blog series on the influence of Artificial Intelligence. At Considerati we are committed to demystifying the intersection between AI and other areas such as liability, privacy, and ethics in light of the EU AI Act moving to the trilogue phase.   

This blog series was launched ahead of the Conference: Artificial intelligence, impact & update that Considerati is organising in collaboration with Outvie on 16 June. On this day, Prof Mr Dr Bart Schermer, founder of Considerati, will speak on the status and developments of the AI regulation. In addition, Responsible Tech consultants Marianne Schoenmakers and Jord Goudsmit will organise a workshop on how to measure and monitor Responsible AI Governance within your organisation.  

If you would like to attend, you can order your tickets and enter our code 70940_Considerati in the ‘Voucher’ section at checkout. By using that code, you will automatically enter a promotion with a chance to win one of three free copies of the book ‘De democratie crasht’ by Kees Verhoeven.   


When one starts talking about Artificial Intelligence (“AI”), the discussion often leads to topics such as privacy, AI taking over jobs, and discrimination. These topics are certainly important: the right to privacy and the prohibition of discrimination are well-known human rights. There are, however, more human rights that have a relationship with AI. In this blog we will focus on human rights that are mentioned less often in discussions about AI.  

Human rights 

There is no single definition of human rights, although the United Nations (“UN”) describes them as follows: “rights inherent to all human beings, regardless of race, sex, nationality, ethnicity, language, religion or any other status.” Human rights are often enshrined in national law, but member states of the European Union are also bound by the Charter of Fundamental Rights of the European Union (“CFEU”) and subscribe to the Universal Declaration of Human Rights. A state that is bound by such human rights instruments takes on obligations and duties under international law. It has the obligation to respect, meaning that the state must refrain from interfering with human rights. It has the obligation to protect individuals and groups against human rights violations. Finally, it has the obligation to fulfil, meaning that the state must take positive action to facilitate the enjoyment of basic human rights.  

How AI may violate human rights 

To illustrate how AI can violate human rights, we will discuss three rights enshrined in the CFEU and the ways AI could possibly violate or interact with them.  

  1. Rights of the child (article 24 CFEU): Children hold a special position in human rights. As they are often not aware of their rights, they require additional protection. That is why AI systems or algorithms that come into contact with children face greater scrutiny. Examples of this can be seen in recent fines imposed on social media companies by several European supervisory authorities.
  2. Right to good administration (article 41 CFEU): An algorithm or AI system used by governmental bodies often has an automatic effect on citizens. This effect can be both positive and negative. A fictional example is an algorithm or AI that uses certain rules to ascertain whether someone is at risk of committing a crime. When a person comes out as high-risk, they would be surveilled more strictly. Beyond the ethical issues raised by these kinds of algorithms, such a system can also violate the person’s right to good administration: the organization using the algorithm may not be able to explain the reasons for the label, as it was decided by the AI. 
  3. Presumption of innocence (article 48 CFEU): The presumption of innocence is relevant to several fields of work, for example the police, the tax authorities, and the municipality. All these bodies have a right and an obligation to find people who misuse the system to their own advantage. To do so, they need to select which people to check, often for a simple reason: there is not enough administrative capacity to handle the number of cases that need checking. An algorithm or AI can be a solution: the AI makes the selection, and a human checks the cases it suggests. However, if the selection is made by an AI, does being selected imply a higher likelihood of guilt? That question is nearly impossible to answer, which means the presumption of innocence might be violated.  


The recent rise in the use of AI introduces many possibilities. However, it is important to keep in mind the negative effects that AI may bring. This blog discussed ways in which AI could violate certain human rights. As mentioned in previous blogs, one way to detect and mitigate such violations is to perform a Fundamental Rights Impact Assessment (“IAMA”).  

As we wrap up our analysis of AI and human rights in this edition of Considerati's focused series on Artificial Intelligence, we trust that our exploration has provided valuable insights into the intricate intersections of AI with liability, privacy, and ethics. The complexities of these relationships are pivotal in our digital ecosystem, and understanding them is crucial. 

In our next post, which will be published on the 12th of June, we will venture into the realm of liability and AI. See you then!  

Do you want to know more?

After reading this blog, do you have questions about AI and human rights? Or are you looking for practical advice to ensure your organization uses AI in a responsible way? Contact Considerati: we offer specialized advice and tailor-made support.