Impact of the regulation in three questions

22/04/'21 - On Wednesday, April 21, the European Commission (EC) published the long-awaited proposal for the regulation of Artificial Intelligence (AI). The regulation can have a major impact on providers and users of high-risk AI systems. In fact, some AI applications are considered so dangerous that they will be prohibited.  

To whom is the law addressed?

In the proposal, the Commission focuses on developers (providers), distributors and users of AI systems. AI systems are software systems that generate output based on human-set goals: for example, predictions, classifications and analyses, but also adjustments to a (virtual) environment or media content. Machine learning, logic-based and expert systems, and statistical methods all fall under this definition of AI. This means that the scope of the regulation is very broad.

What types of AI systems does the proposal describe?  

The proposal distinguishes three types of AI systems, each subject to different rules: prohibited, high-risk, and other AI systems. AI systems that manipulate individuals subliminally, beyond their consciousness, in a harmful way are prohibited. AI systems that exploit vulnerabilities of persons, that are used for social scoring (e.g. the Chinese social credit system), or that perform biometric identification in public spaces are also not allowed.

Most of the proposal focuses on high-risk AI systems. These are AI systems in products that fall under specific EU legislation, such as aviation equipment, toys, and medical devices. This category also includes systems used for assessment and selection in education or by employers, or that determine access to social security or a loan. Various applications within public administration have also been designated as high-risk, including in the fields of policing and emergency services, migration and border surveillance, and the justice system. An important exception is made for applications for public safety. All of these high-risk systems are subject to a number of new requirements.

All other uses of AI fall under "other" systems. Users (the person or organisation responsible for the deployment of an AI system) must inform people when they interact with an AI system. In addition, users must disclose when emotion detection is being used. Users who generate media manipulated to resemble real people, such as deepfakes, are also required to disclose this.

What are the requirements for high-risk systems? 

The most prominent requirement for high-risk systems is the obligation to carry out a conformity assessment before the product can be placed on the market. AI systems that are used as a safety component of public infrastructure, or for biometric identification, must be assessed by a third party. For other systems, a conformity assessment carried out by the provider or the EU importer suffices.

A conformity assessment covers risk management, the quality of the data used for training, documentation and logging of key design decisions, transparency and information to users, appropriate human oversight, and the robustness, accuracy and security of the system. High-risk AI systems must also be registered in a central European database. Distributors of high-risk AI systems are responsible for ensuring that the system complies with the requirements of the EU regulation.

Users are required to monitor the use of the AI system and to keep track of, for example, which data is used to make predictions. They must also use systems in accordance with the provider's instructions and intended purpose. If a user deviates from this, the regulation stipulates that the user must be regarded as a provider, with all the requirements that entails.

Next steps?

The proposal will now follow the ordinary legislative procedure of the European Union, a process that will take several years. However, now is the time to make strategic decisions to ensure that your organisation is future-proof. Do you have questions about how this proposal affects your organisation? Please contact Joas van Ham, vanham@considerati.com.