13/10/2022 On September 28th, 2022, the European Commission published a proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (“AI”). The so-called AI Liability Directive (“the Directive”) is intended to establish civil liability rules that enable persons who have suffered damage caused by AI systems to seek compensation. These rules are intended to complement the previously proposed AI Act by giving persons a way to seek redress when they are negatively impacted by an AI system, especially when that impact is a consequence of non-compliance with the obligations of the AI Act. The following blog will elaborate upon the content of the draft Directive and the potential implications it might have for both businesses and individuals.
The Directive establishes rules intended to facilitate non-contractual fault-based civil liability claims for compensation for damage caused by AI-enabled products and services. The Directive would cover products such as drones, robots and smart-home devices. It casts a wide net of applicability: the potentially liable actors are manufacturers, developers, providers or users of AI systems, while potential claimants can be both individuals and companies. However, similarly to the AI Act, while the Directive covers any type of AI system, it is predominantly oriented towards high-risk AI. And, as under the general civil liability frameworks of the EU Member States (“MS”), the burden of proof falls on the claimant. For a claim to fall within the scope of the Directive, it needs to concern damage caused by an output of an AI system, or by the failure of such a system to produce an output where one should have been produced. The claim can concern any type of damage covered by national law, including discrimination or a breach of fundamental rights such as privacy.
The Directive does not reverse the burden of proof, as doing so might “hamper innovation of AI-enabled products and services”. Instead, it introduces two measures that should alleviate the victim’s burden of proof: 1) a presumption of causality, and 2) the right to access evidence.
If the claimant can demonstrate that a provider, developer or user of an AI system was at fault for the harm suffered by the victim, because they failed to comply with obligations stemming from the AI Act, and it is reasonably likely that this fault influenced the output, the MS national courts shall presume a causal link between the non-compliance and the output (or the failure to produce one) that caused the damage. While this presumption would certainly ease the claimant’s burden of proof, to meet the conditions of the measure the claimant would still have to demonstrate that a provider or user of a high-risk AI system has failed to comply with specific requirements of the AI Act. Moreover, when it comes to non-high-risk AI, the presumption would only apply where a national court considers it “excessively difficult” for the claimant to prove the causal link. Lastly, the defendant has the right to rebut the presumption, the procedural aspects of which are left to the Member States.
Claimants will have the right to request evidence from providers or users of high-risk AI about a specific high-risk AI system that is alleged to have caused damage. The MS national courts would have the power to order such providers or users to disclose the requested evidence if they do not respond to or refuse the claimant’s request. This disclosure, however, is subject to rules protecting trade secrets and does not apply to non-high-risk AI. As an additional measure, the national courts can presume the defendant’s non-compliance with their “duty of care” if they do not comply with the disclosure order. The Directive describes the duty of care as a required standard of conduct for AI system providers or users “in order to avoid damage to legal interests recognised at national or Union law level”. As with the presumption of causality, defendants have the right to rebut the presumption of non-compliance.
These new AI civil liability rules are welcomed as a much-needed response to technological advances that risk outpacing the ability of current MS civil liability rules to handle claims for damage caused by AI-enabled products and services. Additionally, the AI Liability Directive will certainly enhance the enforcement of the AI Act, acting as a compensatory counterpart to its preventive scope, especially since the AI Act itself has previously been criticised for lacking a redress mechanism for persons negatively impacted by an AI system. However, the Directive in its current form seems to fall short of providing victims with a proper redress mechanism that does not overly burden them with the difficult task of providing sufficient evidence of fault. Conversely, its far-reaching application to the types of damage for which claims can be sought might expose AI providers and users to mass litigation and potentially frivolous suits, indeed hampering innovation.
The obligation for the claimant to prove the fault of the provider or user of an AI system might prove excessively difficult when it comes to so-called “black box” AI systems. The inputs and internal operations of such systems are not visible or accessible to an outsider, and this opacity and complexity would present a very onerous obstacle to claimants. Moreover, AI systems, especially high-risk ones, are likely to be proprietary or protected by trade secrets; therefore, even the empowerment of the MS national courts to order disclosure could potentially fall short. Furthermore, the Directive covers only fault-based claims, meaning that damage caused by an AI system without any identifiable fault of a provider or user would potentially remain unaddressed and uncompensated.
On the other hand, while the providers and users of AI systems have the right to rebut the presumptions introduced by the Directive, it is questionable how this would work in practice. They would have to show either that the harm had a different cause, or that the breach of the duty of care did not cause the harm in question. Further, the open interpretation of what type of damage can be compensated could generate unwarranted litigation and protection costs for providers, developers, and users of AI.
As a fairly new proposal, the Directive has not yet been widely analysed or commented upon. A more substantial critique came from the European Consumer Organisation (BEUC), which stated that when dealing with “black box” AI, consumers could face a “blind spot” in applying the new rules, adding that “consumers will be better protected if a lawnmower shreds their shoes in the garden than if they are unfairly discriminated against through a credit scoring system.” The Irish Council for Civil Liberties (ICCL) reiterated the criticism directed towards the absence of a reversal of the burden of proof, illustrating the difficulty claimants will face in proving the fault of providers and users of AI.
In any case, the proposal is still far from having practical effect. With discussions on the not-yet-adopted AI Act still ongoing, this new AI civil liability framework is expected to wait its turn. Considerati will continue to monitor these developments. If you have any questions about the implications of the AI Act or the AI Liability Directive for your organisation, do not hesitate to contact us.