12/06/2023 - Welcome back to Considerati's comprehensive blog series on the influence of Artificial Intelligence. We at Considerati are committed to demystifying the intersection between AI and areas such as liability, privacy, and ethics, now that the EU AI Act is moving into the trilogue phase.

This blog series was launched ahead of the conference Artificial intelligence, impact & update 2023, which Considerati is organising in collaboration with Outvie on 16 June. On this day, Prof Mr Dr Bart Schermer, founder of Considerati, will speak on the status and development of AI regulation. In addition, Responsible Tech consultants Marianne Schoenmakers and Jord Goudsmit will host a workshop on how to measure and monitor Responsible AI Governance within your organisation.

If you would like to attend, you can order your tickets and enter our code 70940_Considerati under the 'Voucher' section at checkout. By using this code, you will automatically be entered into a promotion with a chance to win one of three free copies of the book 'De democratie crasht' by Kees Verhoeven.

Our previous exploration took us through the topic of "Artificial Intelligence: a human rights violation?", shedding light on an often less-discussed implication of Artificial Intelligence ("AI"): human rights. This time, we dive into another emerging hot topic in AI: who is liable when an AI system fails to perform?

Introduction 

As we previously discussed, when AI enters the conversation, the usual suspects are its impact on privacy, the job market, discrimination, and general Skynet-phobia. However, as the final step of the AI Act saga approaches, another important AI-related topic is gaining momentum: liability rules for AI systems. The subject of AI liability has long occupied law school classrooms, but with the recent European Commission ("EC") Proposal for the so-called AI Liability Directive ("AILD"), as well as the proposed amendments to the existing Product Liability Directive ("PLD") that would treat AI systems as products, this somewhat abstract issue may soon have very real applications and consequences.

Why is the EU liability regime changing? 

In most EU Member States, the general standard is fault-based liability. Under this regime, claimants need to prove that the defendant was at fault, that they suffered harm, and that there is a causal link between the harmful activity and the damage. Other cases are governed by a strict liability regime, under which the defendant need not be at fault; claimants are only required to prove the defect, or the risks taken by the defendant, and the resulting damage.

The currently applicable PLD establishes a strict liability regime under which producers are liable for defects in their products regardless of whether the defect is their fault. However, applying this regime to digital or autonomous systems is complicated by, among other things, the uncertainty regarding the legal nature of digital goods, i.e., whether they are products or services. This has caused the EC to re-evaluate the PLD and how it deals with emerging technologies. The proposed amendments to the PLD therefore expand the notions of damage, product, defect, and liable party. Among these changes are the inclusion of intangible software and digital manufacturing files as products, and of "loss or corruption of data that is not used exclusively for professional purposes" as a compensable damage category.

The AILD, on the other hand, is intended to set uniform rules for the civil liability of developers and users of AI. The current EU liability regime does not sufficiently address damage caused by AI-enabled products and services. When not protected by the PLD, claimants must resort to Member States' national civil liability rules to seek compensation for damage caused by AI systems. Given how often the complexity or opacity of AI systems makes it impossible to prove fault on the part of a developer, it is evident that AI needs a separate liability regime. The one introduced in the AILD is certainly a step forward, but does not appear to completely fill the gaps.

As more thoroughly discussed in a previous Considerati blog, the AILD is fault-based, with a novelty being the right of claimants to access relevant evidence. While helpful, the opacity and complexity of AI systems, as well as the proprietary nature of some of them, would still make it difficult for claimants to prove the fault of an AI system's developer or user. Moreover, as the AILD is intended to complement the AI Act, the directive introduces a rebuttable presumption of a causal link between fault and the AI system's output. This means that Member States' national courts may presume that the non-compliance of a provider, developer, or user of an AI system with the AI Act caused the output of the AI system (or the lack thereof), if the claimant can prove that non-compliance, can show that the output (or its absence) gave rise to the damage, and it is reasonably likely that the fault influenced the output. Again, while helpful, this presumption applies as such only to high-risk AI systems; for non-high-risk AI, it would apply only where a national court considers it "excessively difficult" for the claimant to prove the causal link.
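Purely as a reading aid, the sketch below models in Python how the conditions of this presumption stack. The function name, the parameter names, and the reduction of nuanced legal assessments (such as "excessively difficult") to booleans are our own simplifications, not a statement of the law.

# Illustrative sketch only: all names and the boolean simplifications
# are hypothetical, and the presumption remains rebuttable by the
# defendant in any event.

def causality_presumed(
    fault_proven: bool,                    # claimant proved non-compliance (fault)
    fault_likely_influenced_output: bool,  # fault plausibly shaped the output
    output_caused_damage: bool,            # output (or its absence) led to the damage
    high_risk_system: bool,                # high-risk AI system under the AI Act
    proof_excessively_difficult: bool,     # court's assessment, for non-high-risk AI
) -> bool:
    """Return True if a national court may presume the causal link."""
    if not (fault_proven and fault_likely_influenced_output and output_caused_damage):
        return False
    # The presumption is generally available for high-risk AI systems;
    # for other systems it applies only where proving causality is
    # excessively difficult for the claimant.
    return high_risk_system or proof_excessively_difficult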

Who can really be liable when an AI system fails to perform? 

Considering these changes to the EU liability regime for AI and digital technologies, an important question is who exactly is at fault when an AI system fails. AI-based solutions most often involve multiple parties in the provision of the product or service. Proving the existence of damage might be the "easier" step, but who exactly should compensate for it, and to what extent, is not always clear. Under a strict liability regime, the total damage is compensable, but under a fault-based one, it first needs to be ascertained which party is to blame, and for which damage. The initial thought might point to the producer or developer of the AI system, as they control the system's safety features and provide the interfaces between the system and its operator. In some cases, however, the operator (user) may exert a degree of control over the system itself. This is especially relevant for highly autonomous AI systems, where it is the operator who maintains the system and ensures its proper use. Producers may create the AI system, but it is the owner and user of the system who influence how it is ultimately used.

In the PLD proposal, the term "producer" is replaced with "manufacturer" so as to include providers of software, providers of digital services, and online marketplaces as possible liable parties. The AILD proposal, by contrast, designates providers and users of an AI system as potentially liable parties, relying on their definitions in the AI Act. Despite these solutions, a concrete liable party cannot be identified for every AI system. Open-source systems, for example, are excluded from the scope of the PLD, and the AI Act, and therefore the AILD, excludes them as well. If such systems cause damage, they will not be covered by the current or proposed liability regime.

Lastly, the latest developments surrounding the AI Act saw the European Parliament add requirements for foundation models, such as the large language models behind generative AI applications like ChatGPT. The Terms of Use of OpenAI (ChatGPT's developer) stipulate that the user of ChatGPT will "defend, indemnify, and hold harmless us (…) from and against any claims, losses, and expenses (…) arising from or relating to your use of the Services." In other words, a user who causes damage through their use of ChatGPT agrees to shoulder the resulting claims and costs, rather than OpenAI alone.

Conclusion 

Assigning liability for AI systems is not a clear-cut matter. With differing liability regimes in the proposed PLD amendments and the AILD, the real-life consequences of these two acts for AI liability remain to be seen. This blog explained the current EU liability regime and how AI systems fit within it. With the realm of AI regulation in constant flux, changes to the proposed AI liability regime are perhaps to be expected as well.

As we wrap up our analysis of AI and liability in this edition of Considerati's blog series on Artificial Intelligence, we trust that our exploration has provided valuable insights into the intricate intersections of AI with liability, privacy, and ethics. These relationships are pivotal in our digital ecosystem, and understanding them is crucial.

In our next post, which will be published on the 19th of June, we will be venturing into the realm of privacy and AI. See you then!  

Kristijan Pejikj, Paralegal

Do you have any questions?

Do you have any questions about AI liability? Contact Considerati; we offer specialised advice and tailor-made support.