Teisė ISSN 1392-1274 eISSN 2424-6050

2024, Vol. 130, pp. 129–143 DOI: https://doi.org/10.15388/Teise.2024.130.11

Shaping Civil Liability in the Digital Age: AILD and the Revisited PLD

Deimantė Rimkutė
Vilnius University, Faculty of Law
Doctoral student at the Department of Private Law
Saulėtekio al. 9, Building I, LT-10222 Vilnius, Lithuania
Phone: (+370 5) 236 6170
E-mail: deimante.rimkute@stud.tf.vu.lt

Shaping Civil Liability in the Digital Age: AILD and the Revisited PLD

Deimantė Rimkutė
(Vilnius University (Lithuania))

Summary. This article assesses two proposals for directives put forward by the European Commission: the Artificial Intelligence Liability Directive and the revisited Product Liability Directive. The first part of the article introduces and compares the two proposals. The subsequent part assesses the key elements of the Directives, including their scope of application; the proposed changes to procedural law (disclosure of evidence, presumptions); and the rules of the revisited Product Liability Directive on establishing product defectiveness. The article concludes with suggested improvements to both proposals.
Keywords: non-contractual liability, strict liability, product liability, artificial intelligence.

Shaping Civil Liability in the Digital Age: The Artificial Intelligence Liability Directive and the Revisited Product Liability Directive

Deimantė Rimkutė
(Vilnius University (Lithuania))

Summary. This article assesses two proposals for directives put forward by the European Commission: the Artificial Intelligence Liability Directive and the revisited Product Liability Directive. The first part of the article introduces and compares the two proposals. The next part assesses the key elements of the Directives, including their scope of application; the proposed changes to procedural law (disclosure of evidence, presumptions); and the rules of the revisited Product Liability Directive on establishing product defectiveness. The article concludes that, although the European Commission's proposals mark progress in protecting victims of damage caused by artificial intelligence, they also contain shortcomings. The improvements proposed in several key areas, including the establishment of product defectiveness, presumption mechanisms, disclosure of evidence, and scope of application, are intended to remedy the shortcomings identified in the Directives and to strengthen the protection of persons harmed by artificial intelligence. With these improvements incorporated, the Directives could more effectively achieve their objective of ensuring adequate protection for victims of AI-related harm.
Keywords: tort liability, strict liability, liability for defective products, artificial intelligence.

_________

Received: 14/03/2024. Accepted: 19/03/2024
Copyright © 2024 Deimantė Rimkutė. Published by
Vilnius University Press
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction

On 28 September 2022, the European Commission introduced two proposals for directives aimed at regulating civil liability for damage caused by artificial intelligence (AI): the Artificial Intelligence Liability Directive (the AILD) and the revisited Product Liability Directive (the revisited PLD) (together, the Directives), the latter of which offers a new version of the Product Liability Directive of 1985 (85/374/EEC, the PLD). These Directives, in conjunction with the AI Act, the Commission's overarching regulatory proposal and legal framework for AI, mark the culmination of the European Commission's comprehensive set of proposals for regulating AI liability.

Despite the public release of these Directives on 28 September 2022, there has been limited in-depth analysis of both legal acts. Two notable papers are The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future, authored by Philipp Hacker and published on 7 December 2022, and The European Commission's Approach to Extra-Contractual Liability and AI – A First Analysis and Evaluation of the Two Proposals, authored by Orian Dheu, Jan De Bruyne, and Charlotte Ducuing, published on 18 October 2022.

Both articles acknowledge that the Directives leave room for improvement, and further research is therefore needed to refine and enhance the proposals. This paper aims to fill that gap by conducting a thorough analysis and evaluation of both Directives. The initial chapter introduces the Directives, followed by a detailed examination of their critical components, such as their scope of application, the proposed procedural mechanisms (evidence disclosure, presumptions), and the challenges of establishing product defectiveness under the revisited PLD. The article concludes with recommendations aimed at improving both proposals.

1. The Framework: Similarities and Differences

As previously mentioned, the European Commission is introducing two distinct sets of measures aimed at regulating activities associated with artificial intelligence that could potentially endanger others: safety requirements, as outlined in the AI Act, and non-contractual liability, as delineated in the revisited PLD and the AILD. Although this paper focuses primarily on the latter set of measures, it is important to emphasize that safety requirements and non-contractual liability rules work in conjunction with each other (Zech, 2021). While the proposal for the AI Act does not establish specific individual rights for victims, it does introduce safety rules governing the development and deployment of AI (Wendehorst, 2022, p. 7). Where these safety rules are violated and damage results, the mechanism of redress, which is the central subject of this paper, is activated through liability rules (Nawaz, 2022).

While both Directives share a commonality in introducing non-contractual liability rules, they exhibit conceptual differences (Hacker, 2022, p. 3). The AILD primarily seeks to standardize procedural aspects, such as evidence disclosure and the burden of proof, within the realm of fault-based civil non-contractual liability. The revisited PLD aims to rejuvenate traditional product liability principles on a broader scale, encompassing liability for all digital products (Hacker, 2022, p. 3). It is noteworthy that these Directives can be applied individually or in conjunction (European Commission, 2022a). However, the choice of the civil liability framework hinges on the nature of the damage to be compensated, the parties involved, and the substantive and procedural requisites of liability (see Table 1).

Table 1. Overview of the main differences between the revisited PLD and the AILD

| The revisited PLD | The AILD |
| --- | --- |
| Claim based on EU law | Claim based on Member State law |
| Material and procedural aspects | Procedural aspects |
| Applies to physical products and software, including artificial intelligence systems | Applicable only to artificial intelligence systems |
| Strict liability | Liability based on fault |
| Claims against manufacturers and other actors in the supply chain | Claims against manufacturers, professional users, and consumers |
| Compensable damages: damage to property, death or personal injury, and loss of data | Compensable damages: potentially also damages for breach of fundamental rights (non-pecuniary) and direct losses |
| Full harmonisation | Minimum harmonisation |

Source: Hacker, 2022, p. 8.

As illustrated in Table 1, the scope of the Directives differs. The revisited PLD encompasses physical products and software, including AI, and applies strict liability to manufacturers and, under specific conditions, to other entities within the supply chain. The AILD, on the other hand, is confined to AI and concentrates on harmonizing procedural aspects of fault-based liability under Member State law. Furthermore, the AILD extends its coverage beyond actions directed solely at manufacturers, also encompassing professional and non-professional users (i.e., consumers) (European Commission, 2022a). Another distinguishing factor lies in the category of damage eligible for compensation. The revisited PLD confines compensable damage to death, personal injury, damage to property, and loss of data. In contrast, the AILD maintains a broader perspective, allowing compensation for breaches of fundamental rights (non-pecuniary damages) or direct losses (Hacker, 2022, p. 8).

Additionally, the Directives differ in their degree of harmonization (Dheu, De Bruyne, and Ducuing, 2022, p. 7). The AILD permits Member States to "adopt or maintain national rules which are more favourable to claimants seeking to establish a non-contractual civil law claim for damage caused by an AI system, provided that such rules are in conformity with Union law" (Article 1(4) of the AILD). Conversely, the revisited PLD prohibits Member States from retaining or introducing provisions in their national legislation that deviate from those it lays down. Consequently, the AILD follows a framework of minimum harmonization, whereas the revisited PLD pursues full harmonization (Dheu, De Bruyne, and Ducuing, 2022, p. 7).

2. Assessment of the Proposals for Directives

Building on the previous section, it is evident that while the Directives exhibit variations, they also share certain commonalities that merit a collective analysis. The further examination will therefore encompass the following subjects: the scope of the Directives; their procedural mechanisms (the disclosure of evidence and presumptions); and, as a separate topic, the establishment of product defectiveness under the revisited PLD.

2.1. Scope of application

2.1.1. Scope of the AILD

The European Commission's proposal for the AILD contains three key limitations on its scope. Firstly, the AILD applies exclusively to non-contractual fault-based claims (Article 1 of the AILD). Consequently, other forms of non-contractual liability, such as strict liability without fault, are not covered by the AILD and fall outside its scope (Latham & Watkins, 2022). Secondly, while the presumption mechanism designed to facilitate proof applies regardless of the specific AI system responsible for the damage, the rules concerning evidence disclosure are constrained by the technical characteristics of the AI: they apply only if the damage was caused by a high-risk AI system (as outlined in Article 1(a) to (b) of the AILD). Thirdly, the AILD does not encompass damages actions arising from harm caused by artificial intelligence systems subject to human supervision (i.e., assistive artificial intelligence) (as indicated in recital 15 of the AILD).

This article takes a critical view of the decision within the AILD to cover fault-based liability rules while excluding strict liability. Admittedly, in some cases strict liability is established not on the basis of a violation of the duty of care or of safety standards, but on the mere occurrence of damage itself (Mikelėnas, 1995, p. 12). In those cases, a plaintiff pursuing a strict liability claim would not encounter the challenge of proving wrongful conduct, which arises in the context of fault-based liability due to AI's complexity, autonomy, and opacity, often referred to as the black box phenomenon (De Bruyne et al., 2022).

However, in certain instances of strict liability it is still necessary to establish the unlawfulness of the tortfeasor's actions. In such cases, the unavailability of the mechanisms proposed in the AILD unreasonably restricts the rights of individuals who have suffered damage: it places an undue burden on the injured parties and may deny them fair compensation for their losses. Hence, by not incorporating strict liability, the AILD unnecessarily limits the rights of individuals who have suffered harm from AI-related incidents.

Furthermore, it is contended that the European Commission's decision to confine the evidence disclosure requirement in the AILD to high-risk AI, as defined by the AI Act, would complicate the achievement of the AILD's goals. As Professor Hacker highlights, certain lower-risk AI systems can be just as hazardous as high-risk ones (Hacker, 2022, p. 12). High-risk AI does not encompass systems such as emotion recognition, insurance pricing models (except for life and health insurance), and self-driving transport. Yet self-driving transport poses risks to the safety and well-being of passengers and other road users, while emotion recognition systems infringe upon personal privacy; and, from an insurance perspective, any risk that leaves an individual without coverage is significant (Hacker, 2022, p. 12). Therefore, denying claimants who are victims of low-risk AI the right to disclosure would place an undue burden on the evidentiary process for these victims.

Concerns should also be raised about the exclusion clause in the AILD, which states that the AILD should not apply when the final decision is made by a human on the basis of information or a recommendation generated by artificial intelligence (recital 15 of the AILD). This exemption could serve as a loophole for avoiding the application of the Directive, similar to what occurred with a comparable provision in the General Data Protection Regulation (GDPR), where individuals were tasked with merely rubber-stamping AI-generated outcomes (Wachter, Mittelstadt, and Floridi, 2017, p. 92). If the rubber-stamping of results generated by assistive AI were insincere, the exemption for harm caused by assistive AI could thus encourage artificial circumvention of the Directive.

2.1.2. Scope of the revisited PLD

The revisited PLD differs from the current version of the PLD by expanding the circle of those who can be held liable for damages: liability will apply not only to manufacturers but also to other economic operators1. The European Commission also aims to clarify the concept of a product by broadening the scope of the Directive from tangible products to digital manufacturing files and software (Article 4(1) of the revisited PLD). Additionally, the proposal extends liability to components of products, defined as any tangible or intangible object or service2 that is incorporated into, or connected to, a product by the manufacturer or under the manufacturer's control (Article 4(3) of the revisited PLD).

The European Commission's proposal to broaden the definition of a product has sparked a range of opinions. On the one hand, the clarification of the product definition is seen as a positive development. The current version of the PLD refers to movable products, which could limit the Directive's scope to tangible physical objects (Schellekens, 2022). Such a limitation does not align with the modern landscape, in which products are increasingly digitalized and software has become a commodity with various applications (Van Gool, 2022). Clarifying the concept of a product is therefore seen as a necessary step to ensure consumer protection.

On the other hand, Dheu, De Bruyne, and Ducuing express concerns that these proposals to clarify and expand the concept of products might alter the fundamental nature of the Directive. They argue that the PLD would no longer be exclusively focused on products and manufacturing as it has been traditionally. Such changes would necessitate a thorough analysis of their long-term consequences, which is currently lacking (Dheu, De Bruyne, and Ducuing, 2022, p. 35).

Overall, the debate surrounding the European Commission's proposal underscores the need for careful consideration of the potential impacts and implications of expanding the definition of products under the PLD. It highlights the importance of striking a balance between adapting regulations to keep pace with technological advancements and ensuring that consumer protection remains a central focus.

2.2. Disclosure of evidence in the proposed Directives

The disclosure of evidence is a procedural measure introduced in the proposed Directives to alleviate the burden of proof on claimants. While the European Commission suggests implementing this measure in both Directives, the specific design of the disclosure mechanism differs between the revisited PLD and the AILD. The following subsections examine the mechanism in each Directive.

2.2.1. Disclosure of evidence in the AILD

The AILD grants victims of damage caused by high-risk AI the right to request evidence through the court independently, without bringing a separate claim for damages. However, this right is subject to a pre-litigation request procedure:

1. The claimant must first address the evidence request to the potential defendant, which could be the provider of the AI system, the entity subject to the provider's obligations, or the user of the AI system (as outlined in Article 3(1) of the AILD).

2. If the potential defendant rejects the request, the claimant has the right to ask the court to order the disclosure of evidence. The AILD sets certain requirements for this request, including that it be supported by facts and evidence sufficient to substantiate the plausibility of the intended damages claim (as per Article 3(1) of the AILD).

3. If the court grants the claimant's request, it is obliged to take measures to preserve the evidence supporting the damages claim, to ensure that disclosure is proportionate and limited to the minimum necessary information, and to safeguard the confidentiality of the evidence (as outlined in Articles 3(2) to (4) of the AILD).

4. If the defendant fails to disclose or preserve the evidence as ordered by the court, it is presumed to have violated its duty of care. The defendant can, however, rebut this presumption (as per Article 3(5) of the AILD).

The evidence disclosure system proposed in the AILD is generally supported in this article. However, even with access to the necessary data from the defendant, claimants may still face significant challenges in establishing the conditions for civil liability. Although the evidence becomes available to claimants, it may remain unintelligible to individuals without specialized training. Consequently, claimants would need to invest in translating the information they receive into language understandable to non-specialists. Moreover, hiring experts to analyse the data within the AI system would entail substantial costs, particularly since defendants may seek to overwhelm claimants with vast quantities of data. This would increase claimants' legal expenses without guaranteeing a positive outcome in their cases.

2.2.2. Disclosure of evidence in the revisited PLD

Under Article 8 of the proposal for the revisited PLD, claimants may also avail themselves of an evidentiary disclosure tool. However, the revisited PLD is more limited in terms of disclosure of evidence than the AILD. Firstly, the revisited PLD provides no pre-litigation procedure for the discovery of evidence. Secondly, claimants cannot make an independent procedural claim for disclosure of evidence without a claim for damages; that is, claimants acquire the right to request disclosure only once they have brought an action for damages and have presented facts and evidence sufficient to support the plausibility of the claim for damages (Article 8(1) of the revisited PLD).

However, the fact that claimants cannot submit a request for evidence independently of a claim for damages under the revisited PLD could disproportionately complicate the proof of the conditions for civil liability. Firstly, claimants would be forced to pay a higher stamp duty to obtain information, as the fees for bringing the main proceedings would exceed the fees for a judicial disclosure procedure, which could discourage the initiation of damages proceedings altogether. Secondly, it would be more difficult for claimants to decide whether to initiate proceedings, as information on the circumstances of the damage would be less accessible in the absence of a pre-litigation discovery procedure; this would lead to a higher number of unsubstantiated damages claims, burdening the judicial system. Thirdly, it would be more difficult for a potential claimant to reach a negotiated settlement without the necessary evidence.

The revisited PLD also contrasts with the AILD in terms of the standards imposed on defendants for the preservation and disclosure of evidence. The AILD explicitly places higher obligations on defendants, including: (1) an obligation to keep records relating to high-risk AI (Article 3(1) of the AILD); and (2) a presumption of negligence where defendants fail to comply with the obligation not only to disclose, but also to keep, the required records (Article 3(5) of the AILD).

This means that claimants under the revisited PLD would receive less detailed information about the circumstances of the damage than those relying on the AILD, since revisited PLD defendants have no duty to compile data, nor are they presumed negligent if they fail to comply with such a duty. Paradoxically, revisited PLD defendants would not be considered negligent for failing to keep the necessary data even where the damage was caused by a high-risk AI, whereas the same defendants would be considered negligent under the AILD.

2.3. The system of presumptions in the proposed Directives

One of the mechanisms aimed at alleviating the burden of proof within the proposed Directives is the application of presumptions, which involves treating a fact as true until proven otherwise (Rescher, 2006, p. 34). This section will examine the presumption system and highlight its challenging aspects within the Directives.

2.3.1. The presumption system in the proposal for an AILD

The AILD introduces presumptions related to fault (negligence) and causation. As regards fault, the defendant is presumed to be at fault (negligent) if they have failed to comply with the obligation to disclose or preserve information, as detailed in Article 3 of the AILD. As regards causation, the presumption applies under two sets of conditions: (1) the necessary requirements, which apply to all defendants, and (2) the special requirements, which depend on the technological features of the AI system and/or the legal status of the defendant (Hacker, 2022, p. 34).

The necessary conditions for triggering the presumption of causation are: (1) fault, which includes negligence and/or a breach of the law; (2) a reasonable likelihood that this fault influenced the output produced by the AI system or its failure to produce an output; and (3) the establishment that the damage resulted from the AI system generating, or failing to generate, an output, as articulated in Article 4(1) of the AILD. The special conditions are supplementary requirements that depend on the technological features of the AI system and the legal status of the defendant:

1. In cases involving providers of high-risk AI, the presumption of causation comes into effect when there is an additional violation of particular AI Act provisions. These provisions concern areas such as data management and training (as specified in Article 10(2) to (4) of the AI Act), transparency (detailed in Article 13 of the AI Act), human oversight (covered by Article 14 of the AI Act), accuracy, robustness, and cybersecurity (as articulated in Article 15 of the AI Act), and the obligation to take corrective measures, including recall and withdrawal (stipulated in Article 21 of the AI Act). In addition, the presumption applies only where the defendant cannot demonstrate that sufficient evidence and expertise to establish causation are reasonably accessible to the claimant.

2. Users of high-risk AI face the presumption of causation if they commit additional breaches of AI Act provisions, including the responsibilities to use and monitor the AI system in accordance with the instructions for use (outlined in Article 29(1) and (4) of the AI Act), to ensure supervision by individuals with the necessary competence, training, and authorization (specified in Article 29(1a) of the AI Act), and to supply input data relevant to the intended purpose of the AI system (as per Article 29(3) of the AI Act). Similarly, this presumption applies only where the defendant cannot demonstrate that sufficient evidence and expertise to establish causation are reasonably accessible to the claimant.

3. The presumption of causation extends to cases of low-risk AI-induced damage, regardless of the defendant, when the claimant demonstrates that meeting the burden of proof is excessively challenging.

4. Against non-professional users of AI systems, the presumption of causation applies only if (1) the user has materially interfered with the conditions of operation of the AI system; or (2) the user was required and able to determine the conditions of operation and failed to do so (e.g., by using the system under unauthorised conditions) (Article 4(6) of the AILD; recital 29).

In principle, introducing presumptions is a positive legislative approach for enhancing the effectiveness of civil liability objectives. However, the presumption model for causation outlined in the proposal for an AILD exhibits certain limitations that could impede the efficient compensation of victims' damage.

Firstly, under the proposed framework, the rebuttable presumption will apply to claimants who have been harmed by high-risk AI systems and are pursuing claims against users or providers of high-risk AI exclusively in cases involving AI Act violations, excluding other potential legal grounds for liability in EU or national law. Conversely, a plaintiff affected by a low-risk AI system faces no such limitation and can base their claim on a broader range of legal grounds. This raises the question of why a claimant harmed by a high-risk AI system should have a narrower range of legal avenues than one harmed by a low-risk AI system.

Secondly, as pointed out by Professor Hacker, the European Commission's proposed presumption model fails to incorporate any provisions on the steps to be taken after an AI output has been generated in order to minimise the damage (Hacker, 2022, p. 38). This omission could prove significant, especially in situations where the AI Act imposes responsibilities on AI users to maintain these systems (as stipulated in Article 29(1) and (4) of the AI Act).

Thirdly, unlike the revisited PLD, the AILD does not propose a more extensive system of presumptions concerning wrongful conduct. Under the revisited PLD, product defectiveness would be presumed if the plaintiff demonstrates that the product does not comply with the mandatory safety requirements or that the damage resulted from an evident product defect. Aligning the presumption systems of the two Directives would improve consumer recourse: there is no apparent reason why these two criteria, namely non-compliance with mandatory safety requirements and evident failure, should not, at the very least implicitly, constitute a breach of the duty of care in a manner akin to a product defect. This would alleviate the challenge of proving wrongful conduct under the AILD regime (Hacker, 2022, p. 43).

2.3.2. The presumption system in the proposal for the revisited PLD

Under the revisited PLD, a claimant is eligible for compensation if she can demonstrate that the product was defective and that she incurred damage as a result. In other words, the claimant must substantiate three prerequisites of civil liability: product defectiveness, damage, and causation (Article 9(1) of the revisited PLD). To simplify the establishment of liability, the European Commission adds to the Directive a list of presumptions concerning product defectiveness and causation (Article 9 of the revisited PLD).

The revisited PLD outlines two scenarios for presuming product defectiveness and causation, depending on whether the technical or scientific complexity of the evidence would make the burden of proof excessively difficult to meet. Firstly, as a general rule, where the technical or scientific specificity of the evidence would not excessively burden the claimant:

1. Product defectiveness would be presumed if: (1) the defendant has failed to meet the burden of disclosure of the relevant evidence available to it; (2) the plaintiff proves that the product does not comply with the mandatory safety requirements laid down by EU or national legislation aimed at protecting against the risk of harm; or (3) the plaintiff proves that the damage is caused by an evident failure of the product in its normal use or in normal circumstances (Article 9(2) of the revisited PLD).

2. Causation would be presumed where it is established that the product is defective and the damage caused is of a kind typically caused by the defect in question (Article 9(3) of the revisited PLD).

However, if the court determines that the technical or scientific complexity of the case makes it excessively difficult for the claimant to prove the product's defectiveness, the causal link between the defect and the damage, or both conditions of liability, those elements could be presumed, provided that: (1) the product contributed to the damage, and (2) the product was likely defective, or its defect is the likely cause of the damage, or both. In such cases, the defendant would be entitled to challenge the basis for applying this mechanism, i.e., to contest that proof of the product's defects or of the causal link is excessively difficult owing to technical or scientific specificity, as well as the assertion that the product contributed to the damage (Article 9(4) of the revisited PLD).

As with the AILD, the system of presumptions is a suitable approach for alleviating the burden of proof on victims. Nonetheless, the revisited PLD has certain shortcomings. First, as previously mentioned, the AILD, unlike the revisited PLD, explicitly presumes negligence on the part of defendants who fail to comply with the obligation both to disclose and to retain relevant data. The revisited PLD, in contrast, proposes to presume product defectiveness only where there is a breach of the duty to disclose. Paradoxically, under the revisited PLD, defendants would not be presumed negligent for failing to retain the necessary data even if the damage was caused by a high-risk AI, while the same defendants would face that presumption under the AILD.

Second, to trigger the presumption of defects, the claimant must establish a violation of the mandatory safety requirements or the existence of an evident malfunction (Article 9(2) of the revisited PLD). Nevertheless, as previously discussed, substantiating an evident malfunction or a breach of mandatory safety requirements would still pose challenges for consumers, as they would need to engage costly experts to analyse the data, even if the defendants provide the evidence.

2.4. Grounds for civil liability: defective product under the revisited PLD

As already mentioned, the AILD addresses exclusively procedural aspects of fault-based liability, while the revisited PLD not only covers procedural rules but also introduces certain amendments to substantive product liability law. Under the revisited PLD, claimants will need to establish three conditions of civil liability: product defectiveness, causation, and damage. Given that the primary challenge lies in establishing the first of these conditions, product defectiveness as the counterpart of unlawful conduct, the grounds for establishing it are explored below.

Under the revisited PLD, economic operators are held liable for damage caused by a defect in a product. Article 6(1) of the revisited PLD provides a definition of a defective product, according to which "a product shall be deemed to be defective if it does not provide the safety which a person has a right to expect having regard to all the circumstances". Product defects, as classified in doctrine, can take three forms: (1) defects in manufacture3, (2) defects in design4, and (3) defects in the instructions (i.e., in the warning5). In relation to AI liability, the category of design defects presents the most formidable challenges. In AI systems, consumer harm could result from issues such as a faulty image recognition system that fails to identify traffic signals in a self-driving car, or a facial recognition algorithm that confuses race, gender, or age (O'Shaughnessy, 2022). In essence, consumers would suffer because of shortcomings in the product development process, specifically design flaws, where programmers make errors in the AI code or datasets, provide inadequate data, or use data that does not adequately represent all segments of society.

Despite suggestions to supplement the list of criteria for identifying product defects6, the definition of defective products in the PLD (and in the revisited PLD) does not provide a clearer algorithm for identifying defects. According to established practice in the Member States, the main criterion for assessing whether a design is defective is the conformity of the product with the state of scientific and technical knowledge (Beherendt and Moelle, 2020). Under the revisited PLD, the manufacturer is exempted from liability if the state of scientific and technical knowledge was not such that the defect could be discovered. Therefore, if the manufacturer could have developed a better product in the light of the state of scientific and technical knowledge but failed to do so, thereby violating the consumer's expectations, the manufacturer is held liable for the damage caused by the product7.

However, there are legitimate doubts as to whether the scientific and technical knowledge test, which has so far been applied in the Member States and which offers a more concrete algorithm than a purely subjective test of consumer expectations, is a sufficient test for identifying shortcomings in artificial intelligence design.

Firstly, the scientific and technical knowledge test itself lacks specificity (Restrepo Amariles and Tamò-Larrieux, 2021; Ryan and Kearney, 2013), raising the question of whether the product should be compared with the best comparable product at the time of launch or assessed against the publications or standards available to the manufacturer at that time. This lack of specificity may lead to divergent interpretations and may make it difficult for manufacturers to know what level of safety they should aim for when designing their products. It can further complicate legal analysis and lead to inconsistent results across cases (Restrepo Amariles and Tamò-Larrieux, 2021).

Secondly, as AI systems increasingly substitute for professionals or competent consumers in specific domains, it becomes debatable whether the quality of an AI system should be assessed by comparing its performance to that of a human counterpart (e.g., a professional or an average consumer in the field) (Restrepo Amariles and Tamò-Larrieux, 2021). This issue is exemplified by the Tesla accident of May 2016, in which the autopilot misidentified a large truck at an intersection as a distant bridge and directed the vehicle straight toward the truck (Hunt, 2016). As Professor Hacker highlights, an average driver would evidently not have made such an error of judgment (Hacker, 2022). The Tesla accident thus prompts the question of whether it might be valuable to identify a deficiency in the Tesla product by comparing its performance to that of a human, such as a reasonably skilled average driver, even if the Tesla model exhibits no deficiency under the scientific and technical knowledge test.

Hence, the option of identifying deficiencies in artificial intelligence by comparing the performance of an AI system with the standards of a professional in the relevant field should not be dismissed. This approach aligns with the proposal put forth by Professor Hacker, who argues that, in the absence of technical standards governing AI performance, AI should be evaluated against human performance (Hacker, 2022). A similar perspective is advocated by the Committee on Legal Affairs of the European Parliament, which maintains that, when an AI system is designed to replace or augment human decision-making, the safety of the AI system should be assessed by benchmarking its performance against that of an intelligent human with equivalent competence and knowledge (European Parliament, Committee on Legal Affairs, 2020).

Another significant criterion for evaluating product defectiveness concerns the point in time at which defects are assessed. Under the current PLD, product defectiveness is evaluated as of the moment when the manufacturer places the product on the market, after which the manufacturer no longer has control over the product as it enters the distribution chain (as per Article 7(e) of the PLD). If a defect emerges after the product has been placed on the market, the manufacturer is exempted from liability, as outlined in Article 7(b) of the PLD.

In contrast, the revisited PLD proposes to extend the assessment of product quality beyond the moment of market entry. It introduces the possibility of holding manufacturers accountable for defects that arise even after the product has been placed on the market, provided those defects are attributable to software or related services under the manufacturer's control, including software upgrades, updates, and machine learning algorithms, as elaborated in Article 10(1)(d) of the revisited PLD and its recital 37.

In this article's view, the above proposal is a positive development. Unlike traditional products, artificial intelligence systems are subject to ongoing changes and improvements by their developers, such as upgrades and enhancements driven by machine learning techniques, even after their initial release on the market (Mayer-Schönberger, 2019). This means that the manufacturer of an AI system retains a degree of control over the product or its components even after it is in the hands of consumers. Consequently, assessing the quality of a product solely on the basis of its initial launch on the market might not be appropriate (Custers, 2019).

However, the European Commission's proposal does raise certain questions. The Directive provides no clear definition of what constitutes a software update or upgrade, nor does it specify the extent of the manufacturer's control where a product is capable of learning and adapting after installation (Dheu, De Bruyne and Ducuing, 2022, p. 36). Additionally, it remains uncertain how product quality would be assessed where consumer harm is caused by a product whose component has been upgraded by the manufacturer. There are several potential points in time at which product quality and liability could be assessed:

1. From the time of the damage: should the assessment of product quality start from the moment the damage occurred? This would take into account the manufacturer's ongoing control after the product's release;

2. From the time of the product version change: should the assessment begin from the moment the product version on the market changes? This would involve evaluating the product on the basis of the updates or changes introduced; or

3. From the time of the specific update: should the assessment focus on the particular update that led to the damage? This would mean considering the product's state at the time of that specific update rather than at its initial release.

The lack of clarity on these issues introduces uncertainty into the liability framework for AI systems, which may complicate legal proceedings and make it challenging to establish responsibility for AI-related harm. The interpretation of the revisited PLD regarding the timing of the assessment has significant implications for AI product liability, and each scenario offers a different perspective:

1. Strict assessment based on the state of the art at the time of damage: this approach would hold the manufacturer accountable for AI product quality based on the most current state of the art at the time the damage occurred. It places a higher burden on manufacturers to ensure that their AI systems remain up-to-date and competitive with the latest technological advancements.

2. Less strict assessment based on the time of product version change: if the assessment starts from the moment the product version on the market changes, it might provide some flexibility for manufacturers. However, it still requires them to update their products promptly when necessary and ensure the quality meets contemporary standards.

3. Least strict assessment focused on specific updates: this scenario would potentially be more lenient toward manufacturers, since it considers the product's state only at the time of the specific update causing the harm.

The choice among these scenarios will significantly impact the legal landscape for AI product liability. Stricter standards may encourage manufacturers to maintain products at a high standard continuously, while more flexible interpretations may lead to disputes regarding the timing and foreseeability of updates and their impact on liability.

Proposals

While the European Commission's proposals for directives represent progress in safeguarding victims of AI-related harm, they fall short in certain key areas, such as scope, procedural mechanisms, presumptions, and the criteria for establishing unlawful conduct. In response to these issues, this article presents proposals aimed at improving the Directives.

1. Proposals on the scope of the Directives (subsection 2.1):

1. Consider extending the scope of the AILD to encompass both fault-based civil liability and strict liability. By including strict liability, the AILD would offer a more robust framework for addressing AI-related harm.

2. The evidence disclosure mechanism could be extended to encompass not only claimants affected by high-risk AI systems but also those impacted by low-risk AI systems. This would expand the scope of the Directives to encompass a broader range of AI systems, ensuring that a larger number of victims have access to evidence when pursuing claims for damages.

3. In the recitals of the AILD, include a statement that explicitly confirms the Directive's relevance in cases where AI contributes to harm in conjunction with human decision-makers. This addition would eliminate ambiguity as to whether the AILD covers scenarios in which AI systems support human decision-making.

2. Proposals for the disclosure of evidence in the Directives (section 2.2):

1. Consider creating a fund to cover potential claimants' expenses for expert analysis of the AI system, and specify in the disclosure rules that the information must be presented in a manner that maximizes its comprehensibility to the claimant.

2. Similarly to the AILD, the revisited PLD might incorporate a broad obligation to retain data. This could be achieved by expanding Article 9(2a) of the revisited PLD to establish a presumption of product defectiveness where the defendant fails to comply with the obligation to preserve evidence relating to the AI system responsible for the harm.

3. Article 8 of the revisited PLD could grant claimants the right to obtain information from defendants through a pre-litigation information request, along with the option to acquire evidence from the defendant without initiating a damages claim, in line with the provisions of the AILD.

3. Proposals for a presumption mechanism in the Directives (section 2.3):

1. The AILD might establish that the presumption of causation for high-risk AI damage applies not only in the event of a breach of the relevant provisions of the AI Act, but also in the event of a breach of the general duty of care or a violation of other norms laid down in national law.

2. Consider aligning the AILD with the revisited PLD by establishing that unlawful conduct is presumed if the claimant demonstrates that the product does not meet the mandatory safety requirements or that the damage resulted from an evident product defect.

4. Proposals on product defectiveness under the revisited PLD (subsection 2.4):

1. Enhance the clarity of the revisited PLD by offering more explicit guidance on evaluating the quality of a product's design, including whether it would be suitable to identify AI system shortcomings by benchmarking them against the performance of a professional in the relevant field.

2. The revisited PLD could outline the process for establishing the threshold of scientific and technical knowledge applied in evaluating product defectiveness.

3. The revisited PLD might clarify when the assessment of product quality takes place where the manufacturer retains control after release: from the time the damage occurs; from the moment the product version on the market changes; or, if the damage results from an update, from the specific moment that update is implemented.

In conclusion, the improvements proposed for the Directives in several crucial domains, such as product defectiveness, presumption mechanisms, evidence disclosure, and scope, seek to rectify the shortcomings observed in the current proposals and to enhance protection for individuals affected by artificial intelligence. By incorporating these enhancements, the Directives could more effectively fulfil their aim of ensuring sufficient protection for victims of AI-related harm.

Bibliography

Legal acts

European Union legislation

Council Directive of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (85/374/EEC).

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

European Commission proposal of 28 September 2022 for a Directive of the European Parliament and of the Council on adapting the rules on non-contractual civil liability to artificial intelligence.

European Commission proposal of 28 September 2022 for a Directive of the European Parliament and of the Council on product liability for defective products.

Special literature

Custers, B. (2019). The liability implications of artificial intelligence and big data: A private international law perspective. Journal of Law and the Biosciences, 6(1), 1–33.

Dheu, O., De Bruyne, J., Ducuing, C. (2022). The European Commission's Approach to Extra-Contractual Liability and AI – A First Analysis and Evaluation of the Two Proposals [online]. Available at: https://ssrn.com/abstract=4239792 [Accessed 31 Mar. 2023].

European Commission, Directorate-General for Communications Networks, Content and Technology (2020). European enterprise survey on the use of technologies based on artificial intelligence: final report. Publications Office [online]. Available at: https://data.europa.eu/doi/10.2759/759368

European Commission, Directorate-General for Justice and Consumers (2019). Liability for artificial intelligence and other emerging digital technologies. Publications Office [online]. Available at: https://data.europa.eu/doi/10.2838/573689

Hacker, P. (2022). The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future. Working paper, December [online]. Available at: https://arxiv.org/pdf/2211.13960.pdf

Mayer-Schönberger, V. (2019). Regulating Uncertainty in the Use of Machine Learning Algorithms in Medicine. Duke Law & Technology Review, 17(2), 234–256.

Mikelėnas, V. (1995). Problems of Civil Liability: Comparative Aspects. Vilnius: Justitia.

Rescher, N. (2006). Presumption and the Practices of Tentative Cognition. Cambridge: Cambridge University Press.

Restrepo Amariles, D., Tamò-Larrieux, A. (2021). Product Liability and Artificial Intelligence: Challenges and Opportunities. European Journal of Risk Regulation, 12(1), 33–50.

Ryan, J., Kearney, L. (2013). Product Liability for High-Tech Products: The Challenges of the Scientific and Technical Knowledge Defence. Journal of Business Law, (3), 216–240.

Schellekens, M. (2022). Human-machine interaction in self-driving vehicles: a perspective on product liability. International Journal of Law and Information Technology, 30(2), 233–248, https://doi.org/10.1093/ijlit/eaac010

Soyer, B., Tettenborn, A. (2023). Artificial intelligence and civil liability – do we need a new regime? International Journal of Law and Information Technology.

Van Gool, E. (2022). Case C-65/20 Krone: Offering (some) clarity relating to product liability, information, and software [online]. European Law Blog. Available at: https://europeanlawblog.eu/2022/01/19/case-c-65-20-krone-offering-some-clarity-relating-to-product-liability-information-and-software/

Wachter, S., Mittelstadt, B., Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005

Wendehorst, C. (2022). Expert explainer: AI liability in Europe [online]. Available at: https://www.adalovelaceinstitute.org/resource/ai-liability-in-europe/

Zech, H. (2021). Liability for AI: public policy considerations. ERA Forum, 22. https://doi.org/10.1007/s12027-020-00648-0

Case law

CJEU decisions

CJEU, 29 May 1997, Case C-300/95 [1997] ECR I-2649 (Commission v United Kingdom).

Decisions of the courts of the Republic of Lithuania

Resolution of the Panel of Judges of the Civil Cases Division of the Supreme Court of Lithuania of 29 November 2012 in the civil case G. P. and Others v. Republic of Lithuania, Case No 3K-3-539/2012.

Other sources

Beherendt, P., Moelle, H. (2020). Product Liability – 5 Particularities in Germany. [online]. Available at: https://www.taylorwessing.com/de/insights-and-events/insights/2018/10/product-liability-5-particularities-in-germany [Accessed 31 Mar. 2023].

De Bruyne, J., Dheu, O., Ducuing, C. (2022). The Commission's proposals for adapting liability rules to the digital age. Part 1: The AI Liability Directive [online]. CITIP Blog. Available at: https://www.law.kuleuven.be/citip/blog/the-commissions-proposals-for-adapting-liability-rules-to-the-digital-age-part-1-the-ai-liability-directive/ [Accessed 31 Mar. 2023].

European Commission (2022a). Press corner [online]. Available at: https://ec.europa.eu/commission/presscorner/detail/en/QANDA_22_5791 [Accessed 31 Mar. 2023].

European Commission (2022b). Press corner [online]. Available at: https://ec.europa.eu/commission/presscorner/detail/en/QANDA_22_5793 [Accessed 31 Mar. 2023].

Hunt, E. (2016). Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter [online]. The Guardian. Available at: https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter

Nawaz, S. A. (2022). The Proposed EU AI Liability Rules: Ease or Burden? [online] European Law Blog. Available at: https://europeanlawblog.eu/2022/11/07/the-proposed-eu-ai-liability-rules-ease-or-burden/ [Accessed 31 Mar. 2023].

Latham & Watkins (2022). European Commission Proposes Reform on Liability Rules for Artificial Intelligence [online]. Latham.London. Available at: https://www.latham.london/2022/12/european-commission-proposes-reform-on-liability-rules-for-artificial-intelligence/ [Accessed 31 Mar. 2023].

O'Shaughnessy, M. (2022). One of the Biggest Problems in Regulating AI Is Agreeing on a Definition [online]. Carnegie Endowment for International Peace. Available at: https://carnegieendowment.org/2022/10/06/one-of-biggest-problems-in-regulating-ai-is-agreeing-on-definition-pub-88100

Deimantė Rimkutė is a doctoral student at the Department of Private Law, Faculty of Law, Vilnius University. Her main areas of research are the law of obligations, civil liability, tort liability, and the influence of technology on law. The title of her dissertation in preparation is "Compensation for Damages Caused by the Use of Artificial Intelligence".


1 Under the revisited PLD, liability will also apply to economic operators, i.e. the manufacturer of the product or component, the related service provider, the authorised representative, the importer, the fulfilment service provider or the distributor (Article 4(16) of the revisited PLD).

2 According to the revisited PLD, a digital service is a service that is integrated into or linked to a product in such a way that without it the product would not be able to perform one or more of its functions (Article 4(4) of the revisited PLD).

3 Manufacturing defects occur when an individual product does not match the manufacturer's prototype.

4 Design defects arise when a product meets the manufacturer's specifications, but those specifications themselves do not meet consumer expectations.

5 Instructional defects result from errors in the instructions for use or maintenance.

6 According to the current version of Article 6 of the PLD, these circumstances are: (1) the presentation of the product; (2) the use to which the product could reasonably be expected to be put; and (3) the time at which the product was put into circulation. The revisited PLD proposes to add a number of further circumstances, namely: (4) the effect on the product of other products that can reasonably be expected to be used together with it; (5) the moment of placing on the market or putting into service of the product or, where the manufacturer retains control over the product thereafter, the moment when the product leaves the manufacturer's control; (6) product safety requirements, including cybersecurity requirements; (7) any intervention by a regulatory authority; and (8) the expectations of the end-users.

7 The revisited PLD clarifies that the test of the level of scientific and technical knowledge is objective (Article 10(1)(e) of the revisited PLD). In this way, it is proposed to codify across EU Member States the case-law of the CJEU in case C-300/95 (CJEU, 29 May 1997, Case C-300/95 [1997] ECR I-2649 (Commission v United Kingdom)).