
European Commission Proposes Reform on Liability Rules for Artificial Intelligence | Latham & Watkins LLP – JDSupra

The directives aim to help claimants prove causation of damage and product defectiveness in claims involving complex AI systems, while creating legal certainty for providers.

On 28 September 2022, the European Commission issued two proposed directives to reform and clarify liability rules on artificial intelligence (AI): a new directive on civil liability for AI (the AI Liability Directive) and a revision of the EU's product liability rules (the Revised Product Liability Directive).

The European Commission considers that recent developments in AI have exposed shortcomings within the EU's civil liability rules. In particular, the complexity and opacity of AI systems can make it excessively difficult for claimants to prove fault, defectiveness, and causation under existing national rules, while the absence of AI-specific liability rules creates legal uncertainty for providers.

AI Liability Directive

The AI Liability Directive complements the European Commission’s proposed Regulation on Artificial Intelligence (AI Act) (for further information on the AI Act, see Latham’s briefing). Whereas the AI Act aims to classify AI systems by risk and regulate them accordingly, the AI Liability Directive seeks to impose liability if these risks have materialised into harm affecting end users.

Notably, the AI Liability Directive would allow national courts to compel providers of high-risk AI systems to disclose relevant evidence to claimants about a specific system that is alleged to have caused damage. This rule may apply if: (i) the claimant presents sufficient facts and evidence to support the claim for damages; and (ii) the claimant shows that they have exhausted all proportionate attempts to gather the relevant evidence from the defendant. Access to evidence would allow claimants to decide whether their claim is well founded and, if so, how to substantiate their claim for damages. These disclosure powers under the AI Liability Directive dovetail with the transparency, audit, and recordkeeping obligations proposed by the AI Act.
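As a minimal sketch of how these two limbs gate the disclosure power, the test can be modelled as a simple check. The function and parameter names below are hypothetical, and the booleans stand in for findings a national court would actually make; this illustrates the rule's structure only.

```python
def disclosure_may_be_ordered(claim_sufficiently_supported: bool,
                              proportionate_attempts_exhausted: bool) -> bool:
    """Sketch: both conditions must hold before a national court may compel a
    provider of a high-risk AI system to disclose evidence (illustrative only)."""
    # (i) the claimant presents sufficient facts and evidence to support the
    #     claim for damages; and
    # (ii) the claimant has exhausted all proportionate attempts to obtain the
    #      relevant evidence from the defendant.
    return claim_sufficiently_supported and proportionate_attempts_exhausted
```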

Additionally, the AI Liability Directive introduces a rebuttable presumption of causation between the defendant's fault and the damage caused to a claimant by the AI system. This presumption would apply if the following three conditions are met: (i) the claimant demonstrates the defendant's fault, consisting in non-compliance with a duty of care intended to protect against the damage that occurred; (ii) it can be considered reasonably likely, based on the circumstances of the case, that the fault influenced the output produced by the AI system (or the system's failure to produce an output); and (iii) the claimant demonstrates that this output (or failure to produce an output) gave rise to the damage.

However, the presumption would not apply in relation to AI systems categorised as high-risk under the AI Act[1] if the defendant shows that sufficient evidence and expertise are reasonably accessible for the claimant to prove the causal link between fault and damage. For AI systems that the AI Act does not categorise as high-risk, the presumption of causation would apply only if national courts consider it excessively difficult for the claimant to prove that causal link. In either case, the defendant retains the right to rebut the presumption by showing that its fault could not have caused the damage.
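Taken together, the conditions and carve-outs above form a branching test. The sketch below models that structure as a plain decision function; it is an illustrative simplification, and the type, field names, and boolean inputs are hypothetical stand-ins for assessments a national court would actually make.

```python
from dataclasses import dataclass

@dataclass
class CausationClaim:
    """Hypothetical bundle of findings a court would make (illustrative only)."""
    defendant_fault_shown: bool           # (i) non-compliance with a relevant duty of care
    fault_likely_influenced_output: bool  # (ii) fault plausibly influenced the system's output
    output_caused_damage: bool            # (iii) the output, or failure to produce one, caused the damage
    high_risk_system: bool                # categorised as high-risk under the AI Act
    evidence_reasonably_accessible: bool  # defendant shows claimant can prove causation directly
    proof_excessively_difficult: bool     # court finding, relevant to non-high-risk systems

def causation_presumed(c: CausationClaim) -> bool:
    """Sketch of the AI Liability Directive's rebuttable presumption of causation."""
    # All three base conditions must hold before any presumption can arise.
    if not (c.defendant_fault_shown
            and c.fault_likely_influenced_output
            and c.output_caused_damage):
        return False
    if c.high_risk_system:
        # No presumption if the defendant shows that sufficient evidence and
        # expertise are reasonably accessible to the claimant.
        return not c.evidence_reasonably_accessible
    # For systems that are not high-risk, the presumption applies only where
    # the court considers direct proof of the causal link excessively difficult.
    return c.proof_excessively_difficult
```

Modelling the rule this way makes the asymmetry visible: for high-risk systems, the burden of avoiding the presumption sits with the defendant, whereas for other systems the claimant must first persuade the court that direct proof would be excessively difficult.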

Revised Product Liability Directive

The Revised Product Liability Directive aims to modernise the Product Liability Directive by taking account of software and digital products. It retains the Product Liability Directive’s strict or no-fault liability regime, which means that manufacturers of defective products would be held liable without the need for claimants to establish any fault on the manufacturer’s part. In contrast, the AI Liability Directive, as discussed above, uses a “fault-based” regime.

Under the Revised Product Liability Directive, AI systems and AI-enabled goods fall explicitly within the scope of regulated products, thereby bringing them within the no-fault liability regime. The Revised Product Liability Directive also provides that hardware manufacturers, software producers, and providers of digital services that affect how a product works could all face liability for product defects.

The Revised Product Liability Directive further seeks to place ongoing liability on such manufacturers, software producers, and providers of digital services once the AI systems are on the market. These operators would remain liable if defectiveness results from a related service, software updates or upgrades, or a lack of software updates or upgrades required to maintain safety, to the extent the foregoing are within the relevant operator’s control.

The Revised Product Liability Directive also provides that if a national court determines that technical or scientific complexity makes it excessively difficult for claimants to prove a product's defectiveness or the causal link between the defect and the damage, then such defectiveness or causation can be presumed. This presumption could apply provided the claimant can prove that: (i) the product contributed to the damage; and (ii) it is likely that the product was defective or that its defectiveness caused the damage, or both.
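The structure of this presumption can be sketched in the same illustrative style, again with hypothetical names and booleans standing in for judicial findings:

```python
def defectiveness_or_causation_presumed(
    proof_excessively_difficult: bool,    # court finding based on technical or scientific complexity
    product_contributed_to_damage: bool,  # limb (i): the product contributed to the damage
    defect_likely: bool,                  # limb (ii), first alternative: the product was likely defective
    defect_likely_caused_damage: bool,    # limb (ii), second alternative: defectiveness likely caused the damage
) -> bool:
    """Sketch of the presumption of defectiveness or causation under the
    Revised Product Liability Directive (illustrative only)."""
    # The gate: the presumption is available only where the court finds that
    # complexity makes direct proof excessively difficult for the claimant.
    if not proof_excessively_difficult:
        return False
    # Limb (i) is mandatory; limb (ii) is satisfied by either alternative, or both.
    return product_contributed_to_damage and (defect_likely or defect_likely_caused_damage)
```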

Next Steps

The European Commission intends the AI Liability Directive and the Revised Product Liability Directive to complement each other in building a liability framework for AI systems and AI-enabled products, alongside the broader regulatory framework under the AI Act. These proposals currently sit at various stages of the EU legislative process and will be subject to further negotiation and potential amendment before coming into force. Questions remain as to how the strict liability and fault-based regimes will operate side by side in practice; how these liability regimes will relate to the AI Act's proposed conformity assessment framework for AI systems; and how market practice will develop for the allocation of liability along AI and data supply chains.

In the UK, the government has rejected comprehensive regulation of AI and AI liability in favour of an iterative, sector-based approach building on current regulation and guidance. For an overview of the UK approach, see Latham’s briefing on the UK’s AI Strategy.

This post was prepared with the assistance of Clarence Cheong in the London office of Latham & Watkins.
