The European Commission has published new liability rules on digital products and artificial intelligence (AI) to protect consumers from harm, including in cases where manufacturers fail to address cybersecurity vulnerabilities.
The two proposals the Commission adopted on September 28, 2022, will modernize the existing rules on the strict liability of manufacturers for defective products.
The liability rules allow compensation for damage when products like robots, drones or smart-home systems are made unsafe by software updates, AI or digital services needed to operate the product, as well as when manufacturers fail to address cybersecurity vulnerabilities.
The Commission adopted two proposals to adapt liability rules to the digital age, the circular economy and the impact of global value chains.
The Commission proposes for the first time a targeted harmonization of national liability rules for AI, making it easier for victims of AI-related damage or cybersecurity threats to get compensation.
The revised Directive modernizes and reinforces the current well-established rules, based on the strict liability of manufacturers, for the compensation of personal injury, damage to property or data loss caused by unsafe products, from garden chairs to advanced machinery. It ensures fair and predictable rules for businesses and consumers alike by adapting liability to circular economy business models and to products in the digital age, covering the scenarios described above: products made unsafe by software updates, AI or digital services, or by unaddressed cybersecurity vulnerabilities.
The Directive simplifies the legal process for victims when it comes to proving that someone's fault led to damage, by introducing two main features. First, in circumstances where a relevant fault has been established and a causal link to the AI performance seems reasonably likely, the so-called 'presumption of causality' will address the difficulties victims face in having to explain in detail how harm was caused by a specific fault or omission, which can be particularly hard when trying to understand and navigate complex AI systems. Second, victims will have more tools to seek legal reparation, through a right of access to evidence from companies and suppliers in cases involving high-risk AI.
The new rules strike a balance between protecting consumers and fostering innovation, removing additional barriers for victims seeking compensation, while laying down guarantees for the AI sector by introducing, for instance, the right to contest a liability claim based on a presumption of causality.
The proposed directives still need to be transposed into national law, and European governments are urged to treat this as a priority in their legislative schedules.
Author: Mattia Forzo