What happens if an AI system causes damage, and who bears responsibility for the consequences
Whether we want to accept it or not, artificial intelligence plays a more important role in our society than ever before. From medical diagnoses to financial decisions, AI systems increasingly take on tasks once performed by people.
I. What Can AI Errors in Production Lead To?
- Employee/consumer data breaches;
- Manufacturing errors that harm consumers — for example, defective goods;
- Workplace injuries;
- Financial losses (due to equipment downtime or increased product defects);
- And other consequences.
II. Subjects of Liability
EU Regulation — at the forefront, but with its difficulties
In the EU, artificial intelligence is regulated by Regulation (EU) 2024/1689 of 13 June 2024, known as the AI Act. This is the world's first comprehensive regulatory act governing AI. At the same time, the document essentially omits issues of liability for errors made by AI systems, which raises many questions. To address this gap, the EU prepared a draft AI Liability Directive, which was intended to provide various liability measures for damage caused by artificial intelligence. However, in February 2025 the draft was withdrawn: the stated reasons were that stakeholders could not reach agreement, and that calls to simplify regulation in the digital sector also played a role.
Thus, in the EU at the moment, liability for AI errors is mainly governed by a general act, the EU Directive on Liability for Defective Products (hereinafter, the Directive). Its key provisions are:
- The Directive applies to autonomous software, including AI systems, with some exceptions;
- The liability measure under the Directive is compensation for damage;
- Any person entitled to compensation may bring an action against: (i) the manufacturer of the defective product; and (ii) in certain cases, the manufacturer of a defective component. If the manufacturer is located outside the EU, claims may also be brought against: (i) the importer; (ii) the manufacturer's authorized representative; or (iii) the fulfillment service provider. Liability is joint and several;
- The burden of proving the product's defectiveness, the damage, and the causal link between the defect and the damage lies with the injured party;
- Those who have paid compensation may bring a recourse claim against the party at fault for the damage;
- Notably, the Directive provides the following ground for exemption from liability: the objective state of scientific and technical knowledge at the time the product was placed on the market or put into operation, or during the period when the product was under the manufacturer's control, did not allow the defect that caused the damage to be detected.
It should be noted that the Directive does not apply directly in member states and must be transposed into national legislation.
CIS Model Law — trends in regional legislation development
In April 2025, the CIS model law "On Artificial Intelligence Technologies" was adopted — a recommendatory act that can be used by member states in developing national legislation in the field of AI.
Main provisions:
- The law extends its effect to AI technologies and systems using AI;
- The law introduces the principle of absolute and joint liability, under which:
  - absolute liability (i.e., liability regardless of fault) applies in the sphere of relations involving high-risk AI technologies;
  - owners, possessors, developers and operators of AI technologies bear joint liability;
- Liability may arise under criminal, administrative, civil and labor legislation (specific offenses must be defined by national legislation);
- The need to insure risks associated with AI errors is enshrined (the list of AI technologies subject to insurance is to be determined by the authorized state body in accordance with national legislation);
- Mutual insurance is allowed with the possibility of establishing special conditions for insuring risks of causing harm during testing and pilot operation of certain categories of AI technologies.
The situation in Belarus — how can an error made by AI in production be regulated in theory now?
At the moment, Belarusian legislation has no special rules on liability for damage caused by errors of AI systems. There are many competing theories about whether a person can be held liable for a robot's errors, especially when it comes to highly autonomous systems. The prevailing approach today is that a person or company bears responsibility for the actions of autonomous systems. Thus, those who may be held liable for the damage caused include:
- The company that created a defective AI-enabled product that caused damage (for example, liability is possible under the Law "On Consumer Protection"), or the company that introduced AI into production and failed to ensure safe working conditions (where an employee of the enterprise is harmed). It may also be possible to hold liable the head of the organization or another person responsible for equipment safety (for example, a programmer in the company responsible for technical support of the software);
- The company that developed the AI system integrated into production (claims may be brought by way of recourse). An important caveat: the enterprise must be able to prove that the product defects or other errors were caused precisely by the poorly developed AI system.
III. Risk insurance
Today in Belarus there is no practice of insuring risks associated with the use of AI technologies.
At the same time, some countries have actively begun to implement insurance in the field of AI.
For example, the Lloyd's of London insurance market has begun to offer policies covering risks and/or losses associated with errors of chatbots and other artificial intelligence tools.
There is a view that liability insurance for the use of AI fits into the general concept of cyber risk insurance. The global market already offers specialized products, such as cyber insurance, professional liability insurance and technology risk insurance, that cover losses from incidents involving AI. Similar products already exist in Russia, for example, and they can be adapted to cover AI-related risks. However, this market is still in the early stages of its development.
At the same time, provisions on insurance of such risks are also enshrined in the CIS model law.
It can be expected that in the coming years, Belarusian legislation will be adapted taking into account global trends, and insurance organizations will begin to develop specialized products to cover losses arising from the use of AI in industry.