
If AI kills someone, who should be responsible?



When we humans harm someone, the responsibility we bear for our actions is clear. But when a machine causes the harm, to whom should accountability fall? The robot itself cannot answer for any of the consequences, which leaves us with the question: who is responsible?


With the popularization of artificial intelligence (AI) systems in fields as diverse as industry, commerce, the military, medicine, and personal life, a long-running debate has arisen about our relationship with these entities. Whether current social and legal structures can cope with AI technology is a serious and widespread concern in modern society. If a robot harms someone, what are the legal ramifications? Although AI technologies are gaining significant space in the contemporary world, research into responsibility for a robot's actions is still shallow, and it remains unclear on whom the blame should fall when unintended mistakes occur.


Accountability plays a fundamental role in human morality; in fact, we expect it in every situation. Responsibility supports the rule of law and guides the calculation of restitution. It is present in professional and governmental activities, and it is the "glue" of social trust between citizens. Because people and organizations are held responsible for their actions, we can anticipate what they will do and weigh the implications for ourselves. Yet accountability applies only to human actors - "it is uniquely a human ethical priority" (Chief Executive website) - and bots cannot be blamed for their actions. Here a barrier appears: a solution to contemporary problems becomes a problem in itself, one that demands solutions of its own.

Some argue that accountability means an AI system can justify the decisions it makes, its actions being "derivable from, and explained by, the decision-making mechanisms used," says AI ethicist Virginia Dignum. In Dignum's view, the accountability of an AI tool involves a wide range of components in a socio-technical system rather than being a distinctive trait of one particular system. The other perspective ties a bot's accountability to its capacity and its awareness of each scenario: an AI tool should already include accountability as one of its components. This falls under "algorithmic accountability," in which a company is responsible for performing a risk assessment of the automated system before the product is put on the market, while also considering the capacity of its socio-technical system.

Taking these aspects into account, accountability reflects the expectation that the parties involved in developing an AI can explain both the system's decisions and their own; such explanations are the indispensable basis of human responsibility for AI judgements. Applying AI involves many stakeholders, each subject to laws, regulations, and social expectations, because the decisions machines make occur within both social and technical systems. Out of these situations emerges a vast landscape of possibilities in which AI outcomes may fall back on countless individuals and organizations.

When an AI system harms someone, determining responsibility can be complex, and factors such as how the system was implemented and how it was used must be taken into consideration. There is no clear consensus on who should be held responsible, but there are a few perspectives to weigh. First, the designers and manufacturers of the AI system should be questioned: if the system was designed and built with a clear intention to cause harm, then those involved in designing and manufacturing it bear responsibility. Second, the owners or operators of the system could be approached: if the system was used in an irresponsible or harmful manner, the person or organization operating it should be held liable. Finally, the users of the system must be considered: if the system was used in a way that was not intended or that lies outside its capabilities, the user could be held responsible for any harm caused. The question becomes even harder to answer when these systems are still under development or contain algorithms that are constantly changing.

It might be simpler for businesses to focus on whom to call on to put things right rather than whom to hold accountable when things go wrong. A business can respond to AI outcomes and take genuine responsibility for correcting problems with remedial action when there is a clear understanding of who is responsible for what, and to whom.

Ultimately, the question of responsibility will likely be settled by the legal system and may involve extensive questions of liability and accountability. In some cases multiple parties might be held responsible, or responsibility may be shared. It is important to bear the ethical implications of AI technology in mind and to develop clear regulations and guidelines so that AI systems are used safely and responsibly. But the tension between accountability and innovation is not a binary choice; responsibility cannot be neglected in the pursuit of revolutionary AI. "Modern problems require modern solutions," even when those problems began as solutions.

