Artificial Intelligence (AI) agents cannot imitate the feelings of humans, and therefore, liability for crimes that require the existence of feelings (such as hate crimes) cannot be attributed to AI agents.
AI researchers generally aim to build Artificial Intelligence agents capable of mimicking aspects of human intelligence, including learning, visual observation of the world, language processing and so on. The general purpose is to program them so that they match (if not outperform) the human ability to understand the world and absorb information from the environment. Under such circumstances, it is important to examine the criminal liability of an AI agent: who will be held liable for a crime committed by an AI agent?
The constituent elements of criminal liability are actus reus (wrongful act) and mens rea (wrongful intention). An act cannot be criminal by itself; it must be accompanied by a guilty mind. Attributing actus reus to an AI agent is fairly simple: if an AI system takes an action resulting in a criminal act (or fails to take an action where there was a duty to act, i.e. an omission), then the actus reus element of the offence is made out on the part of the AI system. The tougher task, however, is attributing mens rea to an AI agent.
It is important to highlight that there are various levels of mens rea. While the Indian Penal Code, 1860 does not explicitly use the term 'mens rea', its essence is captured in expressions such as 'dishonestly', 'criminal knowledge or intention' and so on. Mens rea generally indicates a blameworthy mental condition, and includes intention, negligence and recklessness.
Intention refers to an individual's desire to bring about a particular result; it can also include the foresight that certain consequences will follow from the individual's conduct. Negligence refers to the lack of precaution expected of a reasonable man in the specific circumstances of a case. Lastly, recklessness refers to the foresight that certain consequences may follow from one's conduct, coupled with an absence of desire for those consequences.
In this context, it is important to examine the three models of criminal liability put forth by Gabriel Hallevy: the Perpetration-via-Another Liability Model, the Natural-Probable-Consequence Liability Model and the Direct Liability Model. These models are not mutually exclusive, which means that two or even all three of them can operate in a society at the same time.
According to the first model, the AI system is an innocent agent and is not regarded as having any human attributes. This model considers the AI agent solely an instrument used by the real perpetrator, who is held accountable for committing the offence. Further, under this model, the AI agent is placed on the same footing as innocent agents such as mentally incompetent persons and children.
Under the second model, the underlying assumption is that the programmers and users involved in the day-to-day activities of the AI agent have no intention of committing an offence through the AI entity. Liability here mirrors that of accomplices, where one accomplice commits an offence that had not been planned by all. For this model to apply, the programmers or users need only be aware that the particular offence is a natural or probable consequence of their actions. It is important to highlight that under this model, the AI agent is held criminally liable (along with the programmers and the users) only if it did not act as an innocent agent.
The third model provides for the direct liability of the AI entity; the programmer or user of the AI agent is given less importance under this model. There is no reason not to attribute criminal liability to an AI agent if it is able to fulfil both the external and the internal elements, i.e. the actus reus and the mens rea. As discussed previously in this piece, attributing mens rea to an AI agent is the difficult part.
Most AI agents are capable of receiving factual data and forming an understanding of it, which is what constitutes knowledge. Specific intent refers to an aim of making a factual event occur; an AI agent can be programmed to have such an aim or purpose. However, AI agents cannot imitate the feelings of humans, and therefore liability for crimes that require the existence of feelings (such as hate crimes) cannot be attributed to AI agents. This model does not suggest that the liability of the AI agent precludes that of the programmers or users; the liability is not divided but combined.
It is therefore possible to impose liability on Artificial Intelligence agents if such models are in place. Considering that these agents are now capable of imitating aspects of human life, it is extremely important to have mechanisms in place to protect society against the dangers that may result from such a development.
D Gaur, Criminal Law: Cases and Materials (9th edn)
S Atchuten Pillai, PSA Pillai's Criminal Law (13th edn, 2018)
C Jr Lashbrooke, 'Legal Reasoning and Artificial Intelligence', Loyola Law Review (1988)
Gabriel Hallevy, 'The Criminal Liability of Artificial Intelligence Entities: From Science Fiction to Legal Social Control', Akron Intellectual Property Journal (2016)
Maxim Dobrinoiu, 'The Influence of Artificial Intelligence on Criminal Liability', Lex ET Scientia International Journal (2019)
Vincent Boulanin, 'Artificial Intelligence: A Primer', Stockholm International Peace Research Institute (2019)
Cover Image Credits: Gayathri N.