To address the criminal liability of artificial intelligence entities, it is first necessary to determine whether a given entity is semi-autonomous, autonomous, or conscious. Since autonomous and semi-autonomous systems are treated as goods, criminal liability for them rests with the person using them. Who bears criminal responsibility for a crime committed, whether negligently or intentionally, by a conscious artificial intelligence remains controversial.

For conduct to constitute a crime under criminal law, it must first be a voluntary act. Although artificial intelligence today cannot be said to act in a fully voluntary manner, it is foreseeable that such examples will arise in the near future. Even now, artificial intelligence can be said to act voluntarily, albeit in a limited way, within the scope of its programming. For example, the Autopilot of Tesla's electric 'Model 3' once perceived a flag as a traffic light; had the vehicle proceeded through a red light on the basis of such a misperception, people could have been injured or killed.[1]

As this example shows, artificial intelligence entities may commit crimes such as injury, killing, insult, threat, damage to property, unlawful access to information systems, and the recording, dissemination, or seizure of personal data. The question of who bears criminal responsibility for these crimes therefore needs to be resolved. Since current legal systems tend to treat artificial intelligence as goods, the criminal liability of the programmer or user comes to the fore for any crime committed. It should be noted, however, that the debate over the legal personality of artificial intelligence is ongoing and that no clear regulation exists on the subject.

According to the view that treats artificial intelligence entities as goods, if such an entity commits a crime, the person who programmed or used it should be held responsible. For the programmer or user to be criminally liable, that person must have known they were using the artificial intelligence entity as an instrument of a crime; in other words, they must be at fault. It should also be noted that artificial intelligence entities can qualify as weapons under the Turkish Penal Code. Under Article 6 of the Code, any cutting, piercing, or bruising instrument made for use in attack or defense is considered a weapon. An artificial intelligence entity with cutting, piercing, or bruising features will therefore be treated as a weapon.

The moral element of a crime consists of intent or negligence. Intent is the knowing and willful performance of the elements in the legal definition of the crime. Negligence, by contrast, is conduct performed without foreseeing the consequences specified in the legal definition, owing to a breach of the duty of care and attention. For example, if a crime results from programming a robot to kill or insult a person, liability for an intentional offense arises. Likewise, where a person is killed by an unmanned aerial vehicle, or an artificial intelligence is commanded to 'curse at everyone other than me who asks you a question', the software developer or user will be held liable on the basis of intent.[2]

In contrast to the view that treats artificial intelligence as a thing, some scholars advocate recognizing it as a legal person. Under the Turkish Penal Code, penal sanctions cannot be imposed on legal persons; only the security measures prescribed by law in connection with a crime are reserved. In other words, penal sanctions can be imposed only on real persons. Therefore, if artificial intelligence entities are granted legal or electronic personality, they cannot be subjected to penal sanctions.

Another view in the doctrine holds that artificial intelligence should be recognized as a real person if, in the future, it comes to possess full freedom of will, that is, to act consciously. In that case, the programmer could not be held responsible for a crime committed by the artificial intelligence, because criminal law is governed by the principle of the individuality of punishment: every real person is personally responsible for the crimes they commit with intent or negligence. Accordingly, the artificial intelligence itself would bear criminal responsibility for the crimes it commits.

If the artificial intelligence entity is accepted as a real person, it can, just like a human, be used as an instrument in a crime; in that case, criminal liability belongs to the person in the background. If, by contrast, the entity is given a personality other than that of a real person (legal or electronic personality), or is recognized as having no personality at all, it cannot be 'used as an instrument' in the technical sense of indirect perpetration. Indeed, if the view denying personality is adopted, an entity used in a crime is simply a tool, object, or weapon. In both cases the human perpetrator of the crime remains the same; in the first scenario the human is the indirect perpetrator, while in the second the human is the direct perpetrator.[3]

Another issue to consider is whether an artificial intelligence entity can be the victim of, or a party harmed by, a crime. On the view that treats artificial intelligence as property, causing any damage to it constitutes the crime of damage to property. On the views that would grant artificial intelligence legal or electronic personality, it can only be a party harmed by a crime, although there is also an opinion in the doctrine that legal persons may be victims. On the view that would grant artificial intelligence real personality, such entities can be both perpetrators and victims of crime.

Finally, the question of sanctions and their execution must be addressed with respect to crimes committed by artificial intelligence. On the view that treats artificial intelligence as a commodity, no sanction is imposed on the artificial intelligence itself. On the view that would grant it legal or electronic personality, however, the provisions of the Turkish Penal Code concerning legal persons could be applied to artificial intelligence by analogy. Although penal sanctions cannot be imposed on legal persons under the Code, security measures can. The law provides two security measures for legal persons: revocation of the operating license and confiscation. Because the debate over the legal personality of artificial intelligence is ongoing, security measures for artificial intelligence entities have not been clearly regulated. According to proponents of this view, however, if an artificial intelligence entity commits a crime, a security measure in the form of canceling its software could be applied within the framework of the license-revocation procedure.

Looking at sanctions from the standpoint of the view that would accept artificial intelligence entities as real persons, the sanctions the law prescribes for real persons aim at rehabilitation. Imposing a prison sentence on an artificial intelligence entity would therefore fail to achieve the law's purpose. The judicial fine prescribed for real persons, however, can be said to suit the nature of an artificial intelligence entity comparatively well.

In conclusion, as the debate over the legal nature of artificial intelligence continues and technology advances by the day, new problems prompt the search for new solutions. Because current legal systems tend to treat artificial intelligence as goods, its programmer or user is held responsible for crimes arising from it. On the view that artificial intelligence should be accepted as a real person, a judicial fine can be imposed as a sanction, even though imprisonment does not suit its nature. On the views that would accept artificial intelligence as a legal or electronic person, the appropriate sanction is neither imprisonment nor a judicial fine; given the nature of artificial intelligence, sanctions such as terminating its software or temporarily disabling it seem more fitting.

REFERENCES

Köken, E. (2021). Yapay Zekânın Cezai Sorumluluğu [The Criminal Liability of Artificial Intelligence]. Türkiye Adalet Akademisi Dergisi, (47).

Turkish Penal Code and Related Legislation


[1] Köken, E. (2021). Yapay Zekânın Cezai Sorumluluğu. Türkiye Adalet Akademisi Dergisi, (47), p. 264.

[2] Köken, E. (2021). Yapay Zekânın Cezai Sorumluluğu. Türkiye Adalet Akademisi Dergisi, (47), p. 269.

[3] Köken, E. (2021). Yapay Zekânın Cezai Sorumluluğu. Türkiye Adalet Akademisi Dergisi, (47), p. 273.