Can AI Commit a Crime? A Look at Intent
In criminal law, mens rea is the guilty mind: the mental state that accompanies a criminal act. Most crimes require both an act and a mental state, and mens rea is often what distinguishes guilt from innocence even among people who have committed the same act. For example, pushing a cart of bagged groceries out of a store with an item that, unbeknownst to you, remained in the cart during checkout is different from shoplifting, even if the shoplifter's method was leaving an item in the cart to avoid paying for it. The intent is the difference. Similarly, a premeditated killing differs from an accidental one. Recklessness (acting with indifference or depraved indifference), knowledge, negligence, and purpose are examples of culpable mental states. Ignorance that an act is a crime sometimes matters and sometimes does not. And sometimes the intent to commit one act satisfies the mens rea requirement even though an unintended result is the crux of the crime. Someone may intend to rob a store but not to kill the clerk; that describes felony murder, where the mens rea required is not intent to kill the clerk but intent to engage in the felony.
Whether AI can have a mens rea of its own is unclear. For now, its intent to act is merely programmed purpose. AI's ability to scour large data sets and follow complex algorithms to reach conclusions in ways that mimic thinking and reasoning does not amount to an ability to act with criminal intent. Humanness may be the missing element. For example, AI would not be sorrowful if it committed murder, although it could be programmed to exhibit sorrow.
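To make "programmed purpose" concrete, here is a minimal, hypothetical sketch (every name and number in it is invented for illustration) of how an AI agent's so-called intent is just an objective function a human wrote down:

```python
# Hypothetical sketch: an agent's "purpose" is an objective a human specified.
# There is no mental state here, only a number being maximized.

def objective(outcome: dict) -> float:
    # A human developer chose these weights; the agent never "wants" anything.
    return 10.0 * outcome["goal_reached"] - 1.0 * outcome["energy_used"]

def choose_action(candidate_outcomes: dict) -> str:
    # The agent picks whichever action scores highest under the given objective.
    return max(candidate_outcomes, key=lambda a: objective(candidate_outcomes[a]))

outcomes = {
    "wait": {"goal_reached": 0, "energy_used": 0.0},
    "move": {"goal_reached": 1, "energy_used": 3.0},
}
print(choose_action(outcomes))  # "move" -- optimization, not intent
```

On this picture, any "purpose" in the system belongs to whoever chose the objective and its weights, not to the agent that maximizes it.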

The Tech Made Me Do It
AI can change people's behavior. For example, a neurotech implant for Parkinson's disease changed a man's behavior so much that he chose to limit its use. The technology is deep brain stimulation, which uses machine learning, a subcategory of AI. With the implant, he spent money recklessly and got into several car accidents. (See Verbeek, Moralizing Technology, p. 151.) What if, hypothetically, he had killed someone with his car? Could he claim the neurotech made him do it? Would that negate the reckless mens rea of vehicular manslaughter? In drunk driving cases, the alcohol is not an excuse. Is there a morally significant difference in vehicular deaths that occur while the driver is taking a prescription drug at the appropriate dose? Prescriptions generally warn people not to drive. The medicine itself, like deep brain stimulation, could cloud judgment, leading a person to drive anyway. Is implanted technology more a part of oneself than a medicine? And would a warning that your neurotech might alter behavior be enough to shield the designer, the manufacturer, or the medical team implanting it from liability for actions you take?
What if, rather than neurotech, an extremely strong artificial hand, with AI governing how much force it uses, were to crush something valuable? Or someone? Generally, a person controls an artificial hand. If the person were to say, "It made me do it," he may face a strong argument that he had both the power and the duty to control it. That is, a court may find the person's intent either reckless, for failing to control the hand, or purposeful, for using it to cause harm.
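As a purely illustrative sketch, assuming a hypothetical prosthetic controller (no real device or API is referenced), the question of who controls the hand can reduce to who sets a software parameter:

```python
# Hypothetical sketch of a force-limited prosthetic controller.
# The designer ships a hard ceiling; the wearer selects a grip level below it.

DESIGN_MAX_NEWTONS = 80.0  # set by the manufacturer, not the wearer

class ProstheticHand:
    def __init__(self, user_limit: float):
        # The wearer's chosen limit is clamped to the designer's ceiling.
        self.user_limit = min(user_limit, DESIGN_MAX_NEWTONS)

    def grip(self, requested_force: float) -> float:
        # Applied force never exceeds the wearer's limit.
        return min(requested_force, self.user_limit)

hand = ProstheticHand(user_limit=40.0)
print(hand.grip(500.0))  # 40.0 -- the human-set cap, not the request, governs
```

On a design like this, the wearer retains a meaningful form of control, which cuts against the "it made me do it" argument.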
The AI Acted Alone
A person could argue that the AI acted autonomously, despite being an artificial hand attached to and operating as part of the body. This reasoning asserts that the person's mindset is completely irrelevant: whether or not the person failed to control the AI, the AI is a separate and distinct actor with its own intent. With the person's mindset out of the picture, the victim (whose thing of value, hand, or even whole body was crushed) could instead try to blame the tech company that created the artificial limb and embedded in it the ability to use extreme strength. An artificial limb generally would be seen as a product of its creator rather than as an agent in its own right.

If a self-driving car kills a pedestrian, it is unlikely that someone would say the car meant to do so. It may have been programmed to do so (purposely), or to avoid doing so, or programmed recklessly, negligently, or with the knowledge that it could make a mistake. Many would argue the AI did not mean to kill anyone and does not understand the human meaning or consequences of doing so. AI acts with a nonhuman type of knowledge, but not with the mens rea components typically necessary for crime. And, of course, if a car committed a crime, it would hardly go to prison, so the hypothetical ends there. But someone probably should take responsibility. The tort of wrongful death would arguably apply.
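To see where the human judgment enters, consider a minimal hypothetical sketch (the threshold and names are invented): the "care" in an automated braking decision lives in a parameter a developer chose in advance, and terms like purposeful, reckless, or negligent describe how that choice was made, not anything the car meant:

```python
# Hypothetical sketch: the caution in an automated braking decision lives in
# a human-chosen confidence threshold, not in any intent of the vehicle.

BRAKE_THRESHOLD = 0.30  # developer-chosen: brake if pedestrian confidence >= 30%

def should_brake(pedestrian_confidence: float) -> bool:
    # Setting this threshold very high could look reckless; setting it low,
    # cautious. Either way, the judgment was made by people, in advance.
    return pedestrian_confidence >= BRAKE_THRESHOLD

print(should_brake(0.25))  # False -- below the threshold humans selected
```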
AI seems not to commit crimes autonomously, but it could be trained to do so, in which case developers could be responsible. And in some cases users must bear responsibility as well.
The car company relying on the AI may be blamed, and it in turn may deflect blame to the coders or builders of the AI system, especially if the system contains a mistake and was created by a separate company. Perhaps it was not trained on enough possible situations to know how to avert every type of accident. In cases about its autopilot, Tesla has argued that the driver is the captain and is ultimately responsible for the car's actions while autopilot is engaged. These are generally products liability cases, not criminal ones.
This suggests an interesting observation: the way tech operates under the law may channel AI-involved harms into products liability lawsuits rather than the criminal justice system, even when the same conduct carried out by humans alone would be a crime. I am not sure whether that could lead people to use AI more so that, if something goes wrong, they can shrug it off as not their fault. The driverless car autopilot class action plaintiffs blame Tesla. If Tesla is blamed and the person in the car using the technology is not, then as the technology becomes safer, people may prefer it because it lets them avoid the charges of vehicular negligence, manslaughter, and homicide that are possible when a human is driving.

Cause and Intent Differ
Causation is a real issue. Did the person, the AI, or both engage in a crime or cause the harm? How can accountability be encouraged when some AI is produced by large corporations that are not especially harmed by wrongful death or products liability claims? There is probably potential for criminal prosecution of reckless and malicious uses of AI, like those seen in hacking. Overall, AI is making workplaces and transportation safer and restoring bodily abilities lost to disease and accident. Yet for victims of AI gone wrong, torts may replace crimes, because under the criminal codes as written, AI can get away with it.