I have a couple of robotic vacuum cleaners. I named one of them “Lil Miss Naughty” and the other “Dr. Evil”. The first one has a tendency to leave some spots a bit messy. The second one likes to attack cords and rugs. No matter what I say to them, they don’t change their behavior. It’s almost as though they want me to believe that they have no control over what they’re doing. I’m not buying it.
The law says that at some point, children become adults. They acquire legal agency and, therefore, legal culpability. As a society, we argue that when someone causes harm, they either knew better or should have known better. Either way, they are worthy of blame, liability, and/or punishment. So what, exactly, changes in the minds of children that makes them, as adults, responsible?
In March 2018, an Uber “self-driving” car struck and killed a woman in Tempe, Arizona; the system failed to recognize that an object outside a crosswalk was a person, and its automated emergency braking had been disabled. We didn’t arrest or sue the car. We don’t think that the car could have chosen to do anything differently — it had no free will, and therefore, no moral responsibility. In the deterministic world of the car’s driving software, once the car started its trip, the rest was destiny.
At what point will we be able to say, morally or legally, that my vacuum cleaner is responsible for its actions? What, exactly, would a vacuum cleaner have to be able to do “cognitively” in order for us to say that it either knew or should have known better than to do what it did? In fact, how can we claim that a person knew or should have known better? If the person did know better, then wouldn’t they have acted differently? And if they should have known better, what exactly did they do wrong to end up not knowing, and when did they fail to learn it? We say that ignorance of the law is no excuse — where in that ignorance lies moral responsibility and legal culpability? Would it ever make sense to put a vacuum cleaner in jail?
The next series of posts will explore the issues raised here and the factors that need to be considered in order to arrive at an answer. The orientation, however, will be concrete. Rather than engage the full-blown dialectic over whether free will can exist in the universe, I want to look at what it would take to grow my vacuum cleaner from a child into a legally responsible agent — to level it up.