
The prior post introduced the Naughty Vacuum Cleaner (NVC), which would sometimes misbehave, and asked whether or how it might make sense to punish it. The concept of jailing a vacuum cleaner — even a smart one — seems silly (unless we want cleaner prisons). But the question of why it seems silly points to why or when it may (or may not) make sense to put people in jail, based on assumptions regarding specific types of blameworthy mental processing. It raises questions about moral and legal responsibility, strict liability, degrees of culpability (e.g. negligence, recklessness, intent), the purpose of punishment, and free will.
Suppose that a vacuum cleaner is programmed both to stay charged and to keep a room clean. Running out of charge is a big problem, since the vacuum may be left alone for weeks or months at a time and is expected to keep an area clean (a modern robotic vacuum cleaner can automatically return to its charging station and empty its dust bin). Furthermore, suppose that it learns how thoroughly to clean by getting corrected whenever certain areas are left noticeably dirty. If the learning algorithm balances cleaning against staying charged, an NVC could learn to clean only to the point of not leaving visibly dirty areas (e.g. ignoring under furniture) and to stay connected to the charger longer than otherwise necessary.
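To make that tradeoff concrete, here is a minimal sketch in Python. The reward weights and outcome numbers are invented for illustration, not taken from any actual robot; it only shows how a signal that observes visible dirt and battery level, but never what is hidden under furniture, can score the lazy behavior higher than the thorough one.

```python
# A minimal sketch (illustrative weights and outcomes only) of how a reward
# signal that only "sees" visible dirt and battery level can favor the lazy
# policy described above.

def reward(visible_dirt, battery_level):
    """Hypothetical reward: penalize visible dirt, reward staying charged.
    Dirt hidden under furniture never enters the signal at all."""
    return -5.0 * visible_dirt + 1.0 * battery_level

# Two candidate behaviors over one cleaning cycle (made-up numbers).
outcomes = {
    "thorough": {"visible_dirt": 0.0, "hidden_dirt": 0.0, "battery_level": 0.4},
    "lazy":     {"visible_dirt": 0.0, "hidden_dirt": 0.8, "battery_level": 0.9},
}

for name, o in outcomes.items():
    r = reward(o["visible_dirt"], o["battery_level"])
    print(f"{name:8s} reward = {r:+.2f}  (hidden dirt = {o['hidden_dirt']})")

# The lazy policy scores higher, so a learner optimizing this signal will
# drift toward skipping under-furniture areas and over-charging.
```

Under a signal like this, ignoring what is under the furniture is not a glitch the learner stumbles into; it is the optimal policy.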
In our desire to change the actions of the NVC, we might seek to adjust the weights so that it cleans better. But for a sufficiently complex actor, we may not have direct access to the weights or the algorithm. In that case, we have to adjust its behavior by changing the inputs so that the system learns to act differently. We need to incentivize cleaning and deter excessive charging.
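As a sketch of what "changing the inputs" could look like (the shaping terms, numbers, and the simple bandit-style learner below are assumptions for illustration, not the NVC's actual mechanism), we can reshape the reward the machine observes, for example by penalizing idle charging and spot-checking under the furniture, and then let the same learner re-estimate which habit pays off.

```python
import random

# Sketch of adjusting behavior via inputs rather than weights: we cannot edit
# the NVC's internals, so we reshape the reward it observes and let it re-learn.

def shaped_reward(base_reward, idle_charge_hours, spot_check_dirt):
    # Hypothetical incentive/deterrent terms layered onto the original signal:
    # excessive charging is penalized, and occasional spot checks under the
    # furniture turn hidden dirt into a visible cost.
    return base_reward - 0.5 * idle_charge_hours - 4.0 * spot_check_dirt

def observe(policy):
    """Simulated outcome of one cleaning cycle per policy (made-up numbers)."""
    if policy == "lazy":
        return shaped_reward(base_reward=0.9, idle_charge_hours=6, spot_check_dirt=0.8)
    return shaped_reward(base_reward=0.4, idle_charge_hours=1, spot_check_dirt=0.0)

# The NVC as a two-armed bandit choosing between its two learned habits.
value = {"lazy": 0.0, "thorough": 0.0}   # running value estimates
counts = {"lazy": 0, "thorough": 0}

for _ in range(200):
    # epsilon-greedy: mostly exploit the better-looking habit, sometimes explore
    policy = random.choice(list(value)) if random.random() < 0.1 else max(value, key=value.get)
    counts[policy] += 1
    value[policy] += (observe(policy) - value[policy]) / counts[policy]  # incremental mean

print(value)  # with the shaped reward, "thorough" ends up with the higher value
```

Nothing inside the learner is rewritten by hand; the deterrent works only by entering the consequences the machine observes, which is also the sense in which punishment is supposed to work on the "bad man" discussed next.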
In “The Path of the Law,” Holmes highlights the difference between morality and law as exemplified by a completely immoral (or, actually, amoral) “bad man”: “A man who cares nothing for an ethical rule which is believed and practised [sic] by his neighbors is likely nevertheless to care a good deal to avoid being made to pay money, and will want to keep out of jail if he can.” That is, society regulates behavior not only through social norms, but also through legal punishment as a deterrent to unwanted actions.
The notion of deterrence only works for an entity that maintains a complex set of goals, desires, and possible actions. Jail serves as a deterrent to the degree that a person doesn’t want to be in one for whatever reason (e.g. fear, removal from family, criminal record, stigma). Thus, for the NVC to adjust its actions based on a deterrent, it has to have some motivation that is negatively impacted by the punishment, and it has to know that the punishment could follow from the bad behavior. This is a lot to expect of a vacuum cleaner. However, not all punishment requires an intentional act, as in cases of strict liability.
With strict liability, an actor is viewed as culpable for having done something, regardless of any lack of awareness of the prohibition or lack of intention behind the action. This is especially challenging for criminal liability (e.g. drug possession, statutory rape), since the law generally presumes that free will is a necessary factor to warrant punishment. Strict liability completely ignores any notion of mens rea (e.g. intent or awareness). If we’re willing to put people in jail without a showing of intention or negligence, perhaps the NVC might need to serve some time.

What are the goals of putting a human in jail, and how well does that approach work? For example, incapacitation without any change in processing leads to recidivism. If all we do with the NVC is unplug it but don’t upgrade it, we don’t expect it to behave differently when we plug it in again. For both humans and machines, there is a shared goal of rehabilitation, or at least effective specific deterrence. Retribution for its own sake is certainly questionable for a vacuum cleaner.
To the degree that we view machines as merely commodity objects, we approach the regulation of their behavior very differently than we do with humans. That is, rather than punish a machine, we typically think to either change or replace it. Vacuum cleaners are designed, so when they make mistakes, we look to the designers for blame. What would a machine have to do to be blameworthy, and what type of punishment would make sense?
A claim that humans are designed, or that all behavior is ultimately deterministic, can be used to argue that humans are free from responsibility. The law, however, works under the model that somewhere along the line, a human acquires responsibility. But then, why can’t a machine?
As an example of the legal notion of the acquisition of agency, negligence is defined as an actor's failure to exercise the standard duty of care expected of a reasonably acting, similarly situated actor. This is true for children of a given age as well as for specialists such as doctors. Legal culpability depends on issues such as reasonableness, defining a similar group of peers, and standard practice.
Thus, the law considers a body of knowledge to be acquired (e.g. for a child, some type of common sense; for a doctor, suggested or best practice), and asks whether or not such knowledge was evident in the relevant actions. Actors are expected to incorporate newly acquired beliefs and judgments to guide their actions. If they fail to learn what they are expected to learn, or fail to act on those lessons, then they are blameworthy.
The acquisition of responsibility, blameworthiness, or culpability is a standard part of how we view human development. An operational definition of free will requires a learning component, such that there is a delineation of the point at which the actor can be said to have claimed itself to be the final arbiter of its actions, regardless of how it may or may not have been created and/or designed. Such a notion of accountability is not inconsistent with machine learning.
In the next post, I’ll start to formalize a simple model of free will that incorporates agency acquisition.
Very interesting. So, culpability and punishment go hand in hand with free will (or its machine-learning equivalent)?
Great article. I particularly enjoyed your reference to Holmes’ “The Path of the Law,” because, in my opinion, it remains the most important text in jurisprudence. The “bad man” is an excellent example of a deep insight into human behavior, ethics, and morality.
I am interested to see the direction you move with the formalization process. You note, “Such a notion of accountability is not inconsistent with machine learning.” I agree. At first impression, I thought of a Markov Decision Process (MDP) as a means for developing an agency formalization. The MDP model is the foundation of reinforcement learning – a type of agent-based machine learning. Looking forward to reading your next article – and glad to know you’re writing again.