Doubting Driverless Dilemmas. Julian De Freitas et al. Perspectives on Psychological Science, July 31, 2020. https://doi.org/10.1177/1745691620922201
Abstract: The alarm has been raised on so-called driverless dilemmas, in which autonomous vehicles will need to make high-stakes ethical decisions on the road. We argue that these arguments are too contrived to be of practical use, are an inappropriate method for making decisions on issues of safety, and should not be used to inform engineering or policy.
Keywords: moral judgment, autonomous vehicles, driverless policy
Trolley dilemmas are incredibly unlikely to occur on real roads
The point of the two-alternative forced choice in the thought experiments is to simplify real-world complexity and expose people’s intuitions clearly. But such situations are vanishingly unlikely
on real roads. This is because they require that the vehicle will certainly kill one individual or another, with nowhere else to steer the vehicle, no way to buy more time, and no maneuver available other than driving head-on into a fatality. Some variants of the dilemmas also assume that
AVs can gather information about the social characteristics of people, e.g., whether they are
criminals or contributors to society. Yet many of these social characteristics are inherently
unobservable. You can’t ethically choose whom to kill if you don’t know whom you are choosing
between.
Lacking in these discussions are realistic examples or evidence of situations where human
drivers have had to make such choices. This makes it premature to consider them as part of any
practical engineering endeavor (Dewitt et al., 2019). The authors of these papers
acknowledge this point, saying, for example, that “it is extremely hard to estimate the rate at which
human drivers find themselves in comparable situations,” yet they nevertheless say, “Regardless of how rare these cases are, we need to agree beforehand how they should be solved” (Awad et al., 2018, p. 59). We disagree. Without evidence that (i) such situations occur and that (ii) the social alternatives
in the thought experiments can be identified in reality, it is unhelpful to consider them when making
AV policies or regulations.
Trolley dilemmas cannot be reliably detected by any real-world perception system
For the purposes of a thought experiment, it is a useful simplification to assume that one is already in a trolley dilemma. But on real roads, the AV would have to detect this fact, which means that it would first need to be trained to do so perfectly. After all, since the overwhelming majority of driving is not a trolley dilemma, a driver should only choose to hit someone if they’re definitely in a trolley dilemma. The problem is that it is nearly impossible for a driver to robustly distinguish a true dilemma that forces them to choose whom to hit (and possibly kill) from an ordinary emergency that doesn’t require such a drastic action. Accurately detecting this
distinction would require unrealistic capabilities for technology in the present or near future,
including (i) knowing all relevant physical details about the environment that could influence
whether less deadly options are viable, e.g., the speed of each car’s braking system and the slipperiness
of the road, (ii) accurately simulating all the ways the world could unfold, so as to confirm that one is
in a true dilemma no matter what happens next, and (iii) anticipating the reactions and actions of
pedestrians and drivers, so that their choices can be taken into account.
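To see why detection would have to be near-perfect, consider a back-of-the-envelope sketch of the base-rate problem (every number below is an illustrative assumption, not an estimate from the literature): even a detector that would be excellent by machine-learning standards produces almost nothing but false alarms when the event it is detecting is vanishingly rare.

```python
# Illustrative base-rate arithmetic for dilemma detection. All numbers
# are hypothetical assumptions: suppose 1 in 100 million driving
# situations is a true trolley dilemma, and the detector is very good
# by ML standards (99% sensitivity, 99.9% specificity).

base_rate = 1e-8        # assumed prior probability of a true dilemma
sensitivity = 0.99      # P(detector fires | true dilemma)
specificity = 0.999     # P(detector silent | ordinary situation)

false_positive_rate = 1 - specificity

# Bayes' rule: P(true dilemma | detector fires)
p_fires = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
posterior = sensitivity * base_rate / p_fires

print(f"P(true dilemma | alarm) = {posterior:.2e}")
# => roughly 1e-5: about 99,999 of every 100,000 alarms would be false,
# so an AV acting on the alarm would almost always be "solving" a
# dilemma that doesn't exist.
```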
Trying to teach AVs to solve trolley dilemmas is thus a risky safety strategy, because the AV
must optimize toward solving a dilemma whose very existence is incredibly challenging to detect.
Finally, if we take a learning approach to this problem, then these algorithms would need to be exposed to
a large number of dilemmas. Yet the conspicuous absence of such dilemmas from real roads means
that they would need to be simulated and multiplied within any dataset, potentially introducing
unnatural behavioral biases when AVs are deployed on real roads, e.g., ‘hallucinating’ dilemmas
where there aren’t any.
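A minimal sketch of this oversampling concern, under assumed priors (the 50% training share, the 1-in-10^8 deployment rate, and the 0.99 model score below are all hypothetical): a detector trained on dilemma-enriched data carries that inflated prior into deployment, where its confident scores no longer mean what they appear to.

```python
# Why oversampling simulated dilemmas can bias a learned detector.
# If dilemmas make up 50% of the (simulated, oversampled) training set
# but ~1e-8 of real driving, a model score that looks decisive in
# training implies almost nothing on the road.

def prior_corrected(p: float, train_prior: float, true_prior: float) -> float:
    """Re-weight a classifier score from the training-set prior to the
    deployment prior (standard prior-shift correction in odds form)."""
    odds = (p / (1 - p)) * (true_prior / train_prior) * \
           ((1 - train_prior) / (1 - true_prior))
    return odds / (1 + odds)

train_prior = 0.5   # dilemmas oversampled to half the training data
true_prior = 1e-8   # assumed real-world frequency of true dilemmas

score = 0.99        # the model is "99% sure" it sees a dilemma
print(prior_corrected(score, train_prior, true_prior))
# => ~1e-6: taken at face value, the uncorrected 0.99 score would make
# the AV 'hallucinate' a dilemma where there almost certainly isn't one.
```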
Trolley dilemmas cannot be reliably acted upon by any real-world control system
Driverless dilemmas also assume a fundamental paradox: An AV has the freedom to make a
considered decision about which of two people to harm, yet does not have enough control to
instead take some simple action, like swerving or slowing down, to avoid harming anyone altogether
(Himmelreich, 2018). In reality, if a vehicle is in such a dire emergency that it has only two options left, it’s unlikely that those options will map neatly onto the two alternatives that a moral rule is needed to arbitrate between. Similarly, even if an AV does have a particular moral choice planned, the more
constrained its options are, the less likely it is to have the control needed to execute that choice successfully; and if it can’t execute a choice, then there’s no real dilemma.
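As a rough illustration of how control authority disappears in exactly the situations the dilemmas presuppose, here is a simplified point-mass sketch (the kinematic model and all parameters are illustrative assumptions): at highway speed, even a single lane-width swerve within a short distance can exceed what tire friction physically allows.

```python
# Simplified point-mass model of emergency-swerve feasibility.
# A swerve of lateral offset d within distance s at speed v needs
# lateral acceleration of roughly a = 2*d*v**2 / s**2, which cannot
# exceed the friction budget mu * g. Numbers are illustrative.

MU_DRY, MU_WET, G = 0.9, 0.4, 9.81

def swerve_feasible(v_mps: float, offset_m: float, dist_m: float, mu: float) -> bool:
    required = 2 * offset_m * v_mps**2 / dist_m**2  # needed lateral accel
    return required <= mu * G                       # friction budget

# Swerving one lane width (3.5 m) within 20 m at highway speed (30 m/s):
print(swerve_feasible(30, 3.5, 20, MU_DRY))  # False even on dry asphalt
print(swerve_feasible(30, 3.5, 20, MU_WET))  # False; wet roads are worse
print(swerve_feasible(15, 3.5, 20, MU_DRY))  # True at lower speed
# By the time only two trajectories remain physically feasible, there is
# little guarantee they are the two the moral rule was written for.
```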