Open the driver's-side door, HAL
Mass adoption of autonomous / self-driving cars will not happen in western society. BOOM! There, I said it.
Now, this is just a prediction based on personal opinions about human nature / psychology, culture, law and attitudes. There's no hard evidence behind it and I'm not pointing to a graph and trying to explain why these particular values mean Autonomous Vehicles (AVs) contravene the basic laws of physics, dammit!
This is more about the technology succumbing to a confluence of sociopolitical impediments rather than having one single fatal flaw, or because of some technological impossibility. Also, I’m not for a moment suggesting that limited adoption of AVs won’t happen (it already has). Expect the Planned City in China that only has AVs, or the Palo Alto gated community with a fleet of corporate-sponsored AVs. I could also envision AVs being more widespread in a world without mass personal car ownership… but by that stage we’re no longer talking about the same “western society” that I see through my window every day.
The thing that fascinates me most though… is that we suddenly have a classic philosophical thought experiment (The Trolley Problem) bursting out of the realm of the hypothetical and getting right up in the faces of engineers. As someone who has been both an engineer and a philosopher, this makes me grin.
For those unfamiliar with the Trolley Problem: you can go and read all about its history as a thought experiment on Wikipedia. I'm going to explain it here though, specifically in terms of how it relates to AVs.
So I got to imagining the guy working at the lab developing the Morality Chip. I bet they don’t call it that of course… that would really put the willies up Joseph Q. Public. But that’s what it is (yes yes, it’s actually software not “a chip”… but for dramatic purposes and ease of visualisation I’m imagining it as a discrete hard-coded “Morality Core” in every AI-enabled machine… an Asimovian safeguard against hacking). And whoever is working on that thing is spending their days asking some really weird questions… many of which centre on how many pedestrians your life is worth. Oh man, I’d love to be programming that thing. What strange afternoons they must be.
Clearly AVs will be programmed to take some sort of limited evasive action if they detect an imminent collision. And as soon as that evasive action involves more than slamming on the brakes (and even then, there are hypotheticals involving the relative speeds of the cars behind you), as soon as it involves altering direction as an emergency manoeuvre… we have entered a very weird moral universe.
It’s weird partly because it’s only inhabited by AIs.
OK. Maybe Formula 1 drivers. Maybe. But mostly AIs.
Here's the scenario… you or I are driving along at a safe 50kph in a 60kph zone. Without warning, a truck in the oncoming lane (which is travelling too fast to begin with) has a tyre failure and suddenly barrels right towards us.
At that moment, you or I react based on a tiny number of urgent bits of information. Raw survival instinct, sheer panic and the most godalmighty injection of adrenalin instruct our arms to jerk the wheel towards whatever seems like the safest direction for us (and our passengers).
Perhaps that swerve carries us onto the pavement, and the pedestrian we kill continues to haunt our conscience forever. Perhaps their family hates and blames us. But if so, it'll be completely irrational. It is absurd to expect any human being to exercise moral agency in that overwhelming fraction of a second: a terrifying moment during which their own life has suddenly come under threat, a situation about which they possess incomplete information and literally not enough time to rationally consider options. Whatever emotions may swirl around afterwards, the law would not hold us accountable. And no rational person would.
But that all changes when the decision to swerve into the pedestrian is taken by a processor quick enough to actually weigh up the options. We inject morality into the moment. A situation that was previously just the chaotic outcome of uncontrolled physics and neurochemistry turns into The Trolley Problem. But no longer as a thought-experiment. Now it’s a design decision. And different people… different numbers of people are really going to die based on how our engineers are coping with The Trolley Problem.
The AV doesn’t jerk the wheel and mount the pavement out of sheer panic… it notes the trajectory of the truck, notes its own trajectory and it calculates that killing the pedestrian is the only guaranteed way to prevent a collision. Once it’s made that calculation… what do we — sitting in a quiet lab as the clock slowly ticks towards lunch — what do we tell the car to do?
Add some sauce to the dish… the car is self-aware enough to know how many occupants it has.
// More occupants aboard than pedestrians in the way? Swerve.
if ( count( $passengers ) >= count( $pedestrians_on_trajectory ) ) {
    execute_trajectory_change( 'fast' );
} else {
    // Outnumbered: hold the course and cue the soundtrack.
    spotify( 'REM_EndOfTheWorldAsWeKnowIt' );
}
Will we see industry standardisation? Or will Mercedes place a higher value on driver life than BMW? Will that become a selling point? Will we have social one-upmanship, with some looking down their noses at people in non-Pedestrian Parity Approved brands? Will the cycling lobby demand a 1.1x multiplier to compensate for the extra speed, above walking pace, at which cyclists travel? Strange afternoons.
Ford Motors
Guaranteed to value your life at the Texas legal maximum of 2.3 pedestrians!
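And for what it's worth, that multiplier gag is trivially easy to write down. Here's a purely hypothetical sketch; the numbers, the tier names and the variables are all invented for illustration, and no manufacturer publishes anything of the sort:
// Purely hypothetical, for illustration only: a per-tier occupant
// multiplier and a per-road-user multiplier, as the satire above imagines.
$occupant_multiplier = [
    'economy' => 1.0,
    'premium' => 2.3,  // "the Texas legal maximum", per the ad copy
];
$road_user_multiplier = [
    'pedestrian' => 1.0,
    'cyclist'    => 1.1,  // the cycling lobby got its 1.1x
];

// Weighted headcount: 2 premium occupants now "outweigh" 4 pedestrians,
// because 2 * 2.3 = 4.6 > 4 * 1.0.
$occupant_value   = 2 * $occupant_multiplier['premium'];
$pedestrian_value = 4 * $road_user_multiplier['pedestrian'];
var_dump( $occupant_value > $pedestrian_value ); // bool(true)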
With different implementations of the same technology there's simply no way to know whether AVs from different developers are making the same decisions… whether they place the same relative values on human lives. But unlike that split-second monkey-brain decision we make under the most severe pressure we're ever likely to encounter, this is very definitely a moral question. Deliberate decisions are being made. Imagine the scandal when they unearth the subfunction…
// French passengers count for one less. Quel scandale.
if ( in_array( 'french', $passenger_nationalities, true ) ) {
    $num_passengers = $num_passengers - 1;
}
PS: I'm not suggesting that The Trolley Problem is going to sink AVs. As I say, the problem with this technology is more about what I perceive as a large number of different legal, moral, cultural and technological obstacles which are likely to combine to prove insurmountable in practice. This is just one of them. That said, the look on the face of the first guy whose car drives him off a cliff rather than hitting a couple of kids… the look on his face when his car actively prevents his monkey-brain-driven attempts to save himself… if that guy is me, I hope I have the last-minute presence of mind to glance in the mirror and take solace in how funny it all is.
PPS: Needless to say, this is all a very simplified statement of the problem facing the engineers. Once you throw in probability? Oh man, then you enter a world of weirdness. If "Evasive Manoeuvre 1" has a 40% chance of avoiding impact with the truck but is 70% likely to kill 2 pedestrians, does "Evasive Manoeuvre 2" trump it? Despite being 99% certain to kill at least 1 pedestrian, it has a 75% chance of avoiding a collision…
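To make the weirdness concrete, here's a back-of-the-envelope comparison of those two manoeuvres using nothing fancier than raw expected values. The labels EM1/EM2 are mine, and picking expected value as the yardstick is itself an assumption; it's just one of many rules the engineers could choose:
// The two manoeuvres from above, compared on raw expected values.
$manoeuvres = [
    'EM1' => [ 'p_avoid_truck' => 0.40, 'p_kill_pedestrians' => 0.70, 'pedestrians_at_risk' => 2 ],
    'EM2' => [ 'p_avoid_truck' => 0.75, 'p_kill_pedestrians' => 0.99, 'pedestrians_at_risk' => 1 ],
];

foreach ( $manoeuvres as $name => $m ) {
    $expected_deaths = $m['p_kill_pedestrians'] * $m['pedestrians_at_risk'];
    $p_collision     = 1 - $m['p_avoid_truck'];
    printf( "%s: %.2f expected pedestrian deaths, %.0f%% chance of hitting the truck\n",
        $name, $expected_deaths, $p_collision * 100 );
}
// EM1: 1.40 expected pedestrian deaths, 60% chance of hitting the truck
// EM2: 0.99 expected pedestrian deaths, 25% chance of hitting the truck
On this yardstick EM2 "wins" on both counts. And yet EM2 is near-certain to kill somebody, where EM1 might kill nobody at all. Whether "minimise the expectation" is even the right rule is exactly the kind of thing someone has to type in, on one of those strange afternoons.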