By Sarah Wells
BU News Service
It’s the year 2030 and you’re relaxing in the back seat of your self-driving car as it whisks you down the interstate. Glancing out the windows, you trust the car’s on-board computer to make all the necessary decisions to get you to your destination safely – even more safely than you could. But as the car accelerates through a green light, a situation emerges: a child runs into traffic at the same time a truck passes on your left. With no time left to avoid a collision, the car must make an impossible decision. Will it choose to save you or the child?
In the 1980s, German computer scientist Ernst Dickmanns equipped a five-ton Mercedes-Benz van with a camera and sensors, enabling it to drive up to 60 miles per hour with near complete autonomy – albeit on an empty road. Today, self-driving cars drive on populated roads using sensors, pre-programmed obstacle-avoidance algorithms, “smart” object differentiation and predictive modeling. With these algorithmic tools, companies like Google, Tesla and Uber have been getting ready to market self-driving vehicles.
Some drivers feel uneasy that life-or-death decisions will be left to a machine. To understand these anxieties and search for possible solutions, a team from MIT’s Media Lab is conducting a survey. They created a crowdsourcing website based on a philosophical thought experiment known as the trolley problem.
Proposed by British philosopher Philippa Foot in the late 1960s, the experiment asks you to imagine standing at a fork in a train track. On one track is a group of five people, while on the other is a single person. You control the lever that switches the train from one track to the other. Which way do you direct the train? The original thought experiment was meant to test moral boundaries. For example, would you sacrifice the one person to save the other five if that person were a family member? The introduction of autonomous vehicles offers an even trickier dilemma – would you sacrifice the one person if it was yourself?
Launched in 2016, the website, called Moral Machine, explored these fears through hypothetical autonomous vehicle trolley situations and gathered data from an online survey of more than a million users. Participants faced a series of tricky collision scenarios and decided whether to save the passengers in the vehicle or the pedestrians. Overall, the survey showed that people would choose to minimize the total harm in a collision, even if it meant killing the car’s passengers – i.e. themselves. However, when asked if they would buy these potentially self-sacrificing vehicles, most users said no.
This contradiction echoes the public’s perception of self-driving cars. The National Highway Traffic Safety Administration estimates that autonomous vehicles would eliminate over 90 percent of traffic collision fatalities. Yet a January 2018 poll found that out of 1,005 people surveyed, 64 percent worried about these vehicles being on the road. “These machines are expected to make better decisions,” said Edmond Awad, a developer on the project. “But it’s hard to dismiss [the trolley problem] as unlikely. What if it kills me?”
Kate Saenko, an assistant professor of computer science at Boston University, said she believes that a major contributor to this unease is a lack of communication between the car and its passenger.
“If you’re in a self-driving car and you know it’s going to make a [decision], you’re probably going to want to know why,” Saenko said. “Why is it stopping? Did it actually see something or is it going bonkers?”
Saenko’s research focuses on a kind of machine learning called deep learning, which uses a computational structure called a neural network. Neural networks loosely mimic how human brains learn: data from the environment shapes the network by strengthening connections between its nodes, forming a complex tangle of links. Computer scientists can see the input and output of these networks, but the actual thinking – the triggering of new connections – takes place in so-called “hidden units.”
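For readers curious what such a network looks like in code, here is a minimal, purely illustrative sketch in Python. The toy “brake or not” task, the two invented sensor inputs and every variable name are assumptions made for illustration; real driving systems are vastly larger, trained on enormous datasets, and not built this way by any particular company.

```python
import numpy as np

# Toy feed-forward network with one hidden layer of 8 "hidden units".
# Hypothetical inputs: [distance_to_obstacle, closing_speed] -> brake score.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fake training data: brake (1) when the obstacle is close and closing fast.
X = rng.uniform(0, 1, size=(200, 2))
y = ((X[:, 0] < 0.3) & (X[:, 1] > 0.5)).astype(float)

W1 = rng.normal(0, 1, size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

lr = 0.5
for step in range(2000):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()    # predicted brake probability

    # Gradients of the cross-entropy loss, propagated backwards.
    dz = (p - y) / len(y)
    dW2 = h.T @ dz[:, None]; db2 = dz.sum()
    dh = (dz[:, None] @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Input and output are easy to read; the hidden activations are just
# numbers with no obvious human-readable meaning.
sample = np.array([[0.1, 0.9]])   # close obstacle, closing fast
hidden = sigmoid(sample @ W1 + b1)
print("hidden activations:", hidden.round(2))
print("brake probability:", sigmoid(hidden @ W2 + b2).item())
```

Even in this tiny example, the “reasoning” lives in the hidden activations, which is exactly the opacity Saenko describes.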
Autonomous vehicles use these deep learning models, and the mystery around their choices makes potential drivers uneasy.
To address that concern, Saenko is working to peer inside these hidden units. One way involves adding an additional neural network to act as an interpreter between the car’s thought process and its human companion.
Saenko imagines taking a short drive. If you tell your car to go to the end of the road and turn left, the interpreter would relay each part of the car’s thought process as it breaks your command into subtasks.
“It can actually explain its plan to the human user,” Saenko said. “It tells the person the network understood all of his or her commands – [that] it wasn’t missing something.”
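To make the idea concrete, here is a small, purely illustrative sketch of that kind of narration. Saenko’s actual approach uses a second neural network to generate the explanations; in this toy version, hand-written stubs stand in for both the driving model and the interpreter, and every function and field name is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    action: str
    reason: str

def plan_from_command(command: str) -> list[Subtask]:
    """Stand-in for the driving model: break a spoken command into subtasks."""
    # A real system would derive this plan from perception and a learned policy.
    if command == "go to the end of the road and turn left":
        return [
            Subtask("follow current lane", "the command asks to reach the end of the road"),
            Subtask("slow down near the intersection", "an intersection was detected ahead"),
            Subtask("signal and turn left", "the command asks for a left turn"),
        ]
    return [Subtask("stop safely", "the command was not understood")]

def narrate(plan: list[Subtask]) -> None:
    """Stand-in for the interpreter network: relay each step to the passenger."""
    for i, step in enumerate(plan, 1):
        print(f"Step {i}: {step.action} (because {step.reason})")

narrate(plan_from_command("go to the end of the road and turn left"))
```

The point of the sketch is the interaction, not the implementation: the passenger hears each subtask and its reason, so a misunderstood command surfaces before the car acts on it.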
Daniel Star, an associate professor of philosophy at Boston University, said he isn’t convinced that the cars themselves make moral decisions at all. Ascribing morality depends on intent and free will. Star argued that while these cars are “thinking” – through observations that spark neural network connections – whatever morality is involved rests on the shoulders of the programmers.
In some ways, that’s a relief. At least humans are making the decisions. However, Star said he worries that programmers might focus more on legal issues than on moral ones. When explaining the trolley problem, Star said there isn’t necessarily a wrong answer, but it might be legally less “messy” if cars always prioritized the safety of their passengers. That would reduce the number of lawsuits from drivers. It would also sidestep the issue of making value judgments about pedestrians.
To help address such moral dilemmas, the National Highway Traffic Safety Administration (NHTSA) added an “Ethical Considerations” section to its 2016 Federal Automated Vehicles Policy. Although policy-makers haven’t yet formulated mandates for these decisions, NHTSA urges that they be made “consciously and intentionally” and with consideration for both drivers and pedestrians. The agency hopes that its ethical guidelines will spark a conversation about these new technologies and that academia, industry, and the public will create these rules together.
Perhaps the only way to assuage our fears about these moral dilemmas is to have a conversation about them. To foster that discussion, Joey Lee, an interaction designer and creative technologist at Moovel Labs in Germany, developed a project called “Who Wants to Be a Self-Driving Car?” In the quiet urban areas near the lab, test users lie on their stomachs in a go-kart-sized vehicle, wearing virtual reality headsets. With a joystick in hand, they navigate their environments using only the visual data that a self-driving car sees, essentially putting themselves in the mindset of an autonomous vehicle. Through this exercise, Lee and his team hope to turn the conversation away from fear and distrust and toward an empathetic and open dialogue about the future of autonomous vehicles.