Study: Self-driving cars ‘learn’ how to make moral decisions on the road

OSNABRÜCK, Germany — Two children chase a ball in front of a speeding car, and a split-second decision is made.

The car swerves into a concrete wall, killing the elderly couple occupying the vehicle and sparing the lives of the children. But there was no driver making the choice to jerk the wheel. Instead, a computer made a value calculation, and the car did what its programmers had determined was most ethical.

A first-of-its-kind study finds that self-driving cars may be able to understand and take human morality into consideration in split-second decisions on the road.

Think moral judgements in such situations shouldn’t be made by computers? Many scientists say it is inevitable as self-driving cars hit the road. To help prepare, researchers at the Institute of Cognitive Science at the University of Osnabrück have completed a study they say shows that a relatively simple program can predict the decisions a real-life driver would make.

“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object,” says Leon Sütfeld, first author of the study, in a press release.
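To get a feel for what such a value-of-life model looks like in practice, here is a minimal sketch in Python. It is not the study’s actual model: the categories, values, and decision rule below are all illustrative assumptions.

```python
# Minimal sketch of a value-of-life-based decision rule.
# All category values are illustrative assumptions, NOT the
# parameters measured in the Osnabrück study.

VALUE_OF_LIFE = {
    "child": 10.0,
    "adult": 8.0,
    "dog": 3.0,
    "deer": 1.5,
    "inanimate_object": 0.1,
}

def lane_cost(obstacles):
    """Total value of life that would be lost by driving into this lane."""
    return sum(VALUE_OF_LIFE[obstacle] for obstacle in obstacles)

def choose_lane(left, right):
    """Steer into whichever lane's obstacles carry the lower combined value."""
    return "left" if lane_cost(left) < lane_cost(right) else "right"

# Example dilemma: a dog in the left lane, a child in the right lane.
print(choose_lane(["dog"], ["child"]))  # -> "left": the model spares the child
```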

In the study, Sütfeld and fellow researchers placed participants in a fully immersive virtual reality simulation of driving through a “typical suburban neighborhood on a foggy day.”

They then gathered data on what drivers did when faced with an unavoidable dilemma, such as children or pets running in front of the car, and used that data to craft algorithms that, in a majority of cases, accurately predict a real driver’s decision.
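The article doesn’t reproduce that data or the fitting procedure, but conceptually, recovering implicit life values from observed choices is a standard estimation exercise. The toy sketch below uses fabricated dilemmas and an off-the-shelf logistic regression as a stand-in; it is an assumption-laden illustration, not the researchers’ pipeline.

```python
# Toy sketch: recovering implicit "value of life" weights from observed
# choices. The dilemmas and labels below are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row encodes one dilemma as the difference in obstacle counts
# between lanes (left minus right), per category: [child, adult, dog].
# Label y = 1 means the driver swerved into the right lane,
# sacrificing whatever was there to spare the left lane's obstacle.
X = np.array([
    [1, 0, -1],   # child on the left, dog on the right
    [1, -1, 0],   # child on the left, adult on the right
    [0, 1, -1],   # adult on the left, dog on the right
    [-1, 0, 1],   # dog on the left, child on the right
    [0, -1, 1],   # dog on the left, adult on the right
    [-1, 1, 0],   # adult on the left, child on the right
])
y = np.array([1, 1, 1, 0, 0, 0])  # drivers consistently spare the higher-value lane

model = LogisticRegression().fit(X, y)

# The relative sizes of the fitted coefficients act as an implicit
# value-of-life ranking; with this toy data it recovers child > adult > dog.
print(dict(zip(["child", "adult", "dog"], model.coef_[0].round(2))))
```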

Despite the computer’s success at making human-like decisions in their simplified model, the researchers note that many important questions remain unanswered.

“When a decision has to be made between killing a dog with near certainty and taking a 5% risk of injuring a human, how should the algorithm decide?” they ask in the research paper. “We don’t seem to take much issue with assigning different values of life to different species, and a system favoring pets over game or birds might be acceptable in the public eye.”
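One way to read the dilemma in that quote is as an expected-harm comparison, weighting each possible outcome by its probability. The sketch below makes that concrete with purely invented numbers; the paper deliberately leaves open how, or whether, such values should be set.

```python
# Expected-harm reading of the dog-vs-risk dilemma quoted above.
# Every number here is an invented assumption.

VALUE_OF_LIFE = {"dog": 3.0, "human_injury": 8.0}

def expected_harm(outcomes):
    """Sum of probability * value at stake over an option's possible outcomes."""
    return sum(prob * VALUE_OF_LIFE[harm] for harm, prob in outcomes)

option_a = [("dog", 0.95)]           # kill the dog with near certainty
option_b = [("human_injury", 0.05)]  # take a 5% risk of injuring a human

# 0.95 * 3.0 = 2.85 versus 0.05 * 8.0 = 0.40: under these numbers the
# rule accepts the risk to the human -- exactly the kind of outcome
# the authors flag as ethically contentious.
print("A" if expected_harm(option_a) < expected_harm(option_b) else "B")  # -> "B"
```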

The researchers write that this suggests computer-based decision making can be applied to self-driving cars, but they say it also raises further questions about a computer’s ability to deal with the full complexity of human ethical decision making.

“We need to ask whether autonomous systems should adopt moral judgements,” says Gordon Pipa, another of the study’s senior authors. “If yes, should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?”

In the paper, published last month in Frontiers in Behavioral Neuroscience, the researchers note that their use of virtual reality is also an important consideration, as it increases emotional arousal and provides richer context for the choices.

That being said, a variety of non-VR studies gathering data on human decision making during driving are also currently underway. If you want to participate right now, you can try out MIT’s Moral Machine.

Confronting visitors with a variety of moral dilemmas a self-driving car could face, the Moral Machine website also allows you to compare your choices to those of other participants and even design your own scenarios for testing.