Uncomfortable ethics for autonomous vehicles
Self-driving cars, or autonomous vehicles (AVs), are an inevitability on our roads. That’s probably a good thing – most accidents are the result of human error. Ironically, humans will still need to programme the ‘thinking’ part of an AV, especially for when it is presented with ethical dilemmas. But how important is this to AV manufacturers? Dr Tripat Gill of Wilfrid Laurier University in Ontario, Canada, a specialist in consumer attitudes towards radical technologies, has uncovered some uncomfortable truths.
Motor mechanics have a joke: the most dangerous part of any car is the nut that holds the steering wheel. There is truth to that – human error is a factor in most vehicle accidents; in the UK and US, it is estimated to contribute to between 92% and 94% of them. One proposed solution is self-driving cars, often referred to as autonomous vehicles (AVs), since they take the human element out of driving. Safety is one of the major driving forces in the development of AVs, and advancements in technology are such that the commercialisation of AVs is deemed inevitable. One critical factor in how successful they will be is whether consumers are willing to assign control of the driving operations to an autonomous agent. Ironically, key to this is consumers’ comfort that the autonomous agent operates within the same ethical decision-making framework as the one it is replacing: their own.
One senior researcher in the nascent field of consumer interactions with autonomous agents has uncovered an uncomfortable truth about this, and about just how vital ethical dilemmas are to those who will one day get inside an AV. Dr Tripat Gill is an associate professor of marketing at Wilfrid Laurier University in Ontario, Canada, and a specialist in the factors that encourage the adoption of technological innovations, especially revolutionary technologies such as AVs and artificial intelligence (AI). Of particular interest to Gill are the social impacts of such technologies and the ethical challenges they pose. This is particularly the case for AVs: not only must an AV negotiate the myriad physical challenges of road travel, but it must also be able to make difficult ethical decisions. For this reason, parallel to research into the technology that allows AVs to negotiate physical space, significant research within both the computer sciences and the social sciences is going into what moral norms should be ‘built into’ the AI component of an AV. A common theme in this research is the ‘ethical dilemma’.
The ethical dilemma
Research in this area typically presents the dilemma thus: if a pedestrian steps in front of an AV and the vehicle cannot stop in time, but swerving will put it in the path of two people, should it stay on course and hit the one person? What if that person is a child and the other two are adults? Should that make a difference? Most studies have examined utilitarian dilemmas such as sacrificing one life for two, or choosing between harm to different types of targets. Gill’s recent studies have focused on a less-researched dilemma: one of the targets of potentially serious harm is sitting in the car.
In a study published in 2020, Gill presented respondents with a ‘stay or swerve’ scenario to determine what they would do if they were driving a vehicle, and what they thought an AV should do if they were a passenger in it. Using a simple diagram, he presented the scenario as follows: a pedestrian steps in front of the vehicle; if the vehicle stays on course, it will hit the pedestrian, probably killing or seriously harming them; swerving will put the vehicle on a collision course with a tree or pole, probably killing or seriously injuring the respondent – as the driver of the regular car or the passenger in the AV.
A key outcome of this study was that while most participants selected ‘swerve’ as the preferred course of the vehicle, they were far more likely to choose to stay on course and harm the pedestrian if they were a passenger in the AV than if they were driving themselves. When Gill repeated the study but adjusted it to make the car significantly more expensive, the outcome was the same. Even replacing the simple diagram with more vivid images of harm or evoking physical harm by getting respondents to immerse their arms in freezing water had little effect on the outcome. While most participants still chose ‘swerve’ – and avoid harm to the pedestrian – the odds of choosing ‘stay’ and harm the pedestrian were nearly two-fold higher as a passenger in the AV than if they were behind the wheel of a typical vehicle.
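The ‘nearly two-fold higher’ figure refers to an odds ratio – the odds of choosing ‘stay’ in one condition divided by the odds in the other. As an illustrative sketch only, using invented counts rather than Gill’s actual data, the calculation works like this:

```python
# Illustrative odds-ratio calculation. The counts below are hypothetical,
# chosen only to show how a roughly two-fold odds ratio arises; they are
# not the figures from Gill's study.

def odds_ratio(stay_a, swerve_a, stay_b, swerve_b):
    """Odds of choosing 'stay' in condition A relative to condition B."""
    odds_a = stay_a / swerve_a   # e.g. odds of 'stay' as an AV passenger
    odds_b = stay_b / swerve_b   # e.g. odds of 'stay' as a driver
    return odds_a / odds_b

# Hypothetical: 30 of 100 AV passengers chose 'stay' vs 18 of 100 drivers.
or_value = odds_ratio(30, 70, 18, 82)
print(round(or_value, 2))  # → 1.95, i.e. roughly two-fold higher odds
```

Note that an odds ratio of about two does not mean twice as many people chose ‘stay’; it compares odds (stay ÷ swerve), not raw proportions.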
Gill explains this as follows: when we drive a car, we have control over our actions and hence feel responsible for the consequences of our decisions. Our moral intuition not to harm others prevails, even though swerving to spare the pedestrian means self-harm – a violation of the fundamental evolutionary instinct of self-preservation. As passengers in an AV, however, we have lower perceived control over our actions and can therefore attribute the responsibility for any harmful consequences to the AV. Factors that could magnify or mitigate this phenomenon include whether we own the vehicle (and therefore see it as part of our self-identity), whether it is rented or a taxi (and thus more ‘distanced’ from us), and whether we see the AV in a ‘servant’ context (and therefore duty-bound to protect us).
Attributing responsibility to a vehicle
This responsibility attribution should worry AV designers and manufacturers if they believe building a moral code into their vehicles is essential. However, according to Gill, many manufacturers don’t. In fact, they see such ethical dilemmas as a distraction. Instead, they consider technical challenges such as operating in inclement weather, recognising objects, and accurately reading road signage as more pressing issues. But do consumers agree? After all, they’re the ones who will – or will not – buy the vehicles.
To test this, Gill conducted two surveys as part of a study published in 2021. The surveys had a total of 1,678 US-based respondents drawn from the typical target market of early adopters of AV technology. They were presented with detailed information about the benefits of AVs and the different technical, legal, and ethical challenges in the design and adoption of these vehicles. Once they had passed a comprehension test to ensure they understood the survey topic, they were asked to rate each of the considered benefits. These included reduced accidents, freed-up commuting time, increased mobility for the elderly and disabled, reduced traffic congestion, reduced insurance costs, and reduced health costs. They were then asked to rate the importance of overcoming each challenge should they consider being a passenger in an AV. Among the ethical challenges was the focal dilemma from the previous studies: whether an AV should seriously harm a pedestrian or the AV passenger. Finally, they were asked to rate the relative importance of overcoming the different challenges by allocating 100 points among ten of them (four technical, two legal, and four ethical, including the focal dilemma).
The safety benefits of AVs scored the highest rating. Among the technical challenges, recognising objects, gestures, and road signs and operating in inclement weather were rated the most important to overcome. The liability of harm was considered the most important legal challenge. Among the ethical challenges, the most important was the focal dilemma of whether an AV should seriously harm a pedestrian or its passenger. Notably, respondents considered the ethical dilemma as the most significant challenge to overcome in whether they would consider adopting an AV, more so than any of the technical or legal challenges. It was just as Gill had hypothesised.
The one human element
When, in 2018, an Uber autonomous test vehicle with a backup driver hit and killed a pedestrian walking a bicycle across a road in Tempe, Arizona, Uber suspended its AV programme, despite the US National Transportation Safety Board (NTSB) finding that human error was mostly to blame for the crash. Uber knew the accident had fed a primary fear about AVs: will they be able to detect and avoid another human and brake in time? And if not, would the AV’s emergency avoidance actions put its passengers’ lives in danger? While continual testing and advances in AV technology are whittling away the remaining technological challenges, and regulators are confronting the inevitability of AVs on our roads, the ethical calculus governing AVs remains foremost in the minds of consumers.
As Gill’s research has shown, among the myriad benefits offered by autonomous vehicles, safer road use ranks highest among those who would consider getting into one. Indeed, taking the human factor out of vehicle use is one way to reduce road fatalities. However, there is one ‘human’ element that potential users seem to want in an AV when they are sitting inside: sensitivity to the ethical dilemma of harming either a pedestrian or themselves should the scenario arise – no matter how rare it may be, and how insignificant AV manufacturers judge it to be.
Given what your research has shown, how do you think AV manufacturers should react?
I think that AV manufacturers cannot disregard the issue of ethical dilemmas. My research shows that people consider such dilemmas (of distributing risk between passengers and pedestrians) as the most significant challenge in their adoption of AVs. While people are cognisant of the technical issues facing AVs (such as recognising signage and operating in bad weather), they are most concerned about the potential risk that ethical dilemmas can pose. However rare these dilemmas may be, people do want to know how AVs will address them. How much weight will they put on the lives of passengers versus the people outside? Whatever solution is proposed, it needs buy-in from the public, or else they will not adopt the technology, and the promise of AVs saving lives will remain unfulfilled.