Cooperation with autonomous machines through culture and emotion
Autonomous machines that can act on our behalf are rapidly becoming fundamental to our society. Robots, drones, and self-driving vehicles are all becoming a reality, with the potential to mould our existence. Because these machines can profoundly change how we interact with each other, it is essential that we understand whether they will significantly change human decision making.
Research carried out by Dr Celso de Melo, a computer scientist with the US Army Research Laboratory, and Dr Kazunori Terada, Associate Professor at Gifu University, Japan, shows that people have a tendency to make less favourable decisions and be less cooperative with machines than with humans. Results reveal that people engage in social categorisation that distinguishes people, or ‘us’, from machines, or ‘them’. This leads to an unfavourable bias against ‘them’, implying that machines are perceived as out-group members.
In-groups and out-groups
People tend to categorise others during social interactions. They are inclined to associate more with some people, self-identifying with the in-group, and less so with others, the out-group. This can result in a bias favouring cooperation with members of the in-group, promoting the in-group’s prosperity and increasing the individual’s likelihood of survival and of receiving long-term benefits. Such perception of group membership is often used to encourage cooperation in situations involving social dilemmas. It has been shown that people categorise machines in a similar way, in line with gender and cultural stereotypes, favouring computers with a virtual face typical of their own race and voices with accents similar to their own. This implies not only that people apply social categorisation to machines, but also that machines can be members of the in-group.
Studies show that when engaging with machines, people make different decisions and show different patterns of brain activity than when engaging with humans, even though they consider machines to be social actors. Results also suggest that people experience less emotion when dealing with machines than with humans, and that machines tend to be treated as members of an out-group.
People have a tendency to make less favourable decisions and be less cooperative with machines than with humans.
Hypotheses for improved human-machine interaction
Dr de Melo and Dr Terada explain that, as autonomous machines become ubiquitous in society, it is essential that we find ways to foster cooperation between them and humans. Moreover, these solutions will have to surmount the unfavourable bias against machines. Underpinned by their previous work and a review of other research, they put forward two hypotheses: firstly, that “associating positive cues of cultural membership could mitigate the default unfavourable bias people have towards machines”; and secondly, that “emotion expressions could override expectations of cooperation based on social categories”.
The prisoner’s dilemma
The researchers recruited a total of 945 participants, 468 from the United States and 477 from Japan. Participants were paired with counterparts of either the same or different culture before taking part in 20 rounds of the prisoner’s dilemma.
In the dilemma, two players simultaneously decide to either defect or cooperate. Game theory suggests that defection is the dominant strategy, the best response regardless of the counterpart’s decision: if you think your counterpart is going to defect, then you should defect as well; if you think your counterpart is going to cooperate, then you should still defect and maximise your payoff. If both players follow this reasoning, however, they end up worse off than if they had both cooperated.
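To make the dominance reasoning concrete, the sketch below simulates a single round in Python using a standard textbook payoff scheme; the exact points used in the study are not given in this article, so the values here are purely illustrative.

```python
# Illustrative one-shot prisoner's dilemma with textbook payoffs
# (these exact values are an assumption, not the study's own scheme).
PAYOFFS = {
    # (my_move, their_move): (my_points, their_points)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # I am exploited
    ("defect",    "cooperate"): (5, 0),  # I exploit my counterpart
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def best_response(their_move: str) -> str:
    """Return the move that maximises my payoff against a fixed counterpart move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection is the best response to either move the counterpart makes...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (1, 1) leaves both players worse off than
# mutual cooperation (3, 3): the heart of the dilemma.
```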
Human vs autonomous machine counterparts
The experiment was fully anonymous: participants and experimenters never learned each other’s identity. Participants were told that they would be engaging with either another participant or an autonomous machine; in reality, to maximise experimental control, all participants engaged with a computer script. The experiment investigated whether participants would cooperate differently with humans than with machines, and whether any difference could be moderated by cues of culture or by facial expressions of emotion.
Both human and machine counterparts could express emotion using virtual faces typical of either US or Japanese culture, and presented a competitive, neutral, or cooperative disposition. Earlier studies by Dr de Melo revealed that emotional expressions can influence cooperation in the prisoner’s dilemma, so the researchers chose set patterns displaying sequences of competitive, cooperative, or neutral emotions. Using the cooperation rate, averaged across all rounds of the prisoner’s dilemma, as the outcome measure, the researchers carried out statistical analysis employing a 2 × 2 × 3 between-participants factorial design. This allowed the effects of counterpart type (human or machine), counterpart culture (United States or Japan), and emotion (competitive, neutral, or cooperative) to be analysed simultaneously.
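As an illustration of how such a design might be analysed, the minimal sketch below fits a three-way ANOVA over per-participant cooperation rates; the file name, column names, and the specific statsmodels workflow are assumptions for the example, not a description of the study’s actual pipeline.

```python
# Minimal sketch: analysing a 2 x 2 x 3 between-participants design.
# "cooperation.csv" and its column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per participant: cooperation rate averaged over the 20 rounds,
# plus the three between-participants factors.
data = pd.read_csv("cooperation.csv")  # columns: rate, counterpart, culture, emotion

# Three-way ANOVA: counterpart type (human/machine) x counterpart
# culture (US/Japan) x emotion pattern (competitive/neutral/cooperative).
model = smf.ols("rate ~ C(counterpart) * C(culture) * C(emotion)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interactions
```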
Pairings of different cultures
When pairs were of different cultures, participants cooperated more when they believed their counterparts were human rather than machine. There was more cooperation with cooperative and neutral counterparts than with competitive ones. In addition, when counterparts displayed competitive or neutral emotions, participants cooperated more with humans than with machines; when counterparts displayed cooperative emotion, however, there was no significant difference. These results held for participants from both Japan and the United States.
Pairings of the same culture
Contrastingly, when pairs were of the same culture, there was no significant difference in participants’ cooperation with human and machine counterparts. Nevertheless, there was more cooperation with cooperative and neutral counterparts than with competitive ones. Once again, there was no significant difference between the Japanese and US participants’ results.
When counterparts demonstrated cooperative emotion, there was no significant difference in cooperation with machines or humans.
Solutions to overcome bias
The researchers offer two solutions to overcome the unfavourable bias towards machines and improve human-machine cooperation. Firstly, the experiment demonstrated that, for participants from both cultures, a straightforward cultural cue, the ethnicity of the computer’s virtual face, was enough to mitigate the bias in human-machine interaction. Secondly, mechanisms conveying affiliative intent, in the form of facial expressions of emotion, promoted human-machine cooperation, overriding the default expectations of coalition alliances derived from social categories. Furthermore, when machines showed positive emotion, such as joy following cooperation and regret following exploitation, people cooperated with machines every bit as much as with human counterparts.
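The display rule just described can be sketched as a simple mapping from each round’s outcome to a facial expression. A minimal sketch follows; the function and expression labels are hypothetical, and “cooperation” is read here as mutual cooperation.

```python
# Hedged sketch of a cooperative emotion-display rule: joy after mutual
# cooperation, regret after exploiting the participant. Names are
# hypothetical; the study's actual display logic may differ.
def cooperative_emotion(machine_move: str, participant_move: str) -> str:
    """Choose the expression the machine's virtual face shows after a round."""
    if machine_move == "cooperate" and participant_move == "cooperate":
        return "joy"      # welcome mutual cooperation
    if machine_move == "defect" and participant_move == "cooperate":
        return "regret"   # signal remorse after exploitation
    return "neutral"      # no strong affiliative signal otherwise

print(cooperative_emotion("cooperate", "cooperate"))  # joy
print(cooperative_emotion("defect", "cooperate"))     # regret
```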
Emotion influencing decision making
The research team observed that emotion had the strongest effect in their experiment. A machine displaying a virtual face from a different cultural group could be treated as an in-group member through well-timed visual expression of emotion, in this case joy after cooperation and regret following exploitation. This research highlights that emotion is a powerful influence on human behaviour, and on decision making in particular. Moreover, it demonstrates that the default associations of social categories can be overturned, which is encouraging because it may be difficult to control how the social categories of machines are perceived.
These results can inform designers of autonomous machines how to overcome the unfavourable bias towards machines and offer solutions to improve the level of cooperation in human-machine interaction.
The researchers reflect that given the increasing divisiveness in society, it is not surprising to find autonomous machines being perceived as outsiders and, therefore, less likely to reap the benefits afforded to members of the in-group.
This research demonstrates that humans fall back on established psychological mechanisms when deciding whom to associate and cooperate with, including machines. It is reassuring to know that our behaviour with machines is underpinned by the same psychological mechanisms we use with humans, as this provides opportunities to reduce the negative bias against machines.
The researchers conclude that “since autonomous machines can be designed to take advantage of these psychological mechanisms driving human behaviour, they introduce a unique opportunity to promote a more cooperative society”.
Personal Response
What sparked your interest in human-machine cooperation and how do you see this research progressing?
It was the realisation that the success of AI hinges on people’s ability to trust and cooperate with AI. We are developing technology that has the potential to considerably improve human life, but we will only be able to reap those benefits if we find ways to promote collaboration between humans and machines. As this technology becomes more pervasive in society, we need to continue studying the mechanisms driving human decision making and understanding how those influence the design of autonomous machines.