The importance of ethical conduct has long been recognized, and it remains a pressing concern not only for society at large, but also in business and in applications of artificial intelligence (AI). However, while AI is among the foremost contemporary fields where ethics is a concern, AI itself has not previously served to clarify the consequences of ethics – or of its absence – for society as a whole.
Philosophers since ancient times have pondered the role of ethical conduct, and have scolded those who behave unethically, perhaps in the mistaken belief that their conduct has no serious consequences. As Confucius put it: ‘The small man thinks that small acts of goodness are of no benefit, and does not do them; and that small deeds of evil do no harm, and does not refrain from them.’
Ethics comes in several main varieties: virtue ethics, the notion that actions must fit a pre-existing sense of virtue; deontological ethics, the notion that behaviour in accordance with duties and roles is ethical; and consequentialism, the notion that actions are ethical or otherwise merely by virtue of their consequences.
In recent work presented at the IJCAI 2020 conference, Aditya Hegde and Vibhav Agarwal of IIIT Bangalore, working under my supervision, set out to formulate an agent-based model of a society in which both ethical and unethical conduct are possible. While there is much interest in making models and AI systems more ethical, our purpose here was to model ethical and unethical behaviours in society, to understand the consequences of each. To model agent interactions we used a paradigm called the continuous prisoner’s dilemma, a variant of the prisoner’s dilemma, that classic staple of game theory. We also defined a new agent type, which we call the virtue agent, making it possible for agents to perform ethical and unethical actions (donations and theft, respectively), with consequences for their individual resources and reputations, as well as for the overall wealth of society. As in our everyday human experience, the model has the feature that ordinary interactions between agents do not significantly alter their reputations (or become widely known), whereas large ethical or unethical actions do.
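To make these notions concrete, here is a toy sketch in Python of the two mechanisms just described – a continuous prisoner’s dilemma payoff and a threshold-based reputation update. The function forms, parameter values, and names are illustrative assumptions of mine, not the actual equations of the published model.

```python
def payoff(my_level, other_level, benefit=2.0, cost=1.0):
    """Continuous prisoner's dilemma: each agent chooses a cooperation
    level in [0, 1] instead of a binary cooperate/defect.  Cooperation
    costs the giver less than it benefits the receiver (benefit > cost),
    so mutual cooperation enriches both, yet each agent is individually
    tempted to contribute less."""
    return benefit * other_level - cost * my_level

def update_reputation(reputation, action_size, threshold=5.0, weight=0.1):
    """Small everyday interactions leave reputation unchanged; only
    actions above a visibility threshold become widely known.  Positive
    action sizes stand for donations, negative ones for thefts."""
    if abs(action_size) < threshold:
        return reputation
    return reputation + weight * action_size

# Mutual full cooperation beats mutual defection...
print(payoff(1.0, 1.0))   # 1.0
print(payoff(0.0, 0.0))   # 0.0
# ...but unilaterally defecting against a cooperator pays even more:
print(payoff(0.0, 1.0))   # 2.0
```

The dilemma is visible in the three printed payoffs: everyone is better off under mutual cooperation, but each individual does best by cooperating less than the other.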
A simulation based on a model of a real system is, in essence, an in silico working-out of the consequences of that model under some assumed parameters, in contrast to testing such a system in a physical setting. Agent-based models, and simulations using them, are particularly useful in the social sciences, where it is simply not possible to arrange scenarios to our liking. Our model lets us set up scenarios that are essentially impossible to arrange in reality, and to understand their consequences through simulation.
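As a rough illustration of what such a simulation involves (with entirely hypothetical payoffs and parameters, not those of the published model), a minimal agent-based loop might pair random agents each round and track the total wealth of society:

```python
import random

def simulate(n_agents=100, n_rounds=50, ethical_fraction=0.2, seed=0):
    """Minimal agent-based simulation skeleton: agents hold resources
    and are paired at random each round; the first agent of each pair
    either donates (if ethical) or steals (if unethical).  This only
    illustrates the paradigm, not the published model."""
    rng = random.Random(seed)
    n_ethical = int(n_agents * ethical_fraction)
    agents = [{"ethical": i < n_ethical, "resources": 10.0}
              for i in range(n_agents)]
    for _ in range(n_rounds):
        actor, partner = rng.sample(agents, 2)
        if actor["ethical"]:
            # Donation: costs the donor 1 but yields 2 to the recipient.
            actor["resources"] -= 1.0
            partner["resources"] += 2.0
        else:
            # Theft: the thief gains 1 but the victim loses 1.5,
            # so some wealth is destroyed in the process.
            actor["resources"] += 1.0
            partner["resources"] -= 1.5
    return sum(agent["resources"] for agent in agents)
```

Under these toy payoffs, a donation creates net wealth (the recipient gains more than the donor loses) while a theft destroys some, so total wealth grows in an all-ethical population and shrinks in an all-unethical one.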
One question that is always important in discussions of ethics is: why do some people choose to be unethical? Many reasons may exist, but an important one, surely, is the desire for greater gain than can be had by ethical means. To this, the classical riposte is generally that while unethical actions may bring temporary gains, they leave people who indulge in them worse off in the long run. The Mahabharata famously says, “A man may be seen to prosper by his sins, obtain good therefrom and vanquish his foes. Destruction, however, overtakes him to the roots.” A similar sentiment was expressed by George Washington, who disavowed unethical conduct, saying it “must always greatly overbalance in permanent evil any partial or transient benefit.”
Our simulations suggest that these past thinkers were in fact correct: unethical agents make attractive short-term gains but eventually sink; only ethical agents prosper in the long run.
A further, related result is that even a small fraction of highly ethical agents raises the prosperity of society quite significantly, even when the vast majority of society is unethical: such ethical agents have a disproportionately large impact on the overall social good.
Another issue relevant to ethics in society is whether it is more important to reward good behaviour or to punish bad behaviour. Many past studies in psychology suggest that people are motivated to be ethical more by rewards than by punishments; yet on account of the doctrine of retributive justice that underlies modern law enforcement, unethical conduct is commonly punished whilst ethical conduct may go scarcely noticed.
Here too, our work indicates that a negativity bias – a greater focus on punishing unethical conduct than on rewarding ethical conduct – does not encourage unethical agents to become more ethical; on the contrary, a positivity bias – a greater emphasis on rewarding ethical conduct than on punishing unethical conduct – does.
References

Peek, S (2021) A Culture of Ethical Behavior Is Essential to Business Success [online] Business News Daily www.businessnewsdaily.com/9424-business-ethical-behavior [Accessed 01/12/2021]

Hegde, A, Agarwal, V and Rao, S (2020) Ethics, Prosperity, and Society: Moral Evaluation Using Virtue Ethics and Utilitarianism. Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020) doi:10.24963/ijcai.2020/24

Müller, V (2020) Ethics of Artificial Intelligence and Robotics [online] Stanford Encyclopedia of Philosophy www.plato.stanford.edu/entries/ethics-ai [Accessed 01/12/2021]

Walen, A (2020) Retributive Justice [online] Stanford Encyclopedia of Philosophy www.plato.stanford.edu/entries/justice-retributive/ [Accessed 01/12/2021]

Alexander, L (2020) Deontological Ethics [online] Stanford Encyclopedia of Philosophy www.plato.stanford.edu/entries/ethics-deontological [Accessed 01/12/2021]

Sinnott-Armstrong, W (2019) Consequentialism [online] Stanford Encyclopedia of Philosophy www.plato.stanford.edu/entries/consequentialism [Accessed 01/12/2021]

Hursthouse, R (2018) Virtue Ethics [online] Stanford Encyclopedia of Philosophy www.plato.stanford.edu/entries/ethics-virtue [Accessed 01/12/2021]