Community Content
This article has been written and edited by the research team. It has not passed through the quality control procedures applied to content produced by the Research Outreach team.

Understanding ethics in society via artificial intelligence and game theory

The moral underpinning of ethical conduct is an issue of long standing. It is generally accepted that ethical conduct by individuals is beneficial to society, but questions remain unanswered: can individuals prosper by being unethical? If most people in a society are unethical, is it better for an individual to be unethical as well? Many such questions cannot be settled by thought experiments or real-world observations. Using AI-based simulation techniques, it is possible to give some definitive answers.

The importance of ethical conduct has long been recognised, and it continues to be a pressing concern not only for society in general, but also in business and in applications of artificial intelligence (AI). However, while AI is among the foremost contemporary fields where ethics is a concern, AI has not itself previously served to clarify the consequences of ethics, or the lack thereof, for society as a whole.

Philosophers since ancient times have pondered the role of ethical conduct, and have scolded those who behave unethically, perhaps in the mistaken belief that their conduct has no serious consequences. Confucius says, ‘The small man thinks that small acts of goodness are of no benefit, and does not do them; and that small deeds of evil do no harm, and does not refrain from them.’

Ethics comes in several varieties, primarily: virtue ethics, the notion that actions must fit a pre-existing sense of virtue; deontological ethics, the notion that behaviour in accord with duties and roles is ethical; and consequentialism, the notion that actions become ethical or otherwise merely by their consequences.

In recent work presented at the IJCAI 2020 conference,  Aditya Hegde and Vibhav Agarwal of IIIT Bangalore, working under my supervision, set out to formulate an agent-based model of a society where ethical as well as unethical conduct are possible.  While it is of interest to make models and AI systems more ethical, our purpose here was to model ethical and unethical behaviours in society, to understand the consequences of each.  To model agent interactions we used a paradigm called a continuous prisoner’s dilemma (a variant on the prisoner’s dilemma, a classic staple of game theory).  We also defined a new agent type which we call a virtue agent, making it possible for agents to embody ethical and unethical actions (donations and theft respectively), with consequences to their individual resources and reputations, as well as for the overall wealth of society.  As in our everyday human experience, the model too has the feature that common interactions of agents do not significantly alter their reputations (or become widely known), but large ethical or unethical actions do.
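To make the setup concrete, here is a minimal sketch of such an agent in Python. The class name, the numeric parameters (starting resources, act sizes, the visibility threshold), and the uniform-random act magnitudes are all illustrative assumptions on our part, not the exact values or mechanisms of the published model; the sketch only shows the shape of the idea, that agents donate or steal, and that only large acts move reputations.

```python
import random

class VirtueAgent:
    """Illustrative agent with resources and a reputation.

    Attribute names and numeric parameters are assumptions for
    illustration; they are not taken from the published model.
    """
    def __init__(self, ethics_level):
        self.ethics_level = ethics_level  # in [0, 1]: propensity to act ethically
        self.resources = 100.0
        self.reputation = 0.0

    def act(self, other, threshold=5.0):
        """Donate (ethical) or steal (unethical) a random amount.

        Only acts larger than `threshold` become widely known and
        change the agent's reputation, mirroring the feature that
        everyday interactions go unnoticed while large acts do not.
        """
        amount = random.uniform(0, 10)
        if random.random() < self.ethics_level:
            transfer = min(amount, self.resources)     # donation
            self.resources -= transfer
            other.resources += transfer
            if transfer > threshold:
                self.reputation += 1
        else:
            transfer = min(amount, other.resources)    # theft
            self.resources += transfer
            other.resources -= transfer
            if transfer > threshold:
                self.reputation -= 1
```

Because every act merely transfers resources between agents, the total wealth of a pair is conserved by any single interaction; changes in overall social wealth in the full model come from how interactions compound over time.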

A simulation based on a model of a real system is, in essence, an in silico working-out of the consequences of that particular model under some assumed parameters, in contrast to the testing of such a system in a physical setting. Agent-based models, and simulations using them, are particularly useful in the social sciences, where it is simply not possible to arrange scenarios to our liking. Our model makes it possible for us to set up scenarios that are essentially impossible to arrange in reality, and to understand their consequences by simulation.
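The overall pattern of such an agent-based simulation can be sketched generically: repeatedly pair agents at random, let each pair interact under some rule, and record an aggregate quantity of interest over time. The function name, the dictionary-based agent representation, and the wealth metric below are assumptions for illustration, not the paper's implementation.

```python
import random

def run_simulation(agents, rounds, interact):
    """Generic agent-based simulation loop.

    Each round, agents are shuffled and paired off; each pair
    interacts via the user-supplied rule `interact(a, b)`.  The total
    wealth of society is recorded after every round.
    """
    history = []
    for _ in range(rounds):
        random.shuffle(agents)
        for a, b in zip(agents[::2], agents[1::2]):
            interact(a, b)
        history.append(sum(ag["wealth"] for ag in agents))
    return history
```

The interaction rule is deliberately left as a parameter: plugging in different rules (purely ethical transfers, purely unethical ones, or a mixture) is exactly what lets a simulation explore scenarios that cannot be arranged in reality.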

One question that always matters in ethics is: why do some people choose to be unethical? Many reasons may exist, but an important one, surely, is the desire for greater gain than can be had by ethical means. To this, the classical riposte is that while unethical actions may bring temporary gains, they leave those who indulge in them worse off in the long run. The Mahabharata famously says, “A man may be seen to prosper by his sins, obtain good therefrom and vanquish his foes. Destruction, however, overtakes him to the roots.” A similar sentiment was expressed by George Washington, who disavowed unethical conduct, saying it “must always greatly overbalance in permanent evil any partial or transient benefit.”

Our simulations suggest that the insights of these past thinkers were in fact correct: unethical agents make attractive short-term gains but eventually sink; only ethical agents prosper in the long run.

A further related result is that even a small fraction of highly ethical agents raises the prosperity of society quite significantly, even when the vast majority of society is unethical; such ethical agents have a disproportionately large impact on the overall social good.

Another issue relevant to ethics in society is whether it is more important to reward good behaviour, or to punish bad behaviour.  Many past studies in psychology suggest that people are motivated to be ethical more by rewards than by punishments, but on account of the doctrine of retributive justice that underlies modern law enforcement, it is common that unethical conduct is punished, whilst ethical conduct may be scarcely noticed.

Here too, our work indicates that a negativity bias (a greater focus on punishing unethical conduct than on rewarding ethical conduct) does not encourage unethical agents to become more ethical; on the contrary, a positivity bias (a greater emphasis on rewarding ethical conduct than on punishing unethical conduct) does.
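One way such a bias could be encoded in a model is as asymmetric weights on reputation changes; this sketch is an illustrative assumption on our part, not the exact mechanism of the paper. A positivity bias sets the reward weight higher than the punishment weight; a negativity bias does the reverse.

```python
def reputation_update(reputation, act_magnitude, ethical,
                      reward_weight=1.0, punish_weight=1.0):
    """Illustrative rule for a society's feedback bias.

    Reputations rise by reward_weight per unit of ethical action and
    fall by punish_weight per unit of unethical action.  The weights
    encode the society's bias: reward_weight > punish_weight models a
    positivity bias, and the reverse models a negativity bias.
    """
    if ethical:
        return reputation + reward_weight * act_magnitude
    return reputation - punish_weight * act_magnitude
```

Under a positivity bias, ethical acts move reputations further than unethical acts of the same size, so agents gain more standing from good conduct than they lose from bad conduct of equal magnitude.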

References

Peek, S (2021) A Culture of Ethical Behavior Is Essential to Business Success [online] Business News Daily www.businessnewsdaily.com/9424-business-ethical-behavior [Accessed 01/12/2021]

Hegde, A, Agarwal, V, Rao, S (2020) Ethics, Prosperity, and Society: Moral Evaluation Using Virtue Ethics and Utilitarianism. Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020). doi.org/10.24963/ijcai.2020/24

Müller, V (2020) Ethics of Artificial Intelligence and Robotics [online] Stanford Encyclopedia of Philosophy www.plato.stanford.edu/entries/ethics-ai [Accessed 01/12/2021]

Walen, A (2020) Retributive Justice [online] Stanford Encyclopedia of Philosophy www.plato.stanford.edu/entries/justice-retributive/ [Accessed 01/12/2021]

Alexander, L (2020) Deontological Ethics [online] Stanford Encyclopedia of Philosophy www.plato.stanford.edu/entries/ethics-deontological [Accessed 01/12/2021]

Sinnott-Armstrong, W (2019) Consequentialism [online] Stanford Encyclopedia of Philosophy www.plato.stanford.edu/entries/consequentialism [Accessed 01/12/2021]

Hursthouse, R (2018) Virtue Ethics [online] Stanford Encyclopedia of Philosophy www.plato.stanford.edu/entries/ethics-virtue [Accessed 01/12/2021]

Written By

Shrisha Rao
International Institute of Information Technology - Bangalore

Contact Details

Email: shrao@ieee.org
Telephone: +918041407722

Address:
26/C Electronics City
Hosur Road
Bangalore
Karnataka
India
560100
