Delegation to AI
While Artificial Intelligence enables substantial productivity gains by allowing tasks to be delegated to machines, it may also facilitate the delegation of unethical behaviour. This risk is especially relevant amid the rapid rise of 'agentic' AI systems. Here we explore whether delegating tasks to machines increases immoral behaviour in humans, and which factors influence this effect. These factors range from how we delegate to machines (e.g., natural language vs. rule-based programming) to whether some AI systems comply more readily than others when asked to perform unethical tasks. We also explore design and policy strategies to mitigate the ethical risks of delegation to machines, including guardrails that operate on both humans and machines.
Scientific writings
Köbis, N., Rahwan, Z., Rilla, R., Supriyatno, B., Bersch, C., Ajaj, T., Bonnefon, J.-F., & Rahwan, I. (2025). Delegation to Artificial Intelligence can increase dishonest behaviour. Nature.
[Paper, Preprint, Enhanced PDF]
Bonnefon, J.-F., Rahwan, I., & Shariff, A. (2024). The moral psychology of Artificial Intelligence. Annual Review of Psychology, 75(1), 653-675.
Köbis, N., Bonnefon, J.-F., & Rahwan, I. (2021). Bad machines corrupt good morals. Nature Human Behaviour, 5(6), 679-685.
[View-only open access version] [Media: LA Times op-ed, The Economist (PDF)]