Why ethical use of AI robots must incorporate accountability and moral intensity 

By Dr Zsófia Tóth, August 2022

For many businesses, investing in AI improves productivity, boosts revenue, increases accuracy and opens new avenues for industry. But, according to Dr Zsófia Tóth and her colleagues, this isn’t without risk… 

The use of Artificial Intelligence (AI) robots in the workplace and to provide both services and products is becoming more and more commonplace. Today, autonomous bots are deployed to perform tasks across a whole host of sectors in order to make practices smoother and more efficient, improve services for customers and even keep people safe.  

Beyond day-to-day routine tasks in sales, education delivery or processing, today's AI bots can effectively support medical diagnosis, handle dangerous items or complex information, take over security duties, or even assist us in hospitality. The sheer number of tasks they can take on, the swiftness with which they operate, and the speed of their learning make it impossible for humans to keep up.

Whilst such efficiency has previously caused alarm amongst human workers over job security, the pace of change in the workplace, and the pressing need for tech-savvy upskilling, there's another concern gaining momentum.

With this increase in the use and capability of AI robotics, questions of another nature arise: how can we keep track of their actions and decisions? And how can we clarify accountability so as to reduce errors in the actions of an AI robot?

Though highly trained AI robots are typically more effective and make fewer mistakes than their human counterparts, one glaring issue arises when those rare mistakes do happen. For tasks that aren't supported by AI robots, there's typically more clarity about who takes responsibility. But when an AI robot operates, which stakeholders take responsibility? And how does moral intensity come into play (i.e., is it a menial task like cleaning, or something of higher risk, such as caring for vulnerable people)?

We decided to explore this in our latest paper. Through our research, we’ve devised a framework for industry and society to follow in order to clarify where responsibility lies when AI slips up.  

We researched two themes for ethical evaluation: the locus of morality, meaning the level of autonomy to choose an ethical course of action, and moral intensity, meaning the potential consequences of the use of AI robots. To develop the framework, we first reviewed the uses of AI robots in different professional settings from an ethical perspective. From there, we developed four clusters of accountability to help identify the specific actors who can be held accountable for an AI robot's actions.

These four clusters revolve around the ethical categories of illegal, immoral, permissible and supererogatory, which are outlined in normative business ethics.

Supererogatory actions represent a positive extra mile beyond what one expects morally, whilst the other three categories are either neutral or negative. Illegal actions are those against the law and regulations; immoral actions are those that only reach the bare minimum of the legal threshold; and morally permissible actions are those that don't require explanations of presumed fairness or appropriateness.

Humans can set the boundaries of what AI robots can and should learn and unlearn (e.g. to decrease or eliminate racial/gender bias), and the type of decisions they can make without human involvement (e.g. a self-driving car in an emergency setting). 

With this in mind, each cluster in the framework includes the actors responsible for an AI robot's actions. In the first cluster, 'professional norms', AI robots are used for small, menial, everyday tasks like heating or cleaning. Here, robot design experts and customers take the most responsibility for the appropriate use of the robots.

In the second cluster, 'business responsibility', AI robots are used for difficult but basic tasks, such as mining or agriculture. In this instance, a wider group of organisations bears the brunt of responsibility for any misstep made by the technology. In cluster three, 'inter-institutional normativity', AI may make decisions with potential major consequences, such as healthcare management or crime-fighting. Here, it's domestic governmental and regulatory bodies that should be increasingly involved in agreeing on specific guidelines and shouldering responsibility when errors occur.

In the fourth cluster, 'supra-territorial regulations', AI is used on a global level, such as in the military or in driverless cars. Here we deduced that a wider range of governmental bodies, regulators, firms and experts must hold accountability. As it becomes increasingly complex to attribute the outcomes of AI robots' use to specific individuals or organisations, these cases deserve special attention.
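For readers who think in code, the four clusters can be pictured as a simple lookup from a task to its accountable actors. The sketch below is purely illustrative and is our own rendering, not part of the published framework: the cluster names and example tasks are paraphrased from the descriptions above, while the actor lists and the matching logic are simplifying assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of the four accountability clusters described above.
# Cluster names and example tasks are paraphrased from the article; the
# actor lists and the lookup itself are illustrative assumptions.

@dataclass(frozen=True)
class Cluster:
    name: str
    example_tasks: tuple
    accountable_actors: tuple

CLUSTERS = (
    Cluster("professional norms",
            ("heating", "cleaning"),
            ("robot design experts", "customers")),
    Cluster("business responsibility",
            ("mining", "agriculture"),
            ("deploying organisations",)),
    Cluster("inter-institutional normativity",
            ("healthcare management", "crime-fighting"),
            ("domestic governments", "regulatory bodies")),
    Cluster("supra-territorial regulations",
            ("military applications", "driverless cars"),
            ("international regulators", "governments", "firms", "experts")),
)

def accountable_for(task: str):
    """Return (cluster name, actors) for a task, or None if unmatched."""
    for cluster in CLUSTERS:
        if task in cluster.example_tasks:
            return cluster.name, cluster.accountable_actors
    return None

# Example: who answers for an error in healthcare management?
print(accountable_for("healthcare management"))
```

The point of the sketch is the shape of the framework, not the code: as moral intensity rises from cluster one to cluster four, the set of accountable actors widens from designers and customers to international regulators.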

When you bring AI robots into the mix, the line of accountability becomes much harder to trace. If no one can take responsibility for an error, how are we ever to learn how such incidents could have been prevented, or to avoid repeating them in future? This is among the reasons why this framework can be useful.

Previously, the accountability for these actions was a grey area. By offering ways to ensure better clarity of responsibilities from the outset of an AI robot's use, business leaders, policy makers and governments can be reassured that their technological advances are not reckless.

More information on Dr Zsófia Tóth's research interests.