‘Is it wrong to kick a robot dog?’
Presenting a possible answer to the above question by introducing an original, cognition-based theory of virtue ethics that encompasses the morality of all Human-Object Interactions.
Abstract
As electromechanical and Artificial Intelligence systems develop, the ethical dimension of our interactions with capable robots becomes increasingly relevant. This essay addresses the fundamental question: “Is kicking a robot dog morally permissible?” and then extends this analysis to encompass the abuse of all robots and inanimate objects.
We begin by applying deontological and consequentialist moral perspectives to Human-Robot Interactions (HRI) and identify the limitations of these frameworks in providing explicit guidance on these emerging ethical issues.
Subsequently, we explore HRI through the lens of Virtue Ethics and examine contemporary applications, including Mark Coeckelbergh’s theory of Relational Virtue Ethics. Conventional virtue ethics and Coeckelbergh’s theory concur that kicking a robot dog is morally impermissible because it reflects wider social practices. However, we identify a logical gap in both: they assume, without justification, that kicking a robot dog is morally equivalent to kicking a sentient entity.
In response to these deficiencies, this essay introduces a cognitive-representation-focused technomoral theory, offering a more complete virtue ethics framework. This theory explains when kicking a robot dog can be deemed morally equivalent to harming a sentient being, and when it cannot.
Therefore, this essay contributes to the emerging discourse on the moral implications of human interactions with advanced robots. It accomplishes this by introducing an original framework – a technomoral adaptation of traditional virtue ethics – which enhances our understanding of the ethics of HRI.
A Deontological and Consequentialist View of HRI
Let us apply the deontological and consequentialist frameworks to HRI. Kantianism – a prominent form of deontology – grants humans a high moral status on account of our capacity for rational thought, which allows us to transcend our animalistic instincts. Kant maintains that all humans have a duty to obey the ‘categorical imperative’ – one formulation of which requires us to treat all rational beings as ‘ends’ and never merely as ‘means’ (Johnson, 2022). Hence, if robots with Artificial Intelligence systems were believed to be sentient and as rationally capable as humans, it would be morally impermissible to kick a robot dog. However, this conclusion rests on a host of assumptions: what if the robots are not ‘rationally capable’ beings? What if they are not sentient at all? What if their state of consciousness is simply unknown, as is likely to be the case? Indeed, Kant did not believe the categorical imperative dictates any moral loyalty to non-humans: elsewhere, he argued that our only duties towards animals are indirect, mattering solely insofar as our treatment of them influences our behaviour towards other humans. Essentially, deontology does not directly address the morality of abusing robots.
Similarly, consequentialism – the view that normative properties depend only on the consequences of a given action (Sinnott-Armstrong, 2023) – provides no explicit answers about the morality of HRI. Instead, it raises ancillary concerns, such as property damage and the resulting suffering for the owner. Furthermore, consequentialists themselves are divided on whether animals deserve a moral status akin to that of humans (John & Sebo, 2020). That debate lies beyond the scope of this paper, but it is evident that neither deontology nor consequentialism provides clear answers in the context of HRI.
HRI Through the Lens of Virtue Ethics
Let us now look at the third principal ethical framework: virtue ethics. Virtue ethics is a character-based approach to morality that emphasises the development and practice of virtues – deeply entrenched, desirable character traits – in contrast to vices, which are deeply entrenched but undesirable traits (Hursthouse & Pettygrove, 2022).
Many contemporary philosophers have turned to virtue ethics to address the moral questions that consequentialism and deontology struggle to resolve. It is particularly suited to the analysis of HRI because it shifts the moral focus from the situational context to the inner state of the agent. The prevailing conclusion amongst these philosophers is that – when viewed through the framework of virtue ethics – kicking a robot dog is morally impermissible, because it is a tangible manifestation of the agent’s inherent vices.
In How to Use Virtue Ethics for Thinking About the Moral Standing of Social Robots: A Relational Interpretation in Terms of Practices, Habits, and Performance, Mark Coeckelbergh introduces a sociological interpretation of the abstract concept of ‘virtues’ and uses it to analyse HRI. He explains that we cannot view virtues and vices only as traits residing abstractly in our cognitive space, as is commonly thought. They are also actualised and cultivated through the deliberate performance of physical expressions of these traits, which he refers to as ‘bodily performances’. Furthermore, he asserts that our traits are shaped by our wider social and cultural context, and that ‘bodily performances’ are not only expressions of our internal state but also reflections of a wider social practice (Coeckelbergh, 2021, p. 35).
He thus reframes the fundamentals of virtue ethics as a crucial triadic relationship between the agent’s abstract traits, her deliberate performance of those traits, and the way these performances contribute to and reflect a wider social practice. For instance, an agent’s use of a sex robot is traditionally condemned by virtue ethicists because it displays the agent’s internalised lust and misogyny. With Coeckelbergh’s revision, it can also be condemned as a deliberate choice that reflects and perpetuates the social practice of objectifying women for sexual gratification (Coeckelbergh, 2021, p. 38).
This relational theory of virtue ethics reveals that kicking a robot dog is ethically impermissible. It constitutes a bodily performance of the internal vice of callousness, and – with Coeckelbergh’s insight – also serves as a reflection of the wider social practice of abusing animals.
A Flaw in Conventional and Coeckelbergh’s Applications of Virtue Ethics to HRI
However, a significant flaw emerges within Coeckelbergh’s argument and many other technomoral applications of virtue ethics. One can accept the validity of virtue ethics and Coeckelbergh’s relational revision while maintaining that the abuse of robots is not immoral. The objection runs as follows: robots, despite their human-like appearances, lack sentience and are essentially sophisticated mechanical contrivances. Consequently, any harm inflicted upon a robot can be likened to a woodworker cutting a piece of wood to craft furniture – a morally neutral act that neither contributes to nor reflects any malevolent social practice.
It is essential to note that this line of argument does not contradict any technomoral theory of virtue ethics. Instead, it posits that the abuse of robots is morally neutral even within the lens of virtue ethics. This is because Coeckelbergh and other proponents of this framework have – while broadening our understanding of the contextual dynamics of virtue ethics – made the fundamental assumption that causing harm to robots signifies a manifestation of vice rather than a morally indifferent action.
Towards a More Complete Cognitive-Representation Theory of Virtue Ethics
To address these issues, we must determine which internal trait is manifested in the bodily performance of an agent during an HRI. I assert that this attribution is primarily contingent on the cognitive representation of the robot in the agent’s mind at the time of the HRI.
For instance, when an agent kicks a robot dog because she finds it humorous, we can discern that the cognitive representation of the robot in the agent’s mind is that of a living being. The amusement arises from inflicting simulated harm on a lifelike being while knowing there will be no real consequences. In this scenario, we can attribute the bodily performance to the vice of taking enjoyment in others’ suffering. While I was discussing this theory with a friend, he explained that he owns a stuffed toy monkey which he occasionally hurls playfully at the wall – an act many of us may recall from our adolescence or have felt the impulse to perform. Like a hypothetical agent kicking a robot dog, my friend derives amusement from causing simulated distress to an object that exhibits lifelike characteristics and is represented as life in his cognitive space for the brief duration of the HRI. In such cases, abusing a robot is morally impermissible.
Conversely, if the agent views the robot as a purely mechatronic contraption and kicks it solely to showcase its technical capabilities, the action reflects neither a vice nor a virtue – it remains morally neutral. Consider the example of Boston Dynamics’ robotic dog, SPOT, being kicked by an employee to demonstrate its ability to autonomously regain balance. In this case, I assert that the act is akin to moving a piece of furniture – the employee’s cognitive representation of SPOT at the time of the HRI was purely that of an object. Nevertheless, it is intriguing that many individuals perceived this action as morally ‘wrong’, even while struggling to articulate why. My revised theory explains this: those observers who held a cognitive representation of SPOT as a dog felt that kicking it was morally wrong. Yet the kicker did not represent it as a living being, and so the act was not a bodily expression of a vice and did not contribute to a harmful social practice.
Note that this case is mutually exclusive with the first: an agent cannot harm a robot for humorous purposes and then claim moral neutrality by professing a non-sentient representation of the robot. The act can only be found funny because the agent holds a cognitive representation of the robot as lifelike.
Note that while this expanded theory provides a logical foundation for analysing technomoral issues, it remains prone to the same flaws as the original theories of virtue ethics. For example, it is susceptible to extreme subjectivity or feigned subjectivity.
Furthermore, one agent could maintain a completely different cognitive representation of a robot from another agent. This could lead one agent to morally justify actions that another views as morally impermissible, and the theory offers no resolution to such conflicts. This disparity is evident in the case of SPOT. However, these criticisms apply to virtue ethics itself – and are therefore outside the scope of this essay.
Conclusion
In summary, I believe that while virtue ethics is a commendable and broad theory, it is not directly applicable to these technomoral issues without certain revisions. Most essays that probe these questions fail to explain thoroughly why or how they have arrived at their conclusions, and simultaneously make assumptions that do not address the entire issue. I propose a new theory of technomoral virtue ethics centred on the cognitive representation of the people, animals and objects we interact with. Through this theory, we see that the morality of harming sentient-adjacent beings and inanimate objects – with kicking a robot dog serving as the simplified case through which a broader range of issues was explored – depends entirely on the representation of that entity in the person’s mind at the time of the HRI.
References
1. “The fundamental principle of morality — the CI — is none other than the law of an autonomous will. Thus, at the heart of Kant’s moral philosophy is a conception of reason whose reach in practical affairs goes well beyond that of a Humean ‘slave’ to the passions. Moreover, it is the presence of this self-governing reason in each person that Kant thought offered decisive grounds for viewing each as possessed of equal worth and deserving of equal respect.” (Johnson 2022, https://plato.stanford.edu/entries/kant-moral/)
2. “Consequentialism, as its name suggests, is simply the view that normative properties depend only on consequences.” (Sinnott-Armstrong 2023, https://plato.stanford.edu/entries/consequentialism)
3. “Consequentialism is thought to be in significant conflict with animal rights theory because it does not regard activities such as confinement, killing, and exploitation as in principle morally wrong. Proponents of the ‘Logic of the Larder’ argue that consequentialism results in an implausibly pro-exploitation stance, permitting us to eat farmed animals with positive well-being to ensure future such animals exist. Proponents of the ‘Logic of the Logger’ argue that consequentialism results in an implausibly anti-conservationist stance, permitting us to exterminate wild animals with negative well-being to ensure future such animals do not exist. We argue that this conflict is overstated. Once we have properly accounted for indirect effects, such as the role that our policies play in shaping moral attitudes and behavior and the importance of accepting policies that are robust against deviation, we can see that consequentialism may converge with animal rights theory significantly, even if not entirely.” (John & Sebo 2020, Consequentialism and Nonhuman Animals)
4. “Virtue ethics is currently one of three major approaches in normative ethics. It may, initially, be identified as the one that emphasizes the virtues, or moral character, in contrast to the approach that emphasizes duties or rules (deontology) or that emphasizes the consequences of actions (consequentialism).” (Hursthouse & Pettygrove 2022, https://plato.stanford.edu/entries/ethics-virtue/)
5. “This entails a number of shifts or revisions of the initial virtue ethics approach to moral standing. First we shifted the ethical attention from the potential moral patient or object (the robot) to the moral agent or subject (the human user or abuser). Then we performed a second shift: one from the individual moral agent or subject to the practice as a whole, in which moral excellence may grow and flourish or not. Finally, there also has been a third shift: from the moment of (ab)use to the wider temporal horizon: the history of the practice—and hence the history of abuse.” (Coeckelbergh 2021, p. 35, How to Use Virtue Ethics for Thinking About the Moral Standing of Social Robots: A Relational Interpretation in Terms of Practices, Habits, and Performance)
6. “What is potentially wrong about molesting sex robots, for example, is not that harm is done to the robot and not only that this individual behavior might badly reflect on the moral character of the person in the sense of mental dispositions, but also with the social context in which this abuse arises: with the practice of using women for sex (that is, as if they were a machine), and with the way current intimate relationships, understood as a practice, are sometimes organized in particular contexts.” (Coeckelbergh 2021, p. 38, How to Use Virtue Ethics for Thinking About the Moral Standing of Social Robots: A Relational Interpretation in Terms of Practices, Habits, and Performance)