Susan and Michael Anderson
Who or what is deserving of moral consideration? Could intelligent, autonomously functioning machines be included in this category? Could machines ever be considered to be moral agents? If so, how can we ensure that they behave in an ethically responsible manner? How does moral agency relate to moral responsibility? Could intelligent, autonomously functioning machines be viewed as moral agents that are not morally responsible for their actions? Might we have a moral responsibility to (a) develop ethically trained machines that can bring about desirable states of affairs and (b) harness machine capabilities to further our understanding of ethics? These are questions that Susan will consider in the first part of their presentation as she explores the relationship between intelligent machines and ethics. In the second part, Susan and Michael will briefly summarize their research in machine ethics, ending with a demonstration of their General Ethical Dilemma Analyzer.
Machines as Moral Patients We Shouldn't Care About (Yet): The Interests and Welfare of Current Machines
The creation of artificial consciousness raises a variety of philosophical challenges. From an ethical perspective, serious issues arise as we develop consciousnesses that have the capacity for attitudes. However, current machines lack the capacities that would impose any moral restrictions on us. Until machines are developed that have a certain kind of consciousness, they should not be considered moral patients.
Patiency Is Not a Virtue: Suggestions for Co-Constructing an Ethical Framework Including Intelligent Artefacts
The question of whether AI can or should be afforded moral agency or patiency is not one amenable to simple discovery or reasoning, because we as societies are constantly constructing our artefacts, including our ethical systems. Here I briefly examine the origins and nature of ethical systems in a variety of species, then propose a definition of morality that facilitates the debate concerning not only whether it is ethical for us to confer moral agency and patiency on AI, but also whether it is ethical for us to build AI on which we would have to confer them.
Bridging the Responsibility Gap in Automated Warfare
Marc Champagne and Ryan Tonkens
Robert Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us to deny responsibility for their actions, yet too unlike us to be the targets of meaningful blame or praise, thereby fostering what Matthias has dubbed the responsibility gap. We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we might term blank check responsibility. A person of sufficiently high standing could accept responsibility for the actions of autonomous robotic devices even if that person could not be causally linked to those actions other than through this prior agreement. The basic intuition behind our proposal is that we can impute relations of responsibility even when no other form of contact can be established. The missed alternative we want to highlight, then, would consist in an exchange: the social prestige of occupying a given office would come at the price of signing away part of one's freedoms to a contingent and unpredictable future guided by another (in this case, artificial) agency.
Who cares about robots? A phenomenological approach to the moral status of autonomous intelligent machines
This paper addresses the problem of how to approach the question of the moral status of autonomous intelligent machines, in particular intelligent autonomous robots. Inspired by the phenomenological and hermeneutical philosophical traditions, it (1) proposes a shift in the epistemology of robotics (from objectivism to phenomenology, from object to subject-object relations, from the individual to the social and cultural, and from status to change) and (2) analyses what it is we care about when we care about the moral status of robots. This gives us an approach that implies epistemological anthropocentrism, but not necessarily moral anthropocentrism: whether or not we want to include robots in our world depends on the kind of moral and social relations that emerge between humans and other entities.
Moral philosophies are arguably all anthropocentric and so fundamentally concerned with biological mechanisms. Computationalism, on the other hand, sees biology as just one possible implementation medium. Can non-human, non-biological agents be moral? This paper looks at the nature of morals, at what is necessary for a mechanism to make moral decisions, and at the impact biology might have on the process. It concludes that moral behaviour is concerned solely with social well-being, independent of the nature of the individual agents that comprise the group. While biology certainly affects human moral reasoning, it in no way restricts the development of artificial moral agents. The consequences of sophisticated artificial mechanisms living with natural human ones are also explored. While the prospects for peaceful coexistence are not particularly good, it is the realisation that humans no longer occupy a privileged place in the world that is likely to be the most disconcerting. Computationalism implies we are mechanisms; probably the most immoral of moral mechanisms.
A Vindication of the Rights of Machines
David J. Gunkel
This paper responds to the machine question in the affirmative, arguing that machines, such as robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in three parts. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship, moral agency and patiency, and each demonstrates a failure. This failure occurs not because the machine is somehow unable to achieve what is considered necessary to count as a moral agent or patient, but because the standard characterizations of agency and patiency already fail to accommodate not just machines but also those entities who are currently regarded as moral subjects. The third part responds to this systemic failure by formulating an approach to ethics that is oriented and situated otherwise. This alternative proposes an ethics based not on some prior discovery concerning the ontological status of others, but on a decision that responds to and is able to be responsible for others and other kinds of otherness.
Can an unmanned drone be a moral agent? Ethics and accountability in military robotics
Remotely operated Unmanned Aerial Systems (UAS) or "drones" are now routinely deployed in theatres of war and are capable of lethal acts such as firing missiles under the control of their human operators. It would take a small technological step but a large legal and ethical one to allow them to make "kill" decisions autonomously. This paper outlines some general technical and ethical contexts surrounding the use of these weapons and examines a specific proposal for implementing ethical constraints on UAS, Arkin's "ethical governor". It is argued that the proposal is flawed in several respects: Arkin fails to support his bold claim that robots are capable of acting more ethically than humans, there is a lack of clarity in the formal representations of ethical constraints, and the metaphor of a "governor" is a misleading characterisation of the proposed system's functionality (as argued elsewhere by Matthias).
The robot, a stranger to ethics
Can an "autonomous" robot be ethical? Ethics is a discipline that calls upon certain capacities of an agent in pursuit of a certain end. We will show that the end of ethics is not attainable by a robot, even an autonomous one: because it lacks the necessary capacities, it is not and cannot be a moral agent. The field of ethics is therefore foreign to the robot, and we will show why it would not be useful to modify the definition of ethics in order to accommodate robots, whether one adopts the two traditional conceptions of ethics--those of Aristotle and of Kant--or a minimal definition of ethics.
Manipulation, Moral Responsibility, and Machines
In this paper, I argue that machines of sufficient complexity can qualify as morally responsible agents. To do this, I examine one form of the manipulation argument against compatibilism. The argument starts with a case in which an agent is programmed so that she satisfies the compatibilist conditions for moral responsibility, yet intuitively the agent is not morally responsible. It is then claimed that this agent is not relevantly different from a determined agent, thereby showing that determined agents also lack moral responsibility. In response, I argue that the agent is morally responsible, and that the only reason one would think otherwise is the assumption that humans have a soul that is being overridden by the programming. I then generalise this result to show that certain machines can qualify as morally responsible agents.
Behind the Mask: Machine Morality
Keith Miller, Marty J. Wolf and Frances Grodzinsky
We consider machines that have the ability to masquerade as human in the context of Floridi's Information Ethics and artificial evil. We analyze a variety of different robots and contexts and the ethical implications for the development of such robots. We demonstrate numerous concerns that arise due to the ambiguity introduced by masquerading machines, suggesting a need for careful consideration regarding the development of masquerading robots.
Machines and the Moral Community
Erica L. Neely
A key distinction in ethics is between members and non-members of the moral community. Over time our notion of this community has expanded as we have moved from a rationality criterion to a sentience criterion for membership. I argue that a sentience criterion can be understood in terms of respecting the interests and autonomy of a being and thus may be extended to self-aware and/or autonomous machines. Such machines exhibit a concept of self and thus desires for the course of their own existence; this gives them basic moral standing, although elaborating the nature of their rights is complex. While not all machines display autonomy, those which do must be treated as members of the moral community; to ignore their claims to moral recognition is to repeat the errors of colonialism.
Moral Agency, Moral Responsibility, and Artefacts: What Existing Artefacts Fail to Achieve (and Why), and Why They, Nevertheless, Can (and Do!) Make Moral Claims Upon Us
Joel Parthemore and Blay Whitby
This paper follows directly from our forthcoming paper in International Journal of Machine Consciousness, where we discuss the requirements for an artefact to be a moral agent and conclude that the artefactual question is ultimately a red herring. As we did in the earlier paper, we take moral agency to be that condition in which an agent can, appropriately, be held responsible for her actions and their consequences. We set a number of stringent conditions on moral agency. A moral agent must be embedded in a cultural and specifically moral context, and embodied in a suitable physical form. It must be, in some substantive sense, alive. It must exhibit self-conscious awareness: who does the "I" who thinks "I" think that "I" is? It must exhibit a range of highly sophisticated conceptual abilities, going well beyond what the likely majority of conceptual agents possess: not least, it must possess a well-developed moral space of reasons. Finally, it must be able to communicate its moral agency through some system of signs: a "private" moral world is not enough. After reviewing these conditions and pouring cold water on a number of recent claims for having achieved "minimal" machine consciousness, we turn our attention to a number of existing and, in some cases, commonplace artefacts that lack moral agency yet nevertheless require one to take a moral stance toward them, as if they were moral agents. Finally, we address another class of agents raising a related set of issues: autonomous military robots.
The holy will of ethical machines: a dilemma facing the project of artificial moral agents
In this paper I will assume that the technical hurdles facing the creation of full ethical machines will eventually be overcome. I will thus focus on ethical questions that arise in connection with their creation. These questions are basically two: (1) is their creation good for them? and (2) is it good for us (humans)? In asking the latter, I have a specific hazard in mind: since the very idea of full ethical machines implies that they will be able to make moral judgments about their own actions, it follows that they will be capable of morally judging humans as well, unless we deliberately block this ability. The hazard I see in this ability arises from their moral superiority, which I attempt to explain and substantiate in this paper.
Is there a continuity between man and machine?
Johnny Hartz Søraker
The principle of formal equality, one of the most fundamental and undisputed principles in ethics, states that a difference in treatment or value between two kinds of entities can only be justified on the basis of a relevant and significant difference between the two. Accordingly, when it comes to the question of what kind of moral claim an intelligent or autonomous machine might have, one way to answer this is by way of comparison with humans: Is there a fundamental difference between humans and machines that justifies unequal treatment, or will the two become increasingly continuous, thus making it increasingly dubious whether unequal treatment is justified? This question is inherently imprecise, however, because it presupposes a stance on what it means for two types of entities to be sufficiently similar, as well as on which types of properties are relevant to compare. In this paper, I will sketch a formal characterization of what it means for two types of entities to be continuous in this sense, discuss what it implies for two different types of entities to be (dis-)continuous with regard to both ethics and science, and discuss a dramatic difference in how two previously discontinuous entities might become continuous.
The centrality of machine consciousness to machine ethics: between realism and social-relationism
I compare a 'realist' with a 'social-relational' perspective on our judgments of the moral status of machines. I argue that moral status is closely bound up with a being's ability to experience states of conscious satisfaction or suffering (CSS). The social-relational view may be right that a wide variety of social interactions between us and machines will proliferate in future generations, and that the appearance of CSS-features in such machines may make moral-role attribution socially prevalent in human-machine relations. But the social world is enabled and constrained by the physical world. For non-biological CSS to exist, features analogous to the physiological features underlying biological CSS need to be present. Working out the details of such features will be a scientific inquiry sharing the same kind of 'objectivity' as, for instance, physicists' questions about dark matter.
Safety and Morality REQUIRE the Recognition of Self-Improving Machines as Moral/Justice Patients and Agents
Mark R. Waser
One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. We argue that this is solely due to an insufficient understanding of exactly what morality is and why it exists. To solve this, we draw from evolutionary biology/psychology, cognitive science, and economics to create a safe, stable, and self-correcting model that not only explains current human morality and answers the "machine question" but remains sensitive to current human intuitions, feelings, and logic while evoking solutions to numerous other urgent current and future dilemmas.
Strange Things Happen at the One Two Point: The Implications of Autonomous Created Intelligence in Speculative Fiction Media
Damien P. Williams
By its very nature, Science Fiction media has often concerned itself with advances in human enhancement as well as the creation of various autonomous, thinking, non-human beings. Unfortunately, since the initial proffering of the majority interpretation of Mary Shelley's seminal Frankenstein, and before, most speculative fiction media has taken the standpoint that to enhance ourselves, or to explore the creation of intelligences in this way, is doomed to failure, thus recapitulating the myths of Daedalus, of Prometheus, and of Lucifer, again and again. Rather than discussing and respecting the opportunity for a non-human intelligence to arise and demand rights, what we see and are made to fear are the uprisings of the robots or the artificial neural networks. In this work, I make use of specific films, books, and television shows to explore the philosophical and cultural implications of an alternate interpretation not only of Frankenstein, but of the whole field of science fiction.