Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0014
- Subject: Philosophy, Moral Philosophy
The richness of human moral decision making is underscored by the project of developing an artificial moral agent. This brief epilogue discusses how designing artificial moral agents feeds back into our understanding of ourselves as moral agents and of the nature of ethical theory itself. The limitations of current ethical theory for developing the control architecture of artificial moral agents highlight deep questions about the purpose of such theories.
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.001.0001
- Subject: Philosophy, Moral Philosophy
The human‐built environment is increasingly populated by artificial agents that, through artificial intelligence (AI), are capable of acting autonomously. The software controlling these autonomous systems is, to date, "ethically blind" in the sense that their decision‐making capabilities do not involve any explicit moral reasoning. The title Moral Machines: Teaching Robots Right from Wrong refers to the need for these increasingly autonomous systems (robots and software bots) to become capable of factoring ethical and moral considerations into their decision making. The new field of inquiry directed at the development of artificial moral agents goes by a number of names, including machine morality, machine ethics, roboethics, and artificial morality. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems.
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0005
- Subject: Philosophy, Moral Philosophy
How close could artificial agents, lacking human qualities such as consciousness and emotions, come to being considered moral agents? Chapter 4 begins by discussing whether a "mere" machine can be a moral agent. A pragmatically oriented approach is developed which recognizes that full‐blown moral agency (which depends on "strong AI"), or even AI powerful enough to pass the Turing Test, may be beyond current or future technology, but which locates the project of developing artificial moral agents in the space between operational morality and genuine moral agency. This niche is labeled "functional morality." The goal of this chapter is to address what the various approaches to artificial intelligence (AI), from traditional symbol processing to more recent approaches based on embodied cognition, can provide toward functional morality.
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0003
- Subject: Philosophy, Moral Philosophy
A framework is provided for understanding the trajectory toward increasingly sophisticated artificial moral agents, emphasizing two dimensions: autonomy and sensitivity to morally relevant facts. Systems low on both dimensions have "operational morality": their moral significance is entirely in the hands of designers and users. Systems intermediate on either dimension have "functional morality": the machines themselves can assess and respond to moral challenges. Full moral agents, high on both dimensions, may be unattainable with present technology. This framework is compared to Moor's categories, which range from implicit ethical agents, whose actions have ethical impact, to explicit ethical agents, which reason explicitly about ethics. Different ethical issues are raised by AI's various objectives, from the augmentation of human decision making (basic decision support systems to cyborgs) to fully autonomous systems. Finally, the feasibility of a modified Turing Test for evaluating artificial moral agents—a Moral Turing Test—is discussed.
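To make the two‐dimensional framework concrete, here is a minimal sketch (not from the book) that maps hypothetical autonomy and moral‐sensitivity scores onto the three categories; the numeric scales and thresholds are invented purely for illustration.

```python
# Toy sketch of the two-dimensional framework. The [0, 1] scales and the
# cutoff values are hypothetical illustrations, not taken from the book.

def classify_agent(autonomy: float, sensitivity: float) -> str:
    """Place a system on the operational/functional/full-agency spectrum."""
    if autonomy < 0.3 and sensitivity < 0.3:
        # Moral significance rests entirely with designers and users.
        return "operational morality"
    if autonomy > 0.8 and sensitivity > 0.8:
        # High on both dimensions; arguably beyond present technology.
        return "full moral agency"
    # Intermediate on either dimension: the machine itself can assess
    # and respond to (some) moral challenges.
    return "functional morality"

print(classify_agent(0.1, 0.2))  # e.g., a thermostat-like device
print(classify_agent(0.6, 0.4))  # e.g., an autonomous decision-support bot
```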
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0012
- Subject: Philosophy, Moral Philosophy
This chapter presents a specific framework in which the rational and the supra‐rational capacities needed by artificial moral agents might be combined in a single machine that has artificial general intelligence. In collaboration with Stan Franklin of the University of Memphis, the authors describe how moral decision‐making capacities might be incorporated into Franklin's "LIDA" model of cognition and consciousness. The LIDA model provides a general framework for understanding cognitive cycles of perception, attention, deliberation, evaluation, and decision making.
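As a loose illustration of the kind of cognitive cycle the LIDA model describes, the following sketch runs a perceive/attend/deliberate/evaluate/decide loop. All stage functions are hypothetical stand‐ins that greatly simplify Franklin's actual architecture (which involves codelets, a global workspace, and much more).

```python
# Loose, invented sketch of a LIDA-style cognitive cycle; not Franklin's
# implementation. Each stage is a hypothetical stand-in.

from dataclasses import dataclass, field

@dataclass
class Agent:
    memory: list = field(default_factory=list)

    def cycle(self, stimulus):
        percept = self.perceive(stimulus)      # perception
        salient = self.attend(percept)         # attention
        options = self.deliberate(salient)     # deliberation
        scored = self.evaluate(options)        # evaluation (incl. moral weights)
        action = max(scored, key=scored.get)   # decision making
        self.memory.append((salient, action))  # learning from the episode
        return action

    def perceive(self, stimulus):
        return {"features": stimulus}

    def attend(self, percept):
        return percept["features"]

    def deliberate(self, salient):
        return ["act", "wait"]

    def evaluate(self, options):
        # Hypothetical moral weighting of each candidate action.
        return {opt: (1.0 if opt == "wait" else 0.5) for opt in options}

agent = Agent()
print(agent.cycle("child near road"))  # -> "wait"
```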
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0001
- Subject: Philosophy, Moral Philosophy
The development of military robots deployed on the battlefield and of service robots in the home underscores the need for artificial moral agents. However, autonomous bots within existing computer systems are already making decisions that affect humans for good or ill. The topic of (ro)bot morality (a spelling that covers both robots and software bots within computer systems) has been explored in science fiction by authors such as Isaac Asimov with his three laws of robotics, in television shows such as Star Trek, and in various Hollywood movies. The project of this book, however, is not science fiction. Rather, current developments in computer science and robotics necessitate the project of building artificial moral agents. The preface places machine morality in the context of philosophical ethics and other sources of moral principles, and outlines the chapters of the book.
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0007
- Subject: Philosophy, Moral Philosophy
Implementing any top‐down ethical theory in an artificial moral agent will pose both computational and practical challenges. One central concern is framing the background information necessary for rule‐ and duty‐based conceptions of ethics and for utilitarianism. Asimov's three laws come readily to mind when considering rules for (ro)bots, but even these apparently straightforward principles are not likely to be practical for programming moral machines. Checking whether a machine's actions conform to high‐level rules such as the Golden Rule, the deontology of Kant's categorical imperative, or the general demands of consequentialism (e.g., utilitarianism) fails to be computationally tractable.
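The tractability worry can be made concrete with a toy calculation: an exhaustive consequentialist check must score every outcome branch, and the branch count grows exponentially with lookahead depth. The sketch below assumes a hypothetical world model with a fixed branching factor; the numbers are purely illustrative.

```python
# Toy illustration of why exhaustive consequentialist checking blows up.
# Assumes a hypothetical world model in which each action has `branching`
# possible outcomes per step; the figures are for illustration only.

def outcomes_to_evaluate(branching: int, horizon: int) -> int:
    """Outcome branches a utilitarian check must score for one action."""
    return branching ** horizon

for horizon in (2, 5, 10, 20):
    n = outcomes_to_evaluate(branching=10, horizon=horizon)
    print(f"horizon {horizon:2d}: {n:,} branches")

# A horizon of 20 already yields 10**20 branches: far more than any
# real-time system could score before it must act.
```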
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0002
- Subject: Philosophy, Moral Philosophy
Artificial moral agents are necessary and inevitable. Innovative technologies are converging on sophisticated systems that will require some capacity for moral decision making. With the implementation of driverless trains, the "trolley cases" invented by ethicists to study moral dilemmas may come to represent actual challenges for artificial moral agents. Among the difficult tasks for designers of such systems is specifying what the goals should be, i.e., what is meant by a "good" artificial moral agent. Computer viruses are among the software agents that already cause harm. Credit card approval systems are among the autonomous systems that already affect daily life in ethically significant ways but are "ethically blind" because they lack moral decision‐making capacities. Pervasive and ubiquitous computing, the introduction of service robots into the home to care for the elderly, and the deployment of machine‐gun‐carrying military robots all expand the possibilities for software and robots that lack sensitivity to ethical considerations to harm people.
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0009
- Subject: Philosophy, Moral Philosophy
The topic of this chapter is the application of virtue ethics to the development of artificial moral agents. The difficulties of applying general moral theories in a top‐down fashion to artificial moral agents motivate the return to the virtue‐based conception of morality that can be traced to Aristotle. Virtues constitute a hybrid between top‐down and bottom‐up approaches in that the virtues themselves can be explicitly described, but their acquisition as moral character traits seems essentially to be a bottom‐up process. Placing this approach in a computational framework, the chapter discusses the suitability of the kinds of neural network models provided by connectionism for training (ro)bots to distinguish right from wrong.
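A minimal sketch of the connectionist idea, under invented assumptions: a single‐layer perceptron trained on toy labeled situations to output a right/wrong judgment. The features, data, and labels are illustrative stand‐ins, not a model from the chapter, and the networks the chapter discusses are far richer.

```python
# Minimal connectionist toy: a perceptron trained to label situations
# "right" (1) or "wrong" (0). All features and labels are invented.

import random

random.seed(0)

# Each situation: (tells_truth, keeps_promise, causes_harm) -> label
TRAINING = [
    ((1, 1, 0), 1),  # honest, promise-keeping, harmless -> right
    ((1, 0, 0), 1),
    ((0, 1, 1), 0),  # deceptive and harmful -> wrong
    ((0, 0, 1), 0),
]

weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
bias = 0.0
lr = 0.1

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# Standard perceptron update rule, run over the data a few times.
for _ in range(50):
    for x, label in TRAINING:
        error = label - predict(x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print(predict((1, 1, 0)))  # learned judgment: 1 ("right")
print(predict((0, 0, 1)))  # learned judgment: 0 ("wrong")
```

The point of the toy mirrors the chapter's framing of virtues as a hybrid: the label categories are specified top‐down, but the weights that realize the discrimination are acquired bottom‐up from examples.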
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0006
- Subject: Philosophy, Moral Philosophy
The architectures for artificial moral agents fall within two broad approaches: the top‐down imposition of an ethical theory, and the bottom‐up building of systems that aim at goals or standards that may or may not be specified in explicitly theoretical terms. How might moral decision making specifically be implemented in (ro)bots? This chapter outlines what philosophers and engineers have to offer each other and describes a basic framework for top‐down and bottom‐up approaches to the design of artificial moral agents.
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0004
- Subject: Philosophy, Moral Philosophy
The chapter begins with an overview of the philosophy of technology to provide a context for the specific concerns raised by the prospect of artificial moral agents. Some concerns, such as whether artificial moral agents will lead humans to abdicate responsibility to machines, seem particularly pressing. Other concerns, such as the prospect of humans becoming literally enslaved to machines, seem highly speculative. The unsolved problem of technology risk assessment is how heavily to weigh catastrophic possibilities against the advantages provided by new technologies: when should the precautionary principle be invoked? Historically, philosophers of technology have served as external critics, but increasingly philosophers are engaged in engineering activism, bringing sensitivity to human values into the design of systems. Human anthropomorphizing of robotic dolls, robopets, household robots, companion robots, sex toys, and even military robots raises the question of whether these artifacts dehumanize people and substitute impoverished relationships for real human interactions.