Donald Davidson
- Published in print: 2004
- Published Online: August 2004
- ISBN: 9780198237549
- eISBN: 9780191601378
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198237545.003.0005
- Subject: Philosophy, General
Finds fault with Turing's answer to the question, ‘Can a computer think?’ Turing believed that if the answers given by a computer and a person leave an interpreter unable to discriminate between them, then computers must be said to be able to think. The author objects that in order for a computer to think, it must mean something by the answer it gives. Consequently, without evidence that a computer not merely possesses the syntax of the language it is responding in but also has a semantics, the findings of Turing's Test cannot be used as evidence for the claim that computers can, even in theory, think. Understanding the semantics of an object or creature requires that the interpreter be able to observe what, in the world shared by interpreter and interpretant, causes the latter's responses; having a semantics requires a history of engagement with others and with objects in the world. Turing's test for thought is inadequate, according to the author, not because it restricts the evidence to what can be observed about the computer from the outside, but because it does not allow enough of what is outside to be observed.
Ned Schantz
- Published in print: 2008
- Published Online: September 2008
- ISBN: 9780195335910
- eISBN: 9780199868902
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195335910.003.0007
- Subject: Literature, Film, Media, and Cultural Studies, Women's Literature
This chapter considers how a cultural insistence on confined space emerges in direct proportion to the manifest range or intensity of female networking. Beginning with the tradition of the locked room mystery and its disturbing misogynistic tendencies, the chapter analyses the isolating effects of technologies from telephones to e-mail when deployed in the name of romance or safety. A brief philosophical detour yields a curious opportunity in the epistemology of artificial intelligence as established by the so-called Turing Test, where the locked room required by the test’s controls creates an unanticipated space of gender ambiguity and desire. This dynamic returns in the pseudonymous e-mail choreography of You’ve Got Mail, only to be shut down by gothic forces in thin disguise. The chapter finally considers the gendered inversion of space in Bound as an impressive, if still troubling, escape from the space of crime.
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0003
- Subject: Philosophy, Moral Philosophy
A framework is provided for understanding the trajectory toward increasingly sophisticated artificial moral agents, emphasizing two dimensions: autonomy and sensitivity to morally relevant facts. Systems low on both dimensions have “operational morality”: their moral significance is entirely in the hands of designers and users. Systems intermediate on either dimension have “functional morality”: the machines themselves can assess and respond to moral challenges. Full moral agents, high on both dimensions, may be unattainable with present technology. This framework is compared to Moor's categories, which range from implicit ethical agents, whose actions have ethical impact, to explicit ethical agents, which reason explicitly about ethics. Different ethical issues are raised by AI's various objectives, from the augmentation of human decision making (ranging from basic decision support systems to cyborgs) to fully autonomous systems. Finally, the feasibility of a modified Turing Test for evaluating artificial moral agents—a Moral Turing Test—is discussed.
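For readers who prefer to see the two-dimensional framework spelled out operationally, the following is a minimal sketch, assuming an invented 0-to-1 scoring of each dimension; the function name, thresholds, and scale are hypothetical illustrations, not part of Wallach and Allen's text.

```python
# Hypothetical paraphrase of the autonomy/sensitivity framework (illustrative only;
# the 0-1 scale and thresholds are invented, not Wallach and Allen's).
def classify_agent(autonomy: float, moral_sensitivity: float) -> str:
    """Map two dimension scores (0.0 = low, 1.0 = high) to a category."""
    if autonomy < 0.3 and moral_sensitivity < 0.3:
        # Low on both dimensions: moral significance rests with designers and users.
        return "operational morality"
    if autonomy > 0.8 and moral_sensitivity > 0.8:
        # High on both dimensions: possibly unattainable with present technology.
        return "full moral agency"
    # Intermediate on either dimension: the machine itself can assess
    # and respond to (some) moral challenges.
    return "functional morality"

print(classify_agent(0.1, 0.1))  # operational morality
print(classify_agent(0.6, 0.4))  # functional morality
```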
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0005
- Subject: Philosophy, Moral Philosophy
How close could artificial agents, lacking human qualities such as consciousness and emotions, come to being considered moral agents? Chapter 4 begins by discussing the issue of whether a “mere” machine can be a moral agent. A pragmatically oriented approach is developed: it recognizes that full-blown moral agency (which depends on “strong AI”), or even AI powerful enough to pass the Turing Test, may be beyond current or future technology, but it locates the project of developing artificial moral agents in the space between operational morality and genuine moral agency. This niche is labeled “functional morality.” The goal of this chapter is to ask what the various approaches to artificial intelligence (AI), from traditional symbol-processing to more recent approaches based on embodied cognition, can provide toward functional morality.
Jeannette Bohg and Danica Kragic
- Published in print: 2016
- Published Online: September 2016
- ISBN: 9780262034326
- eISBN: 9780262333290
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262034326.003.0018
- Subject: Neuroscience, History of Neuroscience
Since the 1950s, robotics research has sought to build a general-purpose agent capable of autonomous, open-ended interaction with realistic, unconstrained environments. Cognition is perceived to be at the core of this process, yet understanding has been hampered because cognition is referred to differently within and across research areas and is not clearly defined. The classic robotics approach is decomposition into functional modules that perform planning, reasoning, and problem-solving, or that provide input to these mechanisms. Although advancements have been made and numerous success stories reported in specific niches, this systems-engineering approach has not succeeded in building such a cognitive agent. The emergence of an action-oriented paradigm offers a new approach: action and perception are no longer separable into functional modules but must be considered in a complete loop. This chapter reviews work on different mechanisms for action-perception learning and discusses the role of embodiment in the design of the underlying representations and learning. It discusses the evaluation of agents and suggests the development of a new embodied Turing Test. Appropriate scenarios need to be devised in addition to current competitions, so that abilities can be tested over long time periods.
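As a purely illustrative aside, the contrast between a modular pipeline and a closed action-perception loop can be sketched in a few lines of code; the toy environment, policy, and numeric details below are invented for illustration and do not come from the chapter.

```python
# Minimal sketch of a closed action-perception loop (illustrative; invented toy example).
# Perception and action are not isolated one-shot modules: each action changes the
# world, which changes the next observation, which changes the next action.
import random

class ToyEnvironment:
    """A one-dimensional world; the agent tries to reach position 0."""
    def __init__(self):
        self.position = random.randint(-5, 5)

    def observe(self):
        return self.position

    def act(self, action):
        self.position += action  # acting immediately alters future observations

def policy(observation):
    """Trivial reactive policy: step toward the goal."""
    if observation > 0:
        return -1
    if observation < 0:
        return 1
    return 0

env = ToyEnvironment()
for _ in range(20):
    obs = env.observe()       # perception
    if obs == 0:
        break
    env.act(policy(obs))      # action coupled to the current observation
print("final position:", env.position)
```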
Hector J. Levesque
- Published in print: 2012
- Published Online: August 2013
- ISBN: 9780262016995
- eISBN: 9780262301411
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262016995.003.0012
- Subject: Computer Science, Artificial Intelligence
This chapter explores the philosophical question of whether computers can really think. It considers the Turing Test and Searle's Chinese Room argument. It suggests that regardless of one's position on the philosophical issues, we are still left with what might be called the AI question: If it is indeed true that tricks and fakery are not sufficient to generate intelligent behavior such as passing some form of the Turing Test, then what is? In the end, it is this question that is perhaps the most profound one to emerge out of the entire discussion, and one that will not be resolved by merely arguing one way or another.
Paul Kockelman
- Published in print: 2017
- Published Online: July 2017
- ISBN: 9780190636531
- eISBN: 9780190636562
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780190636531.003.0007
- Subject: Linguistics, Sociolinguistics / Anthropological Linguistics
This chapter details the inner workings of spam filters, algorithmic devices that separate desirable messages from undesirable messages. It argues that such filters are a particularly important kind of sieve insofar as they readily exhibit key features of sieving devices in general, and algorithmic sieving in particular. More broadly, it describes the relation between ontology (assumptions that drive interpretations) and inference (interpretations that alter assumptions) as it plays out in the classification and transformation of identities, types, or kinds. Focusing on the unstable processes whereby identifying algorithms, identified types, and evasive transformations are dynamically coupled over time, it also theorizes various kinds of ontological inertia and highlights various kinds of algorithmic ineffability. Finally, it shows how similar issues underlie a much wider range of processes, such as the Turing Test, Bayesian reasoning, and machine learning more generally.
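To make the idea of algorithmic sieving concrete, here is a minimal sketch of the kind of Bayesian filter the chapter takes as its object. It is not Kockelman's own material; the class name, training phrases, and smoothing choices are hypothetical, chosen only to show how standing assumptions (a prior and learned word likelihoods) and fresh inference (scoring a new message) interact.

```python
# Minimal, hypothetical Naive Bayes spam sieve (illustrative only; not from the chapter).
# The standing "ontology" is the prior plus learned word counts; "inference" is the
# unnormalized log-posterior score computed for each incoming message.
import math
from collections import Counter

class NaiveBayesSieve:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.message_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def score(self, text):
        """Return unnormalized log-posterior scores for each label."""
        words = text.lower().split()
        total_messages = sum(self.message_counts.values())
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            # Log prior from how many messages of each kind have been seen.
            logp = math.log((self.message_counts[label] + 1) / (total_messages + 2))
            denom = sum(self.word_counts[label].values()) + len(vocab) + 1
            for w in words:
                # Laplace-smoothed likelihood of each word under the label.
                logp += math.log((self.word_counts[label][w] + 1) / denom)
            scores[label] = logp
        return scores

sieve = NaiveBayesSieve()
sieve.train("win money now", "spam")
sieve.train("lunch at noon tomorrow", "ham")
print(sieve.score("win a free lunch"))
```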
Michael Tye
- Published in print: 2017
- Published Online: November 2016
- ISBN: 9780190278014
- eISBN: 9780190278045
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780190278014.003.0003
- Subject: Philosophy, Philosophy of Mind, Moral Philosophy
Historically, major intellectual figures, most notably René Descartes, have been committed to the claim that only humans can have experiences. Radical conservatism of this sort is a mistake. The mistake has its origins in religious conviction, mind-body dualism, and an alleged connection between thought and language. This conservatism survives in certain contemporary views that hold that experience is necessarily thought-like or conceptual. However, as this chapter argues, there are few compelling reasons to adopt these conceptualist views, and they face many problems. Conceptualism, therefore, and with it the belief that consciousness is necessarily found only in humans, can be rejected.
Michael LaBossiere
- Published in print: 2017
- Published Online: October 2017
- ISBN: 9780190652951
- eISBN: 9780190652982
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190652951.003.0019
- Subject: Philosophy, Philosophy of Science
While sophisticated artificial beings are still the stuff of science fiction, it is reasonable to address the challenge of determining the moral status of such systems now. Since humans have spent centuries discussing the ethics of humans and animals, a sensible shortcut is to develop tests for matching artificial beings with existing beings and assigning them a corresponding moral status. While there are a multitude of moral theories addressing the matter of status, the focus is on two of the most common types. The first comprises theories based on reason (exemplified by Kant). The second comprises theories based on feeling (exemplified by Mill). Regardless of the actual tests, there will always be room for doubt. To address this, three arguments are presented in favor of the presumption of status, similar to that of the presumption of innocence in the legal system.