Melissa Terras
- Published in print: 2006
- Published Online: September 2007
- ISBN: 9780199204557
- eISBN: 9780191708121
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199204557.001.0001
- Subject: Classical Studies, British and Irish History: BCE to 500CE
The ink and stylus tablets discovered at the Roman fort of Vindolanda are a unique resource for scholars of ancient history. However, the stylus tablets in particular are extremely difficult to read. This book details the development of what appears to be the first system constructed to aid experts in the process of reading an ancient document, exploring the extent to which techniques from artificial intelligence can be used to develop a system that could aid historians in reading the stylus texts. Using knowledge elicitation techniques (borrowed from artificial intelligence and engineering science), a model is proposed for how experts construct a reading of a text. A prototype system is presented that can read in image data and produce realistic and plausible textual interpretations of the writing that appears on the documents. Incorporating knowledge elicited from experts working on the texts, and utilizing image processing techniques developed in engineering science to analyze the stylus tablets, the book includes a corpus of letter forms generated from the Vindolanda text corpus, and a detailed description of the architecture of the system. This research presents the first stages towards developing a cognitive visual system that can propagate realistic interpretations from image data.
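The book does not reproduce its code here, but the underlying idea of scoring candidate letter identities for an image region against a corpus of known letter forms can be illustrated with a small, purely hypothetical sketch. The template data, letter set, and normalised-correlation score below are illustrative assumptions, not the system actually described in the book.

```python
# Hypothetical illustration only: rank letter hypotheses for an image patch by
# similarity to letter-form templates. Templates, letters, and scores are invented.
import numpy as np

def normalised_correlation(patch: np.ndarray, template: np.ndarray) -> float:
    """Pearson-style correlation between a candidate patch and a letter-form template."""
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    return float((p * t).mean())

def rank_letter_hypotheses(patch: np.ndarray, templates: dict) -> list:
    """Return (letter, score) pairs ordered from most to least plausible."""
    scores = {letter: normalised_correlation(patch, tmpl) for letter, tmpl in templates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage: random 16x16 arrays stand in for corpus-derived letter forms.
rng = np.random.default_rng(0)
templates = {letter: rng.random((16, 16)) for letter in "aeimnrstu"}
patch = templates["r"] + 0.1 * rng.random((16, 16))  # a noisy observation of an 'r'
print(rank_letter_hypotheses(patch, templates)[:3])
```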
B. Jack Copeland (ed.)
- Published in print: 2005
- Published Online: January 2008
- ISBN: 9780198565932
- eISBN: 9780191714016
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198565932.001.0001
- Subject: Mathematics, History of Mathematics
The mathematical genius Alan Turing (1912-1954) was one of the greatest scientists and thinkers of the 20th century. Now well known for his crucial wartime role in breaking the ENIGMA code, he was the first to conceive of the fundamental principle of the modern computer — the idea of controlling a computing machine's operations by means of coded instructions, stored in the machine's ‘memory’. In 1945, Turing drew up his revolutionary design for an electronic computing machine — his Automatic Computing Engine (‘ACE’). A pilot model of the ACE ran its first programme in 1950 and the production version, the ‘DEUCE’, went on to become a cornerstone of the fledgling British computer industry. The first ‘personal’ computer was based on Turing's ACE. This book describes Turing's struggle to build the modern computer. It contains first-hand accounts by Turing and by the pioneers of computing who worked with him. The book describes the hardware and software of the ACE and contains chapters describing Turing's path-breaking research in the fields of Artificial Intelligence (AI) and Artificial Life (A-Life).
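The stored-program principle mentioned above (coded instructions held in the same memory as the data they operate on) is easy to make concrete. The toy machine below is a generic illustration of that principle, not of the ACE's actual instruction set; the opcodes and example program are invented.

```python
# Toy stored-program machine: instructions and data share one memory, and the control
# loop fetches coded instructions from that memory. Opcodes are invented for illustration.
def run(memory):
    """memory: list of cells; an instruction cell is a tuple (opcode, operand)."""
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]        # copy a data cell into the accumulator
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program (cells 0-3) and data (cells 4-6) live in the same memory: compute 2 + 3.
memory = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
print(run(memory)[6])  # -> 5
```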
Stefan Helmreich, Sophia Roosth, and Michele Friedner
- Published in print: 2015
- Published Online: October 2017
- ISBN: 9780691164809
- eISBN: 9781400873869
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691164809.003.0003
- Subject: Anthropology, Social and Cultural Anthropology
This chapter examines how scientists working on Artificial Life have understood their practices as situated historically. It first considers the practice of finding genealogies for Artificial Life, arguing that such a search for ancestors carries acute historiographical and epistemological dangers. It then comments on computer simulations that fashion the computer as a kind of fish tank into which users can peer to see artificial life forms swimming about. It also discusses a different realm of modeling, that of cognition in Artificial Intelligence. The chapter concludes by suggesting a mode of imagining history that it calls an underwater archaeology of knowledge. In an underwater archaeology of knowledge, representational artifacts become mixed in with portraits of the world, requiring new sorts of narrative disentangling and qualification.
B. Jack Copeland and Diane Proudfoot
- Published in print: 2005
- Published Online: January 2008
- ISBN: 9780198565932
- eISBN: 9780191714016
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198565932.003.0006
- Subject: Mathematics, History of Mathematics
This chapter discusses Turing's contributions to the field of computing. Topics covered include the Turing machine, cryptanalytic machines, the ACE and the EDVAC, the Manchester computer, artificial intelligence, and artificial life.
Robert Geraci
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780195393026
- eISBN: 9780199777136
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195393026.001.0001
- Subject: Religion, Religion and Society
The hope that we might one day upload our minds into robots and, eventually, cyberspace has become commonplace and now affects life across a broad spectrum of western culture. Popular science books on robotics and artificial intelligence (AI) by Hans Moravec, Ray Kurzweil, and others argue that one day advances in robotics, AI, and neurobiology will enable us to copy our conscious selves into machines, which will take over the cosmos and live eternally in a perfect world of supremely intelligent Mind. Such views borrow from the apocalyptic traditions of Judaism and Christianity and influence the politics of research grants, life in online virtual reality environments, and conversations within philosophical, legal, and theological circles. Apocalyptic AI is important to scientific research because it promotes public and private funding for robotics and AI. In addition, residents of the online world Second Life have adopted it as a worldview that gives meaning to their activities, and many already wish to live in Second Life or a similar environment forever, just as Moravec and Kurzweil promise they will. Finally, several of the claims of Apocalyptic AI have become a serious topic of debate for philosophers of mind, legal scholars, and theologians. The successful integration of religion, science, and technology in Apocalyptic AI creates a powerful worldview with considerable influence in modern life and challenges many of our long-held assumptions about the relationship between religion and science.
Robert M. Geraci
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780195393026
- eISBN: 9780199777136
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195393026.003.0002
- Subject: Religion, Religion and Society
Apocalyptic AI is transmitted to roboticists and AI researchers through science fiction and is expressed in pop science as a means of raising the cultural prestige of research and researchers and justifying funding spent on robotics and AI. Science fiction often uses religious imagery and language to explore culture, and several authors have engaged the idea that human beings might upload their minds into machines. The influence of science fiction is widely accepted among roboticists, who gain inspiration from it, as almost certainly happened for Hans Moravec, the pioneer of Apocalyptic AI thinking. Popular science authors in robotics and AI fuse religious and scientific work into a meaningful worldview in order to gain the benefits of both. Such role-hybridization increases the prestige of the researchers and plays a part in military, government, and private investment in robotics and AI.
Craig DeLancey
- Published in print: 2002
- Published Online: November 2003
- ISBN: 9780195142716
- eISBN: 9780199833153
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/0195142713.001.0001
- Subject: Philosophy, Metaphysics/Epistemology
Passionate Engines shows that our best understanding of emotion has important implications for understanding intentionality, rationality, phenomenal consciousness, artificial intelligence, and other issues. Some theories of mind, of action, and of moral psychology, and some approaches in artificial intelligence, are shown to be inconsistent with our best understanding of emotions. However, our best understanding of emotions also suggests fruitful new approaches to the challenges of these disciplines. There are three additional themes. First, the book introduces a version of a theory of some emotions called the affect program theory. This theory is defended against social constructionist and cognitivist views of emotion, and shown to be able to account for the rationality of emotions and our ability to emote for fictions. Second, the book defends the hierarchical view of mind. Part of this view is the thesis that the primary topic of the study of mind and artificial intelligence is autonomy, and not the skills typically associated with intelligence. Third, the book challenges the simplistic associations that naturalism has come to have in much contemporary philosophy of mind, arguing that science typically complicates and enriches, instead of eliminating and reducing, our view of natural phenomena.
Robert M. Geraci
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780195393026
- eISBN: 9780199777136
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195393026.003.0001
- Subject: Religion, Religion and Society
Apocalyptic AI is a movement that exemplifies the longstanding connection between religion and science in the western world. The movement, based on popular science books by Hans Moravec and Ray Kurzweil, shares the theological categories of apocalyptic Judaism and Christianity: a dualistic worldview, a sense of alienation, an expectation of the end of alienation in a transcendent new world, and occupation in the new world in glorified new bodies. Apocalyptic AI is the belief that mind and body struggle with one another in a battle that has so far, inevitably, been won by the body but which will see our minds victorious when we can upload them into cyberspace where we will live forever with virtual bodies.
Robert M. Geraci
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780195393026
- eISBN: 9780199777136
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195393026.003.0004
- Subject: Religion, Religion and Society
Apocalyptic AI predictions have garnered so much attention that—in combination with rapidly progressing robotic technology—widespread public attention has focused upon how human beings and robots should and will relate to one another as machines get smarter. The influence of Apocalyptic AI extends to philosophical and scientific discussions about consciousness (especially in arguments over whether machines can or will be conscious), legal scholarship (where the rights of machines are debated) and moral and theological reasoning (in which both AI experts and theologians have considered the moral implications of conscious machines and wondered whether those machines will engage in human religious practice). Far from being irrelevant or easily dismissed as fantasy, Apocalyptic AI and its advocates have become major forces in contemporary culture.
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.001.0001
- Subject: Philosophy, Moral Philosophy
The human-built environment is increasingly being populated by artificial agents that, through artificial intelligence (AI), are capable of acting autonomously. The software controlling these autonomous systems is, to date, “ethically blind” in the sense that the decision-making capabilities of such systems do not involve any explicit moral reasoning. The title Moral Machines: Teaching Robots Right from Wrong refers to the need for these increasingly autonomous systems (robots and software bots) to become capable of factoring ethical and moral considerations into their decision making. The new field of inquiry directed at the development of artificial moral agents is referred to by a number of names, including machine morality, machine ethics, roboethics, or artificial morality. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems.
John O. McGinnis
- Published in print: 2012
- Published Online: October 2017
- ISBN: 9780691151021
- eISBN: 9781400845453
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691151021.003.0007
- Subject: Political Science, Public Policy
This chapter focuses on artificial intelligence (AI). The development of machine intelligence can directly improve governance, because progress in AI can help in assessing policy consequences. More substantial machine intelligence can process data, generate hypotheses about the effects of past policy, and simulate the world to predict the effects of future policy. Thus, it is more important to formulate a correct policy toward AI than toward any other rapidly advancing technology, because that policy will help advance beneficial policies in all other areas. The holy grail of AI is so-called strong AI, defined as a general purpose intelligence that approximates that of humans. The correct policy for AI—substantial government support for Friendly AI—both promotes AI as an instrument of collective decision making and helps prevent the risk of machine takeover.
Joscha Bach
- Published in print: 2009
- Published Online: May 2009
- ISBN: 9780195370676
- eISBN: 9780199870721
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195370676.001.0001
- Subject: Psychology, Cognitive Models and Architectures
Although computational models of cognition have become very popular, these models are relatively limited in their coverage of cognition—they usually only emphasize problem solving and reasoning, or treat perception and motivation as isolated modules. The first architecture to cover cognition more broadly is the Psi theory, developed by Dietrich Dörner. By integrating motivation and emotion with perception and reasoning, and including grounded neuro-symbolic representations, the Psi theory contributes significantly to an integrated understanding of the mind. It provides a conceptual framework that highlights the relationships between perception and memory, language and mental representation, reasoning and motivation, emotion and cognition, autonomy and social behavior. So far, the Psi theory's origin in psychology, its methodology, and its lack of documentation have limited its impact. This book adapts the Psi theory to cognitive science and artificial intelligence, by elucidating both its theoretical and technical frameworks, and clarifying its contribution to how we have come to understand cognition.
Wendell Wallach and Colin Allen
- Published in print: 2009
- Published Online: January 2009
- ISBN: 9780195374049
- eISBN: 9780199871889
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195374049.003.0013
- Subject: Philosophy, Moral Philosophy
The desirability of computers making moral decisions poses an array of future dangers that are difficult to anticipate but will, nevertheless, need to be monitored and managed. Public policy and mechanisms of social and business liability management will both play a role in the safety, direction, and speed with which artificially intelligent systems are developed. Fear is not likely to stop scientific research, but it is likely that various fears will slow it down. Mechanisms for distinguishing real dangers from speculation and hype fueled by science fiction are needed. This chapter surveys ways of addressing issues of rights and accountability for (ro)bots and touches on topics such as legal personhood, self-replicating robots, the possibility of a “singularity” at which AI outstrips human intelligence, and the transhumanist movement that sees the future of humanity itself as an inevitable (and desirable) march toward cyborg beings.
Susan W. Brenner
- Published in print: 2007
- Published Online: January 2009
- ISBN: 9780195333480
- eISBN: 9780199855353
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195333480.001.0001
- Subject: Law, Intellectual Property, IT, and Media Law
In this book, Susan Brenner analyzes the complex and evolving interactions between law and technology and provides a thorough and detailed account of the law in technology at the beginning of the 21st century. She draws upon recent technological advances, evaluating how developing technologies may alter how humans interact with each other and with their environment. She analyzes the development of technology as shifting from one of “use” to one of “interaction,” and argues that this interchange requires us to reconceptualize our approach to legal rules, which were originally designed to prevent the “misuse” of older technologies. Brenner argues that as technologies continue to evolve, the laws targeting the relationship between humans and technology must become, and should remain, neutral. She explains how older technologies rely on human implementation, but new, “smart” technologies are intelligent and autonomous, in varying degrees. This, she notes, will eventually lead to the ultimate progression in our relationship with technology: the fusion of human physiology and technology. Law in an Era of “Smart” Technology provides a detailed, historically-grounded analysis of why our traditional relationship with technology is evolving in ways that require a corresponding shift in our law.
Gordon M. Shepherd
- Published in print: 2009
- Published Online: February 2010
- ISBN: 9780195391503
- eISBN: 9780199863464
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195391503.003.0016
- Subject: Neuroscience, History of Neuroscience
This chapter focuses on the development of theoretical neuroscience. The mid-20th century marked the emergence of several new fields that laid the foundations for general theories of brain function. McCulloch and Pitts applied the symbolic logic metaphor to nerve cell circuits, postulating that specific interconnections could perform basic logic functions such as AND, OR, and AND-NOT gates. John von Neumann drew on this idea of the brain in formulating the classical architecture of the digital computer. Developments in control theory, neurology, and adaptive behavior came together in the new field of cybernetics. The McCulloch–Pitts oversimplified neurons contributed to the rise of artificial intelligence and neural nets. Von Neumann eventually realized that the fundamental computational elements of the nervous system are not oversimplified neurons, but individual synapses distributed on dendritic trees. This insight anticipated current work on developing more realistic large-scale neural networks, drawing on studies at all the levels of organization covered in this book, to simulate how the brain actually carries out its functions.
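The McCulloch–Pitts claim that simple threshold units over binary inputs can realise basic logic functions is straightforward to demonstrate. The following minimal sketch is an illustration of that idea, not code from the chapter; the weights and thresholds are chosen by hand for the example.

```python
# Minimal McCulloch-Pitts-style threshold units realising AND, OR, and AND-NOT gates.
def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(x, y):
    return mp_neuron((x, y), weights=(1, 1), threshold=2)

def OR(x, y):
    return mp_neuron((x, y), weights=(1, 1), threshold=1)

def AND_NOT(x, y):  # x AND (NOT y)
    return mp_neuron((x, y), weights=(1, -1), threshold=1)

for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y}  AND={AND(x, y)}  OR={OR(x, y)}  AND-NOT={AND_NOT(x, y)}")
```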
Susan W. Brenner
- Published in print: 2007
- Published Online: January 2009
- ISBN: 9780195333480
- eISBN: 9780199855353
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195333480.003.0008
- Subject: Law, Intellectual Property, IT, and Media Law
This chapter summarizes the analysis presented in the earlier chapters and addresses its implications. It notes the difficulties involved in forecasting the ways in which misuse will manifest itself in the not-too-remote future — when technology will become far, far more sophisticated and misuse may be committed not only by un-enhanced humans, but also by cyborgs and perhaps even by non-human intelligences. The chapter concludes, however, that if in enacting laws we keep our focus on what is relevant (behavior) and what is circumstance (technology), lawmaking should be much more effective and much more efficient than it often is at the very beginning of the 21st century.
Stuart Russell
- Published in print: 2002
- Published Online: February 2006
- ISBN: 9780195147667
- eISBN: 9780199785865
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0195147669.003.0002
- Subject: Philosophy, Metaphysics/Epistemology
This chapter considers how to formalize intelligence or rationality in a way that has value for the development of agents built for a specific application and of general theories of intelligence. It presents three candidates that traditionally have stood as formalizations of intelligence: perfect rationality, calculative rationality, and meta-level rationality. Perfect rationality is an abstraction that does not correspond to any physical reasoner. Calculative rationality fails to scale up to problems of sufficient and interesting complexity. Meta-level rationality pushes the problem into a never-ending regress. As an alternative, this chapter considers the notion of bounded optimality as a workable proxy for theorizing about machine intelligence. This notion rests on two crucial elements: that behaviors and decisions happen in real time and that an agent is defined by a particular (software and hardware) architecture and a particular program that runs on that architecture. Under this view, an agent is bounded optimal if it maximizes the utility of its behavior for a task within the demands of the environment. The chapter then elaborates on the role of adaptive, inductive mechanisms as the means for making gains in calculative and meta-level rationality for real-world application systems, and for bounded optimality more generally.
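Bounded optimality as summarised here is usually stated as an optimisation over agent programs rather than over individual actions. The following is a schematic rendering of that definition in notation chosen for this listing, not a formula quoted from the chapter.

```latex
% Schematic statement of bounded optimality (notation is a gloss, not a quotation):
% among the programs l that machine M can run, the bounded-optimal program maximises
% the expected value V of the resulting agent's behaviour over the environment class E
% under the performance (utility) measure U.
\[
  l_{\mathrm{opt}} \;=\; \operatorname*{arg\,max}_{l \in \mathcal{L}_M}
  \; V\bigl(\mathrm{Agent}(l, M),\, \mathbf{E},\, U\bigr)
\]
```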
Tok Thompson
- Published in print: 2019
- Published Online: May 2020
- ISBN: 9781496825087
- eISBN: 9781496825131
- Item type: chapter
- Publisher: University Press of Mississippi
- DOI: 10.14325/mississippi/9781496825087.003.0009
- Subject: Sociology, Culture
Artificial intelligence programs have increasingly entered public discourse in many diverse and overlapping ways. The various artificial intelligences are connected to our biologically based ones largely (though not solely) via the cyber network, which itself increasingly draws our species into its communicative framework. In this new, mediated, cyborg realm of culture there are no non-human animals, or plants, or any other natural forms of intelligence, but that does not mean we are all alone. Rather, there are new voices in our shared agora, now, and their voices do not necessarily attend to our own. This chapter explores the cultural overlaps of human and artificial intelligences online.
Ajay Agrawal, Joshua Gans, and Avi Goldfarb (eds)
- Published in print: 2019
- Published Online: January 2020
- ISBN: 9780226613338
- eISBN: 9780226613475
- Item type: book
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226613475.001.0001
- Subject: Economics and Finance, Microeconomics
Recent advances in artificial intelligence (AI) highlight its potential to affect productivity, growth, inequality, market power, innovation, and employment. In September 2017, the National Bureau of Economic Research held its first conference on the Economics of AI in Toronto. The purpose of the conference and associated volume is to set the research agenda for economists working on AI. The focus of the volume is on the economic impact of machine learning, a branch of computational statistics that has driven the recent excitement around AI. The volume also highlights key questions on the economic impact of robotics and automation, as well as the potential economic consequences of a still-hypothetical artificial general intelligence. The volume covers four broad themes: AI as a general purpose technology; the relationship between AI, growth, jobs, and inequality; regulatory responses to changes brought on by AI; and the effects of AI on the way economic research is conducted. In highlighting these themes, the volume provides several frameworks for understanding the economic impact of AI. In doing so, it identifies a number of key open research questions in a variety of research areas including productivity, growth, decision-making, jobs, inequality, market structure, privacy, trade, liability, political economy, econometrics, behavioral economics, and innovation.
Ulrike Hahn and Michael Ramscar (eds)
- Published in print: 2001
- Published Online: March 2012
- ISBN: 9780198506287
- eISBN: 9780191686962
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198506287.001.0001
- Subject: Psychology, Cognitive Psychology
Understanding how objects are partitioned into useful groups to form concepts is important to most disciplines. Concepts allow us to treat different objects equivalently according to shared attributes, and hence to communicate about, draw inferences from, reason with, and explain these objects. Understanding how concepts are formed and used is thus essential to understanding and applying these basic processes, and the topic of similarity-based classification is central to psychology, artificial intelligence, statistics, and philosophy. This book provides an interdisciplinary overview of this area.
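Similarity-based classification, the topic the volume surveys, can be made concrete with a very small sketch: an object is assigned to the concept whose stored exemplars it most resembles. The attribute encoding, distance measure, and example data below are invented for illustration and are not drawn from the book.

```python
# Toy similarity-based classifier: assign an item to the concept with the nearest
# stored exemplar (Euclidean distance over shared attributes). Data are invented.
import numpy as np

def classify(item: np.ndarray, exemplars: dict) -> str:
    """Return the concept whose closest exemplar is nearest to the item."""
    best_concept, best_dist = None, float("inf")
    for concept, points in exemplars.items():
        dist = float(np.linalg.norm(points - item, axis=1).min())
        if dist < best_dist:
            best_concept, best_dist = concept, dist
    return best_concept

# Attributes: (has_wings, lays_eggs), coded on a 0-1 scale.
exemplars = {
    "bird":   np.array([[0.9, 0.8], [0.8, 0.9]]),
    "mammal": np.array([[0.1, 0.1], [0.2, 0.0]]),
}
print(classify(np.array([0.85, 0.7]), exemplars))  # -> bird
```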