S. Matthew Liao
- Published in print:
- 2020
- Published Online:
- October 2020
- ISBN:
- 9780190905033
- eISBN:
- 9780190905071
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190905033.003.0001
- Subject:
- Philosophy, Moral Philosophy
This introduction outlines in section I.1 some of the key issues in the study of the ethics of artificial intelligence (AI) and proposes ways to take these discussions further. Section I.2 discusses key concepts in AI, machine learning, and deep learning. Section I.3 considers ethical issues that arise because current machine learning is data-hungry; is vulnerable to bad data and bad algorithms; is a black box that has problems with interpretability, explainability, and trust; and lacks a moral sense. Section I.4 discusses ethical issues that arise because current machine learning systems may be working too well, and human beings can be vulnerable in the presence of these intelligent systems. Section I.5 examines ethical issues arising out of the long-term impact of superintelligence, such as how the values of a superintelligent AI can be aligned with human values. Section I.6 presents an overview of the essays in this volume.
Edmund T. Rolls
- Published in print:
- 2020
- Published Online:
- February 2021
- ISBN:
- 9780198871101
- eISBN:
- 9780191914157
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198871101.003.0019
- Subject:
- Neuroscience, Behavioral Neuroscience, Neuroendocrine and Autonomic
In this chapter, a comparison is made between computations in the brain and computations performed in computers. This is intended to be helpful to those engineers, computer scientists, and AI specialists interested in designing new computers that emulate aspects of brain function. In fact, the whole of this book is intended to be useful for this aim, by setting out what is computed by different brain systems, and what we know about how it is computed. Knowing this is essential if brain function is to be emulated, and it enables this group of scientists to bring their expertise to bear on understanding brain function. The chapter also considers the levels of investigation, including the computational level, that are necessary to understand brain function, as well as some applications of this understanding, for example to disorders such as impaired control of food intake leading to obesity. Finally, Section 19.10 makes clear why the focus of this book is on computations in primate (very much including human) brains rather than in rodent (rat and mouse) brains: the systems-level organization of primate, including human, brains differs from that of rodents in many fundamental ways that are described.
Thomas P. Trappenberg
- Published in print:
- 2019
- Published Online:
- January 2020
- ISBN:
- 9780198828044
- eISBN:
- 9780191883873
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198828044.001.0001
- Subject:
- Neuroscience, Behavioral Neuroscience
Machine learning is exploding, both in research and in industrial applications. This book aims to be a brief introduction to the area, given the importance of the topic in many disciplines, from the sciences to engineering, and its broader impact on our society. The book aims for a style that balances brevity of explanation, rigor of mathematical argument, and an outline of the principal ideas. At the same time, it offers a comprehensive overview of a variety of methods so that their relations and specializations within the area can be seen. This includes an introduction to Bayesian approaches to modeling as well as to deep learning. Writing small programs to apply machine learning techniques is made easy today by the availability of high-level programming systems, and this book offers examples in Python with the machine learning libraries sklearn and Keras. The first four chapters concentrate largely on the practical side of applying machine learning techniques. The book then discusses more fundamental concepts and their formulation in a probabilistic context. This is followed by chapters on two advanced models, recurrent neural networks and reinforcement learning. The book closes with a brief discussion of the impact of machine learning and AI on our society.
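The abstract notes that the book's examples use Python with the sklearn and Keras libraries. As an illustrative sketch of that high-level style (not taken from the book; the dataset and model choices here are arbitrary), a small classifier can be trained and evaluated in a few lines:

```python
# Illustrative sketch, not from the book: sklearn's high-level API lets a
# small neural-network classifier be trained on a bundled toy dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Hyperparameters are arbitrary; a single hidden layer of 10 units suffices here.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```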
Thomas P. Trappenberg
- Published in print:
- 2019
- Published Online:
- January 2020
- ISBN:
- 9780198828044
- eISBN:
- 9780191883873
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198828044.003.0004
- Subject:
- Neuroscience, Behavioral Neuroscience
This chapter discusses the basic operation of an artificial neural network, which is the major paradigm of deep learning. The name derives from an analogy to a biological brain. The discussion begins by outlining the basic operations of neurons in the brain and how these operations are abstracted by simple neuron models. It then builds networks of artificial neurons of the kind that underlie much of the recent success of AI. The focus of this chapter is on using such techniques, with their theoretical embedding considered subsequently.
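To make the abstraction concrete, here is a minimal sketch (not the chapter's own code) of the standard simple neuron model the abstract refers to: a weighted sum of inputs passed through a nonlinearity.

```python
# Minimal sketch of a simple artificial-neuron model (not the chapter's code):
# output = nonlinearity(weights . inputs + bias).
import numpy as np

def neuron(x, w, b):
    """Single artificial neuron with a sigmoid nonlinearity."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([0.5, -1.0, 2.0])   # inputs, e.g. activities of upstream units
w = np.array([0.8, 0.2, -0.4])   # weights, abstracting synaptic strengths
print(neuron(x, w, b=0.1))       # activation in (0, 1); here ~0.38
```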
Otmar Hilliges
- Published in print:
- 2018
- Published Online:
- March 2018
- ISBN:
- 9780198799603
- eISBN:
- 9780191839832
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198799603.003.0004
- Subject:
- Mathematics, Logic / Computer Science / Mathematical Philosophy
Sensing of user input lies at the core of HCI research. Deciding which input mechanisms to use, and how to implement them so that they are easy to use, robust to various environmental factors, and accurate in reconstructing the user's intent, is a tremendously challenging problem. The main difficulties stem from the complex nature of human behavior, which is highly non-linear, dynamic, and context-dependent, and can often only be observed partially. Due to these complexities, research has turned its attention to data-driven techniques in order to build sophisticated and robust input recognition mechanisms. In this chapter we discuss the most important aspects that constitute data-driven signal analysis approaches. The aim is to provide the reader with an overall understanding of the process, irrespective of the exact choice of sensor or machine learning algorithm.
Gill Burbridge
- Published in print:
- 2017
- Published Online:
- February 2021
- ISBN:
- 9781911325031
- eISBN:
- 9781800342576
- Item type:
- chapter
- Publisher:
- Liverpool University Press
- DOI:
- 10.3828/liverpool/9781911325031.003.0009
- Subject:
- Film, Television and Radio, Film
This chapter examines the act of eating text. The approach to learning and enquiry explored here is, in part, an act of resistance and re-appropriation, genuinely committed to challenging and contesting imposed assumptions about the relationship between curriculum content, academic rigour, and the development of critical thinking and deep learning. Such a challenge to accepted orthodoxies invites a dialogue between those who see 'more facts' as the route to 'good conceptual understanding' and those who question the very distinction between factual knowledge as a schema for understanding the world and the process of interpretation. The chapter then considers how educators might negotiate with their students the realms of 'action and feelings' and encourage them to see the organic power that language has not just to depict but also to create. In doing so, they are inviting them to engage with food as an act of communication, 'a body of images, a protocol of usages, situations, and behaviour'.
Bradley E. Alger
- Published in print:
- 2019
- Published Online:
- February 2021
- ISBN:
- 9780190881481
- eISBN:
- 9780190093761
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190881481.003.0015
- Subject:
- Neuroscience, Techniques
This chapter evaluates two possible futures for the hypothesis in scientific thinking in the age of Big Data. In one, the Big Data Mindset entirely shuns hypotheses and the scientific goal of understanding nature, settling instead for correlations teased from gigantic datasets by computer algorithms that even their developers don’t understand. In the other, the Robot Scientist becomes capable of designing and executing complex though constrained hypothesis-based experiments. The day of the Robot Scientist has already dawned in the form of a fully automated laboratory that can experiment on simple organisms. The Robot Scientist and the Big Data Mindset take diametrically opposed approaches to scientific thinking. The Big Data Mindset is represented by the rise and fall of Google Flu Trends, a program that tried and failed to predict the outbreak of flu epidemics by analyzing disorderly masses of internet search terms. The Robot Scientist deals with Big Data intelligibly via neural networks and translations of logical natural language concepts into computer-friendly motifs. They are both enormously powerful strategies for exploiting opportunities afforded by Big Data, and today both have serious limitations. The Big Data Mindset renders people impotently dependent on its calculations; the Robot Scientist can’t invent truly innovative hypotheses. Nevertheless, we are still very early on in the computer revolution, and both the Big Data Mindset and the Robot Scientist promise to work drastic transformations on the worlds of science and scientific thinking as their powers increase.
Stephen K. Reed
- Published in print:
- 2020
- Published Online:
- August 2020
- ISBN:
- 9780197529003
- eISBN:
- 9780197529034
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197529003.003.0014
- Subject:
- Psychology, Cognitive Psychology
Machine learning is a highly influential field that has made major contributions to the increased effectiveness of artificial intelligence. Machine learning utilizes different methods, four of which have been particularly effective. The Analogizers classify patterns based on their similarity to other patterns; multidimensional scaling, which represents similarities as spatial distances, provides support. The Bayesians revise the probability of hypotheses based on new evidence. The Connectionists adjust the strengths of connections between layers of “neurons”; deep learning, based on many layers of connections, has proven particularly successful. The Symbolists use rules that combine pieces of pre-existing knowledge. Hybrid systems combine these methods to create systems that are more effective than the individual methods.
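The Bayesian method named above reduces to Bayes’ rule. A one-step sketch with invented numbers shows how the probability of a hypothesis is revised in light of new evidence:

```python
# One-step Bayes-rule update; all probabilities are invented for illustration.
prior = 0.10                 # P(hypothesis) before seeing the evidence
p_evidence_if_true = 0.90    # P(evidence | hypothesis)
p_evidence_if_false = 0.20   # P(evidence | not hypothesis)

# Total probability of the evidence, then the posterior via Bayes' rule.
p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
posterior = prior * p_evidence_if_true / p_evidence
print(f"P(hypothesis | evidence) = {posterior:.3f}")  # 0.333: belief revised upward
```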
Edmund T. Rolls
- Published in print:
- 2020
- Published Online:
- February 2021
- ISBN:
- 9780198871101
- eISBN:
- 9780191914157
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198871101.001.0001
- Subject:
- Neuroscience, Behavioral Neuroscience, Neuroendocrine and Autonomic
The subject of this book is how the brain works. In order to understand this, it is essential to know what is computed by different brain systems; and how the computations are performed. The aim of this book is to elucidate what is computed in different brain systems; and to describe current computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease; and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. This book is pioneering in taking this approach to brain function: to consider what is computed by many of our brain systems; and how it is computed. The book will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, or from medical sciences including neurology and psychiatry, or from the area of computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.
Chris Bleakley
- Published in print:
- 2020
- Published Online:
- October 2020
- ISBN:
- 9780198853732
- eISBN:
- 9780191888168
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198853732.003.0011
- Subject:
- Mathematics, History of Mathematics, Logic / Computer Science / Mathematical Philosophy
Chapter 11 traces the history of artificial neural networks (ANNs) from humble beginnings in the 1940s to their monumental successes in the 21st century. ANNs are algorithms which mimic the behaviour of the nerve cells in the human brain. The concept was originally proposed by Walter Pitts and Warren McCulloch, but it was Frank Rosenblatt who popularised the idea, building an ANN to recognise simple shapes in images. Rosenblatt’s Perceptron was heavily criticised, and attention turned to other, more rigorous mathematical approaches. In the 1970s, three independent research teams invented an effective algorithm for training an ANN to perform pattern recognition tasks. By the 1990s, a handful of results suggested that the idea might work after all. Around 2006, it finally became apparent that computer performance had been the limiting factor: large networks could perform many pattern recognition tasks just as well as humans. So-called deep learning was about to transform computing.
James A. Anderson
- Published in print:
- 2017
- Published Online:
- February 2018
- ISBN:
- 9780199357789
- eISBN:
- 9780190675264
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199357789.003.0012
- Subject:
- Psychology, Cognitive Psychology
What form would a brain theory take? Would it be short and punchy, like Maxwell’s Equations? Or would it have a clear goal attained by a community of mechanisms—local theories—like the US Tax Code? The best-developed recent brain-like model is the “neural network.” In the late 1950s, Rosenblatt’s Perceptron and its many variants proposed a brain-inspired associative network. Problems with the first generation of neural networks—limited capacity, opaque learning, and inaccuracy—have been largely overcome. In 2016, a program from Google, AlphaGo, based on a neural net using deep learning, defeated the world’s best Go player. The climax of this chapter is a fictional example starring Sherlock Holmes demonstrating that complex associative computation in practice has less in common with accurate pattern recognition and more with abstract high-level conceptual inference.
Jennifer Pan
- Published in print:
- 2020
- Published Online:
- July 2020
- ISBN:
- 9780190087425
- eISBN:
- 9780190087463
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190087425.003.0007
- Subject:
- Political Science, Comparative Politics, International Relations and Politics
The conclusion considers how China’s pursuit of political order through preemptive control changes in a digital context of rapidly growing data, computing power, and advances in machine learning (e.g., deep learning, artificial intelligence / “AI”). Digital advances help the Chinese government collect more information about the entire population, and to do so in ways that are less detectable. However, new digital technologies do not alter China’s goal of preemptive control or the predictive surveillance that underpins this goal. Digital technologies will likely enable the government to identify more potential threats, but because digital technologies will not eliminate error altogether and because there is always a tradeoff between precision and recall in machine classification systems, the dramatic expansion of available information may expand the number of people trapped in programs of preemptive control.
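The precision-recall tradeoff invoked above can be made concrete with a short sketch (labels and scores invented for illustration): lowering a classifier’s decision threshold catches more true positives (higher recall) at the cost of more false positives (lower precision).

```python
# Invented labels and scores illustrating the precision/recall tradeoff:
# lowering the threshold raises recall but lowers precision.
import numpy as np

labels = np.array([1, 1, 1, 0, 0, 0, 0, 0])                   # 1 = actual positive
scores = np.array([0.9, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1, 0.05])  # classifier scores

for threshold in (0.8, 0.5, 0.2):
    flagged = scores >= threshold
    tp = np.sum(flagged & (labels == 1))       # true positives
    precision = tp / max(np.sum(flagged), 1)   # of those flagged, how many are real
    recall = tp / np.sum(labels == 1)          # of the real ones, how many are caught
    print(f"threshold={threshold}: precision={precision:.2f}, recall={recall:.2f}")
```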
Jun Tani
- Published in print:
- 2016
- Published Online:
- November 2016
- ISBN:
- 9780190281069
- eISBN:
- 9780190281083
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780190281069.001.0001
- Subject:
- Psychology, Cognitive Models and Architectures
How do “minds” work? In Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena, Professor Jun Tani reviews key experiments within his own pioneering neurorobotics research project aimed at answering this fundamental and fascinating question. The book shows how symbols and concepts representing the world can emerge via “deep learning” within robots using specially designed neural network architectures. Given iterative interactions between top-down proactive “subjective” and “intentional” processes for plotting action and bottom-up updates of the perceptual reality after action, the robot is able to learn to isolate, to identify, and even to infer salient features of the operational environment, modifying its behavior based on anticipations of both objective and social cues. Through permutations of this experimental model, the book then argues that longstanding questions about the nature of “consciousness” and “free will” can be addressed through an understanding of the dynamic structures within which, in the course of normal operations and in a changing operational environment, the necessary top-down/bottom-up interactions arise. Written in clear and accessible language, this book opens a privileged window for a broad audience onto the science of artificial intelligence and the potential for artificial consciousness, threading cognitive neuroscience, dynamic systems theory, robotics, and phenomenology through an elegant series of deceptively simple experiments that build upon one another and ultimately outline the fundamental form of the working mind.
Stephen K. Reed
- Published in print:
- 2020
- Published Online:
- August 2020
- ISBN:
- 9780197529003
- eISBN:
- 9780197529034
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197529003.003.0015
- Subject:
- Psychology, Cognitive Psychology
Deep connectionist learning has resulted in very impressive accomplishments, but it is unclear how it achieves its results. A dilemma in using the output of machine learning is that the best performing methods are the least explainable. Explainable artificial intelligence seeks to develop systems that can explain their reasoning to a human user. The application of IBM’s WatsonPaths to medicine includes a diagnostic network that infers a diagnosis from symptoms with a degree of confidence associated with each diagnosis. The Semanticscience Integrated Ontology uses categories such as objects, processes, attributes, and relations to create networks of biological knowledge. The same categories are fundamental in representing other types of knowledge such as cognition. Extending an ontology requires a consistent use of semantic terms across different domains of knowledge.
Simone Natale
- Published in print:
- 2021
- Published Online:
- February 2021
- ISBN:
- 9780190080365
- eISBN:
- 9780190080402
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190080365.003.0007
- Subject:
- History, Cultural History
AI voice assistants are based on software that enters into dialogue with users through speech in order to provide replies to the users’ queries or execute tasks such as sending emails, searching on the web, or turning on a lamp. Every assistant is represented as an individual character or persona (e.g., “Siri” or “Alexa”) that despite being nonhuman can be imagined and interacted with as such. Focusing on the cases of Alexa, Siri, and Google Assistant, this chapter argues that voice assistants activate an ambivalent relationship with users, giving them the illusion of control in their interactions with the assistants while at the same time withdrawing them from actual control over the computing systems that lie behind these interfaces. The chapter illustrates how this is made possible at the interface level by mechanisms of projection that expect users to contribute to the construction of the assistant as a persona, and how this construction ultimately conceals the networked computing systems administered by the powerful corporations who developed these tools.
Dana H. Ballard
- Published in print:
- 2015
- Published Online:
- September 2015
- ISBN:
- 9780262028615
- eISBN:
- 9780262323819
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262028615.003.0004
- Subject:
- Neuroscience, Research and Theory
The primary way the brain responds quickly is to store previous experiences in a vast tabular format that allows responses to be accessed quickly. The exact format of the memory is probably a composite of many different constraints, the forms of which are described. The primary anatomical organization of cortex is into hierarchies of predominantly two-dimensional ‘maps’ of key features. Such features are computed at the upper layers of each map, with the lower layers handling input and output signals. The separate feature characteristics of a map initially led researchers to think of its properties as more or less independent from other maps, but recent research is revealing that the maps’ feature sets are far more integrated and interdependent. Bayesian network models have provided an elegant computational framework that captures this interdependence.
Ronald M. Baecker
- Published in print:
- 2019
- Published Online:
- November 2020
- ISBN:
- 9780198827085
- eISBN:
- 9780191917318
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198827085.003.0018
- Subject:
- Computer Science, Human-Computer Interaction
There have been several challenges to our view of our position and purpose as human beings. The scientist Charles Darwin’s research demonstrated evolutionary links between man and other animals. Psychoanalysis founder Sigmund Freud illuminated the power of the subconscious. Recent advances in artificial intelligence (AI) have challenged our identity as the species with the greatest ability to think. Whether machines can now ‘think’ is no longer interesting. What is important is to critically consider the degree to which they are called upon to make decisions and act in significant and often life-critical situations. We have already discussed the increasing roles of AI in intelligent tutoring, medicine, news stories and fake news, autonomous weapons, smart cars, and automation. Chapter 11 focuses on other ways in which our lives are changing because of advances in AI, and the accompanying opportunities and risks. AI has seen a paradigm shift since the year 2000. Prior to this, the focus was on knowledge representation and the modelling of human expertise in particular domains, in order to develop expert systems that could solve problems and carry out rudimentary tasks. Now, the focus is on neural networks capable of machine learning (ML). The most successful approach is deep learning, whereby complex hierarchical assemblies of processing elements ‘learn’ using millions of samples of training data. They can then often make correct decisions in new situations. We shall also present a radical, and for most of us a scary, concept of AI with no limits—the technological singularity or superintelligence. Even though superintelligence is for now science fiction, humanity is asking if there is any limit to machine intelligence. We shall therefore discuss the social and ethical consequences of widespread use of ML algorithms. It is helpful in this analysis to better understand what intelligence is, so we present two insightful formulations of the concept developed by renowned psychologists.
Jennifer Pan
- Published in print:
- 2020
- Published Online:
- July 2020
- ISBN:
- 9780190087425
- eISBN:
- 9780190087463
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190087425.003.0006
- Subject:
- Political Science, Comparative Politics, International Relations and Politics
This chapter captures the backlash—increased protests and lower legitimacy—triggered by prioritizing Dibao for targeted populations. The survey of 100 neighborhoods shows that when targeted populations receive Dibao benefits, there is greater contention over Dibao distribution in the neighborhood. Those who are turned away from benefits are more likely to protest and bargain for Dibao. Using large-scale social media data and deep learning to extract unique, off-line collective action events, this chapter shows that welfare-related protests are more frequent in cities with higher levels of Dibao provision to targeted populations than in cities with lower levels. Although local administrators are adept at defusing protests, and collective action remains small and localized, people are left resentful and embittered. Data from a nationally representative survey shows that cities with higher levels of Dibao provision to targeted populations report lower assessments of government capabilities, especially in welfare provision and public responsiveness, as well as lower levels of political trust and satisfaction.
Josh Bongard
- Published in print:
- 2018
- Published Online:
- June 2018
- ISBN:
- 9780199674923
- eISBN:
- 9780191842702
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199674923.003.0011
- Subject:
- Neuroscience, Sensory and Motor Systems, Development
Embodied cognition is the view that intelligence arises out of the interaction between an agent’s body and its environment. Taking such a view generates novel scientific hypotheses about biological intelligence and opportunities for advancing artificial intelligence. In this chapter we review one such set of hypotheses regarding how a robot may generate models of self and others, and then exploit those models to recover from damage or exhibit the rudiments of social cognition. This modeling of self and others draws mainly on three concepts from neuroscience and AI: forward and inverse models in the brain, the neuronal replicator hypothesis, and the brain as a hierarchical prediction machine. The chapter concludes with future directions, including the integration of deep learning methods with embodied cognition.
Simone Natale
- Published in print:
- 2021
- Published Online:
- February 2021
- ISBN:
- 9780190080365
- eISBN:
- 9780190080402
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190080365.003.0008
- Subject:
- History, Cultural History
The historical trajectory examined in this book demonstrates that humans’ reactions to machines that are programmed to simulate intelligent behaviors represent a constitutive element of what is commonly called AI. Artificial intelligence technologies are not just designed to interact with human users: they are designed to fit specific characteristics of the ways users perceive and navigate the external world. Communicative AI becomes more effective not only by evolving from a technical standpoint but also by profiting, through the dynamics of banal deception, from the social meanings humans project onto situations and things. In this conclusion, the risks and problems related to AI’s banal deception are explored in relationship with other AI-based technologies such as robotics and social media bots. A call is made for initiating a more serious debate about the role of deception in interface design and computer science. The book concludes with a reflection on the need to develop a critical and skeptical stance in interactions with computing technologies and AI. In order not to be found unprepared for the challenges posed by AI, computer scientists, software developers, and designers, as well as users, have to consider and critically interrogate the potential outcomes of banal deception.