Arlindo Oliveira
- Published in print:
- 2017
- Published Online:
- September 2017
- ISBN:
- 9780262036030
- eISBN:
- 9780262338394
- Item type:
- book
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262036030.001.0001
- Subject:
- Computer Science, Artificial Intelligence
This book addresses the connections between computers, life, evolution, brains, and minds. Digital computers are a recent invention and have changed our society. However, they represent just the latest way to process information, using algorithms to create order out of chaos. Before computers, the job of processing information was done by living organisms, which are nothing more than complex information-processing devices shaped by billions of years of evolution. The most advanced of these devices is the human brain. Brains enable humans to process information in a way unparalleled by any other species, living or extinct, or by any existing machine. They provide humans with intelligence, consciousness, and, some believe, even a soul. Brains have also enabled humans to develop science and technology to a point where it is possible to design computers with a power comparable to that of the human brain. Machine learning and artificial intelligence technologies will one day make it possible to create intelligent machines, and computational biology will one day enable us to model, simulate, and understand biological systems, even complete brains, with unprecedented levels of detail. From these efforts, new minds will eventually emerge: minds that emanate from the execution of programs running on powerful computers. These digital minds may one day rival our own, become our partners, and replace humans in many tasks. They may usher in a technological singularity, make humans obsolete or even a threatened species, or make us super-humans or demi-gods.
Arlindo Oliveira
- Published in print:
- 2017
- Published Online:
- September 2017
- ISBN:
- 9780262036030
- eISBN:
- 9780262338394
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262036030.003.0005
- Subject:
- Computer Science, Artificial Intelligence
This chapter addresses the question of whether a computer can become intelligent and how to test for that possibility. It introduces the Turing test, a test developed to determine, in an unbiased way, whether a program running on a computer is or is not intelligent. The development of artificial intelligence has led, in time, to many applications of computers that are not possible with “non-intelligent” programs. One important area of artificial intelligence is machine learning, the technology that makes it possible for computers to learn from existing data in ways similar to the ways humans learn. A number of approaches to machine learning are addressed in this chapter, including neural networks, decision trees, and Bayesian learning. The chapter concludes by arguing that the brain is, in reality, a very sophisticated statistical machine aimed at improving the chances of survival of its owner.
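Of the machine learning approaches named in this abstract, Bayesian learning is the easiest to sketch in a few lines. The following is a minimal naive Bayes classifier for binary features, written purely as an illustration of the idea; it is not code from the chapter, and the function names are invented for this sketch.

```python
from collections import defaultdict
import math

def train_naive_bayes(samples, labels):
    """Estimate per-class priors and Laplace-smoothed per-feature
    likelihoods from binary feature vectors."""
    class_counts = defaultdict(int)     # class -> number of samples
    feature_counts = defaultdict(int)   # (class, index) -> times feature was 1
    for x, y in zip(samples, labels):
        class_counts[y] += 1
        for i, v in enumerate(x):
            if v:
                feature_counts[(y, i)] += 1
    n_features = len(samples[0])
    model = {}
    for c, n in class_counts.items():
        prior = math.log(n / len(samples))
        # Laplace smoothing keeps every probability strictly in (0, 1)
        likelihoods = [(feature_counts[(c, i)] + 1) / (n + 2)
                       for i in range(n_features)]
        model[c] = (prior, likelihoods)
    return model

def predict(model, x):
    """Return the class maximizing log P(class) + sum_i log P(x_i | class)."""
    best, best_score = None, float("-inf")
    for c, (prior, likelihoods) in model.items():
        score = prior
        for v, p in zip(x, likelihoods):
            score += math.log(p if v else 1.0 - p)
        if score > best_score:
            best, best_score = c, score
    return best
```

Training counts how often each feature is on within each class, and prediction picks the class with the highest posterior log-probability, treating the features as conditionally independent given the class.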
Alexandre Todorov
- Published in print:
- 2016
- Published Online:
- May 2017
- ISBN:
- 9780262034685
- eISBN:
- 9780262335522
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262034685.003.0006
- Subject:
- Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies
The aim of the RELIEF algorithm is to filter out the features (e.g., genes, environmental factors) that are relevant to a trait of interest, starting from a set that may include thousands of irrelevant features. Though widely used in many fields, its application to gene-environment interaction studies has been limited thus far. We provide here an overview of this machine learning algorithm and some of its variants. Using simulated data, we then compare the performance of RELIEF to that of logistic regression for screening for gene-environment interactions in SNP data. Even though performance degrades in larger sets of markers, RELIEF remains a competitive alternative to logistic regression and shows clear promise as a tool for the study of gene-environment interactions. Areas for further improvement of the algorithm are then suggested.
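The weight-update idea behind RELIEF can be sketched compactly. The following is the original two-class Relief of Kira and Rendell with a simple 0/1 mismatch distance, suitable for binary markers; it is an illustrative sketch with invented names, not the implementation evaluated in the chapter, and it assumes each class contains at least two samples.

```python
import random

def relief(samples, labels, n_iter=100, seed=0):
    """Minimal two-class Relief for binary features.

    For each sampled instance, find its nearest same-class neighbor
    (hit) and nearest other-class neighbor (miss); penalize features
    that differ from the hit, reward features that differ from the miss."""
    rng = random.Random(seed)
    n_features = len(samples[0])
    weights = [0.0] * n_features

    def dist(a, b):
        # Hamming distance: count of mismatching features
        return sum(u != v for u, v in zip(a, b))

    for _ in range(n_iter):
        j = rng.randrange(len(samples))
        x, y = samples[j], labels[j]
        hits = [s for s, l in zip(samples, labels) if l == y and s is not x]
        misses = [s for s, l in zip(samples, labels) if l != y]
        hit = min(hits, key=lambda s: dist(s, x))
        miss = min(misses, key=lambda s: dist(s, x))
        for i in range(n_features):
            weights[i] -= (x[i] != hit[i]) / n_iter
            weights[i] += (x[i] != miss[i]) / n_iter
    return weights
```

A feature's weight grows when it agrees with the nearest hit and differs from the nearest miss, so relevant features accumulate large positive weights while irrelevant ones drift toward zero or below.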
Antonio Torralba and Adolfo Plasencia
- Published in print:
- 2017
- Published Online:
- January 2018
- ISBN:
- 9780262036016
- eISBN:
- 9780262339308
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262036016.003.0027
- Subject:
- Society and Culture, Technology and Society
Antonio Torralba, a member of MIT CSAIL, opens the dialogue by describing the research he performs in the field of computer vision and related artificial intelligence (AI). He also compares the conceptual differences and the context of the early days of artificial intelligence—when hardly any image-recording devices existed—with the present situation, in which an enormous amount of data is available. Next, through examples, he discusses the huge complexity computer vision research faces in getting computers and machines to understand, by means of digital cameras, the meanings of the scenes they “see” and the objects those scenes contain. As he explains afterward, this complexity is particularly noticeable in settings involving robots or driverless cars, where it makes no sense to develop vision systems that can see if they cannot learn. Later he argues that today’s computer systems have to learn “to see” because, without a learning process such as machine learning, they will never be able to make autonomous decisions.
Anders Drachen and Shawn Connor
- Published in print:
- 2018
- Published Online:
- March 2018
- ISBN:
- 9780198794844
- eISBN:
- 9780191836336
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198794844.003.0019
- Subject:
- Mathematics, Logic / Computer Science / Mathematical Philosophy, Computational Mathematics / Optimization
Game Analytics (GA) provides new ways to conduct user research, counteracting some of the weaknesses of traditional approaches while retaining essential compatibility with the methodologies of GUR. This chapter provides an overview of what GA is and how it fits within the daily operations of game development across studio sizes, with an emphasis on the intersection with GUR and the synergies that can be leveraged across analytics and user research.
Andrea Moro
- Published in print:
- 2016
- Published Online:
- May 2017
- ISBN:
- 9780262034890
- eISBN:
- 9780262335621
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262034890.003.0001
- Subject:
- Linguistics, Psycholinguistics / Neurolinguistics / Cognitive Linguistics
Understanding the nature and the structure of human language coincides with capturing the constraints that make a conceivable language possible or, equivalently, with determining whether there are impossible languages at all. The chapter focuses on syntax, the capacity to generate potentially infinite sentences from a fixed, limited set of words: ever since Descartes, this has been considered the fingerprint of the human mind. Modern syntax has allowed a mathematical approach to this domain.
David M. Day and Margit Wiesner
- Published in print:
- 2019
- Published Online:
- January 2020
- ISBN:
- 9781479880058
- eISBN:
- 9781479888276
- Item type:
- chapter
- Publisher:
- NYU Press
- DOI:
- 10.18574/nyu/9781479880058.003.0010
- Subject:
- Psychology, Social Psychology
It has been 25 years since the criminal trajectory methodology was first introduced. Scientists from multiple fields have now arrived at a much more balanced view of its strengths and weaknesses. The final chapter of this book looks back at the accumulated research on criminal trajectories and renews the call on criminological trajectory researchers to interface better with contemporary developmental science frameworks. This call is not intended to replace extant developmental and life-course theories of crime but, rather, to complement them by incorporating meta-theoretical propositions from the field of developmental science. To this end, this chapter offers 12 suggestions for the next generation of trajectory researchers. They range from methodological issues, including the need for stricter reporting standards and greater methodological rigor, to substantive research needs, such as the exploration of the role of biological processes, and the study of prospective links to trajectory groups of distinct behaviors and intentional self-regulatory strategies that foster desisting pathways of crime.
Ricardo Baeza-Yates and Adolfo Plasencia
- Published in print:
- 2017
- Published Online:
- January 2018
- ISBN:
- 9780262036016
- eISBN:
- 9780262339308
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262036016.003.0017
- Subject:
- Society and Culture, Technology and Society
In this dialogue, the computer scientist Ricardo Baeza-Yates explains why search technologies make behavioral patterns more predictable for some people than for others, and why people’s behavior has a long tail. He then explains why we feel more comfortable with determinism, which is one reason the adoption of the semantic web is taking so long. Ricardo doesn’t believe in the existence of a Kibernos (kubernetes) steering the Internet. He argues that the fact that the Internet has not been dominated by any power in the physical world is related not to its size but to its diversity. He then explains that artificial intelligence is related both to a complex critical mass and to computing capabilities. He also argues why not everyone is able to innovate—there is no algorithm for achieving success—and why innovation is something that cannot be replicated. Finally, he explains how the Internet is broadening our social horizons and is a mirror of what we are, the way it allows us to control the devil in us, and how its mechanisms exercise control over the misuse of technology.
Janice Glasgow and Evan Steeg
- Published in print:
- 1999
- Published Online:
- November 2020
- ISBN:
- 9780195119404
- eISBN:
- 9780197561256
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195119404.003.0011
- Subject:
- Computer Science, Systems Analysis and Design
The field of knowledge discovery is concerned with the theory and processes involved in the representation and extraction of patterns or motifs from large databases. Discovered patterns can be used to group data into meaningful classes, to summarize data, or to reveal deviant entries. Motifs stored in a database can be brought to bear on difficult instances of structure prediction or determination from X-ray crystallography or nuclear magnetic resonance (NMR) experiments. Automated discovery techniques are central to understanding and analyzing the rapidly expanding repositories of protein sequence and structure data. This chapter deals with the discovery of protein structure motifs. A motif is an abstraction over a set of recurring patterns observed in a dataset; it captures the essential features shared by a set of similar or related objects. In many domains, such as computer vision and speech recognition, there exist special regularities that permit such motif abstraction. In the protein science domain, the regularities derive from evolutionary and biophysical constraints on amino acid sequences and structures. The identification of a known pattern in a new protein sequence or structure permits the immediate retrieval and application of knowledge obtained from the analysis of other proteins. The discovery and manipulation of motifs—in DNA, RNA, and protein sequences and structures—is thus an important component of computational molecular biology and genome informatics. In particular, identifying protein structure classifications at varying levels of abstraction allows us to organize and increase our understanding of the rapidly growing protein structure datasets. Discovered motifs are also useful for improving the efficiency and effectiveness of X-ray crystallographic studies of proteins, for drug design, for understanding protein evolution, and ultimately for predicting the structure of proteins from sequence data. 
Motifs may be designed by hand, based on expert knowledge. For example, the Chou-Fasman protein secondary structure prediction program (Chou and Fasman, 1978), which dominated the field for many years, depended on the recognition of predefined, user-encoded sequence motifs for α-helices and β-sheets. Several hundred sequence motifs have been cataloged in PROSITE (Bairoch, 1992); the identification of one of these motifs in a novel protein often allows for immediate function interpretation.
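A PROSITE-style sequence pattern of the kind cataloged by Bairoch can be matched against a protein sequence with an ordinary regular expression. The sketch below handles only the core of the PROSITE pattern syntax (x for any residue, repeat counts, allowed and forbidden residue sets); it is illustrative, and the helper names are invented here.

```python
import re

def prosite_to_regex(pattern):
    """Translate a core PROSITE-style pattern into a Python regex:
    '-' separates elements, 'x' is any residue, (n)/(n,m) are repeat
    counts, [..] allows a residue set, and {..} forbids one."""
    regex = []
    for element in pattern.rstrip(".").split("-"):
        element = element.replace("x", ".")               # any residue
        element = element.replace("(", "{").replace(")", "}")  # repeats
        element = re.sub(r"\{([A-Z]+)\}", r"[^\1]", element)   # forbidden set
        regex.append(element)
    return "".join(regex)

def find_motif(sequence, pattern):
    """Return (start, matched substring) for each motif occurrence."""
    return [(m.start(), m.group())
            for m in re.finditer(prosite_to_regex(pattern), sequence)]
```

For example, the pattern C-x(2,4)-C (two cysteines separated by two to four arbitrary residues) translates to the regex C.{2,4}C and matches CKRC at position 1 of the sequence ACKRCG.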
Sun-ha Hong
- Published in print:
- 2020
- Published Online:
- January 2021
- ISBN:
- 9781479860234
- eISBN:
- 9781479855759
- Item type:
- chapter
- Publisher:
- NYU Press
- DOI:
- 10.18574/nyu/9781479860234.003.0007
- Subject:
- Society and Culture, Technology and Society
Data-driven knowledge is increasingly produced without regard to human cognition and sensibility, yet this conflagration of machinic non-sense also demands that we adapt to its rationality. Data-sense describes the popular expectation that a posthuman future is inevitable, compelling human subjects to orient their personal lives and truths in ways most compatible with machine learning and other regimes of data-driven analysis. The pursuit of the human nonconscious as a source of objective truth erodes the ground beneath the feet of the good liberal subject and concomitant ideals of agency, freedom, and self-determination.
Lysiane Charest
- Published in print:
- 2018
- Published Online:
- March 2018
- ISBN:
- 9780198794844
- eISBN:
- 9780191836336
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198794844.003.0020
- Subject:
- Mathematics, Logic / Computer Science / Mathematical Philosophy, Computational Mathematics / Optimization
This chapter is aimed at small-to-medium-sized studios wanting to introduce analytics into their development process. It focuses on concepts and techniques that are most useful for smaller studios, and that require minimal skills. While money is always an issue, plenty of free analytics tools exist, whether they are third-party tools or simple in-house solutions. The chapter details how the most important factor is the availability of human resources.