Claus Beisbart
- Published in print: 2011
- Published Online: September 2011
- ISBN: 9780199577439
- eISBN: 9780191730603
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199577439.003.0006
- Subject: Philosophy, Philosophy of Science, Metaphysics/Epistemology
How can probabilistic models from physics represent a target, and how can one understand the probabilities that figure in such models? The aim of this chapter is to answer these questions by analyzing random models of Brownian motion and point-process models of the galaxy distribution as examples. The chapter defends the view that such models represent because we may learn from them by setting our degrees of belief following the probabilities the model suggests. This account is not incompatible with an objectivist view of the pertinent probabilities, but stock objectivist interpretations, e.g., frequentism or Lewis’s Humean account of probabilities, have difficulty providing a suitable objectivist methodology for statistical inference from data. This point is made by contrasting Bayesian statistics with error statistics.
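The methodological contrast drawn here, Bayesian statistics versus error statistics, can be made concrete with a toy calculation. The sketch below is my illustration, not the chapter's: the same hypothetical binomial data analyzed once as a Bayesian posterior over a chance parameter and once as an error-statistical significance test.

```python
# Toy contrast, not from the chapter: the same hypothetical data analyzed
# Bayesian-style (a posterior over the chance parameter) and
# error-statistics-style (a significance test of a point null).
from scipy import stats

n, k = 100, 61  # hypothetical data: 61 successes in 100 trials

# Bayesian route: a uniform Beta(1, 1) prior on the chance p gives a
# Beta(1 + k, 1 + n - k) posterior; degrees of belief follow the model.
posterior = stats.beta(1 + k, 1 + n - k)
print("P(p > 0.5 | data) =", 1 - posterior.cdf(0.5))

# Error-statistical route: test the point null p = 0.5 on the same data.
print("two-sided p-value:", stats.binomtest(k, n, p=0.5).pvalue)
```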
Timothy J. O’Donnell and Noah D. Goodman
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262028844
- eISBN: 9780262326803
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262028844.003.0002
- Subject: Linguistics, Sociolinguistics / Anthropological Linguistics
This chapter consists of four parts. The first section discusses the ideas behind the modeling framework adopted in the book: structured probabilistic generative models. The second section discusses some theoretical and methodological issues in the interpretation of this approach to modeling. The third section develops the modeling framework from the perspective of probabilistic programming, using the Church programming language. The Church formalization makes explicit the relationship between the model and two important technical ideas from computer science (specifically, from the theory of programming languages). The final section of the chapter provides further discussion of the four classes of model evaluated in the book.
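For orientation, a structured probabilistic generative model of the kind described here pairs a sampling process with conditioning. The book's formalization uses Church; the Python sketch below is only an illustrative analogue, and its two-category toy domain is invented.

```python
# Illustrative analogue in Python of a structured probabilistic generative
# model (the book itself uses Church; the toy domain here is invented).
import random

def sample_category():
    # Hypothetical latent category.
    return "A" if random.random() < 0.7 else "B"

def sample_observation(category):
    # Each category induces its own distribution over an observable feature.
    p_long = {"A": 0.9, "B": 0.2}[category]
    return "long" if random.random() < p_long else "short"

def sample():
    c = sample_category()
    return c, sample_observation(c)

# Conditioning turns the generative model into an inference engine; rejection
# sampling is the simplest (if least efficient) way to condition.
def p_category_given(observed, trials=100_000):
    hits = [c for c, o in (sample() for _ in range(trials)) if o == observed]
    return hits.count("A") / len(hits)

print("P(category = A | 'long') ≈", p_category_given("long"))
```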
Thomas L. Griffiths and Alan Yuille
- Published in print: 2008
- Published Online: March 2012
- ISBN: 9780199216093
- eISBN: 9780191695971
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199216093.003.0002
- Subject: Psychology, Cognitive Psychology
This chapter provides the technical introduction to Bayesian methods. Probabilistic models of cognition are often referred to as Bayesian models, reflecting the central role that Bayesian inference plays in reasoning under uncertainty. The chapter introduces the basic ideas of Bayesian inference and discusses how it can be used in different contexts. Probabilistic models provide a unique opportunity to develop a rational account of human cognition that combines statistical learning with structured representations. The chapter recommends the EM algorithm and Markov chain Monte Carlo for estimating the parameters of models that incorporate latent variables, and for working with the complicated probability distributions that often arise in Bayesian inference.
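As a pointer to what such a technical introduction covers, the sketch below (standard material, not the chapter's code; the hypothesis space, data, and tuning are invented) applies Bayes' rule over a discrete hypothesis space and then draws posterior samples with a simple Metropolis algorithm, the kind of Markov chain Monte Carlo method the abstract mentions.

```python
# Standard illustrations of the tools named in the abstract; not the
# chapter's own code.
import math, random

# Bayes' rule over two hypotheses about a coin, after seeing 8 heads in a row.
priors = {"fair": 0.5, "biased": 0.5}
likelihoods = {"fair": 0.5 ** 8, "biased": 0.8 ** 8}
evidence = sum(priors[h] * likelihoods[h] for h in priors)
print({h: priors[h] * likelihoods[h] / evidence for h in priors})

# Metropolis sampling from the posterior over a coin weight theta
# (uniform prior, k heads in n flips); MCMC is the generic tool when
# no closed form is convenient.
def log_post(theta, k=8, n=10):
    if not 0.0 < theta < 1.0:
        return -math.inf
    return k * math.log(theta) + (n - k) * math.log(1.0 - theta)

theta, samples = 0.5, []
for _ in range(20_000):
    proposal = theta + random.gauss(0.0, 0.1)
    if math.log(random.random()) < log_post(proposal) - log_post(theta):
        theta = proposal  # accept the proposed move
    samples.append(theta)
burned = samples[2_000:]  # discard burn-in
print("posterior mean of theta ≈", sum(burned) / len(burned))
```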
Thomas L. Griffiths and Joshua B. Tenenbaum
- Published in print: 2007
- Published Online: April 2010
- ISBN: 9780195176803
- eISBN: 9780199958511
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195176803.003.0021
- Subject: Psychology, Developmental Psychology
A causal theory can be thought of as a grammar that generates events and that can be used to parse events to identify underlying causal structure. This chapter considers what the components of such a grammar might be: the analogues of syntactic categories and the rules that relate them in a linguistic grammar. It presents two proposals for causal grammars. The first asserts that the variables that describe events can be organized into causal categories, and it allows relationships between those categories to be expressed. The second uses a probabilistic variant of first-order logic to describe the ontology and causal laws expressed in an intuitive theory. The chapter illustrates how both kinds of grammar can guide causal learning.
Timothy J. O’Donnell
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262028844
- eISBN: 9780262326803
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262028844.003.0001
- Subject: Linguistics, Sociolinguistics / Anthropological Linguistics
This chapter introduces the problem of productivity: How do language learners determine which potential generalizations can actually be used to create novel expressions in their language and which occur only as parts of stored items? The first part of the chapter reviews historical approaches to this question, discussing previous unsuccessful attempts to reduce the problem of what is stored and what is computed to other properties. The second part outlines a new theory of storage and computation based on the idea that the problem can be solved by a probabilistic inference that optimizes a tradeoff between fewer, simpler stored items and simpler derivations of linguistic expressions. This inference-based model is contrasted with four other models to which it is compared throughout the book: (i) the full-parsing model, in which all structure is always computed; (ii) the full-listing model, in which all structure is stored after the first time it is computed; and (iii) two variants of the exemplar-based model, which hypothesizes all possible mixtures of computation and storage in the derivation of every expression.
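The proposed tradeoff can be caricatured with a description-length-style score: storing more items makes each derivation cheaper, but the inventory itself more expensive. The toy below is my sketch of that idea only, not the book's actual inference-based model, and the morph inventory is invented.

```python
# Toy description-length caricature of the storage/computation tradeoff
# (my sketch, not the book's model; the morph inventory is invented).
import math

def score(lexicon, corpus):
    """Total cost = size of stored items + pieces used across derivations."""
    lexicon_cost = sum(len(item) for item in lexicon)
    derivation_cost = 0
    for word in corpus:
        i = 0
        while i < len(word):
            # Greedy longest-match segmentation, purely for illustration.
            match = max((it for it in lexicon if word.startswith(it, i)),
                        key=len, default=None)
            if match is None:
                return math.inf  # corpus not derivable from this lexicon
            i += len(match)
            derivation_cost += 1
    return lexicon_cost + derivation_cost

corpus = ["coolness", "cheapness", "orderliness", "warmth"]
full_listing = set(corpus)  # store every word whole
decomposed = {"cool", "cheap", "orderli", "warm", "ness", "th"}
print("full listing:", score(full_listing, corpus))
print("decomposed:  ", score(decomposed, corpus))
```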
Luc Bovens and Stephan Hartmann
- Published in print: 2004
- Published Online: January 2005
- ISBN: 9780199269754
- eISBN: 9780191601705
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0199269750.003.0007
- Subject: Philosophy, Metaphysics/Epistemology
Presents some general reflections on the role and the challenges of probabilistic modelling in philosophy.
Alison Gopnik and Laura Schulz (eds)
- Published in print: 2007
- Published Online: April 2010
- ISBN: 9780195176803
- eISBN: 9780199958511
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195176803.001.0001
- Subject: Psychology, Developmental Psychology
This book outlines the recent revolutionary work in cognitive science formulating a “probabilistic model” theory of learning and development. It provides an accessible and clear introduction to probabilistic modeling in psychology, including causal-model, Bayes-net, and Bayesian approaches. It also outlines new cognitive and developmental psychological studies of statistical and causal learning, imitation, and theory formation; new philosophical approaches to causation; and new computational approaches to the representation of intuitive concepts and theories. The book brings together research in all of these areas of cognitive science, with chapters by researchers in all of these disciplines. Understanding causal structure is a central task of human cognition. Causal learning underpins the development of our concepts and categories, our intuitive theories, and our capacities for planning, imagination, and inference. This new work uses the framework of probabilistic models and interventionist accounts of causation in philosophy to provide a rigorous formal basis for “theory theories” of concepts and cognitive development. Moreover, the causal learning mechanisms this interdisciplinary research program has uncovered go dramatically beyond both the traditional mechanisms of nativist theories, such as modularity theories, and empiricist ones, such as association or connectionism. The chapters cover three topics: the role of intervention and action in causal understanding, the role of causation in categories and concepts, and the relationship between causal learning and intuitive theory formation. Though coming from different disciplines, the chapters converge on showing how we can use our own actions and the evidence we observe to learn accurately about the world.
Luc Bovens and Stephan Hartmann
- Published in print: 2004
- Published Online: January 2005
- ISBN: 9780199269754
- eISBN: 9780191601705
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/0199269750.001.0001
- Subject: Philosophy, Metaphysics/Epistemology
Probabilistic models have much to offer to epistemology and philosophy of science. Arguably, the coherence theory of justification claims that the more coherent a set of propositions is, the more confident one ought to be in its content, ceteris paribus. An impossibility result shows that there cannot exist a coherence ordering; a coherence quasi-ordering can, however, be constructed that respects this claim and is relevant to scientific-theory choice. Bayesian-network models of the reliability of information sources are made applicable to Condorcet-style jury voting, Tversky and Kahneman’s Linda puzzle, the variety-of-evidence thesis, the Duhem–Quine thesis, and the informational value of testimony.
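The information-source models mentioned here admit tiny worked instances. The snippet below is my toy with invented numbers, not the book's formalization: it computes how two independent, partially reliable witnesses who both affirm a hypothesis raise its probability.

```python
# Toy witness model with invented numbers, not the book's formalization:
# a reliable witness reports the truth; an unreliable one says "true"
# with probability 0.5 regardless; reports are independent given h.
def posterior_after_two_reports(prior_h=0.1, p_reliable=0.7):
    def p_says_true(h_true):
        return p_reliable * (1.0 if h_true else 0.0) + (1 - p_reliable) * 0.5

    like_h = p_says_true(True) ** 2        # two concurring reports
    like_not_h = p_says_true(False) ** 2
    evidence = prior_h * like_h + (1 - prior_h) * like_not_h
    return prior_h * like_h / evidence

print("P(h | two positive reports) =", posterior_after_two_reports())  # ≈ 0.78
```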
Peter V. Rabins
- Published in print: 2013
- Published Online: November 2015
- ISBN: 9780231164726
- eISBN: 9780231535458
- Item type: chapter
- Publisher: Columbia University Press
- DOI: 10.7312/columbia/9780231164726.003.0004
- Subject: Philosophy, Philosophy of Science
This chapter describes the probabilistic model, in which causes are conceptualized as events that affect the likelihood that another event will occur. In this model, causes act as influences, risk factors, predispositions, modifiers, and buffers. The complexity of the probabilistic concept of cause begins with the definition of the word “probability.” Its primary meaning relates to predictability or prognostication, that is, the likelihood of specific future outcomes or effects; the implication of this definition is that there is uncertainty as to the outcome. The chapter provides a historical overview of the concept of probabilistic cause and considers the characteristics of probabilistic reasoning that are relevant to causality, along with the challenges of causal probabilistic logic, the limitations of the probabilistic model, and the question of whether probabilistic reasoning differs from categorical reasoning. It also outlines criteria for choosing the appropriate model of causality.
Gerd Gigerenzer
- Published in print: 2002
- Published Online: October 2011
- ISBN: 9780195153729
- eISBN: 9780199849222
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195153729.003.0007
- Subject: Philosophy, General
Cognitive bias research claims that people are naturally prone to making mistakes in reasoning and memory, including the mistake of overestimating their knowledge. This chapter proposes a new theoretical model of confidence in knowledge based on the more charitable assumption that people are good judges of the reliability of their knowledge, provided that the knowledge is representatively sampled from a specified reference class. It claims that this model both predicts new experimental results and explains a wide range of extant experimental findings on confidence, including some perplexing inconsistencies. The chapter consists of three parts: an exposition of the proposed theory of probabilistic mental models (PMM theory), a report of experimental tests confirming the theory’s predictions, and an explanation of apparent anomalies in previous experimental results by means of PMMs.
Joshua B. Tenenbaum, Thomas L. Griffiths, and Sourabh Niyogi
- Published in print: 2007
- Published Online: April 2010
- ISBN: 9780195176803
- eISBN: 9780199958511
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195176803.003.0020
- Subject: Psychology, Developmental Psychology
This chapter presents a framework for understanding the structure, function, and acquisition of causal theories from a rational computational perspective. Using a “reverse engineering” approach, it considers the computational problems that intuitive theories help to solve, focusing on their role in learning and reasoning about causal systems, and then uses Bayesian statistics to describe the ideal solutions to these problems. The resulting framework highlights an analogy between causal theories and linguistic grammars: just as grammars generate sentences and guide inferences about their interpretation, causal theories specify a generative process for events and guide causal inference.
Raphaël Mourad (ed.)
- Published in print: 2014
- Published Online: December 2014
- ISBN: 9780198709022
- eISBN: 9780191779619
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198709022.001.0001
- Subject: Mathematics, Probability / Statistics, Biostatistics
At the crossroads between statistics and machine learning, probabilistic graphical models provide a powerful formal framework for modeling complex data. Probabilistic graphical models are probabilistic models whose graphical components denote conditional independence structures between random variables. The probabilistic framework makes it possible to deal with data uncertainty, while the conditional independence assumption helps in processing high-dimensional, complex data. Bayesian networks and Markov random fields represent two of the most popular classes of such models. With the rapid advancement of high-throughput technologies and their ever decreasing costs, a fast-growing volume of biological data of various types, the so-called omics, is in need of accurate and efficient methods for modeling prior to further downstream analysis. Network reconstruction from gene expression data represents perhaps the most emblematic area of research in which probabilistic graphical models have been successfully applied. However, these models have also created renewed interest in genetics, in particular in association genetics, causality discovery, prediction of outcomes, detection of copy number variations, and epigenetics. For all these reasons, it is foreseeable that such models will have a prominent role to play in advances in genome-wide analyses.
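The defining claim above, that the graph encodes conditional independence, can be checked numerically on the smallest interesting example: a chain A -> B -> C, where conditioning on B screens A off from C. This is a generic textbook demonstration with invented numbers, not drawn from the book.

```python
# Generic demonstration (not from the book): in a chain A -> B -> C,
# the factorization P(A)P(B|A)P(C|B) makes C independent of A given B.
import itertools

pA = {0: 0.6, 1: 0.4}
pB_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}    # pB_given_A[a][b]
pC_given_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.25, 1: 0.75}}  # pC_given_B[b][c]

joint = {(a, b, c): pA[a] * pB_given_A[a][b] * pC_given_B[b][c]
         for a, b, c in itertools.product((0, 1), repeat=3)}

def p_c_given(c, b, a=None):
    """P(C=c | B=b), optionally also conditioning on A=a."""
    num = sum(p for (x, y, z), p in joint.items()
              if z == c and y == b and (a is None or x == a))
    den = sum(p for (x, y, z), p in joint.items()
              if y == b and (a is None or x == a))
    return num / den

# All three values agree: knowing A adds nothing once B is known.
print(p_c_given(1, 0), p_c_given(1, 0, a=0), p_c_given(1, 0, a=1))
```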
Christine Sinoquet
- Published in print: 2014
- Published Online: December 2014
- ISBN: 9780198709022
- eISBN: 9780191779619
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198709022.003.0002
- Subject: Mathematics, Probability / Statistics, Biostatistics
The aim of this chapter is to offer an advanced tutorial to scientists with little or no background in probabilistic graphical models. For readers more familiar with these models, the chapter serves as a compendium of definitions and general methods, to browse through at will. Intentionally self-contained, the chapter begins with reminders of essential definitions, such as the distinction between marginal independence and conditional independence. It then briefly surveys the most popular classes of probabilistic graphical models: Markov chains, Bayesian networks, and Markov random fields. Next, probabilistic inference is explained and illustrated in the Bayesian network context. Finally, parameter and structure learning are presented.
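To give a flavor of the inference such a tutorial illustrates, here is exact inference by enumeration in the classic rain/sprinkler/wet-grass Bayesian network; the network and its numbers are the standard textbook toy, not taken from the chapter.

```python
# Exact inference by enumeration in the standard rain/sprinkler/wet-grass
# Bayesian network (textbook toy, not the chapter's example).
import itertools

def p_rain(r):
    return 0.2 if r else 0.8

def p_sprinkler(s, r):
    # The sprinkler is less likely to run when it rains.
    return (0.01 if s else 0.99) if r else (0.4 if s else 0.6)

def p_wet(w, s, r):
    p = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.0}[(s, r)]
    return p if w else 1.0 - p

def joint(r, s, w):
    # The joint factorizes along the graph: R -> S, and (R, S) -> W.
    return p_rain(r) * p_sprinkler(s, r) * p_wet(w, s, r)

# P(Rain | grass wet): marginalize the sprinkler out of the joint.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True)
          for r, s in itertools.product((True, False), repeat=2))
print("P(rain | grass wet) =", num / den)  # ≈ 0.36
```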
Timothy J. O’Donnell and Noah D. Goodman
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262028844
- eISBN: 9780262326803
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262028844.003.0003
- Subject: Linguistics, Sociolinguistics / Anthropological Linguistics
This chapter presents the mathematical details of the models studied in this book. It also discusses the inference algorithms used for each of the models and various other issues of practical concern for the simulations that we report later.
Timothy J. O’Donnell
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262028844
- eISBN: 9780262326803
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262028844.003.0008
- Subject: Linguistics, Sociolinguistics / Anthropological Linguistics
This chapter provides a short overview of the entire book.
Thomas P. Trappenberg
- Published in print: 2019
- Published Online: January 2020
- ISBN: 9780198828044
- eISBN: 9780191883873
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198828044.003.0007
- Subject: Neuroscience, Behavioral Neuroscience
This chapter revisits regression, now including uncertainty in the data through probabilistic models, and shows how modern probabilistic machine learning can be formulated. First, a simple stochastic generalization of the linear regression example is offered to introduce the formalism. This leads to the important maximum likelihood principle on which learning will be based. The concept is then generalized to non-linear problems in higher dimensions, and the chapter relates this to Bayes nets. The chapter ends with a discussion of how such a probabilistic approach is related to deep learning.
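The first step the abstract describes, a stochastic generalization of linear regression fit by maximum likelihood, has a compact illustration: under Gaussian noise the maximum likelihood estimate coincides with least squares. The sketch below uses invented data and is not the chapter's code.

```python
# Minimal sketch (invented data, not the chapter's code): probabilistic
# linear regression y ~ N(w*x + b, sigma^2) fit by maximum likelihood,
# which under Gaussian noise reduces to least squares.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 1.5 * x + 0.7 + rng.normal(0.0, 0.3, size=200)

# MLE for (w, b) under Gaussian noise = ordinary least squares.
X = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(X, y, rcond=None)

# The noise variance also has a closed-form MLE: the mean squared residual.
sigma2 = np.mean((y - (w * x + b)) ** 2)
print(f"w ≈ {w:.3f}, b ≈ {b:.3f}, sigma^2 ≈ {sigma2:.3f}")
```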
Timothy J. O'Donnell
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262028844
- eISBN: 9780262326803
- Item type: book
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262028844.001.0001
- Subject: Linguistics, Sociolinguistics / Anthropological Linguistics
Language allows us to express and comprehend an unbounded number of thoughts. This fundamental and much-celebrated property is made possible by a division of labor between a large inventory of stored items (e.g., affixes, words, idioms) and a computational system that productively combines these stored units on the fly to create a potentially unlimited array of new expressions. A language learner must discover a language’s productive, reusable units and determine which computational processes can give rise to new expressions. But how does the learner differentiate between the reusable, generalizable units (for example, the affix -ness, as in coolness, orderliness, cheapness) and apparent units that do not actually generalize in practice (for example, -th, as in warmth but not coolth)? This book proposes a formal computational model, fragment grammars, to answer these questions. This model treats productivity and reuse as the target of inference in a probabilistic framework, asking how an optimal agent can make use of the distribution of forms in the linguistic input to learn the distribution of productive word-formation processes and reusable units in a given language.
Laura Schulz, Tamar Kushnir, and Alison Gopnik
- Published in print: 2007
- Published Online: April 2010
- ISBN: 9780195176803
- eISBN: 9780199958511
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195176803.003.0006
- Subject: Psychology, Developmental Psychology
This chapter starts from the premise that much of children's knowledge takes the form of abstract, coherent, causal claims that are learned from, and defeasible by, evidence. This premise is consistent with an interventionist view of causal knowledge, formalized in computational models using causal Bayes net representations. The chapter reviews empirical studies suggesting that, consistent with this account, preschoolers use patterns of evidence to: (a) create novel, effective interventions; (b) infer the structure of causal relationships, including relationships involving unobserved causes; (c) accurately predict distinct outcomes from observed evidence and evidence generated by interventions; (d) integrate novel evidence with prior beliefs; and (e) distinguish informative interventions from confounded ones.
Itzhak Gilboa, Larry Samuelson, and David Schmeidler
- Published in print: 2015
- Published Online: May 2015
- ISBN: 9780198738022
- eISBN: 9780191801419
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198738022.003.0004
- Subject: Economics and Finance, Econometrics
This chapter presents a formal model that captures both case-based and rule-based reasoning. The model is general enough to describe Bayesian reasoning, which may be viewed as an extreme example of rule-based reasoning. It suggests conditions under which Bayesian reasoning will give way to other modes of reasoning, and alternative conditions under which the opposite conclusion holds. It discusses how probabilistic reasoning may emerge periodically, with other modes of reasoning used between the regimes of different probabilistic models.
Alison Gopnik and Laura Schulz
- Published in print: 2007
- Published Online: April 2010
- ISBN: 9780195176803
- eISBN: 9780199958511
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195176803.003.0001
- Subject: Psychology, Developmental Psychology
This chapter provides a simple, clear, and (hopefully) amusing introduction to causal-model and Bayes-net theories in computer science, the interventionist account of causation in philosophy, and the psychology of causal learning in both adults and children. It takes the form of a fictional e-mail exchange between a developmental psychologist and a philosopher/computer scientist in which each partner explains the background of their field to the other. In two attachments, the fictional authors review the literature on causal Bayes nets and on the psychology of causal inference.