Željko Ivezić, Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691151687
- eISBN:
- 9781400848911
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691151687.003.0005
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter introduces the most important aspects of Bayesian statistical inference and techniques for performing such calculations in practice. It first reviews the basic steps in Bayesian inference in the early sections of the chapter, and then illustrates them with several examples in the sections that follow. Numerical techniques for solving complex problems are discussed next, and the final section provides a summary of pros and cons for classical and Bayesian methods. It argues that most users of Bayesian estimation methods are likely to use a mix of Bayesian and frequentist tools. The reverse is also true—frequentist data analysts, even if they stay formally within the frequentist framework, are often influenced by “Bayesian thinking,” referring to “priors” and “posteriors.” The most advisable position is to know both paradigms well, in order to make informed judgments about which tools to apply in which situations.
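The basic update the chapter reviews — multiply a prior by the likelihood and normalize — can be sketched numerically in a few lines. The snippet below is an illustrative grid-based example, not code from the book; the simulated Gaussian data, the known error, and the flat prior are assumptions chosen only to keep it short.

```python
import numpy as np
from scipy import stats

# Minimal grid-based Bayesian update: posterior ∝ prior × likelihood.
# The data, the flat prior, and the known error sigma are assumptions for this sketch.
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=20)      # simulated measurements
sigma = 2.0                                          # measurement error, assumed known

mu_grid = np.linspace(-3.0, 5.0, 1001)               # grid over the unknown mean mu
log_prior = np.zeros_like(mu_grid)                   # flat prior on mu
log_like = np.array([stats.norm.logpdf(data, loc=mu, scale=sigma).sum()
                     for mu in mu_grid])

log_post = log_prior + log_like
post = np.exp(log_post - log_post.max())             # avoid underflow before normalizing
post /= np.trapz(post, mu_grid)                      # normalize to a proper density

print("posterior mean of mu:", np.trapz(mu_grid * post, mu_grid))
```

With the flat prior, the posterior mean reduces to the sample mean, which makes this sketch easy to check by hand.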
Brian J. Scholl
- Published in print:
- 2005
- Published Online:
- January 2007
- ISBN:
- 9780195179675
- eISBN:
- 9780199869794
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195179675.003.0003
- Subject:
- Philosophy, Metaphysics/Epistemology
This chapter explores a way in which visual processing may involve innate constraints and attempts to show how such processing overcomes one enduring challenge to nativism. In particular, many challenges to nativist theories in other areas of cognitive psychology (e.g., ‘theory of mind’, infant cognition) have focused on the later development of such abilities, and have argued that such development is in conflict with innate origins (since those origins would have to be somehow changed or overwritten). Innateness, in these contexts, is seen as antidevelopmental, associated instead with static processes and principles. In contrast, certain perceptual models demonstrate how the very same mental processes can both be innately specified and yet develop richly in response to experience with the environment. This process is entirely unmysterious, as shown in certain formal theories of visual perception, including those that appeal to spontaneous endogenous stimulation and those based on Bayesian inference.
Gerd Gigerenzer
- Published in print:
- 2002
- Published Online:
- October 2011
- ISBN:
- 9780195153729
- eISBN:
- 9780199849222
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195153729.003.0006
- Subject:
- Philosophy, General
This chapter defines the concepts of natural sampling and natural frequencies, and reports experimental evidence for the impact of various external representations on statistical thinking. The mental strategies or shortcuts people use, not only their numerical estimates of risks, turn out to be a function of the external representation of numbers we choose. This chapter provides a theoretical framework that specifies why frequency formats should improve Bayesian reasoning and presents two studies that test whether they do. Its goal is to lead research on Bayesian inference out of the present conceptual cul-de-sac and to shift the focus from human errors to human engineering: how to help people reason the Bayesian way without even teaching them.
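To make the format contrast concrete — the numbers below are illustrative and not taken from the chapter's studies — consider a screening problem stated first in conditional probabilities and then in natural frequencies.

```latex
\begin{align*}
&\text{Probability format: } P(D)=0.01,\quad P(+\mid D)=0.8,\quad P(+\mid \neg D)=0.1,\\
&\quad P(D\mid +)=\frac{P(+\mid D)\,P(D)}{P(+\mid D)\,P(D)+P(+\mid \neg D)\,P(\neg D)}
 =\frac{0.8\times 0.01}{0.8\times 0.01+0.1\times 0.99}\approx 0.075.\\[4pt]
&\text{Natural frequencies: of 1000 people, 10 have the disease, 8 of whom test positive;}\\
&\quad \text{of the 990 without it, about 99 test positive, so } P(D\mid +)\approx \frac{8}{8+99}\approx 0.075.
\end{align*}
```

The frequency version requires only a comparison of two counts, which is the sense in which the representation, rather than the underlying mathematics, does the work.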
N. Thompson Hobbs and Mevin B. Hooten
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691159287
- eISBN:
- 9781400866557
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691159287.003.0005
- Subject:
- Biology, Ecology
This chapter lays out the basic principles of Bayesian inference, building on the concepts of probability developed in Chapter 3. It seeks to use the rules of probability to show how Bayes' theorem works, by making use of the conditional rule of probability and the law of total probability. The chapter begins with the central, underpinning tenet of the Bayesian view: the world can be divided into quantities that are observed and quantities that are unobserved. Unobserved quantities include parameters in models, latent states predicted by models, missing data, effect sizes, future states, and data before they are observed. We wish to learn about these quantities using observations. The Bayesian framework for achieving that understanding is applied in exactly the same way regardless of the specifics of the research problem at hand or the nature of the unobserved quantities.
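In generic notation (assumed here for illustration, not quoted from the book), the two rules combine as follows for an unobserved quantity θ and observations y:

```latex
\begin{align*}
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\,p(\theta)}{p(y)},
\qquad
p(y) \;=\; \int p(y \mid \theta)\,p(\theta)\,d\theta .
\end{align*}
```

The numerator is the conditional rule applied to the joint distribution of θ and y; the denominator is the law of total probability, which averages the likelihood over everything the prior allows.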
Jie W Weiss and David J Weiss
- Published in print:
- 2008
- Published Online:
- January 2009
- ISBN:
- 9780195322989
- eISBN:
- 9780199869206
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195322989.003.0017
- Subject:
- Psychology, Cognitive Psychology
The Likelihood Principle of Bayesian inference asserts that only likelihoods matter to single-stage inference. A likelihood is the probability of evidence given a hypothesis multiplied by a positive constant. The constant cancels out of simple versions of Bayes's Theorem, and so is irrelevant to single-stage inferences. Most non-statistical inferences require a multistage path from evidence to hypotheses; testimony that an event occurred does not guarantee that in fact it did. Hierarchical Bayesian models explicate such cases. For such models, the Likelihood Principle applies to a collection of data elements treated as a single datum conditionally independent of other similar collections. It does not necessarily apply to a single data element taken alone. This has unfortunate implications; in particular, it does not permit the inputs to Bayesian arithmetic at all levels to be likelihood ratios. This chapter sorts out these issues in the context of a trial in which one author is accused of murdering another, with the third as a key witness.
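The role of the positive constant can be written out directly; the statement below is generic, not a passage from the chapter. If each likelihood is specified only up to a constant c > 0,

```latex
\begin{align*}
P(H_i \mid E)
=\frac{c\,P(E \mid H_i)\,P(H_i)}{\sum_j c\,P(E \mid H_j)\,P(H_j)}
=\frac{P(E \mid H_i)\,P(H_i)}{\sum_j P(E \mid H_j)\,P(H_j)},
\end{align*}
```

so the constant cancels and only likelihood ratios matter to a single-stage inference — the step that, as the chapter argues, need not carry over to individual data elements in a hierarchical model.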
Fei Xu
- Published in print:
- 2008
- Published Online:
- January 2008
- ISBN:
- 9780195332834
- eISBN:
- 9780199868117
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195332834.003.0010
- Subject:
- Philosophy, Philosophy of Mind
This chapter advocates a view that is a substantive middle ground between the extreme versions of nativism and empiricism — a view dubbed ‘rational constructivism’. This is a view that commits us to some innate (or acquired) constraints and a set of powerful learning and inference mechanisms that may be critical for development. The mechanisms of statistical inference are used as a means to bridge the gap between discussions of innate knowledge and discussions of learning and conceptual change. In particular, the general framework of Bayesian inference is adopted, and some recent research providing empirical evidence for the psychological reality of these inference mechanisms is presented.
Luc Bauwens, Michel Lubrano, and Jean-François Richard
- Published in print:
- 2000
- Published Online:
- September 2011
- ISBN:
- 9780198773122
- eISBN:
- 9780191695315
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198773122.003.0001
- Subject:
- Economics and Finance, Econometrics
This chapter discusses the relationship between mathematical statistics, decision theory, and the application of Bayesian inference to econometrics. It analyses the Bayesian approach to decision making under uncertainty and suggests that this method provides a strong rationale for the use of Bayesian techniques in econometrics. It introduces a set of simple axioms to formalize a concept of rational behaviour in the face of uncertainty and presents estimation and hypothesis testing, both from a classical and Bayesian perspective.
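A minimal statement of the decision-theoretic rationale, in generic notation assumed for this sketch rather than taken from the book: a Bayesian decision rule selects the action that minimizes posterior expected loss,

```latex
\begin{align*}
a^{*}(y) \;=\; \arg\min_{a}\; \mathbb{E}\!\left[L(\theta,a)\mid y\right]
\;=\; \arg\min_{a} \int L(\theta,a)\,p(\theta\mid y)\,d\theta .
\end{align*}
```

Under quadratic loss this rule returns the posterior mean as a point estimate, which is one way estimation and hypothesis testing fall out of the same decision framework.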
Ladan Shams and Ulrik Beierholm
- Published in print:
- 2011
- Published Online:
- September 2012
- ISBN:
- 9780195387247
- eISBN:
- 9780199918379
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195387247.003.0013
- Subject:
- Psychology, Cognitive Neuroscience, Cognitive Psychology
This chapter first discusses experimental findings showing that multisensory perception encompasses a spectrum of phenomena ranging from full integration (or fusion), to partial integration, to complete segregation. Next, it describes two Bayesian causal-inference models that can account for the entire range of combinations of two or more sensory cues. It shows that one of these models, which is a hierarchical Bayesian model, is a special form of the other one (which is a nonhierarchical model). It then compares the predictions of these models with human data in multiple experiments and shows that Bayesian causal-inference models can account for the human data remarkably well. Finally, a study is presented that investigates the stability of priors in the face of drastic change in sensory conditions.
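The core computation in models of this kind can be sketched generically (the chapter's specific models and parameters may differ). With a visual cue $x_V$, an auditory cue $x_A$, and a binary variable $C$ indicating one common cause ($C=1$) or two independent causes ($C=2$),

```latex
\begin{align*}
P(C=1 \mid x_V, x_A)
=\frac{p(x_V,x_A \mid C=1)\,P(C=1)}
      {p(x_V,x_A \mid C=1)\,P(C=1)+p(x_V,x_A \mid C=2)\,P(C=2)} .
\end{align*}
```

Full integration corresponds to inferring a common cause, segregation to inferring separate causes, and partial integration arises when estimates under the two structures are averaged with weights given by this posterior.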
Luc Bauwens, Michel Lubrano, and Jean-François Richard
- Published in print:
- 2000
- Published Online:
- September 2011
- ISBN:
- 9780198773122
- eISBN:
- 9780191695315
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198773122.003.0002
- Subject:
- Economics and Finance, Econometrics
This chapter presents the basic concepts and tools that are useful for modelling and for Bayesian inference. It defines density kernels useful for simplifying notation and computations and explains the likelihood principle and its implications for the Bayesian treatment of nuisance parameters. It discusses the notion of natural conjugate inference, which is an important tool of Bayesian analysis in the case of the exponential family, and provides details on the natural conjugate framework.
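Natural conjugate updating is easiest to see in a one-parameter member of the exponential family; the beta–binomial case below is a textbook illustration, not an example drawn from the chapter itself.

```latex
\begin{align*}
\theta \sim \mathrm{Beta}(a,b), \qquad
y \mid \theta \sim \mathrm{Binomial}(n,\theta)
\;\;\Longrightarrow\;\;
\theta \mid y \sim \mathrm{Beta}(a+y,\; b+n-y),
\end{align*}
```

The posterior stays in the prior's family and the data enter only through sufficient statistics, which is the property that makes conjugate analysis tractable.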
Joshua B. Tenenbaum, Thomas L. Griffiths, and Sourabh Niyogi
- Published in print:
- 2007
- Published Online:
- April 2010
- ISBN:
- 9780195176803
- eISBN:
- 9780199958511
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195176803.003.0020
- Subject:
- Psychology, Developmental Psychology
This chapter presents a framework for understanding the structure, function, and acquisition of causal theories from a rational computational perspective. Using a “reverse engineering” approach, it considers the computational problems that intuitive theories help to solve, focusing on their role in learning and reasoning about causal systems, and then using Bayesian statistics to describe the ideal solutions to these problems. The resulting framework highlights an analogy between causal theories and linguistic grammars: just as grammars generate sentences and guide inferences about their interpretation, causal theories specify a generative process for events, and guide causal inference.
Ludwig Fahrmeir and Thomas Kneib
- Published in print:
- 2011
- Published Online:
- September 2011
- ISBN:
- 9780199533022
- eISBN:
- 9780191728501
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199533022.003.0004
- Subject:
- Mathematics, Probability / Statistics, Biostatistics
This chapter considers Bayesian inference in semiparametric mixed models (SPMMs) for longitudinal data. Section 4.1 assumes Gaussian smoothness priors, focusing on Bayesian P-splines in combination with Gaussian priors for random effects, and outlines various model specifications that are included as special cases in SPMMs. Section 4.2 describes inferential techniques, detailing both empirical Bayes estimation based on mixed model technology and full Bayes techniques. Section 4.3 discusses the relation between Bayesian smoothing and correlation. Section 4.4 considers some additional or alternative semiparametric extensions of generalized linear mixed models: First, as in Section 3.2, the assumption of Gaussian random effects can be removed by allowing nonparametric Dirichlet process or Dirichlet process mixture priors in combination with Gaussian smoothness priors for functional effects. Second, local adaptivity of functional effects can be improved by scale mixtures of Gaussian smoothness priors, with variance parameters following stochastic process priors in another hierarchical stage. Third, the case of high-dimensional fixed effects β is also considered, with Bayesian shrinkage priors regularizing the resulting ill-posed inferential problem. Shrinkage priors can also be used for model choice and variable selection. The final Section 4.5 describes strategies for model choice and model checking in SPMMs.
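As a rough sketch of the kind of predictor such models use (notation assumed here, not the book's): a semiparametric mixed model for longitudinal observations on subject $i$ at time $t_{ij}$ might combine a smooth time trend, parametric fixed effects, and a subject-specific random effect,

```latex
\begin{align*}
\eta_{ij} \;=\; f(t_{ij}) + x_{ij}'\beta + b_i ,
\qquad b_i \sim N(0,\tau_b^2),
\end{align*}
```

with $f$ represented as a Bayesian P-spline whose coefficients carry a random-walk smoothness prior, so that smoothing parameters and random-effects variances can be estimated within one hierarchical model.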
Xiao‐Li Meng
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199694587
- eISBN:
- 9780191731921
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199694587.003.0016
- Subject:
- Mathematics, Probability / Statistics
H‐likelihood refers to a likelihood function of both fixed parameters and random “unobservables,” such as missing data and latent variables. The method then typically proceeds by maximizing over the unobservables via an adjusted profile H‐likelihood, and carries out a Fisher‐information‐like calculation for (predictive) variance estimation. The claimed advantage is its avoidance of all “bad” elements of Bayesian prediction, namely the need for prior specification and posterior integration. This talk attempts to provide an in‐depth look into one of the most intriguing mysteries of modern statistics: why have the proponents of the H‐likelihood method (Lee and Nelder, 1996, 2001, 2005, 2009) been so convinced of its merits when almost everyone else considers it invalid as a general method? The findings are somewhat intriguing themselves. On the one hand, H‐likelihood turns out to be Bartlizable under easily verifiable conditions on the marginal distribution of the unobservables, and such conditions point to a transformation of unobservables that makes it possible to interpret one predictive distribution of the unobservables from three perspectives: Bayesian, fiducial and frequentist. On the other hand, the hope for such a Holy Grail in general is diminished by the fact that the log H‐likelihood surface cannot generally be summarized quadratically due to the lack of accumulation of information for unobservables, which seems to be the Achilles' Heel of the H‐likelihood method.
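For orientation, a standard definition along the lines of Lee and Nelder (notation assumed here rather than quoted from the chapter): with data $y$, unobservables $v$, and fixed parameters $\theta$, the h-likelihood is

```latex
\begin{align*}
h(\theta, v) \;=\; \log f(y \mid v; \theta) \;+\; \log f(v; \theta),
\end{align*}
```

and the method maximizes an adjusted profile of $h$ over $v$ instead of integrating $v$ out, which is precisely the step whose general validity the chapter interrogates.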
Terran Lane
- Published in print:
- 2011
- Published Online:
- September 2011
- ISBN:
- 9780195393798
- eISBN:
- 9780199897049
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195393798.003.0015
- Subject:
- Neuroscience, Behavioral Neuroscience, Development
Neuroscience data, from single-neuron recordings to whole-brain functional neuroimaging, is swamped with variability. The system under examination changes from subject to subject, trial to trial, moment to moment. Such variation can be regarded in two fundamentally different ways: as noise (typically additive Gaussian) or as an effect of some underlying latent state variable. Even when the best efforts are made to control for such hidden conditions, it is virtually impossible to control all possible variability. Having done our best to control the system, we typically treat the remaining variation as noise and “average it out” across subjects or trials. But doing so neglects the fact that variability due to latent variables carries real information that can tell us a great deal about the system. Bayesian statistical reasoning gives us a powerful tool for exploiting this additional information. Using these inferential mechanisms, variables can be estimated directly from observable data. This chapter describes the use of Bayesian inference of latent variables to solve key data analysis problems including identification of brain activity networks, group-level variability analysis, and identification of comorbid conditions.
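The two ways of regarding variability can be written side by side (generic notation, assumed for illustration): as additive noise around a fixed response, or as the effect of a latent state $z$,

```latex
\begin{align*}
\text{noise model:}\quad & y = f(x) + \varepsilon, \qquad \varepsilon \sim N(0,\sigma^2),\\
\text{latent-state model:}\quad & y \sim p(y \mid z, \theta), \qquad z \sim p(z \mid \phi),
\end{align*}
```

In the second formulation, Bayesian inference targets $p(z,\theta,\phi \mid y)$, so the trial-to-trial variation is estimated rather than averaged away.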
Ludwig Fahrmeir and Thomas Kneib
- Published in print:
- 2011
- Published Online:
- September 2011
- ISBN:
- 9780199533022
- eISBN:
- 9780191728501
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199533022.003.0002
- Subject:
- Mathematics, Probability / Statistics, Biostatistics
This chapter reviews basic concepts for smoothing and semiparametric regression based on roughness penalties or — from a Bayesian perspective — corresponding smoothness priors. In particular, it introduces several tools for statistical modelling and inference that will be utilized in later chapters. It also highlights the close relation between frequentist penalized likelihood approaches and Bayesian inference based on smoothness priors. The chapter is organized as follows. Section 2.1 considers the classical smoothing problem for time series of Gaussian and non-Gaussian observations. Section 2.2 introduces penalized splines and their Bayesian counterpart as a computationally and conceptually attractive alternative to random-walk priors. Section 2.3 extends the univariate smoothing approaches to additive and generalized additive models.
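The penalty–prior correspondence the chapter highlights can be stated compactly; second-order differences are shown as one common choice, and the notation is assumed for this sketch.

```latex
\begin{align*}
\ell_{\mathrm{pen}}(f) \;=\; \ell(f) - \frac{\lambda}{2}\sum_{t}\bigl(\Delta^2 f_t\bigr)^2
\qquad\Longleftrightarrow\qquad
f_t = 2f_{t-1} - f_{t-2} + u_t, \quad u_t \sim N(0,\tau^2),
\end{align*}
```

The smoothing parameter plays the role of an inverse prior variance, so the penalized likelihood estimate coincides with the posterior mode under the second-order random-walk prior.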
Luc Bauwens, Michel Lubrano, and Jean-François Richard
- Published in print:
- 2000
- Published Online:
- September 2011
- ISBN:
- 9780198773122
- eISBN:
- 9780191695315
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198773122.003.0005
- Subject:
- Economics and Finance, Econometrics
This chapter examines the application of dynamic regression models for inference and prediction with dynamic econometric models. It shows how to extend to the dynamic case the notion of Bayesian cut seen in the static case, in order to justify conditional inference. The chapter also explains how Bayesian inference can be used for single-equation dynamic models. It discusses the particular case of models with autoregressive errors, addresses the issues raised by moving average errors, and illustrates the empirical use of the error correction model with an analysis of a money demand function for Belgium.
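A single-equation example of the kind of model treated here, written generically rather than copied from the chapter, is a regression with first-order autoregressive errors:

```latex
\begin{align*}
y_t = x_t'\beta + u_t, \qquad u_t = \rho\,u_{t-1} + \varepsilon_t,
\qquad \varepsilon_t \sim \text{i.i.d. } N(0,\sigma^2),\quad |\rho|<1,
\end{align*}
```

Bayesian inference then treats $\beta$, $\rho$, and $\sigma^2$ jointly, with the initial conditions either conditioned on or modelled explicitly.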
Michael N. Shadlen, Roozbeh Kiani, Timothy D. Hanks, and Anne K. Churchland
- Published in print:
- 2008
- Published Online:
- May 2016
- ISBN:
- 9780262195805
- eISBN:
- 9780262272353
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262195805.003.0004
- Subject:
- Psychology, Social Psychology
The aim of statistical decision theories is to understand how evidence, prior knowledge, and values lead an organism to commit to one of a number of alternatives. Two main statistical decision theories, signal-detection theory and sequential analysis, assert that decision makers obtain evidence—often from the senses—that is corrupted by noise and weigh this evidence alongside bias and value to select the best choice. Signal-detection theory has been the dominant conceptual framework for perceptual decisions near threshold. Sequential analysis extends this framework by incorporating time and introducing a rule for terminating the decision process. This extension allows the trade-off between decision speed and accuracy to be studied, and invites us to consider decision rules as policies on a stream of evidence acquired in time. In light of these theories, simple perceptual decisions, which can be studied in the neurophysiology laboratory, allow principles that apply to more complex decisions to be exposed. The goal of this chapter is to “go beyond the data” to postulate a number of unifying principles of complex decisions based on our findings with simple decisions. We make speculative points and argue positions that should be viewed as controversial and provocative. In many places, a viewpoint will merely be sketched without going into much detail and without ample consideration of alternatives, except by way of contrast when necessary to make a point. The aim is not to convince but to pique interest. The chapter is divided into two main sections. The first suggests that an intention-based framework for decision making extends beyond simple perceptual decisions to a broad variety of more complex situations. The second, which is a logical extension of the first, poses a challenge to Bayesian inference as the dominant mathematical foundation of decision making.
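The sequential-analysis idea invoked above has a standard formal core, shown here generically rather than as the chapter's own derivation: accumulate the log likelihood ratio of the two hypotheses over successive samples of evidence and stop at a bound,

```latex
\begin{align*}
S_n \;=\; \sum_{i=1}^{n}\log\frac{p(e_i \mid h_1)}{p(e_i \mid h_2)},
\qquad
\text{choose } h_1 \text{ if } S_n \ge A,\quad
\text{choose } h_2 \text{ if } S_n \le B,\quad
\text{otherwise continue sampling},
\end{align*}
```

The placement of the bounds $A$ and $B$ sets the trade-off between decision speed and accuracy that the chapter emphasizes.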
Luc Bauwens, Michel Lubrano, and Jean-François Richard
- Published in print:
- 2000
- Published Online:
- September 2011
- ISBN:
- 9780198773122
- eISBN:
- 9780191695315
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198773122.003.0007
- Subject:
- Economics and Finance, Econometrics
This chapter examines the importance of heteroscedasticity and the autoregressive conditional heteroscedasticity (ARCH) model in econometric analysis, particularly in the Bayesian inference approach. It discusses the case of functional heteroscedasticity and proposes a general method for detecting heteroscedasticity. It explains that neglecting heteroscedasticity may yield a posterior distribution for the regression coefficients that differs from the one obtained when the heteroscedasticity is taken into account.
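For reference, the simplest member of this model class, an ARCH(1), written generically rather than in the chapter's notation, lets the conditional variance depend on the lagged squared error:

```latex
\begin{align*}
y_t = x_t'\beta + \varepsilon_t, \qquad
\varepsilon_t \mid \mathcal{I}_{t-1} \sim N(0, h_t), \qquad
h_t = \omega + \alpha\,\varepsilon_{t-1}^{2}, \quad \omega>0,\ \alpha\ge 0 .
\end{align*}
```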
Thomas L. Griffiths and Joshua B. Tenenbaum
- Published in print:
- 2007
- Published Online:
- April 2010
- ISBN:
- 9780195176803
- eISBN:
- 9780199958511
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195176803.003.0021
- Subject:
- Psychology, Developmental Psychology
A causal theory can be thought of as a grammar that generates events, and that can be used to parse events to identify underlying causal structure. This chapter considers what the components of such a grammar might be — the analogues of syntactic categories and the rules that relate them in a linguistic grammar. It presents two proposals for causal grammars. The first asserts that the variables which describe events can be organized into causal categories, and allows relationships between those categories to be expressed. The second uses a probabilistic variant of first-order logic in order to describe the ontology and causal laws expressed in an intuitive theory. This chapter illustrates how both kinds of grammar can guide causal learning.
Gerd Gigerenzer
- Published in print:
- 2002
- Published Online:
- October 2011
- ISBN:
- 9780195153729
- eISBN:
- 9780199849222
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195153729.003.0004
- Subject:
- Philosophy, General
Human color vision is adapted to the spectral properties of natural sunlight. More generally, the perceptual system has been shaped by the environment in which human ancestors evolved, the environment often referred to as the “environment of evolutionary adaptiveness”, or EEA. This chapter proposes that human reasoning processes, like those of color constancy, are designed for information that comes in a format that was present in the EEA. It focuses on a class of inductive reasoning processes technically known as Bayesian inference, specifically a simple version thereof in which an organism infers from one or a few indicators which of two events is true.
Luc Bauwens, Michel Lubrano, and Jean-François Richard
- Published in print:
- 2000
- Published Online:
- September 2011
- ISBN:
- 9780198773122
- eISBN:
- 9780191695315
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198773122.003.0006
- Subject:
- Economics and Finance, Econometrics
This chapter examines the application of the unit root hypothesis in econometric analysis, particularly in the Bayesian inference approach. It explains that testing for a unit root in a Bayesian framework is one of the most controversial topics in the economic literature. This is because testing itself is one of the contentious topics between classical and Bayesian statisticians, and because the unit root hypothesis is a point hypothesis: Bayesians are reluctant to test a point hypothesis, since it is not natural to compare an interval that receives positive probability with a point null hypothesis of zero mass.
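In its simplest form (generic notation, assumed for illustration) the hypothesis concerns the autoregressive root of an AR(1) process:

```latex
\begin{align*}
y_t = \rho\,y_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0,\sigma^2),
\qquad H_0:\ \rho = 1 \ \text{(unit root)} \quad\text{vs.}\quad H_1:\ |\rho| < 1,
\end{align*}
```

which makes explicit why the null receives zero mass under any continuous prior on $\rho$, the source of the difficulty described above.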