Michael Goldstein
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199694587
- eISBN:
- 9780191731921
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199694587.003.0007
- Subject:
- Mathematics, Probability / Statistics
Computer simulators offer a powerful approach for studying complex physical systems. We consider their use in current practice and the role of external uncertainty in bridging the gap between the properties of the model and of the system. The interpretation of this uncertainty analysis raises questions about the role and meaning of the Bayesian approach. We summarize some theory which is helpful to clarify and amplify the role of external specifications of uncertainty, and illustrate some of the types of calculation suggested by this approach.
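As a minimal illustration of the gap-bridging role of externally specified uncertainty (a sketch, not drawn from the chapter: the quadratic simulator, input distribution, and discrepancy scale are all invented), a toy simulator can be run under input uncertainty with an additive model-discrepancy term standing for the difference between simulator and system:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):
    """Toy computer simulator standing in for an expensive physical model."""
    return 1.5 * x ** 2 + 0.5 * x

# External (expert-specified) uncertainty about the true input, and about
# the simulator's structural discrepancy from the real system.
x_samples = rng.normal(loc=2.0, scale=0.1, size=10_000)      # input uncertainty
discrepancy = rng.normal(loc=0.0, scale=0.3, size=10_000)    # model discrepancy

system_prediction = simulator(x_samples) + discrepancy

print("mean prediction:", system_prediction.mean())
print("95% interval   :", np.percentile(system_prediction, [2.5, 97.5]))
```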
N. Thompson Hobbs and Mevin B. Hooten
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691159287
- eISBN:
- 9781400866557
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691159287.003.0003
- Subject:
- Biology, Ecology
This chapter describes the rules of probability as well as probability distributions. Because models are inherently, and deliberately, approximate, the approximation they embody must be understood in terms of uncertainty. Thus, equipped with a proper understanding of the principles of probability, ecologists can analyze the particular research problem at hand regardless of its idiosyncrasies. These analyses extend logically from first principles rather than from a particular statistical recipe. The chapter starts with the definition of probability and develops a logical progression of concepts extending from it to a fully specified and implemented Bayesian analysis appropriate for a broad range of research problems in ecology.
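A minimal sketch of the progression the chapter describes, from the definition of probability to a Bayesian posterior, using a grid approximation for a hypothetical detection probability (the data and flat prior are invented, not taken from the book):

```python
import numpy as np
from scipy.stats import binom

# Grid approximation built directly from Bayes' theorem:
# posterior is proportional to likelihood times prior.
phi = np.linspace(0.001, 0.999, 999)     # detection probability
prior = np.ones_like(phi)                # flat prior over the grid
y, n = 12, 30                            # hypothetical data: 12 detections in 30 trials
likelihood = binom.pmf(y, n, phi)

posterior = likelihood * prior
posterior /= posterior.sum()             # normalize over the grid

print("posterior mean of phi:", np.sum(phi * posterior))
```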
L. Dobrzyński
- Published in print:
- 2004
- Published Online:
- September 2007
- ISBN:
- 9780198501688
- eISBN:
- 9780191718045
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198501688.003.0007
- Subject:
- Physics, Atomic, Laser, and Optical Physics
This chapter discusses the Maximum Entropy Method (MEM). It begins with a general introduction to MEM, then develops the application to the reconstruction of the three-dimensional electron density distribution from the measured one-dimensional projections (the directional Compton profiles) using both the MEED code and the exact solution method. Statistical errors in the reconstruction maps are considered and results for a number of real data sets are presented. The chapter, which has 30 references, forms an introduction to MEM generally for x-ray practitioners.
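For orientation only, a generic maximum-entropy sketch (unrelated to the MEED code or Compton-profile data): choose the distribution of largest entropy consistent with a single measured constraint, here an invented second-moment constraint, solved with SciPy's SLSQP optimizer:

```python
import numpy as np
from scipy.optimize import minimize

n = 50
x = np.linspace(-1.0, 1.0, n)

def neg_entropy(p):
    """Negative Shannon entropy (to be minimized)."""
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))

# Constraints: p sums to one, and a single "measured" quantity
# (here simply the second moment, an invented value) is reproduced.
constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    {"type": "eq", "fun": lambda p: p @ x**2 - 0.2},
]

p0 = np.full(n, 1.0 / n)
res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * n,
               constraints=constraints, method="SLSQP")
print("entropy of reconstruction:", -res.fun)
```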
Christopher H. Schmid and Kerrie Mengersen
- Published in print:
- 2013
- Published Online:
- October 2017
- ISBN:
- 9780691137285
- eISBN:
- 9781400846184
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691137285.003.0011
- Subject:
- Biology, Ecology
This chapter introduces a Bayesian approach to meta-analysis. It discusses the ways in which a Bayesian approach differs from the method of moments and maximum likelihood methods described in chapters 9 and 10, and summarizes the steps required for a Bayesian analysis. It shows that Bayesian methods provide the basis for a rich variety of very flexible models, explicit statements about uncertainty of model parameters, inclusion of other information relevant to an analysis, and direct probabilistic statements about parameters of interest. In a meta-analysis context, this allows for more straightforward accommodation of study-specific differences and similarities, nonnormality and other distributional features of the data, missing data, small studies, and so forth.
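A minimal sketch of a Bayesian normal-normal random-effects meta-analysis of the kind the chapter introduces, evaluating the joint posterior of the overall effect and the between-study spread on a grid; the effect sizes, standard errors, and flat priors below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical study-level effect sizes and their standard errors.
y  = np.array([0.30, 0.10, 0.45, 0.25, -0.05])
se = np.array([0.12, 0.20, 0.15, 0.10,  0.25])

mu_grid  = np.linspace(-0.5, 1.0, 301)     # overall effect
tau_grid = np.linspace(0.001, 1.0, 200)    # between-study SD
M, T = np.meshgrid(mu_grid, tau_grid, indexing="ij")

# With the study effects integrated out, y_i ~ Normal(mu, se_i^2 + tau^2);
# flat priors on mu and tau (an explicit, if crude, prior choice).
log_post = np.zeros_like(M)
for yi, si in zip(y, se):
    log_post += norm.logpdf(yi, loc=M, scale=np.sqrt(si**2 + T**2))

post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mean of mu :", (M * post).sum())
print("posterior mean of tau:", (T * post).sum())
```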
Mike West
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199695607
- eISBN:
- 9780191744167
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199695607.003.0008
- Subject:
- Mathematics, Probability / Statistics
This chapter focuses on some key models and ideas in Bayesian time series and forecasting, along with extracts from a few time series analysis and forecasting examples. It discusses specific modelling innovations that relate directly to the goals of addressing analysis of increasingly high-dimensional time series and nonlinear models. These include dynamic graphical and matrix models, dynamic matrix models for stochastic volatility, time-varying sparsity modelling, and nonlinear dynamical systems.
Francis X. Diebold and Glenn D. Rudebusch
- Published in print:
- 2013
- Published Online:
- October 2017
- ISBN:
- 9780691146805
- eISBN:
- 9781400845415
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691146805.003.0004
- Subject:
- Economics and Finance, History of Economic Thought
This chapter highlights aspects of the vibrant ongoing research program associated with the ideas developed in earlier chapters. It begins with a collage-style sketch of work involving Bayesian analysis, functional form for factor loadings, term structures of credit spreads, and nonlinearities. It then discusses in greater detail the incorporation of more than three yield factors. Next, it treats stochastic volatility in both dynamic Nelson–Siegel model (DNS) and arbitrage-free DNS (AFNS) environments, with some attention to the issue of unspanned stochastic volatility. Finally, it discusses the incorporation of macroeconomic fundamentals in their relation to bond yields. It also introduces aspects of modeling real versus nominal yields in DNS/AFNS environments, a theme treated in detail in Chapter 5.
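The dynamic Nelson–Siegel discussion rests on a specific functional form for the factor loadings; as a reference point (with invented parameter values, not taken from the book), the standard three-factor Nelson–Siegel yield curve can be written as:

```python
import numpy as np

def nelson_siegel_yield(tau, beta1, beta2, beta3, lam):
    """Standard Nelson-Siegel yield at maturity tau (level, slope, curvature factors)."""
    load_slope = (1 - np.exp(-lam * tau)) / (lam * tau)
    load_curv  = load_slope - np.exp(-lam * tau)
    return beta1 + beta2 * load_slope + beta3 * load_curv

maturities = np.array([0.25, 1, 2, 5, 10, 30])   # years (illustrative)
print(nelson_siegel_yield(maturities, beta1=4.0, beta2=-2.0, beta3=1.5, lam=0.6))
```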
HeeMin Kim
- Published in print:
- 2011
- Published Online:
- September 2011
- ISBN:
- 9780813129945
- eISBN:
- 9780813135748
- Item type:
- chapter
- Publisher:
- University Press of Kentucky
- DOI:
- 10.5810/kentucky/9780813129945.003.0006
- Subject:
- Political Science, International Relations and Politics
This chapter discusses the changing political situation on the Korean peninsula. It establishes that the situation is amenable to Bayesian game analysis and presents an analysis of Bayesian games played between North and South Korea, and North Korea and the United States. It also discusses the insights and substantive implications that these models provide, after presenting a solution to the equilibria of these games.
Edward P. Herbst and Frank Schorfheide
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691161082
- eISBN:
- 9781400873739
- Item type:
- book
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691161082.001.0001
- Subject:
- Economics and Finance, Econometrics
Dynamic stochastic general equilibrium (DSGE) models have become one of the workhorses of modern macroeconomics and are extensively used for academic research as well as forecasting and policy analysis at central banks. This book introduces readers to state-of-the-art computational techniques used in the Bayesian analysis of DSGE models. The book covers Markov chain Monte Carlo techniques for linearized DSGE models, novel sequential Monte Carlo methods that can be used for parameter inference, and the estimation of nonlinear DSGE models based on particle filter approximations of the likelihood function. The theoretical foundations of the algorithms are discussed in depth, and detailed empirical applications and numerical illustrations are provided. The book also gives invaluable advice on how to tailor these algorithms to specific applications and assess the accuracy and reliability of the computations. The book is essential reading for graduate students, academic researchers, and practitioners at policy institutions.
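A minimal bootstrap particle filter sketch of the kind of likelihood approximation the book covers, applied here to a toy linear Gaussian state space model rather than a DSGE model; all parameter values are invented:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy state space model: s_t = rho*s_{t-1} + e_t,  y_t = s_t + u_t.
rho, sig_e, sig_u, T_len, N = 0.9, 0.5, 0.3, 100, 1000

# Simulate data from the model.
s = np.zeros(T_len)
for t in range(1, T_len):
    s[t] = rho * s[t - 1] + sig_e * rng.normal()
y = s + sig_u * rng.normal(size=T_len)

# Bootstrap particle filter: propagate particles through the state equation,
# weight by the measurement density, resample, and accumulate the log-likelihood.
particles = rng.normal(0.0, sig_e / np.sqrt(1 - rho**2), size=N)  # stationary prior
loglik = 0.0
for t in range(T_len):
    particles = rho * particles + sig_e * rng.normal(size=N)
    w = norm.pdf(y[t], loc=particles, scale=sig_u)
    loglik += np.log(w.mean())          # contribution to p(y_t | y_{1:t-1})
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)   # multinomial resampling

print("particle filter log-likelihood:", loglik)
```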
Kevin A. Clarke and David M. Primo
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780195382198
- eISBN:
- 9780199932399
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195382198.003.0005
- Subject:
- Political Science, Comparative Politics
Empirical models are the subject of this chapter. We begin by defining what an empirical model is and its relationship to the data. We discuss how to understand an empirical model under the model-based account. We argue that empirical models can be useful in one or more of three ways: prediction, measurement, and characterization. We pay particular attention to theory testing as the most common use of empirical models and the use to which empirical models are least suited. We demonstrate that the combination of a hypothetico-deductive relationship between the theoretical model and the hypothesis to be tested and a hypothetico-deductive relationship between the hypothesis to be tested and the data prevents empirical model testing. This logic holds regardless of the statistical approach—falsificationist, verificationist, or Bayesian—taken. We then address the other uses of empirical modeling by presenting examples of empirical models drawn from the literature that eschew theory testing while remaining useful, and by most accounts, scientific.
John Earman
- Published in print:
- 2005
- Published Online:
- January 2012
- ISBN:
- 9780197263419
- eISBN:
- 9780191734175
- Item type:
- chapter
- Publisher:
- British Academy
- DOI:
- 10.5871/bacad/9780197263419.003.0005
- Subject:
- Philosophy, Logic/Philosophy of Mathematics
This chapter discusses the Bayesian analysis of miracles. It is set in the context of the eighteenth-century debate on miracles. The discussion is focused on the probable response of Thomas Bayes to David Hume's celebrated argument against miracles. The chapter presents the claim that the criticisms Richard Price made against Hume's argument against miracles were largely solid.
Paul Damien, Petros Dellaportas, Nicholas G. Polson, and David A. Stephens (eds)
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199695607
- eISBN:
- 9780191744167
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199695607.001.0001
- Subject:
- Mathematics, Probability / Statistics
The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This book travels on a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book honours the contributions of Sir Adrian F. M. Smith, one of the seminal Bayesian researchers, with his work on hierarchical models, sequential Monte Carlo, and Markov chain Monte Carlo and his mentoring of numerous graduate students.
J. Durbin and S.J. Koopman
- Published in print:
- 2012
- Published Online:
- December 2013
- ISBN:
- 9780199641178
- eISBN:
- 9780191774881
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199641178.003.0013
- Subject:
- Mathematics, Probability / Statistics
This chapter discusses the use of importance sampling for the estimation of parameters in Bayesian analysis for models of Part I and Part II. It first develops the analysis of the linear Gaussian state space model by constructing importance samples of additional parameters. It then shows how to combine these with Kalman filter and smoother outputs to obtain the estimates of state parameters required. A brief description is also given of the alternative simulation technique, Markov chain Monte Carlo methods.
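A minimal self-normalized importance sampling sketch, stripped of the book's state space machinery: draws from a Gaussian importance density are reweighted by target-to-proposal ratios to estimate a posterior mean; the target density and proposal below are invented:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy posterior for a parameter psi, known only up to a normalizing constant.
def log_target(psi):
    return norm.logpdf(psi, 0.0, 1.0) + np.log1p(np.exp(3.0 * psi)) - psi**4 / 20.0

# Gaussian importance density, e.g. centred near the posterior mode.
g_mean, g_sd = 0.8, 1.2
draws = rng.normal(g_mean, g_sd, size=50_000)

log_w = log_target(draws) - norm.logpdf(draws, g_mean, g_sd)
w = np.exp(log_w - log_w.max())
w /= w.sum()                                   # self-normalized weights

print("importance-sampling estimate of E[psi | y]:", np.sum(w * draws))
print("effective sample size:", 1.0 / np.sum(w**2))
```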
Colin Fox, Heikki Haario, and J. Andrés Christen
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199695607
- eISBN:
- 9780191744167
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199695607.003.0031
- Subject:
- Mathematics, Probability / Statistics
This chapter discusses the features that are characteristic for the problems most typically treated under the umbrella of inverse problems. It begins by listing representative examples of inverse problems followed by a discussion of the key mathematical property of ill-posedness. It then discusses deterministic and regularization methods, and presents some history of Bayesian analysis, as viewed from physics. Next, it provides the framework for current methodology and describes some of the recent advances in Markov chain Monte Carlo (MCMC) algorithms. The chapter concludes with a glimpse of future directions.
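A minimal random walk Metropolis sketch (not from the chapter) for a toy linear inverse problem: two parameters are recovered from noisy indirect observations, with a Gaussian prior supplying the regularization that ill-posedness demands; the forward map, noise level, and prior are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward map G(theta) = A theta and noisy data d = G(theta_true) + noise.
A = np.array([[1.0, 0.9],       # nearly collinear columns -> mildly ill-posed
              [1.0, 1.1]])
theta_true = np.array([2.0, -1.0])
sigma = 0.1
d = A @ theta_true + sigma * rng.normal(size=2)

def log_posterior(theta):
    misfit = d - A @ theta
    log_like = -0.5 * np.sum(misfit**2) / sigma**2
    log_prior = -0.5 * np.sum(theta**2) / 10.0**2    # wide Gaussian prior as regularization
    return log_like + log_prior

# Random walk Metropolis.
theta = np.zeros(2)
lp = log_posterior(theta)
samples = []
for _ in range(20_000):
    prop = theta + 0.1 * rng.normal(size=2)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

samples = np.array(samples[5_000:])                  # discard burn-in
print("posterior mean:", samples.mean(axis=0))
```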
Peter Pirolli
- Published in print:
- 2009
- Published Online:
- March 2012
- ISBN:
- 9780195374827
- eISBN:
- 9780199847693
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195374827.003.0022
- Subject:
- Psychology, Cognitive Psychology
This chapter summarizes information foraging research on the use of the World Wide Web (the Web). The particular focus is placed on a psychological theory of information scent that is embedded in a broader model of information foraging on the Web. It specifically presents a theoretical account of information scent that supports the development of models of navigation choice. Next, it reports some aspects of the Web environment that will be relevant to developing a model of how people assess information scent cues to navigate. Brunswik's lens model view is then used to frame the development of a stochastic model of individual utility judgment and choice that derives from a Bayesian analysis of the environment. Finally, it evaluates a model of Web foraging that addresses data collected from Web users working on tasks in a study that attempted to follow Brunswik's tenets of representative design.
John Goldsmith
- Published in print:
- 2009
- Published Online:
- September 2009
- ISBN:
- 9780199547548
- eISBN:
- 9780191720628
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199547548.003.0007
- Subject:
- Linguistics, Psycholinguistics / Neurolinguistics / Cognitive Linguistics, Computational Linguistics
A range of approaches to word structure assume segmentation of words into morphs. This chapter proposes an explicit algorithm that takes natural language text as its input and produces the morphological structure of the text as its output. Within this model, formal notions that correspond naturally to the traditional notion of analogy are useful and important as part of a bootstrapping heuristic for the discovery of morphological structure. At the same time, it is necessary to develop a refined quantitative model in order to find the kind of articulated linguistic structures that are to be found in natural languages.
- Published in print:
- 2009
- Published Online:
- March 2013
- ISBN:
- 9780226498065
- eISBN:
- 9780226498089
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226498089.003.0009
- Subject:
- History, History of Science, Technology, and Medicine
This chapter, which discusses the controversy among statisticians in the United States over the analysis of DNA database “cold hits,” considers the defense use of Bayesian analysis to unpack and neutralize the impressive random match probabilities presented by the prosecution's experts. It suggests that Bayesian treatments can potentially open up a creative repackaging of evidence that disrupts settled legal and vernacular distinctions between expert and commonsense knowledge.
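A minimal Bayesian odds calculation of the kind such arguments turn on: the random match probability enters as a likelihood ratio and is combined with prior odds; every number below is invented for illustration:

```python
# Posterior odds = prior odds * likelihood ratio (Bayes' rule in odds form).
random_match_prob = 1e-9        # P(match | not the source)  -- illustrative
prior_prob = 1.0 / 1_000_000    # prior P(source) before the database search -- illustrative

# Assuming P(match | source) = 1, the likelihood ratio is 1 / random match probability.
likelihood_ratio = 1.0 / random_match_prob
prior_odds = prior_prob / (1.0 - prior_prob)
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1.0 + posterior_odds)

print(f"posterior probability of being the source: {posterior_prob:.4f}")
```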
Sungbae An and Heedon Kang
- Published in print:
- 2011
- Published Online:
- February 2013
- ISBN:
- 9780226386898
- eISBN:
- 9780226386904
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226386904.003.0010
- Subject:
- Economics and Finance, South and East Asia
This chapter describes the oil shocks using a dynamic stochastic general equilibrium (DSGE) model for the Korean economy. The Korean economy depends entirely on imports for its acquisition of crude oil, and households, entrepreneurs, and policymakers are interested in knowing to what extent the rise in oil prices affects the economy. Within a Bayesian estimation framework including DSGE-vector autoregressions (VARs), the empirical analysis is based on Korean aggregate data. Using Bayesian analysis, the model is used to check the importance of each channel that transmits an oil price shock to the economy. It is found that the model economy produces reasonable posterior estimates of the structural parameters and works relatively well compared to impulse responses from the VAR with optimal prior weight from the DSGE model. A more elaborate model of government behavior is anticipated to investigate the pass-through of oil price shocks.
Therese M. Donovan and Ruth M. Mickey
- Published in print:
- 2019
- Published Online:
- July 2019
- ISBN:
- 9780198841296
- eISBN:
- 9780191876820
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198841296.003.0007
- Subject:
- Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies
Chapter 7 discusses the “Portrait Problem,” which concerns the dispute about whether a portrait frequently associated with Thomas Bayes (and used, in fact, as the cover of this book!) is actually a picture of him. In doing so, the chapter highlights the fact that multiple pieces of information can be used in a Bayesian analysis. A key concept in this chapter is that multiple sources of data can be combined in a Bayesian inference framework. The main take-home point is that Bayesian analysis can be very, very flexible. A Bayesian analysis is possible as long as the likelihood of observing the data under each hypothesis can be computed. The chapter also discusses the concepts of joint likelihood and independence.
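A minimal two-hypothesis sketch of the take-home point that independent pieces of evidence multiply in the joint likelihood; the hypotheses mirror the chapter's question (portrait of Bayes or not), but the prior and likelihood values are invented:

```python
import numpy as np

# H1: the portrait depicts Bayes; H2: it depicts someone else.
prior = np.array([0.5, 0.5])

# Likelihood of each (assumed independent) piece of evidence under each hypothesis,
# e.g. style of dress, provenance of the print, resemblance (all values invented).
likelihoods = np.array([
    [0.2, 0.6],    # evidence 1: P(e1 | H1), P(e1 | H2)
    [0.7, 0.3],    # evidence 2
    [0.8, 0.5],    # evidence 3
])

joint_likelihood = likelihoods.prod(axis=0)    # independence -> multiply
posterior = prior * joint_likelihood
posterior /= posterior.sum()

print("P(portrait is Bayes | all evidence) =", posterior[0])
```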
Mark Steyvers and Thomas L. Griffiths
- Published in print:
- 2008
- Published Online:
- March 2012
- ISBN:
- 9780199216093
- eISBN:
- 9780191695971
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199216093.003.0015
- Subject:
- Psychology, Cognitive Psychology
This chapter provides a complementary Bayesian analysis of the problem of memory retrieval. A Bayesian model able to classify words into semantically coherent groups, merely from observing their co-occurrence patterns in texts, is used as the basis for understanding not only how some linguistic categories might be created, but also how relevant information can be retrieved using probabilistic principles. This work can be viewed as a natural follow-on from Anderson and colleagues' pioneering rational analyses of memory (Anderson & Milson, 1989; Anderson & Schooler, 1991). This chapter uses innovations in information retrieval as a way to explore the connections between research on human memory and information retrieval systems. It also provides an example of how cognitive research can help information retrieval research by formalizing theories of knowledge and memory organization that have been proposed by cognitive psychologists.
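A minimal probabilistic-retrieval sketch, much simpler than the chapter's model of semantic groups: documents are ranked by P(document | query), proportional to P(query | document) P(document) under smoothed unigram word distributions; the tiny corpus and query are invented:

```python
import numpy as np
from collections import Counter

# Tiny invented corpus.
docs = {
    "doc1": "bank river water fishing bank".split(),
    "doc2": "bank money loan interest account".split(),
    "doc3": "memory retrieval cue recall".split(),
}
vocab = sorted({w for words in docs.values() for w in words})

def log_p_query_given_doc(query, words, alpha=1.0):
    """Smoothed unigram likelihood of the query under the document's word distribution."""
    counts = Counter(words)
    total = len(words) + alpha * len(vocab)
    return sum(np.log((counts[w] + alpha) / total) for w in query)

query = ["bank", "loan"]
log_prior = np.log(1.0 / len(docs))    # uniform prior over documents
scores = {d: log_prior + log_p_query_given_doc(query, words) for d, words in docs.items()}
print("retrieved:", max(scores, key=scores.get))
```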
Bruce Walsh and Michael Lynch
- Published in print:
- 2018
- Published Online:
- September 2018
- ISBN:
- 9780198830870
- eISBN:
- 9780191868986
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198830870.001.0001
- Subject:
- Biology, Evolutionary Biology / Genetics, Biochemistry / Molecular Biology
Quantitative traits—be they morphological or physiological characters, aspects of behavior, or genome-level features such as the amount of RNA or protein expression for a specific gene—usually show considerable variation within and among populations. Quantitative genetics, also referred to as the genetics of complex traits, is the study of such characters and is based on mathematical models of evolution in which many genes influence the trait and in which non-genetic factors may also be important. Evolution and Selection of Quantitative Traits presents a holistic treatment of the subject, showing the interplay between theory and data with extensive discussions on statistical issues relating to the estimation of the biologically relevant parameters for these models. Quantitative genetics is viewed as the bridge between complex mathematical models of trait evolution and real-world data, and the authors have clearly framed their treatment as such. This is the second volume in a planned trilogy that summarizes the modern field of quantitative genetics, informed by empirical observations from wide-ranging fields (agriculture, evolution, ecology, and human biology) as well as population genetics, statistical theory, mathematical modeling, genetics, and genomics. Whilst volume 1 (1998) dealt with the genetics of such traits, the main focus of volume 2 is on their evolution, with a special emphasis on detecting selection (ranging from the use of genomic and historical data through to ecological field data) and examining its consequences. This extensive work of reference is suitable for graduate level students as well as professional researchers (both empiricists and theoreticians) in the fields of evolutionary biology, genetics, and genomics. It will also be of particular relevance and use to plant and animal breeders, human geneticists, and statisticians.