Lars Peter Hansen and Thomas J. Sargent
- Published in print:
- 2013
- Published Online:
- October 2017
- ISBN:
- 9780691042770
- eISBN:
- 9781400848188
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691042770.003.0001
- Subject:
- Economics and Finance, History of Economic Thought
This chapter sets out the book's focus, namely constructing and applying competitive equilibria for a class of linear-quadratic-Gaussian dynamic economies with complete markets. Here, an economy will consist of a list of matrices that describe people's household technologies, their preferences over consumption services, their production technologies, and their information sets. Competitive equilibrium allocations and prices satisfy equations that are easy to write down and solve, and the equilibrium outcomes have representations that are convenient to estimate econometrically. The chapter then discusses the construction of a class of economies and the computer programs used, and closes with an overview of the subsequent chapters.
J. Durbin and S.J. Koopman
- Published in print:
- 2012
- Published Online:
- December 2013
- ISBN:
- 9780199641178
- eISBN:
- 9780191774881
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199641178.003.0011
- Subject:
- Mathematics, Probability / Statistics
This chapter develops the methodology of importance sampling based on simulation for the analysis of observations from the non-Gaussian and nonlinear models that were specified in Chapter 9. It shows that importance sampling methods can be adopted for estimating functions of the state vector and the error variance matrices of the resulting estimates. It develops estimates of conditional densities, distribution functions, and quantiles of interest. Of key importance is the method of estimating unknown parameters by maximum likelihood. The methods are based on standard ideas in simulation methodology and, in particular, importance sampling.
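The self-normalized importance sampling estimator underlying these methods can be sketched as follows. This is a generic illustration with a hypothetical Gaussian target and proposal, not the authors' state space implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_estimate(log_target, log_proposal, draw, f, n=100_000):
    """Estimate E_target[f(x)] by self-normalized importance sampling:
    draw x_i from the proposal, weight by w_i proportional to
    target(x_i) / proposal(x_i), then average f against the weights."""
    x = draw(n)
    logw = log_target(x) - log_proposal(x)
    w = np.exp(logw - logw.max())    # subtract the max for numerical stability
    w /= w.sum()                     # self-normalize, so constants cancel
    return float(np.sum(w * f(x)))

# toy check: the mean of N(1, 1) estimated through an N(0, 2) proposal
log_target = lambda x: -0.5 * (x - 1.0) ** 2     # N(1, 1), up to a constant
log_proposal = lambda x: -0.25 * x ** 2          # N(0, 2), up to a constant
draw = lambda n: rng.normal(0.0, np.sqrt(2.0), n)
est = importance_estimate(log_target, log_proposal, draw, lambda x: x)
```

Working with log densities up to additive constants, as here, is what makes the self-normalized form practical: only density ratios ever enter the weights.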
J. Durbin and S.J. Koopman
- Published in print:
- 2012
- Published Online:
- December 2013
- ISBN:
- 9780199641178
- eISBN:
- 9780191774881
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199641178.003.0004
- Subject:
- Mathematics, Probability / Statistics
This chapter begins with a set of four lemmas from elementary multivariate regression that together provide the essentials of the theory for the general linear state space model from both a classical and a Bayesian standpoint. The four lemmas lead to derivations of the Kalman filter and smoothing recursions for the estimation of the state vector and its conditional variance matrix given the data. The chapter also derives recursions for estimating the observation and state disturbances, and derives the simulation smoother, which is an important tool in the simulation methods employed later in the book. It shows that allowance for missing observations and forecasting are easily dealt with in the state space framework.
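As a concrete scalar instance of these recursions, a minimal Kalman filter for the local level model might look like this. It is an illustrative sketch under an approximately diffuse initialization, not the book's general matrix formulation:

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2, a1=0.0, p1=1e7):
    """Kalman filter for the local level model
        y_t = alpha_t + eps_t,   alpha_{t+1} = alpha_t + eta_t,
    returning filtered means and variances of alpha_t given y_1..y_t."""
    n = len(y)
    af = np.empty(n)
    pf = np.empty(n)
    a, p = a1, p1                          # large p1 approximates a diffuse prior
    for t in range(n):
        f = p + sigma_eps2                 # innovation (prediction error) variance
        k = p / f                          # Kalman gain
        af[t] = a + k * (y[t] - a)         # filtered mean
        pf[t] = p * (1 - k)                # filtered variance
        a, p = af[t], pf[t] + sigma_eta2   # one-step-ahead prediction
    return af, pf

af, pf = local_level_filter([1.0, 1.0, 1.0], sigma_eps2=1.0, sigma_eta2=0.5)
```

With the diffuse start the first filtered mean essentially equals the first observation, and the filtered variance settles quickly toward its steady state.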
J. Durbin and S.J. Koopman
- Published in print:
- 2012
- Published Online:
- December 2013
- ISBN:
- 9780199641178
- eISBN:
- 9780191774881
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199641178.003.0001
- Subject:
- Mathematics, Probability / Statistics
This introductory chapter provides an overview of the main themes covered in the present book, namely linear Gaussian state space models and non-Gaussian and nonlinear state space models. It also describes the notation used and points to other books on state space methods.
J. Durbin and S.J. Koopman
- Published in print:
- 2012
- Published Online:
- December 2013
- ISBN:
- 9780199641178
- eISBN:
- 9780191774881
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199641178.003.0013
- Subject:
- Mathematics, Probability / Statistics
This chapter discusses the use of importance sampling for the estimation of parameters in Bayesian analysis for the models of Parts I and II. It first develops the analysis of the linear Gaussian state space model by constructing importance samples of additional parameters. It then shows how to combine these with Kalman filter and smoother outputs to obtain the required estimates of the state parameters. A brief description is also given of the alternative simulation technique, Markov chain Monte Carlo methods.
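The idea of importance sampling a parameter's posterior can be illustrated on a deliberately simple, hypothetical problem: estimating the posterior mean of a Gaussian location parameter using a heavier-tailed proposal. This toy stands in for, and is much simpler than, the state space setting of the chapter:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical problem: y_i ~ N(mu, 1) with a flat prior on mu, so the
# posterior is proportional to the likelihood. Draw mu candidates from a
# t(4) proposal centred at a pilot guess, then reweight.
y = rng.normal(2.0, 1.0, 50)
theta = 2.0 + rng.standard_t(4, 50_000)                   # proposal draws

# log-likelihood of each candidate, vectorized over draws
log_like = -0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)
# t(4) log density at each draw, up to an additive constant
log_prop = -2.5 * np.log1p((theta - 2.0) ** 2 / 4.0)

logw = log_like - log_prop                                # flat prior cancels
w = np.exp(logw - logw.max())
w /= w.sum()                                              # self-normalize
post_mean = float(np.sum(w * theta))                      # approx. mean of y
```

With a flat prior the exact posterior mean is the sample mean of `y`, which gives a direct check on the importance sampling output.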
Neil Shephard
- Published in print:
- 2015
- Published Online:
- January 2016
- ISBN:
- 9780199683666
- eISBN:
- 9780191763298
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199683666.003.0010
- Subject:
- Economics and Finance, Econometrics
This chapter generalizes the familiar linear Gaussian unobserved component models or structural time series models to martingale unobserved component models. This generates forecasts whose rate of discounting of data is time-varying or local. The chapter shows how to handle such models using an auxiliary particle filter that deploys M competing Kalman filters run in parallel. Here, one thinks of M as being 1000 or more. The model is applied to inflation forecasting. The martingale unobserved component model generalizes in several ways, to allow for trends, cycles, and seasonal components, and the methods developed in this chapter can be extended in the same way.
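A stripped-down sketch of the "M competing Kalman filters" mechanism: attach a candidate state variance to each of M scalar Kalman filters, reweight filters by their one-step predictive density, and resample. This is an illustrative simplification, not Shephard's auxiliary particle filter; the local level form and the variance grid are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

def competing_kalman_filters(y, sigma_eps2, q_grid, M=1000):
    """Run M scalar local level Kalman filters in parallel, each with a
    state variance drawn from q_grid; reweight each filter by the
    predictive likelihood of y_t and resample whole filters."""
    q = rng.choice(q_grid, M)                 # one candidate variance per filter
    a = np.zeros(M)                           # per-filter state means
    p = np.full(M, 1e7)                       # diffuse start for every filter
    for yt in y:
        f = p + sigma_eps2                    # per-filter predictive variance
        logw = -0.5 * (np.log(f) + (yt - a) ** 2 / f)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        k = p / f                             # per-filter Kalman update
        a = a + k * (yt - a)
        p = p * (1 - k) + q
        idx = rng.choice(M, M, p=w)           # resample filters by weight
        a, p, q = a[idx], p[idx], q[idx]
    return float(np.mean(a))

r = competing_kalman_filters([3.0] * 10, sigma_eps2=1.0,
                             q_grid=np.array([0.01, 0.1, 1.0]))
```

Because each particle carries a full Kalman filter rather than a raw state draw, the state is integrated out exactly conditional on the candidate variance, which is the Rao-Blackwellization that makes this scheme efficient.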
Hedibert F Lopes and Carlos M Carvalho
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199695607
- eISBN:
- 9780191744167
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199695607.003.0011
- Subject:
- Mathematics, Probability / Statistics
This chapter provides a step-by-step review of Monte Carlo (MC) methods for filtering in general nonlinear and non-Gaussian dynamic models, also known as state-space models or hidden Markov models. The chapter is organized as follows. Section 11.2 introduces the basic notation, results, and references for the general class of Gaussian dynamic linear models (DLM), the AR(1) plus noise model, and the standard stochastic volatility model with AR(1) dynamics. Sections 11.3 and 11.4 discuss particle filters for state learning with fixed parameters (also known as pure filtering) and particle filters for state and parameter learning, respectively. Section 11.5 deals with general issues, such as MC error, sequential model checking, particle smoothing, and the interaction between particle filters and Markov chain Monte Carlo (MCMC) schemes.
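A bootstrap particle filter for the AR(1) plus noise model mentioned above can be sketched in a few lines. This is an illustrative implementation, and the parameter values in the usage line are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

def bootstrap_filter(y, phi, sigma_x, sigma_y, n_part=2000):
    """Bootstrap particle filter for the AR(1) plus noise model
        x_t = phi * x_{t-1} + sigma_x * e_t,   y_t = x_t + sigma_y * u_t,
    returning the filtered means E[x_t | y_1..y_t]."""
    # initialize particles from the stationary distribution of x
    x = rng.normal(0.0, sigma_x / np.sqrt(1.0 - phi ** 2), n_part)
    means = np.empty(len(y))
    for t, yt in enumerate(y):
        x = phi * x + sigma_x * rng.normal(size=n_part)   # propagate from prior
        logw = -0.5 * ((yt - x) / sigma_y) ** 2           # Gaussian measurement
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * x)                          # weighted filtered mean
        x = x[rng.choice(n_part, n_part, p=w)]            # multinomial resampling
    return means

means = bootstrap_filter([1.0] * 20, phi=0.9, sigma_x=0.1, sigma_y=0.1)
```

Resampling at every step, as done here, is the plain bootstrap variant; adaptive resampling based on the effective sample size is a common refinement.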
Edward P. Herbst and Frank Schorfheide
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691161082
- eISBN:
- 9781400873739
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691161082.003.0003
- Subject:
- Economics and Finance, Econometrics
This chapter provides a self-contained review of Bayesian inference and decision making. It begins with a discussion of Bayesian inference for a simple autoregressive (AR) model, which takes the form of a Gaussian linear regression. For this model, the posterior distribution can be characterized analytically and closed-form expressions for its moments are readily available. The chapter also examines how to turn posterior distributions into point estimates, interval estimates, and forecasts, and how to solve general decision problems. The chapter shows how in a Bayesian setting, the calculus of probability is used to characterize and update an individual's state of knowledge or degree of beliefs with respect to quantities such as model parameters or future observations.
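For the simplest version of this setting — known error variance and a conjugate Gaussian prior on the autoregressive coefficient — the closed-form posterior can be sketched as follows. This is a minimal illustration of the conjugate update, not the chapter's full treatment:

```python
import numpy as np

rng = np.random.default_rng(5)

def ar1_posterior(y, sigma2=1.0, prior_mean=0.0, prior_var=1.0):
    """Closed-form posterior for phi in the Gaussian AR(1) regression
        y_t = phi * y_{t-1} + eps_t,   eps_t ~ N(0, sigma2),
    under a conjugate N(prior_mean, prior_var) prior for phi."""
    x, z = y[:-1], y[1:]                            # regressor and response
    post_prec = 1.0 / prior_var + x @ x / sigma2    # precisions add
    post_var = 1.0 / post_prec
    post_mean = post_var * (prior_mean / prior_var + x @ z / sigma2)
    return post_mean, post_var

# simulate an AR(1) path with phi = 0.8 and recover it from the posterior
y = np.empty(3000)
y[0] = 0.0
for t in range(1, 3000):
    y[t] = 0.8 * y[t - 1] + rng.normal()
pm, pv = ar1_posterior(y)
```

The posterior mean is a precision-weighted average of the prior mean and the least-squares estimate, so with this much data it sits very close to the true coefficient.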
J. Durbin and S.J. Koopman
- Published in print:
- 2012
- Published Online:
- December 2013
- ISBN:
- 9780199641178
- eISBN:
- 9780191774881
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199641178.003.0007
- Subject:
- Mathematics, Probability / Statistics
This chapter discusses maximum likelihood estimation of parameters both for the case where the distribution of the initial state vector is known and for the case where at least some elements of the vector are diffuse or are treated as fixed and unknown. For the linear Gaussian model it shows that the likelihood can be calculated by a routine application of the Kalman filter, even when the initial state vector is fully or partially diffuse. It details the computation of the likelihood when the univariate treatment of multivariate observations is adopted. It considers how the loglikelihood can be maximised by means of iterative numerical procedures. An important part in this process is played by the score vector. The chapter shows how this is calculated, both for the case where the initial state vector has a known distribution and for the diffuse case. A useful device for maximisation of the loglikelihood in some cases, particularly in the early stages of maximisation, is the EM algorithm; details are provided for the linear Gaussian model. The chapter also considers biases in estimates due to errors in parameter estimation and ends with a discussion of some questions of goodness-of-fit and diagnostic checks.
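The prediction error decomposition that makes this "routine application of the Kalman filter" work can be sketched for the scalar local level model as follows, with a crude grid search standing in for proper iterative numerical maximisation. The simulated data and grid values are hypothetical:

```python
import numpy as np

def local_level_loglik(y, sigma_eps2, sigma_eta2, a1=0.0, p1=1e7):
    """Gaussian loglikelihood of the local level model via the prediction
    error decomposition from a single Kalman filter pass. The first term,
    dominated by the (approximately) diffuse initialization, is skipped."""
    a, p, ll = a1, p1, 0.0
    for t, yt in enumerate(y):
        f = p + sigma_eps2                 # innovation variance
        v = yt - a                         # one-step prediction error
        if t > 0:                          # drop the diffuse initial term
            ll -= 0.5 * (np.log(2.0 * np.pi * f) + v * v / f)
        k = p / f
        a += k * v
        p = p * (1 - k) + sigma_eta2
    return ll

rng = np.random.default_rng(2)
alpha = np.cumsum(rng.normal(0.0, 0.7, 300))   # random walk state
y = alpha + rng.normal(0.0, 1.0, 300)          # noisy observations

# crude grid search over the state variance (illustration only)
grid = [0.1, 0.25, 0.5, 1.0, 2.0]
best = max(grid, key=lambda q: local_level_loglik(y, 1.0, q))
```

In practice the grid search would be replaced by a quasi-Newton optimizer using the score vector, as the chapter describes, but the likelihood evaluation itself is exactly this single filter pass.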
E. Cosme
- Published in print:
- 2014
- Published Online:
- March 2015
- ISBN:
- 9780198723844
- eISBN:
- 9780191791185
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198723844.003.0004
- Subject:
- Physics, Geophysics, Atmospheric and Environmental Physics
This chapter describes the use of smoothers in data assimilation. The filtering problem in data assimilation consists in estimating the state of a system based on past and present observations. In contrast to filters, smoothers implement Bayesian data assimilation using future observations. Smoothing problems can be posed in different ways. The main formulations in geophysics are fixed-point, fixed-interval, and fixed-lag smoothers. In this chapter, these problems are first introduced in a Bayesian framework, and the most straightforward Bayesian solutions are formulated. Common linear, Gaussian implementations, many of which are based on the classical Kalman filter, are then derived, followed by their ensemble counterparts, based on the usual ensemble Kalman filter techniques. Finally, the pros and cons, as well as the computational complexities, of all the schemes are discussed.
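A fixed-interval smoother for the scalar local level case can be sketched as a forward Kalman pass followed by a backward Rauch-Tung-Striebel pass. This is a minimal linear Gaussian illustration, not an ensemble implementation:

```python
import numpy as np

def local_level_smoother(y, sigma_eps2, sigma_eta2, a1=0.0, p1=1e7):
    """Fixed-interval smoothing for the local level model: a forward
    Kalman filter pass, then a backward pass that conditions every
    state estimate on the full data record."""
    n = len(y)
    af = np.empty(n)                        # filtered means
    pf = np.empty(n)                        # filtered variances
    a, p = a1, p1
    for t in range(n):
        f = p + sigma_eps2
        k = p / f
        af[t] = a + k * (y[t] - a)
        pf[t] = p * (1 - k)
        a, p = af[t], pf[t] + sigma_eta2
    # backward Rauch-Tung-Striebel recursion (transition matrix is 1,
    # so af[t] is also the one-step prediction of the state at t + 1)
    a_s = af.copy()
    for t in range(n - 2, -1, -1):
        g = pf[t] / (pf[t] + sigma_eta2)    # smoother gain
        a_s[t] = af[t] + g * (a_s[t + 1] - af[t])
    return a_s

sm = local_level_smoother([2.0, 2.0, 2.0, 2.0], sigma_eps2=1.0, sigma_eta2=0.5)
```

The backward pass reuses only stored filter output, which is why fixed-interval smoothing costs little more than filtering itself.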