Ludwig Fahrmeir and Thomas Kneib
- Published in print: 2011
- Published Online: September 2011
- ISBN: 9780199533022
- eISBN: 9780191728501
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199533022.003.0002
- Subject: Mathematics, Probability / Statistics, Biostatistics
This chapter reviews basic concepts for smoothing and semiparametric regression based on roughness penalties or — from a Bayesian perspective — corresponding smoothness priors. In particular, it introduces several tools for statistical modelling and inference that will be utilized in later chapters. It also highlights the close relation between frequentist penalized likelihood approaches and Bayesian inference based on smoothness priors. The chapter is organized as follows. Section 2.1 considers the classical smoothing problem for time series of Gaussian and non-Gaussian observations. Section 2.2 introduces penalized splines and their Bayesian counterpart as a computationally and conceptually attractive alternative to random-walk priors. Section 2.3 extends the univariate smoothing approaches to additive and generalized additive models.
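As a rough illustration of the roughness-penalty idea summarized above (not code from the chapter), the sketch below smooths a Gaussian time series with a second-order difference penalty, the frequentist counterpart of a second-order random-walk smoothness prior. The smoothing parameter `lam` and the simulated data are assumptions chosen for illustration only.

```python
# A minimal sketch, assuming a Gaussian response and a hand-picked smoothing parameter.
import numpy as np

def penalized_smoother(y, lam=50.0):
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)              # second-order difference matrix
    K = D.T @ D                                      # roughness penalty / prior precision (up to a scale)
    return np.linalg.solve(np.eye(n) + lam * K, y)   # f_hat = (I + lam * K)^{-1} y

# Usage: smooth a noisy sinusoid
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(200)
f_hat = penalized_smoother(y, lam=50.0)
```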
J. Durbin and S.J. Koopman
- Published in print: 2012
- Published Online: December 2013
- ISBN: 9780199641178
- eISBN: 9780191774881
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199641178.003.0010
- Subject: Mathematics, Probability / Statistics
This chapter discusses approximate filtering and smoothing methods for the analysis of non-Gaussian and nonlinear models. The chapter is organized as follows. Sections 10.2 and 10.3 consider two approximate filters, the extended Kalman filter and the unscented Kalman filter, respectively. Section 10.4 considers nonlinear smoothing and shows how approximate smoothing recursions can be derived for the two approximate filters. Section 10.5 argues that approximate solutions for filtering and smoothing can also be obtained when the data are transformed in an appropriate way. Sections 10.6 and 10.7 discuss methods for computing the mode estimate of the state and signal vectors. Different treatments for models with heavy-tailed errors are collected and presented in Section 10.8.
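For orientation, the following is a minimal sketch of one extended Kalman filter step for a univariate nonlinear model x_t = f(x_{t-1}) + eta_t, y_t = h(x_t) + eps_t. The functions, Jacobians, and noise variances below are illustrative assumptions, not the chapter's specifications.

```python
# A minimal sketch of one EKF predict/update step (univariate, illustrative model).
import numpy as np

def ekf_step(a, P, y, f, h, df, dh, Q, H):
    # Prediction: propagate the mean through f and the variance through its linearization.
    a_pred = f(a)
    T = df(a)
    P_pred = T * P * T + Q
    # Update: linearize the observation equation around the predicted state.
    Z = dh(a_pred)
    F = Z * P_pred * Z + H                 # innovation variance
    K = P_pred * Z / F                     # Kalman gain
    v = y - h(a_pred)                      # innovation
    return a_pred + K * v, (1.0 - K * Z) * P_pred

# Usage with an illustrative cubic observation function
f  = lambda x: 0.9 * x
df = lambda x: 0.9
h  = lambda x: x ** 3
dh = lambda x: 3 * x ** 2
a, P = 0.0, 1.0
a, P = ekf_step(a, P, y=0.5, f=f, h=h, df=df, dh=dh, Q=0.1, H=0.2)
```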
J. Durbin and S.J. Koopman
- Published in print: 2012
- Published Online: December 2013
- ISBN: 9780199641178
- eISBN: 9780191774881
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199641178.003.0011
- Subject: Mathematics, Probability / Statistics
This chapter develops the methodology of importance sampling based on simulation for the analysis of observations from the non-Gaussian and nonlinear models that were specified in Chapter 9. It shows that importance sampling methods can be adopted for estimating functions of the state vector and the error variance matrices of the resulting estimates. It develops estimates of conditional densities, distribution functions, and quantiles of interest. Of key importance is the method of estimating unknown parameters by maximum likelihood. The methods are based on standard ideas in simulation methodology and, in particular, importance sampling.
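The sketch below illustrates the basic importance-sampling estimate of a posterior mean for a single state, using a Student-t observation density with a Gaussian prior and a Gaussian importance density. The distributions and their parameters are illustrative assumptions, not the book's specific model or algorithm.

```python
# A minimal importance-sampling sketch for E[x | y] under assumed densities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = 2.0                                          # a single observation (illustrative)

# Importance density g(x): Gaussian centred near the observation.
mu_g, sig_g = y, 1.5
x = rng.normal(mu_g, sig_g, size=20_000)

# Unnormalized log-weights: log p(y | x) + log p(x) - log g(x).
log_w = (stats.t.logpdf(y, df=3, loc=x)          # Student-t observation density
         + stats.norm.logpdf(x, 0.0, 1.0)        # Gaussian prior on the state
         - stats.norm.logpdf(x, mu_g, sig_g))    # importance density
w = np.exp(log_w - log_w.max())
w /= w.sum()

posterior_mean = np.sum(w * x)                   # importance-sampling estimate of E[x | y]
ess = 1.0 / np.sum(w ** 2)                       # effective sample size diagnostic
```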
J. Durbin and S.J. Koopman
- Published in print: 2012
- Published Online: December 2013
- ISBN: 9780199641178
- eISBN: 9780191774881
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199641178.003.0001
- Subject: Mathematics, Probability / Statistics
This introductory chapter provides an overview of the main themes covered in the present book, namely linear Gaussian state space models and non-Gaussian and nonlinear state space models. It also describes the notation used and surveys other books on state space methods.
J. Durbin and S.J. Koopman
- Published in print: 2012
- Published Online: December 2013
- ISBN: 9780199641178
- eISBN: 9780191774881
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199641178.003.0012
- Subject: Mathematics, Probability / Statistics
This chapter discusses the filtering of non-Gaussian and nonlinear series by fixing the sample at the values previously obtained at times …, t − 2, t − 1 and choosing a fresh value at time t only. A new recursion over time is then required for the resulting simulation. The method is called particle filtering. The chapter is organized as follows. Section 12.2 considers filtering by the method of Chapter 11. Section 12.3 discusses resampling techniques designed to reduce degeneracy in sampling. Section 12.4 describes six methods of particle filtering, namely bootstrap filtering, auxiliary particle filtering, the extended particle filter, the unscented particle filter, the local regression filter, and the mode equalisation filter.
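As a concrete illustration of the bootstrap variant mentioned above (not the chapter's code), the sketch below runs a bootstrap particle filter with multinomial resampling on a linear Gaussian toy model, chosen only so the example is self-contained; the transition, observation density, and particle count are assumptions.

```python
# A minimal bootstrap particle filter sketch on an assumed AR(1)-plus-noise model.
import numpy as np

def bootstrap_filter(y, n_particles=1000, phi=0.9, sig_eta=1.0, sig_eps=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize particles from the stationary distribution of the AR(1) state.
    x = rng.normal(0.0, sig_eta / np.sqrt(1 - phi ** 2), n_particles)
    means = []
    for yt in y:
        # Propagate each particle through the state transition (the proposal).
        x = phi * x + sig_eta * rng.standard_normal(n_particles)
        # Weight by the observation density p(y_t | x_t).
        log_w = -0.5 * ((yt - x) / sig_eps) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means.append(np.sum(w * x))                      # filtered mean E[x_t | y_1:t]
        # Multinomial resampling to curb weight degeneracy.
        x = rng.choice(x, size=n_particles, replace=True, p=w)
    return np.array(means)

# Usage on data simulated from the same toy model
rng = np.random.default_rng(2)
T, phi = 100, 0.9
x_sim = np.zeros(T)
for t in range(1, T):
    x_sim[t] = phi * x_sim[t - 1] + rng.standard_normal()
y_sim = x_sim + rng.standard_normal(T)
filtered = bootstrap_filter(y_sim)
```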
J. Durbin and S.J. Koopman
- Published in print: 2012
- Published Online: December 2013
- ISBN: 9780199641178
- eISBN: 9780191774881
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199641178.003.0014
- Subject: Mathematics, Probability / Statistics
This chapter discusses examples which illustrate the methods that were developed in Part II for analysing observations using non-Gaussian and nonlinear state space models. These include the monthly number of van drivers killed in road accidents in Great Britain modelled by a Poisson distribution; the usefulness of the t-distribution for modelling observation errors in a gas consumption series containing outliers; the volatility of exchange rate returns; and fitting a binary model to the results of the annual boat race between teams of the universities of Oxford and Cambridge.
Geert Mesters and Siem Jan Koopman
- Published in print: 2015
- Published Online: January 2016
- ISBN: 9780199683666
- eISBN: 9780191763298
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199683666.003.0007
- Subject: Economics and Finance, Econometrics
This chapter looks at the forecasting of the yearly outcome of the Boat Race between Cambridge and Oxford. The relative performance of different dynamic models over forty years of forecasting is compared. Each model is defined by a binary density conditional on a latent signal that is specified as a dynamic stochastic process with fixed predictors. The models' out-of-sample predictive ability is compared using a variety of loss functions and predictive ability tests. It is found that the model with its latent signal specified as an autoregressive process cannot be outperformed by the other specifications. This model is able to correctly forecast thirty-one out of forty outcomes of the Boat Race.
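To make the model class concrete (not the chapter's fitted model or its results), the sketch below simulates a binary outcome whose success probability is a logistic transform of a latent AR(1) signal and forms a one-step-ahead forecast probability by Monte Carlo integration over the transition density. All parameter values and the 0.5 decision threshold are illustrative assumptions.

```python
# A minimal sketch of a binary outcome driven by a latent AR(1) signal (assumed parameters).
import numpy as np

rng = np.random.default_rng(3)
T, phi, sig_eta = 40, 0.8, 0.4

# Simulate the latent signal and the binary outcomes (e.g., 1 = Cambridge wins).
theta = np.zeros(T)
for t in range(1, T):
    theta[t] = phi * theta[t - 1] + sig_eta * rng.standard_normal()
p = 1.0 / (1.0 + np.exp(-theta))
y = rng.binomial(1, p)

# One-step-ahead forecast probability given the current signal value,
# obtained by Monte Carlo integration over the AR(1) transition density.
draws = phi * theta[-1] + sig_eta * rng.standard_normal(10_000)
p_next = np.mean(1.0 / (1.0 + np.exp(-draws)))
forecast = int(p_next > 0.5)               # point forecast from a 0.5 threshold
```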