Rein Taagepera
- Published in print: 2008
- Published Online: September 2008
- ISBN: 9780199534661
- eISBN: 9780191715921
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199534661.003.0018
- Subject: Political Science, Comparative Politics, Political Economy
Society needs more from the social sciences than they have delivered, and this book offers openings. To society at large, quantitative social scientists presently seem no better at prediction than qualitative historians, philosophers, and journalists; they just look more boring. Computers could be a boon to the social sciences, but they have turned out to be a curse in disguise, enabling people with insufficient understanding of the scientific process to run canned computer programs and grind out reams of numbers parading as "results," to be printed and hardly ever used again. One may discard this book on the basis of errors of detail, but the problems it points out will still be there. Unless corrected, they will lead to a Ptolemaic dead end.
Christian Gouriéroux and Alain Monfort
- Published in print: 1997
- Published Online: November 2003
- ISBN: 9780198774754
- eISBN: 9780191596339
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/0198774753.001.0001
- Subject: Economics and Finance, Econometrics
This book deals with a new generation of econometric methods whose criterion functions have no simple analytical expression. The difficulty often comes from high-dimensional integrals in the probability density function or in the moments, and the idea is to circumvent this numerical difficulty by an approach based on simulation. The main methods considered are the Method of Simulated Moments, Simulated Maximum Likelihood, Simulated Pseudo-Maximum Likelihood, Simulated Non-Linear Least Squares, and Indirect Inference. These methods are applied to Limited Dependent Variables Models, to Financial Series, and to Switching Regime Models.
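To make the simulation idea concrete, here is a minimal sketch (my illustration, not from the book) of the Method of Simulated Moments in Python: intractable model moments are replaced by averages over simulated draws held fixed across parameter evaluations. The toy model and all names (`simulate`, `msm_objective`) are assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(theta, shocks):
    """Toy structural model: y = |theta[0] + theta[1]*v|, v ~ N(0,1).
    The absolute value makes the moments awkward to derive analytically."""
    return np.abs(theta[0] + theta[1] * shocks)

def moments(y):
    return np.array([y.mean(), (y ** 2).mean(), (y ** 3).mean()])

def msm_objective(theta, data_moments, shocks):
    # distance between data moments and simulated moments
    d = moments(simulate(theta, shocks)) - data_moments
    return d @ d  # identity weighting matrix, for simplicity

rng = np.random.default_rng(1)
data = simulate(np.array([0.5, 1.5]), rng.standard_normal(5000))

# common random numbers: the same draws are reused at every evaluation,
# so the objective is a deterministic function of theta
shocks = rng.standard_normal(20000)
res = minimize(msm_objective, x0=[1.0, 1.0],
               args=(moments(data), shocks), method="Nelder-Mead")
print(res.x)  # roughly recovers (0.5, 1.5); the signs are identified only jointly
```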
Moody T. Chu and Gene H. Golub
- Published in print: 2005
- Published Online: September 2007
- ISBN: 9780198566649
- eISBN: 9780191718021
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198566649.003.0006
- Subject: Mathematics, Applied Mathematics
Every inverse eigenvalue problem has a natural generalization to a least squares formulation, which can itself serve significant purposes in applications. The least squares approximation can be applied to either the spectral constraint or the structural constraint. This chapter highlights some of the main notions that arise when considering a least squares inverse problem and describes a hybrid lift-and-projection method.
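As a hedged illustration of the lift-and-projection idea (the basic alternating version, not the chapter's specific hybrid), this sketch seeks a symmetric Toeplitz matrix with a prescribed spectrum; the Toeplitz structural constraint and all function names are assumptions chosen for convenience.

```python
import numpy as np

def project_to_toeplitz(A):
    """Least squares projection onto symmetric Toeplitz matrices:
    average each diagonal."""
    n = A.shape[0]
    A = (A + A.T) / 2
    T = np.zeros_like(A)
    for k in range(n):
        m = np.diagonal(A, offset=k).mean()
        T += m * np.eye(n, k=k)
        if k > 0:
            T += m * np.eye(n, k=-k)
    return T

def lift(B, target_eigs):
    """Nearest matrix (Frobenius norm) to symmetric B having the
    prescribed spectrum: keep B's eigenvectors, swap in the targets."""
    _, Q = np.linalg.eigh(B)            # eigenvalues come back sorted
    return Q @ np.diag(np.sort(target_eigs)) @ Q.T

target = np.array([1.0, 2.0, 3.0, 4.0])
rng = np.random.default_rng(0)
B = project_to_toeplitz(rng.standard_normal((4, 4)))

for _ in range(500):
    B = project_to_toeplitz(lift(B, target))  # structural step after spectral step

print(np.linalg.eigvalsh(B))  # near the prescribed spectrum, or a least squares compromise
```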
T. N. Thiele
- Published in print: 2002
- Published Online: September 2007
- ISBN: 9780198509721
- eISBN: 9780191709197
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198509721.003.0002
- Subject: Mathematics, Probability / Statistics
This chapter presents Thiele's first paper on the method of least squares. This paper was so far ahead of its time that only a few appreciated the results. Thiele's recursive algorithm developed in this paper served as an important source of inspiration for Lauritzen and Spiegelhalter (1988). His geometric construction of the Kalman filter is described as a novelty.
Rein Taagepera
- Published in print: 2008
- Published Online: September 2008
- ISBN: 9780199534661
- eISBN: 9780191715921
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199534661.003.0012
- Subject: Political Science, Comparative Politics, Political Economy
When data are scattered, Ordinary Least-Squares (OLS) regression produces two quite distinct regression lines, one for y versus x and another for x versus y, and both may differ appreciably from what your eyes tell you. If data are scattered, OLS regression of y against x will disconfirm a model that actually fits; thus good statistics can be the death of good science. Standard OLS equations cannot form a system of interlocking models, because they are unidirectional and nontransitive. Scale-independent symmetric regression avoids these problems of OLS, offering a single reversible and transitive equation.
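A small numerical sketch (my illustration, not the author's code) of the two-lines point: the symmetric slope used here, sign(r)·sd(y)/sd(x), is the geometric mean of the y-on-x slope and the reciprocal of the x-on-y slope, one standard scale-independent choice.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10, 2, 200)
y = 3 * x + rng.normal(0, 12, 200)        # heavy scatter around y = 3x

C = np.cov(x, y)                          # C[0,0]=var(x), C[1,1]=var(y), C[0,1]=cov
b_yx = C[0, 1] / C[0, 0]                  # OLS slope, y regressed on x
b_xy = C[0, 1] / C[1, 1]                  # OLS slope, x regressed on y
r = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

b_sym = np.sign(r) * np.sqrt(C[1, 1] / C[0, 0])   # symmetric (geometric-mean) slope

print(f"y-on-x slope     : {b_yx:.2f}")      # pulled toward 0 by the scatter
print(f"1/(x-on-y slope) : {1 / b_xy:.2f}")  # pushed away from 0
print(f"symmetric slope  : {b_sym:.2f}")     # one line, the same whichever axis is 'y'
```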
J. C. Gower and G. B. Dijksterhuis
- Published in print: 2004
- Published Online: September 2007
- ISBN: 9780198510581
- eISBN: 9780191708961
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198510581.003.0005
- Subject: Mathematics, Probability / Statistics
This chapter considers projection Procrustes, where T is an orthonormal matrix that can be interpreted as rotating an orthogonal projection of the P1-dimensional configuration X1 to match a P2-dimensional configuration X2. In the two-sided version, T1 and T2 may have Q columns, where Q ≤ P1 and Q ≤ P2; often, Q = min(P1, P2).
William L. Harper
- Published in print: 2011
- Published Online: May 2012
- ISBN: 9780199570409
- eISBN: 9780191728679
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199570409.003.0006
- Subject: Philosophy, History of Philosophy, Philosophy of Science
Part I argues that the precision of Newton’s moon-test calculation goes beyond what modern least squares assessment can support from his cited data, and that his data afford no support for his precession correction to offset the action of the sun, but that Newton is innocent of Westfall’s main accusation of data fudging in the moon-test. Part II argues that Newton’s inference does not depend on his precession correction or on his selection of which lunar distance estimates to include. It argues that a correction for syzygy distances can defend the larger lunar distance Newton assigns in his moon-test of corollary 7 of proposition 37. Appendix 1 discusses the details of Newton’s moon-test calculation from corollary 7 of proposition 37 of book 3. It shows that Newton’s moon-test inference continues to hold up when the simplifying assumptions of his basic calculation are replaced by more realistic approximations.
Christopher G. Small and Jinfang Wang
- Published in print: 2003
- Published Online: September 2007
- ISBN: 9780198506881
- eISBN: 9780191709258
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198506881.003.0005
- Subject: Mathematics, Probability / Statistics
This chapter studies the problem of selecting a root of an estimating equation with more than one root, for instance by removing the irregularity of the estimating function or by using the idea of efficient estimation via one-step Newton-Raphson iteration from a consistent estimator. The Godambe efficient estimator is defined, and various modifications to the Newton-Raphson iterative procedure are suggested that are particularly applicable to estimating functions. A modification to Muller's method is also discussed. Other methods discussed include examination of the asymptotic properties of roots, a weighted least squares method, a bootstrap quadratic likelihood method, an information-theoretic approach for location models, and the method of model enlargement. The chapter also briefly discusses the much-neglected problem of estimating equations with no solutions at all, and the construction of confidence intervals using unbiased estimating functions.
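To illustrate one ingredient named above, here is a minimal sketch (my construction, not the authors') of one-step Newton-Raphson efficient estimation for the Cauchy location model, a classic case where the score equation has multiple roots and the sample median supplies the consistent starting value.

```python
import numpy as np

def score(theta, x):
    """Derivative of the Cauchy log-likelihood in the location parameter."""
    u = x - theta
    return np.sum(2 * u / (1 + u ** 2))

def score_deriv(theta, x):
    u = x - theta
    return np.sum(2 * (u ** 2 - 1) / (1 + u ** 2) ** 2)

rng = np.random.default_rng(2)
x = rng.standard_cauchy(500) + 3.0      # true location = 3

theta0 = np.median(x)                   # consistent but inefficient start
theta1 = theta0 - score(theta0, x) / score_deriv(theta0, x)  # one Newton step
print(theta0, theta1)                   # the one-step estimator is asymptotically efficient
```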
Partha P. Mitra and Hemant Bokil
- Published in print: 2007
- Published Online: May 2009
- ISBN: 9780195178081
- eISBN: 9780199864829
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195178081.003.0006
- Subject: Neuroscience, Techniques, Molecular and Cellular Systems
This chapter provides a mini-review of classical and modern statistical methods for data analysis. Topics covered include the method of least squares, data visualization, point estimation, interval estimation, hypothesis testing, nonparametric tests, and Bayesian estimation and inference.
J. C. Gower and G. B. Dijksterhuis
- Published in print: 2004
- Published Online: September 2007
- ISBN: 9780198510581
- eISBN: 9780191708961
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198510581.003.0013
- Subject: Mathematics, Probability / Statistics
The multi-set Procrustes problem may be viewed as a part of the family of three-mode multidimensional scaling methods. This chapter discusses links with other three-mode methods, especially PINDIS, INDSCAL, STATIS, SMACOF, ALSCAL, and generalizations of canonical correlation.
J. C. Gower and G. B. Dijksterhuis
- Published in print: 2004
- Published Online: September 2007
- ISBN: 9780198510581
- eISBN: 9780191708961
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198510581.003.0010
- Subject: Mathematics, Probability / Statistics
This chapter continues the discussion of the multi-set problem, presenting several forms of analysis of variance that have only a rudimentary form for the two-set Procrustes problem but give more detailed information with K sets. The terms in this analysis of variance help with interpretation and throw more light on the possible choices of criteria suitable for fitting Procrustes models by least squares.
Qin Duo
- Published in print: 1997
- Published Online: November 2003
- ISBN: 9780198292876
- eISBN: 9780191596803
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198292872.003.0004
- Subject: Economics and Finance, History of Economic Thought, Econometrics
Narrates how estimation was formalized. Estimation can be seen as the genesis of econometrics, since finding estimates for the coefficients of economically meaningful relationships has always been the central motive and fulfilment of applied modelling activities. Estimation therefore became separated out as one of the basic steps, along with model construction, identification, and testing, and subsequent research was confined to the technical development of new optimal estimators for increasingly complicated model forms. The first section of the chapter describes the early developments in estimation methods centring on the least squares (LS) principle; the second shows how this led to the maximum-likelihood (ML) method in a simultaneous-equations system; the third looks at special problems in the context of time-series analysis; the fourth sums up other developments concerning errors-in-variables models; and the final section covers the completion of basic estimation theory in orthodox econometrics.
Shoutir Kishore Chatterjee
- Published in print: 2003
- Published Online: September 2007
- ISBN: 9780198525318
- eISBN: 9780191711657
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198525318.003.0008
- Subject: Mathematics, Probability / Statistics
In the first half of the 19th century, Laplace, Gauss, and a number of other contributors enriched statistical thought. The quest for a suitable model for the distribution of observational errors led to Gauss’s derivation of the normal model from the ‘A. M. postulate’, and Laplace’s derivation of the Central Limit Theorem gave further support to the model. Various methods of curve fitting were developed, of which the Least Squares method, heuristically proposed by Legendre and given first a Bayesian and then a sampling-theory justification by Gauss, was the most important. During this period, Laplace worked on the large-sample sampling theory approach to inference, and both he and Gauss introduced the idea of relative efficiency of estimates in the context of particular problems. In fact, the seeds of some later concepts, such as sufficiency, variance component models, and diffusion processes, can be found in work carried out at this time.
A. Hald
- Published in print: 2002
- Published Online: September 2007
- ISBN: 9780198509721
- eISBN: 9780191709197
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198509721.003.0007
- Subject: Mathematics, Probability / Statistics
This chapter presents a reprint of part of Hald (2000a), containing a detailed discussion of Thiele's halfinvariants and their use in series expansions. Section 2 describes how Poisson, Bessel, and Bienaymé generalized the Laplacean central limit theorem by including more terms in the expansion of the logarithm of the characteristic function. Section 3 discusses Chebyshev's least squares fitting of a polynomial to the observed values of a function by means of orthogonal polynomials. Section 4 explains how Thiele and Gram introduced the Gram-Charlier series from a completely different point of view.
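For readers who want Section 3's idea in modern form, this is a hedged sketch (mine, not from the reprint): least squares fitting of a polynomial in an orthogonal Chebyshev basis, which keeps the normal equations well conditioned; NumPy's `Chebyshev.fit` handles the orthogonal-basis bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 101)
y = np.exp(x) + rng.normal(0, 0.05, x.size)   # noisy observations of a function

# degree-5 least squares fit in the Chebyshev orthogonal-polynomial basis
fit = np.polynomial.chebyshev.Chebyshev.fit(x, y, deg=5)
print(fit(0.5), np.exp(0.5))                  # fitted value vs the noiseless target
```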
Peter Main
- Published in print: 2009
- Published Online: September 2009
- ISBN: 9780199219469
- eISBN: 9780191722516
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199219469.003.0012
- Subject: Physics, Crystallography: Physics
In crystallography, numerical parameters for the structure are derived from experimental data. This chapter discusses how the data and parameters are related, and introduces data fitting procedures including unweighted and weighted means, and least-squares criteria for a ‘best fit’. The simple case of linear regression for two parameters of a straight line is treated in some detail in order to explain the least-squares tools of observational equations and matrix algebra, leading to variances and covariances. Restraints and constraints are applied, and their important distinction made clear. Non-linearity in the observational equations leads to further complications, with only parameter shifts rather than the parameters themselves obtainable through least-squares treatment. Ill-conditioning and matrix singularity are explained, with reference to crystallographic relevance. Computing aspects are considered, since least-squares refinement is particularly expensive computationally.
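A minimal sketch of the straight-line case described above (illustrative, not the chapter's code): weighted observational equations y ≈ a + b·x solved in matrix form, with the inverse normal matrix supplying the variances and covariances of the parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 20)
sigma = 0.2 + 0.05 * x                      # each observation has its own uncertainty
y = 1.0 + 2.0 * x + rng.normal(0, sigma)

A = np.column_stack([np.ones_like(x), x])   # design matrix of the observational equations
W = np.diag(1 / sigma ** 2)                 # weight matrix, w = 1/sigma^2

N = A.T @ W @ A                             # normal matrix
params = np.linalg.solve(N, A.T @ W @ y)    # best-fit (a, b)
cov = np.linalg.inv(N)                      # variance-covariance matrix of (a, b)

print("a, b =", params)
print("sigma_a, sigma_b =", np.sqrt(np.diag(cov)))
print("cov(a, b) =", cov[0, 1])
```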
Dennis Sherwood and Jon Cooper
- Published in print: 2010
- Published Online: January 2011
- ISBN: 9780199559046
- eISBN: 9780191595028
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199559046.003.0015
- Subject: Physics, Crystallography: Physics
This chapter begins by emphasizing that the initial electron density for a protein will be significantly affected by errors in the experimental phases and, subsequently, additional errors of interpretation will arise when a model of the structure is built. It describes methods for fitting a protein molecule to its electron density map (both manual and automated) and demonstrates the importance of interactive computer graphics in these processes. It then covers the underlying theory of methods by which the model is adjusted to maximise its agreement with the experimental structure factor or intensity data. The chapter describes the role of stereochemical restraints in macromolecular refinement; recently developed methods of improving the efficiency of refinement which exploit molecular dynamics at high temperature and/or maximum likelihood statistics; the exploitation of non-crystallographic symmetry in refinement; and the process of treating the protein, or parts of it, as rigid groups to improve the radius of convergence or analyse the dynamics of the molecule. Methods for calculating electron density maps which minimise the problem of model bias are described in detail along with criteria by which the success or otherwise of refinement may be judged.
Christian Gouriéroux and Alain Monfort
- Published in print: 1997
- Published Online: November 2003
- ISBN: 9780198774754
- eISBN: 9780191596339
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198774753.003.0003
- Subject: Economics and Finance, Econometrics
The simulated analogues of the Maximum Likelihood, Pseudo-Maximum Likelihood, and Non-Linear Least Squares methods are presented. Their asymptotic properties and bias corrections are given under various assumptions. Several kinds of simulators are explored, among them simulators based on conditioning, on the EM algorithm, and on importance sampling. The Metropolis-Hastings algorithm is also considered.
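As a concrete (and entirely illustrative) rendering of simulated maximum likelihood, the sketch below handles a Poisson model with unobserved normal heterogeneity: each observation's likelihood integral over the latent term is replaced by an average across fixed simulated draws. The model and names are my assumptions, not the book's examples.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(5)
n, S = 1000, 200
x = rng.uniform(0, 1, n)
v_true = rng.standard_normal(n)
y = rng.poisson(np.exp(0.8 * x + 0.5 * v_true))   # y_i ~ Poisson(exp(b*x_i + s*v_i))

v_sim = rng.standard_normal((n, S))   # fixed draws, reused at every evaluation

def neg_sim_loglik(theta):
    b, s = theta
    lam = np.exp(b * x[:, None] + s * v_sim)          # n x S intensity draws
    L_i = poisson.pmf(y[:, None], lam).mean(axis=1)   # simulated likelihood per obs
    return -np.sum(np.log(L_i + 1e-300))

res = minimize(neg_sim_loglik, x0=[0.0, 0.2], method="Nelder-Mead")
print(res.x)   # roughly (0.8, 0.5); the simulation bias shrinks as S grows
```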
David F. Hendry
- Published in print: 1995
- Published Online: November 2003
- ISBN: 9780198283164
- eISBN: 9780191596384
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198283164.003.0006
- Subject: Economics and Finance, Econometrics
Regression, linear least‐squares approximation, contingent plan, and behavioural model are distinguished as four interpretations that ‘look alike’ yet have different properties. Models of expectations formation are analysed including rational, consistent, unbiased, and economically rational expectations, the last highlighting the instrumental role of expectations in achieving plans.
David F. Hendry
- Published in print: 1995
- Published Online: November 2003
- ISBN: 9780198283164
- eISBN: 9780191596384
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198283164.003.0003
- Subject: Economics and Finance, Econometrics
Least squares and recursive methods for estimating the values of unknown parameters, and the logic of testing in empirical modelling, are discussed. The tools needed for investigating the properties of statistics in economics, namely large-sample distribution theory and Monte Carlo simulation techniques, are described. Ergodicity is explained, as are tools for investigating non-stationarity due to unit roots.
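To make "recursive methods" concrete, here is a small sketch (mine, not Hendry's) of recursive least squares: the coefficient estimate and the inverse-moment matrix are updated one observation at a time, yielding the coefficient trajectories that recursive testing examines.

```python
import numpy as np

def rls(X, y, delta=1000.0):
    """Return the trajectory of coefficient estimates.
    P starts as a large multiple of I (a diffuse initialization)."""
    k = X.shape[1]
    beta = np.zeros(k)
    P = delta * np.eye(k)                        # tracks (X'X)^{-1} as data accumulate
    path = []
    for xt, yt in zip(X, y):
        gain = P @ xt / (1.0 + xt @ P @ xt)
        beta = beta + gain * (yt - xt @ beta)    # update by the prediction error
        P = P - np.outer(gain, xt @ P)
        path.append(beta.copy())
    return np.array(path)

rng = np.random.default_rng(6)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 300)

path = rls(X, y)
print(path[-1])   # converges to the full-sample OLS estimate
```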
Judith D. Singer and John B. Willett
- Published in print: 2003
- Published Online: September 2009
- ISBN: 9780195152968
- eISBN: 9780199864980
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195152968.003.0004
- Subject: Public Health and Epidemiology, Public Health, Epidemiology
This chapter delves deeper into the specification, estimation, and interpretation of the multilevel model for change. Following introduction of a new data set, it presents a composite formulation of the model that combines the level-1 and level-2 submodels into a single equation. The composite model leads naturally to consideration of alternative methods of estimation. The chapter describes two new methods, generalized least squares (GLS) and iterative generalized least squares (IGLS), and within each it distinguishes further between two types of approach, the full and the restricted. The remainder of the chapter focuses on real-world issues of data analysis.
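A hedged sketch of the GLS building block: when the error covariance V is not spherical, the efficient estimator weights by its inverse. The toy longitudinal structure and names are my assumptions, and V is treated as known here, whereas IGLS would iterate between estimating the covariance parameters and the fixed effects.

```python
import numpy as np

def gls(X, y, V):
    """Generalized least squares: beta = (X'V^{-1}X)^{-1} X'V^{-1} y."""
    Vi = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
    cov_beta = np.linalg.inv(X.T @ Vi @ X)
    return beta, cov_beta

# toy longitudinal data: equicorrelated errors within each person
rng = np.random.default_rng(7)
n_people, n_waves = 50, 4
n = n_people * n_waves
X = np.column_stack([np.ones(n), np.tile(np.arange(n_waves), n_people)])  # intercept + time

block = 0.5 * np.ones((n_waves, n_waves)) + 0.5 * np.eye(n_waves)
V = np.kron(np.eye(n_people), block)      # block-diagonal error covariance

L = np.linalg.cholesky(V)                 # draw correlated errors
y = X @ np.array([10.0, 1.5]) + L @ rng.standard_normal(n)

beta, cov_beta = gls(X, y, V)
print(beta)   # close to the true (10, 1.5)
```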