Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print: 2014
- Published Online: October 2017
- ISBN: 9780691151687
- eISBN: 9781400848911
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691151687.003.0008
- Subject: Physics, Particle Physics / Astrophysics / Cosmology
Regression is a special case of the general model fitting and selection procedures discussed in chapters 4 and 5. It can be defined as the relation between a dependent variable, y, and a set of independent variables, x, that describes the expectation value of y given x: E[y|x]. The purpose of obtaining a “best-fit” model ranges from scientific interest in the values of model parameters (e.g., the properties of dark energy, or of a newly discovered planet) to the predictive power of the resulting model (e.g., predicting solar activity). This chapter starts with a general formulation for regression, lists various simplified cases, and then discusses methods that can be used to address them, such as regression for linear models, kernel regression, robust regression, and nonlinear regression.
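As a minimal illustration of the definition above (not drawn from the chapter itself), the sketch below fits E[y|x] under an assumed linear model using ordinary least squares; the data and coefficients are invented for the example.

```python
import numpy as np

# Simulated data from a hypothetical linear relation y = 2.0 + 0.5*x + noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)

# Design matrix with an intercept column; least squares estimates
# theta minimizing ||X theta - y||^2, i.e. the linear model for E[y|x].
X = np.column_stack([np.ones_like(x), x])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(theta)  # approximately [2.0, 0.5]
```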
Allan McCutcheon and Colin Mills
- Published in print: 1998
- Published Online: November 2003
- ISBN: 9780198292371
- eISBN: 9780191600159
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198292376.003.0005
- Subject: Political Science, Reference
Extending the basic regression model to the analysis of contingency tables, using odds and odds ratios. The worked example shows how log‐linear and latent class techniques can be assimilated into a single model using GLIM, LCAG, and LEM software, and how to interpret the BIC and AIC statistics.
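The odds and odds ratios this chapter builds on can be computed directly from a 2×2 contingency table; the counts below are invented purely for illustration.

```python
# Hypothetical 2x2 contingency table: rows = group A/B, columns = outcome yes/no.
table = [[30, 70],
         [15, 85]]

odds_a = table[0][0] / table[0][1]   # odds of "yes" in group A: 30/70
odds_b = table[1][0] / table[1][1]   # odds of "yes" in group B: 15/85
odds_ratio = odds_a / odds_b         # cross-product ratio (30*85)/(70*15)

print(round(odds_ratio, 3))  # → 2.429
```

The odds ratio, not the raw counts, is what a log-linear model parameterizes: its logarithm is a linear function of the model's interaction terms.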
Ray Chambers and Robert Clark
- Published in print: 2012
- Published Online: May 2012
- ISBN: 9780198566625
- eISBN: 9780191738449
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198566625.001.0001
- Subject: Mathematics, Probability / Statistics
This book is an introduction to the model-based approach to survey sampling. It consists of three parts, with Part I focusing on estimation of population totals. Chapters 1 and 2 introduce survey sampling, and the model-based approach, respectively. Chapter 3 considers the simplest possible model, the homogeneous population model, which is then extended to stratified populations in Chapter 4. Chapter 5 discusses simple linear regression models for populations, and Chapter 6 considers clustered populations. The general linear population model is then used to integrate these results in Chapter 7. Part II of this book considers the properties of estimators based on incorrectly specified models. Chapter 8 develops robust sample designs that lead to unbiased predictors under model misspecification, and shows how flexible modelling methods like non-parametric regression can be used in survey sampling. Chapter 9 extends this development to misspecification robust prediction variance estimators, and Chapter 10 completes Part II of the book with an exploration of outlier robust sample survey estimation. Chapters 11 to 17 constitute Part III of the book and show how model-based methods can be used in a variety of problem areas of modern survey sampling. They cover (in order) prediction of non-linear population quantities, sub-sampling approaches to prediction variance estimation, design and estimation for multipurpose surveys, prediction for domains, small area estimation, efficient prediction of population distribution functions, and the use of transformations in survey inference. The book is designed to be accessible to undergraduate and graduate level students with a good grounding in statistics, and to applied survey statisticians seeking an introduction to model-based survey design and estimation.
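Under the simplest model mentioned above, the homogeneous population model, every unit shares a common mean, and the model-based predictor of the population total is the sample sum plus the sample mean imputed for each non-sampled unit, which reduces to the familiar expansion estimator N·ȳ. A sketch with invented numbers:

```python
# Hypothetical population and sample (values invented for illustration).
N = 1000                                 # population size
sample = [4.2, 5.1, 3.8, 4.9, 4.5]       # observed sample values
n = len(sample)
ybar = sum(sample) / n                   # sample mean, the estimate of mu

# Predicted total = observed sum + predicted sum over the N - n
# non-sampled units; algebraically this equals N * ybar.
total_hat = sum(sample) + (N - n) * ybar
print(total_hat)  # → 4500.0
```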
Steffen L. Lauritzen
- Published in print: 2002
- Published Online: September 2007
- ISBN: 9780198509721
- eISBN: 9780191709197
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198509721.001.0001
- Subject: Mathematics, Probability / Statistics
Thorvald Nicolai Thiele was a brilliant Danish researcher of the 19th century. He was a professor of Astronomy at the University of Copenhagen and the founder of Hafnia, the first Danish private insurance company. Thiele worked in astronomy, mathematics, actuarial science, and statistics; his most spectacular contributions were in the latter two areas, where his published work was far ahead of his time. This book is concerned with his statistical work. It revolves around his three main statistical masterpieces, which are now translated into English for the first time: 1) his article from 1880 where he derives the Kalman filter; 2) his book from 1889, where he lays out the subject of statistics in a highly original way, derives the half-invariants (today known as cumulants), the notion of likelihood in the case of binomial experiments, the canonical form of the linear normal model, and develops model criticism via analysis of residuals; and 3) an article from 1899 where he completes the theory of the half-invariants. This book also contains three chapters, written by A. Hald and S. L. Lauritzen, which describe Thiele's statistical work in modern terms and put it into historical perspective.
Péter Róbert and Erzsébet Bukodi
- Published in print: 2004
- Published Online: November 2004
- ISBN: 9780199258451
- eISBN: 9780191601491
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0199258457.003.0012
- Subject: Political Science, European Union
Investigates temporal changes in Hungarian mobility patterns. Large-scale data sets of the Hungarian Central Statistical Office, collected between 1973 and 2000, are used for this purpose. In addition to descriptive statistics, log-linear and log-multiplicative models are fitted to the data in order to investigate trends of temporal changes. Descriptive results indicate that the restructuring of the class distribution slowed down in the 1980s in comparison to the 1970s but increased again in the 1990s. Observed mobility rates turned out to be relatively high, but the data do not indicate an increase in the openness of Hungarian society. For relative mobility rates, the hypothesis of constant social fluidity cannot be rejected for Hungary. Though an increase in social fluidity did occur between 1973 and 1983, it levelled off between 1983 and 1992, and it reversed between 1992 and 2000.
John E. Jackson
- Published in print: 1998
- Published Online: November 2003
- ISBN: 9780198294719
- eISBN: 9780191599361
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198294719.003.0032
- Subject: Political Science, Reference
Reviews methodological techniques available across the discipline of political science. Econometrics and political science methods include structural equation estimations, time‐series analysis, and non‐linear models. Alternative approaches analyse public preferences, political institutions, and path‐dependent political economy modelling. The drawbacks of these methods are examined by questioning their underlying assumptions and examining their consequences. While there is cause for concern, solace lies in the fact that these problems are also faced across other disciplines.
John G. Orme and Terri Combs-Orme
- Published in print: 2009
- Published Online: May 2009
- ISBN: 9780195329452
- eISBN: 9780199864812
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195329452.003.0001
- Subject: Social Work, Research and Evaluation
This chapter is a brief review of some major concepts of linear regression, presented in the context of simple examples using both dichotomous and continuous independent variables. The chapter compares and contrasts linear regression and the regression models for discrete dependent variables discussed in the remaining chapters of the book in order to clarify the major concepts. This chapter explains the generalized linear model (GZLM) in the context of linear regression and discusses and illustrates residuals, spurious relationships, interactions and curvilinear relationships, and multicollinearity. In preparation for the regression models discussed in subsequent chapters, the chapter also explains the link function, maximum likelihood estimation, issues related to sample size, assumptions and limitations, and model specification and evaluation.
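The link function mentioned above is the piece of a generalized linear model that maps the mean of the response onto the linear predictor. A minimal sketch (coefficients invented, not taken from the chapter) using the canonical logit link for a binary outcome:

```python
import math

# The logit link g(p) = log(p / (1 - p)) maps a probability onto the
# linear predictor eta = b0 + b1*x; its inverse, the logistic function,
# maps eta back to a probability.
def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(eta):
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical fitted coefficients, for illustration only.
b0, b1 = -1.0, 0.8
x = 2.0
p = inv_logit(b0 + b1 * x)   # predicted probability at x, eta = 0.6
print(round(p, 3))           # → 0.646

# The link and its inverse are consistent: g(p) recovers eta.
assert abs(logit(p) - (b0 + b1 * x)) < 1e-9
```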
Elinor Scarbrough and Eric Tanenbaum (eds)
- Published in print: 1998
- Published Online: November 2003
- ISBN: 9780198292371
- eISBN: 9780191600159
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/0198292376.001.0001
- Subject: Political Science, Reference
This volume is a collection of commissioned articles by 16 experts in social science methodology, each contribution introducing experienced social scientists to more advanced analytic techniques. The contributions explain the theoretical underpinnings of a particular technique, and illustrate the approach with a worked example. The techniques covered are the basic regression model and its extensions, linear structural equation modelling, log‐linear and latent class models, multi‐level modelling, and three extensions to modelling time series data. In these contributions, statistical notation is kept to a minimum; where necessary, it is consigned to footnotes or an appendix. Three final contributions introduce new developments in rational choice theory and discourse analysis.
Ramon Marimon and Andrew Scott (eds)
- Published in print: 2001
- Published Online: November 2003
- ISBN: 9780199248278
- eISBN: 9780191596605
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/0199248273.001.0001
- Subject: Economics and Finance, Macro- and Monetary Economics
Macroeconomics increasingly uses stochastic dynamic general equilibrium models to understand theoretical and policy issues. Unless very strong assumptions are made, understanding the properties of particular models requires solving the model using a computer. This volume brings together leading contributors in the field who explain in detail how to implement the computational techniques needed to solve dynamic economics models. It is based on lectures presented at the 7th Summer School of the European Economic Association on computational methods for the study of dynamic economies, held in 1996. A broad spread of techniques is covered, and their application to a wide range of subjects is discussed. The book provides the basics of a tool kit that researchers and graduate students can use to solve and analyse their own theoretical models. It is oriented towards economists who already have the equivalent of a first year of graduate studies, or to any advanced undergraduates or researchers with a solid mathematical background. No competence with writing computer codes is assumed. After an introduction by the editors, it is arranged in three parts: I Almost linear methods; II Nonlinear methods; and III Solving some dynamic economies.
Richard Breen
- Published in print: 2004
- Published Online: November 2004
- ISBN: 9780199258451
- eISBN: 9780191601491
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0199258457.003.0002
- Subject: Political Science, European Union
Introduces and explains, in a fairly non-technical fashion, the main quantitative methods employed in the study of intergenerational mobility. These include methods for modelling mobility tables, in the light of hypotheses about the distribution of cases in these tables, and methods for assessing whether such a hypothesized model gives an adequate account of the observed data.
Manuel Arellano
- Published in print: 2003
- Published Online: July 2005
- ISBN: 9780199245284
- eISBN: 9780191602481
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/0199245282.001.0001
- Subject: Economics and Finance, Econometrics
This book reviews some of the main topics in panel data econometrics. It analyses econometric models with non-exogenous explanatory variables, and the problem of distinguishing between dynamic responses and unobserved heterogeneity in panel data models. The book is divided into three parts. Part I deals with static models. Part II discusses pure time series models. Part III considers dynamic conditional models.
Michael S. Landy, Martin S. Banks, and David C. Knill
- Published in print: 2011
- Published Online: September 2012
- ISBN: 9780195387247
- eISBN: 9780199918379
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195387247.003.0001
- Subject: Psychology, Cognitive Neuroscience, Cognitive Psychology
This chapter provides a general introduction to the field of cue combination from the perspective of optimal cue integration. It works through a number of qualitatively different problems and illustrates how building ideal observers helps formulate the scientific questions that need to be answered in order to understand how the brain solves these problems. It begins with a simple example of integration leading to a linear model of cue integration. This is followed by a summary of a general approach to optimality: Bayesian estimation and decision theory. It then reviews situations in which realistic generative models of sensory data lead to nonlinear ideal-observer models. Subsequent sections review empirical studies of cue combination and issues they raise, as well as open questions in the field.
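The linear model of cue integration referred to above has a standard closed form: for independent Gaussian cues, the optimal combined estimate weights each cue by its inverse variance, and the combined variance is smaller than either cue's alone. A sketch with invented numbers:

```python
# Two hypothetical cues to the same quantity (values invented):
s1, var1 = 10.0, 4.0   # cue 1: estimate and its variance (less reliable)
s2, var2 = 12.0, 1.0   # cue 2: estimate and its variance (more reliable)

# Inverse-variance weights sum to 1; the more reliable cue dominates.
w1 = (1 / var1) / (1 / var1 + 1 / var2)
w2 = (1 / var2) / (1 / var1 + 1 / var2)

s_hat = w1 * s1 + w2 * s2              # combined (maximum-likelihood) estimate
var_hat = 1 / (1 / var1 + 1 / var2)    # combined variance, below min(var1, var2)

print(round(s_hat, 3), round(var_hat, 3))  # → 11.6 0.8
```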
R. Duncan Luce
- Published in print: 1991
- Published Online: January 2008
- ISBN: 9780195070019
- eISBN: 9780199869879
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195070019.003.0007
- Subject: Psychology, Cognitive Models and Architectures
This chapter examines models that attempt to formulate the information the mind has about the signals presented, and how the mind then makes decisions about what and when to respond. Topics discussed include two-state mixtures; a linear operator model for sequential effects; the fast guess account of errors; a three-state, fast-guess, memory model; and data with response errors.
Alfonso Novales, Emilio Domínguez, Javier J. Pérez, and Jesús Ruiz
- Published in print: 2001
- Published Online: November 2003
- ISBN: 9780199248278
- eISBN: 9780191596605
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0199248273.003.0004
- Subject: Economics and Finance, Macro- and Monetary Economics
Discusses the main issues involved in practical applications of solution methods that have been proposed for rational expectations models, based on eigenvalue–eigenvector decompositions. It starts by reviewing how a numerical solution can be derived for the standard deterministic Cass–Koopmans–Brock–Mirman economy, pointing out the relevance of stability conditions. Next the general structure used to solve linear rational expectations models, and its extension to nonlinear models, is summarized. The solution method is then applied to Hansen's (1985) model of indivisible labour, and comparisons with other solution approaches are discussed. It is then shown how the eigenvalue–eigenvector decomposition can help to separately identify variables of a similar nature (as is the case when physical capital and inventories are inputs in an aggregate production technology), and how the solution method can be adapted to deal with endogenous growth models.
Richard M. Goodwin
- Published in print: 1990
- Published Online: November 2003
- ISBN: 9780198283355
- eISBN: 9780191596315
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198283350.003.0003
- Subject: Economics and Finance, Macro- and Monetary Economics
Teases out parallels in the thinking of von Neumann and Marx. Goodwin presents a simplified version of the von Neumann model removing the assumption of infinite labour supply. The resulting non‐linear difference system shows endogenously erratic behaviour with cyclical output growth. In the long run, this system ceases to oscillate and another model is proposed to circumvent this problem. For low parameter values, the model has a fixed point; for moderate values, it has a limit cycle; and for higher values, a chaotic attractor is observed.
Richard M. Goodwin
- Published in print: 1990
- Published Online: November 2003
- ISBN: 9780198283355
- eISBN: 9780191596315
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198283350.003.0002
- Subject: Economics and Finance, Macro- and Monetary Economics
Deals with the classical dynamical problem of technological advances in an agricultural (corn) economy. Goodwin asserts that two mis‐specifications—concerning labour supply and technical progress—hampered classical models. A discrete time model is proposed with corn production embedded in a wider economy. The model has a chaotic attractor, and highly erratic market dynamics follow even in the absence of exogenous shocks.
Sylvain Baillet
- Published in print:
- 2010
- Published Online:
- September 2010
- ISBN:
- 9780195307238
- eISBN:
- 9780199863990
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195307238.003.0005
- Subject:
- Neuroscience, Behavioral Neuroscience, Techniques
This chapter reviews the statistical tools available for the analysis of distributed activation maps defined either on the 2D cortical surface or throughout the 3D brain volume. Statistical analysis of MEG data bears a great resemblance to the analysis of functional magnetic resonance imaging (fMRI) or positron emission tomography (PET) activation maps; much of the methodology can therefore be borrowed or adapted from the functional neuroimaging literature. In particular, the General Linear Modeling (GLM) approach is described, in which the MEG data are first mapped into brain space and then fitted to a univariate or multivariate model at each surface or volume element. A desired contrast of the estimated parameters produces a statistical map, which is then thresholded for evidence of an experimental effect. The chapter also describes several approaches that can produce corrected thresholds and control for false positives: Bonferroni correction, Random Field Theory (RFT), permutation tests, and the False Discovery Rate (FDR).
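Two of the correction methods named above, Bonferroni and the Benjamini–Hochberg FDR procedure, are simple enough to sketch on synthetic p-values. This is a generic illustration, not the chapter's MEG pipeline, and the p-values are made up:

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0 where p < alpha / m (controls the family-wise error rate)."""
    return pvals < alpha / len(pvals)

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure (controls the false discovery
    rate): reject the k smallest p-values, where k is the largest rank
    with p_(k) <= k * alpha / m."""
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    below = ranked <= alpha * (np.arange(1, m + 1) / m)
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest rank passing the test
        reject[order[: k + 1]] = True
    return reject

# Synthetic "statistical map": a few strong effects among many nulls.
p = np.array([0.001, 0.008, 0.039, 0.041, 0.20, 0.51, 0.74, 0.97])
print(bonferroni(p).sum())           # conservative: only the strongest survives
print(benjamini_hochberg(p).sum())   # less conservative under many tests
```

With these eight p-values, Bonferroni rejects only p = 0.001 (threshold 0.05/8 = 0.00625), while Benjamini–Hochberg also admits p = 0.008, illustrating why FDR control is popular for dense brain maps.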
A. Hald
- Published in print:
- 2002
- Published Online:
- September 2007
- ISBN:
- 9780198509721
- eISBN:
- 9780191709197
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198509721.003.0005
- Subject:
- Mathematics, Probability / Statistics
This chapter presents a reprint of Hald (1981), containing a detailed discussion of Thiele's contributions to statistics and a brief summary of some of his contributions to other areas. Topics covered include skew distributions, cumulants, estimation methods and k statistics, the linear model with normally distributed errors, analysis of variance, and a time series model combining Brownian motion and the linear model with normally distributed errors. Thiele's work is placed in a historical perspective and explained in modern terms.
Ludwig Fahrmeir and Thomas Kneib
- Published in print:
- 2011
- Published Online:
- September 2011
- ISBN:
- 9780199533022
- eISBN:
- 9780191728501
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199533022.003.0004
- Subject:
- Mathematics, Probability / Statistics, Biostatistics
This chapter considers Bayesian inference in semiparametric mixed models (SPMMs) for longitudinal data. Section 4.1 assumes Gaussian smoothness priors, focusing on Bayesian P-splines in combination with Gaussian priors for random effects, and outlines various model specifications that are included as special cases in SPMMs. Section 4.2 describes inferential techniques, detailing both empirical Bayes estimation based on mixed model technology and full Bayes techniques. Section 4.3 discusses the relation between Bayesian smoothing and correlation. Section 4.4 considers some additional or alternative semiparametric extensions of generalized linear mixed models: First, as in Section 3.2, the assumption of Gaussian random effects can be removed by allowing nonparametric Dirichlet process or Dirichlet process mixture priors in combination with Gaussian smoothness priors for functional effects. Second, local adaptivity of functional effects can be improved by scale mixtures of Gaussian smoothness priors, with variance parameters following stochastic process priors in another hierarchical stage. Third, the case of high-dimensional fixed effects β is also considered, with Bayesian shrinkage priors regularizing the resulting ill-posed inferential problem. Shrinkage priors can also be used for model choice and variable selection. The final Section 4.5 describes strategies for model choice and model checking in SPMMs.
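As a rough frequentist analogue of the P-spline smoothers discussed above, the sketch below fits a penalized spline by penalized least squares. It simplifies the chapter's setup considerably: a truncated-linear basis stands in for B-splines, a ridge penalty on the knot coefficients stands in for the Bayesian smoothness prior, and the knot count and penalty weight are arbitrary illustrative choices.

```python
import numpy as np

def penalized_spline_fit(x, y, n_knots=10, lam=1.0):
    """Penalized-spline smoother: truncated-linear basis plus a ridge
    penalty on the knot coefficients, solving (C'C + lam*D) b = C'y."""
    knots = np.linspace(x.min(), x.max(), n_knots + 2)[1:-1]
    C = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * n_knots)   # penalize only knot terms
    beta = np.linalg.solve(C.T @ C + lam * D, C.T @ y)
    return C @ beta

# Noisy observations of a smooth signal.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)
fit = penalized_spline_fit(x, y, lam=0.1)
rmse = np.sqrt(np.mean((fit - np.sin(2 * np.pi * x)) ** 2))
```

In the Bayesian P-spline formulation the penalty weight corresponds to a variance parameter with its own prior, which is what allows the smoothing amount to be estimated rather than fixed by hand as here.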
William R. Nugent
- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780195369625
- eISBN:
- 9780199865208
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195369625.003.0004
- Subject:
- Social Work, Research and Evaluation
Methods for analyzing data that come from research designs utilizing both single case and group design methods are described and illustrated in this chapter. Among the analysis methods described is the use of hierarchical linear models, or what is sometimes referred to as growth curve modeling.
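A minimal two-stage sketch of the growth-curve idea, under assumptions not in the original (simulated data, and per-case OLS in place of a full hierarchical-model fit): each case's repeated measurements are fitted with their own trajectory at level 1, and the case-specific slopes are then summarized at level 2.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cases, n_obs = 30, 8
t = np.arange(n_obs, dtype=float)

# Simulate single-case time series: each case has its own intercept and
# slope drawn around group-level values (a growth-curve structure).
true_mean_slope = 0.5
slopes = true_mean_slope + rng.normal(scale=0.1, size=n_cases)
intercepts = 2.0 + rng.normal(scale=0.5, size=n_cases)
Y = (intercepts[:, None] + slopes[:, None] * t
     + rng.normal(scale=0.3, size=(n_cases, n_obs)))

# Stage 1: per-case OLS trajectory (level-1 model).
X = np.column_stack([np.ones(n_obs), t])
betas = np.linalg.lstsq(X, Y.T, rcond=None)[0]   # shape (2, n_cases)
case_slopes = betas[1]

# Stage 2: group-level summary of growth (level-2 model).
mean_slope = case_slopes.mean()
se = case_slopes.std(ddof=1) / np.sqrt(n_cases)
```

A true hierarchical linear model would estimate both levels jointly and shrink noisy case-level slopes toward the group mean; this two-stage version only conveys the structure of the analysis.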