Peter Politser
- Published in print: 2008
- Published Online: May 2008
- ISBN: 9780195305821
- eISBN: 9780199867783
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195305821.003.0004
- Subject: Psychology, Cognitive Psychology
This chapter examines the alternative, behavioral economic models of evaluation. These models include diagnostic elements (differences in response to risk vs. ambiguity, attention to the chances of a positive or negative event, sensitivity to changes in probability, and optimism or pessimism), elements related to management (expectancy-related and goal-related utilities), and outcome evaluations (disappointment, elation, and regret, as well as the experienced disutility of waiting for outcomes to occur). These models also consider other factors that can change evaluations, such as learning and context. Investigating the neural correlates of these behavioral economic parameters of choice clarifies why some irrational violations of the axioms or reasons may occur or even be justified. The chapter also describes other forms of inconsistency in evaluation, beyond mere inconsistency with the economic axioms, including conflicts between remembered, experienced, predictive, expectancy-related, and goal-related utility.
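To make the outcome-evaluation terms named above concrete, here is a minimal, hypothetical sketch (not Politser's own formulation): a realized utility adjusted by disappointment or elation relative to a prior expectation, and by regret relative to the best forgone alternative, with arbitrary placeholder weights.

```python
# Illustrative sketch only: outcome-evaluation terms of the kind discussed in
# the chapter, expressed as deviations of a realized outcome from a prior
# expectation (disappointment/elation) and from the best forgone alternative
# (regret). Function name and weights are hypothetical placeholders.
def outcome_evaluation(utility_received, expected_utility, best_forgone_utility,
                       w_surprise=0.5, w_regret=0.5):
    surprise = utility_received - expected_utility        # >0 elation, <0 disappointment
    comparison = utility_received - best_forgone_utility  # <0 produces regret
    return utility_received + w_surprise * surprise + w_regret * min(comparison, 0.0)

# A gamble paying 10 with probability 0.3, else 0; the forgone safe option paid 3.
expected = 0.3 * 10 + 0.7 * 0
print(outcome_evaluation(0.0, expected, 3.0))   # loss: disappointment plus regret
print(outcome_evaluation(10.0, expected, 3.0))  # win: elation, no regret term
```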
A. Townsend Peterson, Jorge Soberón, Richard G. Pearson, Robert P. Anderson, Enrique Martínez-Meyer, Miguel Nakamura, and Miguel Bastos Araújo
- Published in print: 2011
- Published Online: October 2017
- ISBN: 9780691136868
- eISBN: 9781400840670
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691136868.003.0009
- Subject: Biology, Ecology
This chapter describes a framework for selecting appropriate strategies for evaluating model performance and significance. It begins with a review of key concepts, focusing on how primary occurrence data can be presence-only, presence/background, presence/pseudoabsence, or presence/absence, as well as on factors that may contribute to apparent commission error. It then considers the availability of two pools of occurrence data: one for model calibration and another for evaluation of model predictions. It also discusses strategies for detecting overfitting or sensitivity to bias in model calibration, with particular emphasis on quantification of performance and tests of significance. Finally, it suggests directions for future research regarding model evaluation, highlighting areas in need of theoretical and/or methodological advances.
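As a hedged illustration of the two-pool idea described above (not the chapter's own procedure), the sketch below splits simulated presence records into calibration and evaluation pools, fits a simple presence/background classifier, and reports a threshold-independent AUC and a threshold-dependent omission rate. All data and names are hypothetical.

```python
# Minimal sketch: split occurrence records into calibration and evaluation
# pools, then score a fitted presence/background model on the held-out pool.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical environmental values at presence and background points.
X_presence = rng.normal(1.0, 1.0, size=(200, 4))
X_background = rng.normal(0.0, 1.0, size=(1000, 4))

# Two pools of presence data: one for calibration, one held out for evaluation.
idx = rng.permutation(len(X_presence))
calib, evaluation = idx[:150], idx[150:]

X_train = np.vstack([X_presence[calib], X_background])
y_train = np.concatenate([np.ones(len(calib)), np.zeros(len(X_background))])
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Threshold-independent evaluation: AUC of held-out presences vs. background
# (the same background sample is reused here purely for simplicity).
scores = model.predict_proba(np.vstack([X_presence[evaluation], X_background]))[:, 1]
labels = np.concatenate([np.ones(len(evaluation)), np.zeros(len(X_background))])
print("evaluation AUC:", roc_auc_score(labels, scores))

# Threshold-dependent evaluation: omission rate of held-out presences at a
# threshold chosen on the calibration pool (here, its 10th-percentile score).
threshold = np.percentile(model.predict_proba(X_presence[calib])[:, 1], 10)
omission = np.mean(model.predict_proba(X_presence[evaluation])[:, 1] < threshold)
print("omission rate at calibration threshold:", omission)
```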
Donna Harrington
- Published in print: 2008
- Published Online: January 2009
- ISBN: 9780195339888
- eISBN: 9780199863662
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195339888.003.0006
- Subject: Social Work, Research and Evaluation
This chapter discusses the information that should be included when presenting CFA results, including model specification, input data, model estimation, model evaluation, and substantive conclusions. Longitudinal measurement invariance and equivalent models are covered briefly, and multilevel confirmatory factor analysis models are also mentioned.
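For readers unfamiliar with the pieces being reported, the standard CFA measurement model and its implied covariance structure can be written as follows (a generic textbook formulation, not specific to this chapter): model specification fixes which loadings are free, estimation chooses the parameter values, and model evaluation compares the sample covariance matrix with the model-implied one.

```latex
% CFA measurement model and implied covariance structure: evaluation compares
% the sample covariance matrix S with \Sigma(\hat\theta).
x = \Lambda \xi + \delta,
\qquad
\Sigma(\theta) = \Lambda \Phi \Lambda' + \Theta_\delta
```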
A. Townsend Peterson, Jorge Soberón, Richard G. Pearson, Robert P. Anderson, Enrique Martínez-Meyer, Miguel Nakamura, and Miguel Bastos Araújo
- Published in print: 2011
- Published Online: October 2017
- ISBN: 9780691136868
- eISBN: 9781400840670
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691136868.003.0004
- Subject: Biology, Ecology
This chapter considers the practice of modeling ecological niches and estimating geographic distributions. It first introduces the general principles and definitions underlying ecological niche modeling and species distribution modeling, focusing on model calibration and evaluation, before discussing the principal steps to be followed in building niche models. The first task in building a niche model is to collate, process, error-check, and format the data that are necessary as input. Two types of data are required: primary occurrence data documenting known presences (and sometimes absences) of the species, and environmental predictors in the form of raster-format GIS layers summarizing scenopoetic variables that may (or may not) be involved in delineating the ecological requirements of the species. The next step is to use a modeling algorithm to characterize the species’ ecological niche as a function of the environmental variables, followed by model projection and evaluation, and finally model transferability.
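A minimal sketch of the data-formatting task described above, assuming a regular lon/lat grid in place of real GIS layers: extract environmental values at occurrence coordinates to build the predictor matrix that a modeling algorithm would then use. All grids, coordinates, and names are illustrative; real workflows would read raster layers with a library such as rasterio or GDAL.

```python
# Minimal sketch: sample raster-format environmental layers at occurrence points.
import numpy as np

# A regular lon/lat grid covering 0-10 deg E, 0-10 deg N at 0.1-degree cells.
n_rows, n_cols, cell = 100, 100, 0.1
x_min, y_max = 0.0, 10.0

# Two hypothetical scenopoetic layers (e.g. temperature, precipitation).
layers = {
    "temperature": np.random.default_rng(1).normal(20, 5, (n_rows, n_cols)),
    "precipitation": np.random.default_rng(2).gamma(2, 50, (n_rows, n_cols)),
}

# Known presences of the species as (lon, lat) pairs, made up for illustration.
occurrences = np.array([[2.35, 7.10], [5.02, 3.48], [8.77, 1.23]])

def extract(layer, lonlat):
    """Return the layer value of the grid cell containing each point."""
    cols = ((lonlat[:, 0] - x_min) / cell).astype(int)
    rows = ((y_max - lonlat[:, 1]) / cell).astype(int)
    return layer[rows, cols]

# Predictor matrix: one row per occurrence, one column per environmental layer.
X = np.column_stack([extract(v, occurrences) for v in layers.values()])
print(X.shape)  # (3, 2) -> ready to pass to a modeling algorithm
```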
David F. Hendry
- Published in print: 2000
- Published Online: November 2003
- ISBN: 9780198293545
- eISBN: 9780191596391
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198293542.003.0020
- Subject: Economics and Finance, Econometrics
The model class is summarized in terms of the properties of specific linear models, including cointegration and equilibrium (error) correction. Model evaluation is based on the associated information taxonomy, for sequential conditioning, exogeneity, invariance, constancy and recursivity, and encompassing. The theory of reduction provides a basis for general‐to‐specific modelling. Test types, modelling strategies, and system estimation are also briefly discussed.
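For reference, a common single-equation form of the equilibrium (error) correction model mentioned here can be written as follows (a standard textbook formulation, not a quotation from the chapter): short-run dynamics plus adjustment toward the cointegrating long-run relation.

```latex
% Single-equation equilibrium-correction model: \alpha > 0 pulls y back toward
% the long-run (cointegrating) relation y = \beta x.
\Delta y_t = \gamma_0 + \gamma_1 \Delta x_t
             - \alpha \bigl(y_{t-1} - \beta x_{t-1}\bigr) + \varepsilon_t,
\qquad \alpha > 0
```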
David F. Hendry
- Published in print: 1995
- Published Online: November 2003
- ISBN: 9780198283164
- eISBN: 9780191596384
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198283164.003.0011
- Subject: Economics and Finance, Econometrics
Linear system modelling is structured in 10 stages from the general to the specific. The dynamic statistical system is the maintained model, defined by the variables of interest, their distributions, whether they are modelled or non‐modelled, and their lag polynomials. An econometric model is a (possibly) simultaneous‐equations entity, which is intended to isolate autonomous, parsimonious relationships based on economic theory. That model must adequately characterize the data evidence and account for the results in the congruent statistical system. Model formulation, identification, estimation (using an estimator generating equation), encompassing, and evaluation are considered.
David F. Hendry
- Published in print: 1995
- Published Online: November 2003
- ISBN: 9780198283164
- eISBN: 9780191596384
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/0198283164.001.0001
- Subject: Economics and Finance, Econometrics
This systematic and integrated framework for econometric modelling is organized in terms of three levels of knowledge: probability, estimation, and modelling. All necessary concepts of econometrics (including exogeneity and encompassing), models, processes, estimators, and inference procedures (centred on maximum likelihood) are discussed with solved examples and exercises. Practical problems in empirical modelling, such as model discovery, evaluation, and data mining are addressed, and illustrated using the software system PcGive. Background analyses cover matrix algebra, probability theory, multiple regression, stationary and non‐stationary stochastic processes, asymptotic distribution theory, Monte Carlo methods, numerical optimization, and macro‐econometric models. The reader will master the theory and practice of modelling non‐stationary (cointegrated) economic time series, based on a rigorous theory of reduction.
Timo Teräsvirta, Dag Tjøstheim, and W. J. Granger
- Published in print: 2010
- Published Online: May 2011
- ISBN: 9780199587148
- eISBN: 9780191595387
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199587148.003.0016
- Subject: Economics and Finance, Econometrics
The topic of this chapter is nonlinear model building. Building non‐parametric models is considered first, followed by building various types of parametric nonlinear models. The latter include smooth transition, switching regression, and artificial neural network models. The three stages of model building (specification, estimation, and evaluation) are illustrated by a number of empirical examples involving both economic and non‐economic time series and data sets.
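As a pointer to what "smooth transition" means here, the logistic smooth transition regression (LSTR) model is commonly written as follows (a standard textbook form, not reproduced from the chapter): a weighted combination of two linear regimes, with the weight moving smoothly from 0 to 1 in a transition variable.

```latex
% Logistic smooth transition regression: G interpolates between the regime
% \phi' z_t (G = 0) and the regime (\phi + \theta)' z_t (G = 1).
y_t = \phi' z_t + \theta' z_t \, G(\gamma, c, s_t) + \varepsilon_t,
\qquad
G(\gamma, c, s_t) = \bigl(1 + \exp\{-\gamma (s_t - c)\}\bigr)^{-1}, \quad \gamma > 0
```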
Qin Duo
- Published in print: 1997
- Published Online: November 2003
- ISBN: 9780198292876
- eISBN: 9780191596803
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198292872.003.0006
- Subject: Economics and Finance, History of Economic Thought, Econometrics
Addresses the issue of testing, and reveals some intrinsic problems pertaining to hypothesis testing beneath the achievements of formalizing econometrics. Theory verification through applied studies forms one of the main motives for formalizing methods of model estimation and identification, and the statistical theory of hypothesis testing was accepted without much dispute quite early as the technical vehicle to fulfil this desire. However, during the adoption of the theory into econometrics in the 1940s and 1950s, the achievable domain of verification turned out to be considerably reduced, as testing in econometrics proper gradually dwindled into part of the modelling procedure and pertained to model evaluation using statistical testing tools; in the applied field, empirical modellers took on the task of discriminating between and verifying economic theories against the model results, and carried this out in an ad hoc and often non‐sequitur manner. Describes how the desire to test diverged into model evaluation in econometric theory on the one hand, and economic theory verification in practice on the other, as econometric testing theory took shape. The story begins with the early period prior to the formative movement in the first section of the chapter; the following section looks at the period in which the theme of hypothesis testing was introduced, and the first test emerged in econometrics; the last two sections report, respectively, on how model testing in applied econometrics and test design in theoretical econometrics developed and moved apart.
Jennifer Castle and Neil Shephard (eds)
- Published in print: 2009
- Published Online: September 2009
- ISBN: 9780199237197
- eISBN: 9780191717314
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199237197.001.0001
- Subject: Economics and Finance, Econometrics
David F. Hendry is a seminal figure in modern econometrics. He has pioneered the LSE approach to econometrics, and his influence is extensive. This book is a collection of original research in time-series econometrics, both theoretical and applied, and reflects David's interests in econometric methodology. Many internationally renowned econometricians who have collaborated with Hendry or have been influenced by his research have contributed to this book, which provides a reflection on the recent advances in econometrics and considers the future progress for the methodology of econometrics. The book is broadly divided into five sections, including model selection, correlations, forecasting, methodology, and empirical applications, although the boundaries are certainly opaque. Central themes of the book include dynamic modelling and the properties of time series data, model selection and model evaluation, forecasting, policy analysis, exogeneity and causality, and encompassing. The contributions cover the full breadth of time series econometrics but all with the overarching theme of congruent econometric modelling using the coherent and comprehensive methodology that David has pioneered. The book assimilates scholarly work at the frontier of academic research, encapsulating the current thinking in modern day econometrics and reflecting the intellectual impact that David has had, and will continue to have, on the profession.
Célia Martinie, Philippe Palanque, and Camille Fayollas
- Published in print: 2018
- Published Online: March 2018
- ISBN: 9780198799603
- eISBN: 9780191839832
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198799603.003.0010
- Subject: Mathematics, Logic / Computer Science / Mathematical Philosophy
Arguments to support the validity of most contributions in the field of human–computer interaction are based on detailed results of empirical studies involving cohorts of tested users confronted with a set of tasks performed on a prototype version of an interactive system. This chapter presents how the Interactive Cooperative Objects (ICO) formal models of the entire interactive system can support predictive and summative performance evaluation activities by exploiting the models. Predictive performance evaluation is supported by ICO formal models of interactive systems enriched with perceptive, cognitive, and motoric information about the users. Summative usability evaluation is addressed at the level of the software system, which is able to exhaustively log all the user actions performed on the interactive system. The articulation of these two evaluation approaches is demonstrated on a case study from the avionics domain, with a step-by-step tutorial on how to apply the approach.
Paul Morris and Bob Adamson
- Published in print: 2010
- Published Online: May 2013
- ISBN: 9789888028016
- eISBN: 9789888180257
- Item type: chapter
- Publisher: Hong Kong University Press
- DOI: 10.5790/hongkong/9789888028016.003.0009
- Subject: Education, Educational Policy and Politics
This chapter first analyzes an approach to a complete evaluation of all aspects of a curriculum. Then, it moves on to examine various types of evaluation which focus on specific components of the curriculum.
Damaris Zurell and Jan O. Engler
- Published in print: 2019
- Published Online: September 2019
- ISBN: 9780198824268
- eISBN: 9780191862809
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198824268.003.0006
- Subject: Biology, Ornithology, Animal Biology
Impact assessments increasingly rely on models to project the potential impacts of climate change on species distributions. Ecological niche models have become established as an efficient and widely used method for interpolating (and sometimes extrapolating) species’ distributions. They use statistical and machine-learning approaches to relate species’ observations to environmental predictor variables and identify the main environmental determinants of species’ ranges. Based on this estimated species–environment relationship, the species’ potential distribution can be mapped in space (and time). In this chapter, we explain the concept and underlying assumptions of ecological niche models, describe the basic modelling steps using the silvereye (Zosterops lateralis) as a simple real-world example, identify potential sources of uncertainty in underlying data and in the model, and discuss potential limitations as well as latest developments and future perspectives of ecological niche models in a global change context.
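A minimal, hypothetical sketch of the basic modelling step described above (simulated data, not the chapter's silvereye example): relate presence/absence records to an environmental predictor with a quadratic logistic model, so the fitted response can be unimodal, then project the estimated suitability onto new environmental values (in practice, onto gridded climate layers for a map in space or time).

```python
# Illustrative sketch of the species-environment relationship in a niche model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
temperature = rng.uniform(0, 30, 500)                        # predictor at sites
true_suitability = np.exp(-((temperature - 18) ** 2) / 30)   # unimodal "truth"
presence = rng.binomial(1, true_suitability)                 # observed 0/1 records

X = np.column_stack([temperature, temperature ** 2])         # quadratic terms
model = LogisticRegression(max_iter=1000).fit(X, presence)

# Project the fitted relationship onto an environmental gradient.
grid = np.linspace(0, 30, 7)
suitability = model.predict_proba(np.column_stack([grid, grid ** 2]))[:, 1]
print(np.round(suitability, 2))
```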
Jeasik Cho
- Published in print: 2017
- Published Online: October 2017
- ISBN: 9780199330010
- eISBN: 9780190490089
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199330010.003.0003
- Subject: Psychology, Social Psychology
This chapter discusses a number of practical evaluation tools used by qualitative research journals. First, the chapter discusses the American Educational Research Association’s “Standards for Reporting on Empirical Social Science Research,” which emphasizes warrantability and transparency. Second, many ideas on reviewing qualitative research are briefly presented. Third, current qualitative research journals that use and those that do not use specific evaluation tools are discussed. The reasons why some journal editors do not use such specific evaluation tools are identified: trust, freedom, the nature of qualitative research, and “it works.” Other journals that use specific review guides are analyzed. This chapter suggests a holistic way of understanding the evaluation of qualitative research by taking three elements (core values, research processes, and key dimensions) into consideration. The seven most commonly used evaluation criteria are discussed: importance to the field, qualities, writing, data analysis, theoretical framework, participant, and impact/readership.