John E. Till and Helen Grogan (eds)
- Published in print:
- 2008
- Published Online:
- September 2008
- ISBN:
- 9780195127270
- eISBN:
- 9780199869121
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195127270.001.0001
- Subject:
- Biology, Ecology, Biochemistry / Molecular Biology
This book is an update and major revision to Radiological Assessment: A Textbook on Environmental Dose Analysis published by the U.S. Nuclear Regulatory Commission in 1983. It focuses on risk to the public because decision makers typically use that endpoint to allocate resources and resolve issues. Chapters in the book explain the fundamental steps of radiological assessment, and they are organized in the sequence that would typically be used when undertaking an analysis of risk. The key components of radiological risk assessment discussed include source terms, atmospheric transport, surface water transport, groundwater transport, terrestrial and aquatic food chain pathways, estimating exposures, conversion of intakes and exposures to dose and risk, uncertainty analysis, environmental epidemiology, and model validation. A chapter on regulations related to environmental exposure is also included. Contributors to the book are well-known experts from the various disciplines addressed.
E. J. Milner-Gulland and Marcus Rowcliffe
- Published in print:
- 2007
- Published Online:
- January 2008
- ISBN:
- 9780198530367
- eISBN:
- 9780191713095
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198530367.003.0005
- Subject:
- Biology, Biodiversity / Conservation Biology
The effective management of natural resource use requires a mechanistic understanding of the system, not just correlations between variables of the kind discussed in Chapter 4. Understanding may simply be in the form of a conceptual model, but is much more powerful when formalized as a mathematical model. This chapter introduces methods for building a model of the system that can be used to predict future sustainability with or without management interventions. The emphasis is on the simulation of biological and bioeconomic dynamics, for which step-by-step worked examples are given. These examples start with conceptual models; show how to formalize these as mathematical equations and build them into computer code; test model sensitivity, validity, and alternative structures; and finally explore future scenarios. Methods for modelling stochasticity and human behaviour are also introduced, as well as the use of Bayesian methods for understanding dynamic systems and exploring management interventions.
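The simulation workflow this abstract describes can be illustrated with a minimal sketch: a harvested population following logistic growth with lognormal environmental noise, projected under alternative harvest quotas. The model form, parameter values, and quotas below are illustrative inventions, not taken from the chapter:

```python
import random

def simulate(r=0.3, K=1000.0, quota=50.0, n0=500.0,
             years=50, sigma=0.1, seed=1):
    """Project a harvested stock under logistic growth with
    multiplicative lognormal environmental noise (toy values)."""
    random.seed(seed)
    n = n0
    trajectory = [n]
    for _ in range(years):
        growth = r * n * (1 - n / K)            # logistic increment
        noise = random.lognormvariate(0, sigma)  # environmental shock
        n = max(0.0, (n + growth) * noise - quota)
        trajectory.append(n)
    return trajectory

# Scenario exploration: how does the final stock depend on the quota?
for q in (30.0, 80.0):
    print(f"quota={q:.0f}: final stock {simulate(quota=q)[-1]:.0f}")
```

Re-seeding inside `simulate` gives every scenario the same shock sequence, so differences between runs reflect the quota alone, which is the usual design for scenario comparison.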
Magy Seif El-Nasr, Truong Huy Nguyen Dinh, Alessandro Canossa, and Anders Drachen
- Published in print:
- 2021
- Published Online:
- November 2021
- ISBN:
- 9780192897879
- eISBN:
- 9780191919466
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780192897879.003.0008
- Subject:
- Computer Science, Human-Computer Interaction, Game Studies
This chapter focuses on two specific steps in the machine learning process: model validation and model evaluation. Specifically, model validation is the step used to tune the hyperparameters of the model. Here, we often integrate a cross-validation process, which we discuss in detail in this chapter. Model evaluation, on the other hand, is the process of testing the performance of the model using unseen data, the test dataset. These processes are used to ensure that the model we developed through the algorithms discussed in Chapter 6 is reliable, given our data. The chapter includes labs to give you a practical introduction to these steps, given the modeling techniques discussed in the last chapter.
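The validation/evaluation split can be sketched in a few lines. The example below tunes a single hyperparameter (a decision threshold, standing in for the hyperparameters the chapter discusses) by k-fold cross-validation on the training data, then measures performance exactly once on a held-out test set. The data and the threshold rule are hypothetical:

```python
import random

def accuracy(threshold, data):
    """Fraction of points where the rule `x > threshold` matches the label."""
    return sum((x > threshold) == y for x, y in data) / len(data)

def k_fold_select(train, thresholds, k=5):
    """Model validation: choose the threshold with the best mean
    accuracy over k cross-validation folds of the training data."""
    folds = [train[i::k] for i in range(k)]
    def cv_score(t):
        # A real model would be refit on the k-1 remaining folds each
        # round; this fixed rule has no fitting step beyond t itself.
        return sum(accuracy(t, folds[i]) for i in range(k)) / k
    return max(thresholds, key=cv_score)

random.seed(0)
xs = [random.uniform(0, 6) for _ in range(300)]
points = [(x, x + random.gauss(0, 0.5) > 3) for x in xs]  # noisy labels
train, test = points[:240], points[240:]   # test set stays unseen
best = k_fold_select(train, thresholds=[1, 2, 3, 4, 5])
print("chosen threshold:", best)
# Model evaluation: a single pass over the unseen test set.
print("test accuracy:", round(accuracy(best, test), 2))
```

The key discipline the chapter emphasizes is visible in the structure: the test set never participates in choosing the hyperparameter.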
In‐Koo Cho and Kenneth Kasa
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199666126
- eISBN:
- 9780191749278
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199666126.003.0006
- Subject:
- Economics and Finance, Macro- and Monetary Economics
This chapter studies adaptive learning with multiple models. An agent is aware of potential model misspecification, and tries to detect it, in real time, using an econometric specification test. If the current model passes the test, it is used to construct an optimal policy. If it fails the test, a new model is randomly selected from a fixed set of models. As the rate of coefficient updating decreases, one model becomes dominant, and is used ‘almost always’. Dominant models can be characterized using the tools of large deviations theory. The analysis is applied to a standard cobweb model.
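A stylized version of this test-and-switch loop can be sketched as follows. The "specification test" here is a simple t-test on recent forecast errors, a crude stand-in for the econometric test the chapter studies; the candidate models are constant forecasts, and all parameters are invented for illustration:

```python
import random

# Stylized cobweb economy: price depends on the agents' expectation,
# p_t = A - B * belief_t + shock (illustrative parameters).
A, B, SIGMA = 10.0, 0.5, 0.3
GAIN, WINDOW, CRIT = 0.02, 30, 2.0

def run(seed=0, periods=2000):
    random.seed(seed)
    # Fixed set of candidate models, here just constant forecasts.
    candidates = [2.0, 5.0, 8.0]
    belief = random.choice(candidates)
    errors, switches = [], 0
    for _ in range(periods):
        price = A - B * belief + random.gauss(0, SIGMA)
        e = price - belief
        errors = (errors + [e])[-WINDOW:]
        belief += GAIN * e            # constant-gain coefficient updating
        # Specification test: reject when the recent mean forecast
        # error is large relative to its standard error.
        if len(errors) == WINDOW:
            mean = sum(errors) / WINDOW
            var = sum((x - mean) ** 2 for x in errors) / (WINDOW - 1)
            se = (var / WINDOW) ** 0.5
            if se > 0 and abs(mean) / se > CRIT:
                belief = random.choice(candidates)  # draw a new model
                errors, switches = [], switches + 1
    return belief, switches

belief, switches = run()
print(f"final belief {belief:.2f}, switches {switches}")
```

With a small gain, the belief spends most of its time near the self-confirming value A/(1+B), echoing (very loosely) the chapter's result that one model comes to be used ‘almost always’.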
Steven F. Railsback and Bret C. Harvey (eds)
- Published in print:
- 2020
- Published Online:
- January 2021
- ISBN:
- 9780691195285
- eISBN:
- 9780691195377
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691195285.003.0010
- Subject:
- Biology, Ecology
This chapter assesses how state- and prediction-based theory (SPT), as a nontraditional approach to modeling adaptive behavior embedded in a nontraditional population modeling approach, faces a significant credibility challenge. This challenge is complicated by the many ways that models can gain or lose credibility, and widespread confusion surrounding the term "model validation". The chapter then addresses the task of testing, improving, and establishing the credibility of individual-based models (IBMs) that contain adaptive individual behavior. The experience with the trout and salmon models provides the primary basis for this discussion, but other long-term modeling projects have produced similar experiences. The chapter summarizes some of the issues and challenges that typically arise and how they have been dealt with, before presenting lessons learned from two decades of empirical and simulation studies addressing credibility of the salmonid models.
Joseph A. Veech
- Published in print:
- 2021
- Published Online:
- February 2021
- ISBN:
- 9780198829287
- eISBN:
- 9780191868078
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198829287.003.0010
- Subject:
- Biology, Ecology, Biomathematics / Statistics and Data Analysis / Complexity Studies
There are several additional statistical procedures that can be conducted after a habitat analysis. The statistical model produced by a habitat analysis can be assessed for fit to the data. Model fit describes how well the predictor variables explain the variance in the response variable, typically species presence–absence or abundance. When more than one statistical model has been produced by the habitat analysis, these can be compared by a formal procedure called model comparison. This usually involves identifying the model with the lowest Akaike information criterion (AIC) value. If the statistical model is considered a predictive tool, then its predictive accuracy needs to be assessed. There are many metrics for assessing the predictive performance of a model and quantifying rates of correct and incorrect classification; the latter are error rates. Many of these metrics are based on the numbers of true positive, true negative, false positive, and false negative observations in an independent dataset. "True" and "false" refer to whether species presence–absence was correctly predicted or not. Predictive performance can also be assessed by constructing a receiver operating characteristic (ROC) curve and calculating area under the curve (AUC) values. High AUC values approaching 1 indicate good predictive performance, whereas a value near 0.5 indicates a poor model that predicts species presence–absence no better than a random guess.
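These metrics are straightforward to compute directly. The sketch below tallies the confusion counts for presence–absence predictions and computes AUC via its rank (Mann–Whitney) formulation, which equals the area under the ROC curve; the site labels and model scores are hypothetical:

```python
def confusion_counts(labels, predictions):
    """Tally true/false positives and negatives for presence-absence data."""
    tp = sum(1 for y, p in zip(labels, predictions) if y and p)
    tn = sum(1 for y, p in zip(labels, predictions) if not y and not p)
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    return tp, tn, fp, fn

def auc(labels, scores):
    """AUC as the probability that a randomly chosen presence scores
    higher than a randomly chosen absence (ties counted as half)."""
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for ten sites (1 = species present).
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.5, 0.2, 0.1, 0.35]
preds = [s >= 0.5 for s in scores]       # classification at a cutoff
tp, tn, fp, fn = confusion_counts(labels, preds)
print("sensitivity:", tp / (tp + fn))    # true positive rate
print("specificity:", tn / (tn + fp))    # true negative rate
print("AUC:", auc(labels, scores))
```

Note that the confusion counts depend on the chosen cutoff, while AUC summarizes ranking performance over all cutoffs, which is why the two are reported separately.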
Cathal O'Donoghue
- Published in print:
- 2021
- Published Online:
- September 2021
- ISBN:
- 9780198852872
- eISBN:
- 9780191887178
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198852872.003.0003
- Subject:
- Economics and Finance, Econometrics
This chapter discusses the development of a static microsimulation model for the purpose of undertaking an anti-poverty policy reform. Microsimulation models, which simulate the legislative detail of poverty-reduction instruments, can be used to make social-protection instruments more effective in this objective by helping to improve the targeting of these instruments. The chapter first describes the structure of the dataset required for microsimulation modelling. It then develops a theoretical understanding of the structure of social transfers and of the concept of a hypothetical microsimulation model. Although the model developed in this chapter abstracts from the population complexity described in Chapter 1, it allows us to understand the targeting and structure of anti-poverty policies in a simpler way. Some of the issues that arise in creating a base dataset for a microsimulation model are discussed. As validation, debugging, and error checking are paramount in model development, the use of a hypothetical family model for validation purposes is introduced. We define some concepts used to calculate the poverty efficiency of a social-protection instrument. Finally, the chapter undertakes a simulation of the development of a means-tested benefit.
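The targeting logic can be illustrated with a toy static simulation: a hypothetical means-tested benefit that tops incomes up toward a threshold, with poverty-reduction efficiency measured as the share of spending that closes the poverty gap. The households, benefit rules, and poverty line below are invented for illustration and are not the chapter's model:

```python
# Hypothetical equivalized household incomes.
incomes = [120.0, 240.0, 310.0, 90.0, 400.0, 180.0, 60.0, 520.0]
POVERTY_LINE = 200.0

def means_tested_benefit(income, threshold=200.0, withdrawal=1.0):
    """Benefit topping income up toward `threshold`, withdrawn at
    rate `withdrawal` per unit of own income (illustrative rules)."""
    return max(0.0, threshold - withdrawal * income)

def headcount(incomes, line):
    """Share of households below the poverty line."""
    return sum(1 for y in incomes if y < line) / len(incomes)

def poverty_gap(incomes, line):
    """Total shortfall below the line across households."""
    return sum(max(0.0, line - y) for y in incomes)

after = [y + means_tested_benefit(y) for y in incomes]
spend = sum(means_tested_benefit(y) for y in incomes)
# Poverty-reduction efficiency: gap closed per unit of spending.
efficiency = (poverty_gap(incomes, POVERTY_LINE)
              - poverty_gap(after, POVERTY_LINE)) / spend
print("headcount before:", headcount(incomes, POVERTY_LINE))
print("headcount after:", headcount(after, POVERTY_LINE))
print("efficiency:", efficiency)
```

With a 100% withdrawal rate every unit of spending goes to closing the gap (efficiency 1), at the cost of strong work disincentives; lowering `withdrawal` spreads payments to households above the line and lowers the efficiency measure, which is the targeting trade-off the chapter analyses.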
Thomas J. Sargent and Jouko Vilmunen
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199666126
- eISBN:
- 9780191749278
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199666126.003.0001
- Subject:
- Economics and Finance, Macro- and Monetary Economics
The logical structures that emerge from striving to put economic theory at the service of public policy are intrinsically beautiful in their internal structures and also useful in their applications. The contributors to this volume are leaders in patiently creating models and pushing forward technical frontiers. Revealed preferences show that they love equilibrium stochastic processes that are determined by Euler equations for modelling people's decisions about working, saving, investing, and learning, set within contexts designed to enlighten monetary and fiscal policy‐makers. Their papers focus on forces that can help understand macroeconomic outcomes, including learning, multiple equilibria, moral hazard, asymmetric information, heterogeneity, and constraints on commitment technologies. The contributions follow modern macroeconomics in using mathematics and statistics to understand behaviour in situations where there is uncertainty about how the future unfolds from the past. The contributions to this volume cover a wide range of issues in macroeconomics and macroeconomic policy. They thus seek to give a broader and more balanced view of the scope of modern macroeconomics.