John E. Till and Helen Grogan (eds)
- Published in print: 2008
- Published Online: September 2008
- ISBN: 9780195127270
- eISBN: 9780199869121
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195127270.001.0001
- Subject: Biology, Ecology, Biochemistry / Molecular Biology
This book is an update and major revision of Radiological Assessment: A Textbook on Environmental Dose Analysis, published by the U.S. Nuclear Regulatory Commission in 1983. It focuses on risk to the public because decision makers typically use that endpoint to allocate resources and resolve issues. Chapters in the book explain the fundamental steps of radiological assessment, and they are organized in the sequence that would typically be followed when undertaking an analysis of risk. The key components of radiological risk assessment discussed include source terms, atmospheric transport, surface water transport, groundwater transport, terrestrial and aquatic food chain pathways, estimating exposures, conversion of intakes and exposures to dose and risk, uncertainty analysis, environmental epidemiology, and model validation. A chapter on regulations related to environmental exposure is also included. Contributors to the book are well-known experts from the various disciplines addressed.
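At its simplest, the intake-to-dose-to-risk conversion mentioned in the abstract is a chain of multiplications: intake (Bq) times a dose coefficient (Sv/Bq) gives committed dose, and dose times a nominal risk coefficient gives an estimate of risk. The sketch below is a minimal illustration of that step only, not material from the book; the intake, dose coefficient, and risk coefficient values are illustrative placeholders.

```python
# Minimal sketch of the intake -> dose -> risk conversion step.
# All numeric values below are illustrative placeholders, not values from the book.

def committed_dose_sv(intake_bq: float, dose_coefficient_sv_per_bq: float) -> float:
    """Committed effective dose (Sv) from an ingestion intake (Bq)."""
    return intake_bq * dose_coefficient_sv_per_bq

def nominal_risk(dose_sv: float, risk_coefficient_per_sv: float) -> float:
    """Nominal excess lifetime risk corresponding to an effective dose (Sv)."""
    return dose_sv * risk_coefficient_per_sv

if __name__ == "__main__":
    intake_bq = 5.0e3      # hypothetical annual ingestion intake
    dose_coeff = 1.3e-8    # illustrative ingestion dose coefficient (Sv/Bq)
    risk_coeff = 5.7e-2    # illustrative nominal risk coefficient (per Sv)

    dose = committed_dose_sv(intake_bq, dose_coeff)
    risk = nominal_risk(dose, risk_coeff)
    print(f"dose = {dose:.2e} Sv, nominal risk = {risk:.2e}")
```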
Steven F. Railsback and Bret C. Harvey (eds)
- Published in print: 2020
- Published Online: January 2021
- ISBN: 9780691195285
- eISBN: 9780691195377
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691195285.003.0010
- Subject: Biology, Ecology
This chapter assesses how state- and prediction-based theory (SPT), as a nontraditional approach to modeling adaptive behavior embedded in a nontraditional population modeling approach, faces a significant credibility challenge. This challenge is complicated by the many ways that models can gain or lose credibility, and by widespread confusion surrounding the term "model validation." The chapter then addresses the task of testing, improving, and establishing the credibility of individual-based models (IBMs) that contain adaptive individual behavior. The experience with the trout and salmon models provides the primary basis for this discussion, but other long-term modeling projects have produced similar experiences. The chapter summarizes some of the issues and challenges that typically arise and how they have been dealt with, before presenting lessons learned from two decades of empirical and simulation studies addressing the credibility of the salmonid models.
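One common style of credibility check for simulation models, consistent with the validation theme of this chapter but not drawn from it, is to run replicate simulations and ask whether an observed summary pattern falls within the range the model reproduces. The sketch below uses a hypothetical stand-in model; run_model_replicate, its toy survival rule, and the observed value are all assumptions for illustration, not the trout and salmon IBMs.

```python
# Minimal sketch of a pattern-reproduction credibility check:
# run replicate simulations, summarize one output pattern, and test whether
# an observed value lies within the simulated range.

import random
import statistics

def run_model_replicate(seed: int) -> float:
    """Hypothetical stand-in for one model run; returns a summary output,
    e.g. mean body length of surviving individuals."""
    rng = random.Random(seed)
    lengths = [rng.gauss(mu=120.0, sigma=15.0) for _ in range(200)]
    survivors = [x for x in lengths if x > 100.0]   # toy survival rule
    return statistics.mean(survivors)

def pattern_check(observed: float, n_replicates: int = 100) -> bool:
    """True if the observed summary statistic lies within the simulated range."""
    outputs = [run_model_replicate(seed) for seed in range(n_replicates)]
    return min(outputs) <= observed <= max(outputs)

if __name__ == "__main__":
    observed_mean_length = 123.0   # hypothetical field observation
    print("pattern reproduced:", pattern_check(observed_mean_length))
```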
Joseph A. Veech
- Published in print: 2021
- Published Online: February 2021
- ISBN: 9780198829287
- eISBN: 9780191868078
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198829287.003.0010
- Subject: Biology, Ecology, Biomathematics / Statistics and Data Analysis / Complexity Studies
There are several additional statistical procedures that can be conducted after a habitat analysis. The statistical model produced by a habitat analysis can be assessed for fit to the data. Model fit describes how well the predictor variables explain the variance in the response variable, typically species presence–absence or abundance. When more than one statistical model has been produced by the habitat analysis, these can be compared by a formal procedure called model comparison. This usually involves identifying the model with the lowest Akaike information criterion (AIC) value. If the statistical model is considered a predictive tool, then its predictive accuracy needs to be assessed. There are many metrics for assessing the predictive performance of a model and quantifying rates of correct and incorrect classification; the latter are error rates. Many of these metrics are based on the numbers of true positive, true negative, false positive, and false negative observations in an independent dataset. "True" and "false" refer to whether species presence–absence was correctly predicted or not. Predictive performance can also be assessed by constructing a receiver operating characteristic (ROC) curve and calculating area under the curve (AUC) values. High AUC values approaching 1 indicate good predictive performance, whereas a value near 0.5 indicates a poor model that predicts species presence–absence no better than a random guess.
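Below is a minimal sketch of the post-analysis steps described in the abstract, assuming a fitted habitat model has already produced AIC values and predicted probabilities of presence for an independent test dataset. The variable names, example numbers, and the 0.5 classification threshold are illustrative choices rather than prescriptions from the chapter; scikit-learn's roc_auc_score is used for the AUC.

```python
# Minimal sketch: compare candidate models by AIC, then assess predictive
# performance with confusion-matrix rates and ROC AUC on independent data.
# Example data and the 0.5 threshold are illustrative only.

import numpy as np
from sklearn.metrics import roc_auc_score

def delta_aic(aic_values):
    """AIC differences relative to the best (lowest-AIC) model."""
    aic = np.asarray(aic_values, dtype=float)
    return aic - aic.min()

def classification_rates(y_true, y_prob, threshold=0.5):
    """True/false positive and negative counts and the derived rates."""
    y_true = np.asarray(y_true, dtype=int)
    y_pred = (np.asarray(y_prob, dtype=float) >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn,
            "sensitivity": sensitivity, "specificity": specificity}

if __name__ == "__main__":
    # Hypothetical independent test data: observed presence-absence and
    # predicted probabilities of presence from a fitted habitat model.
    observed = [1, 0, 1, 1, 0, 0, 1, 0]
    predicted = [0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1]

    print("delta AIC:", delta_aic([210.4, 212.9, 215.1]))
    print(classification_rates(observed, predicted))
    # AUC near 1 indicates good discrimination; near 0.5 is no better than random.
    print("AUC:", roc_auc_score(observed, predicted))
```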