Željko Ivezić, Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691151687
- eISBN:
- 9781400848911
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691151687.003.0005
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter introduces the most important aspects of Bayesian statistical inference and techniques for performing such calculations in practice. It first reviews the basic steps in Bayesian inference in early sections of the chapter, and then illustrates them with several examples in sections that follow. Numerical techniques for solving complex problems are next discussed, and the final section provides a summary of pros and cons for classical and Bayesian methods. It argues that most users of Bayesian estimation methods are likely to use a mix of Bayesian and frequentist tools. The reverse is also true—frequentist data analysts, even if they stay formally within the frequentist framework, are often influenced by “Bayesian thinking,” referring to “priors” and “posteriors.” The most advisable position is to know both paradigms well, in order to make informed judgments about which tools to apply in which situations.
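The prior–likelihood–posterior workflow summarized above can be sketched with a minimal conjugate example (illustrative only; the simulated data and the weak Gaussian prior are assumptions, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=20)  # simulated measurements

sigma = 1.0             # known measurement noise
mu0, tau0 = 0.0, 10.0   # weak Gaussian prior on the unknown mean mu

# Conjugate update: with a Gaussian prior and Gaussian likelihood,
# the posterior for mu is also Gaussian.
n = len(data)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)

# Frequentist point estimate for comparison.
mle = data.mean()
print(post_mean, post_var**0.5, mle)
```

With a weak prior the Bayesian and frequentist estimates nearly coincide, which is one reason the mixed practice described above is workable.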
E. J. Milner-Gulland and Marcus Rowcliffe
- Published in print:
- 2007
- Published Online:
- January 2008
- ISBN:
- 9780198530367
- eISBN:
- 9780191713095
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198530367.003.0005
- Subject:
- Biology, Biodiversity / Conservation Biology
The effective management of natural resource use requires a mechanistic understanding of the system, not just correlations between variables of the kind discussed in Chapter 4. Understanding may simply be in the form of a conceptual model, but is much more powerful when formalized as a mathematical model. This chapter introduces methods for building a model of the system that can be used to predict future sustainability with or without management interventions. The emphasis is on the simulation of biological and bioeconomic dynamics, for which step-by-step worked examples are given. These examples start with conceptual models; then show how to formalize these as mathematical equations and build them into computer code; test model sensitivity, validity, and alternative structures; and finally explore future scenarios. Methods for modelling stochasticity and human behaviour are also introduced, as well as the use of Bayesian methods for understanding dynamic systems and exploring management interventions.
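A simulation of the kind described can be sketched in a few lines (a hypothetical stochastic logistic-growth model with a fixed annual harvest; all parameter values are invented for illustration):

```python
import numpy as np

def simulate(harvest, years=50, r=0.3, K=1000.0, n0=500.0, cv=0.1, seed=1):
    """Stochastic logistic growth with a fixed annual offtake."""
    rng = np.random.default_rng(seed)
    n = n0
    traj = [n]
    for _ in range(years):
        growth = r * n * (1.0 - n / K)
        # Environmental stochasticity as multiplicative noise on growth.
        n = max(n + growth * rng.normal(1.0, cv) - harvest, 0.0)
        traj.append(n)
    return np.array(traj)

sustainable = simulate(harvest=0.0)     # population settles near K
overharvest = simulate(harvest=120.0)   # offtake exceeds rK/4, so collapse
print(sustainable[-1], overharvest[-1])
```

Comparing trajectories under different harvest rules is the simplest form of the scenario exploration the chapter develops.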
N. Thompson Hobbs and Mevin B. Hooten
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691159287
- eISBN:
- 9781400866557
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691159287.003.0006
- Subject:
- Biology, Ecology
This chapter seeks to explain hierarchical models and how they differ from simple Bayesian models and to illustrate building hierarchical models using mathematically correct expressions. It begins with the definition of hierarchical models. Next, the chapter introduces four general classes of hierarchical models that have broad application in ecology. These classes can be used individually or in combination to attack virtually any research problem. Examples are used to show how to draw Bayesian networks that portray stochastic relationships between observed and unobserved quantities. The chapter furthermore shows how to use network drawings as a guide for writing posterior and joint distributions.
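In the bracket notation the book uses, the generic factoring this chapter teaches (a schematic, not a specific model from the text) takes the following form for data y, latent state z, and parameters θ:

```latex
[z, \theta_d, \theta_p \mid y] \propto
\underbrace{[y \mid z, \theta_d]}_{\text{data model}} \,
\underbrace{[z \mid \theta_p]}_{\text{process model}} \,
\underbrace{[\theta_d]\,[\theta_p]}_{\text{priors}}
```

Each factor corresponds to a node-and-arrow fragment of the Bayesian network drawing.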
David Lagnado
- Published in print:
- 2011
- Published Online:
- January 2013
- ISBN:
- 9780197264843
- eISBN:
- 9780191754050
- Item type:
- chapter
- Publisher:
- British Academy
- DOI:
- 10.5871/bacad/9780197264843.003.0007
- Subject:
- Sociology, Methodology and Statistics
This chapter argues that people reason about legal evidence using small-scale qualitative networks. These cognitive networks are typically qualitative and incomplete, and based on people's causal beliefs about the specifics of the case as well as the workings of the physical and social world in general. A key feature of these networks is their ability to represent qualitative relations between hypotheses and evidence, allowing reasoners to capture the concepts of dependency and relevance critical in legal contexts. In support of this claim, the chapter introduces some novel empirical and formal work on alibi evidence, showing that people's reasoning conforms to the dictates of a qualitative Bayesian model. However, people's inferences do not always conform to Bayesian prescripts. Empirical studies are also discussed in which people over-extend the discredit of one item of evidence to other unrelated items. This bias is explained in terms of the propensity to group positive and negative evidence separately and the use of coherence-based inference mechanisms. It is argued that these cognitive processes are a natural response to deal with the complexity of legal evidence.
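A minimal quantitative counterpart to these qualitative networks is a single Bayesian update on an alibi node (all numbers below are hypothetical, chosen only to illustrate the direction of the inference, not data from the chapter):

```python
# Hypothetical prior belief in guilt and likelihoods of a discredited
# alibi under guilt and under innocence.
p_guilt = 0.30
p_discredit_given_guilt = 0.60
p_discredit_given_innocent = 0.10

# Bayes' rule, with the marginal from the law of total probability.
p_discredit = (p_discredit_given_guilt * p_guilt
               + p_discredit_given_innocent * (1.0 - p_guilt))
posterior_guilt = p_discredit_given_guilt * p_guilt / p_discredit
print(posterior_guilt)  # belief in guilt rises from 0.30 to 0.72
```

A normative model confines the discredit to this one evidence node; the over-extension bias described above corresponds to also downgrading unrelated items.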
N. Thompson Hobbs and Mevin B. Hooten
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691159287
- eISBN:
- 9781400866557
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691159287.003.0005
- Subject:
- Biology, Ecology
This chapter lays out the basic principles of Bayesian inference, building on the concepts of probability developed in Chapter 3. It seeks to use the rules of probability to show how Bayes' theorem works, by making use of the conditional rule of probability and the law of total probability. The chapter begins with the central, underpinning tenet of the Bayesian view: the world can be divided into quantities that are observed and quantities that are unobserved. Unobserved quantities include parameters in models, latent states predicted by models, missing data, effect sizes, future states, and data before they are observed. We wish to learn about these quantities using observations. The Bayesian framework for achieving that understanding is applied in exactly the same way regardless of the specifics of the research problem at hand or the nature of the unobserved quantities.
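The mechanics described here, the conditional rule plus the law of total probability, can be made concrete with a discrete unobserved quantity (a toy example with invented numbers, not one of the book's):

```python
import numpy as np
from math import comb

# Unobserved quantity: a survival probability restricted to three values.
phi = np.array([0.2, 0.5, 0.8])
prior = np.array([1/3, 1/3, 1/3])

# Observed quantity: y = 7 survivors out of n = 10 marked animals.
n, y = 10, 7
likelihood = np.array([comb(n, y) * p**y * (1.0 - p)**(n - y) for p in phi])

# The law of total probability gives the marginal [y], the normalizer.
marginal = (likelihood * prior).sum()
posterior = likelihood * prior / marginal
print(posterior)
```

The posterior puts most of its weight on phi = 0.8, the value most consistent with the observations.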
N. Thompson Hobbs and Mevin B. Hooten
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691159287
- eISBN:
- 9781400866557
- Item type:
- book
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691159287.001.0001
- Subject:
- Biology, Ecology
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This book provides a comprehensive and accessible introduction to the latest Bayesian methods. It emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach, and is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. The book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This book enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management.
N. Thompson Hobbs and Mevin B. Hooten
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691159287
- eISBN:
- 9781400866557
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691159287.003.0010
- Subject:
- Biology, Ecology
This chapter offers a general set of steps for writing models to assist the researcher in formulating their own approach to the Bayesian model. The crucial skill of specifying models is often neglected in statistical texts in general and texts on Bayesian modeling in particular. The central importance of model specification also motivates this chapter. The overarching challenge in building models is to specify the components of the posterior distribution and the joint distribution and to factor the joint distribution into sensible parts. This chapter first lays out a framework for doing just that, albeit in somewhat abstract terms, before moving on to a more concrete example—the effects of grazing by livestock and wild ungulates on structure and function of a sagebrush steppe ecosystem.
Ladan Shams and Ulrik Beierholm
- Published in print:
- 2011
- Published Online:
- September 2012
- ISBN:
- 9780195387247
- eISBN:
- 9780199918379
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195387247.003.0013
- Subject:
- Psychology, Cognitive Neuroscience, Cognitive Psychology
This chapter first discusses experimental findings showing that multisensory perception encompasses a spectrum of phenomena ranging from full integration (or fusion), to partial integration, to complete segregation. Next, it describes two Bayesian causal-inference models that can account for the entire range of combinations of two or more sensory cues. It shows that one of these models, which is a hierarchical Bayesian model, is a special form of the other one (which is a nonhierarchical model). It then compares the predictions of these models with human data in multiple experiments and shows that Bayesian causal-inference models can account for the human data remarkably well. Finally, a study is presented that investigates the stability of priors in the face of drastic change in sensory conditions.
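For two Gaussian cues, the causal-inference posterior has a compact closed form (this sketch follows the widely used Körding-style formulation with a zero-mean Gaussian prior over source location; the parameter values are illustrative):

```python
import numpy as np

def p_common(x1, x2, sigma1, sigma2, sigma_p, p_c=0.5):
    """Posterior probability that cues x1, x2 share a single cause."""
    # Likelihood of the cue pair under one shared source, with the
    # source location integrated out analytically.
    v = (sigma1**2 * sigma2**2 + sigma1**2 * sigma_p**2
         + sigma2**2 * sigma_p**2)
    l1 = np.exp(-((x1 - x2)**2 * sigma_p**2 + x1**2 * sigma2**2
                  + x2**2 * sigma1**2) / (2.0 * v)) / (2.0 * np.pi * np.sqrt(v))
    # Likelihood under two independent sources.
    v1, v2 = sigma1**2 + sigma_p**2, sigma2**2 + sigma_p**2
    l2 = (np.exp(-x1**2 / (2.0 * v1) - x2**2 / (2.0 * v2))
          / (2.0 * np.pi * np.sqrt(v1 * v2)))
    return l1 * p_c / (l1 * p_c + l2 * (1.0 - p_c))

print(p_common(0.0, 0.5, 1.0, 1.0, 10.0))  # nearby cues: fusion likely
print(p_common(0.0, 8.0, 1.0, 1.0, 10.0))  # distant cues: segregation
```

Partial integration arises when the final estimate averages the fused and segregated estimates with these posterior weights.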
Dirk U. Pfeiffer, Timothy P. Robinson, Mark Stevenson, Kim B. Stevens, David J. Rogers, and Archie C. A. Clements
- Published in print:
- 2008
- Published Online:
- September 2008
- ISBN:
- 9780198509882
- eISBN:
- 9780191709128
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198509882.003.0006
- Subject:
- Biology, Disease Ecology / Epidemiology
This chapter discusses spatial variation in risk. Epidemiological disease investigations should include an assessment of the spatial variation of disease risk, as this may provide important clues leading to causal explanations. The objective is to produce a map representation of the important spatial effects present in the data while simultaneously removing any distracting noise or extreme values. The resulting smoothed map should have increased precision without introducing significant bias. The method used to analyse the data depends on how they have been recorded. Smoothing based on kernel functions, smoothing based on Bayesian models, and spatial interpolation are discussed.
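Kernel-based risk smoothing can be sketched as a ratio-of-kernels estimator, smoothed cases over smoothed population (a common choice; the point data and bandwidth below are invented):

```python
import numpy as np

def kernel_risk(coords, cases, pop, grid, bandwidth):
    """Gaussian-kernel risk surface: smoothed cases / smoothed population."""
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        w = np.exp(-np.sum((coords - g)**2, axis=1) / (2.0 * bandwidth**2))
        out[i] = (w * cases).sum() / (w * pop).sum()
    return out

# Toy point data on the unit square with a high-risk cluster near (0.2, 0.2).
rng = np.random.default_rng(2)
coords = rng.uniform(0.0, 1.0, size=(200, 2))
pop = np.full(200, 100.0)
in_cluster = np.sum((coords - 0.2)**2, axis=1) < 0.05
cases = np.where(in_cluster, 20.0, 2.0)

grid = np.array([[0.2, 0.2], [0.8, 0.8]])
risk = kernel_risk(coords, cases, pop, grid, bandwidth=0.1)
print(risk)  # elevated at the cluster, near baseline elsewhere
```

The bandwidth controls the bias–variance trade-off: wider kernels remove more noise but blur genuine spatial structure.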
Claudia Tebaldi, Bruno Sansó, and Richard L. Smith
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199694587
- eISBN:
- 9780191731921
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199694587.003.0021
- Subject:
- Mathematics, Probability / Statistics
The use of projections from ensembles of climate models to characterize future climate change at regional scales has become the most widely adopted framework, as opposed to what was standard practice until just a few years ago, when a single model's projections constituted the basis for arguing about future changes and their impacts. It is believed that comparing and synthesizing simulations of multiple models is key to quantifying a best estimate of the future changes and their uncertainty. In the last few years there has been an explosion of literature in climate change science where mostly heuristic methods of synthesizing the output of multiple models have been proposed, and the statistical literature is showing more involvement by our community as well, of late. In this paper we give a brief overview of the mainstreams of research in this area and then focus on our recent work, through which we have proposed the framework of hierarchical Bayesian models to combine information from model simulations and observations, in order to derive posterior probabilities of temperature and precipitation change at regional scales.
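The simplest Bayesian synthesis underlying such hierarchical models is precision weighting of the ensemble members under a flat prior (the projection values and model uncertainties below are hypothetical):

```python
import numpy as np

# Regional temperature-change projections (deg C) from five climate
# models, with rough model-specific standard deviations (invented).
proj = np.array([1.8, 2.4, 2.1, 3.0, 2.6])
sd = np.array([0.5, 0.4, 0.6, 0.8, 0.5])

w = 1.0 / sd**2                        # precision weights
post_mean = (w * proj).sum() / w.sum() # posterior mean of the change
post_sd = 1.0 / np.sqrt(w.sum())       # posterior standard deviation
print(post_mean, post_sd)
```

The hierarchical models developed in the paper go further, for instance by estimating the model-specific precisions from data and anchoring the synthesis to observations, rather than fixing the weights in advance.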
N. Thompson Hobbs and Mevin B. Hooten
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691159287
- eISBN:
- 9781400866557
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691159287.003.0001
- Subject:
- Biology, Ecology
This chapter sketches an approach to inference applicable to an enormous range of research problems—one that can be understood from first principles and that can be unambiguously communicated to other scientists, managers, and policy makers. In doing research, it is important that one is able to ask important questions and provide compelling answers to them. Doing so depends on establishing a line of inference that extends from current thinking, theory, and questions to new insight qualified by uncertainty. This chapter introduces a highly general, flexible approach to establishing this line of inference. It offers a somewhat abstract overview of this framework followed by a concrete example to properly illustrate this framework.
Michael Springborn, Christopher Costello, and Peyton Ferrier
- Published in print:
- 2009
- Published Online:
- May 2010
- ISBN:
- 9780199560158
- eISBN:
- 9780191721557
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199560158.003.0010
- Subject:
- Biology, Ecology, Biodiversity / Conservation Biology
This chapter identifies variables from the port inspection setting that influence the gains to exploration via random inspections. It begins by describing a Bayesian learning model of trade-related non-indigenous species (NIS) risk in Section 10.2, which captures uncertainty over the true probability that trade from a given source is infested, and provides a framework for updating these beliefs as observations accrue. The formal inspection allocation decision problem is expressed in Section 10.3, where the computational demands of the central task are made clear. The analysis in Section 10.4 begins with the simplest possible nontrivial version of the problem. Elements of real-world complexity are subsequently added to build intuition for the ultimate task of exploring random inspection policy in an empirically-based setting. The workhorse method applies Monte Carlo simulation under various policies to characterize performance in terms of interceptions and to identify optimal choices for design and intensity of exploration.
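The belief-updating component can be sketched with a conjugate Beta-Bernoulli model of a source's infestation probability (the rate and inspection counts below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
true_rate = 0.15        # infestation probability, unknown to the inspector
a, b = 1.0, 1.0         # uniform Beta(1, 1) prior

for _ in range(200):    # 200 random inspections of this source
    infested = rng.random() < true_rate
    a += infested       # Beta posterior update: successes...
    b += not infested   # ...and failures

post_mean = a / (a + b)  # posterior mean belief after the inspections
print(post_mean)
```

Each inspection tightens the Beta posterior; that information gain is the value of exploration that must be traded off against targeting known-risky sources.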
Timothy E. Hanson and Alejandro Jara
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199695607
- eISBN:
- 9780191744167
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199695607.003.0030
- Subject:
- Mathematics, Probability / Statistics
This chapter compares two Bayesian nonparametric models that generalize the accelerated failure time model, based on recent work on probability models for predictor-dependent probability distributions. It begins by reviewing commonly used semiparametric survival models. It then discusses the Bayesian nonparametric priors used in the generalizations of the accelerated failure time (AFT) model. Next, the two generalizations of the accelerated failure time model are introduced and compared by means of real-life data analyses. The models correspond to generalizations of AFT models based on dependent extensions of the Dirichlet process (DP) and Polya tree (PT) priors. Advantages of the induced survival regression models include ease of interpretability and computational tractability.
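For reference, the standard AFT model that both generalizations extend can be written, for survival time T, covariates x, and baseline survival function S0, as:

```latex
\log T = \mathbf{x}^{\top}\boldsymbol{\beta} + \sigma\varepsilon,
\qquad
S(t \mid \mathbf{x}) = S_0\!\left(t\, e^{-\mathbf{x}^{\top}\boldsymbol{\beta}}\right)
```

Roughly speaking, the generalizations discussed here replace the fixed distribution of the error term (equivalently, of S0) with predictor-dependent DP or PT priors.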
L. Bernardinelli, C. Pascutto, C. Montomoli, and W. Gilks
- Published in print:
- 2001
- Published Online:
- September 2009
- ISBN:
- 9780198515326
- eISBN:
- 9780191723667
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198515326.003.0016
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter describes a Bayesian hierarchical model and applies it to a new dataset on insulin-dependent diabetes mellitus (IDDM) prevalence among 18-year-old males born in Sardinia between 1936 and 1973, using malaria prevalence in 1938–40 as the ecological covariate. It shows how to deal with the potential bias associated with using such a proxy by extending the Bayesian model to allow for covariate measurement error. It provides a method for choosing the hyperprior distributions for the spatial variation parameters of the model, and discusses the sensitivity of the results to different choices.
D. Clayton and L. Bernardinelli
- Published in print:
- 1996
- Published Online:
- September 2009
- ISBN:
- 9780192622358
- eISBN:
- 9780191723636
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192622358.003.0018
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
The construction of disease maps has been a central problem of descriptive epidemiology throughout its history. There are two main classes of disease maps: maps of standardized rates, and maps of statistical significance of the difference between risk in each area and the overall risk averaged over the entire map. This chapter focuses on the mapping problem with particular attention to smoothing maps of rates computed for small areas. The use of spatial correlation structure along with Bayesian concepts suggests ways of smoothing these maps.
Jan Sprenger and Stephan Hartmann
- Published in print:
- 2019
- Published Online:
- October 2019
- ISBN:
- 9780199672110
- eISBN:
- 9780191881671
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199672110.003.0010
- Subject:
- Philosophy, Philosophy of Science
Is simplicity a virtue of a good scientific theory, and are simpler theories more likely to be true or predictively successful? If so, how much should simplicity count vis-à-vis predictive accuracy? We address this question using Bayesian inference, focusing on the context of statistical model selection and an interpretation of simplicity via the degrees of freedom of a model. We rebut attempts to prove the epistemic value of simplicity by appeal to its particular role in Bayesian model selection strategies (e.g., the BIC or the MML). Instead, we show that Bayesian inference in the context of model selection is usually done in a philosophically eclectic, instrumental fashion that is attuned more to practical applications than to philosophical foundations. Thus, these techniques cannot justify a particular “appropriate weight of simplicity in model selection”.
Brian D. Haig
- Published in print:
- 2018
- Published Online:
- January 2018
- ISBN:
- 9780190222055
- eISBN:
- 9780190871734
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190222055.003.0004
- Subject:
- Psychology, Social Psychology
Chapter 4 focuses on Bayesian confirmation theory, a formal theory of reasoning based on probability theory. It deals with important, related, general ideas, such as rationality, confirmation, and inductive inference, including statistical inference, and also provides a selective discussion of Bayesian statistics. The chapter traces some of the broad contours of Bayesian confirmation theory and then presents an evaluation of a philosophy of Bayesian statistical practice. Psychology’s attitudes to Bayesianism are briefly discussed. The chapter considers whether Bayesianism provides an illuminating account of the approach to theory evaluation known as inference to the best explanation, and offers some broad recommendations for research practice.
- Published in print:
- 2009
- Published Online:
- June 2013
- ISBN:
- 9780804762694
- eISBN:
- 9780804772372
- Item type:
- chapter
- Publisher:
- Stanford University Press
- DOI:
- 10.11126/stanford/9780804762694.003.0002
- Subject:
- Political Science, Conflict Politics and Policy
This chapter outlines the existing literature on war termination. It demonstrates that there is a causal connection between the difficulty of ending wars and the fact that they are started and ended by politicians. Realpolitik, domestic politics, and bargaining models can be used to illustrate categories of war termination. Bayesian models of war termination have theorized more explicitly about lags in the updating process that can result in protracted stalemates. While these models show that a lag in updating can delay war termination, they do not elaborate on how states get beyond this lag. Ending a war requires settling with the enemy as well as at home. It is noted that policy stability can result from governing coalitions being ineffective at altering the status quo, even when such changes are necessary or desirable. Furthermore, a coalition shift can overpower the obstacles to peace and thus allow a war to end.
Bruce Walsh and Michael Lynch
- Published in print:
- 2018
- Published Online:
- September 2018
- ISBN:
- 9780198830870
- eISBN:
- 9780191868986
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198830870.003.0019
- Subject:
- Biology, Evolutionary Biology / Genetics, Biochemistry / Molecular Biology
When the full pedigree of individuals whose values (records) were used in the selection decisions during an experiment (or breeding program) is known, LS analysis can be replaced by mixed models and their Bayesian extensions. In this setting, REML can be used to estimate genetic variances and BLUP can be used to estimate the mean breeding value in any given generation. The latter allows genetic trends to be separated from environmental trends without the need for a control population. Under the infinitesimal model setting (wherein selection-induced allele-frequency changes are small during the course of the experiment), the use of the relationship matrix in a BLUP analysis accounts for drift, nonrandom mating, and linkage disequilibrium.
Scott L. Zeger, Francesca Dominici, Aidan Mcdermott, and Jonathan M. Samet
- Published in print:
- 2003
- Published Online:
- September 2009
- ISBN:
- 9780195146493
- eISBN:
- 9780199864928
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195146493.003.0010
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter illustrates the use of log-linear regression and hierarchical models to estimate the association of daily mortality with acute exposure to particulate air pollution. It focuses on multistage models of daily mortality data in the eighty-eight largest cities in the United States to illustrate the main ideas. These models have been used to quantify the risks of shorter-term exposure to particulate pollution and to address key causal questions.