José M. Bernardo, M. J. Bayarri, James O. Berger, A. P. Dawid, David Heckerman, Adrian F. M. Smith, and Mike West (eds)
- Published in print: 2011
- Published Online: January 2012
- ISBN: 9780199694587
- eISBN: 9780191731921
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199694587.001.0001
- Subject: Mathematics, Probability / Statistics
The Valencia International Meetings on Bayesian Statistics – established in 1979 and held every four years – have been the forum for a definitive overview of current concerns and activities in Bayesian statistics. These are the edited Proceedings of the Ninth meeting, and contain the invited papers, each followed by its discussion and a rejoinder by the author(s). In the tradition of the earlier editions, this volume encompasses an enormous range of theoretical and applied research, highlighting the breadth, vitality and impact of Bayesian thinking in interdisciplinary research across many fields, as well as the corresponding growth and vitality of core theory and methodology. The Valencia 9 invited papers cover a broad range of topics, including foundational and core theoretical issues in statistics, the continued development of new and refined computational methods for complex Bayesian modelling, substantive applications of flexible Bayesian modelling, and new developments in the theory and methodology of graphical modelling. They also describe advances in methodology for specific applied fields, including financial econometrics and portfolio decision making, public policy applications for drug surveillance, studies in the physical and environmental sciences, astronomy and astrophysics, climate change studies, molecular biosciences, statistical genetics, and stochastic dynamic networks in systems biology.
Željko Ivezić, Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print: 2014
- Published Online: October 2017
- ISBN: 9780691151687
- eISBN: 9781400848911
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691151687.003.0004
- Subject: Physics, Particle Physics / Astrophysics / Cosmology
This chapter introduces the main concepts of statistical inference, or drawing conclusions from data. There are three main types of inference: point estimation, confidence estimation, and hypothesis testing. Two major statistical paradigms address these questions: the classical, or frequentist, paradigm and the Bayesian paradigm. While most of statistics and machine learning is based on the classical paradigm, Bayesian techniques are being embraced by the statistical and scientific communities at an ever-increasing pace. The chapter begins with a short comparison of the classical and Bayesian paradigms, and then discusses the three main types of statistical inference from the classical point of view.
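The contrast the chapter draws can be made concrete with a toy calculation. The sketch below is not from the book; the counts and the uniform Beta(1, 1) prior are assumptions made only for illustration. It computes a frequentist point estimate, approximate confidence interval, and exact test for a binomial proportion, alongside the corresponding Bayesian posterior summary.

```python
import numpy as np
from scipy import stats

# Hypothetical data: k "successes" out of n trials.
k, n = 37, 100

# --- Classical (frequentist) inference ---
p_hat = k / n                                   # point estimate (MLE)
se = np.sqrt(p_hat * (1 - p_hat) / n)           # standard error
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)     # approximate 95% confidence interval
p_value = stats.binomtest(k, n, p=0.5).pvalue   # exact two-sided test of H0: p = 0.5

# --- Bayesian inference ---
# With a uniform Beta(1, 1) prior, the posterior is Beta(k + 1, n - k + 1).
posterior = stats.beta(k + 1, n - k + 1)
post_mean = posterior.mean()
cred = posterior.interval(0.95)                 # 95% equal-tailed credible interval

print(f"MLE {p_hat:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f}), p-value {p_value:.4f}")
print(f"posterior mean {post_mean:.3f}, 95% credible interval ({cred[0]:.3f}, {cred[1]:.3f})")
```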
Elliott Sober
- Published in print: 2006
- Published Online: September 2007
- ISBN: 9780199297306
- eISBN: 9780191713729
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199297306.003.0003
- Subject: Biology, Evolutionary Biology / Genetics
The use of a principle of parsimony in phylogenetic inference is both widespread and controversial. It is controversial because biologists, who view phylogenetic inference as first and foremost a statistical problem, have pressed the question of what one must assume about the evolutionary process if one is entitled to use parsimony in this way. They suspect, not just that parsimony makes assumptions about the evolutionary process, but that it makes highly specific assumptions that are often implausible. That it must make some assumptions seems clear to them because they are confident that the method of maximum parsimony must resemble the main statistical procedure used to make phylogenetic inferences: the method of maximum likelihood. Likelihoodists suspect that parsimony nonetheless involves an implicit model. The question for them is to discover what that model is. This chapter discusses parsimony's ostensive presuppositions by examining the relationship that exists between maximum likelihood and maximum parsimony among simple examples in which parsimony and likelihood disagree.
Michele Maggiore
- Published in print: 2007
- Published Online: January 2008
- ISBN: 9780198570745
- eISBN: 9780191717666
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198570745.003.0007
- Subject: Physics, Particle Physics / Astrophysics / Cosmology
This chapter deals with experimental aspects of gravitational waves (GWs). It defines spectral strain sensitivity, describes the detector's noise and the pattern functions that encode its angular sensitivity, and discusses various data analysis techniques for GWs. It also introduces the theory of matched filtering. A proper interpretation of the results obtained with matched filtering relies on notions of probability and statistics. These are discussed together with an introduction to the frequentist and the Bayesian frameworks. The reconstruction of the source parameters is discussed, and the general theory is then applied to different classes of signals, namely, bursts, periodic sources, coalescing binaries, and stochastic background.
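As a rough illustration of the matched-filtering idea mentioned in the abstract, the following time-domain sketch correlates a known waveform against noisy data and reports where the statistic peaks. The damped-sinusoid template, the injection amplitude, and the assumption of white Gaussian noise are made up for illustration; the book's treatment works in the frequency domain and weights by the detector's noise spectral density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "template": a short damped sinusoid, normalized to unit norm.
t = np.linspace(0.0, 1.0, 1000)
template = np.sin(2 * np.pi * 30 * t) * np.exp(-4 * t)
template /= np.linalg.norm(template)

# Synthetic data: the template injected at a known offset into white Gaussian noise.
n = 5000
data = rng.normal(scale=1.0, size=n)
offset = 2750
data[offset:offset + template.size] += 8.0 * template

# Matched filter for white noise: slide the template along the data and correlate.
# In 'valid' mode np.correlate returns the inner product at every possible lag.
stat = np.correlate(data, template, mode="valid")

best = int(np.argmax(np.abs(stat)))
print(f"peak statistic {stat[best]:.1f} at sample {best} (signal injected at {offset})")
```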
Russell Davidson
- Published in print: 1999
- Published Online: November 2003
- ISBN: 9780198292111
- eISBN: 9780191596537
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198292112.003.0013
- Subject: Economics and Finance, Macro- and Monetary Economics, Microeconomics
Russell Davidson explains how much has changed over the previous decade and how he sees things evolving, with perhaps fewer fundamental changes than occurred before these chapters were written. Of course, the divide between Bayesians and frequentists continues, and micro‐oriented econometricians use rather different techniques than their macro‐oriented counterparts. The author insists on the role of the computer in aiding the development of such techniques as ‘bootstrapping’. Bootstrapping followed in the wake of Monte Carlo methods, which were already computer-intensive. Major developments have occurred, and will continue to occur, in panel data techniques and in financial econometrics with the availability of high-frequency data. He suggests in conclusion that forecasting will be largely Bayesian, while frequentist methods will be used to analyse the economic record.
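For readers unfamiliar with the bootstrapping mentioned here, a generic sketch follows; the simulated data and the choice of the median as the statistic are assumptions made only for illustration, not anything from the chapter. The nonparametric bootstrap resamples the observed data with replacement to approximate the sampling variability of a statistic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sample of 50 observations from a skewed distribution.
data = rng.lognormal(mean=0.0, sigma=0.75, size=50)

# Nonparametric bootstrap: resample with replacement, recompute the statistic.
B = 5000
boot_medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(B)
])

# Percentile bootstrap confidence interval for the median.
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"sample median {np.median(data):.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```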
Mark Kelman
- Published in print: 2011
- Published Online: May 2011
- ISBN: 9780199755608
- eISBN: 9780199895236
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199755608.003.0004
- Subject: Law, Philosophy of Law
Fast-and-frugal (F&F) theorists claim that heuristics-and-biases (H&B) theorists have merely exposed laboratory frailties in judgment and decision-making; the findings do not imply poor performance in natural environments. H&B experimenters purportedly often present problems in a cognitively intractable form rather than the more tractable form they would take in natural environments, and they often ask people to solve, through abstract methods, problems of no practical significance that formally resemble important problems that people solve without using formal logic. Moreover, at times, subjects will substitute a pay-off structure from the real-world variant of the “game” that resembles the “game” the experimenters have established with its own unfamiliar pay-off structure, and, at other times, people will reinterpret the language of the instructions they are given because they draw implications from the quasi-conversation with the experimenter that are not literally present. F&F scholars further believe that the heuristics the H&B scholars have identified are both under-theorized—there is no adaptationist account of why any of the cognitive mechanisms they identify would have developed—and under-defined.
M. D. Edge
- Published in print: 2019
- Published Online: October 2019
- ISBN: 9780198827627
- eISBN: 9780191866463
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198827627.003.0007
- Subject: Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies
This chapter marks a turning point. The preceding two chapters considered probability theory, which describes the kinds of data that result from specified processes. The remainder of the book considers statistical estimation and inference, which starts with data and attempts to draw conclusions about the process that produced them. First, general concepts in statistical estimation and inference are discussed, and then simple linear regression is treated from nonparametric/semiparametric, parametric frequentist, and Bayesian perspectives.
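As a minimal companion to the regression perspectives listed in the abstract, the sketch below fits a simple linear regression with the closed-form least-squares estimators. The simulated data and the assumed true line are for illustration only; this is not code from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data from an assumed true line: y = 2 + 0.5 x + noise.
n = 200
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)

# Ordinary least-squares estimates (closed form for simple regression).
slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
intercept = y.mean() - slope * x.mean()

# Residual variance and the standard error of the slope.
resid = y - (intercept + slope * x)
s2 = resid @ resid / (n - 2)
se_slope = np.sqrt(s2 / ((n - 1) * np.var(x, ddof=1)))

print(f"slope {slope:.3f} +/- {1.96 * se_slope:.3f}, intercept {intercept:.3f}")
```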
James B. Elsner and Thomas H. Jagger
- Published in print: 2013
- Published Online: November 2020
- ISBN: 9780199827633
- eISBN: 9780197563199
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199827633.003.0004
- Subject: Earth Sciences and Geography, Meteorology and Climatology
This book is about hurricanes, climate, and statistics. These topics may not seem related. Hurricanes are violent winds and flooding rains, climate is about weather conditions from the past, and statistics is about numbers. But what if you wanted to estimate the probability of winds exceeding 60 m/s in Florida next year? The answer involves all three: hurricanes (fastest winds), climate (weather of the past), and statistics (probability). This book teaches you how to answer such questions in a rigorous and scientific way. We begin here with a short description of the topics and a few notes on what this book is about.

A hurricane is an area of low air pressure over the warm tropical ocean. The low pressure creates showers and thunderstorms that start the winds rotating. The rotation helps to develop new thunderstorms. A tropical storm forms when the rotating winds exceed 17 m/s, and a hurricane when they exceed 33 m/s. Once formed, the winds continue to blow despite friction, sustained by an in-up-and-out circulation that imports heat at high temperature from the ocean and exports heat at lower temperature in the upper troposphere (near 16 km), much as a steam engine converts thermal energy to mechanical motion. In short, a hurricane is powered by moisture and heat.

Strong winds are a hurricane's defining characteristic. Wind is caused by the change in air pressure between two locations. In the center of a hurricane, the air pressure, which is the weight of a column of air from the surface to the top of the atmosphere, is quite low compared with the air pressure outside the hurricane. This difference causes the air to move from the outside inward toward the center. Owing to a combination of friction as the air rubs on the ocean below and the spin of the Earth as it rotates on its axis, the air does not move directly inward but rather spirals in a counterclockwise direction toward the region of lowest pressure.
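One simple way to attack the exceedance question posed above is to treat the annual count of qualifying storms as Poisson and estimate its rate from a historical record. The sketch below does exactly that; the counts are invented for illustration, and this is neither the book's data nor its preferred model.

```python
import numpy as np

# Invented record: number of Florida hurricanes with winds above 60 m/s
# in each of 30 past years (illustrative counts only).
counts = np.array([0, 1, 0, 0, 2, 0, 1, 0, 0, 0,
                   1, 0, 0, 0, 0, 1, 0, 0, 1, 0,
                   0, 0, 2, 0, 0, 1, 0, 0, 0, 1])

# Estimated annual rate of qualifying storms.
rate = counts.mean()

# Under a Poisson model, P(at least one such storm next year) = 1 - exp(-rate).
p_exceed = 1.0 - np.exp(-rate)

print(f"estimated rate: {rate:.2f} storms per year")
print(f"P(winds exceed 60 m/s in Florida next year) ~= {p_exceed:.2f}")
```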
Harry Collins
- Published in print: 2013
- Published Online: May 2014
- ISBN: 9780226052298
- eISBN: 9780226052328
- Item type: chapter
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226052328.003.0006
- Subject: History, History of Science, Technology, and Medicine
Statistical tests appear to be merely matters of calculation but they depend on all manner of assumptions and things that cannot be known. The war between Bayesians and Frequentists is explained.
Michael R. Powers
- Published in print: 2014
- Published Online: November 2015
- ISBN: 9780231153676
- eISBN: 9780231527057
- Item type: chapter
- Publisher: Columbia University Press
- DOI: 10.7312/columbia/9780231153676.003.0004
- Subject: Economics and Finance, Development, Growth, and Environmental
This chapter explores a number of concepts and methods employed in the frequency/classical approach, called frequentism. To present the standard frequentist paradigm, it begins by defining the concept of a random sample, and then summarizes how such samples are used to construct both point and interval estimates. Next, it introduces three important asymptotic results—the law of large numbers, the central limit theorem, and the generalized central limit theorem—followed by a discussion of the practical validity of the independence assumption underlying random samples. Finally, it considers in some detail the method of hypothesis testing, whose framework follows much the same logic as both the U.S. criminal justice system and the scientific method as it is generally understood.
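The law of large numbers and the central limit theorem mentioned in the abstract can be checked numerically. The simulation below is an illustrative sketch, not material from the chapter: it draws repeated samples from a skewed exponential population and verifies that standardized sample means behave more and more like a standard normal variable as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(42)

# Skewed population: exponential with mean 1 and variance 1.
def sample_means(n, reps=20_000):
    """Means of `reps` independent random samples of size n."""
    return rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

for n in (2, 10, 100, 1000):
    means = sample_means(n)
    # Standardize: (sample mean - mu) / (sigma / sqrt(n)); here mu = sigma = 1.
    z = (means - 1.0) * np.sqrt(n)
    skew = ((z - z.mean()) ** 3).mean() / z.std() ** 3
    coverage = np.mean(np.abs(z) <= 1.96)   # should approach the normal value 0.95
    print(f"n = {n:5d}: skewness {skew:+.2f}, P(|Z| <= 1.96) ~ {coverage:.3f}")
```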
Ziheng Yang
- Published in print: 2014
- Published Online: August 2014
- ISBN: 9780199602605
- eISBN: 9780191782251
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199602605.003.0006
- Subject: Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies, Evolutionary Biology / Genetics
This chapter summarizes the Frequentist–Bayesian controversy in statistics, and introduces the basic theory of Bayesian statistical inference, such as the prior, posterior, and Bayes’ theorem. Classical methods for Bayesian computation, such as numerical integration, Laplacian expansion, Monte Carlo integration, and importance sampling, are illustrated using biological examples.
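To make two of the computational methods named here concrete, the sketch below approximates the posterior mean of a binomial probability under a Beta prior by Monte Carlo sampling from the prior and by importance sampling from a uniform proposal, and checks both against the exact conjugate answer. The data, the prior, and the conjugate setup are a generic illustration chosen here, not an example from the chapter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Data and prior: k successes in n trials, Beta(a, b) prior on the probability p.
k, n = 7, 20
a, b = 2.0, 2.0

# Exact posterior by conjugacy, Beta(a + k, b + n - k); used only as a check.
exact_mean = (a + k) / (a + b + n)

N = 100_000

# (1) Monte Carlo integration with draws from the prior, weighted by the likelihood.
p_prior = rng.beta(a, b, size=N)
w = stats.binom.pmf(k, n, p_prior)
mc_mean = np.sum(w * p_prior) / np.sum(w)

# (2) Importance sampling with a Uniform(0, 1) proposal (proposal density is 1).
p_unif = rng.uniform(0.0, 1.0, size=N)
w = stats.beta.pdf(p_unif, a, b) * stats.binom.pmf(k, n, p_unif)  # unnormalized posterior
is_mean = np.sum(w * p_unif) / np.sum(w)

print(f"exact {exact_mean:.4f}, prior sampling {mc_mean:.4f}, importance sampling {is_mean:.4f}")
```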
Brian Dennis
- Published in print: 2004
- Published Online: February 2013
- ISBN: 9780226789552
- eISBN: 9780226789583
- Item type: chapter
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226789583.003.0011
- Subject: Biology, Ecology
The questioning of science and the scientific method continues within the science of ecology. The use of Bayesian statistical analysis has recently been advocated in ecology, supposedly to aid decision makers and enhance the pace of progress. Bayesian statistics provides conclusions in the face of incomplete information. However, Bayesian statistics represents a much different approach to science than the frequentist statistics studied by most ecologists. This chapter discusses the influence of postmodernism and relativism on the scientific process and, in particular, its implications for statistical inference through the use of the subjective Bayesian approach. It argues that subjective Bayesianism is “tobacco science” and that its use in ecological analysis and environmental policy making can be dangerous. It claims that science works through replicability and skepticism, with methods considered ineffective until they have proven their worth. It proposes the use of a frequentist approach to statistical analysis because it corresponds to the skeptical worldview of scientists.
Mark L. Taper and Subhash R. Lele
- Published in print: 2004
- Published Online: February 2013
- ISBN: 9780226789552
- eISBN: 9780226789583
- Item type: chapter
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226789583.003.0016
- Subject: Biology, Ecology
A method that has proved extremely successful in the history of science is to take ideas about how nature works, whether obtained deductively or inductively, and translate them into quantitative statements. These statements can then be compared with realizations of the processes under study. The two main schools of statistical thought, frequentist and Bayesian statistics, do not address the question of evidence explicitly. This chapter summarizes various approaches to quantifying scientific evidence and compares them to Bayesian and frequentist statistics. It discusses ideas on model adequacy and model selection in the context of quantifying evidence and explores the role and scope of the use of expert opinion. Replication is usually highly desirable but is difficult to obtain in many ecological experiments. How can one quantify evidence obtained from unreplicated data? Nuisance parameters, composite hypotheses, and outliers are realities of nature. Finally, the chapter raises a number of important unresolved issues, such as using evidence to make decisions without resorting to subjective probability.
Gilles Bénéplanc and Jean-Charles Rochet
- Published in print: 2011
- Published Online: April 2015
- ISBN: 9780199774081
- eISBN: 9780190258474
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:osobl/9780199774081.003.0005
- Subject: Business and Management, Finance, Accounting, and Banking
This chapter discusses two methods for quantifying risks: the frequentist approach and the subjective approach. The frequentist approach can be applied in stationary environments when enough past observations are available, while the subjective approach has to be applied when there are not enough observations. In practice, corporations have to deal with combinations of risks and require a combination of the frequentist and subjective approaches, called Bayesian updating. Bayesian updating is based on the Bayes formula, which shows how to revise subjective probabilities on the basis of new information.
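A minimal numerical sketch of the Bayesian updating described above follows. The probabilities are hypothetical and chosen only to show the mechanics of the Bayes formula: a subjective prior probability of a loss event is revised after a warning signal of known reliability.

```python
# Bayes' formula: P(A | B) = P(B | A) * P(A) / P(B), where
# P(B) = P(B | A) * P(A) + P(B | not A) * P(not A).

# Hypothetical subjective assessments.
p_loss = 0.10            # prior probability of a large loss this year
p_signal_if_loss = 0.80  # probability of a warning signal if a loss is coming
p_signal_if_ok = 0.20    # probability of a (false) warning signal otherwise

# Total probability of observing the warning signal.
p_signal = p_signal_if_loss * p_loss + p_signal_if_ok * (1 - p_loss)

# Revised (posterior) probability of a loss, given that the signal was observed.
p_loss_given_signal = p_signal_if_loss * p_loss / p_signal

print(f"prior {p_loss:.2f} -> posterior {p_loss_given_signal:.2f} after observing the signal")
```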
Jeffrey S. Racine
- Published in print: 2019
- Published Online: January 2019
- ISBN: 9780190900663
- eISBN: 9780190933647
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190900663.003.0006
- Subject: Economics and Finance, Econometrics
This chapter covers model selection methods and model averaging methods. It relies on knowledge of solving a quadratic program, which is outlined in an appendix.
Jan Sprenger and Stephan Hartmann
- Published in print: 2019
- Published Online: October 2019
- ISBN: 9780199672110
- eISBN: 9780191881671
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199672110.003.0011
- Subject: Philosophy, Philosophy of Science
Subjective Bayesianism is often criticized for a lack of objectivity: (i) it opens the door to the influence of values and biases, (ii) evidence judgments can vary substantially between scientists, (iii) it is not suited for informing policy decisions. We rebut these concerns by bridging the debates on scientific objectivity and Bayesian inference in statistics. First, we show that the above concerns arise equally for frequentist statistical inference. Second, we argue that the involved senses of objectivity are epistemically inert. Third, we show that Subjective Bayesianism promotes other, epistemically relevant senses of scientific objectivity—most notably by increasing the transparency of scientific reasoning.
Jan Sprenger and Stephan Hartmann
- Published in print: 2019
- Published Online: October 2019
- ISBN: 9780199672110
- eISBN: 9780191881671
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199672110.003.0009
- Subject: Philosophy, Philosophy of Science
According to Popper and other influential philosophers and scientists, scientific knowledge grows by repeatedly testing our best hypotheses. However, the interpretation of non-significant results—those that do not lead to a “rejection” of the tested hypothesis—poses a major philosophical challenge. To what extent do they corroborate the tested hypothesis or provide a reason to accept it? In this chapter, we prove two impossibility results for measures of corroboration that follow Popper’s criterion of measuring both predictive success and the testability of a hypothesis. Then we provide an axiomatic characterization of a more promising and scientifically useful concept of corroboration and discuss implications for the practice of hypothesis testing and the concept of statistical significance.