Željko Ivezić, Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691151687
- eISBN:
- 9781400848911
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691151687.003.0004
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter introduces the main concepts of statistical inference, or drawing conclusions from data. There are three main types of inference: point estimation, confidence estimation, and hypothesis testing. There are two major statistical paradigms which address the statistical inference questions: the classical, or frequentist paradigm, and the Bayesian paradigm. While most of statistics and machine learning is based on the classical paradigm, Bayesian techniques are being embraced by the statistical and scientific communities at an ever-increasing pace. The chapter begins with a short comparison of classical and Bayesian paradigms, and then discusses the three main types of statistical inference from the classical point of view.
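The three inference types named in the abstract can be sketched, from the classical point of view, for the simplest possible case: the mean of a Gaussian sample. This is an illustrative aside, not an example from the chapter; the data, random seed, and null value of 5.0 are all invented.

```python
import numpy as np

# Simulated data: 100 draws from a Gaussian (invented for illustration).
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)

n = x.size
mean = x.mean()                       # point estimation: sample mean of mu
sem = x.std(ddof=1) / np.sqrt(n)      # standard error of the mean

# Confidence estimation: 95% interval, large-sample normal approximation.
ci = (mean - 1.96 * sem, mean + 1.96 * sem)

# Hypothesis testing: z-statistic for H0: mu = 5.0.
z = (mean - 5.0) / sem
```

A Bayesian treatment of the same problem would instead report a posterior distribution for the mean, from which analogous summaries (posterior mean, credible interval) follow.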
Partha P. Mitra and Hemant Bokil
- Published in print:
- 2007
- Published Online:
- May 2009
- ISBN:
- 9780195178081
- eISBN:
- 9780199864829
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195178081.003.0006
- Subject:
- Neuroscience, Techniques, Molecular and Cellular Systems
This chapter provides a mini-review of classical and modern statistical methods for data analysis. Topics covered include method of least squares, data visualization, point estimation, interval estimation, hypothesis testing, nonparametric tests, and Bayesian estimation and inference.
José M. Bernardo
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199694587
- eISBN:
- 9780191731921
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199694587.003.0001
- Subject:
- Mathematics, Probability / Statistics
The complete final product of Bayesian inference is the posterior distribution of the quantity of interest. Important inference summaries include point estimation, region estimation, and precise hypothesis testing. Those summaries may appropriately be described as solutions to specific decision problems which depend on the particular loss function chosen. The use of a continuous loss function leads to an integrated set of solutions where the same prior distribution may be used throughout. Objective Bayesian methods are those which use a prior distribution that depends only on the assumed model and the quantity of interest. As a consequence, objective Bayesian methods produce results which depend only on the assumed model and the data obtained. The combined use of the intrinsic discrepancy, an invariant information‐based loss function, and appropriately defined reference priors provides an integrated objective Bayesian solution to both estimation and hypothesis testing problems. The ideas are illustrated with a large collection of non‐trivial examples.
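A concrete textbook instance of the decision-theoretic viewpoint described above: under squared-error loss, the Bayes point estimate is the posterior mean. The conjugate Normal-Normal model below is a standard illustration, not an example from the chapter, and it uses a subjective prior rather than the reference priors the chapter develops; all numbers are invented.

```python
import numpy as np

def posterior_normal(data, sigma2, mu0, tau2):
    """Conjugate Normal-Normal update: Normal likelihood with known
    variance sigma2, Normal prior N(mu0, tau2) on the mean.
    Returns the posterior mean and posterior variance."""
    n = len(data)
    prec = 1.0 / tau2 + n / sigma2                      # posterior precision
    mean = (mu0 / tau2 + np.sum(data) / sigma2) / prec  # posterior mean
    return mean, 1.0 / prec

# Invented data; a diffuse prior (tau2 = 100) so the data dominate.
data = [4.8, 5.1, 5.3, 4.9]
m, v = posterior_normal(data, sigma2=1.0, mu0=0.0, tau2=100.0)
# m is the Bayes estimate under squared-error loss (the posterior mean).
```

Other loss functions yield other summaries of the same posterior: absolute-error loss gives the posterior median, and zero-one loss the posterior mode.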
Yoram Rubin
- Published in print:
- 2003
- Published Online:
- November 2020
- ISBN:
- 9780195138047
- eISBN:
- 9780197561676
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195138047.003.0008
- Subject:
- Earth Sciences and Geography, Oceanography and Hydrology
Two important applications of the SRF concept developed in chapter 2 are point estimation and image simulation. Point estimation considers the SRF Z at an unsampled location, x0; the goal is to obtain an estimate for z at x0 that is physically plausible and optimal in some sense, and to provide a measure of the quality of the estimate. The goal in image simulation is to create an image of Z over the entire domain, one that not only agrees with the measurements at their locations, but also captures the correlation pattern of z. We start by considering a family of linear estimators known as kriging, whose appeal lies in its simplicity and computational efficiency. We then proceed to discuss Bayesian estimators and show how to condition estimates on “hard” and “soft” data, and we conclude by discussing a couple of simple, easy-to-implement image simulators. One of the simulators presented can be downloaded from the Internet. Linear regression aims at estimating the attribute z at x0, z0 = z(x0), based on a linear combination of n measurements of z: zi = z(xi), i = 1, . . . , n. The estimator of z(x0) is z*0, and it is defined by z*0 = λ0 + Σ λi zi, summing over i = 1, . . . , n (3.1). What makes this estimator “linear” is the exclusion of powers and products of measurements. However, nonlinearity may enter the estimation process indirectly, for example, through a nonlinear transformation of the attribute. The challenge posed by (3.1) is to determine optimally the n interpolation coefficients λi and the shift coefficient λ0. The actual estimation error is z*0 − z0; it is unknown, since z0 is unknown, and so no meaningful statement can be made about it. As an alternative, we shall consider the set of all equivalent estimation problems: in this set we maintain the same spatial configuration of measurement locations, but allow for all possible combinations, or scenarios, of z values at these locations, including x0. We have replaced a single estimation problem with many, but we have improved our situation, since now we know the actual z value at x0, and this allows a systematic approach.
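A minimal sketch of the linear estimator (3.1), with the weights λi chosen by simple kriging, may help fix ideas. The exponential covariance model, its parameters, and the sample data below are invented for illustration and are not from the chapter; the kriging variance serves as the "measure of the quality of the estimate" mentioned above.

```python
import numpy as np

def cov(h, sill=1.0, length=2.0):
    """Assumed isotropic exponential covariance as a function of lag h."""
    return sill * np.exp(-np.abs(h) / length)

def simple_kriging(xs, zs, x0, mean=0.0):
    """Estimate z at x0 as mean + sum_i lam_i * (z_i - mean),
    with weights solving the simple-kriging system C @ lam = c0."""
    xs = np.asarray(xs, dtype=float)
    zs = np.asarray(zs, dtype=float)
    C = cov(xs[:, None] - xs[None, :])   # data-data covariance matrix
    c0 = cov(xs - x0)                    # data-target covariance vector
    lam = np.linalg.solve(C, c0)         # the interpolation coefficients
    est = mean + lam @ (zs - mean)
    var = cov(0.0) - lam @ c0            # kriging (estimation) variance
    return est, var

# Three invented 1-D measurements; estimate at the unsampled x0 = 2.0.
est, var = simple_kriging([0.0, 1.0, 3.0], [1.2, 0.9, 0.4], x0=2.0)
```

Note that at a measurement location the estimator reproduces the datum exactly and the kriging variance vanishes, consistent with the requirement that the image honor the measurements at their locations.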
William G. Howell, Saul P. Jackman, and Jon C. Rogowski
- Published in print:
- 2013
- Published Online:
- January 2014
- ISBN:
- 9780226048253
- eISBN:
- 9780226048420
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226048420.003.0005
- Subject:
- Political Science, American Politics
This chapter turns its focus to roll call votes cast during those congresses when the nation transitioned either into or out of war. Using interest groups as bridging observations, we show that the attacks of September 11 corresponded with a marked rise of conservatism in congressional voting behavior. Though the effects are observed across a wide variety of issue areas, they appear particularly pronounced in the domain of purely domestic votes. The U.S. entry into World War II, by contrast, corresponded with a marked shift to the ideological left in congressional voting behavior. Both of these shifts brought congressional voting behavior more in line with presidential preferences. The outbreaks of the other wars in our sample, however, did not induce clear changes in congressional voting behavior. Still, the transitions from war to peace rather consistently corresponded with ideological shifts away from the presidents then in office.
Nicole Baerg
- Published in print:
- 2020
- Published Online:
- July 2020
- ISBN:
- 9780190499488
- eISBN:
- 9780190499518
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190499488.003.0004
- Subject:
- Economics and Finance, Public and Welfare
Using archival, textual information from Federal Open Market Committee (FOMC) transcript data as well as FOMC policy statements, chapter 4 demonstrates that when the chair and median member have opposing inflation preferences, the FOMC communicates with greater precision than when the chair and median member have aligned inflation preferences. The author finds this holds, however, when the median is computed over the members able to cast a public vote. The chapter also provides supportive evidence that when committee members are more dissimilar, the number of textual changes to the policy announcement is higher than otherwise. Theoretically, the chapter shows that a combination of members’ preferences and voting rights matters for the level of uncertainty words used in the official policy statement. Methodologically, the chapter demonstrates the innovative use of supervised and unsupervised learning techniques to construct quantitative measures from text as data, applied to central bank committees.