Shoutir Kishore Chatterjee
- Published in print:
- 2003
- Published Online:
- September 2007
- ISBN:
- 9780198525318
- eISBN:
- 9780191711657
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198525318.003.0010
- Subject:
- Mathematics, Probability / Statistics
In the modern era, the methods of statistics were further abstracted from particular practical problems and the subject gained a distinct identity. In the first phase, Edgeworth and Karl Pearson worked vigorously on model-selecting induction, leading to the formulation of the famous Pearsonian chi-squared test. In the second phase, ‘Student’ started the small-sample theory for model-specific induction with his pioneering work, and Fisher, following up, developed a variety of sampling theory procedures and laid the foundations of the general theory of estimation, multivariate analysis, and the theory of design of experiments. All these areas were subsequently enriched by the contributions of a galaxy of workers. The logic of the behavioural approach to induction was consolidated by Neyman and E. S. Pearson, and was later extended and generalized by Wald. After the emergence of a rigorous theory of subjective probability, there was a revival of interest in the pro-subjective Bayesian and purely subjective approaches in the second half of the 20th century. Work on model-free induction covering large-sample procedures, nonparametric methods, and the theory and practice of finite population sampling also progressed steadily during this period.
Diana C. Mutz
- Published in print:
- 2011
- Published Online:
- October 2017
- ISBN:
- 9780691144511
- eISBN:
- 9781400840489
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691144511.003.0005
- Subject:
- Sociology, Social Research and Statistics
This chapter examines games-based treatments, which are an outgrowth of conducting experiments online, where gaming seems only natural and where highly complex, multi-stage experimental treatments can be experienced by participants. It begins with a description of how treatments have been implemented in the context of several classic economic games using random population samples. In these studies, the biggest challenge is adapting the often complex instructions and expectations to a sample that is considerably less well educated on average than college-student subjects. In order to play and produce valid experimental results, participants in the game have to understand clearly how it works and buy into the realism of the experimental situation.
Seth J. Schwartz
- Published in print:
- 2022
- Published Online:
- November 2021
- ISBN:
- 9780190095918
- eISBN:
- 9780197612057
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190095918.003.0016
- Subject:
- Psychology, Social Psychology
This chapter addresses work with regionally or nationally representative datasets, which are often used in disciplines like public health, sociology, demography, and political science. Some of these datasets are publicly available, whereas others are proprietary and can be accessed only by developing a formal proposal and paying a fee. The chapter lays out the types of claims and research questions that datasets are best equipped to support or address. Tips for using these datasets are provided, such as understanding the sampling strategy and the labeling of variables in the codebook. Challenges inherent in using public-use and proprietary datasets are also enumerated.
Bernt P. Stigum
- Published in print:
- 2014
- Published Online:
- September 2015
- ISBN:
- 9780262028585
- eISBN:
- 9780262323109
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262028585.003.0005
- Subject:
- Economics and Finance, Econometrics
Chapter V begins with a discussion of formal theory-data confrontations in which the sample population plays a significant role. The formalism differs from the theory-data confrontations in Chapters III and IV, but the fundamental ideas of the empirical analysis are the same. The chapter presents an example in which the formal theory-data confrontation prescribes a factor-analytic test of Milton Friedman’s Permanent Income Hypothesis. This test is then contrasted with a factor-analytic test of Friedman’s hypothesis based on ideas that Ragnar Frisch developed in his 1934 treatise on confluence analysis. In Frisch’s confluence analysis, Friedman’s permanent components of income and consumption become so-called systematic variates, and Friedman’s transitory components become accidental variates. Both the systematic and the accidental variates are unobservables that live and function in the real world, here the data universe. The two tests provide an extraordinary example of how different the present-day-econometrics treatment of errors in variables and errors in equations is from the formal-econometrics treatment of inaccurate observations of variables in Frisch’s model world.