Patrick Dattalo
- Published in print: 2009
- Published Online: February 2010
- ISBN: 9780195378351
- eISBN: 9780199864645
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195378351.001.0001
- Subject: Social Work, Research and Evaluation
Random sampling (RS) and random assignment (RA) are considered by many researchers to be the definitive methodological procedures for maximizing external and internal validity. However, there is a daunting list of legal, ethical, and practical barriers to implementing RS and RA. While there are no easy ways to overcome these barriers, social workers should seek and utilize strategies that minimize sampling and assignment bias. This book is a single source of a diverse set of tools that will maximize a study's validity when RS and RA are neither possible nor practical. Readers are guided in selecting and implementing an appropriate strategy, including exemplar sampling, sequential sampling, randomization tests, multiple imputation, mean-score logistic regression, partial randomization, constructed comparison groups, propensity scores, and instrumental variables methods. Each approach is presented in such a way as to highlight its underlying assumptions, implementation strategies, and strengths and weaknesses.
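Among the tools this book lists, the randomization test is the most self-contained to illustrate. The sketch below is not from the book: the data are invented and the function is a generic two-sample permutation test, shown only to make the idea concrete.

```python
import numpy as np

def randomization_test(treated, control, n_perm=10_000, seed=0):
    """Two-sided randomization (permutation) test for a difference in means.

    Repeatedly reshuffles the pooled outcomes into two groups of the
    original sizes and records how often the reshuffled mean difference
    is at least as extreme as the observed one.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([treated, control])
    observed = treated.mean() - control.mean()
    n_t = len(treated)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:n_t].mean() - pooled[n_t:].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm  # approximate two-sided p-value

# Invented outcomes from a small study without random sampling
treated = np.array([7.1, 6.4, 8.0, 7.7, 6.9])
control = np.array([5.9, 6.2, 5.4, 6.8, 6.0])
print(randomization_test(treated, control))
```

Because the reference distribution is generated by permuting the observed data themselves, the test does not require that the sample be randomly drawn from a larger population, which is why the book lists it among the statistical alternatives to random sampling.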
Patrick Dattalo
- Published in print: 2009
- Published Online: February 2010
- ISBN: 9780195378351
- eISBN: 9780199864645
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195378351.003.0001
- Subject: Social Work, Research and Evaluation
This chapter defines important terms and concepts. It is assumed that readers are familiar with issues related to the appropriate application of each statistical procedure in terms of assumptions and purpose. Interested readers should refer to the appendix for an annotated bibliography of additional resources. The chapter summarizes the book's organization as follows:
1. Methodological Alternatives and Supplements to Random Sampling
   a. Deliberate Sampling for Diversity and Typical Instances
   b. Sequential Sampling
2. Statistical Alternatives and Supplements to Random Sampling
   a. Randomization Tests
   b. Multiple Imputation
   c. Mean-Score Logistic Regression
3. Methodological Alternatives and Supplements to Random Assignment
   a. Sequential Assignment and Treatment-As-Usual Combined
   b. Partially Randomized Preference Trials
4. Statistical Alternatives and Supplements to Random Assignment
   a. Constructed Comparison Groups
   b. Propensity Score Matching
   c. Instrumental Variables Methods
5. Summary and Conclusions
Raymond L. Chambers and Robert G. Clark
- Published in print: 2012
- Published Online: May 2012
- ISBN: 9780198566625
- eISBN: 9780191738449
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198566625.003.0003
- Subject: Mathematics, Probability / Statistics
This chapter describes the simplest possible model for a finite population: the homogeneous population model. It is appropriate when there is no auxiliary information that can distinguish between different population units. The homogeneous population model assumes equal expected value and variance for the variable of interest for all population units. Values from different units are assumed to be independent, although this assumption is relaxed in the last section of the chapter. The empirical best and best linear unbiased predictors of a population total are derived under the model. Inference, sample design, and sample size calculation are also discussed. The most appropriate design for this kind of population is usually simple random sampling without replacement. The urn model (also known as the hypergeometric model), a special case of the homogeneous population model, is also discussed.
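For readers who want the punchline in symbols, the standard results the chapter derives can be sketched as follows (generic notation assumed here, not quoted from the book):

```latex
% Homogeneous population model: common mean and variance, independent units.
\[
E(y_i) = \mu, \qquad \mathrm{Var}(y_i) = \sigma^2, \qquad i = 1, \dots, N .
\]
% Given a sample s of size n, the best linear unbiased predictor of the
% population total t_y is the expansion estimator
\[
\hat{t}_y = \sum_{i \in s} y_i + (N - n)\,\bar{y}_s = N \bar{y}_s ,
\]
% with prediction variance
\[
\mathrm{Var}(\hat{t}_y - t_y) = N^2 \left(1 - \frac{n}{N}\right) \frac{\sigma^2}{n} .
\]
```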
Diana C. Mutz
- Published in print: 2011
- Published Online: October 2017
- ISBN: 9780691144511
- eISBN: 9781400840489
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691144511.003.0005
- Subject: Sociology, Social Research and Statistics
This chapter examines games-based treatments, which are an outgrowth of conducting experiments online, where gaming seems only natural and where highly complex, multi-stage experimental treatments can be experienced by participants. It begins with a description of how treatments have been implemented in the context of several classic economic games using random population samples. In these studies, the biggest challenge is adapting the often complex instructions and expectations to a sample that is considerably less well educated on average than college student subjects. In order to play and produce valid experimental results, participants in the game have to understand clearly how it works and buy into the realism of the experimental situation.
Peter Miksza and Kenneth Elpus
- Published in print: 2018
- Published Online: March 2018
- ISBN: 9780199391905
- eISBN: 9780199391943
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199391905.003.0011
- Subject: Music, Theory, Analysis, Composition, Performing Practice/Studies
This chapter introduces the specialized techniques necessary for analyzing data that have been gathered in a complex or multistage survey sample. The chapter details the methods most commonly used to collect complex survey data and then explains the specific statistical tools that must be employed to correctly analyze complex survey data. First, an overview of the various types of sampling methods is presented, beginning with simple random sampling and moving through other methods to finally discuss the commonly employed research techniques of cluster sampling. The chapter continues with a discussion of survey weights—what they mean and how they are derived. The chapter concludes with software-based suggestions on the proper analysis of survey data.
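To make the role of survey weights concrete, here is a minimal Python sketch (the responses and weights are invented): each weight is the reciprocal of a respondent's selection probability, and the weighted mean uses it to undo unequal selection.

```python
import numpy as np

# Hypothetical respondents from a multistage survey: the weight is the
# number of population members each respondent stands in for
# (the inverse of the selection probability).
y = np.array([3.2, 4.1, 2.8, 5.0, 3.9])       # responses
w = np.array([120., 120., 450., 450., 450.])  # survey weights

# Treating the data as a simple random sample ignores the design:
print("unweighted mean:", y.mean())

# The weighted mean corrects for unequal selection probabilities:
print("weighted mean:  ", np.sum(w * y) / np.sum(w))
```

A correct standard error additionally requires the design information (strata and clusters), which is why the chapter closes with software-based suggestions rather than hand formulas.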
Arunabh Ghosh
- Published in print: 2020
- Published Online: September 2020
- ISBN: 9780691179476
- eISBN: 9780691199214
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691179476.003.0007
- Subject: History, Asian History
This chapter unearths a series of heretofore largely forgotten exchanges between Chinese and Indian statisticians. It is based on study of key figures, such as the deputy director of China's State Statistics Bureau, Wang Sihua, and the Indian statistician P. C. Mahalanobis. Focusing on Chinese interest in the emerging technology of large-scale random sampling, in which Mahalanobis and the Indian Statistical Institute were global innovators, the exchanges point to alternative frameworks for Cold War scientific exchanges while also placing in stark relief the extent to which Chinese statisticians and leaders clearly understood both the strengths and shortcomings of their own statistical system. The chapter traces these exchanges, explaining their timing and the motivation behind them. Each set of actors in these exchanges had its own agenda. The chapter shows that the Indians were particularly keen on learning more about China's planning methods.
Cristopher Moore and Stephan Mertens
- Published in print: 2011
- Published Online: December 2013
- ISBN: 9780199233212
- eISBN: 9780191775079
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199233212.003.0012
- Subject: Physics, Theoretical, Computational, and Statistical Physics
Random sampling is a technique for dealing with spaces of possible states or solutions that are exponentially large. The best method of random sampling generally involves a random walk or a Markov chain. A Markov chain requires a number of steps to approach equilibrium and thus provide a good random sample of the state space. This number of steps is called the mixing time, which can be estimated by considering how quickly the chain's choices overwhelm the system's memory of its initial state, the extent to which one part of a system influences another, and how smoothly probability flows from one part of the state space to another. This chapter explores random walks and rapid mixing, first by considering a classic example from physics: a block of iron. It then discusses transition matrices, ergodicity, coupling, spectral gap, and expanders, as well as the role of conductance and the spectral gap in rapid mixing. It concludes by showing that temporal mixing is closely associated with spatial mixing.
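A small numerical sketch makes the spectral-gap story concrete. The three-state chain below is invented; the point is only that the second-largest eigenvalue modulus of the transition matrix governs how fast an arbitrary starting distribution relaxes to equilibrium.

```python
import numpy as np

# A hypothetical ergodic Markov chain on three states:
# rows are current states, columns are next states, rows sum to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# The largest eigenvalue of a stochastic matrix is always 1; the
# spectral gap 1 - |lambda_2| controls the mixing time.
eigvals = sorted(np.linalg.eigvals(P), key=abs, reverse=True)
print("spectral gap:", 1 - abs(eigvals[1]))

# Distance to equilibrium shrinks roughly like |lambda_2|**t, so a
# larger gap means fewer steps until the walk is a good random sample.
dist = np.array([1.0, 0.0, 0.0])   # start in state 0 with certainty
for t in range(20):
    dist = dist @ P                # one step of the chain
print("distribution after 20 steps:", dist)
```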
Diana C. Mutz
- Published in print: 2011
- Published Online: October 2017
- ISBN: 9780691144511
- eISBN: 9781400840489
- Item type: book
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691144511.001.0001
- Subject: Sociology, Social Research and Statistics
Population-based survey experiments have become an invaluable tool for social scientists struggling to generalize laboratory-based results, and for survey researchers besieged by uncertainties about causality. Thanks to technological advances in recent years, experiments can now be administered to random samples of the population to which a theory applies. Yet until now, there was no self-contained resource for social scientists seeking a concise and accessible overview of this methodology, its strengths and weaknesses, and the unique challenges it poses for implementation and analysis. Drawing on examples from across the social sciences, this book covers everything you need to know to plan, implement, and analyze the results of population-based survey experiments. But it is more than just a “how to” manual. This book challenges conventional wisdom about internal and external validity, showing why strong causal claims need not come at the expense of external validity, and how it is now possible to execute experiments remotely using large-scale population samples. Designed for social scientists across the disciplines, the book provides the first complete introduction to this methodology and features a wealth of examples and practical advice.
David G. Hankin, Michael S. Mohr, and Ken B. Newman
- Published in print: 2019
- Published Online: December 2019
- ISBN: 9780198815792
- eISBN: 9780191853463
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198815792.003.0012
- Subject: Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies, Ecology
In many ecological and natural resource settings, there may be a high degree of spatial structure or pattern to the distribution of target variable values across the landscape. For example, the number of trees per hectare killed by a bark beetle infestation may be exceptionally high in one region of a national forest and near zero elsewhere. In such circumstances it may be highly desirable or even required that a sample survey directed at estimation of total tree mortality across a forest be based on selection of random locations that have good spatial balance, i.e., locations are well spread over the landscape with relatively even distances between them. A simple random sample cannot guarantee good spatial balance. We present two methods that have been proposed for selection of spatially balanced samples: GRTS (Generalized Random Tessellation Stratified Sampling) and BAS (Balanced Acceptance Sampling). Selection of samples using the GRTS approach involves a complicated series of sequential steps that allows generation of spatially balanced samples selected from finite populations or from infinite study areas. Selection of samples using BAS relies on the Halton sequence, is conceptually simpler, and produces samples that generally have better spatial balance than those produced by GRTS. Both approaches rely on use of software that is available in the R statistical/programming environment. Estimation relies on the Horvitz–Thompson estimator. Illustrative examples of running the SPSURVEY software package (used for GRTS) and links to the SDraw package (used for BAS) are provided at http://global.oup.com/uk/companion/hankin.
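The chapter's worked examples use R packages, but the key ingredient of BAS, the Halton sequence, is simple enough to sketch independently. The Python below is a hedged illustration only: a real BAS implementation (e.g., the SDraw package the chapter links to) adds a random start point and carries the inclusion probabilities needed for Horvitz–Thompson estimation.

```python
def van_der_corput(n, base):
    """n-th term of the base-b van der Corput sequence in [0, 1)."""
    q, denom = 0.0, 1.0
    while n > 0:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton_points(k, x_range=(0.0, 1.0), y_range=(0.0, 1.0)):
    """First k points of the 2-D Halton sequence (bases 2 and 3), scaled
    to a rectangular study area. Any prefix of the sequence is well
    spread in space, which is the property BAS exploits."""
    pts = []
    for i in range(1, k + 1):
        u, v = van_der_corput(i, 2), van_der_corput(i, 3)
        pts.append((x_range[0] + u * (x_range[1] - x_range[0]),
                    y_range[0] + v * (y_range[1] - y_range[0])))
    return pts

# Five spatially balanced candidate locations in a 10 km x 4 km area
for p in halton_points(5, x_range=(0.0, 10.0), y_range=(0.0, 4.0)):
    print(p)
```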
David G. Hankin, Michael S. Mohr, and Ken B. Newman
- Published in print: 2019
- Published Online: December 2019
- ISBN: 9780198815792
- eISBN: 9780191853463
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198815792.003.0003
- Subject: Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies, Ecology
This chapter presents a formal quantitative treatment of material covered conceptually in Chapter 2, all with respect to equal-probability selection of samples of size n from a finite population of size N, via simple random sampling with replacement (SWR) and without replacement (SRS). Small sample space examples are used to illustrate unbiasedness of mean-per-unit estimators of the mean, total, and proportion of the target variable, y, for SWR and SRS. Explicit formulas for sampling variance indicate how estimator uncertainty depends on finite population variance, sample size, and sampling fraction. Measures of the relative performance of alternative sampling strategies (relative precision, relative efficiency, net relative efficiency) are introduced and applied to mean-per-unit estimators used for the SWR and SRS selection methods. Normality of the sampling distribution of the SRS mean-per-unit estimator depends on sample size but also on the shape of the distribution of the target variable, y, values over the finite population units. Normality of the sampling distribution is required to justify valid 95% confidence intervals constructed around sample estimates using unbiased estimates of sampling variance. Methods to calculate sample size to achieve accuracy objectives are presented. Additional topics include Bernoulli sampling (a without replacement selection scheme for which sample size is a random variable), the Rao–Blackwell theorem (which allows improvement of estimators that are based on selection methods which may result in repeated selection of the same units), oversampling, and nonresponse.
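A compact statement of the central formulas (generic notation, assumed rather than quoted from the chapter):

```latex
% Mean-per-unit estimation from a sample s of size n:
\[
\bar{y}_s = \frac{1}{n} \sum_{i \in s} y_i , \qquad \hat{t}_y = N\,\bar{y}_s .
\]
% Sampling variance of the sample mean under the two selection schemes,
% where S^2 is the finite population variance (divisor N - 1):
\[
\mathrm{Var}_{\mathrm{SWR}}(\bar{y}_s) = \frac{N-1}{N}\,\frac{S^2}{n} ,
\qquad
\mathrm{Var}_{\mathrm{SRS}}(\bar{y}_s) = \left(1 - \frac{n}{N}\right)\frac{S^2}{n} ,
\]
% so the finite population correction (1 - n/N) is exactly what makes
% SRS more efficient than SWR. A sample size meeting a margin of error
% d with approximate 95% confidence under SRS solves
\[
n = \frac{n_0}{1 + n_0 / N} , \qquad n_0 = \frac{z^2 S^2}{d^2} , \qquad z \approx 1.96 .
\]
```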
Kathleen Gerson
- Published in print: 2020
- Published Online: November 2020
- ISBN: 9780199324286
- eISBN: 9780197533857
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199324286.003.0003
- Subject: Sociology, Social Research and Statistics, Methodology and Statistics
Chapter 3 considers the principles underlying the selection of an appropriate sample for depth interviewing and the range of strategies available to identify and recruit participants in that sample. Rather than claiming representativeness, as a quantitative researcher might, a depth interviewer aims to select a sample capable of yielding theoretically generalizable insights—an approach called theoretical sampling. Theoretical sampling focuses on finding a variety of participants who are well positioned to reveal the practices, mechanisms, and relationships the research seeks to explain. The chapter then looks at the range of strategies for finding a good sample and deciding whom to include and whom to exclude. Whether the sampling strategy involves recruiting randomly selected participants, snowball sampling, seeking volunteers, or some combination, a good sample contains both the core controls and the built-in comparisons needed to answer the study questions and develop an explanation for the outcomes.
Michael R. Powers
- Published in print: 2014
- Published Online: November 2015
- ISBN: 9780231153676
- eISBN: 9780231527057
- Item type: chapter
- Publisher: Columbia University Press
- DOI: 10.7312/columbia/9780231153676.003.0004
- Subject: Economics and Finance, Development, Growth, and Environmental
This chapter explores a number of concepts and methods employed in the frequency/classical approach, called frequentism. To present the standard frequentist paradigm, it begins by defining the concept of a random sample, and then summarizes how such samples are used to construct both point and interval estimates. Next, it introduces three important asymptotic results—the law of large numbers, the central limit theorem, and the generalized central limit theorem—followed by a discussion of the practical validity of the independence assumption underlying random samples. Finally, it considers in some detail the method of hypothesis testing, whose framework follows much the same logic as both the U.S. criminal justice system and the scientific method as it is generally understood.
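The two asymptotic results named above can be stated in one line each, and together they deliver the frequentist interval estimate (generic notation, not the book's):

```latex
% For a random sample X_1, ..., X_n with mean mu and finite variance sigma^2:
\[
\bar{X}_n \xrightarrow{\;p\;} \mu \quad \text{(law of large numbers)} ,
\qquad
\sqrt{n}\,\bigl(\bar{X}_n - \mu\bigr) \xrightarrow{\;d\;} N(0, \sigma^2)
\quad \text{(central limit theorem)} .
\]
% The CLT justifies the standard frequentist interval estimate for mu,
\[
\bar{X}_n \pm z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} ,
\]
% which covers mu in approximately a fraction 1 - alpha of repeated samples.
```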
- Published in print: 2006
- Published Online: March 2013
- ISBN: 9780226316130
- eISBN: 9780226315997
- Item type: chapter
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226315997.003.0010
- Subject: Law, Constitutional and Administrative Law
The critiques set forth in this book reflect problems with the actuarial approach more generally—not just with specific types of stereotyping or profiles. This chapter sketches the contours and benefits of a more randomized universe of crime and punishment. Randomization is the only way to achieve a carceral population that reflects the offending population. Randomization in this context is a form of random sampling: random sampling on the highway, for instance, is the only way that the police would obtain an accurate reflection of the offending population. And random sampling is the central virtue behind randomization. What randomization achieves, in essence, is to neutralize the perverse effects of prediction, both in terms of the possible effects on overall crime and of the other social costs.
James S. Fishkin
- Published in print: 2011
- Published Online: February 2015
- ISBN: 9780199604432
- eISBN: 9780191803574
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:osobl/9780199604432.003.0001
- Subject: Political Science, Democratization
This chapter first sets out the book's main topic, which is deliberative democracy and how to include everyone under conditions where they are effectively motivated to really think about the issues. The book looks at the problem of how to fulfil two fundamental values — political equality and deliberation. It then outlines the reasons why it is difficult to achieve both political equality and deliberation. It considers the renewed interest in random sampling and deliberation; situates this combination in the range of possible strategies for public consultation; and clarifies the values and democratic theories at issue in these different practices.
Bernt P. Stigum
- Published in print: 2014
- Published Online: September 2015
- ISBN: 9780262028585
- eISBN: 9780262323109
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262028585.003.0005
- Subject: Economics and Finance, Econometrics
Chapter V begins with a discussion of formal theory-data confrontations in which the sample population plays a significant role. The formalism differs from the theory-data confrontations in Chapters III and IV, but the fundamental ideas of the empirical analysis are the same. The chapter presents an example in which the formal theory-data confrontation prescribes a factor-analytic test of Milton Friedman’s Permanent Income Hypothesis. This test is then contrasted with a factor-analytic test of Friedman’s hypothesis based on ideas that Ragnar Frisch developed in his 1934 treatise on confluence analysis. In Frisch’s confluence analysis Friedman’s permanent components of income and consumption become so-called systematic variates, and Friedman’s transitory components become accidental variates. Both the systematic and the accidental variates are unobservables that live and function in the real world (here, the data universe). The two tests provide an extraordinary example of how different the present-day-econometrics treatment of errors in variables and errors in equations is from the formal-econometrics treatment of inaccurate observations of variables in Frisch’s model world.
Mark Bevir and Jason Blakely
- Published in print: 2018
- Published Online: December 2018
- ISBN: 9780198832942
- eISBN: 9780191871344
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198832942.003.0005
- Subject: Political Science, Political Theory
This chapter draws on the latest methodological literature in order to show how an anti-naturalist framework justifies multi-methods in social science research. Contrary to the widespread debate that pits “quantitative” versus “qualitative” methods, researchers are free to use methods from across the social sciences provided they remain aware of anti-naturalist concepts and concerns. Leading methods are analyzed in light of the latest social science, including mass surveys, random sampling, regression analysis, statistics, rational choice modeling, ethnography, archival research, and long-form interviewing. A full-blown interpretive approach to the social sciences can make use of all the major methods and techniques for studying human behavior, while also avoiding the scientism that too often plagues their current deployment.
Arunabh Ghosh
- Published in print: 2020
- Published Online: September 2020
- ISBN: 9780691179476
- eISBN: 9780691199214
- Item type: book
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691179476.001.0001
- Subject: History, Asian History
In 1949, at the end of a long period of wars, one of the biggest challenges facing leaders of the new People's Republic of China was how much they did not know. The government of one of the world's largest nations was committed to fundamentally reengineering its society and economy via socialist planning while having almost no reliable statistical data about their own country. This book is the history of efforts to resolve this “crisis in counting.” The book explores the choices made by political leaders, statisticians, academics, statistical workers, and even literary figures in attempts to know the nation through numbers. It shows that early reliance on Soviet-inspired methods of exhaustive enumeration became increasingly untenable in China by the mid-1950s. Unprecedented and unexpected exchanges with Indian statisticians followed, as the Chinese sought to learn about the then-exciting new technology of random sampling. These developments were overtaken by the tumult of the Great Leap Forward (1958–1961), when probabilistic and exhaustive methods were rejected and statistics was refashioned into an ethnographic enterprise. By acknowledging Soviet and Indian influences, the book not only revises existing models of Cold War science but also globalizes wider developments in the history of statistics and data. Anchored in debates about statistics and its relationship to state building, the book offers fresh perspectives on China's transition to socialism.
William Edelglass
- Published in print: 2017
- Published Online: October 2017
- ISBN: 9780190495794
- eISBN: 9780190495831
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190495794.003.0004
- Subject: Religion, Buddhism
The widespread discourse of happiness and meditation is part of a “happiness turn” in contemporary Western Buddhism, in which meditation is presented as a path to happiness. This turn is justified, in part, by empirical research on happiness, which appears to be a straightforward scientific inquiry into the causes and conditions of happiness. The two most widespread methods for measuring happiness, life satisfaction questionnaires and random experience sampling, are each committed to a particular theory of happiness: implicit in the random experience sampling method is a hedonic conception of happiness as positive affect or pleasure. In contrast, Śāntideva suggests that cultivating mindfulness and awareness entails a relinquishing of self and increasing skill in addressing others’ needs. This contrast demonstrates that the scientific study of meditation and happiness is not value neutral but reframes the meaning of meditation.
Joseph A. Veech
- Published in print: 2021
- Published Online: February 2021
- ISBN: 9780198829287
- eISBN: 9780191868078
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198829287.003.0007
- Subject: Biology, Ecology, Biomathematics / Statistics and Data Analysis / Complexity Studies
There are many different design and statistical issues that a researcher should consider when developing the data collection protocol or when interpreting results from a habitat analysis. One of the first considerations is simply the area to include in the study. This depends on the behavior (particularly mobility) of the focal species and logistical constraints. The amount of area also relates to the number of survey locations (plots, transects, or other units) and their spatial placement. Survey data often include many instances of a species absent from a spatial sampling unit. These could be true absences or might represent very low species detection probability. There are different statistical techniques for estimating detection probability as well as for analyzing data with a substantial proportion of zero-abundance values. The spatial dispersion of the species within the overall study area or region is never random. Even apart from the effect of habitat, individuals are often aggregated due to various environmental factors or species traits. This can affect count data collected from survey plots. Related to spatial dispersion, the overall background density of the species within the study area can introduce particular challenges in identifying meaningful habitat associations. Statistical issues such as normality, multicollinearity, and spatial and temporal autocorrelation may be relatively common and need to be addressed prior to an analysis. None of these design and statistical issues presents insurmountable challenges to a habitat analysis.
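One standard option for the zero-heavy count data the chapter describes is a zero-inflated Poisson model, fit by maximum likelihood. The sketch below is illustrative only: the counts and starting values are invented, and a real habitat analysis would add covariates to both parts of the model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Hypothetical plot counts with many zeros (absence or non-detection).
counts = np.array([0, 0, 0, 0, 0, 0, 1, 0, 3, 0, 0, 2, 0, 0, 4])

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson: with
    probability pi a plot yields a structural zero; otherwise the
    count is Poisson(lam)."""
    pi, lam = params
    p_zero = pi + (1 - pi) * np.exp(-lam)          # P(Y = 0)
    ll = np.where(y == 0,
                  np.log(p_zero),
                  np.log(1 - pi) + poisson.logpmf(y, lam))
    return -ll.sum()

fit = minimize(zip_negloglik, x0=[0.5, 1.0], args=(counts,),
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
pi_hat, lam_hat = fit.x
print(f"estimated zero-inflation: {pi_hat:.2f}, Poisson mean: {lam_hat:.2f}")
```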
Ian Bradley
- Published in print: 2007
- Published Online: October 2011
- ISBN: 9780195328943
- eISBN: 9780199851256
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195328943.003.0005
- Subject: Music, Popular
Like the sons of Gama Rex in Princess Ida, devotees of Gilbert and Sullivan are, on the whole, masculine in sex. The male bias is particularly evident among what might be called the ‘inner brotherhood’, that company of enthusiasts who border on the obsessive, collect G & S memorabilia, write books on the subject, know every nuance of every recording, and sit in theatres waiting for a wrong word in a patter song or a move which deviates from the D'Oyly Carte norm. But even in the wider circle of G & S fans, men predominate over women. This chapter holds that the archetypal G & S fan is male, middle-aged, middle-class, and of middle income. He is also quite likely to be a Methodist.