Walter Willett
- Published in print:
- 1998
- Published Online:
- September 2009
- ISBN:
- 9780195122978
- eISBN:
- 9780199864249
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195122978.003.12
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter addresses the effect of measurement error in epidemiologic studies and statistical methods to compensate for measurement error. Topics covered include types of error, correction of standard deviations, correction of epidemiologic measures of association for measurement error, correction for measurement error in confounding variables, and estimation of relative risks based on dual responses.
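The correction of associations for measurement error mentioned in this abstract can be illustrated with the classic regression-dilution idea: an observed slope is attenuated by the exposure's reliability ratio, so dividing by that ratio de-attenuates it. A minimal sketch, with all numbers hypothetical:

```python
# Regression-dilution (attenuation) correction: a hypothetical sketch.
# With random error in the exposure, the observed slope shrinks toward
# zero by the reliability ratio
#   lambda = var(true) / (var(true) + var(error)),
# so a de-attenuated estimate is beta_observed / lambda.

var_true = 4.0    # hypothetical between-person variance of the true exposure
var_error = 2.0   # hypothetical within-person (measurement) variance

reliability = var_true / (var_true + var_error)   # lambda = 2/3

beta_observed = 0.30                              # hypothetical attenuated slope
beta_corrected = beta_observed / reliability      # de-attenuated slope, 0.45
```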
- Published in print:
- 2011
- Published Online:
- June 2013
- ISBN:
- 9780804772624
- eISBN:
- 9780804777209
- Item type:
- chapter
- Publisher:
- Stanford University Press
- DOI:
- 10.11126/stanford/9780804772624.003.0007
- Subject:
- Economics and Finance, Econometrics
This chapter discusses how to accurately estimate the values of β and α from b and a. Regression produces an estimate of the standard deviation of εi. This, in turn, serves as the basis for estimates of the standard deviations of b and a. With these, we can construct confidence intervals for β and α and test hypotheses about their values.
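The chain this abstract describes, from an estimate of the standard deviation of εi to confidence intervals for β and α, can be sketched with hypothetical data and the standard OLS formulas:

```python
# OLS slope and intercept with standard errors and 95% confidence
# intervals. The data are hypothetical; the formulas are the standard
# ones: s^2 estimates var(eps_i), and se(b), se(a) follow from it.
import numpy as np
from scipy.stats import t

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
n = len(x)

Sxx = np.sum((x - x.mean()) ** 2)
b = np.sum((x - x.mean()) * (y - y.mean())) / Sxx   # estimate of beta
a = y.mean() - b * x.mean()                         # estimate of alpha

resid = y - (a + b * x)
s2 = np.sum(resid ** 2) / (n - 2)                   # estimate of var(eps_i)
se_b = np.sqrt(s2 / Sxx)
se_a = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / Sxx))

tcrit = t.ppf(0.975, df=n - 2)
ci_b = (b - tcrit * se_b, b + tcrit * se_b)         # 95% CI for beta
ci_a = (a - tcrit * se_a, a + tcrit * se_a)         # 95% CI for alpha
```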
Will G. Hopkins
- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780199561629
- eISBN:
- 9780191722479
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199561629.003.06
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
A variety of ways to express the incidence and prevalence of sports injury exists, of which the number of injuries per 1,000 hours of play is the most commonly used. There is no clear answer as to which description is the best; each, however, has its own benefits and drawbacks. Based upon existing published data, this chapter shows what the different ways of expressing the outcome measure can do to the results. Additionally, the chapter shows ways of expressing and calculating the spread of the outcome measures, e.g. the standard deviation and 95% confidence interval.
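The most common expression mentioned here, injuries per 1,000 hours of play, and a spread measure for it can be sketched as follows; the counts and hours are hypothetical, and the interval uses the simple normal approximation to the Poisson count:

```python
# Injury incidence per 1,000 hours of play with an approximate 95%
# confidence interval (normal approximation: se of a Poisson count k
# is sqrt(k)). All numbers are hypothetical.
import math

injuries = 40         # hypothetical injury count
hours = 25_000.0      # hypothetical total exposure (player-hours)

rate = injuries / hours * 1000.0            # injuries per 1,000 hours
se = math.sqrt(injuries) / hours * 1000.0   # approximate standard error
ci = (rate - 1.96 * se, rate + 1.96 * se)   # approximate 95% CI
```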
Quan Li
- Published in print:
- 2018
- Published Online:
- March 2019
- ISBN:
- 9780190656218
- eISBN:
- 9780190656256
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190656218.003.0003
- Subject:
- Political Science, Political Theory
This chapter demonstrates the types of questions one could ask about a continuous random variable of interest and answer using statistical inference. It provides conceptual preparation for understanding statistical inference, demonstrates how to get data ready for analysis in R, and then illustrates how to conduct two types of statistical inferences—null hypothesis testing and confidence interval construction—regarding the population attributes of a continuous random variable, using sample data. Both the one-sample t-test and the difference-of-means test are presented. Two key points in this chapter are worth noting. First, statistical inference is primarily concerned about figuring out population attributes using sample data. Hence, it is not the same as causal inference. Second, statistical inference can help to answer various questions of substantive interest. This chapter focuses on statistical inferences regarding one continuous random outcome variable.
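The chapter's own examples are in R; the two inferences it names, the one-sample t-test and the difference-of-means test, plus a confidence interval, can be sketched in Python on hypothetical data:

```python
# One-sample t-test, two-sample (difference-of-means) t-test, and a 95%
# confidence interval for a mean, on simulated (hypothetical) samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample_a = rng.normal(loc=5.0, scale=2.0, size=100)   # hypothetical data
sample_b = rng.normal(loc=5.8, scale=2.0, size=100)

# One-sample t-test: is the population mean of A equal to 5?
t1, p1 = stats.ttest_1samp(sample_a, popmean=5.0)

# Difference-of-means test: do A and B share a population mean?
t2, p2 = stats.ttest_ind(sample_a, sample_b, equal_var=False)

# 95% confidence interval for the mean of A
mean_a = sample_a.mean()
sem_a = stats.sem(sample_a)
ci = stats.t.interval(0.95, df=len(sample_a) - 1, loc=mean_a, scale=sem_a)
```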
Douglas Cumming, Na Dai, and Sofia A. Johan
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199862566
- eISBN:
- 9780199332762
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199862566.003.0007
- Subject:
- Economics and Finance, Financial Economics
Chapter 7 facilitates an understanding of the impact of hedge fund regulation on fund governance and performance by using a cross-country dataset of 3,782 hedge funds from 29 countries. The focus of the analysis involves regulatory requirements in the form of minimum capitalization imposed on hedge fund managers, restrictions on the location of key service providers and permissible distribution channels in relation to hedge fund alphas, manipulation-proof performance measures (MPPMs), average monthly returns, fixed fees, and performance fees.
- Published in print:
- 2011
- Published Online:
- June 2013
- ISBN:
- 9780804772624
- eISBN:
- 9780804777209
- Item type:
- chapter
- Publisher:
- Stanford University Press
- DOI:
- 10.11126/stanford/9780804772624.003.0008
- Subject:
- Economics and Finance, Econometrics
This chapter shows that the εi's must have the same expected value for regression to make any sense. However, we cannot tell if the εi's have a constant expected value that is different from zero, and it does not make any substantive difference. If the disturbances have different variances, ordinary least squares (OLS) estimates are still unbiased. However, they are no longer best linear unbiased (BLU). In addition, the true variances of b and a are different from those given by the OLS variance formulas. In order to conduct inference, either we can estimate their true variances, or we may be able to get BLU estimators by transforming the data so that the transformed disturbances share the same variance.
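The transformation route this abstract mentions, rescaling the data so the transformed disturbances share the same variance, can be sketched under one hypothetical assumption about the variance pattern:

```python
# Weighted least squares by transformation: a hypothetical sketch.
# Assume var(eps_i) is proportional to x_i**2. Dividing the model
#   y_i = a + b*x_i + eps_i
# through by x_i gives disturbances eps_i/x_i with constant variance,
# so OLS on the transformed data is BLU under that assumption.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 200)
eps = rng.normal(scale=0.5 * x)        # noise sd grows with x
y = 2.0 + 3.0 * x + eps                # true a = 2, b = 3

# Transformed model: y/x = a*(1/x) + b + eps/x
X = np.column_stack([1.0 / x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(X, y / x, rcond=None)
a_wls, b_wls = coef[0], coef[1]        # estimates of a and b
```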
Roberto J. Rona and Susan Chinn
- Published in print:
- 1999
- Published Online:
- September 2009
- ISBN:
- 9780192629197
- eISBN:
- 9780191723612
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192629197.003.0002
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter describes a mixed-longitudinal study. Children were followed for as long as they remained in school, with new five year olds joining and eleven year olds leaving each year. The resulting data required non-standard analysis to estimate trends over time. Data were clustered by area and school; the advantages and disadvantages of this are discussed. Data collection and management methods may now be of historical interest only, but did result in high response rates, which are tabulated. Summaries of the items of data, by year of collection, are given. An inner-city sample including ethnic minority children was studied in alternate years from 1983. Methods for standardization of measurements by age and sex are described, including the developments for lung function, height, and weight-for-height.
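Standardization of measurements by age and sex, as mentioned in this abstract, can be sketched in its simplest form as z-scores within age-sex groups (the chapter's actual methods for lung function and weight-for-height are more elaborate; all data below are hypothetical):

```python
# Age-sex standardization via within-group z-scores: a hypothetical sketch.
# Each measurement is expressed as (value - group mean) / group sd,
# where groups are defined by age and sex.
import pandas as pd

df = pd.DataFrame({
    "age":    [5, 5, 5, 5, 11, 11, 11, 11],
    "sex":    ["M", "F", "M", "F", "M", "F", "M", "F"],
    "height": [110.0, 108.0, 114.0, 112.0, 145.0, 147.0, 149.0, 151.0],
})

grouped = df.groupby(["age", "sex"])["height"]
df["height_z"] = (df["height"] - grouped.transform("mean")) / grouped.transform("std")
```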
Walter Willett
- Published in print:
- 1998
- Published Online:
- September 2009
- ISBN:
- 9780195122978
- eISBN:
- 9780199864249
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195122978.003.03
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter presents a conceptual background on the sources of variation in diet. It also assembles data on dietary variation. The daily variation in nutrient intake among free-living subjects has consistently proved to be large, although the magnitude varies according to nutrient. Measurements of dietary intake based on a single or small number of 24-hour recalls per subject may provide a reasonable estimate of the mean for a group, but the standard deviation will be greatly overestimated.
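The overestimation of the standard deviation described here follows from variance components: single-day intakes mix between-person and within-person variance, and averaging several days per subject shrinks the within-person part. A numerical sketch with hypothetical variances:

```python
# Why a single 24-hour recall overestimates the between-person SD:
# a hypothetical sketch. One-day intakes have variance
# var_between + var_within, while the true between-person SD is
# sqrt(var_between) alone.
import math

var_between = 100.0   # hypothetical person-to-person variance
var_within = 300.0    # hypothetical day-to-day variance (typically large)

sd_true = math.sqrt(var_between)                  # true between-person SD
sd_one_day = math.sqrt(var_between + var_within)  # inflated, here 2x too big

# Averaging k recall days per person shrinks the within-person part:
k = 7
sd_k_day_mean = math.sqrt(var_between + var_within / k)
```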
Douglas Cumming, Na Dai, and Sofia A. Johan
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199862566
- eISBN:
- 9780199332762
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199862566.003.0004
- Subject:
- Economics and Finance, Financial Economics
Chapter 4 introduces definitions of some key measures of hedge fund performance and risk profile. Methods of computing these key measures are provided along with examples using data from CISDM, one of the major hedge fund databases. Measures of return and risk for extant funds and disappearing funds, respectively, as well as by investment strategies, are set out.
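The chapter's measures are computed from CISDM data, which are not reproduced here; as a rough stand-in, a few standard return and risk measures can be sketched from a hypothetical series of monthly returns:

```python
# Common performance/risk measures from monthly returns: a hypothetical
# sketch (annualized return, annualized volatility, Sharpe ratio).
import numpy as np

monthly = np.array([0.012, -0.004, 0.020, 0.007, -0.011, 0.015,
                    0.003, 0.009, -0.006, 0.018, 0.001, 0.010])
rf_monthly = 0.002   # hypothetical monthly risk-free rate

mean_ret = monthly.mean()
ann_return = (1 + mean_ret) ** 12 - 1         # annualized from mean monthly
ann_vol = monthly.std(ddof=1) * np.sqrt(12)   # annualized volatility
sharpe = (mean_ret - rf_monthly) / monthly.std(ddof=1) * np.sqrt(12)
```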
Carey Witkov and Keith Zengel
- Published in print:
- 2019
- Published Online:
- November 2019
- ISBN:
- 9780198847144
- eISBN:
- 9780191882074
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198847144.003.0002
- Subject:
- Physics, Theoretical, Computational, and Statistical Physics, Particle Physics / Astrophysics / Cosmology
Chi-squared analysis requires familiarity with basic statistical concepts like the mean, standard deviation and standard error, and uncertainty propagation. Interesting aspects are presented to challenge even those familiar with these standard topics. End-of-chapter problems are included (with solutions in an appendix).
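The prerequisites this abstract lists, mean, standard deviation, standard error, and uncertainty propagation, can be sketched numerically (the measurements and the product example below are hypothetical):

```python
# Mean, sample standard deviation, standard error of the mean, and
# simple uncertainty propagation for a product, with hypothetical data.
import math

x = [9.8, 10.1, 9.9, 10.3, 9.9]            # hypothetical repeated measurements
n = len(x)
mean = sum(x) / n
sd = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (n - 1))
sem = sd / math.sqrt(n)                    # standard error of the mean

# Propagation for q = a * b with independent uncertainties:
# (dq/q)^2 = (da/a)^2 + (db/b)^2
a, da = 2.0, 0.1
b, db = 5.0, 0.2
q = a * b
dq = q * math.sqrt((da / a) ** 2 + (db / b) ** 2)
```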
Kent Osband
- Published in print:
- 2014
- Published Online:
- November 2015
- ISBN:
- 9780231151733
- eISBN:
- 9780231525411
- Item type:
- chapter
- Publisher:
- Columbia University Press
- DOI:
- 10.7312/columbia/9780231151733.003.0009
- Subject:
- Economics and Finance, Financial Economics
This chapter presents various statistical frauds in marketing. In Moody's Binomial Expansion Technique (BET), the simplest mixing distribution puts all its weight on a single default risk. This makes risk estimates quite uneven and ill-suited to the calibration of high safety thresholds. Also, BET is particularly prone to understate the risk on senior tranches of portfolios with low diversity scores. Some statisticians employ Gaussian (normal) approximations to yield results different from BET. The technique allows continuous returns, makes all risks scale with the standard deviation, avoids the unevenness of BET, and makes thresholds easier to calculate. Another statistical fraud in marketing is the application of copulas. The technique uses a uniform random draw of real numbers between 0 and 1, which is unnatural for discrete default risk.
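The core of BET as described here can be sketched in miniature: a correlated portfolio is replaced by D independent, identical credits (D being the diversity score), so the default count is binomial. All numbers below are hypothetical:

```python
# Moody's Binomial Expansion Technique, roughly sketched: with diversity
# score D and single default probability p, the number of defaults in the
# substitute portfolio is Binomial(D, p). Numbers are hypothetical.
from math import comb

D = 20        # hypothetical diversity score
p = 0.05      # hypothetical single default probability

def binom_pmf(k, n, prob):
    return comb(n, k) * prob ** k * (1 - prob) ** (n - k)

# Probability that a tranche absorbing up to 2 defaults is exhausted:
p_more_than_2 = 1 - sum(binom_pmf(k, D, p) for k in range(3))
```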
David Nugent
- Published in print:
- 2019
- Published Online:
- January 2020
- ISBN:
- 9781503609037
- eISBN:
- 9781503609723
- Item type:
- chapter
- Publisher:
- Stanford University Press
- DOI:
- 10.11126/stanford/9781503609037.003.0002
- Subject:
- Anthropology, Latin American Cultural Anthropology
This chapter introduces concepts that are crucial to the analysis of The Encrypted State. The most important of these is “sacropolitics,” the politics of public mass sacrifice. This term identifies a form of sovereignty that is distinct from biopolitics, necropolitics and the state of exception. Sacropolitics differs from biopolitics in the sense that it is not about the management of life. It differs from necropolitics in that it is not about the subjugation of life to death. Sacropolitics is neither about managing nor taking life but rather animating it. It is about bringing to life dead, dying or moribund populations and social formations. Sacropolitical efforts call upon the entire population to engage in public performances of mass sacrifice. These performances are intended to contribute to the creation of new life worlds that can redeem poor countries from the profane state into which they have fallen.
David Nugent
- Published in print:
- 2019
- Published Online:
- January 2020
- ISBN:
- 9781503609037
- eISBN:
- 9781503609723
- Item type:
- chapter
- Publisher:
- Stanford University Press
- DOI:
- 10.11126/stanford/9781503609037.003.0009
- Subject:
- Anthropology, Latin American Cultural Anthropology
This chapter explores official efforts to understand why state activities that had formerly been ordinary and routine (conscription) become increasingly difficult to carry out. It focuses on the police investigation of clandestine Aprista activities, and what this discovery suggests to the authorities about the existence of an extensive underground network of subversion. The chapter also traces the emergence in official circles of an explanation that resolves official anxieties, even as it displaces responsibility for problems that were of the government’s own making onto phantom forces that were regarded as hyper-real. The less the authorities were able to carry out everyday activities, the more extraordinary were the powers of subversion they attributed to these phantom forces. The most important of these forces was APRA.
P. Ishwara Bhat
- Published in print:
- 2020
- Published Online:
- January 2020
- ISBN:
- 9780199493098
- eISBN:
- 9780199098316
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199493098.003.0013
- Subject:
- Law, Philosophy of Law
Study of statistical data becomes inevitable because of the far-reaching socio-economic dimensions, demographic factors, and political implications of law’s operation. Quantitative legal research (QLR) insists on scientific measurement of the phenomena and appropriate generalization based on data analysis. The growing importance of QLR can be found in the policy making and implementing function of legislature, judiciary, and administration, and in the works of the Law Commission, policy researchers, and legal academicians. Designing of QLR entails framing of research questions, hypothesis formulation, and testing of the hypothesis in light of the statistical data collected. The sample size should be statistically appropriate, and collection, organisation, presentation, analysis, and interpretation of data in QLR needs to be systematic. Analysing quantitative data by focusing on proportion, central tendency, and deviation makes it possible to observe trends.
Robert H. Swendsen
- Published in print:
- 2019
- Published Online:
- February 2020
- ISBN:
- 9780198853237
- eISBN:
- 9780191887703
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198853237.003.0003
- Subject:
- Physics, Condensed Matter Physics / Materials, Theoretical, Computational, and Statistical Physics
The chapter presents an overview of various interpretations of probability. It introduces a ‘model probability,’ which assumes that all microscopic states that are essentially alike have the same probability in equilibrium. A justification for this fundamental assumption is provided. The basic definitions used in discrete probability theory are introduced, along with examples of their application. One such example, which illustrates how a random variable is derived from other random variables, demonstrates the use of the Kronecker delta function. The chapter further derives the binomial and multinomial distributions, which will be important in the following chapter on the configurational entropy, along with the useful approximation developed by Stirling and its variations. The Gaussian distribution is presented in detail, as it will be very important throughout the book.Less
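Two of the tools this abstract names, the binomial distribution and Stirling's approximation, lend themselves to a quick numerical check (the values of n, k, and p below are hypothetical):

```python
# Stirling's approximation against the exact log-factorial, and a
# binomial pmf: a small numerical check with hypothetical values.
import math

def stirling_ln_factorial(n):
    # ln n! ~ n ln n - n + (1/2) ln(2 pi n)
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

n = 50
exact = math.lgamma(n + 1)             # ln(50!)
approx = stirling_ln_factorial(n)
rel_err = abs(exact - approx) / exact  # small even at modest n

# Binomial distribution: P(k successes in n trials) with p = 0.5
k, p = 25, 0.5
pmf = math.comb(n, k) * p ** k * (1 - p) ** (n - k)
```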
Steve Selvin
- Published in print:
- 2019
- Published Online:
- May 2019
- ISBN:
- 9780198833444
- eISBN:
- 9780191872280
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198833444.003.0037
- Subject:
- Mathematics, Probability / Statistics, Applied Mathematics
A simple and extensive table that contains an error, reducing the analysis to nonsense.
Andy Hector
- Published in print:
- 2015
- Published Online:
- March 2015
- ISBN:
- 9780198729051
- eISBN:
- 9780191795855
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198729051.003.0005
- Subject:
- Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies, Ecology
This chapter pulls together material from earlier chapters to give an introductory user’s guide to error bars and intervals. There is no ‘best’ interval that suits all needs. Several different types of intervals are compared: standard deviations; standard errors of means; standard errors of differences; confidence intervals; least significant differences. The pros and cons of these main types of interval are reviewed. The results are presented using R graphics.
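The chapter presents its comparisons with R graphics; the interval types it lists can be sketched numerically in Python for two hypothetical samples:

```python
# The interval types compared in the chapter, computed for hypothetical
# samples: standard deviation, standard error of the mean, 95% confidence
# interval, standard error of a difference, and a least significant
# difference.
import numpy as np
from scipy import stats

a = np.array([5.1, 4.8, 5.6, 5.0, 4.9, 5.4])   # hypothetical sample A
b = np.array([5.9, 6.2, 5.7, 6.4, 6.0, 5.8])   # hypothetical sample B

sd_a = a.std(ddof=1)                  # standard deviation
sem_a = sd_a / np.sqrt(len(a))        # standard error of the mean
tcrit = stats.t.ppf(0.975, df=len(a) - 1)
ci_a = (a.mean() - tcrit * sem_a, a.mean() + tcrit * sem_a)   # 95% CI

# Standard error of the difference between the two means, and an LSD:
sem_b = b.std(ddof=1) / np.sqrt(len(b))
sed = np.sqrt(sem_a ** 2 + sem_b ** 2)
lsd = stats.t.ppf(0.975, df=len(a) + len(b) - 2) * sed
```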
David Nugent
- Published in print:
- 2019
- Published Online:
- January 2020
- ISBN:
- 9781503609037
- eISBN:
- 9781503609723
- Item type:
- chapter
- Publisher:
- Stanford University Press
- DOI:
- 10.11126/stanford/9781503609037.003.0008
- Subject:
- Anthropology, Latin American Cultural Anthropology
This chapter analyzes the authorities’ mounting difficulties in conscripting the population for public works—a second “routine” activity they had previously undertaken with great success. The chapter shows the delusional nature of government plans, and how delusion was represented as rationality and routine. The chapter also explores officials’ confusion about their inability to carry out the ordinary, everyday task of conscription, and their sense that what had formerly seemed ordinary was anything but that. Chapter Eight also examines the explanations that government officials generated to explain their inability to carry out activities that had formerly been routine—in which they attribute their difficulties to a series of phantom figures that are said to haunt government efforts to rule.
Steven J. Osterlind
- Published in print:
- 2019
- Published Online:
- January 2019
- ISBN:
- 9780198831600
- eISBN:
- 9780191869532
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198831600.003.0013
- Subject:
- Mathematics, Logic / Computer Science / Mathematical Philosophy
This chapter describes quantifying events in America and their historical context. The cotton gin is invented and has a tremendous impact on the country, bringing tensions over taxation, slavery, and states’ rights to the fore. Events leading to the American Civil War are described, as are other circumstances leading to the Industrial Revolution, first in England and then in America. Karl Pearson is introduced with a description of his The Grammar of Science, as well as his approach to scholarship as first defining a philosophy of science, an approach that has dominated much of scientific research from the time of the book’s publication to today. Pearson’s invention of the coefficient of correlation is described, and his other contributions to statistics are mentioned: standard deviation, skewness, kurtosis, and goodness of fit, as well as his formal introduction of the contingency table.
J. Tourenq and V. Rohrlich
- Published in print:
- 1994
- Published Online:
- November 2020
- ISBN:
- 9780195085938
- eISBN:
- 9780197560525
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195085938.003.0010
- Subject:
- Computer Science, Software Engineering
Correspondence analysis, a non-parametric principal component analysis, has been used to analyze heavy mineral data so that variations between both samples and minerals can be studied simultaneously. Four data sets were selected to demonstrate the method. The first example, modern sediments from the River Nile, illustrates how correspondence analysis brings out extra details in heavy mineral associations. The other examples come from the Plio-Quaternary "Bourbonnais Formation" of the French Massif Central. The first data set demonstrates how the principal factor plane (with axes 1 and 2) highlights relationships between geographical position and the predominant heavy mineral association (metamorphic minerals and zircon), suggesting the paleogeographic source. In the second set, the factor plane of axes 1 and 3 indicates a subdivision of the metamorphic mineral assemblage, suggesting two sources of metamorphic minerals. Finally, outcrop samples were projected onto the factor plane and reveal ancient drainage systems important for the accumulation of the Bourbonnais sands. Statistical methods used in interpreting heavy minerals in sediments range from simple, classical methods, such as the calculation of means and standard deviations, to the calculation of correspondences and variances. Use of multivariate methods has become increasingly frequent (Maurer, 1983; Stattegger, 1986, 1987; Delaune et al., 1989; Mezzadri and Saccani, 1989) since the first studies of Imbrie and van Andel (1964). Ordination techniques such as principal component analysis (Harman, 1961) synthesize large amounts of data and extract the most important relationships. We have chosen a non-parametric form of principal component analysis called correspondence analysis. This technique has been used in sedimentology by Chenet and Teil (1979) to investigate deep-sea samples, by Cojan and Teil (1982) and Mercier et al. (1987) to define paleoenvironments, and by Cojan and Beaudoin (1986) to show paleoecological control of deposition in French sedimentary basins. Correspondence analysis has been used successfully to interpret heavy mineral data (Tourenq et al., 1978a, 1978b; Bolin et al., 1982; Tourenq, 1986, 1989; Faulp et al., 1988; Ambroise et al., 1987). We provide examples of different situations where the method can be applied. We will not present the mathematical and statistical procedures involved in correspondence analysis, but refer readers to Benzécri et al.
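As a rough illustration of the technique the abstract describes, the following is a minimal sketch of correspondence analysis in Python using only NumPy. The count table and all variable names here are hypothetical, not data from the chapter; the decomposition follows the standard construction (SVD of the standardized residuals of the correspondence matrix), not the authors' specific computation.

```python
import numpy as np

# Hypothetical samples-by-minerals count table (NOT data from the chapter):
# rows = sediment samples, columns = heavy mineral species.
N = np.array([
    [30.0, 10.0,  5.0],
    [10.0, 25.0, 15.0],
    [ 5.0, 15.0, 35.0],
])

def correspondence_analysis(N):
    """Correspondence analysis via SVD of the standardized residuals."""
    P = N / N.sum()                                     # correspondence matrix
    r = P.sum(axis=1)                                   # row masses
    c = P.sum(axis=0)                                   # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    F = (U * sv) / np.sqrt(r)[:, None]                  # sample (row) coordinates
    G = (Vt.T * sv) / np.sqrt(c)[:, None]               # mineral (column) coordinates
    return F, G, sv

F, G, sv = correspondence_analysis(N)
# Plotting column 0 against column 1 of F and G together gives the
# "principal factor plane (axes 1 and 2)" referred to in the abstract.
```

A standard sanity check: the total inertia (the chi-square statistic of the table divided by its grand total) equals the sum of the squared singular values, and the last singular value is numerically zero because the decomposition has one trivial dimension.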