Samir Okasha
- Published in print:
- 2006
- Published Online:
- January 2007
- ISBN:
- 9780199267972
- eISBN:
- 9780191708275
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199267972.003.0001
- Subject:
- Philosophy, Philosophy of Science
This chapter studies the logic of evolution by natural selection and the origin of the levels of selection question. The abstract nature of the core Darwinian principles, and thus their potential applicability at multiple levels of the biological hierarchy, is emphasized. Price's equation, a key foundational result in evolutionary theory, is introduced and discussed; it teaches us that character-fitness covariance is the essence of natural selection. The relation between Price's equation and Lewontin's tripartite analysis of the conditions required for Darwinian evolution is briefly examined.
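For reference, the standard statement of Price's equation is:

```latex
\overline{w}\,\Delta\overline{z}
  \;=\; \operatorname{Cov}(w_i, z_i)
  \;+\; \operatorname{E}\!\left(w_i\,\Delta z_i\right)
```

where $z_i$ is the character value of the $i$-th entity, $w_i$ its fitness, $\overline{w}$ the population mean fitness, and $\Delta\overline{z}$ the change in the mean character across one generation. The first term is the character-fitness covariance that the chapter identifies with natural selection; the second captures transmission bias.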
Samir Okasha
- Published in print:
- 2006
- Published Online:
- January 2007
- ISBN:
- 9780199267972
- eISBN:
- 9780191708275
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199267972.003.0003
- Subject:
- Philosophy, Philosophy of Science
This chapter analyzes the causal dimension of multi-level selection theory. Particular attention is paid to the idea that direct selection at one hierarchical level may generate, as a side effect, a character-fitness covariance at a different level, and thus the appearance of direct selection at that level. Such ‘cross-level’ byproducts lie at the heart of the levels of selection problem and show that Price's equation cannot be an infallible guide to determining the level(s) at which selection is acting. The nature of cross-level byproducts in MLS1 and MLS2 is examined, and the statistical technique known as contextual analysis, which can be used to detect cross-level byproducts, is explored.
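The core of contextual analysis can be sketched as a multiple regression of individual fitness on the individual character and the group mean character. In the hypothetical simulation below (all names and parameters invented for illustration), only individual-level selection operates, so the partial coefficient on the group mean comes out near zero even though group means and group mean fitnesses covary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 groups of 10 individuals; fitness depends only on
# the individual character z, so any group-level covariance is a byproduct.
n_groups, n_per = 20, 10
z = rng.normal(size=(n_groups, n_per))
w = 1.0 + 0.5 * z + 0.1 * rng.normal(size=z.shape)  # individual selection only

z_flat = z.ravel()
zbar = np.repeat(z.mean(axis=1), n_per)  # each individual's group mean
w_flat = w.ravel()

# Contextual analysis: regress w on an intercept, z, and zbar.  A near-zero
# partial coefficient on zbar suggests the group-level character-fitness
# covariance is a cross-level byproduct, not direct group selection.
X = np.column_stack([np.ones_like(z_flat), z_flat, zbar])
beta, *_ = np.linalg.lstsq(X, w_flat, rcond=None)
print(beta)  # beta[2], the coefficient on zbar, should be close to zero
```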
Moody T. Chu and Gene H. Golub
- Published in print:
- 2005
- Published Online:
- September 2007
- ISBN:
- 9780198566649
- eISBN:
- 9780191718021
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566649.003.0008
- Subject:
- Mathematics, Applied Mathematics
The task of retrieving useful information while maintaining the underlying physical feasibility often necessitates the search for a good structured lower-rank approximation of the data matrix. This chapter addresses some of the theoretical and numerical issues involved in this kind of problem. Six classes of structure are considered: Toeplitz, circulant, covariance, Euclidean distance, normalized data, and nonnegative matrices.
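For the Toeplitz case, one common numerical approach of the alternating-projection ("lift-and-project") type can be sketched as below. This toy script is only an illustration under assumed data, not the chapter's algorithm verbatim: it alternates between the nearest rank-r matrix (truncated SVD) and the nearest Toeplitz matrix (diagonal averaging).

```python
import numpy as np

def toeplitz_project(A):
    """Project onto Toeplitz matrices by averaging each diagonal."""
    n = A.shape[0]
    T = np.empty_like(A, dtype=float)
    for k in range(-n + 1, n):
        d = np.diagonal(A, offset=k).mean()
        idx = np.arange(max(0, -k), min(n, n - k))
        T[idx, idx + k] = d
    return T

def rank_project(A, r):
    """Project onto matrices of rank at most r via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

def structured_low_rank(A, r, iters=200):
    """Alternate the two projections (the 'lift-and-project' idea)."""
    X = A.astype(float).copy()
    for _ in range(iters):
        X = toeplitz_project(rank_project(X, r))
    return X

# Hypothetical data: a rank-1 Toeplitz matrix plus small noise.
n = 4
i, j = np.indices((n, n))
A_true = 0.5 ** (j - i)          # T[i, j] = 0.5**(j - i): Toeplitz, rank 1
rng = np.random.default_rng(1)
A = A_true + 0.01 * rng.normal(size=(n, n))

X = structured_low_rank(A, r=1)  # exactly Toeplitz, approximately rank 1
```

Because the rank constraint is nonconvex, alternating projections of this kind are a heuristic: convergence to the nearest feasible point is typical in practice but not guaranteed.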
Aman Ullah
- Published in print:
- 2004
- Published Online:
- August 2004
- ISBN:
- 9780198774471
- eISBN:
- 9780191601347
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0198774478.003.0004
- Subject:
- Economics and Finance, Econometrics
This chapter presents the finite sample analysis of estimators and test statistics for regression models in which the errors have a scalar covariance matrix. Most of the results for the case of normally distributed errors are available in econometrics textbooks. Results for the nonnormal case, which have rarely been discussed in the literature, are also presented.
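As a minimal illustration of the scalar-covariance setting, the exact finite-sample covariance of the OLS estimator, sigma^2 (X'X)^-1 when the errors satisfy E[ee'] = sigma^2 I, can be checked by Monte Carlo (design and parameters below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fixed design: intercept plus one regressor
n = 50
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])
beta = np.array([1.0, 2.0])
sigma = 0.5

# Exact finite-sample covariance of OLS under scalar error covariance
V_exact = sigma**2 * np.linalg.inv(X.T @ X)

# Monte Carlo check with normal errors: 5000 replications at once
E = sigma * rng.normal(size=(5000, n))
Y = X @ beta + E                               # each row is one sample
B = np.linalg.lstsq(X, Y.T, rcond=None)[0].T   # OLS estimates, one per row
V_mc = np.cov(B, rowvar=False)

print(V_exact)
print(V_mc)  # should approximate V_exact
```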
Aman Ullah
- Published in print:
- 2004
- Published Online:
- August 2004
- ISBN:
- 9780198774471
- eISBN:
- 9780191601347
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0198774478.003.0005
- Subject:
- Economics and Finance, Econometrics
This chapter examines regression models with a nonscalar error covariance matrix. This includes estimators and test statistics in the context of linear regression with heteroskedasticity and serial correlation, seemingly unrelated regressions, limited dependent variables, and panel data models.
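A minimal sketch of estimation under a nonscalar (here diagonal, heteroskedastic) error covariance, assuming the variances are known: the GLS estimator (X' Omega^-1 X)^-1 X' Omega^-1 y coincides with OLS applied to suitably weighted data. All data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical heteroskedastic regression: Var(e_i) grows with x_i
n = 40
x = np.linspace(1.0, 4.0, n)
X = np.column_stack([np.ones(n), x])
omega = x**2                      # known error variances (diagonal of Omega)
y = X @ np.array([1.0, 2.0]) + np.sqrt(omega) * rng.normal(size=n)

# GLS: solve (X' Omega^{-1} X) b = X' Omega^{-1} y
W = np.diag(1.0 / omega)
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Equivalent: OLS after dividing each row by its error standard deviation
Xs = X / np.sqrt(omega)[:, None]
ys = y / np.sqrt(omega)
beta_wls = np.linalg.lstsq(Xs, ys, rcond=None)[0]

print(beta_gls, beta_wls)  # identical up to floating-point error
```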
Donna Harrington
- Published in print:
- 2008
- Published Online:
- January 2009
- ISBN:
- 9780195339888
- eISBN:
- 9780199863662
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195339888.003.0005
- Subject:
- Social Work, Research and Evaluation
This chapter focuses on using multiple-group confirmatory factor analysis (CFA) to examine the appropriateness of CFA models across different groups and populations. Multiple-group CFA involves simultaneous CFAs in two or more groups, using separate variance-covariance matrices (or raw data) for each group. Measurement invariance is tested by placing equality constraints on parameters across the groups. Two examples of multiple-group CFA from the social work literature are discussed, followed by a detailed multiple-group CFA building on the Job Satisfaction Scale (JSS) example presented in the previous chapter. This is one of the more complex uses of CFA, and this chapter offers only a brief introduction; resources for further information are provided at the end of the chapter.
Manuel Arellano
- Published in print:
- 2003
- Published Online:
- July 2005
- ISBN:
- 9780199245284
- eISBN:
- 9780191602481
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0199245282.003.0005
- Subject:
- Economics and Finance, Econometrics
This chapter analyses the time series properties of panel data sets, focusing on short panels. It discusses time effects and moving average covariances. It presents estimates of covariance structures and tests the permanent income hypothesis.
Karen A. Randolph and Laura L. Myers
- Published in print:
- 2013
- Published Online:
- May 2013
- ISBN:
- 9780199764044
- eISBN:
- 9780199332533
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199764044.001.0001
- Subject:
- Social Work, Research and Evaluation
The complexity of social problems necessitates that social work researchers utilize multivariate statistical methods in their investigations. Having a thorough understanding of basic statistics can facilitate this process, as multivariate methods have as their foundation many of these basic statistical procedures. In this pocket guide, the authors introduce readers to three of the more frequently used multivariate statistical methods in social work research—multiple linear regression analysis, analysis of variance and covariance, and path analysis—with an emphasis on the basic statistics as important features of these methods. The primary intention is to help prepare entry-level doctoral students and early career social work researchers in the use of multivariate statistical methods by offering a straightforward, easy-to-understand explanation of these methods and the basic statistics that inform them. The pocket guide begins with a review of basic statistics, hypothesis testing with inferential statistics, and bivariate analytic methods. Subsequent sections describe bivariate and multiple linear regression analyses, one-way and two-way analysis of variance (ANOVA) and covariance (ANCOVA), and path analysis. In each chapter, the authors introduce the various basic statistical procedures by providing definitions, formulas, descriptions of the underlying logic and assumptions of each procedure, and examples of how they have been applied in the social work research literature. The authors also explain estimation procedures and how to interpret results. Each chapter provides brief step-by-step instructions for conducting these statistical tests in Statistical Package for the Social Sciences (SPSS) and AMOS (SPSS, Inc. 2011), based on data from the National Educational Longitudinal Study of 1988 (NELS: 88). Finally, the book offers a companion website that provides more detailed instructions, as well as data sets and worked examples.
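Although the book works in SPSS and AMOS, the elementary tests it covers can be sketched in Python with SciPy; the groups and scores below are entirely hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical outcome scores for three groups (e.g., three program types)
rng = np.random.default_rng(4)
g1 = rng.normal(50, 10, 40)
g2 = rng.normal(58, 10, 40)
g3 = rng.normal(66, 10, 40)

# Difference-of-means t-test between two groups
t_stat, t_p = stats.ttest_ind(g1, g3)

# One-way ANOVA across all three groups
f_stat, f_p = stats.f_oneway(g1, g2, g3)

print(f"t = {t_stat:.2f} (p = {t_p:.4f}), F = {f_stat:.2f} (p = {f_p:.4f})")
```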
Halbert White, Tae‐Hwan Kim, and Simone Manganelli
- Published in print:
- 2010
- Published Online:
- May 2010
- ISBN:
- 9780199549498
- eISBN:
- 9780191720567
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199549498.003.0012
- Subject:
- Economics and Finance, Econometrics
This chapter extends Engle and Manganelli's (2004) univariate CAViaR model to a multi-quantile version, MQ-CAViaR. This allows for both a general vector autoregressive structure in the conditional quantiles and the presence of exogenous variables. The MQ-CAViaR model is then used to specify conditional versions of the more robust skewness and kurtosis measures discussed in Kim and White (2004). The chapter is organized as follows. Section 2 develops the MQ-CAViaR data generating process (DGP). Section 3 proposes a quasi-maximum likelihood estimator for the MQ-CAViaR process, and proves its consistency and asymptotic normality. Section 4 shows how to consistently estimate the asymptotic variance-covariance matrix of the MQ-CAViaR estimator. Section 5 specifies conditional quantile-based measures of skewness and kurtosis based on MQ-CAViaR estimates. Section 6 contains an empirical application of our methods to the S&P 500 index. The chapter also reports results of a simulation experiment designed to examine the finite sample behavior of our estimator. Section 7 contains a summary and concluding remarks.
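Quantile-based skewness and kurtosis measures of the kind discussed in Kim and White (2004), such as Bowley's skewness coefficient and Moors's kurtosis coefficient, can be sketched unconditionally as follows (the conditional versions in the chapter replace the sample quantiles with MQ-CAViaR estimates):

```python
import numpy as np

def bowley_skewness(x):
    """Quantile-based (Bowley) skewness: zero for any symmetric distribution."""
    q1, q2, q3 = np.quantile(x, [0.25, 0.5, 0.75])
    return (q3 + q1 - 2 * q2) / (q3 - q1)

def moors_kurtosis(x):
    """Quantile-based (Moors) kurtosis, built from the seven octiles."""
    e = np.quantile(x, np.arange(1, 8) / 8)
    return ((e[6] - e[4]) + (e[2] - e[0])) / (e[5] - e[1])

# Hypothetical samples: a symmetric and a right-skewed distribution
rng = np.random.default_rng(5)
sym = rng.normal(size=100_000)
skewed = rng.exponential(size=100_000)

print(bowley_skewness(sym), bowley_skewness(skewed), moors_kurtosis(sym))
```

Unlike moment-based skewness and kurtosis, these measures are well defined even when higher moments do not exist, which is the robustness property the chapter exploits.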
Thomas Ryckman
- Published in print:
- 2005
- Published Online:
- April 2005
- ISBN:
- 9780195177176
- eISBN:
- 9780199835324
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0195177177.003.0002
- Subject:
- Philosophy, Philosophy of Science
A tension within Kant’s Transcendental Analytic, regarding the combination of the “active” faculty of understanding with the “passive” faculty of sensibility, underlies the distinct appraisals in 1920 by Hans Reichenbach and Ernst Cassirer of constitutive but “relativized” a priori principles in the GTR. Reichenbach’s “principles of coordination” presuppose Schlick’s conception of cognition as a coordination of formal concepts to objects of perceptual experience, and are shown to be consonant only with the commitments of scientific realism. Cassirer’s rejection of the “active”/“passive” dichotomy promoted his conception of general covariance as a high-level principle of objectivity, much in accord with Einstein’s own later views, as recently articulated in the literature on the “Hole Argument.” In particular, the principle of general covariance is shown to place significant constraints on field theories, a point noted by David Hilbert and implicit in the work of Emmy Noether.
Garrison Sposito
- Published in print:
- 1999
- Published Online:
- November 2020
- ISBN:
- 9780195109900
- eISBN:
- 9780197561058
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195109900.003.0007
- Subject:
- Earth Sciences and Geography, Oceanography and Hydrology
The first detailed study of solute movement through the vadose zone at field scales of space and time was performed by Biggar and Nielsen (1976). Their experiment was conducted on a 150-ha agricultural site located at the West Side Field Station of the University of California, where the soil (Panoche series) exhibits a broad range of textures. Twenty well-separated, 6.5-m-square plots, previously instrumented to monitor matric potential and withdraw soil solution for chemical analysis, were ponded with water containing low concentrations of the tracer anions chloride and nitrate. After about 1 week, steady-state infiltration conditions were established, and 0.075 m of water containing the two anions at concentrations between 0.1 and 0.2 mol L⁻¹ was leached through each plot at the local infiltration rate, which varied widely from 0.054 to 0.46 m day⁻¹, depending on plot location. Once this solute pulse had infiltrated (< 1.5 days), leaching under ponded conditions was recommenced with water low in chloride and nitrate. Solution samples were extracted before and after the solute pulse input at six depths up to 1.83 m below the land surface in each plot. Analyses of these samples for chloride and nitrate produced a broad range of concentration data which nonetheless showed an excellent linear correlation between the concentrations of the two anions (R² = 0.975), with a proportionality coefficient equal to that expected on the basis of the composition of the input pulse. Values of the measured solute concentrations at each sampling depth were tabulated as functions of the leaching time. Biggar and Nielsen (1976) decided to fit their very large concentration-depth-time database to a finite-pulse-input solution of the one-dimensional advection-dispersion equation, leaving both the dispersion coefficient D and advection velocity u as adjustable parameters.
The 359 field-wide values of u obtained in this way were highly variable (CV ≈ 200%), but also highly correlated (R² = 0.84) and proportional to values of the advection velocity calculated directly as the ratio of water flux density to water content in each field plot (Biggar and Nielsen, 1976, figure 4).
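The model fitted by Biggar and Nielsen is the standard one-dimensional advection-dispersion equation,

```latex
\frac{\partial c}{\partial t}
  = D\,\frac{\partial^{2} c}{\partial x^{2}}
  - u\,\frac{\partial c}{\partial x}
```

where $c$ is the solute concentration, $D$ the dispersion coefficient, and $u$ the advection velocity, the two parameters left adjustable in their fit.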
Peter Main
- Published in print:
- 2009
- Published Online:
- September 2009
- ISBN:
- 9780199219469
- eISBN:
- 9780191722516
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199219469.003.0012
- Subject:
- Physics, Crystallography: Physics
In crystallography, numerical parameters for the structure are derived from experimental data. This chapter discusses how the data and parameters are related, and introduces data fitting procedures including unweighted and weighted means, and least-squares criteria for a ‘best fit’. The simple case of linear regression for the two parameters of a straight line is treated in some detail in order to explain the least-squares tools of observational equations and matrix algebra, leading to variances and covariances. Restraints and constraints are applied, and their important distinction made clear. Non-linearity in the observational equations leads to further complications, with only parameter shifts rather than the parameters themselves obtainable through least-squares treatment. Ill-conditioning and matrix singularity are explained, with reference to crystallographic relevance. Computing aspects are considered, since least-squares refinement is particularly expensive computationally.
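The straight-line machinery the chapter describes, observational equations, weighted normal equations, and the resulting variance-covariance matrix of the parameters, can be sketched as follows (observations and uncertainties below are hypothetical):

```python
import numpy as np

# Hypothetical observations y_i = a + b*x_i with standard uncertainties s_i
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
s = np.array([0.1, 0.1, 0.2, 0.1, 0.3])

# Observational (design) equations: each row of A multiplies (a, b)
A = np.column_stack([np.ones_like(x), x])
W = np.diag(1.0 / s**2)            # weights w_i = 1 / sigma_i^2

# Weighted normal equations: (A' W A) p = A' W y
N = A.T @ W @ A
p = np.linalg.solve(N, A.T @ W @ y)

# Variance-covariance matrix of the fitted parameters, assuming the s_i
# are true standard uncertainties (unit-weight variance of 1)
cov = np.linalg.inv(N)
print(p, np.sqrt(np.diag(cov)))    # parameters and their e.s.d.s
```

The off-diagonal element of `cov` is the parameter covariance that must be carried along when propagating uncertainties to derived quantities.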
Simon Parsons and William Clegg
- Published in print:
- 2009
- Published Online:
- September 2009
- ISBN:
- 9780199219469
- eISBN:
- 9780191722516
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199219469.003.0016
- Subject:
- Physics, Crystallography: Physics
This chapter outlines some basic statistical methods and shows their application in crystallography, particularly in analysing the results. Concepts include: random and systematic errors, precision and accuracy, and distributions and their properties. Important properties include the mean and standard deviation of a distribution. The normal (Gaussian) distribution is of particular importance in view of the Central Limit Theorem. The mean of a set of values may be weighted or unweighted, and the place of weights in crystallography is discussed, especially in structure refinement. Some statistical tests and tools are described and used, including normal probability plots and analyses of variance. Correlation and covariance among parameters and derived results are considered. Possible sources of systematic and other errors in crystal structures are listed and their impact assessed. A simple checklist is provided for assessing results.
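The weighted versus unweighted mean, with the usual weights 1/sigma^2, can be illustrated as follows (measurements and uncertainties are hypothetical):

```python
import numpy as np

# Hypothetical repeated measurements with individual standard uncertainties
x = np.array([5.02, 4.98, 5.10, 4.95])
sigma = np.array([0.02, 0.02, 0.10, 0.05])

# Unweighted mean and its standard error
mean_u = x.mean()
se_u = x.std(ddof=1) / np.sqrt(len(x))

# Weighted mean with weights 1/sigma^2, and its standard uncertainty
w = 1.0 / sigma**2
mean_w = np.sum(w * x) / np.sum(w)
se_w = 1.0 / np.sqrt(np.sum(w))

print(mean_u, mean_w, se_w)
```

The weighted mean is pulled toward the most precise measurements and carries a smaller standard uncertainty than the unweighted mean, which is why weighting matters in structure refinement.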
Quan Li
- Published in print:
- 2018
- Published Online:
- March 2019
- ISBN:
- 9780190656218
- eISBN:
- 9780190656256
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190656218.001.0001
- Subject:
- Political Science, Political Theory
This book seeks to teach undergraduate and graduate students in social sciences how to use R to manage, visualize, and analyze data in order to answer substantive questions and replicate published findings. This book distinguishes itself from other introductory R or statistics books in three ways. First, targeting an audience rarely exposed to statistical programming, it adopts a minimalist approach and covers only the most important functions and skills in R that one will need for conducting reproducible research projects. Second, it emphasizes meeting the practical needs of students using R in research projects. Specifically, it teaches students how to import, inspect, and manage data; understand the logic of statistical inference; visualize data and findings via histograms, boxplots, scatterplots, and diagnostic plots; and analyze data using one-sample t-test, difference-of-means test, covariance, correlation, ordinary least squares (OLS) regression, and model assumption diagnostics. Third, it teaches students how to replicate the findings in published journal articles and diagnose model assumption violations. The principle behind this book is to teach students to learn as little R as possible but to do as much reproducible, substance-driven data analysis at the beginner or intermediate level as possible. The minimalist approach dramatically reduces the learning cost but still provides adequate information for meeting the practical research needs of senior undergraduate and beginning graduate students. Having completed this book, students can use R and statistical analysis to answer questions regarding some substantively interesting continuous outcome variable in a cross-sectional design.
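The book itself carries out this workflow in R. Purely as an illustration of the quantities it names (covariance, correlation, and bivariate OLS), here is a minimal sketch in Python with hypothetical data; the variable names and numbers are invented for the example:

```python
import numpy as np

# Hypothetical data: study hours (x) and exam scores (y).
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([55.0, 60.0, 68.0, 74.0, 83.0])

# Sample covariance and Pearson correlation.
cov_xy = np.cov(x, y)[0, 1]
corr_xy = np.corrcoef(x, y)[0, 1]

# Bivariate OLS: slope = cov(x, y) / var(x), intercept from the means.
slope = cov_xy / np.var(x, ddof=1)
intercept = y.mean() - slope * x.mean()

print(cov_xy, corr_xy, slope, intercept)
```

The same estimates would come from `lm(y ~ x)` in R; the point is only that covariance, correlation, and the OLS coefficients are tightly related quantities.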
Bas C. van Fraassen
- Published in print:
- 1989
- Published Online:
- November 2003
- ISBN:
- 9780198248606
- eISBN:
- 9780191597459
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0198248601.003.0011
- Subject:
- Philosophy, Philosophy of Science
The concepts analysed and developed in the previous chapter are applied to discussions of the development of modern mechanics, including symmetries of space and time, relativity, conservation laws, invariance and covariance, and the relation to older ideas of laws of nature.
Ta-Pei Cheng
- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780199573639
- eISBN:
- 9780191722448
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199573639.003.0014
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
The mathematical realization of the equivalence principle (EP) is the principle of general covariance: general relativity (GR) equations must be covariant with respect to general coordinate transformations. To go from special relativity (SR) to GR equations, one replaces ordinary derivatives by covariant derivatives. The SR equation of motion turns into the geodesic equation. The Einstein equation, as the relativistic gravitational field equation, relates the energy-momentum tensor to the Einstein curvature tensor. The Einstein equation in the space exterior to a spherical source is solved to obtain the Schwarzschild solution. The solution of the Einstein equation that satisfies the cosmological principle is the Robertson–Walker spacetime. The relation of the cosmological Friedmann equations to the Einstein field equation is explicated. The compatibility of the cosmological-constant term with the mathematical structure of the Einstein equation, and the interpretation of this term as the vacuum energy tensor, are discussed.
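The central equations named in this abstract can be stated compactly. Sign and unit conventions differ between texts, so the following is one common form (with c = 1):

```latex
% Geodesic equation: the SR equation of motion with covariant derivatives
\frac{d^2 x^\mu}{d\tau^2}
  + \Gamma^{\mu}_{\alpha\beta}\,
    \frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0

% Einstein field equation: Einstein curvature tensor from energy-momentum
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}
  = 8\pi G_{\mathrm{N}}\, T_{\mu\nu}

% Cosmological-constant term, read as a vacuum energy-momentum tensor
G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G_{\mathrm{N}}\, T_{\mu\nu}
\quad\Longleftrightarrow\quad
T^{(\Lambda)}_{\mu\nu} = -\frac{\Lambda}{8\pi G_{\mathrm{N}}}\, g_{\mu\nu}
```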
Valeri P. Frolov and Andrei Zelnikov
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199692293
- eISBN:
- 9780191731860
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199692293.003.0002
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
The equivalence principle relates physics in an accelerated frame to physics in a gravitational field, so many properties of black holes can be understood beginning with flat Minkowski spacetime. We introduce curved coordinates, the metric, and non-inertial frames, and describe the covariance principle. We discuss physics in a uniformly accelerated frame and in a static uniform gravitational field. We introduce the so-called Rindler horizon and describe its properties. The geometry near the Rindler horizon approximates the geometry near a black hole when the mass of the latter becomes large. This explains why the results presented in this chapter are important for understanding many properties of 'real' black holes.
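As a sketch of the construction described, the frame of a uniformly accelerated observer in flat spacetime leads to Rindler coordinates (η, ξ); with c = 1, a standard form of the metric in the right wedge is:

```latex
% Minkowski metric in Rindler coordinates (right wedge, c = 1)
ds^2 = -(a\xi)^2\, d\eta^2 + d\xi^2 + dy^2 + dz^2

% The Rindler horizon sits at \xi = 0, where the timelike Killing
% vector \partial_\eta becomes null, in analogy with a black hole horizon.
```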
Grigory E. Volovik
- Published in print:
- 2009
- Published Online:
- January 2010
- ISBN:
- 9780199564842
- eISBN:
- 9780191709906
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199564842.003.0006
- Subject:
- Physics, Condensed Matter Physics / Materials, Particle Physics / Astrophysics / Cosmology
The energy-momentum tensor for the vacuum field which represents gravity is non-covariant, since the effective gravitational field obeys hydrodynamic equations rather than the Einstein equations. However, even for the fully covariant dynamics of gravity in Einstein's theory, the corresponding quantity, 'the energy-momentum tensor for the gravitational field', cannot be presented in covariant form. This is the famous problem of the energy-momentum tensor in general relativity: one must sacrifice either the covariance of the theory or the true conservation law. From the condensed matter point of view, the inconsistency between covariance and the conservation law for energy and momentum is an aspect of the much larger problem of the non-locality of effective theories. This chapter discusses the advantages and drawbacks of effective theory; non-locality in effective theory; true conservation and covariant conservation; covariance versus conservation; paradoxes of effective theory; the Novikov–Wess–Zumino action for ferromagnets as an example of non-locality; effective versus microscopic theory; whether quantum gravity exists; what effective theory can and cannot do; and universality classes of effective theories of superfluidity.
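The contrast between covariant conservation and a true conservation law mentioned here can be made explicit. In a curved background the matter energy-momentum tensor satisfies only the covariant continuity equation, whose connection terms prevent it from integrating to globally conserved charges:

```latex
% Covariant conservation: not a true conservation law, because the
% connection terms act as sources and sinks of energy and momentum
\nabla_\nu T^{\mu\nu}
  = \partial_\nu T^{\mu\nu}
  + \Gamma^{\mu}_{\nu\lambda}\, T^{\lambda\nu}
  + \Gamma^{\nu}_{\nu\lambda}\, T^{\mu\lambda}
  = 0
```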
Carlo Giunti and Chung W. Kim
- Published in print:
- 2007
- Published Online:
- January 2010
- ISBN:
- 9780198508717
- eISBN:
- 9780191708862
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198508717.003.0002
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter discusses the physics of quantized Dirac fields, with detailed treatment of the Dirac equation, representations of the gamma matrices, products of gamma matrices, relativistic covariance (boosts, rotations, and invariants), helicity, gauge transformations, chirality, solutions of the Dirac equation (Dirac representation, chiral representation, two-component helicity-eigenstate spinors, and the massless field), quantization, symmetry transformations of states (space-time translations and Lorentz transformations), C, P, and T transformations, wave packets, and Fierz transformations.
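The starting point of such a treatment is standard: in natural units, the free Dirac equation and the defining Clifford algebra of the gamma matrices read

```latex
% Free Dirac equation for a spinor field \psi of mass m
(i\gamma^\mu \partial_\mu - m)\,\psi = 0

% Defining anticommutation relation of the gamma matrices
\{\gamma^\mu, \gamma^\nu\} = 2\, g^{\mu\nu}\, \mathbb{1}
```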
Oliver Johns
- Published in print:
- 2005
- Published Online:
- January 2010
- ISBN:
- 9780198567264
- eISBN:
- 9780191717987
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198567264.003.0015
- Subject:
- Physics, Atomic, Laser, and Optical Physics
This chapter develops techniques that allow relativistically covariant calculations to be done in an elegant manner and introduces what are known as fourvectors. Fourvectors are analogous to the familiar vectors in three-dimensional Cartesian space (termed threevectors), except that, in addition to the three spatial components, fourvectors have a zeroth component associated with time. This additional component allows us to deal with the fact that the Lorentz transformation of special relativity transforms time as well as spatial coordinates. The theory of fourvectors and operators is presented using an invariant notation. The concepts of fourvectors and tensors are discussed in the simple context of special relativity, along with the choice of metric, the relativistic interval, the space-time diagram, general fourvectors and the construction of new fourvectors, covariant and contravariant components, general Lorentz transformations, transformation of components, examples of Lorentz transformations, the gradient fourvector, manifest covariance, formal covariance, fourvector operators, fourvector dyadics, wedge products, and the manifestly covariant form of Maxwell's equations.
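A quick numerical illustration of why fourvectors pay off: the relativistic interval built from a fourvector and the metric is unchanged by a Lorentz boost. The sketch below (in Python, with an invented event and the (+, -, -, -) signature; texts differ on sign conventions) checks this:

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -); one common convention.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def boost_x(beta):
    """Lorentz boost along x with velocity beta (units c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

# A hypothetical event fourvector (t, x, y, z).
x = np.array([2.0, 1.0, 0.5, -0.3])
xp = boost_x(0.6) @ x

# The interval x^mu eta_{mu nu} x^nu is the same in both frames.
interval = x @ eta @ x
interval_boosted = xp @ eta @ xp
print(interval, interval_boosted)
```

The two printed values agree up to floating-point rounding, which is the numerical face of Lorentz invariance of the interval.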