David A. Savitz
- Published in print:
- 2003
- Published Online:
- September 2009
- ISBN:
- 9780195108408
- eISBN:
- 9780199865765
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195108408.001.0001
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This book offers a strategy for assessing epidemiologic research findings. It identifies and evaluates specific tools for assessing the presence and impact of selection bias in both cohort and case-control studies, bias from non-response, confounding, exposure measurement error, disease measurement error, and random error. Assessing how much confidence one can have in a given set of findings is a difficult task. Two elements have been lacking in empirical tools for assessing a given study's susceptibility to specific sources of error. One is a link between methodological principles and the tools themselves, which involves taking stock of why a strategy for addressing a potential bias may or may not actually be informative, and how it could be misleading. The other is a full listing of the candidates to consider in addressing a potential problem, in the hope of improving the ability to draw on one tool or another in the appropriate situation. This book aims to link methodological principles with research practice.
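As a toy illustration of the kind of tool the book describes for quantifying selection bias, the sketch below applies a standard textbook correction: if selection probabilities for each exposure-disease cell can be estimated, the observed odds ratio is divided by the bias factor they imply. The function names and probabilities are hypothetical, not taken from the book.

```python
# Illustrative only: a simple quantitative bias analysis for selection
# bias in a case-control study, under assumed selection probabilities.

def selection_bias_factor(s_case_exp, s_case_unexp, s_ctrl_exp, s_ctrl_unexp):
    """Factor by which differential selection distorts the observed OR."""
    return (s_case_exp * s_ctrl_unexp) / (s_case_unexp * s_ctrl_exp)

def corrected_odds_ratio(or_observed, *selection_probs):
    """Divide the observed odds ratio by the selection-bias factor."""
    return or_observed / selection_bias_factor(*selection_probs)

# Hypothetical numbers: exposed cases are recruited more readily (0.80)
# than unexposed cases (0.70); control recruitment ignores exposure (0.50).
print(corrected_odds_ratio(2.0, 0.80, 0.70, 0.50, 0.50))  # approx. 1.75
```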
Neil Abell, David W. Springer, and Akihito Kamata
- Published in print:
- 2009
- Published Online:
- September 2009
- ISBN:
- 9780195333367
- eISBN:
- 9780199864300
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195333367.003.0004
- Subject:
- Social Work, Research and Evaluation
This chapter provides a theoretical overview of reliability, as well as pragmatic considerations in establishing different types of reliability. To illustrate key points, it draws from two scales: the Family Responsibility Scale and the Parental Self-Care Scale. Various forms of reliability are addressed, including interrater, test-retest, and internal consistency. Guidelines for interpreting reliability coefficients for clinical and research purposes are provided, including computation of stratified alpha for multidimensional measures. Computation of the standard error of measurement (SEM) is illustrated. The chapter concludes by asserting that a solid reliability coefficient is indispensable as a primary principle in assessing the quality of scores from a scale or test.
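The two computations the chapter names can be sketched in a few lines. Below is a minimal illustration, not the chapter's own code, of coefficient (Cronbach's) alpha for a set of items and of the standard error of measurement, SEM = s_x * sqrt(1 - r_xx); the response data are made up.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha; `items` is an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def standard_error_of_measurement(total_scores, reliability):
    """SEM = s_x * sqrt(1 - r_xx)."""
    s_x = np.std(total_scores, ddof=1)
    return s_x * np.sqrt(1 - reliability)

# Made-up responses: 6 respondents x 4 items on a 1-5 scale.
x = np.array([[4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 5, 4],
              [3, 3, 2, 3], [4, 4, 4, 5], [1, 2, 1, 2]])
alpha = cronbach_alpha(x)
sem = standard_error_of_measurement(x.sum(axis=1), alpha)
print(f"alpha = {alpha:.2f}, SEM = {sem:.2f}")
```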
Simon Parsons and William Clegg
- Published in print:
- 2009
- Published Online:
- September 2009
- ISBN:
- 9780199219469
- eISBN:
- 9780191722516
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199219469.003.0016
- Subject:
- Physics, Crystallography: Physics
This chapter outlines some basic statistical methods and shows their application in crystallography, particularly in analysing the results. Concepts include: random and systematic errors, precision and accuracy, and distributions and their properties. Important properties include the mean and standard deviation of a distribution. The normal (Gaussian) distribution is of particular importance in view of the Central Limit Theorem. The mean of a set of values may be weighted or unweighted, and the place of weights in crystallography is discussed, especially in structure refinement. Some statistical tests and tools are described and used, including normal probability plots and analyses of variance. Correlation and covariance among parameters and derived results are considered. Possible sources of systematic and other errors in crystal structures are listed and their impact assessed. A simple checklist is provided for assessing results.
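As an illustration of one tool mentioned here, the sketch below computes the weighted mean of a set of values with individual standard uncertainties, using the usual inverse-variance weights w_i = 1/sigma_i^2. The bond lengths and uncertainties are invented, not from the chapter.

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard uncertainty."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # w_i = 1 / sigma_i^2
    mean = (w * values).sum() / w.sum()
    sigma_mean = np.sqrt(1.0 / w.sum())              # uncertainty of the mean
    return mean, sigma_mean

# Invented bond lengths (in angstroms) with standard uncertainties.
lengths = [1.512, 1.508, 1.515, 1.510]
sus = [0.004, 0.006, 0.003, 0.005]
mean, su = weighted_mean(lengths, sus)
print(f"weighted mean = {mean:.4f} ({su:.4f})")
```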
David A. Savitz
- Published in print:
- 2003
- Published Online:
- September 2009
- ISBN:
- 9780195108408
- eISBN:
- 9780199865765
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195108408.003.0011
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
Multiple studies provide an opportunity to evaluate patterns of results and draw firmer conclusions. A series of studies yielding inconsistent results may well provide strong support for a causal inference when the methodologic features of those studies are scrutinized and the subset of studies that supports an association is methodologically stronger, while those that fail to find an association are weaker. Similarly, consistent evidence of an association may not support a causal relation if all the studies share the same bias, one likely to generate spurious indications of a positive association. In order to draw conclusions, the methods and results must be considered in relation to one another, both within and across studies. This chapter discusses the consideration of random error and bias, data pooling and coordinated comparative analysis, synthetic and exploratory meta-analysis, interpreting consistency and inconsistency, and integrated assessment from combining evidence across studies.
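A minimal sketch of the inverse-variance pooling that underlies the synthetic meta-analysis mentioned here (not the chapter's own example): each study contributes its log relative risk weighted by the reciprocal of its variance. The study estimates below are hypothetical.

```python
import numpy as np

def fixed_effect_pool(log_rrs, ses):
    """Fixed-effect inverse-variance pooled log relative risk and its SE."""
    log_rrs = np.asarray(log_rrs, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # inverse-variance weights
    pooled = (w * log_rrs).sum() / w.sum()
    se = np.sqrt(1.0 / w.sum())
    return pooled, se

# Hypothetical studies: relative risks with standard errors on the log scale.
log_rrs = np.log([1.4, 1.1, 2.0])
ses = [0.20, 0.15, 0.40]
pooled, se = fixed_effect_pool(log_rrs, ses)
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```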
M. Bordag, G. L. Klimchitskaya, U. Mohideen, and V. M. Mostepanenko
- Published in print:
- 2009
- Published Online:
- September 2009
- ISBN:
- 9780199238743
- eISBN:
- 9780191716461
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199238743.003.0018
- Subject:
- Physics, Condensed Matter Physics / Materials, Atomic, Laser, and Optical Physics
Given that the Casimir force is very small and has a strong dependence on the separation distance and on the geometrical and material properties of the boundary surfaces, the measurement of this force is a challenging task. This chapter briefly considers older measurements of the Casimir force and formulates the general experimental requirements and best practices which follow from these measurements. Next, rigorous procedures for comparison of experiment with theory in relation to the force-distance measurements are discussed. Specifically, the presentation of the experimental errors and precision and the theoretical uncertainties for real materials are elaborated on. The statistical framework for the comparison between experiment and theory is also discussed. The concepts introduced in the chapter are used in Chapters 19–25, where the main experiments on the measurement of the Casimir force are considered.
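One simplified way to frame the experiment-theory comparison described here is to ask, at each separation, whether the theoretical force lies within the combined confidence band of the measurement. The sketch below combines experimental and theoretical errors in quadrature; the data and the procedure are illustrative only, not the authors' rigorous statistical framework.

```python
import numpy as np

def within_confidence_band(f_expt, err_expt, f_theory, err_theory, k=2.0):
    """True where |F_expt - F_theory| <= k * combined standard error.

    Errors are combined in quadrature; k ~ 2 approximates 95% confidence.
    """
    combined = np.sqrt(np.asarray(err_expt) ** 2 + np.asarray(err_theory) ** 2)
    return np.abs(np.asarray(f_expt) - np.asarray(f_theory)) <= k * combined

# Hypothetical force-distance data (pN) at a few separations (nm).
separations = np.array([200, 300, 400, 500])
f_expt = np.array([-42.1, -12.3, -5.0, -2.5])
err_expt = np.array([0.8, 0.4, 0.3, 0.2])
f_theory = np.array([-41.5, -12.5, -5.2, -2.4])
err_theory = np.array([0.5, 0.2, 0.1, 0.1])
print(within_confidence_band(f_expt, err_expt, f_theory, err_theory))
```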
Keith J. Worsley
- Published in print:
- 2001
- Published Online:
- March 2012
- ISBN:
- 9780192630711
- eISBN:
- 9780191724770
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192630711.003.0014
- Subject:
- Neuroscience, Techniques
Statistical analysis is concerned with making inference about underlying patterns in data that often contain a large amount of random error. This chapter begins by building up a model of the functional magnetic resonance imaging (fMRI) data, and discusses the haemodynamic response to the stimulus and then the random error. It deals with estimating the parameters of these models, assessing their variability and making decisions about whether the fMRI data shows any evidence of a blood oxygenation level dependent (BOLD) response to the stimulus. The chapter discusses in detail the methods for estimating both the signal and noise parameters, and also analyses the question of how to optimally design the experiment in order for the data to contain the maximum possible amount of extractable information.
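The model-building step described here can be sketched as a general linear model: convolve the stimulus timing with an assumed haemodynamic response and estimate the BOLD amplitude by least squares. The sketch below uses a simple gamma-shaped HRF, white noise, and simulated data; it is an assumption-laden illustration, not the chapter's implementation (which also treats correlated noise and optimal design).

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
tr, n_scans = 2.0, 100                        # repetition time (s), scans
t = np.arange(n_scans) * tr

# Block design: alternating 20 s off / 20 s on stimulus boxcar.
stimulus = ((t // 20) % 2 == 1).astype(float)

# Simple gamma-shaped haemodynamic response function (illustrative).
hrf_t = np.arange(0, 24, tr)
hrf = gamma.pdf(hrf_t, a=6, scale=1.0)        # peak near 5-6 s
hrf /= hrf.sum()

# Design matrix: convolved regressor plus an intercept.
x_bold = np.convolve(stimulus, hrf)[:n_scans]
X = np.column_stack([x_bold, np.ones(n_scans)])

# Simulated voxel: true BOLD amplitude 2.0, baseline 10, random error.
y = 2.0 * x_bold + 10.0 + rng.normal(0, 0.5, n_scans)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares estimates
resid = y - X @ beta
sigma2 = resid @ resid / (n_scans - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
print(f"estimated amplitude = {beta[0]:.2f} (SE {se:.2f})")
```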
David A. Savitz and Gregory A. Wellenius
- Published in print:
- 2016
- Published Online:
- November 2016
- ISBN:
- 9780190243777
- eISBN:
- 9780190243807
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780190243777.003.0012
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
Random error refers to the non-systematic reasons that estimated values deviate from the correct values. The processes that give rise to random error, either sampling or allocation across treatments, are not directly relevant to observational studies but provide the statistical framework for attempting to quantify the impact of random error. We suggest that random error should not take precedence over consideration of bias and argue against a formal interpretation of statistical significance testing. Confidence intervals provide an index of precision, and are more useful for quantifying random error in observational epidemiology. Multiple comparisons that result in identification of false positive associations due to random error can be minimized by a careful approach to data analysis and interpretation guided by subject matter knowledge. When data are explored without such guidance, as in exploratory studies examining large numbers of possible associations, interpretation of positive associations must be tempered.
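To make the contrast concrete, the sketch below (with invented cohort counts) computes the kind of confidence interval the authors favor as an index of precision, here a Wald interval for a risk ratio on the log scale, rather than a significance test.

```python
import numpy as np

def risk_ratio_ci(a, n1, c, n0, z=1.96):
    """Risk ratio with a 95% Wald CI computed on the log scale.

    a/n1 = risk in the exposed group, c/n0 = risk in the unexposed group.
    """
    rr = (a / n1) / (c / n0)
    se_log = np.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = np.exp(np.log(rr) - z * se_log)
    hi = np.exp(np.log(rr) + z * se_log)
    return rr, lo, hi

# Invented data: 30/1000 cases among exposed, 20/1000 among unexposed.
rr, lo, hi = risk_ratio_ci(30, 1000, 20, 1000)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR = 1.50, CI ~0.86-2.62
```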
Erik Biørn
- Published in print:
- 2016
- Published Online:
- December 2016
- ISBN:
- 9780198753445
- eISBN:
- 9780191815072
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198753445.003.0007
- Subject:
- Economics and Finance, Econometrics
A main objective of the chapter is to demonstrate how measurement error problems that are intractable in cross-section data can be overcome when (balanced) panel data are available. Transformations (including differences) recommended to eliminate fixed effects can, in measurement error situations, magnify the relative noise–signal variation. The anatomy of the problems is illustrated by contrasting ‘disaggregate’ and ‘aggregate’ estimators. Ways of combining inconsistent estimators to obtain consistency are discussed. The contrast between N- and T-consistency is elaborated. Models with one error-ridden regressor, estimated by simple instrumental variable (IV) procedures, and multi-regressor generalizations estimated by the Generalized Method of Moments (GMM) are discussed. Testing of orthogonality conditions by Sargan–Hansen procedures is considered.
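A minimal simulation of one idea developed here: differencing removes the fixed effect but magnifies the noise–signal ratio, so OLS on differences is badly attenuated, while an early level of the mismeasured regressor can instrument a later difference. The setup (AR(1) true regressor, three periods, classical measurement error) is an illustrative assumption, not the book's notation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, rho = 5000, 1.0, 0.5

# True regressor follows an AR(1) across three periods; persistence is
# needed so that an early level is correlated with a later difference.
xi1 = rng.normal(0, 1, n)
xi2 = rho * xi1 + np.sqrt(1 - rho**2) * rng.normal(0, 1, n)
xi3 = rho * xi2 + np.sqrt(1 - rho**2) * rng.normal(0, 1, n)

alpha = rng.normal(0, 1, n)                    # unit fixed effects
v = rng.normal(0, 0.7, (n, 3))                 # classical measurement error
x1, x2, x3 = xi1 + v[:, 0], xi2 + v[:, 1], xi3 + v[:, 2]
y2 = alpha + beta * xi2 + rng.normal(0, 0.5, n)
y3 = alpha + beta * xi3 + rng.normal(0, 0.5, n)

# Differencing periods 2 -> 3 removes alpha but magnifies the relative
# noise-signal variation, so OLS on the difference is attenuated.
dy, dx = y3 - y2, x3 - x2
b_ols = (dx @ dy) / (dx @ dx)

# The period-1 level x1 is uncorrelated with the measurement errors in dx
# but correlated with the true change via the AR(1), so it is a valid
# instrument: b_iv = (z'dy) / (z'dx).
z = x1
b_iv = (z @ dy) / (z @ dx)
print(f"OLS on differences: {b_ols:.2f}   IV: {b_iv:.2f}   (true beta = {beta})")
```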