Fred Campano and Dominick Salvatore
- Published in print:
- 2006
- Published Online:
- May 2006
- ISBN:
- 9780195300918
- eISBN:
- 9780199783441
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0195300912.001.0001
- Subject:
- Economics and Finance, Development, Growth, and Environmental
Intended as an introductory textbook for advanced undergraduates and first year graduate students, this book leads the reader from familiar basic micro- and macroeconomic concepts in the introduction to not so familiar concepts relating to income distribution in the subsequent chapters. The income concept and household sample surveys are examined first, followed by descriptive statistics techniques commonly used to present the survey results. The commonality found in the shape of the income density function leads to statistical modeling, parameter estimation, and goodness of fit tests. Alternative models are then introduced along with the related summary measures of income distribution, including the Gini coefficient. This is followed by a sequence of chapters that deal with normative issues such as inequality, poverty, and country comparisons. The remaining chapters cover an assortment of topics including: economic development and globalization and their impact on income distribution, redistribution of income, and integrating macroeconomic models with income distribution models.
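The Gini coefficient named in this abstract has a standard closed form that is easy to illustrate. The sketch below is not from the book; it computes the coefficient from a small invented income sample using the well-known sorted-data formula, which is algebraically equivalent to the mean-absolute-difference definition G = Σᵢ Σⱼ |xᵢ − xⱼ| / (2n²x̄).

```python
def gini(incomes):
    """Gini coefficient of a sample of incomes (0 = perfect equality)."""
    xs = sorted(incomes)
    n = len(xs)
    # Sorted-data form: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    # with ranks i = 1..n over the sorted values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * sum(xs)) - (n + 1) / n

# Invented five-household sample for illustration.
print(gini([10, 20, 30, 40, 100]))  # moderate inequality
print(gini([5, 5, 5]))              # perfectly equal incomes give 0
```

A sample where one household holds half of total income yields a coefficient of 0.4 here, while an equal-income sample yields 0.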
Michael J. North and Charles M. Macal
- Published in print:
- 2007
- Published Online:
- September 2007
- ISBN:
- 9780195172119
- eISBN:
- 9780199789894
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195172119.003.0005
- Subject:
- Business and Management, Strategy
This chapter uses a supply chain example to compare and contrast agent-based modeling and simulation with other modeling techniques, including systems dynamics, discrete-event simulation, participatory simulation, statistical modeling, risk analysis, and optimization. It also discusses why businesses and government agencies do modeling and simulation.
Jenny R. Saffran
- Published in print:
- 2008
- Published Online:
- September 2008
- ISBN:
- 9780195301151
- eISBN:
- 9780199894246
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195301151.003.0002
- Subject:
- Psychology, Developmental Psychology
This chapter uses statistical learning as a model system to consider broader issues and implications pertaining to the role of learning in development. Statistical learning is an old idea, with roots in mid-20th-century fields of inquiry as diverse as structural linguistics, early neuroscience, and operant conditioning paradigms. Two broad claims underlie the statistical learning literature. First, important structures in the environment are mirrored by surface statistics. Second, organisms are in fact sensitive to these patterns in their environments. This combination of environmental structure and learning mechanisms that can exploit this structure is the central tenet of theories focused on learning — in this case, the potent combination of informative statistics in the input paired with processes that can make use of such statistics.
Xun Gu
- Published in print:
- 2010
- Published Online:
- January 2011
- ISBN:
- 9780199213269
- eISBN:
- 9780191594762
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199213269.001.0001
- Subject:
- Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies
Evolutionary genomics is a relatively new research field with the ultimate goal of understanding the underlying evolutionary and genetic mechanisms for the emergence of genome complexity under changing environments. It stems from an integration of high throughput data from functional genomics, statistical modelling and bioinformatics, and the procedure of phylogeny-based analysis. This book summarises the statistical framework of evolutionary genomics, and illustrates how statistical modelling and testing can enhance our understanding of functional genomic evolution. The book reviews the recent developments in methodology from an evolutionary perspective of genome function, and incorporates substantial examples from high throughput data in model organisms. In addition to phylogeny-based functional analysis of DNA sequences, the book includes discussion on how new types of functional genomic data (e.g., microarray) can provide exciting new insights into the evolution of genome function, which can lead in turn to an understanding of the emergence of genome complexity during evolution.
Søren Johansen
- Published in print:
- 1995
- Published Online:
- November 2003
- ISBN:
- 9780198774501
- eISBN:
- 9780191596476
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0198774508.003.0001
- Subject:
- Economics and Finance, Econometrics
Contains an overview of the monograph and discusses the statistical methodology of building and analysing statistical models and their likelihood function with the purpose of deriving estimators and tests. The vector autoregressive model is used because it allows a flexible statistical description of the data. It makes it possible to embed interesting economic hypotheses as parametric restrictions and hence allow them to be tested against data.
Xun Gu
- Published in print:
- 2010
- Published Online:
- January 2011
- ISBN:
- 9780199213269
- eISBN:
- 9780191594762
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199213269.003.0005
- Subject:
- Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies
Microarray technology can simultaneously monitor the expression levels of thousands of genes across many experimental conditions or treatments, providing us with unique opportunities to investigate the evolutionary pattern of gene regulation. This chapter focuses on how to model the evolution of gene family expression with three goals: (i) statistical methods such as the likelihood ratio test can be applied for exploring the evolutionary pattern of gene expression; (ii) evolutionary tracing of expression changes can be predicted by the Bayesian method; and (iii) the statistical model can be utilized to study the expression-motif association. Several statistical models have been developed, most of which viewed gene expression data as continuous so that the modeling was based on the random-walk (Brownian) model. The chapter discusses these models and their applications.
Gary Goertz and James Mahoney
- Published in print:
- 2012
- Published Online:
- October 2017
- ISBN:
- 9780691149707
- eISBN:
- 9781400845446
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691149707.003.0005
- Subject:
- Sociology, Social Research and Statistics
This chapter shows that the quantitative and qualitative cultures differ on the issue of symmetry. Whereas quantitative research tends to analyze relationships that are symmetric, qualitative research focuses on relationships that have asymmetric qualities. Causal models and explanations can be asymmetric in a variety of ways. This chapter deals mainly (though not exclusively) with the so-called "static causal asymmetry," in which the explanation of occurrence is not the mirror image of that of nonoccurrence. After comparing symmetric and asymmetric models, the chapter looks at examples of asymmetric explanations using set-theoretic causal models. It highlights the difficulty of translating the fundamental symmetry of standard statistical models into the basic asymmetry of set-theoretic models, as well as the difficulty of capturing the asymmetry of set-theoretic models with the standard symmetric tools of statistics.
Ezra Susser, Sharon Schwartz, Alfredo Morabia, and Evelyn J. Bromet
- Published in print:
- 2006
- Published Online:
- September 2009
- ISBN:
- 9780195101812
- eISBN:
- 9780199864096
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195101812.003.25
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter introduces methods of statistical adjustment. Statistical adjustment is used to reduce the effects of confounders; or, more precisely, to infer what association would have been observed had there been no confounding. The main analytic methods for control of confounding include stratification, statistical modeling, and subgroup analysis. The chapter begins by focusing on stratification and regression analysis as methods of analysis for unmatched samples. It then describes methods for analyzing matched data. Although matching to control for confounding is done before the data are collected, it is critically important to apply statistical methods that account for the matching, because the analysis must correspond to the design to attain valid results. Subgroup analysis is briefly considered at the end of the chapter.
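Stratification, the first adjustment method this abstract names, is commonly summarized with the Mantel-Haenszel pooled odds ratio. The sketch below is not taken from the chapter; it pools invented stratum-specific 2×2 tables, where each table lists exposed cases, exposed controls, unexposed cases, and unexposed controls.

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel summary odds ratio over 2x2 tables (a, b, c, d),
    with a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    numerator = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    denominator = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return numerator / denominator

# Two invented strata of a hypothetical confounder (e.g., age group).
strata = [(10, 20, 5, 40), (8, 10, 4, 20)]
print(mantel_haenszel_or(strata))
```

Because each stratum is weighted by its own size, the pooled estimate reflects the within-stratum associations rather than the confounded crude table.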
Judith D. Singer and John B. Willett
- Published in print:
- 2003
- Published Online:
- September 2009
- ISBN:
- 9780195152968
- eISBN:
- 9780199864980
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195152968.003.0011
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
Good data analysis involves more than using a computer package to fit a statistical model to data. To conduct a credible discrete-time survival analysis, one must: specify a suitable model for hazard and understand its assumptions; use sample data to estimate the model parameters; interpret results in terms of your research questions; evaluate model fit and test hypotheses about (and/or construct confidence intervals for) model parameters; and communicate your findings. This chapter illustrates this entire process using the "age at first intercourse" study introduced in section 10.3. This sets the stage for a subsequent discussion of how to evaluate the assumptions underpinning the model and how to extend it flexibly across many circumstances in Chapter 12.
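The discrete-time setup described in this abstract starts by expanding person-level records into person-period data and estimating the hazard in each period as events divided by the number still at risk. The sketch below uses invented data and is not from the book; `records` holds, for each person, the last period observed and whether the event occurred then.

```python
def discrete_time_hazard(records):
    """Period-specific hazard estimates from (last_period, event) records.

    For each period t, hazard(t) = events in t / persons still at risk at t.
    """
    max_t = max(last for last, _ in records)
    hazard = {}
    for t in range(1, max_t + 1):
        at_risk = sum(1 for last, _ in records if last >= t)
        events = sum(1 for last, event in records if last == t and event)
        hazard[t] = events / at_risk if at_risk else float("nan")
    return hazard

# Invented sample: five people, censoring indicated by event = 0.
sample = [(1, 1), (2, 1), (2, 0), (3, 1), (3, 0)]
print(discrete_time_hazard(sample))
```

In practice these life-table estimates are the descriptive starting point; a parametric hazard model (e.g., logistic regression on the person-period data) then smooths and tests them, as the chapter's workflow describes.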
Robert Elgie
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199585984
- eISBN:
- 9780191729003
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199585984.003.0004
- Subject:
- Political Science, Comparative Politics
This chapter focuses on semi-presidential democracies. Within the set of democracies, what is the general evidence to suggest that democratic performance is likely to be worse in president-parliamentary countries than in premier-presidential countries? The chapter begins by outlining the strategy that will be adopted to measure the performance of democracy. The second section presents some descriptive statistics to demonstrate that premier-presidential democracies have performed better than president-parliamentary democracies. The third section presents the results of a wide range of large-n controlled statistical tests. The results are very clear: premier-presidential democracies are clearly shown to perform much better than president-parliamentary democracies.
William Hoppitt and Kevin N. Laland
- Published in print:
- 2013
- Published Online:
- October 2017
- ISBN:
- 9780691150703
- eISBN:
- 9781400846504
- Item type:
- book
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691150703.001.0001
- Subject:
- Biology, Animal Biology
Many animals, including humans, acquire valuable skills and knowledge by copying others. Scientists refer to this as social learning. It is one of the most exciting and rapidly developing areas of behavioral research and sits at the interface of many academic disciplines, including biology, experimental psychology, economics, and cognitive neuroscience. This book provides a comprehensive, practical guide to the research methods of this important emerging field. It defines the mechanisms thought to underlie social learning and demonstrates how to distinguish them experimentally in the laboratory. It presents techniques for detecting and quantifying social learning in nature, including statistical modeling of the spatial distribution of behavior traits. It also describes the latest theory and empirical findings on social learning strategies, and introduces readers to mathematical methods and models used in the study of cultural evolution. This book is an indispensable tool for researchers and an essential primer for students.
Steve Selvin
- Published in print:
- 2004
- Published Online:
- September 2009
- ISBN:
- 9780195172805
- eISBN:
- 9780199865697
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195172805.003.02
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter discusses variation and bias. Topics covered include the statistical model, the t-test, selection bias, confounder bias, ecologic bias, comparison of k groups, interaction contrasts, two-way analysis, and misclassification bias.
Wesley C. Salmon
- Published in print:
- 1998
- Published Online:
- November 2003
- ISBN:
- 9780195108644
- eISBN:
- 9780199833627
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0195108647.003.0007
- Subject:
- Philosophy, Philosophy of Science
Challenges the widely held thesis that scientific explanations are arguments (the "third dogma") by posing three questions that seem to raise difficulties for it: (1) Why are irrelevancies harmless to arguments but fatal to explanations? (2) Can events whose probabilities are low be explained? Or, to reformulate essentially the same question, is genuine scientific explanation possible if indeterminism is true? (3) Why should requirements of temporal asymmetry be imposed upon explanations but not upon arguments?
In addition to showing the untenability of the "third dogma," this chapter signals the development of a causal theory of explanation that will supplement the simple statistical-relevance (S-R) model of explanation advocated in earlier works by the author.
Ezra Susser, Sharon Schwartz, Alfredo Morabia, and Evelyn J. Bromet
- Published in print:
- 2006
- Published Online:
- September 2009
- ISBN:
- 9780195101812
- eISBN:
- 9780199864096
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195101812.003.27
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter introduces methods frequently used in epidemiology to assess statistical heterogeneity of effect in the studied sample. As in the case of confounding, the main analytic methods include stratification and statistical modeling. It first defines additive and multiplicative interaction. It then describes methods based on stratification: comparing homogeneity across strata of the effect modifier, comparing the expected and observed joint effect of two factors, or using graphical representation of interaction. It also briefly indicates methods that allow the testing of the statistical significance of apparently heterogeneous effects. Finally, the chapter introduces statistical modeling of interaction by logistic regression.
Helmut Hofmann
- Published in print:
- 2008
- Published Online:
- September 2008
- ISBN:
- 9780198504016
- eISBN:
- 9780191708480
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198504016.003.0008
- Subject:
- Physics, Nuclear and Plasma Physics
This chapter begins with a discussion of the decay of the compound nucleus by particle emission. Formulas for the transition rates are derived by exploiting basic properties of the T-matrix, such as the concept of microreversibility. Connections with the statistical model of nuclear reactions are established and interpretations in terms of statistical mechanics are given. Finally, the Bohr-Wheeler formula for the fission rate is reviewed and interpreted by considering, amongst others, stability conditions for fission in the liquid drop model.
Steve Selvin
- Published in print:
- 2004
- Published Online:
- September 2009
- ISBN:
- 9780195172805
- eISBN:
- 9780199865697
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195172805.003.13
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
The success of a model-based approach depends on choosing a model that accurately reflects the relationships within the data. This choice requires knowledge of the statistical properties of the model and a clear understanding of the phenomenon being investigated. One of the many useful models applied to survival data is the proportional hazards model. This chapter describes this model in simple terms, illustrating its properties and providing insight into the process of analyzing survival experience data using statistical modeling techniques.
Amanda Sacker
- Published in print:
- 2009
- Published Online:
- September 2009
- ISBN:
- 9780199231034
- eISBN:
- 9780191723841
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199231034.003.0012
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter summarizes some of the issues described in the previous two chapters on statistical considerations in family studies. It highlights some of the assumptions underlying the analytic methods and discusses how their use can affect the results. The statistical analyses outlined in this section share common features aimed at quantifying the association of genetic and environmental factors with phenotypic outcomes. In some research the focus is on heritability; in other work the focus is on environmental influences while controlling for genetic factors. Modelling approaches for each are discussed, emphasizing potential problems and providing guidelines for careful interpretation. Examples from published empirical epidemiological work are used to illustrate the breadth of analytical strategies adopted for family studies research.
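One classical example of the heritability-focused analyses mentioned here is Falconer's twin-study formula, which estimates heritability from monozygotic and dizygotic twin correlations. This is an illustration of the genre, not the chapter's own method, and the correlations used are invented.

```python
# Illustrative sketch only: Falconer's classical formula estimates
# heritability as h^2 = 2 * (r_mz - r_dz), where r_mz and r_dz are the
# phenotypic correlations in monozygotic and dizygotic twin pairs.
# It rests on strong assumptions (e.g., equal shared environments).
def falconer_h2(r_mz, r_dz):
    """Heritability estimate from twin correlations."""
    return 2 * (r_mz - r_dz)

# Hypothetical correlations, not from the chapter:
print(round(falconer_h2(0.80, 0.50), 2))  # 0.6
```

The strong assumptions baked into such estimators are precisely the kind the chapter urges readers to interpret with care.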
Henry Brighton
- Published in print:
- 2011
- Published Online:
- May 2016
- ISBN:
- 9780262016032
- eISBN:
- 9780262298957
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262016032.003.0017
- Subject:
- Psychology, Health Psychology
In health care, our observations are shaped by interactions between complex biological and social systems. Practitioners seek diagnostic instruments that are both predictive and simple enough to use in their everyday decision making. Must we, as a result, accept a trade-off between the usability of a diagnostic instrument and its ability to make accurate predictions? This chapter argues that sound statistical reasons and evidence support the idea that the uncertainty underlying many problems in health care can often be better addressed with simple, easy-to-use diagnostic instruments. Put simply, satisficing methods that ignore information are not only easier to use but can also predict with greater accuracy than more complex optimization methods.
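A concrete instance of such a satisficing method is the "tallying" heuristic: ignore cue weights entirely and count how many binary cues point toward a diagnosis. This sketch is a generic illustration of the family of methods the chapter discusses, not the chapter's own instrument, and the cues are hypothetical.

```python
# Illustrative satisficing rule: tallying ignores cue weights (information a
# regression model would estimate) and predicts positive when at least half
# of the binary diagnostic cues are present.
def tally(cues):
    """Return True if at least half the 0/1 cues are positive."""
    return sum(cues) >= len(cues) / 2

# Three hypothetical cues, two present:
print(tally([1, 0, 1]))  # True
```

Because it estimates nothing from the sample, tallying cannot overfit cue weights, which is one statistical reason simple rules can out-predict more complex optimization methods on noisy data.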
Justin Grimmer, Sean J. Westwood, and Solomon Messing
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691162614
- eISBN:
- 9781400852666
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691162614.003.0009
- Subject:
- Political Science, American Politics
This concluding chapter provides more details about the classification of the nearly 170,000 House press releases used in this study as credit claiming or not. Making use of recent Text as Data methods, the study begins with 800 triple-hand-coded documents, providing a label for each of the press releases. The idea is to learn a relationship between the hand-coded labels and the words in the texts. This relationship is then used to predict the label for all the remaining documents, so that every press release ends up labeled. The chapter then presents a series of simplifying assumptions that make statistical modeling of the texts feasible.
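The hand-code-then-predict workflow described here can be sketched with a tiny Naive Bayes classifier: learn word-label associations from labeled documents, then score unlabeled ones. This stands in for the authors' actual method, and all texts, labels, and function names below are invented for illustration.

```python
import math
from collections import Counter

# Hypothetical sketch of the supervised "text as data" workflow: train on
# hand-coded documents, then predict labels for unlabeled ones.
def train(docs, labels):
    counts = {lab: Counter() for lab in set(labels)}
    priors = Counter(labels)
    for doc, lab in zip(docs, labels):
        counts[lab].update(doc.lower().split())
    return counts, priors

def predict(doc, counts, priors):
    vocab = {w for c in counts.values() for w in c}
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for lab, c in counts.items():
        lp = math.log(priors[lab] / total)
        denom = sum(c.values()) + len(vocab)  # Laplace smoothing
        for w in doc.lower().split():
            lp += math.log((c[w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = lab, lp
    return best

# Invented stand-ins for the hand-coded press releases:
hand_coded = ["secured funding for the district",
              "announced new committee hearing"]
labels = ["credit", "other"]
counts, priors = train(hand_coded, labels)
print(predict("secured new funding", counts, priors))  # "credit"
```

The simplifying assumptions the chapter mentions play the same role as the bag-of-words and word-independence assumptions here: they make estimation feasible at the scale of 170,000 documents.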
Deborah G. Mayo
- Published in print:
- 2004
- Published Online:
- February 2013
- ISBN:
- 9780226789552
- eISBN:
- 9780226789583
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226789583.003.0004
- Subject:
- Biology, Ecology
Error-statistical methods in science have been the subject of enormous criticism, giving rise to the popular statistical “reform” movement and bolstering subjective Bayesian philosophy of science. Is it possible to have a general account of scientific evidence and inference that shows how we learn from experiment despite uncertainty and error? One way that philosophers have attempted to answer this question affirmatively is to erect accounts of scientific inference or testing in which appeals to probabilistic or statistical ideas accommodate the uncertainties and error. Leading attempts take the form of rules or logics relating evidence (or evidence statements) and hypotheses by measures of confirmation, support, or probability. We can call such accounts logics of evidential relationship (or E-R logics). This chapter reflects on these logics of evidence and compares them with error statistics. It then considers measures of fit versus fit combined with error probabilities, what we really need in a philosophy of evidence, criticisms of Neyman-Pearson statistics and their sources, the behavioral-decision model of Neyman-Pearson tests, and the roles of statistical models and methods in statistical inference.