Neil Abell, David W. Springer, and Akihito Kamata
- Published in print: 2009
- Published Online: September 2009
- ISBN: 9780195333367
- eISBN: 9780199864300
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195333367.003.0006
- Subject: Social Work, Research and Evaluation
This chapter presents the basics of factor analysis modeling, focusing on its use and interpretation in scale and test development contexts. Two approaches to such analyses are discussed: exploratory factor analysis (EFA), in which one explores what factor structure the data represent, and confirmatory factor analysis (CFA), in which one attempts to confirm a hypothesized factor structure. Text and illustrations demonstrate how factor analysis results can be used to decide which items should be retained in or deleted from a scale or test instrument. Model fit evaluation is discussed through the chi-square statistic as well as several fit indices and information criteria. Uses of CFA with covariates (MIMIC) and multiple-group CFA approaches for measurement invariance studies are also demonstrated. Finally, the basics of item response theory (IRT) modeling are introduced through a demonstration of parameter estimation and interpretation.
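The fit-evaluation workflow sketched in this abstract (chi-square, fit indices, information criteria, and a multiple-group extension) can be illustrated with a short, hedged R example; the lavaan package and its bundled HolzingerSwineford1939 data are illustrative choices, not the chapter's own software or data.

```r
# Minimal CFA sketch with lavaan (illustrative only).
library(lavaan)

# Hypothesized three-factor structure for nine ability measures
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'

fit <- cfa(model, data = HolzingerSwineford1939)

# Chi-square test, CFI/TLI, RMSEA, and AIC/BIC in one report
summary(fit, fit.measures = TRUE)

# A multiple-group CFA, the starting point for measurement-invariance
# work like that described above, adds a grouping variable:
fit_mg <- cfa(model, data = HolzingerSwineford1939, group = "school")
```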
Donna Harrington
- Published in print: 2008
- Published Online: January 2009
- ISBN: 9780195339888
- eISBN: 9780199863662
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195339888.003.0001
- Subject: Social Work, Research and Evaluation
This chapter discusses the major uses of confirmatory factor analysis, including measurement development, psychometric evaluation of measures, construct validation, testing method effects, and testing measurement invariance, and presents examples of these uses in the social work literature. Different types of validity are briefly defined and discussed. Confirmatory factor analysis is compared with three other common data analysis approaches: exploratory factor analysis, principal components analysis, and structural equation modeling. Software for conducting confirmatory factor analysis is briefly discussed.
Natasha K. Bowen and Shenyang Guo
- Published in print: 2011
- Published Online: January 2012
- ISBN: 9780195367621
- eISBN: 9780199918256
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195367621.003.0004
- Subject: Social Work, Research and Evaluation
This chapter describes when and how to conduct a confirmatory factor analysis (CFA). CFA is a step in the scale development process, and it is also the first step in testing structural models. Therefore, all researchers using a latent variable analysis approach must have an understanding of CFA, whether or not they are developing and testing a new scale. CFA is also compared to exploratory factor analysis (EFA).
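Because the chapter frames CFA as the first step in testing structural models, a brief hedged sketch of that step may help: fitting two competing measurement models and comparing them with a chi-square difference test. The model strings below are placeholders, and lavaan is an illustrative software choice, not necessarily the authors'.

```r
# Comparing nested measurement models in lavaan (illustrative sketch).
library(lavaan)

one_factor <- 'g =~ x1 + x2 + x3 + x4 + x5 + x6'
two_factor <- '
  f1 =~ x1 + x2 + x3
  f2 =~ x4 + x5 + x6
'

fit1 <- cfa(one_factor, data = HolzingerSwineford1939)
fit2 <- cfa(two_factor, data = HolzingerSwineford1939)

# Likelihood-ratio (chi-square difference) test of the nested models
anova(fit1, fit2)
```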
Daniel Fink and Wesley M. Hochachka
- Published in print: 2012
- Published Online: August 2016
- ISBN: 9780801449116
- eISBN: 9780801463952
- Item type: chapter
- Publisher: Cornell University Press
- DOI: 10.7591/cornell/9780801449116.003.0009
- Subject: Environmental Science, Environmental Studies
This chapter focuses on the use of data mining to discover biological patterns in citizen science observations. In particular, it describes a set of statistical tools designed to take advantage of access to massive quantities of data, with a strong emphasis on pattern discovery. The chapter begins with an overview of how data mining can be used to explore and learn about species distribution, along with the insights that can be gained by applying these methods to broad-scale citizen science data. Using data from the bird monitoring projects eBird and Project FeederWatch, it demonstrates an exploratory strategy in which the focus of investigations moves from general phenomena to the processes responsible for them. It then considers the distinction between exploratory and confirmatory analysis and the important practical issues that should be taken into account when planning an exploratory analysis with citizen science data.
Brian D. Haig
- Published in print: 2014
- Published Online: September 2014
- ISBN: 9780262027366
- eISBN: 9780262322379
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262027366.003.0003
- Subject: Psychology, Clinical Psychology
This chapter considers the abductive nature of theory generation by examining the logic and purpose of the method of exploratory factor analysis. It is argued that the common factors that result from using this method are not fictions, but latent variables, which are best understood as genuine theoretical entities. This realist interpretation of factors is supported by showing that exploratory factor analysis is an abductive generator of elementary theories that exploits an important heuristic of scientific methodology known as the principle of the common cause.
Brian D. Haig
- Published in print: 2018
- Published Online: January 2018
- ISBN: 9780190222055
- eISBN: 9780190871734
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190222055.003.0002
- Subject: Psychology, Social Psychology
Chapter 2 is concerned with modern data analysis. It focuses primarily on the nature, role, and importance of exploratory data analysis, although it gives some attention to computer-intensive resampling methods. Exploratory data analysis is a process in which data are examined to reveal potential patterns of interest. However, the use of traditional confirmatory methods in data analysis remains the dominant practice. Different perspectives on data analysis, as they are shaped by four different accounts of scientific method, are provided. A brief discussion of John Tukey’s philosophy of teaching data analysis is presented. The chapter does not consider the more recent exploratory data analytic developments, such as the practice of statistical modeling, the employment of data-mining techniques, and more flexible resampling methods.
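As a small illustration of the exploratory spirit described here (an added sketch, not the chapter's own example), base R still carries several of Tukey's original tools:

```r
# Quick exploratory pass over one variable (illustrative sketch).
data(faithful)               # built-in Old Faithful eruption data
summary(faithful$eruptions)  # five-number summary
stem(faithful$eruptions)     # Tukey's stem-and-leaf display
boxplot(faithful$eruptions, horizontal = TRUE,
        main = "Eruption durations (min)")  # Tukey's box plot
```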
Leandre R. Fabrigar and Duane T. Wegener
- Published in print: 2011
- Published Online: March 2015
- ISBN: 9780199734177
- eISBN: 9780190255848
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:osobl/9780199734177.001.0001
- Subject: Psychology, Social Psychology
Exploratory factor analysis (EFA) has played a major role in research conducted in the social sciences for more than 100 years, dating back to the pioneering work of Charles Spearman on mental abilities. Since that time, EFA has become one of the most commonly used quantitative methods in many of the social sciences, including psychology, business, sociology, education, political science, and communications. To a lesser extent, it has also been utilized within the physical and biological sciences. Despite its long and widespread usage in many domains, numerous aspects of the underlying theory and application of EFA are poorly understood by researchers. Indeed, perhaps no widely used quantitative method requires more decisions on the part of a researcher and offers as wide an array of procedural options as EFA does. This book provides a non-mathematical introduction to the underlying theory of EFA and reviews the key decisions that must be made in its implementation. Among the issues discussed are the use of EFA versus confirmatory factor analysis, the use of principal component analysis versus common factor analysis, procedures for determining the appropriate number of factors, and methods for rotating factor solutions. Explanations and illustrations of the application of different factor analytic procedures are provided for analyses using common statistical packages, as well as a free package available on the web. In addition, practical instructions are provided for conducting a number of useful factor analytic procedures not included in the statistical packages.
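One of the decisions the book reviews, choosing the number of factors, can be previewed with a minimal base-R scree plot; the mtcars variables below are placeholders, and the book weighs several competing procedures rather than endorsing this one alone.

```r
# Scree plot of correlation-matrix eigenvalues (illustrative sketch).
vars <- mtcars[, c("mpg", "disp", "hp", "drat", "wt", "qsec")]
ev <- eigen(cor(vars))$values
plot(ev, type = "b", xlab = "Factor number", ylab = "Eigenvalue",
     main = "Scree plot")
abline(h = 1, lty = 2)  # Kaiser's eigenvalue-greater-than-one line
```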
Brian D. Haig
- Published in print: 2018
- Published Online: January 2018
- ISBN: 9780190222055
- eISBN: 9780190871734
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190222055.003.0006
- Subject: Psychology, Social Psychology
Chapter 6 argues that exploratory factor analysis is an abductive method of theory generation that exploits a principle of scientific inference known as the principle of the common cause. Factor analysis is an important family of multivariate statistical methods that is widely used in the behavioral and social sciences. The best known model of factor analysis is common factor analysis, which has two types: exploratory factor analysis and confirmatory factor analysis. A number of methodological issues that arise in critical discussions of exploratory factor analysis are considered. It is suggested that exploratory factor analysis can be profitably employed in tandem with confirmatory factor analysis.
Leandre R. Fabrigar and Duane T. Wegener
- Published in print: 2011
- Published Online: March 2015
- ISBN: 9780199734177
- eISBN: 9780190255848
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:osobl/9780199734177.003.0002
- Subject: Psychology, Social Psychology
This chapter looks at the key considerations that researchers should take into account in determining when it is appropriate to conduct an exploratory factor analysis (EFA) using the common factor model. It first outlines the requirements for conducting EFA by considering what sorts of research questions are best explored by this type of factor analysis, along with the nature of the data necessary to properly conduct the analysis. It then focuses on the characteristics of the measured variables to be analyzed before turning to the question of when EFA versus confirmatory factor analysis is most appropriate. If an exploratory approach is selected, the next issue is whether the analysis should be based on the common factor model or the (different but related) principal component analysis model.
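The EFA-versus-principal-components choice discussed here can be made concrete with base R's two estimators; the variables are placeholders and the comparison is a sketch, not the authors' example.

```r
# Common factor analysis vs. principal component analysis (sketch).
vars <- mtcars[, c("mpg", "disp", "hp", "drat", "wt", "qsec")]

# factanal() fits the common factor model by maximum likelihood
efa <- factanal(vars, factors = 2, rotation = "varimax")
print(efa$loadings, cutoff = 0.3)

# prcomp() computes principal components of the standardized variables
pca <- prcomp(vars, scale. = TRUE)
summary(pca)  # proportion of variance per component
```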
Leandre R. Fabrigar and Duane T. Wegener
- Published in print: 2011
- Published Online: March 2015
- ISBN: 9780199734177
- eISBN: 9780190255848
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:osobl/9780199734177.003.0003
- Subject: Psychology, Social Psychology
This chapter focuses on important requirements that must be satisfied and decisions that must be made in the actual implementation of an exploratory factor analysis (EFA) in research. More specifically, it outlines three primary decisions that researchers need to address when conducting EFA: selecting from an array of model fitting procedures that estimate the parameters of the common factor model; determining how many common factors should be specified in the model when fitting it to the data; and deciding whether the resulting solution should be rotated to aid interpretation and, if so, which specific rotation procedure should be used. The last decision arises because of rotational indeterminacy: infinitely many rotated factor solutions reproduce the data equally well, so the choice among them cannot be made on fit alone.
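The third decision, rotation, can be previewed by refitting one solution under an orthogonal and an oblique criterion; again a hedged base-R sketch with placeholder variables, not the authors' worked example.

```r
# Same two-factor solution under two rotation criteria (sketch).
vars <- mtcars[, c("mpg", "disp", "hp", "drat", "wt", "qsec")]

fit_orth <- factanal(vars, factors = 2, rotation = "varimax")  # orthogonal
fit_obli <- factanal(vars, factors = 2, rotation = "promax")   # oblique

print(fit_orth$loadings, cutoff = 0.3)
print(fit_obli$loadings, cutoff = 0.3)
```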
Thanh V. Tran, Tam Nguyen, and Keith Chan
- Published in print: 2017
- Published Online: February 2018
- ISBN: 9780190496470
- eISBN: 9780190496500
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780190496470.003.0004
- Subject: Social Work, Research and Evaluation
A cross-cultural comparison can be misleading for two reasons: (1) the comparison is made using different attributes, or (2) the comparison is made using different scale units. This chapter illustrates multiple statistical approaches to evaluating the cross-cultural equivalence of research instruments: the data distribution of each instrument item, the pattern of responses to each item, the corrected item–total correlation, exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and reliability analysis using the parallel-test and tau-equivalence tests. Equivalence is the fundamental issue in cross-cultural research and evaluation.
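One of the checks listed above, the corrected item–total correlation, is simple enough to sketch in a few lines of base R; the helper function and the simulated items are added for illustration and are not the chapter's own.

```r
# Corrected item-total correlation: each item vs. the sum of the others.
corrected_item_total <- function(items) {
  sapply(seq_len(ncol(items)), function(j) {
    cor(items[, j], rowSums(items[, -j, drop = FALSE]),
        use = "pairwise.complete.obs")
  })
}

# Illustration with five simulated items sharing one common factor
set.seed(1)
f <- rnorm(100)
items <- as.data.frame(replicate(5, f + rnorm(100)))
round(corrected_item_total(items), 2)
```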
Leandre R. Fabrigar and Duane T. Wegener
- Published in print: 2011
- Published Online: March 2015
- ISBN: 9780199734177
- eISBN: 9780190255848
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:osobl/9780199734177.003.0005
- Subject: Psychology, Social Psychology
This chapter explains how many of the key procedures of an exploratory factor analysis (EFA) can be implemented in practice and how the information it provides can be interpreted. After providing a brief review of key considerations that researchers must take into account before conducting an EFA, the chapter introduces the data set that illustrates how the EFA is implemented and interpreted. It then outlines the procedures for determining the appropriate number of common factors before turning to the program syntax for conducting an EFA with a specified number of common factors. An example is given in which EFA rather than a confirmatory factor analysis is used to determine the latent constructs underlying the pattern of correlations among the measured variables. Finally, the chapter discusses key aspects of the output provided by widely used EFA programs, including SPSS, SAS, and CEFA.
Leandre R. Fabrigar and Duane T. Wegener
- Published in print: 2011
- Published Online: March 2015
- ISBN: 9780199734177
- eISBN: 9780190255848
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:osobl/9780199734177.003.0006
- Subject: Psychology, Social Psychology
This chapter summarizes the key issues that researchers need to take into consideration when choosing and implementing exploratory factor analysis (EFA) before offering some conclusions and recommendations to help readers who are contemplating the use of EFA in their own research. It reviews the basic assumptions of the common factor model, the general mathematical model on which EFA is based, intended to explain the structure of correlations among a battery of measured variables; the issues that researchers should bear in mind in determining when it is appropriate to conduct an EFA; the decisions to be made in conducting an EFA; and the implementation of EFA as well as the interpretation of data it provides.
Leandre R. Fabrigar and Duane T. Wegener
- Published in print: 2011
- Published Online: March 2015
- ISBN: 9780199734177
- eISBN: 9780190255848
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:osobl/9780199734177.003.0001
- Subject: Psychology, Social Psychology
This book provides a non-mathematical introduction to exploratory factor analysis (EFA) or unrestricted factor analysis and how it is implemented. It also discusses the procedures for conducting confirmatory factor analysis or restricted factor analysis and compares it with principal component analysis. The procedures for determining the appropriate number of factors and methods for rotating factor solutions are described. In addition, the book explains the application of different factor analytic procedures for analyses using common statistical packages, as well as a free package available on the Web. Practical instructions on how to conduct a number of useful factor analytic procedures not included in the statistical packages are presented as well. This introductory chapter looks at the common factor model, a mathematical model that forms the basis of a set of statistical procedures for determining whether large sets of variables can be more parsimoniously represented as measures of one or a few underlying constructs. The model's basic conceptual premises are outlined, along with its pictorial representation and mathematical expression.
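For readers who want the algebra alongside the prose, the common factor model the chapter introduces is conventionally written as below (standard notation, not quoted from the book):

```latex
% Common factor model: p measured variables, m common factors.
% x_j: measured variable; f_k: common factor; u_j: unique factor;
% \lambda_{jk}: loading of variable j on factor k.
x_j = \sum_{k=1}^{m} \lambda_{jk} f_k + u_j, \qquad j = 1, \dots, p

% Implied covariance structure, with loading matrix \Lambda, factor
% correlation matrix \Phi, and diagonal unique-variance matrix \Theta:
\Sigma = \Lambda \Phi \Lambda^{\top} + \Theta
```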
M. D. Edge
- Published in print: 2019
- Published Online: October 2019
- ISBN: 9780198827627
- eISBN: 9780191866463
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198827627.003.0003
- Subject: Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies
R is a powerful, free software package for performing statistical tasks. It will be used to simulate data, analyze data, and make data displays. More details about R are given in Appendix B.
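A three-line sketch of the tasks named in this abstract (simulation, analysis, display), using only base R and added here for illustration:

```r
# Simulate, analyze, display (illustrative sketch).
set.seed(42)
x <- rnorm(200, mean = 10, sd = 2)              # simulate data
t.test(x, mu = 10)                              # analyze: test the known mean
hist(x, main = "Simulated sample", xlab = "x")  # display
```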
Leandre R. Fabrigar and Duane T. Wegener
- Published in print: 2011
- Published Online: March 2015
- ISBN: 9780199734177
- eISBN: 9780190255848
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:osobl/9780199734177.003.0004
- Subject: Psychology, Social Psychology
This chapter discusses various assumptions underlying the common factor model and the procedures typically used in its implementation. Ideally, these assumptions should be carefully considered by researchers prior to collecting any data for which an exploratory factor analysis is likely to be used. The chapter first considers the key assumptions underlying the common factor model itself, with particular reference to assumptions about how common factors influence measured variables. It compares effects indicator models and causal indicator models as well as linear effects versus nonlinear effects of common factors. It then explores assumptions underlying various procedures commonly used to fit the common factor model to data. It also explains the nature of each assumption and when it is or is not likely to be plausible, along with methods for evaluating the plausibility of the assumption. Finally, it outlines various courses of action when a given assumption is not met.
James B. Elsner and Thomas H. Jagger
- Published in print: 2013
- Published Online: November 2020
- ISBN: 9780199827633
- eISBN: 9780197563199
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199827633.003.0011
- Subject: Earth Sciences and Geography, Meteorology and Climatology
Here in Part II, we focus on statistical models for understanding and predicting hurricane climate. This chapter shows you how to model hurricane occurrence. This is done using the annual count of hurricanes making landfall in the United States. We also consider the occurrence of hurricanes across the basin and by origin. We begin with exploratory analysis and then show you how to model counts with Poisson regression. Issues of model fit, interpretation, and prediction are considered in turn. The topic of how to assess forecast skill is examined, including how to perform cross-validation. Alternatives to the Poisson regression model are considered. Logistic regression and receiver operating characteristics (ROCs) are also covered. You use the data set US.txt, which contains a list of tropical cyclone counts by year (see Chapter 2). The counts indicate the number of hurricanes hitting the United States (excluding Hawaii). Input the data, save them as a data frame object, and print out the first six lines by typing

```r
> H = read.table("US.txt", header=TRUE)
> head(H)
```

The columns include year (Year), number of U.S. hurricanes (All), number of major U.S. hurricanes (MUS), number of U.S. Gulf coast hurricanes (G), number of Florida hurricanes (FL), and number of East coast hurricanes (E). Save the total number of years in the record as n and the average number of hurricanes per year as rate.

```r
> n = length(H$Year); rate = mean(H$All)
> n; rate
[1] 160
[1] 1.69
```

The average number of U.S. hurricanes is 1.69 per year over these 160 years. First plot a time series and a distribution of the annual counts. Together, the two plots provide a nice summary of the information in your data relevant to any modeling effort.

```r
> par(las=1)
> layout(matrix(c(1, 2), 1, 2, byrow=TRUE),
+   widths=c(3/5, 2/5))
> plot(H$Year, H$All, type="h", xlab="Year",
+   ylab="Hurricane Count")
> grid()
> mtext("a", side=3, line=1, adj=0, cex=1.1)
> barplot(table(H$All), xlab="Hurricane Count",
+   ylab="Number of Years", main="")
> mtext("b", side=3, line=1, adj=0, cex=1.1)
```

The layout function divides the plot page into rows and columns as specified in the matrix function (first argument).
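The excerpt above stops at the exploratory plots; a minimal sketch of the Poisson regression step it announces might look like the following. It assumes the H data frame read from US.txt as in the excerpt, and the choice of Year as the sole covariate is illustrative, not the book's model.

```r
# Poisson regression of annual U.S. hurricane counts (sketch only).
fit <- glm(All ~ Year, data = H, family = poisson)
summary(fit)

# Fitted annual rate for a given year, on the response (count) scale
predict(fit, newdata = data.frame(Year = 2010), type = "response")
```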
James B. Elsner and Thomas H. Jagger
- Published in print: 2013
- Published Online: November 2020
- ISBN: 9780199827633
- eISBN: 9780197563199
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199827633.003.0012
- Subject: Earth Sciences and Geography, Meteorology and Climatology
Strong hurricanes, such as Camille in 1969, Andrew in 1992, and Katrina in 2005, cause catastrophic damage. It is important to have an estimate of when the next big one will occur. You also want to know what influences the strongest hurricanes and whether they are getting stronger as the earth warms. This chapter shows you how to model hurricane intensity. The data are basinwide lifetime highest intensities for individual tropical cyclones over the North Atlantic and county-level hurricane wind intervals. We begin by considering trends using the method of quantile regression and then examine extreme-value models for estimating return periods. We also look at modeling cyclone winds when the values are given by category, using Miami-Dade County as an example. Here you consider cyclones above tropical storm intensity (≥ 17 m s⁻¹) during the period 1967–2010, inclusive. The period is long enough to see changes but not so long that it includes intensity estimates from before satellite observations. We use “intensity” and “strength” synonymously to mean the fastest wind inside the cyclone. Consider the set of events defined by the location and wind speed at which a tropical cyclone first reaches its lifetime maximum intensity (see Chapter 5). The data are in the file LMI.txt. Import and list the values in 10 columns of the first 6 rows of the data frame by typing

```r
> LMI.df = read.table("LMI.txt", header=TRUE)
> round(head(LMI.df)[c(1, 5:9, 12, 16)], 1)
```

The data set is described in Chapter 6. Here your interest is the smoothed intensity estimate at the time of lifetime maximum (WmaxS). First, convert the wind speeds from the operational units of knots to the SI units of meters per second.

```r
> LMI.df$WmaxS = LMI.df$WmaxS * .5144
```

Next, determine the quartiles (0.25 and 0.75 quantiles) of the wind speed distribution. The quartiles divide the cumulative distribution function (CDF) into three equal-sized subsets.

```r
> quantile(LMI.df$WmaxS, c(.25, .75))
  25%   75%
 25.5  46.0
```

You find that 25 percent of the cyclones have a lifetime maximum wind speed less than 26 m s⁻¹ and 75 percent have a maximum wind speed less than 46 m s⁻¹, so that 50 percent of all cyclones have a maximum wind speed between 26 and 46 m s⁻¹ (the interquartile range, IQR).
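The quantile-regression trend analysis this chapter describes can be sketched with the quantreg package; this assumes LMI.df contains a Year column alongside WmaxS, which the excerpt does not show, so treat it as illustrative.

```r
# Trends in the median and upper tail of lifetime maximum intensity
# (sketch; requires the quantreg package, and assumes a Year column).
library(quantreg)

fit_q <- rq(WmaxS ~ Year, tau = c(0.5, 0.9), data = LMI.df)
summary(fit_q)  # slope per quantile: is the strong tail changing faster?
```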
James B. Elsner and Thomas H. Jagger
- Published in print: 2013
- Published Online: November 2020
- ISBN: 9780199827633
- eISBN: 9780197563199
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199827633.003.0017
- Subject: Earth Sciences and Geography, Meteorology and Climatology
In this chapter, we show some broader applications of our models and methods. We focus on impact models. Hurricanes are capable of generating large financial losses. We begin with a model that estimates extreme losses conditional on climate covariates. We then describe a method for quantifying the relative change in potential losses over the decades. Financial losses from hurricanes are to some extent directly related to fluctuations in climate. Environmental factors influence the frequency and intensity of hurricanes at the coast, as detailed throughout this book (see, for example, Chapters 7 and 8). So it is not surprising that these same environmental signals appear in estimates of losses. Here loss is the economic damage associated with a hurricane’s direct impact. A normalization procedure adjusts the loss estimate from a past hurricane to what it would be if the same cyclone struck in a recent year by accounting for inflation and changes in wealth and population over the intervening time, plus a factor to account for changes in the number of housing units exceeding population growth. The method produces loss estimates that can be compared over time (Pielke et al. 2008). Here you focus on losses exceeding one billion (U.S. $) that have been adjusted to 2005. The loss data are available in Losses.txt in JAGS format (see Chapter 9). Input the data by typing

```r
> source("Losses.txt")
```

The log-transformed loss amounts are in the column labeled ‘y’. The annual numbers of loss events are in the column labeled ‘L’. The data cover the period 1900–2007. More details about these data are given in Jagger et al. (2011). You begin by plotting a time series of the number of losses and a histogram of total loss per event.

```r
> layout(matrix(c(1, 2), 1, 2, byrow=TRUE),
+   widths=c(3/5, 2/5))
> plot(1900:2007, L, type="h", xlab="Year",
+   ylab="Number of Loss Events")
> grid()
> mtext("a", side=3, line=1, adj=0, cex=1.1)
> hist(y, xlab="Loss Amount ($ log)",
+   ylab="Frequency", main="")
> mtext("b", side=3, line=1, adj=0, cex=1.1)
```
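As a rough complement to the chapter's JAGS-based model (and not a substitute for it), the y vector loaded above supports a back-of-envelope return-period estimate; this assumes y holds natural logs of dollar losses, and the threshold is illustrative.

```r
# Empirical return period of losses above a threshold (sketch only).
n_years   <- length(1900:2007)   # 108 years of record
threshold <- log(10e9)           # $10 billion, on the (assumed natural) log scale
n_exceed  <- sum(y > threshold)  # qualifying loss events
n_years / n_exceed               # average years between such losses
```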