Roger M. Barker
- Published in print:
- 2010
- Published Online:
- May 2010
- ISBN:
- 9780199576814
- eISBN:
- 9780191722509
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199576814.001.0001
- Subject:
- Business and Management, International Business, Corporate Governance and Accountability
The corporate governance systems of continental Europe have traditionally been quite different to those of the liberal market economies (e.g., the United States and the United Kingdom). Company ownership has been dominated by incumbent blockholders, with a relatively minor role for minority shareholders and institutional investors. However, since the mid‐1990s, European corporations have adopted many of the characteristics of the Anglo‐American shareholder model. Furthermore, such an increased shareholder orientation has coincided with a significant role for the Left in European government. This presents a puzzle, as conventional wisdom does not conceive of the European Left as the natural ally of pro‐shareholder capitalism. This book provides an analysis of this paradox by arguing that the postwar support of the European Left for the prevailing blockholder‐dominated corporate system depended on the willingness of blockholders to share economic rents with employees, both through higher wages and greater employment stability. However, during the 1990s, product markets became more competitive in many European countries. The sharing of rents between social actors became increasingly difficult to sustain. In such an environment, the Left chose to relinquish its traditional social partnership with blockholders and embraced many aspects of the shareholder model. The hypothesis is initially explored through a panel data econometric analysis of fifteen non‐liberal market economies. Subsequent case study chapters examine the political economy of recent corporate governance change in Germany and Italy.
John C. Gower and Garmt B. Dijksterhuis
- Published in print:
- 2004
- Published Online:
- September 2007
- ISBN:
- 9780198510581
- eISBN:
- 9780191708961
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198510581.001.0001
- Subject:
- Mathematics, Probability / Statistics
Procrustean methods are used to transform one set of data to represent another set of data as closely as possible. This book unifies several strands in the literature and contains new algorithms. It focuses on matching two or more configurations by using orthogonal, projection, and oblique axes transformations. Group-average summaries play an important part, and links with other group-average methods are discussed. The text is multi-disciplinary and also presents a unifying ANOVA framework.
Peter Lyons and Howard J. Doueck
- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780195373912
- eISBN:
- 9780199865604
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195373912.001.0001
- Subject:
- Social Work, Research and Evaluation
This book is intended to be read at any stage in the dissertation process, but will be particularly useful in the early stages of preparation for a social work dissertation, and as a reference resource throughout. The book is a guide to successful dissertation completion. Content includes a brief history and overview of social work doctoral education in the United States, the importance of values in social work, and the relationship between personal, research, and social work values. Chapter 2 addresses issues in selecting and working with the dissertation supervisor and committee, as well as the role and tasks of all three parties in successful completion of the dissertation. In Chapter 3, strategies for researching and evaluating the literature, as well as for writing the literature review, are discussed. In addition, the relevance of theory to social work research is examined. Chapter 4 describes ethical issues in social research and requirements for the protection of human subjects. In addition, an overview of both quantitative and qualitative research methods is provided. In Chapter 5, sample design and sample size are discussed in relation to both quantitative and qualitative research. The significance of the psychometric properties of measurement instruments is also discussed. Chapter 6 addresses issues in data collection, data management, and data analysis in qualitative and quantitative research. Finally, Chapter 7 presents strategies for dissertation writing, including structure and content, as well as data presentation.
Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691151687
- eISBN:
- 9781400848911
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691151687.003.0010
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter summarizes the fundamental concepts and tools for analyzing time series data. Time series analysis is a branch of applied mathematics developed mostly in the fields of signal processing and statistics. Contributions to this field, from an astronomical perspective, have predominantly focused on unevenly sampled data, low signal-to-noise data, and heteroscedastic errors. The chapter starts with a brief introduction to the main concepts in time series analysis. It then discusses the main tools from the modeling toolkit for time series analysis. Despite being set in the context of time series, many tools and results are readily applicable in other domains, and for this reason the examples presented will not be strictly limited to time-domain data. Armed with the modeling toolkit, the chapter goes on to discuss the analysis of periodic time series and the search for temporally localized signals, and concludes with a brief discussion of stochastic processes.
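As a minimal illustration of the kind of periodicity search this chapter covers, the sketch below detects a periodic signal with an ordinary FFT periodogram on evenly sampled synthetic data (the chapter's own tools also handle uneven sampling; the light curve here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic, evenly sampled light curve: a 5-day period plus noise.
n, dt = 512, 0.25                      # number of samples, spacing in days
t = np.arange(n) * dt
y = np.sin(2 * np.pi * t / 5.0) + 0.5 * rng.normal(size=n)

# Periodogram: power at each FFT frequency of the mean-subtracted data.
freqs = np.fft.rfftfreq(n, d=dt)       # cycles per day
power = np.abs(np.fft.rfft(y - y.mean())) ** 2

# The strongest nonzero frequency should sit near 1/5 cycles per day.
best = freqs[np.argmax(power[1:]) + 1]
print(round(float(best), 4))
```

For unevenly sampled astronomical data, the same idea is usually carried out with a Lomb-Scargle periodogram rather than a plain FFT.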
Roger M. Barker
- Published in print:
- 2010
- Published Online:
- May 2010
- ISBN:
- 9780199576814
- eISBN:
- 9780191722509
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199576814.003.0006
- Subject:
- Business and Management, International Business, Corporate Governance and Accountability
A panel data econometric analysis of corporate governance change is undertaken utilizing a data set of fifteen nonliberal market economies covering the period 1975–2003. The results of this analysis suggest that the interaction of partisanship and competition is a highly significant determinant of corporate governance change. In particular, significant shifts in a pro‐shareholder direction are associated with Left government – but not conservative government – in the context of high levels of competition. In contrast, neither Left nor conservative government is associated with corporate governance change in a low‐competition environment.
Peter Lyons and Howard J. Doueck
- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780195373912
- eISBN:
- 9780199865604
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195373912.003.0006
- Subject:
- Social Work, Research and Evaluation
This chapter examines issues related to quantitative and qualitative data, including data collection, data management, data processing, data preparation, and data analysis, as well as data storage and security in relation to HIPAA and other security requirements. The selection of appropriate statistical procedures, including descriptive and inferential statistics, is reviewed, as are the requirements and strategies for the collection and analysis of qualitative data, including data coding and theme identification.
David B. Resnik
- Published in print:
- 2007
- Published Online:
- January 2007
- ISBN:
- 9780195309782
- eISBN:
- 9780199871285
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195309782.003.0004
- Subject:
- Philosophy, Moral Philosophy
This chapter considers the various ways that money can interfere with scientific norms. Problems can occur when financial interests intrude into experimental design, data analysis and interpretation, publication, peer review, and other aspects of science that should be protected from financial, political, or other biases. When this happens, financial interests affect the process of scientific research, and they can undermine objectivity, openness, honesty, and other research norms. Although it is impossible to prevent money from having any impact on research, society should take some steps to prevent financial interests from undermining scientific norms, such as developing policies for journals, granting agencies, and research institutions; educating students and scientists about potential problems and issues; and monitoring of research.
Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691151687
- eISBN:
- 9781400848911
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691151687.003.0007
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
With the dramatic increase in data available from a new generation of astronomical telescopes and instruments, many analyses must address the question of the complexity as well as size of the data set. This chapter deals with how we can learn which measurements, properties, or combinations thereof carry the most information within a data set. It describes techniques that are related to concepts discussed when describing Gaussian distributions, density estimation, and the concepts of information content. The chapter begins with an exploration of the problems posed by high-dimensional data. It then describes the data sets used in this chapter, and introduces perhaps the most important and widely used dimensionality reduction technique, principal component analysis (PCA). The remainder of the chapter discusses several alternative techniques which address some of the weaknesses of PCA.
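The dimensionality reduction this chapter introduces can be sketched in a few lines: center the data, eigendecompose the covariance matrix, and project onto the leading components. The data set and component count below are invented for illustration, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 200 objects with 10 measured features that actually
# vary along only 2 latent directions, plus a little noise.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 10))

# PCA: center the data, then eigendecompose the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending order
order = np.argsort(eigvals)[::-1]          # re-sort by descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fraction of total variance captured by the first two components.
explained = eigvals[:2].sum() / eigvals.sum()

# Project onto the leading components: 10 dimensions reduced to 2.
X_reduced = Xc @ eigvecs[:, :2]
print(X_reduced.shape, round(float(explained), 3))
```

Because the synthetic data are effectively two-dimensional, nearly all of the variance survives the projection; real data sets trade off dimensionality against retained variance.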
David Royse, Michele Staton‐Tindall, Karen Badger, and J. Matthew Webster
- Published in print:
- 2009
- Published Online:
- May 2009
- ISBN:
- 9780195368789
- eISBN:
- 9780199863860
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195368789.003.0004
- Subject:
- Social Work, Research and Evaluation
This chapter discusses ways of analyzing and interpreting data collected for a needs assessment project based on the type of design that is selected. Analysis of qualitative data is also discussed for those who have used focus groups or questionnaires that have employed open-ended questions. Quantitative data analytic strategies include data editing and checking, frequencies and univariate analyses, and examination of variables two at a time. Qualitative data analyses include understanding and interpreting themes and patterns in the data and drawing conclusions. The chapter also highlights some published needs assessment articles which have used quantitative and qualitative analytic approaches.
Roger M. Barker
- Published in print:
- 2010
- Published Online:
- May 2010
- ISBN:
- 9780199576814
- eISBN:
- 9780191722509
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199576814.003.0007
- Subject:
- Business and Management, International Business, Corporate Governance and Accountability
A variety of statistical robustness tests confirm that the conclusions of Chapter 6 are not sensitive to the inclusion of particular countries or observations in the data set, or the choice of individual control variables. The reestimation of the model in terms of first‐differences (i.e., a dynamic model specification) also gives rise to consistent results.
Adil E. Shamoo and David B. Resnik
- Published in print:
- 2009
- Published Online:
- May 2009
- ISBN:
- 9780195368246
- eISBN:
- 9780199867615
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195368246.003.0003
- Subject:
- Biology, Disease Ecology / Epidemiology, Biochemistry / Molecular Biology
Proper management of research conduct is essential to achieving reliable results and maintaining the quality, objectivity, and integrity of research data. The different steps of research should be monitored carefully, and research designs should include built-in safeguards to ensure the quality and integrity of research data. This chapter addresses ethical conduct in different steps of the research process: hypothesis formation, research design, literature review, data collection, data analysis, data interpretation, publication, and data storage. This chapter also discusses methods that can help assure the quality, objectivity, and integrity of research data, such as good research practices (GRPs), standard operating procedures (SOPs), peer review, and data audit.
Rein Taagepera
- Published in print:
- 2008
- Published Online:
- September 2008
- ISBN:
- 9780199534661
- eISBN:
- 9780191715921
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199534661.003.0001
- Subject:
- Political Science, Comparative Politics, Political Economy
Science is not only about the empirical “What is?” but also very much about the conceptual “How should it be on logical grounds?” Statistical approaches are essentially descriptive, while quantitatively formulated logical models are essentially predictive in an explanatory way. Social sciences have overemphasized statistical data analysis, often limiting their logical models to prediction of the direction of effect, oblivious of its quantitative extent. A better balance of methods is possible and will make social sciences more relevant to society. This book is about going beyond regression and other statistical approaches, and also about improving their use.
Gidon Eshel
- Published in print:
- 2011
- Published Online:
- October 2017
- ISBN:
- 9780691128917
- eISBN:
- 9781400840632
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691128917.003.0011
- Subject:
- Environmental Science, Environmental Studies
This chapter focuses on empirical orthogonal functions (EOFs). One of the most useful and common eigen-techniques in data analysis is the construction of EOFs. EOFs are a transform of the data; the original set of numbers is transformed into a different set with some desirable properties. In this sense the EOF transform is similar to other transforms, such as the Fourier or Laplace transforms. In all these cases, we project the original data onto a set of functions, thus replacing the original data with the set of projection coefficients on the chosen new set of basis vectors. However, the choice of the specific basis set varies from case to case. The discussions cover data matrix structure convention, reshaping multidimensional data sets for EOF analysis, forming anomalies and removing time mean, missing values, choosing and interpreting the covariability matrix, calculating the EOFs, projection time series, and extended EOF analysis.
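The core EOF procedure described above (remove the time mean, then project the anomalies onto an orthogonal spatial basis) can be sketched via the singular value decomposition; the small space-time field below is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic field: 120 time steps at 15 spatial points, built from one
# dominant standing pattern plus noise (convention here: rows = time).
t = np.arange(120)
pattern = np.sin(np.linspace(0, np.pi, 15))
field = np.outer(np.cos(2 * np.pi * t / 12.0), pattern)
field += 0.1 * rng.normal(size=field.shape)

# Form anomalies by removing the time mean at each spatial point.
anoms = field - field.mean(axis=0)

# The EOFs are the right singular vectors of the anomaly matrix
# (equivalently, eigenvectors of the spatial covariability matrix).
U, s, Vt = np.linalg.svd(anoms, full_matrices=False)
eofs = Vt            # rows are spatial patterns (the new basis vectors)
pcs = U * s          # projection (principal component) time series

# Fraction of total variance carried by each EOF.
var_frac = s**2 / np.sum(s**2)
print(eofs.shape, pcs.shape, round(float(var_frac[0]), 3))
```

The projection coefficients `pcs` together with the basis `eofs` reproduce the anomalies exactly (`pcs @ eofs`), which is the sense in which the EOF transform replaces the original data with projection coefficients on a new basis.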
Outi Monni and Sampsa Hautaniemi
- Published in print:
- 2009
- Published Online:
- September 2009
- ISBN:
- 9780199532872
- eISBN:
- 9780191714467
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199532872.003.0004
- Subject:
- Mathematics, Probability / Statistics, Biostatistics
This chapter discusses methods to measure and integrate microarray-based copy number and gene expression data. It is well known that gene copy number alterations are a key factor in cancer development and progression. Gene amplification in particular is an important mechanism by which cancer cells increase expression of cellular proto-oncogenes. Thus, systematic identification of genes with elevated copy number and gene expression levels is important in the discovery of potential therapeutic target genes in human cancers. Here, the chapter reviews the main methods for measuring genome-wide copy number and gene expression levels. The main aim is to describe systematic computational data analysis approaches for integrating high-throughput copy number and gene expression data.
Rachel Stanworth
- Published in print:
- 2003
- Published Online:
- November 2011
- ISBN:
- 9780198525110
- eISBN:
- 9780191730504
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198525110.003.0005
- Subject:
- Palliative Care, Patient Care and End-of-Life Decision Making, Palliative Medicine Research
This chapter discusses data analysis and interpretation, which is to tell a new story. However, there exists a natural tension between telling a new story and the desire to remain loyal to the original material upon which it relies. It discusses the role of the qualitative researcher, who is responsible for recollecting the conditions of data collection, and who is aware of the nuances of discourse or of the ‘feeling’ of silence. It also mentions several programmes the researcher can use to analyze and interpret the gathered data.Less
Mark F. Testa
- Published in print:
- 2010
- Published Online:
- May 2010
- ISBN:
- 9780195321302
- eISBN:
- 9780199777457
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195321302.003.0005
- Subject:
- Social Work, Children and Families, Communities and Organizations
This chapter builds on the previous chapter's discussion of the limitations of the existing Child and Family Services Review (CFSR) outcomes indicators that rely heavily on cross-sectional samples of active cases and exit cohort samples of children discharged from foster care. It reviews some of the challenges of analyzing child welfare outcomes when program and policy changes are still in the process of implementation, and discusses recent advances in longitudinal data analysis of time-to-outcome data. The chapter also provides an overview of the concept of statistical power, and discusses the importance of distinguishing between statistical and practical significance when assessing agency performance. It concludes with an illustration of how greater transparency can be brought to the analysis of family reunification trends in Illinois through statistical risk-adjustment for variations in child demographic characteristics, family needs, and other conditions of the populations served by the child welfare system.
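The statistical-power idea the chapter introduces can be illustrated with a standard normal-approximation calculation for comparing two reunification rates. The rates and sample sizes below are hypothetical, not figures from the chapter; the point is only that with modest samples a practically meaningful difference may have limited power to reach statistical significance.

```python
# A minimal sketch of power for a two-sided, two-proportion comparison
# (normal approximation), e.g. reunification rates in two cohorts.
from math import sqrt, erf

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_proportion_power(p1, p2, n):
    """Approximate power to detect p1 vs. p2 with n cases per group,
    testing at the two-sided 5% level."""
    z_alpha = 1.959964                               # 97.5th percentile of N(0,1)
    pbar = (p1 + p2) / 2.0
    se0 = sqrt(2.0 * pbar * (1.0 - pbar) / n)        # SE under the null
    se1 = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)  # SE under the alternative
    z = (abs(p1 - p2) - z_alpha * se0) / se1
    return norm_cdf(z)
```

A difference of 50% vs. 60% with 500 cases per group, for example, yields power of roughly 0.89; larger samples push power higher, which is why distinguishing statistical from practical significance matters when cohorts are large.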
Simon Learmount
- Published in print:
- 2004
- Published Online:
- September 2007
- ISBN:
- 9780199269082
- eISBN:
- 9780191719257
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199269082.003.0004
- Subject:
- Business and Management, Corporate Governance and Accountability
The reason for the confusion over how Japanese corporate governance works in practice is that, until now, there has been a lack of inquiry and analysis that is ‘close to the action’, in other words, research that draws directly on the actual, everyday socio-economic actions and interactions of businesspeople and other company participants. This chapter addresses this problem through a detailed study of fourteen different Japanese companies. The companies studied, how the research was carried out, and the strengths and weaknesses of the approach adopted are discussed.
Andrew Sturdy, Karen Handley, Timothy Clark, and Robin Fincham
- Published in print:
- 2009
- Published Online:
- May 2009
- ISBN:
- 9780199212644
- eISBN:
- 9780191707339
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199212644.003.0003
- Subject:
- Business and Management, Organization Studies, Knowledge Management
The assumptions, methods, practices, and contexts of the research are set out as a reference point for the analysis in subsequent chapters. In particular, the issues involved in obtaining research access for an observational and exploratory study of client‐consultant relations are discussed before providing overviews of each of the case study projects and their key participants. The methods of data collection used (e.g., observation, interviews, and documents) are then described, along with those of data analysis such as data coding techniques. Finally, the process through which the core topics of boundary complexity, sector knowledge, challenge, and humour were selected, along with the use of data to develop them, is set out.
Michele Maggiore
- Published in print:
- 2007
- Published Online:
- January 2008
- ISBN:
- 9780198570745
- eISBN:
- 9780191717666
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198570745.001.0001
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This book deals with all aspects of gravitational-wave physics, both theoretical and experimental. This first volume covers gravitational wave (GW) theory and experiments. Part I discusses the theory of GWs, re-deriving afresh and in a coherent way all the results presented. Both the geometrical and the field-theoretical approaches to general relativity are discussed. The generation of GWs is treated first in linearized theory (including the general multipole expansion) and then within the post-Newtonian formalism. Many important calculations (inspiral of compact binaries, GW emission by rotating or precessing bodies, infall into black holes, etc.) are presented. The observation of GW emission through the change in the orbital period of binary pulsars, such as the Hulse-Taylor pulsar and the double pulsar, is also explained, and the pulsar timing formula is derived. Part II discusses the principles of GW experiments, going into the details of the functioning of both interferometers and resonant-mass detectors. One chapter is devoted to the data analysis techniques relevant for GW experiments.
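The binary-pulsar observation mentioned above rests on a standard quadrupole-formula result: GW emission drains orbital energy, shrinking the orbit and reducing the orbital period. For a binary with masses $m_p$ and $m_c$, orbital period $P_b$, and eccentricity $e$, the secular period decay (the Peters-Mathews result, stated here from the standard literature rather than from this book's own derivation) is:

```latex
\dot{P}_b = -\frac{192\pi G^{5/3}}{5c^{5}}
\left(\frac{P_b}{2\pi}\right)^{-5/3}
\frac{m_p\, m_c}{(m_p + m_c)^{1/3}}
\left(1 - e^{2}\right)^{-7/2}
\left(1 + \frac{73}{24}e^{2} + \frac{37}{96}e^{4}\right)
```

Agreement between this predicted $\dot{P}_b$ and the timing measurements of the Hulse-Taylor pulsar provided the first (indirect) evidence for GW emission.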
Gidon Eshel
- Published in print:
- 2011
- Published Online:
- October 2017
- ISBN:
- 9780691128917
- eISBN:
- 9781400840632
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691128917.003.0009
- Subject:
- Environmental Science, Environmental Studies
This chapter focuses on linear regression, the process of identifying the unique model that best explains a set of observed data among a specified class of general models. Regression thus occupies a uniquely important position at the very interface of modeling and data analysis. Regression arises very often, in various guises, in handling and analyzing data. Since it is one of the most basic, useful, and frequently employed data analysis tools, and since some understanding of regression is needed in later sections, regression is discussed in some detail. Topics covered include setting up the problem; the linear system Ax = b; least squares; special problems giving rise to linear systems; statistical issues in regression analysis; and multidimensional regression and linear model identification.
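The least-squares setup the chapter covers can be sketched in a few lines: build a design matrix A, observe b, and solve the normal equations A^T A x = A^T b for the best-fitting coefficients. The data below are synthetic and purely illustrative.

```python
# A minimal sketch of linear least squares for Ax = b: fit an
# intercept-plus-slope model to (noise-free) synthetic observations.
import numpy as np

t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.ones_like(t), t])  # design matrix: [1, t]
x_true = np.array([2.0, -3.0])             # intercept 2, slope -3
b = A @ x_true                             # observations

# Solve min ||Ax - b||_2 via the normal equations A^T A x = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
```

In practice `np.linalg.lstsq(A, b, rcond=None)` is preferred over forming A^T A explicitly, since the normal equations square the condition number of the problem.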