Peter Lyons and Howard J. Doueck
- Published in print: 2009
- Published Online: February 2010
- ISBN: 9780195373912
- eISBN: 9780199865604
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195373912.003.0006
- Subject: Social Work, Research and Evaluation
This chapter examines issues related to quantitative and qualitative data, including data collection, data management, data processing, data preparation, and data analysis, as well as data storage and security in relation to HIPAA and other security requirements. The selection of appropriate statistical procedures, including descriptive and inferential statistics, is reviewed, as are the requirements and strategies for the collection and analysis of qualitative data, including data coding and theme identification.
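As a hedged illustration of the kinds of analysis the chapter reviews, the sketch below computes descriptive statistics and a simple inferential test (an independent-samples t-test) with pandas and SciPy. The data, group labels, and choice of test are hypothetical and are not taken from the chapter.

```python
# Illustrative sketch only: descriptive and inferential statistics on a small
# hypothetical quantitative data set, using pandas and SciPy.
import pandas as pd
from scipy import stats

# Hypothetical outcome scores for two client groups.
df = pd.DataFrame({
    "group": ["A"] * 5 + ["B"] * 5,
    "score": [12, 15, 14, 10, 13, 18, 17, 20, 16, 19],
})

# Descriptive statistics: count, mean, standard deviation, quartiles per group.
print(df.groupby("group")["score"].describe())

# Inferential statistics: Welch's t-test comparing the two group means.
a = df.loc[df["group"] == "A", "score"]
b = df.loc[df["group"] == "B", "score"]
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```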
Matti S. Hämäläinen, Fa-Hsuan Lin, and John C. Mosher
- Published in print: 2010
- Published Online: September 2010
- ISBN: 9780195307238
- eISBN: 9780199863990
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195307238.003.0008
- Subject: Neuroscience, Behavioral Neuroscience, Techniques
This chapter first outlines the overall MEG data processing workflow, with emphasis on source estimation and the incorporation of anatomical information. It then provides an overview of the analytical methods needed to compute minimum-norm solutions, including their application to time-frequency representations in the source domain. It describes a specific workflow for computing cortically constrained distributed source estimates, including practical approaches to acquiring and processing the MRI and MEG data. Finally, it discusses a few representative studies in which these methods have been employed.
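As a hedged aside, distributed estimates of this kind are commonly computed with an L2 minimum-norm inverse operator of the form W = R G^T (G R G^T + lambda^2 C)^(-1). The numpy sketch below illustrates that standard form under assumed array shapes; it is not the authors' implementation.

```python
# Generic L2 minimum-norm inverse operator (illustrative, assumed shapes):
#   j_hat = R @ G.T @ inv(G @ R @ G.T + lam**2 * C) @ y
import numpy as np

def minimum_norm_operator(G, C, R, lam):
    """G: gain matrix (n_sensors x n_sources), C: noise covariance,
    R: source covariance, lam: regularization parameter."""
    return R @ G.T @ np.linalg.inv(G @ R @ G.T + lam**2 * C)

# Toy data standing in for a measured MEG field distribution.
rng = np.random.default_rng(0)
n_sensors, n_sources = 102, 500
G = rng.standard_normal((n_sensors, n_sources))
C = np.eye(n_sensors)               # whitened noise covariance
R = np.eye(n_sources)               # uniform source covariance prior
W = minimum_norm_operator(G, C, R, lam=3.0)
y = rng.standard_normal(n_sensors)  # one time sample of sensor data
j_hat = W @ y                       # distributed source estimate
```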
David F. Hendry
- Published in print: 1995
- Published Online: November 2003
- ISBN: 9780198283164
- eISBN: 9780191596384
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198283164.003.0002
- Subject: Economics and Finance, Econometrics
The main concepts for empirical modelling of economic time series are explained: parameter and parameter space; constancy; structure; distributional shape; identification and observational equivalence; interdependence; stochastic process; conditioning; white noise; autocorrelation; stationarity; integratedness; trend; heteroscedasticity; dimensionality; aggregation; sequential factorization; and marginalization. A formal data‐generation process (DGP) for economics is the joint data density with an innovation error. Empirical models derive from reduction operations applied to the DGP.
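The definition of the DGP as the joint data density with an innovation error can be written compactly as a sequential factorization; the notation below is illustrative rather than the book's own.

```latex
% Sequential factorization of the joint data density (illustrative notation):
% the joint density of the sample X^1_T = (x_1, ..., x_T), given initial
% conditions X_0 and parameters theta, factorizes into one-step conditional
% densities whose prediction errors are innovations.
\[
  \mathsf{D}_X\!\left(X^1_T \mid X_0, \theta\right)
    = \prod_{t=1}^{T} \mathsf{D}_x\!\left(x_t \mid X_{t-1}, \theta\right),
  \qquad
  \varepsilon_t = x_t - \mathsf{E}\!\left[x_t \mid X_{t-1}\right].
\]
```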
Thomas Rigotti, David E. Guest, Michael Clinton, and Gisela Mohr
- Published in print: 2010
- Published Online: September 2010
- ISBN: 9780199542697
- eISBN: 9780191715389
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199542697.003.0002
- Subject: Business and Management, HRM / IR
This chapter outlines the key features of the research on which the book is based. The research was conducted in six European countries, with Israel as a comparator. It involved over 200 organizations and over 5,000 workers, about a third of whom were on various kinds of temporary contract. All kinds of temporary worker are included, with a majority employed on fixed-term contracts, which is representative of typical national practice. Data were collected from employer representatives and from workers in each organization using extensively piloted questionnaires. The process of data collection and the broad content of the questionnaires are described. The statistical properties of the various scales used in the study are presented in a Technical Appendix.
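As a hedged aside, the abstract does not say which scale statistics the Technical Appendix reports; one statistic commonly reported for multi-item questionnaire scales is Cronbach's alpha, sketched below on hypothetical responses.

```python
# Illustrative only: Cronbach's alpha for the internal consistency of a scale.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 5-item scale answered by six respondents.
responses = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 3, 4, 3],
    [1, 2, 1, 2, 1],
    [4, 4, 5, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```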
Peter Eaton and Paul West
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780199570454
- eISBN: 9780191722851
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199570454.003.0005
- Subject: Physics, Atomic, Laser, and Optical Physics
This chapter describes procedures for image processing, display, and analysis. AFM data is particularly suitable for further processing and analysis, and has some particular requirements due to the three-dimensional nature of the data obtained. Proper use of image processing techniques is important in order to enable further analysis while accurately reflecting the real nature of the sample and avoiding the introduction of errors. Various methods of optimizing the display of the data are described, and their suitability for different uses is compared. In addition, there are many powerful analysis routines for AFM data, which can be essential for extracting the most information from image data. The chapter describes how to maintain data integrity, and how to optimize and process the data for best effect.
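As a hedged illustration of a routine processing step of the kind the chapter covers (not its software), the sketch below performs first-order line-by-line levelling, which removes scanner tilt from each scan line of a synthetic height image before further analysis.

```python
# Illustrative line-by-line levelling of an AFM height image (synthetic data).
import numpy as np

def level_lines(image, order=1):
    """Subtract a fitted polynomial background of the given order from each scan line."""
    image = np.asarray(image, dtype=float)
    x = np.arange(image.shape[1])
    levelled = np.empty_like(image)
    for i, line in enumerate(image):
        coeffs = np.polyfit(x, line, order)
        levelled[i] = line - np.polyval(coeffs, x)
    return levelled

# Synthetic 256 x 256 "height" image: a tilted plane plus noise.
rng = np.random.default_rng(1)
xx, yy = np.meshgrid(np.arange(256), np.arange(256))
raw = 0.05 * xx + 0.02 * yy + rng.normal(0.0, 0.5, size=(256, 256))
flat = level_lines(raw)
print(raw.std(), flat.std())   # spread drops once the tilt is removed
```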
Léopold Simar and Paul W. Wilson
- Published in print: 2008
- Published Online: January 2008
- ISBN: 9780195183528
- eISBN: 9780199870288
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195183528.003.0004
- Subject: Economics and Finance, Econometrics
This chapter recasts the parametric and statistical approach of Chapter 2, and the nonparametric and deterministic approach of Chapter 3 into a nonparametric and statistical approach. It presents in a unified notation the basic assumptions needed to define the data-generating process (DGP) and shows how the nonparametric estimators [free disposal hull (FDH) and data envelopment analysis (DEA)] can be described easily in this framework. It then discusses bootstrap methods for inference based on DEA and FDH estimates. After that it discusses two ways FDH estimators can be improved, using bias corrections and interpolation. The chapter proposes a way for defining robust nonparametric estimators of the frontier, based on a concept of “partial frontiers” (order-m frontiers or order-α quantile frontiers). The next section surveys the most recent techniques allowing investigation of the effects of these external factors on efficiency. The two approaches are reconciled with each other, and a nonparametric method is shown to be particularly useful even if in the end a parametric model is desired. This mixed “semiparametric” approach seems to outperform the usual parametric approaches based on regression ideas. The last section concludes with a discussion of still-important, open issues and questions for future research.
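A minimal sketch of the input-oriented FDH efficiency estimator is shown below; the function name, toy data, and input orientation are assumptions made for illustration, not the chapter's code or notation.

```python
# Input-oriented FDH (free disposal hull) efficiency score, illustrative only.
import numpy as np

def fdh_input_efficiency(x0, y0, X, Y):
    """x0: inputs of the evaluated unit (p,); y0: its outputs (q,);
    X: observed inputs (n, p); Y: observed outputs (n, q).
    Returns 1 for units on the FDH frontier, < 1 for dominated units."""
    dominating = np.all(Y >= y0, axis=1)          # units producing at least y0
    ratios = np.max(X[dominating] / x0, axis=1)   # worst input ratio per such unit
    return float(np.min(ratios))

# Toy data: five units, two inputs, one output.
X = np.array([[4., 3.], [6., 5.], [2., 4.], [5., 2.], [3., 3.]])
Y = np.array([[5.], [5.], [4.], [6.], [5.]])
print(fdh_input_efficiency(X[1], Y[1], X, Y))     # unit 2 is dominated: 0.6
```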
Dennis Sherwood and Jon Cooper
- Published in print: 2010
- Published Online: January 2011
- ISBN: 9780199559046
- eISBN: 9780191595028
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199559046.003.0011
- Subject: Physics, Crystallography: Physics
This chapter covers the essential methods by which X-rays are generated in the laboratory and at synchrotron sources for data collection from protein crystals. The methods by which X-rays of a suitable wavelength are selected and collimated for a diffraction experiment are described along with the underlying physical principles. The commonly used methods for protein data collection are then described with a summary of various area detector systems that are widely used in the field. The principles and practice of determining the X-ray diffraction intensities are then covered along with the physical basis of various correction factors which are applied to the data. The processes of scaling and merging, which allow a set of unique diffraction intensities to be obtained from the numerous redundant measurements made in a data collection, are described, along with methods for assessing the quality of the data. The effects which thermal motion and disorder within the crystal have on the diffraction intensities are discussed and appropriate correction factors are described along with a number of caveats, such as crystal twinning, which affect the subsequent steps of structure analysis.
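As a hedged illustration of the merging statistics mentioned here, the snippet below computes R_merge, one widely used measure of agreement among redundant intensity measurements; the reflection data are hypothetical, and the chapter may discuss other quality indicators as well.

```python
# Illustrative R_merge = sum|I_i - <I>| / sum I_i over all redundant observations.
from collections import defaultdict

def r_merge(observations):
    """observations: iterable of (hkl, intensity) pairs, with each unique
    reflection hkl measured several times."""
    groups = defaultdict(list)
    for hkl, intensity in observations:
        groups[hkl].append(intensity)
    numerator = denominator = 0.0
    for intensities in groups.values():
        mean_i = sum(intensities) / len(intensities)
        numerator += sum(abs(i - mean_i) for i in intensities)
        denominator += sum(intensities)
    return numerator / denominator

obs = [((1, 0, 0), 105.0), ((1, 0, 0), 98.0), ((1, 0, 0), 101.0),
       ((0, 1, 1), 240.0), ((0, 1, 1), 255.0)]
print(f"R_merge = {r_merge(obs):.3f}")
```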
Max H. Boisot
- Published in print: 1999
- Published Online: October 2011
- ISBN: 9780198296072
- eISBN: 9780191685194
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198296072.003.0003
- Subject: Business and Management, Knowledge Management, Organization Studies
This chapter focuses on acts of codification and abstraction as holding the key to economizing on data. The securing of such data economies is further taken as a critical prerequisite of effective communication and, by implication, of effective organizational methods. These ideas are brought together in a single unified conceptual framework, called the I-Space. Economizing on data-processing resources involves moving away from the uncodified end of the scale and towards the codified end, from the inarticulate towards the articulate, from the complex towards the simple. Data-processing economies are by no means an unmixed blessing: a price is paid in terms of lost flexibility, of options sacrificed. The I-Space is a conceptual framework within which the behaviour of information flows can be discovered and, through these, the creation and diffusion of knowledge within selected populations can be understood.
Moody T. Chu and Gene H. Golub
- Published in print: 2005
- Published Online: September 2007
- ISBN: 9780198566649
- eISBN: 9780191718021
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198566649.003.0002
- Subject: Mathematics, Applied Mathematics
Inverse eigenvalue problems arise in a remarkable variety of applications. This chapter briefly highlights a few applications. The discussion is divided into six categories of applications: feedback control, applied mechanics, inverse Sturm-Liouville problem, applied physics, numerical analysis, and signal and data processing. Each category covers some additional problems.
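For orientation, a generic statement of the problem class can be written as follows; the notation is mine and is not necessarily the book's.

```latex
% Generic inverse eigenvalue problem (illustrative notation): given target
% eigenvalues and a set N of structured matrices (e.g. Jacobi, Toeplitz,
% nonnegative), find a matrix in N with exactly that spectrum.
\[
  \text{Given } \{\lambda_1, \dots, \lambda_n\} \subset \mathbb{C}
  \text{ and } \mathcal{N} \subseteq \mathbb{R}^{n \times n},
  \quad \text{find } A \in \mathcal{N}
  \text{ such that } \sigma(A) = \{\lambda_1, \dots, \lambda_n\}.
\]
```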
Magy Seif El-Nasr, Truong Huy Nguyen Dinh, Alessandro Canossa, and Anders Drachen
- Published in print: 2021
- Published Online: November 2021
- ISBN: 9780192897879
- eISBN: 9780191919466
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780192897879.003.0001
- Subject: Computer Science, Human-Computer Interaction, Game Studies
This chapter introduces the topic of this book: Game Data Science. Game data science is the process of developing data-driven techniques and evidence to support decision-making across the operational, tactical, and strategic levels of game development, which is why it is so valuable. The chapter outlines the process of game data science from instrumentation, data collection, data processing, and data analysis through to reporting. It then discusses the applications of game data science, and their utility and value, for the different stakeholders. A further section traces the evolution of this process over time, which is important for situating the field and the techniques discussed in the book. The chapter also outlines established industry terminologies and defines their use in industry and academia.
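As a hedged sketch of the pipeline the abstract lists (instrumentation, collection, processing, analysis, and reporting), the toy example below aggregates hypothetical telemetry events with pandas; the event schema and the metric are illustrative and not taken from the book.

```python
# Toy game-telemetry pipeline: collected events -> processed metric -> report.
import pandas as pd

# "Collected" telemetry events (in practice streamed from an instrumented client).
events = pd.DataFrame([
    {"player": "p1", "event": "session_start",  "level": 1},
    {"player": "p1", "event": "death",          "level": 1},
    {"player": "p1", "event": "level_complete", "level": 1},
    {"player": "p2", "event": "session_start",  "level": 1},
    {"player": "p2", "event": "death",          "level": 1},
    {"player": "p2", "event": "death",          "level": 1},
])

# Processing and analysis: deaths per player per level, a simple difficulty signal.
deaths = (events[events["event"] == "death"]
          .groupby(["player", "level"]).size()
          .rename("deaths").reset_index())

# Reporting: a summary for the relevant stakeholders.
print(deaths)
print("mean deaths per player on level 1:", deaths["deaths"].mean())
```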
Mohan Matthen
- Published in print: 2005
- Published Online: April 2005
- ISBN: 9780199268504
- eISBN: 9780191602283
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0199268509.003.0003
- Subject: Philosophy, Philosophy of Mind
Descartes realized that the retinal image would have to be transformed into “movements of the brain” and then into ideas before it could become material for sensory or mental operations; he discovered what today is called “transduction”. The current neurocomputational paradigm goes further: it sees sensory systems as processing transduced signals in the search for the occurrence of specific events or conditions and discarding all information irrelevant to these. When a particular feature is detected, the system enters into a characteristic state: for instance, a neuron might fire to signal the detection of a particular feature. A perceiver gains access to this event through a conscious sensation, which is in no way an image or picture. The features that a system detects in this way are often objective characteristics of external things. This opens the door to realism with respect to sensory classification.
Max H. Boisot
- Published in print: 1999
- Published Online: October 2011
- ISBN: 9780198296072
- eISBN: 9780191685194
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198296072.003.0004
- Subject: Business and Management, Knowledge Management, Organization Studies
Knowledge assets are a source of competitive gain for the firms that have them. They allow such firms to bring better products or services to market more quickly and in greater volumes than their competitors can match. Yet the way that the possession of a knowledge asset translates into competitive advantage remains ill-understood. Extracting value from knowledge assets entails an ability to manage them as they emerge, wax, and wane through the actions of the social learning cycle (SLC). In moving its knowledge through an SLC, however, a firm incurs both data-processing and data-transmission costs. When it comes to the formation of value, knowledge assets differ from physical assets in important respects. The supply of physical goods is fully constrained by their spatio-temporal extension, while that of information goods is much less so. Information goods exhibit a natural scarcity only when they are deeply embedded in some physical substrate that is limited in space and time.
Gerd Gigerenzer
- Published in print: 2002
- Published Online: October 2011
- ISBN: 9780195153729
- eISBN: 9780199849222
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195153729.003.0002
- Subject: Philosophy, General
Two influential tools fueled the cognitive revolution: new statistical techniques and the computer. Both started as tools for data processing and ended up as theories of mind. This chapter extends the thesis of a tools-to-theories heuristic from statistical tools to the computer. It is divided into two parts. In the first part, it argues that a conceptual divorce between intelligence and calculation circa 1800, motivated by a new social organization of work, made mechanical computation conceivable. The tools-to-theories heuristic comes into play in the second part. When computers finally became standard laboratory tools in the 20th century, the computer was proposed, and with some delay accepted, as a model of mind.
Christian Gourieroux
- Published in print: 1999
- Published Online: November 2003
- ISBN: 9780198292111
- eISBN: 9780191596537
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198292112.003.0014
- Subject: Economics and Finance, Macro- and Monetary Economics, Microeconomics
Christian Gourieroux starts out with the observation that the maintained hypothesis in econometrics is that the data is generated by some unknown stochastic process and that the job of the econometrician is to try to identify that process. He deals first with the case of well specified models and the ‘general to specific’ approach. He then moves on to misspecified models and shows how by indirect inference these can help in analysing more fully specified models. He also discusses how such models may be useful in deriving useful practical strategies at both the micro and macro level. He concludes by discussing non‐stationarity and suggests how one can deal with models that only slowly become misspecified over time.
John MacDonald and Ross Crail (eds)
- Published in print: 2016
- Published Online: March 2021
- ISBN: 9780198724452
- eISBN: 9780191927478
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198724452.003.0014
- Subject: Law, Intellectual Property, IT, and Media Law
The first part of Chapter 10 sets out the origins of, and background to, the Data Protection Act 1998 and provides a glossary of idiosyncratic language. It runs through its main provisions: definitions; the rights of individuals to access data relating to themselves, and, if necessary, have it corrected or erased; rights to prevent processing likely to cause damage and distress, or use for direct marketing purposes; data controllers; control of data users; registration and enforcement; the data protection principles; and the powers of the Information Commissioner and the tribunal. The second part of the chapter deals with the interface between the Data Protection Act 1998 and the Freedom of Information Act 2000 and the effect of section 40(1) and (2) of the 2000 Act.
Russell Walker
- Published in print: 2015
- Published Online: August 2015
- ISBN: 9780199378326
- eISBN: 9780199378340
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199378326.003.0001
- Subject: Economics and Finance, Financial Economics
This chapter introduces Big Data from several dimensions: the rate of data creation by digital processes; the impact of increased data collection, storage, and processing; and paradigm shifts in the demand for data as well as in the handling of data for analysis. Important features of Big Data are defined and examined, such as the variety of data sources, the velocity of data creation, and the viral distribution of digital data. The creation of Big Data in specific markets and industries is highlighted, as well as the sourcing of data from internal and external data sources, such as customer data, operations, scientific knowledge sets, and mass markets.
Halbert White, Tae‐Hwan Kim, and Simone Manganelli
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780199549498
- eISBN: 9780191720567
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199549498.003.0012
- Subject: Economics and Finance, Econometrics
This chapter extends Engle and Manganelli's (2004) univariate CAViaR model to a multi-quantile version, MQ-CAViaR. This allows for both a general vector autoregressive structure in the conditional quantiles and the presence of exogenous variables. The MQ-CAViaR model is then used to specify conditional versions of the more robust skewness and kurtosis measures discussed in Kim and White (2004). The chapter is organized as follows. Section 2 develops the MQ-CAViaR data generating process (DGP). Section 3 proposes a quasi-maximum likelihood estimator for the MQ-CAViaR process, and proves its consistency and asymptotic normality. Section 4 shows how to consistently estimate the asymptotic variance-covariance matrix of the MQ-CAViaR estimator. Section 5 specifies conditional quantile-based measures of skewness and kurtosis based on MQ-CAViaR estimates. Section 6 contains an empirical application of our methods to the S&P 500 index. The chapter also reports results of a simulation experiment designed to examine the finite sample behavior of our estimator. Section 7 contains a summary and concluding remarks.
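For orientation, a hedged illustration of the objects involved: a CAViaR-type recursion for one conditional quantile (the symmetric absolute value specification of Engle and Manganelli, 2004) and quantile-based skewness and kurtosis measures of the sort discussed in Kim and White (2004). The chapter's MQ-CAViaR specification generalizes this to several quantiles jointly and may differ in detail.

```latex
% Symmetric absolute value CAViaR recursion for the conditional theta-quantile,
% and quantile-based (Bowley / Crow-Siddiqui type) skewness and kurtosis
% measures written with conditional quantiles q_t; notation is illustrative.
\[
  q_t(\theta) = \beta_1 + \beta_2\, q_{t-1}(\theta) + \beta_3\, \lvert y_{t-1} \rvert,
\]
\[
  \mathrm{SK}_t = \frac{q_t(0.75) + q_t(0.25) - 2\, q_t(0.5)}{q_t(0.75) - q_t(0.25)},
  \qquad
  \mathrm{KR}_t = \frac{q_t(0.975) - q_t(0.025)}{q_t(0.75) - q_t(0.25)}.
\]
```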
Lee A. Bygrave
- Published in print: 2014
- Published Online: April 2014
- ISBN: 9780199675555
- eISBN: 9780191758904
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199675555.003.0005
- Subject: Law, Intellectual Property, IT, and Media Law
This chapter discusses the basic principles of data privacy law. It presents the constituent elements of these principles, along with the similarities and differences in the way they are elaborated in the more influential international codes. These principles are: fair and lawful processing, proportionality, minimality, purpose limitation, data subject influence, data quality, data security, and sensitivity.
Ulrich Wuermeling and Isabella Oldani
- Published in print: 2021
- Published Online: June 2021
- ISBN: 9780198716662
- eISBN: 9780191918582
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198716662.003.0010
- Subject: Law, Intellectual Property, IT, and Media Law
This chapter studies the regulation of international data transfers in clouds. The General Data Protection Regulation (GDPR) stipulates that any transfer of personal data from the European Union (EU) (as well as other European Economic Area (EEA) countries) to a third country or an international organisation is subject to restrictions to ensure that the level of protection provided by the GDPR is not undermined. The GDPR requires either adequate protection or appropriate safeguards for transfers of personal data to third countries. When assessing a data transfer to a third country, a number of factors must be considered. First, it is necessary to establish whether the processing of personal data falls within the scope of the GDPR. Second, the GDPR may apply either to the cloud provider or its customer, or to both. Third, it is necessary to establish when a 'transfer' of personal data from an EU Member State to a third country is taking place and how the protection of the data can be ensured. Fourth, in some circumstances, there may be an exception to the requirement to ensure continued protection following a data transfer.
Markus Krajewski
- Published in print: 2011
- Published Online: August 2013
- ISBN: 9780262015899
- eISBN: 9780262298216
- Item type: book
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262015899.001.0001
- Subject: History, History of Science, Technology, and Medicine
Today on almost every desk in every office sits a computer. Eighty years ago, desktops were equipped with a nonelectronic data processing machine: a card file. This book traces the evolution of this proto-computer of rearrangeable parts (file cards) that became ubiquitous in offices between the world wars. The story begins with Konrad Gessner, a sixteenth-century Swiss polymath who described a new method of processing data: to cut up a sheet of handwritten notes into slips of paper, with one fact or topic per slip, and arrange as desired. In the late eighteenth century, the card catalog became the librarian’s answer to the threat of information overload. Then, at the turn of the twentieth century, business adopted the technology of the card catalog as a bookkeeping tool. The book explores this conceptual development and casts the card file as a “universal paper machine” that accomplishes the basic operations of Turing’s universal discrete machine: storing, processing, and transferring data. In telling this story, the book travels on a number of detours, telling us, for example, that the card catalog and the numbered street address emerged at the same time in the same city (Vienna); that Harvard University’s home-grown cataloging system grew out of a librarian’s laziness; and that Melvil Dewey (originator of the Dewey Decimal System) helped bring about the technology transfer of card files to business.