Gary Herrigel
- Published in print: 2010
- Published Online: September 2010
- ISBN: 9780199557738
- eISBN: 9780191720871
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199557738.003.0099
- Subject: Business and Management, Political Economy
This chapter introduces the main themes of the second part of the book. Above all, the global trend toward vertical disintegration in complex manufacturing in advanced countries is the focus of analysis. The book shifts its attention from the long historical evolution of one industry in three countries to an analysis of supply chains in two very broad complex manufacturing sectors: automobiles and machinery. The country focus shifts as well, with broadly comparative initial chapters giving way to a focus on processes of recomposition in Germany and the United States. The theoretical focus remains constant, however: recomposition of the arrangements governing these industrial sectors is driven by creative action. The introduction also outlines the sources of data used in the analysis, above all interview-based data collected by two research consortia: the Advanced Manufacturing Project and the Global Components Project.
Mathew Penrose
- Published in print: 2003
- Published Online: September 2007
- ISBN: 9780198506263
- eISBN: 9780191707858
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198506263.001.0001
- Subject: Mathematics, Probability / Statistics
This book sets out a body of rigorous mathematical theory for finite graphs with nodes placed randomly in Euclidean d-space according to a common probability density, and edges added to connect points that are close to each other. As an alternative to classical random graph models, these geometric graphs are relevant to the modelling of real networks having spatial content, arising for example in wireless communications, parallel processing, classification, epidemiology, astronomy, and the internet. Their study illustrates numerous techniques of modern stochastic geometry, including Stein's method, martingale methods, and continuum percolation. Typical results in the book concern properties of a graph G on n random points with edges included for interpoint distances up to r, with the parameter r dependent on n and typically small for large n. Asymptotic distributional properties are derived for numerous graph quantities. These include the number of copies of a given finite graph embedded in G, the number of isolated components isomorphic to a given graph, the empirical distributions of vertex degrees, the clique number, the chromatic number, the maximum and minimum degree, the size of the largest component, the total number of components, and the connectivity of the graph.
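The model described here is easy to instantiate. The following sketch is an editorial illustration, not code from the book: it uses networkx's random_geometric_graph to build such a graph on n uniform random points in the unit square and reads off a few of the quantities whose asymptotics the book derives; n and r are arbitrary choices.

```python
# Hypothetical illustration of the random geometric graph model: n points
# placed uniformly at random in the unit square, with an edge between any
# pair of points at distance <= r.
import networkx as nx

n, r = 1000, 0.05                      # arbitrary size and connection radius
G = nx.random_geometric_graph(n, r, dim=2, seed=0)

# A few of the graph quantities studied in the book.
components = sorted(nx.connected_components(G), key=len, reverse=True)
degrees = [deg for _, deg in G.degree()]
print("largest component:", len(components[0]))
print("total components:", len(components))
print("max degree:", max(degrees), "min degree:", min(degrees))
print("connected:", nx.is_connected(G))
```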
Manuel Arellano
- Published in print: 2003
- Published Online: July 2005
- ISBN: 9780199245284
- eISBN: 9780191602481
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/0199245282.001.0001
- Subject: Economics and Finance, Econometrics
This book reviews some of the main topics in panel data econometrics. It analyses econometric models with non-exogenous explanatory variables, and the problem of distinguishing between dynamic responses and unobserved heterogeneity in panel data models. The book is divided into three parts. Part I deals with static models. Part II discusses pure time series models. Part III considers dynamic conditional models.
Daniel Halberstam
- Published in print: 2001
- Published Online: November 2003
- ISBN: 9780199245000
- eISBN: 9780191599996
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0199245002.003.0009
- Subject: Political Science, European Union
Examines the difference between the European and American perceptions of the effects and desirability of commandeering (the issue of binding commands by central government that force its component states to take regulatory action with respect to private parties) as a mechanism of central–component system interaction. Whereas US constitutional jurisprudence prohibits commandeering, the founding charters of the European Union and Germany permit such action. In successive sections, the chapter explores the relevant political and institutional background against which commandeering takes place in the USA, the EU, and Germany. It discusses (1) commandeering in international law and the apparent paradox in American views; (2) the formal supremacy of central law within component legal systems; (3) the ‘viscosity’ of the central legal system, i.e., the intensity of the obligation to adhere to the central legal system's norms; (4) the specificity of commands issued by central to component units of government (the directive as a limited tool of commandeering); (5) the corporate representation of component state systems within the law‐making bodies of central systems; and (6) the relative completeness and effectiveness of the levels of governance and the prominent alternatives to commandeering in each system, with specific reference to central government dependence (or not) on component state resources.
Michael P. Lynch
- Published in print: 2009
- Published Online: May 2009
- ISBN: 9780199218738
- eISBN: 9780191711794
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199218738.003.0010
- Subject: Philosophy, Metaphysics/Epistemology, Philosophy of Language
This chapter summarizes the book's aim, which is to articulate and defend what the book has called the functionalist theory of truth. The chapter then outlines the two components of the functionalist theory. Finally, it looks to future progress in discovering how truth manifests itself across the spectrum of our thought.
Kees Hengeveld and J. Lachlan Mackenzie
- Published in print: 2008
- Published Online: September 2008
- ISBN: 9780199278107
- eISBN: 9780191707797
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199278107.003.0001
- Subject: Linguistics, Psycholinguistics / Neurolinguistics / Cognitive Linguistics, Theoretical Linguistics
The chapter presents Functional Discourse Grammar (FDG) as part of a wider theory of verbal interaction, specifying its distinguishing features and detailing its architecture and notational conventions. It is explained how the grammar can be implemented in linguistic analysis and how it relates to linguistic functionalism and to language typology.
Paolo Mauro, Nathan Sussman, and Yishay Yafeh
- Published in print: 2006
- Published Online: May 2006
- ISBN: 9780199272693
- eISBN: 9780191603488
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0199272697.003.0006
- Subject: Economics and Finance, Financial Economics
This chapter focuses on co-movement of spreads across different countries, and on the frequency of crises shared by more than one country — contagion. Overall, co-movement of spreads among emerging markets was far higher in the 1990s than during the pre-World War I era. Sharp changes in spreads (or crises, defined in a number of ways) during the 1990s typically affected many countries at the same time, whereas global crises were virtually non-existent in the historical sample. An examination of whether co-movement was driven by common economic fundamentals showed that emerging markets in the past were more different from each other than their counterparts are today: they tended to specialize in a small number of export commodities. Differences in co-movement between the two periods were not driven solely by economic fundamentals, and may be accounted for by differences in investor behavior, particularly the presence of large investment funds today versus many individual investors in the past.
Takanori Matsumoto
- Published in print: 2006
- Published Online: September 2006
- ISBN: 9780198292746
- eISBN: 9780191603891
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0198292740.003.0002
- Subject: Economics and Finance, South and East Asia
This chapter assesses the quantitative position of ‘traditional’ industries in the economy. Traditional industry — which accounted for the largest number of gainfully occupied workers — developed steadily in the modern era and continued to provide opportunities to workers who were not absorbed by the modern industrial sector. The chapter emphasizes the role of the traditional commerce and service industries, which functioned as a ‘buffer’ against economic fluctuations. The regional diversity of these industries is also analyzed using the statistical method of principal component analysis.
Melanie M. Morey and John J. Piderit
- Published in print: 2006
- Published Online: May 2006
- ISBN: 9780195305517
- eISBN: 9780199784813
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0195305515.003.0011
- Subject: Religion, Church History
This chapter presents policy packages that draw together various recommendations included in the previous theme chapters. These packages include a variety of practical approaches, as well as situation-specific plans. For each type of university, two scenarios are considered: one in which minimum standards for Catholic identity are achieved, and a second in which these standards are far from being met. Catholic universities can adopt and adapt these plans in developing their own coordinated strategies to strengthen their institutional Catholic culture. The recommendations and approaches outlined in this chapter address the components of culture and the leadership style of the president, and, where appropriate, are articulated according to each of the four models of being a Catholic university. The chapter concludes with the estimated cost of training an adequate number of faculty in the Catholic intellectual tradition.
Andrea M. Herrmann
- Published in print: 2008
- Published Online: January 2009
- ISBN: 9780199543434
- eISBN: 9780191715693
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199543434.003.0004
- Subject: Business and Management, Strategy, Political Economy
This chapter examines how firms can compete despite comparative disadvantages of national antitrust legislation. It first studies whether pharmaceutical firms in Germany, Italy, and the UK develop different types of component standards to pursue strategies of radical product innovation, incremental product innovation, and product imitation, respectively. Since quantitative analyses of interviews with Quality Assurance managers show that firms differ in their standardization approaches according to the strategies they pursue, the chapter also looks at how national antitrust legislation affects these strategies. Contrary to the findings in the previous chapter, qualitative analyses of interviews suggest that national antitrust legislation constitutes neither a comparative institutional advantage nor a disadvantage. The chapter concludes with reflections on how these findings contribute to the resource-based view and the competitiveness literature, and on reasons why a Schumpeterian perception of firms as creative entrepreneurs may nevertheless help to explain how firms gain competitive advantages.
Robert C. Roberts and W. Jay Wood
- Published in print: 2007
- Published Online: May 2007
- ISBN: 9780199283675
- eISBN: 9780191712661
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199283675.003.0003
- Subject: Philosophy, Moral Philosophy
Regulative virtue epistemology does not give the concept of an intellectual virtue a foundational role in the development of a theory of knowledge. Instead, the virtues are a focus of analysis. Intellectual and moral virtues are not distinct categories of virtues, as other treatments have supposed them to be. The chapter attempts to answer such questions as: what makes a trait of personality a virtue? How are the intellectual virtues related to human nature? What accounts for competent, but divergent accounts of the virtues? What makes an intellectual virtue more than just an intellectual application of a virtue? Are virtues all ‘perfections’ of human nature? How are the tasks of the intellectual virtues divided among them? How are virtues individuated? How does motivation figure in the constitution of an intellectual virtue? Do all the virtues have a ‘motivational component’? If so, do they all have it in the same way?
Mathew Penrose
- Published in print: 2003
- Published Online: September 2007
- ISBN: 9780198506263
- eISBN: 9780191707858
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198506263.003.0010
- Subject: Mathematics, Probability / Statistics
This chapter concerns the largest component of G(N(n),r) in the thermodynamic limit when the number of points N(n) is Poisson with parameter n and the underlying density is uniform on the unit d-cube. It is shown that in the subcritical limit (where the limiting mean degree is below the critical point for continuum percolation), the order of the largest component grows logarithmically with n. In the supercritical limit, the order of the largest component (the ‘giant component’) is asymptotically proportional to n, while the second-largest component grows more slowly, in fact, like the logarithm of the number of points raised to the power d/(d-1). Large deviations and normal approximation results for the order of the largest component are also given.
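In the notation of the abstract, with L_1 and L_2 denoting the orders of the largest and second-largest components of G(N(n),r), the stated growth rates can be summarized as follows (the Θ shorthand is editorial, not the book's precise theorem statements):

```latex
\begin{align*}
\text{subcritical:}\quad   & L_1 = \Theta(\log n),\\
\text{supercritical:}\quad & L_1 = \Theta(n), \qquad
                             L_2 = \Theta\!\bigl((\log n)^{d/(d-1)}\bigr).
\end{align*}
```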
Joshua M. Epstein
- Published in print: 2014
- Published Online: October 2017
- ISBN: 9780691158884
- eISBN: 9781400848256
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691158884.003.0002
- Subject: Mathematics, Applied Mathematics
This part of the book describes explicit mathematical models for the affective, cognitive, and social components of Agent_Zero. It first considers some underlying neuroscience of fear and the role of the amygdala before turning to Rescorla–Wagner equations of conditioning. In particular, it explains how the fear circuit can be activated and how fear conditioning can occur unconsciously. It then reviews some standard nomenclature adopted by Ivan Pavlov in his study, Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex, with emphasis on David Hume's “association of ideas,” the theory of conditioning, and the Rescorla–Wagner model. After examining “the passions,” the discussion focuses on reason, Agent_Zero's cognitive component, and the model's social component. The central case is that the agent initiates the group's behavior despite starting with the lowest disposition, with no initial emotional inclination, no evidence, the same threshold as all others, and no orders from above.
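For reference, the Rescorla–Wagner model mentioned above updates the associative strength V of a conditioned stimulus on each trial by the standard textbook rule (the notation here is the conventional one, not necessarily Epstein's):

```latex
% alpha_X: salience of stimulus X; beta: learning rate set by the
% unconditioned stimulus; lambda: the asymptote of conditioning; the sum
% runs over all stimuli present on the trial.
\[
  \Delta V_X = \alpha_X \,\beta \left( \lambda - \sum_{Y} V_Y \right)
\]
```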
Joshua M. Epstein
- Published in print: 2014
- Published Online: October 2017
- ISBN: 9780691158884
- eISBN: 9781400848256
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691158884.003.0005
- Subject: Mathematics, Applied Mathematics
This part offers some ideas for future research and applications of Agent_Zero. It first considers Agent_Zero's numerical cartography before discussing its affective, cognitive, and social components. It then examines the feasibility of increasing the modeling resolution, scaling up the space and the agent population, and the model's contribution to empiricism. It also reviews some of the testable hypotheses advanced by Agent_Zero and various model interpretations relating to civil violence, economics, health behavior, psychology, jury dynamics, the formation and dynamics of networks, mutual escalation dynamics, and birth and intergenerational transmission. This part ends by presenting the book's overall conclusions and emphasizing the importance of Agent_Zero in establishing neurocognitive foundations for generative social science.
Markus Ullsperger
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780195372731
- eISBN: 9780199776283
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195372731.003.0010
- Subject: Neuroscience, Techniques
This chapter gives an overview of data integration methods for simultaneous EEG-fMRI, in which EEG features are extracted and used to parametrically model the fMRI data. Up to now, variants of EEG-informed fMRI analysis have been most widely and successfully applied. After a brief discussion of the rationale of this approach, its variants for ongoing and event-related EEG phenomena are explained. Studies applying EEG-informed fMRI are reviewed. The advantage of denoising methods such as independent component analysis allowing single-trial quantifications of the EEG phenomena of interest is discussed. To allow clear interpretations of covariations between electrophysiological and hemodynamic measures, further dependent variables such as behavioral data should be taken into account. The chapter closes with an outlook on future questions and ongoing methodological developments.
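As a concrete, deliberately simplified picture of EEG-informed fMRI analysis, the sketch below builds a single parametric regressor: trial-wise EEG amplitudes modulate a stimulus stick function, which is convolved with a canonical double-gamma HRF and resampled to the scan grid. All timing values and amplitudes are invented for illustration; this is not the chapter's pipeline.

```python
# Minimal sketch of an EEG-informed parametric fMRI regressor (toy values).
import numpy as np
from scipy.stats import gamma

dt, TR, n_scans = 0.1, 2.0, 200                  # assumed timing parameters
t = np.arange(0, 32, dt)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6     # SPM-style double-gamma shape
hrf /= hrf.max()

onsets = np.array([10.0, 30.0, 55.0, 80.0])      # trial onsets in seconds (toy)
eeg_amp = np.array([1.2, 0.4, 0.9, 1.5])         # single-trial EEG feature values

sticks = np.zeros(int(n_scans * TR / dt))
sticks[(onsets / dt).astype(int)] = eeg_amp - eeg_amp.mean()  # mean-centred modulation

regressor = np.convolve(sticks, hrf)[: len(sticks)]
regressor_tr = regressor[:: int(TR / dt)]        # one value per scan
print(regressor_tr.shape)                        # (200,) -> design-matrix column
```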
Vince D. Calhoun and Tom Eichele
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780195372731
- eISBN: 9780199776283
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195372731.003.0011
- Subject: Neuroscience, Techniques
Independent component analysis (ICA) is increasingly utilized as a tool for evaluating the hidden spatiotemporal structure contained within brain imaging data. This chapter first provides a brief overview of ICA and how ICA is applied to functional magnetic resonance imaging (fMRI) data. It then discusses group ICA and the application of group ICA for data fusion, with an emphasis on the methods developed within our group. It also discusses, within a larger context, the many alternative approaches that are feasible and currently in use.
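A compressed sketch of temporal-concatenation group ICA in the spirit of the approach described above (the shapes, component counts, and random data are assumptions; subject-level back-reconstruction is omitted):

```python
# Toy temporal-concatenation group ICA: per-subject PCA reduction along the
# time axis, concatenation over subjects, then spatial ICA on the aggregate.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n_subjects, n_time, n_voxels, k = 5, 120, 500, 10

subject_data = [rng.standard_normal((n_time, n_voxels)) for _ in range(n_subjects)]

# Reduce each subject's time dimension to k components, keeping all voxels.
reduced = [PCA(n_components=k).fit_transform(X.T).T for X in subject_data]
agg = np.vstack(reduced)                         # (n_subjects*k, n_voxels)

# Spatial ICA: treat voxels as samples so components are spatial maps.
ica = FastICA(n_components=k, random_state=0)
group_maps = ica.fit_transform(agg.T).T          # (k, n_voxels) group maps
print(group_maps.shape)
```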
Tom Eichele and Vince D. Calhoun
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780195372731
- eISBN: 9780199776283
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195372731.003.0012
- Subject: Neuroscience, Techniques
This chapter introduces and applies the concept of parallel spatial and temporal unmixing with group independent component analysis (ICA) for concurrent electroencephalography-functional magnetic resonance imaging (EEG-fMRI). Hemodynamic response function (HRF) deconvolution and single-trial estimation in the fMRI data were employed, and the single-trial weights were used as predictors for the amplitude modulation in the EEG. For illustration, data from a previously published performance-monitoring experiment were analyzed, in order to identify error-preceding activity in the EEG modality. EEG components that displayed such slow trends, and which were coupled to the corresponding fMRI components, are described. Parallel ICA for analysis of concurrent EEG-fMRI on a trial-by-trial basis is a very useful addition to the toolbelt of researchers interested in multimodal integration.
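The coupling step described above can be pictured as a trial-by-trial regression: single-trial fMRI component weights predict single-trial EEG component amplitudes. The toy sketch below simulates that relationship; the chapter's actual deconvolution and estimation pipeline is far more involved.

```python
# Toy trial-by-trial EEG-fMRI coupling via simple linear regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials = 300
fmri_weights = rng.standard_normal(n_trials)                   # per-trial fMRI IC weights
eeg_amp = 0.5 * fmri_weights + rng.standard_normal(n_trials)   # coupled EEG IC amplitude

res = stats.linregress(fmri_weights, eeg_amp)
print(f"slope={res.slope:.2f}, r={res.rvalue:.2f}, p={res.pvalue:.1e}")
```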
Stefan Debener, Jeremy Thorne, Till R. Schneider, and Filipa Campos Viola
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780195372731
- eISBN: 9780199776283
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195372731.003.0008
- Subject: Neuroscience, Techniques
Independent component analysis (ICA) is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals. The EEG signal consists of a mixture of various brain and non-brain contributions. Accordingly, a valid and powerful unmixing tool promises a better, more accessible representation of the statistical sources contributing to the mixed recorded signal. ICA, being potentially such a tool, may help in the detection of signal sources that cannot be identified on the raw data level alone using other, more conventional techniques. The application of ICA to EEG signals has become popular, as it provides two key features: it is a powerful way to remove artifacts from EEG data, and it helps to disentangle otherwise mixed brain signals. This chapter is concerned with evaluating and optimizing EEG decompositions by means of ICA. First, it discusses typical ICA results with reference to artifact- and brain-related components. Then, it elaborates on different EEG pre-processing steps, considered in light of the statistical assumptions underlying ICA. As such, the motivation for the chapter is to provide some practical guidelines for those researchers who wish to successfully decompose multi-channel EEG recordings.
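A minimal sketch of the artifact-removal use of ICA outlined above, with sklearn's FastICA standing in for whichever decomposition one prefers (the random data, channel count, and the indices of the "bad" components are illustrative assumptions):

```python
# Toy ICA artifact removal: decompose, zero out artifact components, remix.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_times, n_channels = 5000, 32
X = rng.standard_normal((n_times, n_channels))   # time x channels EEG matrix

ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(X)                   # component time courses

bad = [0, 3]                                     # e.g. eye-blink components, chosen by inspection
sources[:, bad] = 0.0                            # zero out artifact sources
X_clean = ica.inverse_transform(sources)         # back to channel space
print(X_clean.shape)                             # (5000, 32)
```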
Giancarlo Valente, Fabrizio Esposito, Federico de Martino, Rainer Goebel, and Elia Formisano
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780195372731
- eISBN: 9780199776283
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195372731.003.0009
- Subject: Neuroscience, Techniques
This chapter examines the most relevant aspects concerning the use of independent component analysis (ICA) for the analysis of functional magnetic resonance imaging (fMRI) data. In particular, after illustrating the fMRI-ICA model (“Problem formulation and application to fMRI”), the chapter compares the most commonly used ICA algorithms in the context of fMRI data analysis. The problems of choosing the dimensionality of the ICA decomposition, and of selecting the “meaningful” components, are considered. Optimizations of the ICA algorithms for dealing with the specific spatiotemporal properties of the fMRI data, and extensions of the ICA to multisubject fMRI studies, are described. For each of these aspects, different approaches from various groups are briefly reviewed.
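The sketch below illustrates the spatial-ICA orientation of the fMRI-ICA model and one crude heuristic for choosing the decomposition's dimensionality (PCA explained variance). Both the data and the 90% threshold are assumptions for illustration, not the chapter's recommendation.

```python
# Toy fMRI spatial ICA: data is time x voxels; PCA variance picks the model
# order; ICA runs over the voxel dimension so components are spatial maps.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 2000))             # (n_timepoints, n_voxels), toy data

pca = PCA().fit(X)
k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.90)) + 1

ica = FastICA(n_components=k, random_state=0)
spatial_maps = ica.fit_transform(X.T).T          # (k, n_voxels) independent maps
time_courses = ica.mixing_                       # (n_timepoints, k) time courses
print(k, spatial_maps.shape, time_courses.shape)
```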
Stephen J. Simpson and David Raubenheimer
- Published in print: 2012
- Published Online: October 2017
- ISBN: 9780691145655
- eISBN: 9781400842803
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691145655.003.0002
- Subject: Biology, Animal Biology
This chapter discusses the Geometric Framework (GF) for nutrition. GF satisfies the multiple-food-components requirement using a simple device known as a nutrient space. A nutrient space is a geometric space built of two or more axes, where each axis represents a food component that is suspected to play a role in influencing the animal's responses to its environment. In most cases, these food components will be nutrients, but this is not invariably the case. The nutrient space provides the common context in which to describe the pertinent aspects of the animal, its environment, the interactions between animal and environment, and the consequences of these interactions.
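A toy rendering of the nutrient-space idea (an editorial construction, not the authors' code): axes are food components, each food fixes a direction in the space, and intake traces a point whose distance to a hypothetical target can be measured.

```python
# Toy two-dimensional nutrient space with a hypothetical intake target.
import numpy as np

axes = ("protein_g", "carbohydrate_g")            # two assumed food components
target = np.array([30.0, 60.0])                   # hypothetical intake target

# Each food fixes a ratio of the two components, i.e. a direction in the space.
foods = {"food_A": np.array([2.0, 1.0]), "food_B": np.array([1.0, 4.0])}

intake = np.zeros(2)
intake += 10 * foods["food_A"]                    # eat 10 units of food A
intake += 10 * foods["food_B"]                    # eat 10 units of food B

print("intake point:", intake)                    # position in nutrient space
print("distance to target:", np.linalg.norm(intake - target))
```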