George Karniadakis and Spencer Sherwin
- Published in print: 2005
- Published Online: September 2007
- ISBN: 9780198528692
- eISBN: 9780191713491
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198528692.001.0001
- Subject: Mathematics, Numerical Analysis
Spectral methods have long been popular in direct and large eddy simulation of turbulent flows, but their use in areas with complex-geometry computational domains has historically been much more limited. More recently, the need to find accurate solutions to the viscous flow equations around complex configurations has led to the development of high-order discretization procedures on unstructured meshes, which are also recognized as more efficient for computing time-dependent oscillatory solutions over long time periods. This book, an updated edition of the original text, presents the recent and significant progress in multi-domain spectral methods at both the fundamental and application level. Containing material on discontinuous Galerkin methods, non-tensorial nodal spectral element methods in simplex domains, and stabilization and filtering techniques, this text introduces the use of spectral/hp element methods with particular emphasis on their application to unstructured meshes. It provides a detailed explanation of the key concepts underlying the methods along with practical examples of their derivation and application.
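As a companion to this abstract, the following is a minimal sketch (not drawn from the book) of the behaviour that spectral/hp methods exploit: on a single element, expanding a smooth function in Legendre polynomials and raising the polynomial order p drives the error down faster than any fixed power of 1/p. The test function and orders are arbitrary choices for illustration.

```python
# Minimal sketch (not from the book): p-type spectral convergence on one element.
# Expand a smooth function in Legendre polynomials on [-1, 1] and watch the
# truncation error fall rapidly as the polynomial order p grows.
import numpy as np
from numpy.polynomial import legendre

def legendre_coeffs(f, p, nquad=64):
    """Coefficients f_k of f(x) ~ sum_k f_k P_k(x), k = 0..p, via Gauss-Legendre quadrature."""
    x, w = legendre.leggauss(nquad)
    coeffs = []
    for k in range(p + 1):
        Pk = legendre.Legendre.basis(k)(x)
        # Orthogonality: int_{-1}^{1} P_k(x)^2 dx = 2 / (2k + 1)
        coeffs.append((2 * k + 1) / 2.0 * np.sum(w * f(x) * Pk))
    return np.array(coeffs)

f = lambda x: np.exp(np.sin(np.pi * x))   # smooth test function
xs = np.linspace(-1.0, 1.0, 401)

for p in (2, 4, 8, 16):
    c = legendre_coeffs(f, p)
    err = np.max(np.abs(legendre.legval(xs, c) - f(xs)))
    print(f"p = {p:2d}   max error = {err:.2e}")   # error decays spectrally with p
```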
Steffen L. Lauritzen
- Published in print: 2002
- Published Online: September 2007
- ISBN: 9780198509721
- eISBN: 9780191709197
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198509721.001.0001
- Subject: Mathematics, Probability / Statistics
Thorvald Nicolai Thiele was a brilliant Danish researcher of the 19th century. He was a professor of Astronomy at the University of Copenhagen and the founder of Hafnia, the first Danish private insurance company. Thiele worked in astronomy, mathematics, actuarial science, and statistics; his most spectacular contributions were in the latter two areas, where his published work was far ahead of his time. This book is concerned with his statistical work. It revolves around his three main statistical masterpieces, which are now translated into English for the first time: 1) his article from 1880, where he derives the Kalman filter; 2) his book from 1889, where he lays out the subject of statistics in a highly original way, derives the half-invariants (today known as cumulants), the notion of likelihood in the case of binomial experiments, and the canonical form of the linear normal model, and develops model criticism via analysis of residuals; and 3) an article from 1899, where he completes the theory of the half-invariants. This book also contains three chapters, written by A. Hald and S. L. Lauritzen, which describe Thiele's statistical work in modern terms and put it into historical perspective.
Lars Oxelheim and Clas Wihlborg
- Published in print: 2008
- Published Online: May 2009
- ISBN: 9780195335743
- eISBN: 9780199868964
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195335743.001.0001
- Subject: Economics and Finance, Financial Economics
This book develops “Macroeconomic Uncertainty Strategy” (MUST) as a tool for coping with the impact of macroeconomic fluctuations on risk management, performance assessment, and strategies for value enhancement. The essential elements of a corporate strategy for managing uncertainty in the macroeconomic environment include setting corporate objectives for risk management, measuring risk, choosing operational and financial instruments for risk management, filtering out macroeconomic influences on performance, and developing compensation schemes that enhance shareholder value when macroeconomic fluctuations bias performance measures. The information obtained through conventional accounting systems becomes seriously misleading in response to macroeconomic fluctuations, with the consequence that alternative ways to obtain relevant information must be considered. Conventional measures of exchange rate, interest rate, and inflation risk are similarly misleading, with the consequence that a comprehensive view of the macroeconomic impact on the firm—recognizing the interdependence between macroeconomic variables—must be developed. Most of all, strategies to deal with macroeconomic fluctuations should be considered at a strategic level in the firm in order to establish shareholder wealth maximization as the objective of risk management and reward systems. Shareholder wealth maximization also requires that external stakeholders obtain information that allows them to evaluate the competitiveness of the firm without obfuscation by macroeconomic events.
Lars Peter Hansen and Thomas J. Sargent
- Published in print: 2013
- Published Online: October 2017
- ISBN: 9780691042770
- eISBN: 9781400848188
- Item type: book
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691042770.001.0001
- Subject: Economics and Finance, History of Economic Thought
A common set of mathematical tools underlies dynamic optimization, dynamic estimation, and filtering. This book uses these tools to create a class of econometrically tractable models of prices and quantities. The book presents examples from microeconomics, macroeconomics, and asset pricing. The models are cast in terms of a representative consumer. While the book demonstrates the analytical benefits acquired when an analysis with a representative consumer is possible, it also characterizes the restrictiveness of assumptions under which a representative household justifies a purely aggregative analysis. The book unites economic theory with a workable econometrics while going beyond and beneath demand and supply curves for dynamic economies. It constructs and applies competitive equilibria for a class of linear-quadratic-Gaussian dynamic economies with complete markets. The book, based on the 2012 Gorman lectures, stresses heterogeneity, aggregation, and how a common structure unites what superficially appear to be diverse applications. An appendix describes MATLAB programs that apply to the book's calculations.
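Since the book's framework rests on linear-quadratic-Gaussian machinery, here is a hedged sketch of the generic ingredient behind such models: a discrete-time, discounted linear-quadratic regulator solved by iterating the Riccati equation. The matrices `A`, `B`, `Q`, `R` and the discount factor below are illustrative placeholders, not the book's MATLAB programs or calibrations.

```python
# Hedged sketch: a generic discounted linear-quadratic regulator solved by
# Riccati iteration. All matrices are made-up placeholders for illustration.
import numpy as np

def lqr_riccati(A, B, Q, R, beta=0.95, tol=1e-10, max_iter=10_000):
    """Iterate P <- Q + beta*A'PA - beta^2*A'PB (R + beta*B'PB)^-1 B'PA until it
    converges; return the value-function matrix P and feedback gain F (u = -F x)."""
    P = Q.copy()
    for _ in range(max_iter):
        BtPB = R + beta * B.T @ P @ B
        BtPA = B.T @ P @ A
        P_new = Q + beta * A.T @ P @ A - beta**2 * A.T @ P @ B @ np.linalg.solve(BtPB, BtPA)
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    F = beta * np.linalg.solve(R + beta * B.T @ P @ B, B.T @ P @ A)
    return P, F

# Toy two-state, one-control example (illustrative numbers only).
A = np.array([[1.0, 0.0], [0.0, 0.9]])
B = np.array([[1.0], [0.0]])
Q = np.array([[1.0, -0.5], [-0.5, 0.25]])
R = np.array([[0.1]])
P, F = lqr_riccati(A, B, Q, R)
print("feedback gain F =", F)
```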
Barbara Forey, Jan Hamling, Peter Lee, and Nicholas Wald (eds)
- Published in print: 2002
- Published Online: September 2009
- ISBN: 9780198508564
- eISBN: 9780191723773
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198508564.001.0001
- Subject: Public Health and Epidemiology, Public Health, Epidemiology
International Smoking Statistics presents a valuable collection of smoking data relating to thirty countries—most of Europe, and also Australia, Canada, Japan, New Zealand, USA, and the former USSR. Annual data on the national sales of all types of tobacco products are presented for the years up to 1995, with over 100 years of data available for some countries. Both manufactured and hand-rolled cigarettes are considered, as well as pipes and cigars, and smokeless tobacco products. These data are also presented on a per-adult basis. The transition from plain to filter and to lower tar cigarettes is documented. Up to fifty years of survey-based data are presented on the sex- and age-specific prevalence of smoking and amount smoked. National data are shown when available, supplemented by relevant data from international, regional, and epidemiological studies. Surveys of adolescents and adults are included. Appropriate attention is given to the varying definitions and methodologies of the source material, while presenting data in a consistent format. Some summary statistics are derived using standardized methods which are fully described, allowing international comparisons to be made.
David B. Audretsch, Max C. Keilbach, and Erik E. Lehmann
- Published in print: 2006
- Published Online: January 2007
- ISBN: 9780195183511
- eISBN: 9780199783663
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195183511.003.0011
- Subject: Economics and Finance, Development, Growth, and Environmental
This chapter presents a synthesis of the discussions in the preceding chapters. It argues that entrepreneurship makes a unique contribution to economic growth by permeating the knowledge filter and commercializing ideas that would otherwise remain uncommercialized. Entrepreneurial opportunities are not at all exogenous, or given, in the Knowledge Spillover Theory of Entrepreneurship. Rather, they are endogenously generated by the extent of investments in new knowledge. Thus, a context rich in knowledge will generate more entrepreneurial opportunities than a context with impoverished knowledge.
Lee A. Bygrave and Terje Michaelsen
- Published in print: 2009
- Published Online: May 2009
- ISBN: 9780199561131
- eISBN: 9780191721199
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199561131.003.0004
- Subject: Business and Management, Information Technology, Political Economy
This chapter describes the main organizations that are concerned directly with Internet governance. It outlines the relevant responsibilities and agendas of the respective organizations, together with their sources of funding and their relationships with each other. Attention is directed mainly at transnational bodies. These include the Internet Society, Internet Architecture Board, Internet Engineering Task Force, World Wide Web Consortium, and Internet Corporation for Assigned Names and Numbers. The remainder of the chapter describes the various roles played by national governments, alone and in concert, in Internet governance. Using the self-governance ideals of ‘digital libertarianism’ as foil, it delineates the growing influence of governments in the field.
J. D. Moreland and S. Westland
- Published in print: 2003
- Published Online: April 2010
- ISBN: 9780198525301
- eISBN: 9780191584947
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198525301.003.0028
- Subject: Psychology, Cognitive Neuroscience
Macular pigment (MP) is a natural filter with ‘notch’ transmission characteristics. This chapter examines the effects of MP on surface colours in Normal and Anomalous Trichromats for a database of 1782 reflectance spectra. The results show that increases in MP concentration produce a general clockwise rotation of chromaticity around the illuminant point for Normals and Anormals. The chromaticity shifts, associated with rotation, increase with distance from the illuminant point.
Nelson Cowan
- Published in print: 1998
- Published Online: January 2008
- ISBN: 9780195119107
- eISBN: 9780199870097
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195119107.003.0005
- Subject: Psychology, Cognitive Psychology
Donald Broadbent's 1958 information processing model included an attention filter allowing only one channel of information to be processed, with the rest filtered out. In contrast to this “early filter” view, there subsequently emerged “late filter” views in which all input is processed to a semantic level but a filter prevents responding to multiple channels at once. There is evidence seeming to support both views. This chapter proposes an intermediate view, further developing Ann Treisman's attenuation theory in which all incoming stimulation contacts long-term memory, causing automatic activation of some memory features (at least physically-based features, e.g., tone pitch and light hue). Changes in stimulation can cause orienting responses, in which attention is recruited toward the change. After a neural model of stimuli is constructed, there is habituation of orienting; feature activation ceases to recruit attention. Thus, the processes of habituation and orienting comprise an intermediate-level attention filter.
Margaret Jane Radin
- Published in print: 2012
- Published Online: October 2017
- ISBN: 9780691155333
- eISBN: 9781400844838
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691155333.003.0010
- Subject: Law, Company and Commercial Law
This chapter considers “private” reform ideas or market solutions for improving the normative and democratic acceptability of boilerplate terms. It begins with a discussion of one potentially important “private” incentive: reputation. Some firms are likely to be especially cognizant of the need to maintain good relationships with their users, and therefore responsive to the threat of reputational harm. This is most likely to be true for firms that have users who are reasonably savvy about issues of user rights, such as data privacy or information copying. After outlining the conditions conducive to consumer pushback, the chapter examines other private or market approaches, such as those involving rating agencies, seals of approval, and certifications. Finally, it looks at automated filtering or “machine bargaining,” and especially the implementation of filtering systems for personal computers.
George M. Branch
- Published in print: 2008
- Published Online: May 2008
- ISBN: 9780195319958
- eISBN: 9780199869596
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195319958.003.0003
- Subject: Biology, Aquatic Biology
The chapter summarizes the ecology of the subtidal reefs of South Africa. Discussions include the role of wind, productivity, and oceanographic conditions; the importance of currents and physical forces; and species interactions involving rock lobsters, abalone, and sea urchins.
Peter Hansen, Morten Kringelbach, and Riitta Salmelin (eds)
- Published in print: 2010
- Published Online: September 2010
- ISBN: 9780195307238
- eISBN: 9780199863990
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195307238.001.0001
- Subject: Neuroscience, Behavioral Neuroscience, Techniques
Magnetoencephalography (MEG) is an exciting brain imaging technology that allows real-time tracking of neural activity, making it an invaluable tool for advancing our understanding of brain function. This introduction to MEG brings together chapters which provide the basic tools for planning and executing MEG experiments, as well as analyzing and interpreting the resulting data. Chapters on the basics describe the fundamentals of MEG and its instrumentation, and provide guidelines for designing experiments and performing successful measurements. Chapters on data analysis present it in detail, from general concepts and assumptions to analysis of evoked responses and oscillatory background activity. Chapters on solutions propose potential solutions to the inverse problem using techniques such as minimum norm estimates, spatial filters, and beamformers. Chapters on combinations elucidate how MEG can be used to complement other neuroimaging techniques. Chapters on applications provide practical examples of how to use MEG to study sensory processing and cognitive tasks, and how MEG can be used in a clinical setting.
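To make one of the listed inverse approaches concrete, below is a minimal sketch of an LCMV (linearly constrained minimum variance) beamformer spatial filter, assuming a single source location with a 3-column lead field and a regularized sensor covariance. The lead field, covariance, and channel count used here are random placeholders, not anything from the book.

```python
# Hedged sketch of an LCMV beamformer weight computation for one source location.
# The lead field L and data covariance C are random stand-ins; in practice they
# come from a head model and from the measured MEG data.
import numpy as np

def lcmv_weights(L, C, reg=0.05):
    """LCMV weights W = (L' Cinv L)^-1 L' Cinv.
    L: (n_sensors, 3) lead field; C: (n_sensors, n_sensors) data covariance."""
    Creg = C + reg * np.trace(C) / C.shape[0] * np.eye(C.shape[0])  # Tikhonov regularization
    Cinv = np.linalg.inv(Creg)
    return np.linalg.solve(L.T @ Cinv @ L, L.T @ Cinv)   # shape (3, n_sensors)

rng = np.random.default_rng(0)
n_sensors, n_times = 306, 1000
data = rng.standard_normal((n_sensors, n_times))   # placeholder MEG data
C = np.cov(data)
L = rng.standard_normal((n_sensors, 3))            # placeholder lead field
W = lcmv_weights(L, C)
source_ts = W @ data                               # estimated 3-component source time course
print(source_ts.shape)                             # (3, n_times)
```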
Paul L. Nunez and Ramesh Srinivasan
- Published in print: 2006
- Published Online: May 2009
- ISBN: 9780195050387
- eISBN: 9780199865673
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195050387.003.0008
- Subject: Neuroscience, Neuroendocrine and Autonomic, Techniques
The source distribution underlying any EEG waveform need not be generated by a single dipole source located in a restricted cortical area. High-resolution EEG methods apply spatial filters to scalp data rather than fitting the data to a source model. High-resolution filters isolate those aspects of the EEG that are associated with superficial cortical tissue immediately surrounding the electrode. Signals removed by applying this spatial filter can generally be eliminated as local source candidates. This chapter demonstrates theoretically that the surface Laplacian is a band-pass filtered representation of source activity, as compared to scalp potentials dominated by very low spatial frequencies in the source distribution. The surface Laplacian increases the sensitivity of each electrode to nearby superficial cortical sources. Such Laplacian-identified sources are likely to be radial dipoles in proximal gyral surfaces. High-resolution EEG offers the advantage of viewing cortical dynamics at smaller spatial scales than are possible with raw scalp potentials.
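For illustration only, here is the simplest possible surface-Laplacian estimate, a nearest-neighbour (Hjorth-style) filter in which each channel is re-referenced to the mean of its neighbours. The chapter's high-resolution methods rely on more sophisticated spline Laplacians; the neighbour table and data below are hypothetical.

```python
# Hedged sketch: a nearest-neighbour (Hjorth-style) surface Laplacian, the
# simplest spatial filter that emphasizes local superficial sources. Not the
# spline-Laplacian approach of the chapter; channel layout is invented.
import numpy as np

def hjorth_laplacian(potentials, neighbors):
    """potentials: (n_channels, n_times) scalp EEG;
    neighbors: dict channel_index -> list of neighbouring channel indices.
    Returns each channel minus the mean of its neighbours."""
    lap = np.empty_like(potentials, dtype=float)
    for ch, nbrs in neighbors.items():
        lap[ch] = potentials[ch] - potentials[nbrs].mean(axis=0)
    return lap

# toy 4-channel example with a hypothetical neighbour table
eeg = np.random.default_rng(1).standard_normal((4, 500))
nbrs = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
lap = hjorth_laplacian(eeg, nbrs)
print(lap.shape)   # (4, 500): spatially filtered scalp potentials
```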
Neil Shephard
- Published in print: 2005
- Published Online: September 2007
- ISBN: 9780198566540
- eISBN: 9780191718038
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198566540.003.0012
- Subject: Mathematics, Probability / Statistics
This chapter explores whether there are discontinuities in financial price processes using daily data on the Japanese yen and United States dollar. It opens with a brief description of the data, notation, and models to be used, and then turns to a semi-parametric analysis based on the realized quadratic variation process, which suggests appreciable evidence of discontinuities in the data. The parametric modeling of the local martingale of prices using a Brownian motion plus a compound Poisson process is described. A particle filter is used to fit this model, which is compared with the previous approach. The parametric approach seems to miss almost all the real discontinuities in the process. The chapter ends with a brief discussion of some open problems.
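A common diagnostic in this literature (sketched here under assumptions, not necessarily the chapter's exact statistic) compares realized variance, which estimates total quadratic variation, with realized bipower variation, which is robust to jumps; their difference estimates the jump contribution. The simulated returns below stand in for the chapter's yen/dollar data.

```python
# Hedged sketch of a realized-measure jump diagnostic. Simulated high-frequency
# returns are placeholders, not the chapter's exchange-rate data.
import numpy as np

def realized_variance(returns):
    return np.sum(returns ** 2)

def bipower_variation(returns):
    # (pi/2) * sum |r_i||r_{i-1}| is consistent for the continuous part only
    return (np.pi / 2.0) * np.sum(np.abs(returns[1:]) * np.abs(returns[:-1]))

rng = np.random.default_rng(42)
n = 288                                    # e.g. 5-minute returns over one day
sigma = 0.01 / np.sqrt(n)
r = sigma * rng.standard_normal(n)         # Brownian part
r[100] += 0.005                            # add one jump

rv, bv = realized_variance(r), bipower_variation(r)
print(f"RV = {rv:.3e}  BV = {bv:.3e}  jump estimate = {max(rv - bv, 0.0):.3e}")
```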
David B. Audretsch
- Published in print: 2007
- Published Online: September 2007
- ISBN: 9780195183504
- eISBN: 9780199783885
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195183504.003.0006
- Subject: Economics and Finance, Development, Growth, and Environmental
Knowledge has emerged as the critical factor for generating economic growth, jobs, and competitiveness in a globalized economy. However, science, research, and human capital do not do taxpayers much good if these investments in new knowledge are not translated into jobs and growth. There is no shortage of educated, scientific, engineering, creative, and dedicated people, yet their new ideas and insights are not always picked up by the large companies. The reason is what scholars have only recently termed the knowledge filter. The knowledge filter stands between investment in research and science, and more generally in knowledge and ideas, on the one hand, and their commercialization through innovation, leading ultimately to economic growth, on the other. It is the knowledge filter that impedes the spillover of knowledge and ideas from actually becoming commercialized into the innovations that form the basis for economic growth.
David B. Audretsch
- Published in print: 2007
- Published Online: September 2007
- ISBN: 9780195183504
- eISBN: 9780199783885
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195183504.003.0007
- Subject: Economics and Finance, Development, Growth, and Environmental
By penetrating the knowledge filter and trying out ideas that might otherwise never have made it through, entrepreneurship serves as the missing link to innovation and ultimately economic growth and job creation. Entrepreneurship is an important mechanism permeating the knowledge filter to facilitate the spillover of knowledge and ultimately generate economic growth. Entrepreneurship provides the missing link to innovation and growth in virtually every context where people have ideas and starting a new firm is not blocked or impeded. For example, universities can be interpreted as being hotbeds for generating new knowledge and ideas. Entrepreneurship provides the vision to use this knowledge. If there is no vision there is no entrepreneurship. If there is a vision, but no action or activity, there is also no entrepreneurship.
Lauri Parkkonen
- Published in print: 2010
- Published Online: September 2010
- ISBN: 9780195307238
- eISBN: 9780199863990
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195307238.003.0002
- Subject: Neuroscience, Behavioral Neuroscience, Techniques
This chapter reviews the methods and technology required for magnetoencephalographic measurements. The key concepts discussed include sensor components, noise reduction methods, co-registration of MEG with anatomical images, stimulators and their MEG compatibility, and filtering and averaging.
T. N. Thiele
- Published in print: 2002
- Published Online: September 2007
- ISBN: 9780198509721
- eISBN: 9780191709197
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198509721.003.0002
- Subject: Mathematics, Probability / Statistics
This chapter presents Thiele's first paper on the method of least squares. This paper was so far ahead of its time that only a few appreciated the results. Thiele's recursive algorithm developed in this paper served as an important source of inspiration for Lauritzen and Spiegelhalter (1988). His geometric construction of the Kalman filter is described as a novelty.
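For readers unfamiliar with the filter Thiele anticipated, here is the standard modern recursion for a scalar local-level model (a random walk observed with noise). This is a textbook Kalman filter, not Thiele's geometric construction, and the noise variances below are arbitrary illustrative values.

```python
# Hedged sketch: the modern scalar Kalman filter for a local-level model.
# Shown only to make concrete the recursion the chapter says Thiele anticipated.
import numpy as np

def local_level_kalman(y, sigma_eps2, sigma_eta2, m0=0.0, p0=1e6):
    """y: observations; sigma_eps2: observation noise variance;
    sigma_eta2: random-walk (state) noise variance. Returns filtered means."""
    m, p = m0, p0
    filtered = []
    for yt in y:
        p_pred = p + sigma_eta2                 # predict
        k = p_pred / (p_pred + sigma_eps2)      # Kalman gain
        m = m + k * (yt - m)                    # update mean
        p = (1.0 - k) * p_pred                  # update variance
        filtered.append(m)
    return np.array(filtered)

rng = np.random.default_rng(0)
level = np.cumsum(0.1 * rng.standard_normal(200))    # hidden random walk
obs = level + 0.5 * rng.standard_normal(200)         # noisy observations
est = local_level_kalman(obs, sigma_eps2=0.25, sigma_eta2=0.01)
print(np.round(est[-5:], 3))
```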
Bernhard Blümich
- Published in print: 2003
- Published Online: January 2010
- ISBN: 9780198526766
- eISBN: 9780191709524
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198526766.003.0007
- Subject: Physics, Condensed Matter Physics / Materials
Compared to other imaging methods, the unique feature of nuclear magnetic resonance (NMR) imaging is the abundance of parameters that can be exploited for image contrast. These parameters are mostly molecular in nature and are linked to the chemical and physical properties of the sample. Examples of molecular chemical parameters include the chemical shift and the indirect spin-spin coupling. Molecular physical parameters are lineshapes, relaxation times, the self-diffusion coefficient, and the strength of the dipole-dipole interaction. The last of these is the fundamental quantity by which distances can be probed either on a molecular level by the dipole-dipole coupling tensor or on a mesoscopic level by spin diffusion. This chapter discusses NMR image contrast, optimisation of contrast, magnetisation filters, transfer functions and mobility filters, contrast parameters, NMR parameters and material properties, translational diffusion and transport filters, local-field filters, combination filters, morphology filters, multi-quantum filters, homonuclear magnetisation-transfer filters, heteronuclear magnetisation-transfer filters, spectroscopic parameters, multi-dimensional spectroscopy, sample manipulation, temperature variation, magnetic field distortions, contrast agents, and noble gases.
Michele Maggiore
- Published in print: 2007
- Published Online: January 2008
- ISBN: 9780198570745
- eISBN: 9780191717666
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198570745.003.0007
- Subject: Physics, Particle Physics / Astrophysics / Cosmology
This chapter deals with experimental aspects of gravitational waves. It defines spectral strain sensitivity, describes the detector's noise and the pattern functions that encode its angular sensitivity, and discusses various data analysis techniques for GWs. It also introduces the theory of matched filtering. A proper interpretation of the results obtained with matched filtering relies on notions of probability and statistics. These are discussed together with an introduction to the frequentist and the Bayesian frameworks. The reconstruction of the source parameters is discussed, and the general theory is then applied to different classes of signals, namely, bursts, periodic sources, coalescing binaries, and stochastic background.
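As a concrete illustration of matched filtering in its simplest setting, the sketch below correlates data against a known template under white noise. Real gravitational-wave searches work in the frequency domain and weight by the detector's noise spectral density; the toy chirp template, sampling rate, and injection amplitude here are invented for illustration.

```python
# Hedged sketch of matched filtering in the white-noise case: slide a known
# template across the data and look for a peak in the correlation. Real GW
# pipelines whiten by the noise power spectral density; this toy omits that.
import numpy as np

def matched_filter(data, template):
    """Correlate `data` with a unit-norm copy of `template` at every lag."""
    template = template / np.linalg.norm(template)
    return np.correlate(data, template, mode="valid")

rng = np.random.default_rng(7)
fs, dur = 1024, 4.0
t = np.arange(0, dur, 1.0 / fs)

# toy "chirp" template: sinusoid with linearly increasing frequency
template = np.sin(2 * np.pi * (30.0 + 20.0 * t[:fs]) * t[:fs])

data = rng.standard_normal(t.size)
data[2048:2048 + template.size] += 0.3 * template    # buried signal at sample 2048

snr = matched_filter(data, template)
print("peak at sample", int(np.argmax(np.abs(snr))))  # should be near 2048
```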