Michele Maggiore
- Published in print:
- 2007
- Published Online:
- January 2008
- ISBN:
- 9780198570745
- eISBN:
- 9780191717666
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198570745.003.0007
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter deals with experimental aspects of gravitational waves. It defines spectral strain sensitivity, describes the detector's noise and the pattern functions that encode its angular sensitivity, and discusses various data analysis techniques for GWs. It also introduces the theory of matched filtering. A proper interpretation of the results obtained with matched filtering relies on notions of probability and statistics. These are discussed together with an introduction to the frequentist and the Bayesian frameworks. The reconstruction of the source parameters is discussed, and the general theory is then applied to different classes of signals, namely, bursts, periodic sources, coalescing binaries, and stochastic background.
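For concreteness, here is a minimal numerical sketch of the matched-filtering idea summarized above (a toy example with assumed white noise and a made-up template, not the chapter's formalism): for white Gaussian noise the optimal detection statistic reduces to the correlation of the data with the known template, normalized by the template norm.

```python
import numpy as np

# Toy matched filter against white Gaussian noise (illustrative sketch only).
rng = np.random.default_rng(0)
fs = 1024.0                                    # assumed sampling rate (Hz)
t = np.arange(0.0, 4.0, 1.0 / fs)

# Hypothetical chirp-like template, and data containing it at half amplitude.
template = np.sin(2 * np.pi * (30 + 10 * t) * t) * np.exp(-((t - 2.0) ** 2))
sigma = 1.0                                    # noise standard deviation
data = 0.5 * template + rng.normal(scale=sigma, size=t.size)

# Matched-filter signal-to-noise ratio for a template with known arrival time:
# under noise alone this statistic is ~ N(0, 1); a signal of amplitude A gives
# an expected value of A * ||template|| / sigma.
snr = np.dot(data, template) / (sigma * np.linalg.norm(template))
print(f"matched-filter SNR ~ {snr:.1f}")
```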
Stephen Handel
- Published in print:
- 2006
- Published Online:
- September 2007
- ISBN:
- 9780195169645
- eISBN:
- 9780199786732
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195169645.003.0003
- Subject:
- Psychology, Cognitive Psychology
If the goal of sensory systems is to maximize information transmission, there should be a match between the functioning of the sensory systems and the statistical properties of the objects in the environment. Analyses of the distribution of acoustical and visual energies indicate that they follow a power law, 1/f, so that there is a constant relationship between frequency and amplitude, namely equal power in all octave regions. To encode this distribution, the auditory and visual systems use cells that resemble Gabor functions that decorrelate local sensory energy to detect the redundancies such as continuous boundaries that signify objects. There is sparse coding so that only a small number of cells fire for any input and those cells minimize the uncertainty problem by trading frequency resolution with orientation or time resolution. The perceptual outcomes are combined with Bayesian prior probabilities to identify the most likely object.
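A one-line check of the "equal power in all octave regions" consequence quoted above (a standard calculation, not Handel's notation): for a power spectral density \(S(f) = C/f\), the power in any octave \([f_0, 2f_0]\) is

\[ \int_{f_0}^{2f_0} \frac{C}{f}\, df \;=\; C \ln 2 , \]

which is independent of \(f_0\), so every octave band carries the same power.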
Peter Achinstein
- Published in print:
- 2001
- Published Online:
- November 2003
- ISBN:
- 9780195143898
- eISBN:
- 9780199833023
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0195143892.003.0005
- Subject:
- Philosophy, Philosophy of Science
A new concept of probability, objective epistemic probability, is introduced and defended. It is epistemic because it is a measure of the degree of reasonableness of believing something; it is objective because it is independent of the beliefs of any person or group. The view is contrasted with several others, including the subjective Bayesian theory of probability, which is epistemic but not objective; with the propensity theory, which is objective but not epistemic; and with Carnap's view, which, like the view defended, is both epistemic and objective but, unlike it, is relativized to a potential epistemic situation.
Yoaav Isaacs
John Hawthorne (ed.)
- Published in print:
- 2018
- Published Online:
- March 2018
- ISBN:
- 9780198798705
- eISBN:
- 9780191848469
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198798705.003.0008
- Subject:
- Philosophy, Philosophy of Religion
This chapter argues that the fine-tuning argument for the existence of God is a straightforwardly legitimate argument. The fine-tuning argument takes certain features of fundamental physics to confirm the existence of God because these features of fundamental physics are more likely given the existence of God than they are given the non-existence of God. And any such argument is straightforwardly legitimate, as such arguments follow a canonically legitimate form of empirical argumentation. The chapter explores various objections to the fine-tuning argument: that it requires an ill-defined notion of small changes in the laws of physics, that it over-generalizes, that it requires implausible presuppositions about divine intentions, and that it is debunked by anthropic reasoning. In each case it finds either that the putatively objectionable feature of the fine-tuning argument is inessential to it or that the putatively objectionable feature of the fine-tuning argument is not actually objectionable.
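The confirmation step described above can be made explicit with the odds form of Bayes' theorem (a standard formulation, not a quotation from the chapter). Writing \(G\) for theism and \(E\) for the fine-tuned features of fundamental physics,

\[ \frac{P(G \mid E)}{P(\neg G \mid E)} \;=\; \frac{P(E \mid G)}{P(E \mid \neg G)} \cdot \frac{P(G)}{P(\neg G)} , \]

so whenever \(P(E \mid G) > P(E \mid \neg G)\) the evidence raises the odds on \(G\), whatever the prior odds were.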
David C. Knill
- Published in print:
- 2006
- Published Online:
- August 2013
- ISBN:
- 9780262042383
- eISBN:
- 9780262294188
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262042383.003.0009
- Subject:
- Neuroscience, Disorders of the Nervous System
This chapter discusses how Bayesian probability theory can be used as a framework for integrating multiple sensory cues, introduces elements of Bayesian theories of perception, and describes some psychophysical tests of Bayesian cue integration.
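A minimal sketch of the Gaussian case of Bayesian cue integration (a toy example of the framework described above, with invented numbers rather than Knill's data): each cue is weighted by its reliability (inverse variance), and the combined estimate has lower variance than either cue alone.

```python
# Precision-weighted combination of two Gaussian cues (toy example).
def combine_cues(mu1, var1, mu2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2           # reliabilities (inverse variances)
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)    # reliability-weighted mean
    var = 1.0 / (w1 + w2)                     # combined variance
    return mu, var

# Hypothetical slant estimates from, say, stereo and texture cues.
mu, var = combine_cues(mu1=10.0, var1=4.0, mu2=12.0, var2=1.0)
print(mu, var)   # 11.6 0.8 -- pulled toward the more reliable cue, variance reduced
```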
Robert H. Swendsen
- Published in print:
- 2019
- Published Online:
- February 2020
- ISBN:
- 9780198853237
- eISBN:
- 9780191887703
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198853237.003.0005
- Subject:
- Physics, Condensed Matter Physics / Materials, Theoretical, Computational, and Statistical Physics
The theory of probability developed in Chapter 3 for discrete random variables is extended to probability distributions, in order to treat the continuous momentum variables. The Dirac delta function is introduced as a convenient tool to transform continuous random variables, in analogy with the use of the Kronecker delta for discrete random variables. The properties of the Dirac delta function that are needed in statistical mechanics are presented and explained. The addition of two continuous random numbers is given as a simple example. An application of Bayesian probability is given to illustrate its significance. However, the components of the momenta of the particles in an ideal gas are continuous variables.
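For the addition example mentioned above, the Dirac delta function does the bookkeeping as follows (the generic identity, not the book's specific worked example): if \(Z = X + Y\) with independent densities \(p_X\) and \(p_Y\), then

\[ p_Z(z) \;=\; \int dx \int dy \; \delta(z - x - y)\, p_X(x)\, p_Y(y) \;=\; \int dx \; p_X(x)\, p_Y(z - x) , \]

i.e. the delta function collapses one integral and leaves the familiar convolution, exactly as the Kronecker delta does for discrete sums.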
Jan Lauwereyns
- Published in print:
- 2010
- Published Online:
- August 2013
- ISBN:
- 9780262123105
- eISBN:
- 9780262277990
- Item type:
- book
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262123105.001.0001
- Subject:
- Psychology, Neuropsychology
This book examines the neural underpinnings of decision-making using “bias” as its core concept, rather than the more common but noncommittal terms “selection” and “attention.” It offers an integrative, interdisciplinary account of the structure and function of bias, which it defines as a basic brain mechanism that attaches different weights to different information sources, prioritizing some cognitive representations at the expense of others. The author introduces the concepts of bias and sensitivity based on notions from Bayesian probability, which he translates into easily recognizable neural signatures, introduced by concrete examples from the experimental literature. He examines, among other topics, positive and negative motivations for giving priority to different sensory inputs, and looks for the neural underpinnings of racism, sexism, and other forms of “familiarity bias.” The author—a poet and essayist as well as a scientist—connects findings and ideas in neuroscience to analogous concepts in such diverse fields as post-Lacanian psychoanalysis, literary theory, philosophy of mind, evolutionary psychology, and experimental economics.
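As a concrete (and entirely hypothetical) illustration of the bias/sensitivity distinction drawn from Bayesian probability: for two equal-variance Gaussian hypotheses, the prior shifts the optimal decision criterion (bias) while the separation \(d'\) (sensitivity) is unchanged.

```python
import numpy as np

# Optimal criterion for deciding between "noise" ~ N(0, 1) and "signal" ~ N(d', 1):
# the prior moves the criterion (bias) but leaves d' (sensitivity) fixed.
def optimal_criterion(d_prime, p_signal):
    prior_odds = p_signal / (1.0 - p_signal)
    return d_prime / 2.0 - np.log(prior_odds) / d_prime

d_prime = 1.5
for p in (0.2, 0.5, 0.8):
    print(p, round(optimal_criterion(d_prime, p), 3))
# 0.2 -> 1.674, 0.5 -> 0.75, 0.8 -> -0.174: as the prior favours "signal" the
# criterion slides toward responding "signal", with no change in sensitivity.
```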
C. John Mann
- Published in print:
- 1994
- Published Online:
- November 2020
- ISBN:
- 9780195085938
- eISBN:
- 9780197560525
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195085938.003.0025
- Subject:
- Computer Science, Software Engineering
The nuclear waste programs of the United States and other countries have forced geologists to think specifically about probabilities of natural events, because the legal requirements to license repositories mandate a probabilistic standard (US EPA, 1985). In addition, uncertainties associated with these probabilities and the predicted performance of a geologic repository must be stated clearly in quantitative terms, as far as possible. Geoscientists rarely have thought in terms of stochasticity or clearly stated uncertainties for their results. All scientists are taught to acknowledge uncertainty and to specify the quantitative uncertainty in each derived or measured value, but this has seldom been done in geology. Thus, the nuclear waste disposal program is forcing us to do now what we should have been doing all along: acknowledge in quantitative terms what uncertainty is associated with each quantity that is employed, whether deterministically or probabilistically. Uncertainty is a simple concept ostensibly understood to mean that which is indeterminate, not certain, containing doubt, indefinite, problematical, not reliable, or dubious. However, uncertainty in a scientific sense demonstrates a complexity which often is unappreciated. Some types of uncertainty are difficult to handle, if they must be quantified, and a completely satisfactory treatment may be impossible. Initially, only uncertainty associated with measurement was quantified. The Gaussian, or normal, probability density function (pdf) was recognized by Carl Friedrich Gauss as he studied errors in his measurements two centuries ago and developed a theory of errors still being used today. This was the only type of uncertainty that scientists acknowledged until Heisenberg stated his famous uncertainty principle in 1928. As information theory evolved during and after World War II, major advances were made in semantic uncertainty. Today, two major types of uncertainty are generally recognized (Klir and Folger, 1988): ambiguity or nonspecificity and vagueness or fuzziness. These can be subdivided further into seven types having various measures of uncertainty based on probability theory, set theory, fuzzy-set theory, and possibility theory.
John H. Doveton
- Published in print:
- 2014
- Published Online:
- November 2020
- ISBN:
- 9780199978045
- eISBN:
- 9780197563359
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199978045.003.0009
- Subject:
- Earth Sciences and Geography, Geophysics: Earth Sciences
Formation lithologies that are composed of several minerals require multiple porosity logs to be run in combination in order to evaluate volumetric porosity. In the simplest solution model, the proportions of multiple components together with porosity can be estimated from a set of simultaneous equations for the measured log responses. These equations can be written in matrix algebra form as CV = L, where C is a matrix of the component petrophysical properties, V is a vector of the component unknown proportions, and L is a vector of the log responses of the evaluated zone. The equation set describes a linear model that links the log measurements with the component mineral properties. Although porosity represents the proportion of voids within the rock, the pore space is filled with a fluid whose physical properties make it a “mineral” component. If the minerals, their petrophysical properties, and their proportions are either known or hypothesized, then log responses can be computed. In this case, the procedure is one of forward-modeling and is useful in situations of highly complex formations, where geological models are used to generate alternative log-response scenarios that can be matched with actual logging measurements in a search for the best reconciliation between composition and logs. However, more commonly, the set of equations is solved as an “inverse problem,” in which the rock composition is deduced from the logging measurements. Probably the earliest application of the compositional analysis of a formation by the inverse procedure applied to logs was by petrophysicists working in Permian carbonates of West Texas, who were frustrated by complex mineralogy in their attempts to obtain reliable porosity estimates from logs, as described by Savre (1963). Up to that time, porosities had been commonly evaluated from neutron logs, but the values were excessively high in zones that contained gypsum, caused by the hydrogen within the water of crystallization. The substitution of the density log for the porosity estimation was compromised by the occurrence of anhydrite as well as gypsum.
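A minimal numerical sketch of the inverse problem described above (hypothetical component properties and readings, not Doveton's example): with the component properties in the columns of C, the closure condition that proportions sum to one as the last row, and the zone's log readings in L, the proportions V, including porosity as the fluid component, follow from solving CV = L.

```python
import numpy as np

# Columns: calcite, dolomite, fluid (porosity). Rows: bulk density (g/cc),
# neutron porosity (fraction), and the closure constraint (proportions sum to 1).
# All property values are assumed, for illustration only.
C = np.array([
    [2.71, 2.87, 1.00],
    [0.00, 0.02, 1.00],
    [1.00, 1.00, 1.00],
])
L = np.array([2.55, 0.12, 1.00])     # hypothetical zone readings

V = np.linalg.solve(C, L)            # proportions: calcite, dolomite, porosity
print(dict(zip(["calcite", "dolomite", "porosity"], V.round(3).tolist())))
# {'calcite': 0.652, 'dolomite': 0.233, 'porosity': 0.115} for these assumed inputs
```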