Marian David
- Published in print:
- 2005
- Published Online:
- May 2010
- ISBN:
- 9780199283569
- eISBN:
- 9780191712708
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199283569.003.0009
- Subject:
- Philosophy, Metaphysics/Epistemology, Philosophy of Language
Truthmakers have come to play a central role in David Armstrong's metaphysics. They are the things that stand in the relation of truthmaking to truthbearers. This chapter focuses on the relation. More specifically, it discusses a thesis Armstrong holds about truthmaking that is of special importance to him; namely, the thesis that truthmaking is an internal relation. It explores what work this thesis is supposed to do for Armstrong, especially for his doctrine of the ontological free lunch, raising questions and pointing out difficulties along the way. At the end of the chapter, it is shown that Armstrong's preferred truthbearers generate a serious difficulty for his thesis that the truthmaking relation is internal.
Barbara Forrest and Paul R. Gross
- Published in print:
- 2004
- Published Online:
- April 2010
- ISBN:
- 9780195157420
- eISBN:
- 9780199894000
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195157420.003.0006
- Subject:
- Biology, Evolutionary Biology / Genetics
This chapter details the involvement of Jonathan Wells and William Dembski in the intelligent design movement. Wells, a member of Rev. Sun Myung Moon’s Unification Church, devotes himself to attacking evolution as a religious duty, and Dembski is the movement’s chief apologist and mathematical thinker. The chapter debunks Wells’s claims in a Unification Church sermon and in his book Icons of Evolution, where he attacks major lines of scientific support for evolution, especially common ancestry. It also documents Dembski’s lack of scientific expertise and exposes his rhetorical tactics and claims about “specified complexity.” The chapter’s critique draws from analyses of Dembski’s work by physicist Mark Perakh, biologist Wesley Elsberry, biologist Gert Korthof, and other critics who have analyzed Dembski’s claims in his written work, including his books The Design Inference and No Free Lunch.
Franklin E. Zimring
- Published in print:
- 2020
- Published Online:
- September 2020
- ISBN:
- 9780197513170
- eISBN:
- 9780197513200
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197513170.003.0003
- Subject:
- Sociology, Law, Crime and Deviance, Population and Demography
This chapter explores the issue of what aspects of criminal law and policy had such concentrated impact in the generation after 1970, when prison populations multiplied. There were no aspects of substantive law that seem to explain the pattern, but two operational features and incentives in state and local criminal process might have jointly sparked the explosion. The focus of prosecutors on statistics on convictions and punishments as measures of their adversarial effectiveness started in the 1970s. This focus appears to have combined with the perverse “free lunch” feature of state governments in which county governments, which have most of the power to determine prison terms, pay none of the costs of imprisonment. The reporting systems increased the desire for more substantial punishment, and the total lack of cost to local government inhibited restraint in penal growth.
Franklin E. Zimring
- Published in print:
- 2020
- Published Online:
- September 2020
- ISBN:
- 9780197513170
- eISBN:
- 9780197513200
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197513170.001.0001
- Subject:
- Sociology, Law, Crime and Deviance, Population and Demography
The phenomenal growth of penal confinement in the United States in the last quarter of the twentieth century is still a public policy mystery. Why did it happen when it happened? What explains the unprecedented magnitude of prison and jail expansion? Why are the current levels of penal confinement so very close to the all-time peak rate reached in 2007? What is the likely course of levels of penal confinement in the next generation of American life? Are there changes in government or policy that can avoid the prospect of mass incarceration as a chronic element of governance in the United States? This study is organized around four major concerns: What happened in the 33 years after 1973? Why did these extraordinary changes happen in that single generation? What is likely to happen to levels of penal confinement in the next three decades? What changes in law or practice might reduce this likely penal future?
Andrew Ang
- Published in print:
- 2014
- Published Online:
- August 2014
- ISBN:
- 9780199959327
- eISBN:
- 9780199382323
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199959327.003.0003
- Subject:
- Economics and Finance, Financial Economics
Mean-variance investing is all about diversification. By exploiting the interaction of assets with each other, so one asset’s gains can make up for another asset’s losses, diversification allows investors to increase expected returns while reducing risks. In practice, mean-variance portfolios that constrain the mean, volatility, and correlation inputs to reduce sampling error have performed much better than unconstrained portfolios. These constrained special cases include equal-weighted, minimum variance, and risk parity portfolios.
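The three constrained special cases named above can be written down directly from a covariance matrix. A minimal numpy sketch (illustrative only, not taken from the book; the function name is invented, and risk parity is shown in its simple inverse-volatility form):

```python
import numpy as np

def constrained_portfolios(cov):
    """Illustrative weights for three constrained special cases of mean-variance investing.

    cov: (n, n) covariance matrix of asset returns.
    """
    n = cov.shape[0]
    ones = np.ones(n)

    # Equal-weighted: ignores means, volatilities, and correlations entirely.
    w_equal = ones / n

    # Minimum variance: uses the covariance matrix but ignores expected returns.
    w = np.linalg.solve(cov, ones)          # proportional to inverse(cov) @ ones
    w_minvar = w / w.sum()

    # Risk parity, in its simple inverse-volatility form: uses volatilities only.
    inv_vol = 1.0 / np.sqrt(np.diag(cov))
    w_riskparity = inv_vol / inv_vol.sum()

    return w_equal, w_minvar, w_riskparity
```

Each case pins some of the mean, volatility, or correlation inputs at naive values rather than estimating them, which is why such constrained portfolios are less exposed to sampling error than unconstrained mean-variance weights.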
Didier Sornette
- Published in print:
- 2017
- Published Online:
- May 2018
- ISBN:
- 9780691175959
- eISBN:
- 9781400885091
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691175959.003.0002
- Subject:
- Business and Management, Finance, Accounting, and Banking
This chapter discusses the fundamental characteristics and properties of stock markets and the way prices vary from one instant to the next. It first introduces the standard view about price variations and returns on the stock market, using a simple toy model to illustrate why arbitrage opportunities (the possibility to get a “free lunch”) are often washed out by the intelligent investment of informed traders, giving rise to the concept of the efficient stock market. It then considers the efficient market hypothesis in relation to random walk by analyzing Louis Bachelier's thesis that the trajectories of stock market prices are identical to random walks. It also examines how information is incorporated in prices, thus destroying potential “free lunches.” Finally, it explains the trade-off between risk and expected return.
Jason Rosenhouse
- Published in print:
- 2012
- Published Online:
- May 2015
- ISBN:
- 9780199744633
- eISBN:
- 9780190267827
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:osobl/9780199744633.003.0016
- Subject:
- Biology, Ecology
In this chapter, the author considers rhetorical techniques that prove to be an immensely frustrating point of contact between young-Earth creationism and intelligent design (ID) proponents, with particular emphasis on the latter's practice of quoting scientists out of context and ridiculing their ideas. One example is the exchange between Michael Behe, a Lehigh University biochemist and ID proponent, and Jerry Coyne, of the Department of Ecology and Evolution at the University of Chicago. A second example comes from ID proponent William Dembski, in his book No Free Lunch, as he makes claims about the Cambrian explosion. ID proponents such as Phillip Johnson are also fond of making bold, confident presentations of arguments that are entirely incorrect. And when they are done hurling their invective, distortions, and misquotations, these ID proponents resort to accusing scientists of being arrogant.
Carmelo Giacovazzo
- Published in print:
- 2013
- Published Online:
- November 2020
- ISBN:
- 9780199686995
- eISBN:
- 9780191918377
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199686995.003.0013
- Subject:
- Chemistry, Crystallography: Chemistry
The descriptions of the various types of Fourier synthesis (observed, difference, hybrid) and of their properties, given in Chapter 7, suggest that electron density maps are not only a tool for depicting the distribution of the electrons in the target structure, but also a source of information which may be continuously exploited during the phasing process, no matter whether ab initio or non-ab initio methods were used for deriving the initial model. Here, we will describe two important techniques based on the properties of electron density maps. (i) The recursive approach for phase extension and refinement called EDM (electron density modification). Such techniques have dramatically improved the efficiency of phasing procedures, which usually end with a limited percentage of phased reflections and non-negligible phase errors. EDM techniques allow us to extend phase assignment and to improve phase quality. The author is firmly convinced that practical solution of the phase problem for structures with N_asym up to 200 atoms in the asymmetric unit may be jointly ascribed to direct methods and to EDM techniques. (ii) The AMB (automated model building) procedures; these may be considered to be partly EDM techniques and they are used for automatic building of molecular models from electron density maps. Essentially, we will refer to proteins; the procedures used for small to medium-sized molecules have already been described in Section 6.3.5. Two new ab initio phasing approaches, charge flipping and VLD, essentially based on the properties of the Fourier transform, belong to the EDM category, and since they require a special treatment, they will be described later in Chapter 9. Phase extension and refinement may be performed in reciprocal and in direct space. We described the former in Section 6.3.6; here, we are just interested in direct space procedures, the so-called EDM (electron density modification) techniques. Such procedures are based on the following hypothesis: a poor electron density map, ρ, may be modified by a suitable function, f, to obtain a new map, say ρ_mod, which better approximates the true map: ρ_mod(r) = f[ρ(r)] (8.1). If function f is chosen properly, more accurate phases can be obtained by Fourier inversion of ρ_mod, which may in turn be used to calculate a new electron density map.
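The recursion in equation (8.1) is easy to sketch on a gridded map. The following is a minimal, hypothetical illustration (assuming numpy, FFT-based inversion, and a crude positivity constraint standing in for the modifying function f; it is not the procedure described in the chapter):

```python
import numpy as np

def edm_cycle(rho, obs_amplitudes):
    """One electron density modification cycle, rho_mod(r) = f[rho(r)] as in eq. (8.1).

    rho: current electron density map on a 3D grid.
    obs_amplitudes: observed structure factor amplitudes on the same grid.
    """
    # Apply the modifying function f; here only a crude positivity constraint.
    rho_mod = np.clip(rho, 0.0, None)

    # Fourier inversion of the modified map yields new, hopefully more accurate, phases.
    F = np.fft.fftn(rho_mod)
    phases = np.angle(F)

    # Combine the new phases with the observed amplitudes ...
    F_new = obs_amplitudes * np.exp(1j * phases)

    # ... and back-transform to obtain the next map estimate.
    return np.real(np.fft.ifftn(F_new))
```

In an actual EDM run the cycle is iterated, each recomputed map being fed back in; the positivity constraint above is only a stand-in for the modifying functions discussed in the text.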
Kris McDaniel
- Published in print:
- 2017
- Published Online:
- September 2017
- ISBN:
- 9780198719656
- eISBN:
- 9780191788741
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198719656.003.0008
- Subject:
- Philosophy, Metaphysics/Epistemology
This chapter argues that the naturalness of a property or relation is proportionate to the degree of being of that property or relation, and that once we recognize this proportionality, we see a way to define naturalness in terms of degree of being. Several arguments against this purported reduction of naturalness to degrees of being are discussed and rebutted. A further argument for the reduction, the central premise of which is that theories making use of degrees of being are ideologically simpler than those making use of naturalness and quantification, is tentatively defended. Along the way, a new notion of comparative existence is introduced.
Jenny Pickworth Glusker and Kenneth N. Trueblood
- Published in print:
- 2010
- Published Online:
- November 2020
- ISBN:
- 9780199576340
- eISBN:
- 9780191917905
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199576340.003.0017
- Subject:
- Chemistry, Crystallography: Chemistry
As indicated at the start of Chapter 4, after the diffraction pattern has been recorded and measured, the next stage in a crystal structure determination is solving the structure—that is, finding a suitable “trial structure” that contains approximate positions for most of the atoms in the unit cell of known dimensions and space group. The term “trial structure” implies that the structure that has been found is only an approximation to the correct or “true” structure, while “suitable” implies that the trial structure is close enough to the true structure that it can be smoothly refined to give a good fit to the experimental data. Methods for finding suitable trial structures form the subject of this chapter and the next. In the early days of structure determination, trial and error methods were, of necessity, almost the only available way of solving structures. Structure factors for the suggested “trial structure” were calculated and compared with those that had been observed. When more productive methods for obtaining trial structures—the “Patterson function” and “direct methods”—were introduced, the manner of solving a crystal structure changed dramatically for the better. We begin with a discussion of so-called “direct methods.” These are analytical techniques for deriving an approximate set of phases from which a first approximation to the electron-density map can be calculated. Interpretation of this map may then give a suitable trial structure. Previous to direct methods, all phases were calculated (as described in Chapter 5) from a proposed trial structure. The search for other methods that did not require a trial structure led to these phase-probability methods, that is, direct methods. A direct solution to the phase problem by algebraic methods began in the 1920s (Ott, 1927; Banerjee, 1933; Avrami, 1938) and progressed with work on inequalities by David Harker and John Kasper (Harker and Kasper, 1948). The latter authors used inequality relationships put forward by Augustin Louis Cauchy and Karl Hermann Amandus Schwarz that led to relations between the magnitudes of some structure factors.
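As an illustration of the kind of magnitude-only relation referred to at the end of this passage, the standard centrosymmetric Harker-Kasper inequality can be stated in a line (a textbook example offered here for illustration, not quoted from the chapter; U_h denotes the unitary structure factor of reflection h):

```latex
% Centrosymmetric Harker-Kasper inequality (illustrative standard form,
% not taken from the chapter itself); U_h is the unitary structure factor.
\[
  U_{\mathbf{h}}^{2} \;\le\; \tfrac{1}{2}\bigl(1 + U_{2\mathbf{h}}\bigr).
\]
% When |U_h| is large enough, the right-hand side forces U_{2h} > 0, so the
% sign (phase) of the 2h reflection follows from measured magnitudes alone.
```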