Daniel Steel
- Published in print:
- 2007
- Published Online:
- January 2008
- ISBN:
- 9780195331448
- eISBN:
- 9780199868063
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195331448.001.0001
- Subject:
- Philosophy, Philosophy of Science
The biological and social sciences often generalize causal conclusions from one context to others that may differ in some relevant respects, as is illustrated by inferences from animal models to humans or from a pilot study to a broader population. Inferences like these are known as extrapolations. How and when extrapolation can be legitimate is a fundamental question for the biological and social sciences that has not received the attention it deserves. This book argues that previous accounts of extrapolation are inadequate and proposes a better approach that is able to answer methodological critiques of extrapolation from animal models to humans.
Daniel P. Steel
- Published in print:
- 2007
- Published Online:
- January 2008
- ISBN:
- 9780195331448
- eISBN:
- 9780199868063
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195331448.003.0010
- Subject:
- Philosophy, Philosophy of Science
This chapter summarizes those that went before and ends by sketching some open questions.
Stephen Yablo
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691144955
- eISBN:
- 9781400845989
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691144955.003.0009
- Subject:
- Philosophy, Philosophy of Language
If A implies B, then is there always something that we can point to as what A adds to B? The logician, or logical engineer, says yes. The mysterian says no. To get a bead on the issue, this chapter distinguishes four types of extrapolation: inductive, as in Hume; projective, as in Goodman; alethic, as in Kripkenstein; and type 4, as in Wittgenstein's “conceptual problem of other minds” and his example of 5 o'clock on the sun. Logical subtraction is understood, to begin with, as type 4 extrapolation. A–B is the result of extrapolating A beyond the bounds imposed by B. The question is whether this can always be done.
A. Townsend Peterson, Jorge Soberón, Richard G. Pearson, Robert P. Anderson, Enrique Martínez-Meyer, Miguel Nakamura, and Miguel Bastos Araújo
- Published in print:
- 2011
- Published Online:
- October 2017
- ISBN:
- 9780691136868
- eISBN:
- 9781400840670
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691136868.003.0007
- Subject:
- Biology, Ecology
This chapter explains how environmental data can be used to create models that characterize species’ ecological niches in environmental space. It introduces a model, which is a function constructed by means of data analysis for the purpose of approximating the true relationship (that is, the niche) in the form of the function f linking the environment and species occurrences. The chapter first considers the “meaning” of the function f that is being estimated by the algorithms before discussing the modeling algorithms, the approaches used to implement ecological niche modeling, model calibration, model complexity and overfitting, and model extrapolation and transferability. The chapter concludes with an overview of differences among methods and selection of “best” models, along with strategies for characterizing ecological niches in ways that allow visualization, comparisons, definition of quantitative measures, and more.
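The abstract frames a niche model as a function f, estimated from data, that links environmental predictors to species occurrences. A minimal sketch of that idea is below, using logistic regression on synthetic presence/absence data; the chapter's own algorithms are not reproduced here, scikit-learn is assumed to be available, and every name and number is illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic environmental data: two predictors (say, temperature and precipitation).
env = rng.normal(size=(500, 2))
# Synthetic occurrences: presence probability rises with the first predictor.
prob = 1.0 / (1.0 + np.exp(-2.0 * env[:, 0]))
occ = rng.random(500) < prob

# f: a fitted function from environment to occurrence probability (the estimated "niche").
f = LogisticRegression().fit(env, occ)
print(f.predict_proba([[0.0, 0.0], [2.0, 0.0]])[:, 1])  # suitability at two environments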
Daniel P. Steel
- Published in print:
- 2007
- Published Online:
- January 2008
- ISBN:
- 9780195331448
- eISBN:
- 9780199868063
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195331448.003.0001
- Subject:
- Philosophy, Philosophy of Science
This chapter introduces the general methodological challenges that confront extrapolation in the biological and social sciences, and sketches the outlines of the mechanisms approach to those challenges that is developed in the rest of the book.
Daniel P. Steel
- Published in print:
- 2007
- Published Online:
- January 2008
- ISBN:
- 9780195331448
- eISBN:
- 9780199868063
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195331448.003.0005
- Subject:
- Philosophy, Philosophy of Science
This chapter argues that previous accounts of extrapolation, either by reference to capacities or mechanisms, do not adequately address the challenges confronting extrapolation. It then begins the account of how the mechanisms approach can be developed so as to do better. The central concept in this account is what I term comparative process tracing.
Daniel P. Steel
- Published in print:
- 2007
- Published Online:
- January 2008
- ISBN:
- 9780195331448
- eISBN:
- 9780199868063
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195331448.003.0006
- Subject:
- Philosophy, Philosophy of Science
This chapter further develops the mechanisms approach to extrapolation begun in chapter 5 and explores its relevance to the hotly debated issue of ceteris paribus laws. It argues that the difficulties that beset the most problematic type of ceteris paribus law vanish if “ceteris paribus” is interpreted as indicating an inference schema concerning extrapolation rather than as qualifying a universally quantified generalization.
G. E. R. Lloyd
- Published in print:
- 2009
- Published Online:
- September 2009
- ISBN:
- 9780199567874
- eISBN:
- 9780191721649
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199567874.003.0003
- Subject:
- Philosophy, History of Philosophy, Ancient Philosophy
This chapter explores the different ways in which what we can call mathematical investigations have been defined and practised in different societies. It argues that there is no one route that the development of mathematics had to, or did in practice, follow, once it became the subject of self-conscious inquiries. We have rich evidence on this issue from both ancient Greece and China in particular. Whereas some Greek mathematicians privilege demonstration in the axiomatic-deductive mode, Chinese mathematics was more concerned with heuristics and with growing the subject by extrapolation and analogy. There are also striking differences between Greek and Chinese understandings of the relation between mathematics and other areas of investigation, such as music theory and astronomy.
Paul Humphreys
- Published in print:
- 2004
- Published Online:
- February 2006
- ISBN:
- 9780195158700
- eISBN:
- 9780199785964
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0195158709.003.0001
- Subject:
- Philosophy, Philosophy of Science
Extrapolation, conversion, and augmentation are three ways in which our natural observational and computational abilities can be extended. Examples of each are given and the possibility of and need for a completely automated science is explored, with particular reference to the data explosion.
Charles F. Manski
- Published in print:
- 2019
- Published Online:
- May 2020
- ISBN:
- 9780691194738
- eISBN:
- 9780691195360
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691194738.003.0002
- Subject:
- Economics and Finance, Econometrics
This chapter critiques how evidence from randomized trials has been used to inform medical decision making. Trials have long enjoyed a favored status within medical research on treatment response and are often called the “gold standard” for such research. The U.S. Food and Drug Administration (FDA) ordinarily considers only trial data when making decisions on drug approval. The well-known appeal of trials is that, given sufficient sample size and complete observation of outcomes, they deliver credible findings about treatment response within the study population. However, it is also well-known that extrapolation of findings from trials to clinical practice can be difficult. Researchers and guideline developers often use untenable assumptions to extrapolate. The chapter refers to this practice as wishful extrapolation. It discusses multiple reasons why extrapolation of research findings to clinical practice may be suspect.
Eaton E. Lattman, Thomas D. Grant, and Edward H. Snell
- Published in print:
- 2018
- Published Online:
- September 2018
- ISBN:
- 9780199670871
- eISBN:
- 9780191749575
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199670871.003.0008
- Subject:
- Physics, Soft Matter / Biological Physics
This chapter describes some of the processes that are often carried out within specific data processing software associated with an instrument but are invisible to the user. It is useful to be aware of them. These include dealing with detector artifacts and limitations, and the integration of the signal from a two-dimensional image to produce a one-dimensional scattering profile, among other steps.
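One of the steps named above, integrating a two-dimensional detector image into a one-dimensional scattering profile, can be sketched as an azimuthal average over radial bins. This is a minimal sketch, not the chapter's software: the function name, the pixel-space radius (rather than a calibrated q axis), and the omission of masking and artifact handling are all simplifying assumptions.

```python
import numpy as np

def radial_profile(image, center, n_bins=200):
    """Azimuthally average a 2D detector image into a 1D intensity-vs-radius profile."""
    rows, cols = np.indices(image.shape)
    r = np.hypot(rows - center[0], cols - center[1])       # distance of each pixel from the beam center
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    sums, _ = np.histogram(r, bins=edges, weights=image)   # total intensity per radial bin
    counts, _ = np.histogram(r, bins=edges)                # number of pixels per radial bin
    profile = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    radii = 0.5 * (edges[:-1] + edges[1:])                 # bin centers, in pixels
    return radii, profile

# Illustrative use on a synthetic 128x128 image:
img = np.random.default_rng(0).poisson(5.0, size=(128, 128)).astype(float)
radii, intensity = radial_profile(img, center=(64, 64))
```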
Bradford Lyau
- Published in print:
- 2017
- Published Online:
- May 2019
- ISBN:
- 9781496811523
- eISBN:
- 9781496811561
- Item type:
- chapter
- Publisher:
- University Press of Mississippi
- DOI:
- 10.14325/mississippi/9781496811523.003.0012
- Subject:
- Literature, Film, Media, and Cultural Studies
Bradford Lyau, in “Many Paths, One Journey: Cixin Liu’s Three Body Problem Novels,” carefully places Cixin Liu’s Three Body trilogy within both Chinese and Western literary traditions before offering a brief critical analysis of the first two books, where humanity struggles against an impending alien invasion and how humanity also faces its destiny. These two resonances with Western literature, both popular and elite, invite the reader to consider Liu’s novel’s different literary roots: 1) American genre science fiction, 2) the philosophical tale as it emerged in the West, 3) the role of philosophy in Chinese literature, and 4) Chinese science fiction.
Marian Stamp Dawkins
- Published in print:
- 1998
- Published Online:
- March 2012
- ISBN:
- 9780198503200
- eISBN:
- 9780191686474
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198503200.003.0004
- Subject:
- Psychology, Cognitive Psychology
This chapter begins with the most basic ideas of what is meant by thinking. It has to do with ‘working things out in the head’ and so one needs to devise some way of showing that this is what an animal is doing. One of the simplest kinds of working things out in the head is extrapolation – an animal is shown something which then disappears and it has to work out where the object will reappear, given that it is behaving in some predictable manner. So, for example, if a piece of food is being dragged along in a particular direction and then goes behind a screen, will the animal look for the food at the place where it disappeared (no extrapolation) or at the far end of the screen where it is due to reappear (true extrapolation)? An animal that could anticipate where something is due to reappear would be showing a rudimentary ability to work things out in its head.
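The extrapolation test described above amounts to predicting where, and roughly when, a steadily moving object that vanished behind a screen should reappear. A minimal sketch of that prediction follows; the function name and all numbers are purely illustrative, not taken from the chapter.

```python
def predicted_reappearance(disappear_pos, velocity, screen_width):
    """Where and when a constant-velocity object should emerge from behind a screen.

    disappear_pos : position where the object went out of sight
    velocity      : signed speed along the direction of motion (units per second)
    screen_width  : extent of the screen along that direction
    """
    reappear_pos = disappear_pos + screen_width * (1 if velocity > 0 else -1)
    time_hidden = screen_width / abs(velocity)
    return reappear_pos, time_hidden

# Illustrative numbers: food dragged at 0.1 m/s behind a 0.5 m screen.
pos, dt = predicted_reappearance(disappear_pos=0.0, velocity=0.1, screen_width=0.5)
print(f"True extrapolation: look at x = {pos} m, about {dt:.0f} s after it vanished.")
```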
Patrick Magee and Mark Tooley
- Published in print:
- 2011
- Published Online:
- November 2020
- ISBN:
- 9780199595150
- eISBN:
- 9780191918032
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199595150.003.0005
- Subject:
- Clinical Medicine and Allied Health, Anesthesiology
Graphs are used to represent pictorially, or to clarify, a relationship between two variables, say x and y, or t and a function f(t). If x and y are related by the linear equation y = mx + c, then the graphical relationship is a straight line as in Figure 1.1, where m is the slope, a constant value for a straight line, and c is the value of y when x = 0. Figure 1.2 shows the relationship when m and c take negative values. Note that if c = 0, the line passes through the origin, 0, of the x–y axes. Examples of other linear relationships that can be represented in this way include the following (the bracketed symbols are the variables equivalent to x and y above): v = u + at (variables t and v), where u is the starting velocity of an object that is subjected to acceleration a for time t, after which its velocity is v.
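As a worked instance of the straight-line relationship described above, v = u + at is just y = mx + c with slope m = a and intercept c = u. The numbers below are illustrative only, not from the chapter.

```python
def velocity(u, a, t):
    """v = u + a*t: a straight line in t with slope a and intercept u."""
    return u + a * t

# Illustrative values: start at 2 m/s and accelerate at 3 m/s^2.
u, a = 2.0, 3.0
for t in (0.0, 1.0, 2.0):
    print(t, velocity(u, a, t))   # (0, 2.0), (1, 5.0), (2, 8.0) -- equal steps, so a straight line
```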
Michael Haliassos
- Published in print:
- 2013
- Published Online:
- January 2015
- ISBN:
- 9780262018296
- eISBN:
- 9780262305495
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262018296.003.0005
- Subject:
- Economics and Finance, Financial Economics
The paper studies asset prices in an economy where some investors categorize risky assets into different styles and move funds among these styles depending on their relative performance. In this economy, assets in the same style comove too much, assets in different styles comove too little, and reclassifying an asset into a new style raises its correlation with that style. The authors also predict that style returns exhibit a rich pattern of own- and cross-autocorrelations and that while asset-level momentum and value strategies are profitable, their style-level counterparts are even more so. The model is used to shed light on several style-related empirical anomalies.
Lionel Raff, Ranga Komanduri, Martin Hagan, and Satish Bukkapatnam
- Published in print:
- 2012
- Published Online:
- November 2020
- ISBN:
- 9780199765652
- eISBN:
- 9780197563113
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199765652.003.0011
- Subject:
- Chemistry, Physical Chemistry
Genetic algorithms (GA), like NNs, can be used to fit highly nonlinear functional forms, such as empirical interatomic potentials from a large ensemble of data. Briefly, a genetic algorithm uses a stochastic global search method that mimics the process of natural biological evolution. GAs operate on a population of potential solutions applying the principle of survival of the fittest to generate progressively better approximations to a solution. A new set of approximations is generated in each iteration (also known as generation) of a GA through the process of selecting individuals from the solution space according to their fitness levels, and breeding them together using operators borrowed from natural genetics. This process leads to the evolution of populations of individuals that have a higher probability of being “fitter,” i.e., better approximations of the specified potential values, than the individuals they were created from, just as in natural adaptation. The most time-consuming part in implementing a GA is often the evaluation of the objective or the fitness function. The objective function O[P] is expressed as sum squared error computed over a given large ensemble of data. Consequently, the time required for evaluating the objective function becomes an important factor. Since a GA is well suited for implementing on parallel computers, the time required for evaluating the objective function can be reduced significantly by parallel processing. A better approach would be to map out the objective function using several possible solutions concurrently or beforehand to improve computational efficiency of the GA prior to its execution, and using this information to implement the GA. This will obviate the need for cumbersome direct evaluation of the objective function. Neural networks may be best suited to map the functional relationship between the objective function and the various parameters of the specific functional form. This study presents an approach that combines the universal function approximation capability of multilayer neural networks to accelerate a GA for fitting atomic system potentials. The approach involves evaluating the objective function, which for the present application is the mean squared error (MSE) between the computed and model-estimated potential, and training a multilayer neural network with decision variables as input and the objective function as output.
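The scheme described above pairs a genetic algorithm with a neural-network surrogate of the objective function. The sketch below shows only the GA skeleton (selection, crossover, mutation, elitism) over a generic fitness callable; the surrogate network and the chapter's potential-fitting objective are not reproduced, and every name, operator choice, and parameter value is an illustrative assumption.

```python
import random

def genetic_minimize(fitness, n_params, pop_size=40, generations=100,
                     mutation_rate=0.1, bounds=(-1.0, 1.0)):
    """Minimal real-coded GA: tournament selection, uniform crossover, Gaussian mutation.

    `fitness(individual)` returns the value to minimize; in the chapter's setting it
    would be the (possibly NN-approximated) mean squared error of the fitted potential.
    """
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness)
        new_pop = ranked[:2]                                  # elitism: carry over the two best
        while len(new_pop) < pop_size:
            p1 = min(random.sample(pop, 3), key=fitness)      # tournament selection
            p2 = min(random.sample(pop, 3), key=fitness)
            child = [random.choice(pair) for pair in zip(p1, p2)]   # uniform crossover
            child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
                     for g in child]                          # Gaussian mutation
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

# Toy usage: minimize a sum of squares in 3 parameters (a stand-in for the MSE objective).
best = genetic_minimize(lambda x: sum(g * g for g in x), n_params=3)
```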
Sigmund F. Zakrzewski (ed.)
- Published in print:
- 2002
- Published Online:
- November 2020
- ISBN:
- 9780195148114
- eISBN:
- 9780197565629
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195148114.003.0012
- Subject:
- Chemistry, Environmental Chemistry
The purpose of risk assessment is estimation of the severity of harmful effects to human health and the environment that may result from exposure to chemicals present in the environment. The Environmental Protection Agency (EPA) procedure of risk assessment, whether related to human health or to the environment, involves four steps: (1) hazard assessment, (2) dose–response assessment, (3) exposure assessment, and (4) risk characterization. The quantity of chemicals in use today is staggering. According to the data compiled by Hodgson and Guthrie in 1980 (1), there were then 1500 active ingredients of pesticides, 4000 active ingredients of therapeutic drugs, 2000 drug additives to improve stability, 2500 food additives with nutritional value, 3000 food additives to promote product life, and 50,000 additional chemicals in common use. Considering the growth of the chemical and pharmaceutical industries, these amounts must now be considerably larger. Past experience has shown that some of these chemicals, although not toxic unless ingested in large quantities, may be mutagenic and carcinogenic with chronic exposure to minute doses, or may interfere with the reproductive or immune systems of humans and animals. To protect human health it is necessary to determine that compounds to which people are exposed daily or periodically in their daily lives (such as cosmetics, foods, and pesticides) will not cause harm upon long-term exposure. The discussion in this chapter will focus primarily on carcinogenicity and mutagenicity, but endocrine disrupters will also be considered. The carcinogenicity of some chemicals was established through epidemiological studies. However, because of the long latency period of cancer, epidemiological studies require many years before any conclusions can be reached. In addition, they are very expensive. Another method that could be used is bioassay in animals. Such bioassays, although quite useful in predicting human cancer hazard, may take as long as 2 years or more and require at least 600 animals per assay. This method is also too costly in terms of time and money to be considered for large-scale screening. For these reasons an inexpensive, short-term assay system is needed for preliminary evaluation of potential mutagens and carcinogens.
Maurice R. Eftink and Haripada Maity
- Published in print:
- 2000
- Published Online:
- November 2020
- ISBN:
- 9780199638130
- eISBN:
- 9780191918179
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199638130.003.0016
- Subject:
- Chemistry, Organic Chemistry
The biophysical characterization of globular proteins will almost always include some type of study of the unfolding of the protein to obtain thermodynamic parameters. The basic idea is that a transition between a native and unfolded state, induced by temperature, pH, or denaturant concentration, can serve as a standard reaction for obtaining a thermodynamic measure of the stability of the native state. For example, the free energy change for the unfolding reaction can be used to compare the stability of a set of mutant forms of a protein (1-4). This type of analysis is based both on assumptions of the thermodynamic model for the unfolding process and on assumptions in the way the data are analysed; some of these assumptions and their limitations will be discussed below. There are a variety of methods that can be used to monitor an unfolding process. A common method is differential scanning calorimetry, DSC, which measures the variation in the specific heat of a protein-containing solution as a protein is thermally unfolded (5-7). DSC is a popular method for this purpose, but optical methods can also provide suitable information for tracking the unfolding of a protein. The spectroscopic signals for the native and unfolded states of a protein can give some insight regarding the structure of the states, and often can provide advantages of economy, ease of measurement, and amenability to a wide range of sample concentration. The optical spectroscopic methods that have been used most often for this purpose are absorption spectroscopy, circular dichroism, and fluorescence, which will be discussed in this chapter. A key to each of these methods and their use in protein unfolding studies is that the signal is a mole fraction weighted average of the signals of each thermodynamic state. That is, the observed signal, S, can be expressed as S = ∑ X_i s_i (Equation 1), where X_i is the mole fraction of species i and s_i is the intrinsic signal of species i. In order for a particular spectroscopic signal to be useful for tracking a N ↔ U transition of a protein, the signal must be sufficiently different for the N and U states.
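Equation 1 says the observed signal is the mole-fraction-weighted average of the species signals. For the simplest two-state case (N ↔ U) this can be written out directly; the link between the unfolding free energy and the unfolded mole fraction used below is standard two-state bookkeeping rather than a formula quoted from the chapter, and the numbers are placeholders.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def observed_signal(dG, T, s_native, s_unfolded):
    """Two-state N <-> U: S = X_N*s_N + X_U*s_U, with X_U set by the unfolding free energy.

    dG : free energy of unfolding (J/mol) at temperature T (K); K_eq = exp(-dG / RT).
    """
    K = math.exp(-dG / (R * T))      # equilibrium constant for N -> U
    x_unfolded = K / (1.0 + K)       # mole fraction of the unfolded state
    x_native = 1.0 - x_unfolded
    return x_native * s_native + x_unfolded * s_unfolded

# Placeholder values: dG = +10 kJ/mol at 298 K, native signal 1.0, unfolded signal 0.2.
print(observed_signal(10_000.0, 298.0, s_native=1.0, s_unfolded=0.2))
```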
James Wei
- Published in print:
- 2007
- Published Online:
- November 2020
- ISBN:
- 9780195159172
- eISBN:
- 9780197561997
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195159172.003.0010
- Subject:
- Chemistry, Physical Chemistry
After searching the literature and making predictions based on theory without getting sufficient satisfactory results, the next move would be to make estimates. We need the property y of substances pi from a population P that has not been investigated and reported in the literature. Fortunately, there exists a subset S of P that has been investigated, and we have the values for the property y. For instance, we may want the boiling points of all the hydrocarbons, but we have only the boiling points of the normal paraffins from 1 to 20 carbon atoms. Can we use this piece of information on normal paraffins to estimate the boiling points for the rest of the hydrocarbon population? How much effort would be involved and how accurate would the results be? The number of isomers of paraffin is very large; see table 5.1. We see that the iso-paraffins are not as well investigated as the normal paraffins. We have the boiling points of all three isomers of pentane, but not the 75 isomers of decane. It is inevitable that we have to resort to estimations. When we have obtained a good correlation for normal paraffins, we would naturally want to know if we can extend this to the branched paraffins, and onward to the population of all the saturated hydrocarbons (by including the cyclic paraffins), and onward to the population of all hydrocarbons (by including olefins, acetylenes, and aromatic compounds), and then onward to the population of all organic compounds (by including compounds with heteroatoms, such as O, N, Cl). A correlation that applies accurately to a larger domain is more useful than one that works only for a smaller domain. Another example is polychlorinated biphenyls (PCBs), which have 10 hydrogen atoms that can be substituted by chlorine atoms. There are three types of site: the four α sites near the bridge between the two phenyl fragments, the four β sites farther away from the bridge, and the two γ sites that are the farthest away from the bridge. The number of isomers is shown in table 5.2.
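The estimation strategy described above, fitting a correlation on the investigated subset S and then applying it to the rest of the population P, can be sketched as follows. The boiling-point values here are synthetic, generated from a toy relation purely so the script runs; real values would come from the literature, and the logarithmic fitting form is an assumption of this sketch, not the book's correlation.

```python
import numpy as np

# Investigated subset S: carbon numbers of the normal paraffins with "known" boiling points.
# Synthetic placeholders from a toy relation -- substitute literature values in practice.
n_known = np.arange(1, 21)
tb_known = 200.0 * np.log(n_known) + 100.0 + np.random.normal(0.0, 2.0, n_known.size)

# Fit a simple correlation Tb ~ a*ln(n) + b on the known subset.
coeffs = np.polyfit(np.log(n_known), tb_known, deg=1)

# Extrapolate to members of the wider population P that have not been measured.
n_new = np.array([25, 30, 40])
tb_estimated = np.polyval(coeffs, np.log(n_new))
print(dict(zip(n_new.tolist(), np.round(tb_estimated, 1).tolist())))
```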
Lionel Raff, Ranga Komanduri, Martin Hagan, and Satish Bukkapatnam
- Published in print:
- 2012
- Published Online:
- November 2020
- ISBN:
- 9780199765652
- eISBN:
- 9780197563113
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199765652.003.0007
- Subject:
- Chemistry, Physical Chemistry
In this section, we want to give a brief introduction to neural networks (NNs). It is written for readers who are not familiar with neural networks but are curious about how they can be applied to practical problems in chemical reaction dynamics. The field of neural networks covers a very broad area. It is not possible to discuss all types of neural networks. Instead, we will concentrate on the most common neural network architecture, namely, the multilayer perceptron (MLP). We will describe the basics of this architecture, discuss its capabilities, and show how it has been used on several different chemical reaction dynamics problems (for introductions to other types of networks, the reader is referred to References 105-107). For the purposes of this document, we will look at neural networks as function approximators. As shown in Figure 3-1, we have some unknown function that we wish to approximate. We want to adjust the parameters of the network so that it will produce the same response as the unknown function, if the same input is applied to both systems. For our applications, the unknown function may correspond to the relationship between the atomic structure variables and the resulting potential energy and forces. The multilayer perceptron neural network is built up of simple components. We will begin with a single-input neuron, which we will then extend to multiple inputs. We will next stack these neurons together to produce layers. Finally, we will cascade the layers together to form the network. A single-input neuron is shown in Figure 3-2. The scalar input p is multiplied by the scalar weight w to form wp, one of the terms that is sent to the summer. The other input, 1, is multiplied by a bias b and then passed to the summer. The summer output n, often referred to as the net input, goes into a transfer function f, which produces the scalar neuron output a.
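The single-input neuron described above computes a = f(w*p + b). A minimal sketch follows; the function names, the choice of tanh as the transfer function, and the numbers are illustrative assumptions, not taken from the chapter.

```python
import math

def neuron(inputs, weights, bias, transfer=math.tanh):
    """Neuron: net input n = sum(w_i * p_i) + b, output a = f(n).

    For the single-input case in the text, inputs and weights are length-1 lists
    and this reduces to a = f(w*p + b).
    """
    n = sum(w * p for w, p in zip(weights, inputs)) + bias
    return transfer(n)

# Single-input example (illustrative numbers): w = 2.0, b = -0.5, p = 0.3.
print(neuron([0.3], [2.0], bias=-0.5))   # tanh(0.1) ~= 0.0997

# A layer is just several neurons sharing the same inputs:
layer_outputs = [neuron([0.3], [w], bias=0.0) for w in (0.5, 1.0, -1.0)]
```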