Cyriel M. A. Pennartz
- Published in print:
- 2015
- Published Online:
- May 2016
- ISBN:
- 9780262029315
- eISBN:
- 9780262330121
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262029315.003.0004
- Subject:
- Neuroscience, Behavioral Neuroscience
What are neural network models, what kind of cognitive processes can they perform, and what do they teach us about representations and consciousness? First, this chapter explains the functioning of reduced neuron models. We construct neural networks using these building blocks and explore how they accomplish memory, categorization and other tasks. Computational advantages of parallel-distributed networks are considered, and we explore their emergent properties, such as pattern completion. Artificial neural networks appear instructive for understanding consciousness, as they illustrate how stable representations can be achieved in dynamic systems. More importantly, they show how low-level processes result in high-level phenomena such as memory retrieval. However, an essential remaining problem is that neural networks do not possess a mechanism specifying what kind of information (e.g. sensory modality) they process. Going back to the classic labeled-lines hypothesis, it is argued that this hypothesis does not offer a solution to the question of how the brain differentiates the various sensory inputs it receives into distinct modalities. The brain is observed to live in a "Cuneiform room," in which it only receives and emits spike messages: these are the only source materials from which it can construct modally differentiated experiences.
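To make the pattern-completion point concrete, here is a minimal Hopfield-style sketch in Python (an illustration of the general idea, not the specific network models discussed in the chapter): binary patterns are stored with a Hebbian rule, and the network recovers a stored memory from a partially corrupted cue.

```python
import numpy as np

# Minimal Hopfield-style network: store binary patterns with a Hebbian
# outer-product rule, then recover a stored pattern from a corrupted cue.
rng = np.random.default_rng(0)

n = 100                                      # number of units
patterns = rng.choice([-1, 1], size=(3, n))  # three random +/-1 memories

# Hebbian weight matrix (no self-connections).
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Asynchronous updates: each unit aligns with its summed input."""
    state = cue.copy()
    for _ in range(steps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt 25% of one stored pattern and let the network complete it.
cue = patterns[0].copy()
flip = rng.choice(n, size=n // 4, replace=False)
cue[flip] *= -1

restored = recall(cue)
print("overlap with stored pattern:", (restored == patterns[0]).mean())
```

The corrupted cue settles back onto the stored memory, which is the emergent, parallel-distributed behaviour the abstract refers to.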
Randall C. O'Reilly, Alex A. Petrov, Jonathan D. Cohen, Christian J. Lebiere, Seth A. Herd, and Trent Kriete
- Published in print:
- 2014
- Published Online:
- September 2014
- ISBN:
- 9780262027236
- eISBN:
- 9780262322461
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262027236.003.0008
- Subject:
- Philosophy, Philosophy of Mind
Is human cognition best characterized in terms of the systematic nature of classical symbol processing systems (as argued by Fodor & Pylyshyn, 1988), or in terms of the context-sensitive, embedded knowledge characteristic of classical connectionist or neural network systems? We attempt to bridge these contrasting perspectives in several ways. First, we argue that human cognition exhibits the full spectrum, from extreme context sensitivity to high levels of systematicity. Next, we leverage biologically-based computational modeling of different brain areas (and their interactions), at multiple levels of abstraction, to show how this full spectrum of behavior can be understood from a computational cognitive neuroscience perspective. In particular, recent computational modeling of the prefrontal cortex / basal ganglia circuit demonstrates a mechanism for variable binding that supports high levels of systematicity, in domains where traditional connectionist models fail. Thus, we find that this debate has helped advance our understanding of human cognition in many ways, and are optimistic that a careful consideration of the computational nature of neural processing can help bridge seemingly opposing viewpoints.
Timothy J. O’Donnell
- Published in print:
- 2015
- Published Online:
- May 2016
- ISBN:
- 9780262028844
- eISBN:
- 9780262326803
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262028844.003.0004
- Subject:
- Linguistics, Sociolinguistics / Anthropological Linguistics
This chapter reviews the literatures on the English past tense and past participle. The first part of the chapter reviews the empirical literature on the English past from the point of view of productivity and reuse—what is known about the pattern of competition between various generalizations exhibited by the English past system. The system exhibits a sharp dichotomy in levels of productivity and is characterized by a violable pattern of defaultness and blocking. The regular +ed rule acts as a default, applying when no other form is available, while the availability of irregular forms blocks its application. The second part of this chapter examines how productivity is handled by different theoretical accounts of the English past, organized around two high-level points. First, the general applicability of the regular rule means that any theory that can account for the past tense must be able to represent abstract generalizations. Second, the empirical literature also shows that blocking is a probabilistic phenomenon and that irregular generalizations can sometimes apply to novel stems. This, together with other phenomena, indicates that any adequate theory of the past tense must be able to provide a violable, quantitative account of blocking and defaultness.
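As a toy illustration of defaultness and probabilistic blocking (not one of the book's formal models; the mini-lexicon and blocking probability below are invented), a stored irregular form usually blocks the regular +ed default but occasionally fails to, yielding overregularizations, while novel stems fall through to the default:

```python
import random

rng = random.Random(0)
IRREGULARS = {"go": "went", "sing": "sang", "bring": "brought"}  # hypothetical mini-lexicon

def past_tense(stem, p_block=0.95):
    """Regular '+ed' as the default; a stored irregular blocks it probabilistically."""
    irregular = IRREGULARS.get(stem)
    if irregular is not None and rng.random() < p_block:
        return irregular          # more specific generalization takes precedence
    return stem + "ed"            # elsewhere case: the regular default applies

print([past_tense("go") for _ in range(10)])  # mostly "went", occasionally "goed"
print(past_tense("blick"))                    # novel stem -> "blicked"
```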
Gideon Borensztajn, Willem Zuidema, and William Bechtel
- Published in print:
- 2014
- Published Online:
- September 2014
- ISBN:
- 9780262027236
- eISBN:
- 9780262322461
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262027236.003.0007
- Subject:
- Philosophy, Philosophy of Mind
In this chapter we propose precise operational criteria of systematicity that reveal a connection between the notion of systematicity and causal roles for category membership. We argue that neural network approaches that build on the assumption that grammatical knowledge is encoded implicitly, such as Elman's SRN, fall short of demonstrating systematic behavior precisely because such implicit knowledge plays no causal role in the network dynamics. On the other hand, neural networks that employ explicit, encapsulated representations (i.e., representations that encapsulate contextual details) do enable categories to play causal roles. We draw upon insights from neurobiology to show how the hierarchical, columnar organization of the cortex in fact provides a basis for encapsulated representations that are invariant. We then sketch a novel approach to neural network modeling that illustrates how encapsulated representations can be operated on and dynamically bound into complex representations, producing rule-like, systematic behavior capable of dealing with hierarchical syntax.
Christof Koch
- Published in print:
- 1998
- Published Online:
- November 2020
- ISBN:
- 9780195104912
- eISBN:
- 9780197562338
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195139853.003.0020
- Subject:
- Computer Science, Mathematical Theory of Computation
In the previous thirteen chapters, we met and described, sometimes in excruciating detail, the constitutive elements making up the neuronal hardware: dendrites, synapses, voltage-dependent conductances, axons, spines and calcium. We saw how, unlike electronic circuits, in which only very few levels of organization exist, the nervous system has many tightly interlocking levels of organization that codepend on each other in crucial ways. It is now time to put some of these elements together into a functioning whole, a single nerve cell. With such a single nerve cell model in hand, we can ask functional questions, such as: at what time scale does it operate, what sort of operations can it carry out, and how good is it at encoding information? We begin this Herculean task by (1) completely neglecting the dendritic tree and (2) replacing the conductance-based description of the spiking process (e.g., the Hodgkin-Huxley equations) by one of two canonical descriptions. These two steps dramatically reduce the complexity of the problem of characterizing the electrical behavior of neurons. Instead of having to solve coupled, nonlinear partial differential equations, we are left with a single ordinary differential equation. Such simplifications allow us to formally treat networks of large numbers of interconnected neurons, as exemplified in the neural network literature, and to simulate their dynamics. Understanding any complex system always entails choosing a level of description that retains key properties of the system while removing those nonessential for the purpose at hand. The study of brains is no exception to this. Numerous simplified single-cell models have been proposed over the years, yet most of them can be reduced to just one of two forms. These can be distinguished by the form of their output: spike or pulse models generate discrete, all-or-none impulses.
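A minimal sketch of one such canonical reduction, the leaky integrate-and-fire model, is shown below: the single ordinary differential equation for the membrane voltage is integrated with a forward-Euler step, and an all-or-none spike is emitted and the voltage reset whenever threshold is crossed (parameter values are illustrative, not taken from the chapter).

```python
# Leaky integrate-and-fire: a single ODE for membrane voltage,
#   tau * dV/dt = -(V - V_rest) + R * I(t),
# with an all-or-none spike and reset whenever V crosses threshold.
# Parameter values below are illustrative, not taken from the chapter.
tau, R = 20e-3, 10e6                                  # time constant (s), resistance (ohm)
V_rest, V_thresh, V_reset = -70e-3, -54e-3, -70e-3    # volts
dt, T = 1e-4, 0.5                                     # time step and total duration (s)
I = 1.8e-9                                            # constant input current (A)

V = V_rest
spike_times = []
for step in range(int(T / dt)):
    V += dt / tau * (-(V - V_rest) + R * I)   # forward-Euler integration of the ODE
    if V >= V_thresh:                         # threshold crossing ...
        spike_times.append(step * dt)         # ... emit an all-or-none spike
        V = V_reset                           # ... and reset the voltage
print(f"{len(spike_times)} spikes in {T} s "
      f"(mean rate {len(spike_times)/T:.1f} Hz)")
```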
Arlindo Oliveira
- Published in print:
- 2017
- Published Online:
- September 2017
- ISBN:
- 9780262036030
- eISBN:
- 9780262338394
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262036030.003.0005
- Subject:
- Computer Science, Artificial Intelligence
This chapter addresses the question of whether a computer can become intelligent and how to test for that possibility. It introduces the idea of the Turing test, a test developed to determine, in an unbiased way, whether a program running in a computer is, or is not, intelligent. The development of artificial intelligence led, in time, to many applications of computers that are not possible using “non-intelligent” programs. One important area in artificial intelligence is machine learning, the technology that makes it possible for computers to learn, from existing data, in ways similar to the ways humans learn. A number of approaches to machine learning are addressed in this chapter, including neural networks, decision trees and Bayesian learning. The chapter concludes by arguing that the brain is, in reality, a very sophisticated statistical machine aimed at improving the chances of survival of its owner.
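As a minimal sketch of the kind of learning from data mentioned here, a single neural unit trained by the perceptron rule on an invented toy task (this is an illustration, not an example from the chapter):

```python
import random

random.seed(0)

# Invented toy task: label a 2-D point 1 if x + y > 1, else 0.
data = [(x, y, int(x + y > 1.0))
        for x, y in ((random.random(), random.random()) for _ in range(200))]

# A single neural unit (perceptron) learns its weights from the examples.
w0, w1, b = 0.0, 0.0, 0.0
lr = 0.1
for _ in range(20):                       # a few passes over the data
    for x, y, target in data:
        pred = int(w0 * x + w1 * y + b > 0)
        err = target - pred               # perceptron update rule
        w0 += lr * err * x
        w1 += lr * err * y
        b += lr * err

accuracy = sum(int(w0*x + w1*y + b > 0) == t for x, y, t in data) / len(data)
print(f"learned weights ({w0:.2f}, {w1:.2f}), bias {b:.2f}, accuracy {accuracy:.2f}")
```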
Timothy J. O’Donnell
- Published in print:
- 2015
- Published Online:
- May 2016
- ISBN:
- 9780262028844
- eISBN:
- 9780262326803
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262028844.003.0005
- Subject:
- Linguistics, Sociolinguistics / Anthropological Linguistics
This chapter presents simulation results for the English past. The first part of the chapter gives a general overview of the modeling assumptions used for the simulations of the English past-tense system, including input representations and the training corpus. Later parts of the chapter discuss simulation results showing that of the five formal models considered in the book, only fragment grammars—the inference-based model—provides an adequate explanation of the major empirical phenomena in the English past. The inference-based model treats the regular rule as a default, applying in cases where no other inflectional process is available, while exhibiting blocking of the regular rule by the irregular forms. Furthermore, the inference-based model explains why blocking is directional, following the elsewhere condition with more specific generalizations typically taking precedence over others. It also provides a partial explanation for cases where blocking fails, for example, cases of overregularization during language acquisition.
Lionel Raff, Ranga Komanduri, Martin Hagan, and Satish Bukkapatnam
- Published in print:
- 2012
- Published Online:
- November 2020
- ISBN:
- 9780199765652
- eISBN:
- 9780197563113
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199765652.003.0013
- Subject:
- Chemistry, Physical Chemistry
The use of neural networks (NNs) to predict an outcome or the output results as a function of a set of input parameters has been gaining wider acceptance with the advance in computer technology as well as with an increased awareness of the potential of NNs. A neural network is first trained to learn the underlying functional relationship between the output and the input parameters by providing it with a large number of data points, where each data point corresponds to a set of output and input parameters. Sumpter and Noid demonstrated the use of NNs to map the vibrational motion derived from the vibrational spectra onto a PES with relatively high accuracy. In another application, Sumpter et al. trained an NN to learn the relation between the phase-space points along a trajectory and the mode energies for stretching, torsion, and bending vibrations of H2O2. Likewise, Nami et al. demonstrated the use of NNs to determine the TiO2 deposition rates in a chemical vapor deposition (CVD) process from the knowledge of a range of deposition conditions. In view of the success achieved in obtaining interpolated values of the PESs for multi-atomic systems using an NN trained on the ab initio energy values for a large number of configurations, it is reasonable to ask whether we can successfully compute the results of an MD trajectory for a chemical reaction using an NN trained on the data obtained from previous MD simulations. If this can be done successfully, it becomes possible to execute a small number of trajectories, M, and then utilize the results of these trajectories as a database to train an NN to predict the final results of a very large number of trajectories N, where N >> M. These predictions can be used to increase the statistical accuracy of the MD calculations and to further explore the dependence of the trajectory results upon a wide variety of variables without actually having to perform any further numerical integrations. In effect, the NN replaces the computationally laborious numerical integrations.
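The basic workflow can be sketched on a toy problem (a one-dimensional, Morse-like potential invented for illustration; the chapter's systems and network architectures are more elaborate): sampled (configuration, energy) pairs stand in for ab initio data, a small network is fitted to them, and the fitted network is then used to interpolate energies at new configurations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented 1-D Morse-like potential standing in for an expensive ab initio calculation.
def potential(r):
    return (1.0 - np.exp(-1.5 * (r - 1.0))) ** 2

r_train = rng.uniform(0.6, 3.0, size=(200, 1))   # sampled configurations
E_train = potential(r_train)                     # "ab initio" energies

# One hidden layer with tanh units, trained by plain full-batch gradient descent.
W1, b1 = rng.normal(0, 1, (1, 20)), np.zeros(20)
W2, b2 = rng.normal(0, 1, (20, 1)), np.zeros(1)
lr = 0.05
for epoch in range(5000):
    h = np.tanh(r_train @ W1 + b1)               # forward pass
    pred = h @ W2 + b2
    err = pred - E_train                         # mean-squared-error gradient
    dW2 = h.T @ err / len(r_train)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)             # backpropagate through tanh
    dW1 = r_train.T @ dh / len(r_train)
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Interpolate at configurations not in the training set.
r_test = np.array([[0.8], [1.4], [2.2]])
E_pred = np.tanh(r_test @ W1 + b1) @ W2 + b2
for r, e in zip(r_test.ravel(), E_pred.ravel()):
    print(f"r = {r:.1f}:  NN = {e:.3f},  exact = {potential(r):.3f}")
```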
Alcino J. Silva, Anthony Landreth, and John Bickle
- Published in print:
- 2013
- Published Online:
- January 2014
- ISBN:
- 9780199731756
- eISBN:
- 9780199367658
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199731756.003.0008
- Subject:
- Neuroscience, Molecular and Cellular Systems, Techniques
This chapter proposes an approach to planning informative experiments based on the concepts of previous chapters. The process of rational experiment planning can be viewed as the progressive filling in of missing evidence and the resolution of conflicting evidence. To make our recommendations concrete, we will work with a live research project in molecular and cellular cognition, concerning memory allocation: the problem of understanding the mechanisms that determine which cells in a circuit are recruited to store a given memory.
Kenneth Payne
- Published in print:
- 2021
- Published Online:
- January 2022
- ISBN:
- 9780197611692
- eISBN:
- 9780197632956
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197611692.003.0003
- Subject:
- Political Science, Security Studies
Machine Learning, and especially Deep Learning, has led to startling advances in AI over the last decade. Much of this research has been led by the private sector rather than the military. Artificial neural networks are highly effective at optimizing decisions in narrow domains, like an Atari video game. That can deliver superhuman performance on some military tasks, like flying robot helicopters. But it may not be sufficient for a more flexible, human-like intelligence, of the sort required to make strategic judgments about escalation.
Anthony Trewavas
- Published in print:
- 2014
- Published Online:
- November 2014
- ISBN:
- 9780199539543
- eISBN:
- 9780191788291
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199539543.003.0022
- Subject:
- Biology, Plant Sciences and Forestry
Cellular proteins have a broad range of functions that provide a springboard for understanding how cells express intelligent behaviour. The major proteins and processes in signal transduction have switch-like activity, from receptors to cytosolic Ca2+ and then further amplification via protein kinase and phosphorylation. The potential for creativity (novelty) in transduction sequences has been demonstrated from a consideration of different energy levels in proteins. Cyclic enzymes of different kinds can also be shown to have switch-like characteristics, as do allosteric proteins in combination with an activator. Boolean language underpins computer design, and some Boolean operations mimic known transduction steps. Nerve cells often have a switch-like character to them, and simple neural networks composed of a few neurones have been modelled and shown to exhibit important properties such as pattern recognition, computation and memory. Some transduction sequences bear similarity to simple neural networks, and chemical diodes (all-or-none chemical reactions linked together) have exhibited behaviour similar to that of a neural net. Logic circuits that describe certain developmental behaviours have been reported. The concept of mutual information has been used to determine the ‘bits’ of information that underpin transduction events in single cells. In most cases a single bit, an on/off function, was detected. More cells provide for greater numbers of outcomes. Finally, Nobel Prize winner Manfred Eigen’s assessment of learning, memory and intelligence in single cells is described.
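As a schematic illustration of the point that switch-like transduction elements can mimic Boolean operations (not a biochemical model from the chapter; the input names are placeholders), a threshold element behaves as an AND gate when it requires both inputs and as an OR gate when either suffices:

```python
# Schematic illustration only: a switch-like element (e.g. an enzyme that
# needs two signals above threshold) behaves like a Boolean AND gate;
# relaxing the threshold turns the same element into an OR gate.
def switch(ca_signal, second_messenger, threshold):
    """Fires (returns 1) when the summed inputs exceed the threshold."""
    return int(ca_signal + second_messenger > threshold)

for ca in (0, 1):
    for msg in (0, 1):
        print(f"Ca={ca} msg={msg} ->  AND: {switch(ca, msg, 1.5)}   OR: {switch(ca, msg, 0.5)}")
```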
Lionel Raff, Ranga Komanduri, Martin Hagan, and Satish Bukkapatnam
- Published in print:
- 2012
- Published Online:
- November 2020
- ISBN:
- 9780199765652
- eISBN:
- 9780197563113
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199765652.003.0008
- Subject:
- Chemistry, Physical Chemistry
In order to achieve the maximum accuracy in characterizing the PES and the associated force fields for an MD investigation, careful preparation of the database is an essential step in the process. The points that must be addressed include the following: 1. The total volume of configuration space is extremely large, and its size increases as the internal energy of the system rises. For example, consider a four-atom system. For this system, at least six internal coordinates must be specified to determine the spatial configuration of the molecular system. At a given internal energy, each of these six coordinates can span a continuous range of values from some minimum to some maximum. If each variable range is divided into 100 equal increments and the potential energy of the system computed by some ab initio method for all possible configurations of the system, a total of 100^6, or 10^12, electronic structure calculations would need to be executed. This is clearly beyond the computational capabilities of any computational system currently in existence. Grid sampling methods can and have been used effectively for three-atom systems. However, for more complex systems, it is essential that procedures be developed that permit the regions of configuration space that are important in the reaction dynamics to be identified. 2. Sampling methods usually should be optimized to produce a reasonably uniform density of data points in those regions of configuration space that are important in the dynamics. If this is not done and there are regions of very high point density and others with low point density, no fitting technique will function well. The parameters of the method will adjust themselves to fit regions of high density preferentially over those with low density even when the low-density regions may be more important in the dynamics. An exception to the need to have an approximately uniform density of points in the database occurs in regions where the potential gradient is large. In such regions, the density of points in the database will need to be larger than in regions in which the gradient is small.
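The counting argument in point 1 can be made explicit (assuming the usual 3N − 6 internal coordinates for a nonlinear N-atom system):

```python
# A nonlinear N-atom system has 3N - 6 internal coordinates, so a grid with
# m increments per coordinate needs m**(3N - 6) energy evaluations.
def grid_points(n_atoms, increments=100):
    return increments ** (3 * n_atoms - 6)

print(f"3-atom system: {grid_points(3):.0e} points")   # 1e+06 -- feasible
print(f"4-atom system: {grid_points(4):.0e} points")   # 1e+12 -- prohibitive
```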
Maurice J. McHugh and Douglas G. Goodin
- Published in print:
- 2003
- Published Online:
- November 2020
- ISBN:
- 9780195150599
- eISBN:
- 9780197561881
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195150599.003.0023
- Subject:
- Earth Sciences and Geography, Meteorology and Climatology
Interdecadal-scale climate variability must be considered when interpreting climatic trends at local, regional, or global scales. Significant amounts of variance are found at interdecadal timescales in many climate parameters of both “direct” data (e.g., precipitation and sea surface temperatures at specific locations) and “indirect” data through which the climate system operates (e.g., circulation indices such as the Pacific North American index [PNA] or the North Atlantic Oscillation index [NAO]). The aim of this study is to evaluate LTER climate data for evidence of interdecadal-scale variability, which may in turn be associated with interdecadal-scale fluctuations evident in ecological or biophysical data measured throughout the LTER site network. In their conceptualization of climatic variability, Marcus and Brazel (1984) describe four types of interannual climate variations: (1) Periodic variations around a stationary mean are well known to occur at short timescales, such as diurnal temperature changes or the annual cycle, but are difficult to resolve at decadal or longer timescales. (2) Discontinuities generated by sudden changes in the overall state of the climate system can reveal nonstationarity in the mean about which data vary in a periodic or quasi-periodic manner. These sudden alterations can result in periods perhaps characterized by prolonged drought or colder than normal temperatures. (3) The climate system may undergo trends such as periods of slowly increasing or decreasing precipitation or of warming or cooling until some new mean “steady” state is reached. (4) Climate data may exhibit increasing or decreasing variability about a specific mean value or steady state. Interdecadal contributions to climate variability can be described in terms of types 2 and 3 of Marcus and Brazel’s conceptual classification—discontinuities in the mean and trends in the data. Records of the Northern Hemisphere’s average land surface temperature show discontinuities in the mean state of the hemispheric temperature record in conjunction with obvious trends. Conceptually, it is hard to distinguish between these aspects of climate variability. Trends are an essential component of an alteration in the mean state of the temperature series, as they serve as a temporal linkage between the different mean states.