Gary A. Glatzmaier
- Published in print:
- 2013
- Published Online:
- October 2017
- ISBN:
- 9780691141725
- eISBN:
- 9781400848904
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691141725.003.0005
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter explains how to write a postprocessing code, and more specifically how to study the nonlinear simulations using computer graphics and analysis. It first considers how to compute and store results in a file during the computer simulation, assuming the Fourier transforms to x-space are done within the main computational code during the simulation. It then describes the postprocessing code for reading these files and displaying the various fields, along with the use of graphics software packages that provide additional, more sophisticated visualizations of the scalar and vector data. It also discusses the computer analysis of several additional properties of the solution, focusing on measurements of nonlinear convection such as Rayleigh number, Nusselt number, Reynolds number, and kinetic energy spectrum.
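One of the analysis tasks the chapter describes, computing the kinetic energy spectrum from stored simulation fields, can be sketched in a few lines. The snapshot layout (a 2D grid of velocity components) and the normalization below are illustrative assumptions, not the book's code:

```python
import numpy as np

def kinetic_energy_spectrum(vx, vz):
    """Kinetic energy per horizontal Fourier mode of a 2D snapshot.

    vx, vz: arrays of shape (nz, nx) holding velocity components in
    x-space.  Returns the energy summed over depth for each horizontal
    wavenumber index.
    """
    nz, nx = vx.shape
    # Transform back to spectral space in the horizontal direction.
    vx_hat = np.fft.rfft(vx, axis=1) / nx
    vz_hat = np.fft.rfft(vz, axis=1) / nx
    # Energy in each mode, summed over depth (Parseval's theorem).
    return 0.5 * np.sum(np.abs(vx_hat)**2 + np.abs(vz_hat)**2, axis=0)

# Usage on synthetic data: a single horizontal mode with wavenumber 3.
nz, nx = 16, 64
x = np.arange(nx) * 2 * np.pi / nx
vx = np.tile(np.cos(3 * x), (nz, 1))
vz = np.zeros((nz, nx))
spec = kinetic_energy_spectrum(vx, vz)
print(int(np.argmax(spec)))  # all the energy sits in wavenumber index 3
```

A postprocessing code would call such a routine on each stored snapshot and plot the spectrum against wavenumber.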
Charles D. Bailyn
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691148823
- eISBN:
- 9781400850563
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691148823.003.0003
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter looks at the presence of outflows or jets, a somewhat unexpected feature of accretion flows. There is strong observational evidence that some fraction of the infalling material reverses course near the accreting object and is shot out perpendicularly to the accretion disk. In some cases, narrow collimated beams of emission are observed emerging from the central-most regions of galaxies and continuing across the whole of the galaxy, depositing their energy hundreds of kiloparsecs away from their origin. These phenomena are sometimes described as jets “emerging” from a black hole. This parlance is misleading—the jets do not, and indeed could not, emerge from inside the event horizon. Rather, some mechanism redirects the energy generated by the accretion process into a fraction of the infalling material and provides enough bulk kinetic energy for the material to escape the accretion process before the material enters the event horizon.
George E. Smith and Raghav Seth
- Published in print:
- 2020
- Published Online:
- October 2020
- ISBN:
- 9780190098025
- eISBN:
- 9780190098056
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190098025.003.0004
- Subject:
- Philosophy, Philosophy of Science
Between 1908 and 1911 Perrin published values for Avogadro’s number—the number of molecules per mole of any substance—on the basis of theory-mediated measurements of the mean kinetic energies of granules in Brownian motion. The umbilical cord connecting these energies to Avogadro’s number was the assumption that they are the same as the mean kinetic energies of the molecules in the surrounding liquid. This, as van Fraassen has argued, seems to presuppose that molecules exist, thereby undercutting Perrin’s claim to be proving their existence. This chapter reviews Perrin’s four theory-mediated measurements, showing, on the one hand, that none of them in fact depended on molecular theory yet, on the other, that, by virtue of being exemplars of theory-mediated measurement at its best, they managed to establish several extraordinary landmark conclusions about Brownian motion in its own right.
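The inference Perrin relied on can be made concrete. Under the equipartition assumption the mean translational kinetic energy of a granule is (3/2)(R/N)T, so a measured mean energy yields Avogadro's number directly. The numbers below are illustrative, not Perrin's data:

```python
R = 8.314            # gas constant, J/(mol K)
T = 293.0            # temperature of the suspension, K
mean_ke = 6.1e-21    # hypothetical measured mean granule kinetic energy, J
# Equipartition: mean_ke = (3/2)(R/N)T, hence N = 3RT/(2 * mean_ke).
N_avogadro = 3 * R * T / (2 * mean_ke)
print(f"{N_avogadro:.2e}")  # close to the modern 6.02e23 per mole
```

The van Fraassen worry discussed in the chapter attaches to the equipartition premise in the comment, not to the arithmetic.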
Martin Schönfeld
- Published in print:
- 2000
- Published Online:
- May 2006
- ISBN:
- 9780195132182
- eISBN:
- 9780199786336
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0195132181.003.0003
- Subject:
- Philosophy, History of Philosophy
This chapter explores the text and contentions of Kant’s first book, Thoughts on the True Estimation of Living Forces (1747). Section 1 describes how Kant’s debut turned into a debacle. Section 2 discusses Kant’s dynamic ontology, such as his ideas on substantial interaction and energetic space. Section 3 analyzes Kant’s experimental and kinematic appraisals, which form the bulk of his first book. Section 4 describes Kant’s proposed synthesis of Cartesian momentum and Leibnizian energy as “true estimation” of force.
George E. Smith and Raghav Seth
- Published in print:
- 2020
- Published Online:
- October 2020
- ISBN:
- 9780190098025
- eISBN:
- 9780190098056
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190098025.003.0003
- Subject:
- Philosophy, Philosophy of Science
The mystery of Brownian motion had been announced with its discovery by Robert Brown in 1828: the persistence of the motion of solid particles in liquids for indefinite periods of time instead of sinking as sediment to the bottom. Once molecular-kinetic theory emerged more fully a few years later, it was the obvious candidate for explaining the phenomenon. Nevertheless, those developing kinetic theory in the second half of the century, Maxwell and Boltzmann, appear to have ignored it. The chapter summarizes research on Brownian motion during the nineteenth century, indicating why leading physicists ignored it, and what developments in the first five years of the twentieth century led to its suddenly becoming so important to kinetic theory. This background supplements that of Chapter 2, completing the historical context for the developments covered in subsequent chapters.
Oliver Johns
- Published in print:
- 2005
- Published Online:
- January 2010
- ISBN:
- 9780198567264
- eISBN:
- 9780191717987
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198567264.003.0009
- Subject:
- Physics, Atomic, Laser, and Optical Physics
The successful description of the motion of a rigid body is one of the triumphs of Newtonian mechanics. Having learned in the previous chapter how to specify the position and orientation of a rigid body, this chapter deals with its natural motion under impressed external forces and torques. The dynamical theorems of collective motion are extended using rotation operators. Some basic facts about rigid-body motion are discussed, along with the inertia operator and the spin, inertia dyadic, kinetic energy of a rigid body, meaning of the inertia operator, principal axes, time evolution of the spin, torque-free motion of a symmetric body, Euler angles of the torque-free motion, body with one point fixed, time evolution with one point fixed, work-energy theorems, rotation with a fixed axis, symmetric top with one point fixed, initially clamped symmetric top, approximate treatment of the symmetric top, inertial forces, calculations of the Coriolis force, and the magnetic-Coriolis analogy.
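The torque-free motion of a symmetric body mentioned above can be checked numerically: with I1 = I2, Euler's equations leave the spin component along the symmetry axis and the magnitude of the transverse angular velocity constant, while (ω1, ω2) precess in the body frame. A minimal sketch with arbitrary parameter values:

```python
import numpy as np

def euler_step(w, I, dt):
    """One RK4 step of Euler's torque-free equations for body-frame w."""
    I1, I2, I3 = I
    def f(w):
        return np.array([
            (I2 - I3) * w[1] * w[2] / I1,
            (I3 - I1) * w[2] * w[0] / I2,
            (I1 - I2) * w[0] * w[1] / I3,
        ])
    k1 = f(w); k2 = f(w + 0.5*dt*k1); k3 = f(w + 0.5*dt*k2); k4 = f(w + dt*k3)
    return w + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

I = (1.0, 1.0, 2.0)               # symmetric body: I1 = I2
w = np.array([0.3, 0.0, 1.0])     # initial angular velocity, body frame
dt = 0.01
for _ in range(1000):
    w = euler_step(w, I, dt)
# w3 and |w_perp| are conserved (~1.0 and ~0.3); (w1, w2) precess at
# the body rate (I3 - I1) * w3 / I1 = 1.0 rad/s for these values.
print(w[2], np.hypot(w[0], w[1]))
```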
Jochen Autschbach
- Published in print:
- 2020
- Published Online:
- February 2021
- ISBN:
- 9780190920807
- eISBN:
- 9780197508350
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190920807.003.0009
- Subject:
- Chemistry, Quantum and Theoretical Chemistry
It is shown how an aufbau principle for atoms arises from the Hartree-Fock (HF) treatment with increasing numbers of electrons. The Slater screening rules are introduced. The HF equations for general molecules are not separable in the spatial variables. This requires another approximation, such as the linear combination of atomic orbitals (LCAO) molecular orbital method. The orbitals of molecules are represented in a basis set of known functions, for example atomic orbital (AO)-like functions or plane waves. The HF equation then becomes a generalized matrix pseudo-eigenvalue problem. Solutions are obtained for the hydrogen molecule ion and H2 with a minimal AO basis. The Slater rule for 1s shells is rationalized via the optimal exponent in a minimal 1s basis. The nature of the chemical bond, and specifically the role of the kinetic energy in covalent bonding, are discussed in detail with the example of the hydrogen molecule ion.
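The generalized matrix pseudo-eigenvalue problem HC = SCε can be illustrated with a two-AO model of a hydrogen-molecule-ion-like system. The matrix elements below are placeholder numbers rather than computed integrals, and Löwdin orthogonalization is one standard way (an assumption here, not necessarily the book's) to reduce the problem to an ordinary eigenproblem:

```python
import numpy as np

# Two identical 1s AOs on equivalent centers:
#   H = [[a, b], [b, a]],   S = [[1, s], [s, 1]]
# with illustrative values for the diagonal, resonance, and overlap terms.
a, b, s = -0.5, -0.6, 0.4
H = np.array([[a, b], [b, a]])
S = np.array([[1.0, s], [s, 1.0]])

# Lowdin orthogonalization: X = S^(-1/2), then diagonalize X H X.
vals, vecs = np.linalg.eigh(S)
X = vecs @ np.diag(vals**-0.5) @ vecs.T
e, C_prime = np.linalg.eigh(X @ H @ X)
C = X @ C_prime          # MO coefficients in the original AO basis

# Analytic result for this 2x2 case: e = (a +/- b) / (1 +/- s),
# i.e. bonding and antibonding orbital energies.
print(np.round(e, 6))
```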
Robert T. Hanlon
- Published in print:
- 2020
- Published Online:
- April 2020
- ISBN:
- 9780198851547
- eISBN:
- 9780191886133
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198851547.003.0005
- Subject:
- Physics, Theoretical, Computational, and Statistical Physics
Energy lies at the core of the first law of thermodynamics as a concept to quantify change. Events happen but total energy remains the same. It’s the change in energy that matters. The realization that the energy associated with heat was equal to the energy associated with work (work–heat equivalence) is what led to the first law of thermodynamics.
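Work–heat equivalence lends itself to a one-line calculation in the spirit of Joule's paddle-wheel experiment: the mechanical work mgh done by a falling weight stirring water appears entirely as a temperature rise. The masses and heights below are illustrative, not Joule's apparatus values:

```python
g = 9.81            # m/s^2
m_weight = 10.0     # kg, falling mass
h = 2.0             # m, drop height per run
runs = 20
c_water = 4186.0    # J/(kg K), specific heat of water
m_water = 1.0       # kg of stirred water

work = runs * m_weight * g * h      # total mechanical work, J
dT = work / (m_water * c_water)     # first law: all the work becomes heat
print(f"{work:.0f} J -> dT = {dT:.3f} K")
```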
Jennifer Coopersmith
- Published in print:
- 2015
- Published Online:
- August 2015
- ISBN:
- 9780198716747
- eISBN:
- 9780191800955
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198716747.003.0018
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology, History of Physics
Difficult questions addressed are: why are there two main forms of energy, potential and kinetic, and is one of them more fundamental? Kinetic energy – where does it go when we change reference frames, and why is its form ½mv²? Rest mass energy – is it an absurdly large zero-point energy? What is heat? Does heat only exist in transit? What is temperature, and how do the microscopic and macroscopic definitions tie up? What is entropy? Are the laws of thermodynamics empirical, and are they absolute? How does the equipartition of energy come about? How does an overall direction of time emerge from microscopic motions that are reversible in time? Why is temperature more fundamental than pressure? Why does the Second Law of Thermodynamics not preclude the increase in ‘structure’ (e.g. stars, people, etc.)? The link between the Second Law of Thermodynamics and global warming.
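The question of where kinetic energy "goes" under a change of reference frame can be illustrated numerically: in an elastic collision the numerical value of the kinetic energy is frame-dependent, but its conservation holds in every inertial frame. A minimal sketch with arbitrary masses and velocities:

```python
# Kinetic energy is frame-dependent; its conservation is not.
def ke(m1, v1, m2, v2):
    return 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2

m1 = m2 = 1.0
u1, u2 = 2.0, 0.0      # lab-frame velocities before the collision
v1, v2 = u2, u1        # equal masses exchange velocities elastically

for frame_v in (0.0, 1.0):   # lab frame, then a frame moving at 1 m/s
    before = ke(m1, u1 - frame_v, m2, u2 - frame_v)
    after = ke(m1, v1 - frame_v, m2, v2 - frame_v)
    print(frame_v, before, after)   # before == after in each frame
```

The lab frame sees 2 J throughout; the moving frame sees 1 J throughout. The "missing" joule is bookkept by the kinetic energy of the frame's relative motion, not lost.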
John L. Lumley and Gal Berkooz
- Published in print:
- 1996
- Published Online:
- November 2020
- ISBN:
- 9780195106435
- eISBN:
- 9780197561003
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195106435.003.0006
- Subject:
- Computer Science, Mathematical Theory of Computation
Turbulence generally can be characterized by a number of length scales: at least one for the energy containing range, and one from the dissipative range; there may be others, but they can be expressed in terms of these. Whether a turbulence is simple or not depends on how many scales are necessary to describe the energy containing range. Certainly, if a turbulence involves more than one production mechanism (such as shear and buoyancy, for example, or shear and density differences in a centripetal field) there will be more than one length scale. Even if there is only one physical mechanism, say shear, a turbulence which was produced under one set of circumstances may be subjected to another set of circumstances. For example, a turbulence may be produced in a boundary layer, which is then subjected to a strain rate. For a while, such turbulence will have two length scales, one corresponding to the initial boundary layer turbulence, and the other associated with the strain rate to which the flow is subjected. Or, a turbulence may have different length scales in different directions. Ordinary turbulence modeling is restricted to situations that can be approximated as having a single scale of length and velocity. Turbulence with multiple scales is much more complicated to predict. Some progress can be made by applying rapid distortion theory, or one or another kind of stability theory, to the initial turbulence, and predicting the kinds of structures that are induced by the applied distortion. We will talk more about this later. For now, we will restrict ourselves to a turbulence that has a single scale of length in the energy containing range.
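The two scales in question can be made quantitative. For single-scale turbulence a standard estimate (an assumption here, not spelled out in the passage) takes the energy-containing scale as L ~ k^(3/2)/ε and the dissipative (Kolmogorov) scale as η = (ν³/ε)^(1/4), so their ratio grows as the 3/4 power of the turbulence Reynolds number. A sketch with illustrative values:

```python
# Scale separation between the energy-containing and dissipative ranges.
nu = 1.5e-5      # kinematic viscosity of air, m^2/s
k = 1.0          # turbulent kinetic energy, m^2/s^2 (illustrative)
eps = 1.0        # dissipation rate, m^2/s^3 (illustrative)

L = k**1.5 / eps                 # energy-containing scale, m
eta = (nu**3 / eps)**0.25        # Kolmogorov scale, m
Re_L = k**0.5 * L / nu           # turbulence Reynolds number
# The ratio L/eta equals Re_L^(3/4) identically for these definitions.
print(f"L/eta = {L/eta:.0f}, Re_L^(3/4) = {Re_L**0.75:.0f}")
```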
S. Wei and A. W. Castleman, Jr.
- Published in print:
- 1996
- Published Online:
- November 2020
- ISBN:
- 9780195090048
- eISBN:
- 9780197560594
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195090048.003.0009
- Subject:
- Chemistry, Physical Chemistry
The last decade has seen tremendous growth in the study of gas phase clusters. Some areas of cluster research which have received considerable attention in this regard include solvation (Lee et al. 1980; Amirav et al. 1982) and reactivity (Dantus et al. 1991; Khundkar and Zewail 1990; Rosker et al. 1988; Scherer et al. 1987). In particular, studies of the dynamics of formation and dissociation, and the changing properties of clusters at successively higher degrees of aggregation, enable the basic mechanisms of nucleation and the continuous transformation of matter from the gas phase to the condensed phase to be probed at the molecular level (Castleman and Keesee 1986a, 1988). In this context, the progressive clustering of a molecule involves energy transfer and redistribution within the molecular system, with attendant processes of unimolecular dissociation taking place between growth steps (Kay and Castleman 1983). Related processes of energy transfer, proton transfer, and dissociation are also operative during the reorientation of molecules about ions produced during the primary ionization event required in detecting clusters via mass spectrometry (Castleman and Keesee 1986b), providing further motivation for studies of the reaction dynamics of clusters (Begemann et al. 1986; Boesl et al. 1992; Castleman and Keesee 1987; Echt et al. 1985; Levine and Bernstein 1987; Lifshitz et al. 1990; Lifshitz and Louage 1989, 1990; Märk 1987; Märk and Castleman 1984, 1986; Morgan and Castleman 1989; Stace and Moore 1983; Wei et al. 1990a,b). The real-time probing of cluster reaction dynamics has become a flourishing research field through the femtosecond pump-probe techniques pioneered by Zewail and coworkers (Dantus et al. 1991; Khundkar and Zewail 1990; Rosker et al. 1988; Scherer et al. 1987). Some real-time investigations have been performed on metal, van der Waals, and hydrogen-bonded clusters by employing these pump-probe spectroscopic techniques.
For example, the photoionization and fragmentation of sodium clusters have been investigated by ion mass spectrometry and zero kinetic energy photoelectron spectroscopy in both picosecond (Schreiber et al. 1992) and femtosecond (Baumert et al. 1992, 1993; Bühler et al. 1992) time domains. Studies have also been made to elucidate the effect of solvation on intracluster reactions.
Paul F. Meier
- Published in print:
- 2020
- Published Online:
- February 2021
- ISBN:
- 9780190098391
- eISBN:
- 9780190098421
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190098391.003.0007
- Subject:
- Environmental Science, Environmental Sustainability
A wind farm is a collection of wind turbines, sufficiently spaced to avoid wind interference between turbines. Onshore and offshore are the two basic types of wind farms. The cost of building an offshore farm is greater because of the need for turbines that withstand high wind and corrosive conditions of the sea, plus the expense of installing underwater transmission cables to shore. For an onshore wind farm, the land area for the farm is large, but the direct impact area is relatively small. The direct impact area includes the turbine pads, roads, substations, and transmission equipment, and only makes up about 2% of the total wind farm area. Since the direct impact area is small compared to the total wind farm area, agriculture and ranching can coexist with the wind farm. Wind is a very fast-growing renewable energy technology. In the ten years since 2009, the worldwide capacity for wind power increased 276% while US capacity increased 175%.
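The decade growth figures quoted above translate into average annual rates via compound growth; a quick check (the percentages are the chapter's, the arithmetic is an illustration):

```python
# A 276% increase over ten years means capacity multiplied by 3.76;
# the implied compound annual growth rate is factor^(1/10) - 1.
def annual_rate(total_increase_pct, years=10):
    factor = 1 + total_increase_pct / 100
    return (factor ** (1 / years) - 1) * 100

world = annual_rate(276)   # worldwide: roughly 14% per year
us = annual_rate(175)      # US: roughly 11% per year
print(f"world {world:.1f}%/yr, US {us:.1f}%/yr")
```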
Lionel Raff, Ranga Komanduri, Martin Hagan, and Satish Bukkapatnam
- Published in print:
- 2012
- Published Online:
- November 2020
- ISBN:
- 9780199765652
- eISBN:
- 9780197563113
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199765652.003.0011
- Subject:
- Chemistry, Physical Chemistry
Genetic algorithms (GA), like NNs, can be used to fit highly nonlinear functional forms, such as empirical interatomic potentials from a large ensemble of data. Briefly, a genetic algorithm uses a stochastic global search method that mimics the process of natural biological evolution. GAs operate on a population of potential solutions applying the principle of survival of the fittest to generate progressively better approximations to a solution. A new set of approximations is generated in each iteration (also known as generation) of a GA through the process of selecting individuals from the solution space according to their fitness levels, and breeding them together using operators borrowed from natural genetics. This process leads to the evolution of populations of individuals that have a higher probability of being “fitter,” i.e., better approximations of the specified potential values, than the individuals they were created from, just as in natural adaptation. The most time-consuming part in implementing a GA is often the evaluation of the objective or the fitness function. The objective function O[P] is expressed as a sum of squared errors computed over a given large ensemble of data. Consequently, the time required for evaluating the objective function becomes an important factor. Since a GA is well suited for implementing on parallel computers, the time required for evaluating the objective function can be reduced significantly by parallel processing. A better approach would be to map out the objective function using several possible solutions concurrently or beforehand to improve computational efficiency of the GA prior to its execution, and using this information to implement the GA. This will obviate the need for cumbersome direct evaluation of the objective function. Neural networks may be best suited to map the functional relationship between the objective function and the various parameters of the specific functional form.
This study presents an approach that combines the universal function approximation capability of multilayer neural networks to accelerate a GA for fitting atomic system potentials. The approach involves evaluating the objective function, which for the present application is the mean squared error (MSE) between the computed and model-estimated potential, and training a multilayer neural network with decision variables as input and the objective function as output.
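The fitting loop the abstract describes can be sketched as a toy real-coded GA (not the study's actual code): the objective is the sum-squared error of a Lennard-Jones pair potential against a reference data ensemble. All function names, parameter ranges, and GA settings below are invented for illustration.

```python
# Minimal GA sketch: fit the (epsilon, sigma) parameters of a Lennard-Jones
# potential to a synthetic reference data set by minimizing sum-squared error.
import random

random.seed(0)

def lj(r, eps, sig):
    """Lennard-Jones pair potential."""
    x = (sig / r) ** 6
    return 4.0 * eps * (x * x - x)

# Synthetic "reference" ensemble generated with known parameters (1.0, 1.0).
R = [0.9 + 0.05 * i for i in range(30)]
TARGET = [lj(r, 1.0, 1.0) for r in R]

def objective(p):
    """Sum-squared error over the data ensemble (fitness; lower is better)."""
    eps, sig = p
    return sum((lj(r, eps, sig) - t) ** 2 for r, t in zip(R, TARGET))

def evolve(pop_size=40, generations=60):
    pop = [[random.uniform(0.5, 2.0), random.uniform(0.5, 2.0)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                 # survival of the fittest
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            w = random.random()                 # blend crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if random.random() < 0.2:           # mutation
                i = random.randrange(2)
                child[i] += random.gauss(0.0, 0.05)
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

best = evolve()
print(best, objective(best))
```

In the paper's scheme, the `objective` call above is exactly the expensive step that a trained neural-network surrogate would replace.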
Lallit Anand and Sanjay Govindjee
- Published in print:
- 2020
- Published Online:
- September 2020
- ISBN:
- 9780198864721
- eISBN:
- 9780191896767
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198864721.003.0005
- Subject:
- Physics, Condensed Matter Physics / Materials
This chapter discusses the first and second laws of thermodynamics. The first law represents a balance between the rate of change of the internal energy plus the rate of change of kinetic energy of a part of the body, and the rate at which energy in the form of heat is transferred to the part plus the mechanical power expended upon it. A part also possesses entropy, and the second law is the statement that the rate at which the net entropy of a part changes is greater than or equal to the entropy flow into the part, resulting in a free energy imbalance known as the Clausius-Duhem inequality.
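In generic continuum-thermodynamics notation (the symbol choices here are conventional assumptions, not necessarily the authors'), the two laws for a part P with boundary ∂P can be sketched as:

```latex
% First law: rate of internal plus kinetic energy of a part P equals
% the heat flow Q(P) into P plus the mechanical power W(P) expended on it.
\frac{\mathrm{d}}{\mathrm{d}t}\int_{P}\Bigl(\rho\,\varepsilon
  + \tfrac{1}{2}\rho\,|\mathbf{v}|^{2}\Bigr)\,\mathrm{d}v
  \;=\; \mathcal{Q}(P) + \mathcal{W}(P)

% Second law: the entropy of P grows at least as fast as the entropy
% inflow (heat flux q, heat supply r, absolute temperature \vartheta):
\frac{\mathrm{d}}{\mathrm{d}t}\int_{P}\rho\,\eta\,\mathrm{d}v
  \;\ge\; -\int_{\partial P}\frac{\mathbf{q}\cdot\mathbf{n}}{\vartheta}\,\mathrm{d}a
  \;+\; \int_{P}\frac{\rho\,r}{\vartheta}\,\mathrm{d}v
```

Combining the two statements and eliminating the heat terms is what yields the free-energy imbalance (Clausius-Duhem inequality) mentioned in the abstract.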
E. C. Pielou
- Published in print:
- 2001
- Published Online:
- February 2013
- ISBN:
- 9780226668062
- eISBN:
- 9780226668055
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226668055.003.0002
- Subject:
- Biology, Natural History and Field Guides
This chapter addresses the question: What is energy? It discusses the concept of work, energy conversions, potential energy (PE), and kinetic energy. Energy results from two kinds of forces. One kind, exemplified by gravity and elasticity, is called a conservative force; its salient feature is that it can be stored as gravitational PE and elastic PE. A system in which the only forces acting are conservative forces never runs down. The other kind of force, exemplified by friction and air resistance, is nonconservative. When nonconservative forces are operating, either alone or in combination with conservative ones, a system inevitably runs down. Nonconservative forces produce heat, and the heat can never spontaneously turn back into another kind of energy.
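The distinction can be illustrated numerically. The sketch below (all parameters invented) integrates a falling mass with and without a nonconservative drag force: with gravity alone the total PE + KE stays (numerically almost) constant, while drag makes the total run down.

```python
# A mass falling under gravity, integrated with semi-implicit Euler steps.
# With only the conservative force (gravity), PE + KE is conserved up to a
# small integration drift; a nonconservative drag force dissipates it.
def simulate(drag, steps=10000, dt=1e-3, m=1.0, g=9.81, v0=0.0, h0=100.0):
    v, h = v0, h0
    for _ in range(steps):
        a = -g - drag * v * abs(v) / m   # quadratic drag opposes the motion
        v += a * dt
        h += v * dt
    return m * g * h + 0.5 * m * v * v   # total mechanical energy: PE + KE

e_conservative = simulate(drag=0.0)      # stays near m*g*h0 = 981 J
e_dissipative = simulate(drag=0.5)       # visibly smaller: energy became heat
print(e_conservative, e_dissipative)
```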
Martin Nilsson and Steen Rasmussen
- Published in print:
- 2003
- Published Online:
- November 2020
- ISBN:
- 9780195137170
- eISBN:
- 9780197561652
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/9780195137170.003.0011
- Subject:
- Computer Science, Systems Analysis and Design
Realistic molecular dynamics and self-assembly are represented in a lattice simulation where water, water-hydrocarbon, and water-amphiphile systems are investigated. The details of the phase separation dynamics and the constructive self-assembly dynamics are discussed and compared to the corresponding experimental systems. The method used to represent the different molecular types can easily be extended to include additional molecules and thus allow the assembly of more complex structures. This molecular dynamics (MD) lattice gas fills a modeling gap between traditional MD and lattice gas methods. Both molecular objects and force fields are represented by propagating information particles, and all microscopic interactions are reversible. Living systems, perhaps the ultimate constructive dynamical systems, are the motivation for this work, and our focus is a study of the dynamics of molecular self-assembly and self-organization. In living systems, matter is organized such that it spontaneously constructs intricate functionalities at all levels, from the molecules up to the organism and beyond. At the lower levels of description, chemical reactions, molecular self-assembly, and self-organization are the drivers of this complexity. We shall, in this chapter, demonstrate how molecular self-assembly and self-organization processes can be represented in formal systems. The formal systems are defined as a special kind of lattice gas, in a form where an obvious correspondence exists between the observables in the lattice gases and the experimentally observed properties in the molecular self-assembly systems. This has the clear advantage that, by using these formal systems, theory, simulation, and experiment can be conducted in concert and can mutually support each other. However, a disadvantage also exists, because analytical results are difficult to obtain for these formal systems due to their inherent complexity, dictated by their necessary realism.
As simpler molecules (from lower levels) combine into higher-order structures, dynamical hierarchies are formed [2, 3]. Dynamical hierarchies are characterized by distinct observable functionalities at multiple levels of description. Since these higher-order structures are generated spontaneously due to the physico-chemical properties of their building blocks, complexity can come for free in molecular self-assembly systems. Through such processes, matter apparently can program itself into structures that constitute living systems [11, 27, 30].
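A far simpler toy than the chapter's MD lattice gas, but one showing the same phase-separation tendency, is a conserved-order-parameter (Kawasaki-exchange) lattice Monte Carlo model. Everything below is a simplified stand-in, not the authors' method: "water" and "hydrocarbon" sites that prefer like neighbors demix from an initially random mixture, and the mixing energy falls as domains coarsen.

```python
# Toy 2D lattice of "water" (0) and "hydrocarbon" (1) sites evolved by
# conserved Kawasaki-exchange Monte Carlo with a like-neighbor attraction.
import random
import math

random.seed(1)
N = 24                        # lattice size (periodic boundaries)
J = 1.0                       # like-neighbor attraction
T = 0.8                       # temperature in units of J/k_B
grid = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

def neighbors(i, j):
    return [((i+1) % N, j), ((i-1) % N, j), (i, (j+1) % N), (i, (j-1) % N)]

def site_energy(i, j):
    s = grid[i][j]
    return -J * sum(1 for (a, b) in neighbors(i, j) if grid[a][b] == s)

def sweep():
    for _ in range(N * N):
        i, j = random.randrange(N), random.randrange(N)
        a, b = random.choice(neighbors(i, j))
        if grid[i][j] == grid[a][b]:
            continue                       # swapping equal sites does nothing
        e_old = site_energy(i, j) + site_energy(a, b)
        grid[i][j], grid[a][b] = grid[a][b], grid[i][j]
        e_new = site_energy(i, j) + site_energy(a, b)
        dE = e_new - e_old
        if dE > 0 and random.random() >= math.exp(-dE / T):
            grid[i][j], grid[a][b] = grid[a][b], grid[i][j]   # reject move

def mixing_energy():
    # Each bond is counted from both ends, so halve the per-site sum.
    return sum(site_energy(i, j) for i in range(N) for j in range(N)) / 2

e0 = mixing_energy()
for _ in range(200):
    sweep()
print(e0, mixing_energy())    # energy drops as separated domains form
```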
Philip Coppens
- Published in print:
- 1997
- Published Online:
- November 2020
- ISBN:
- 9780195098235
- eISBN:
- 9780197560877
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195098235.003.0011
- Subject:
- Chemistry, Physical Chemistry
The total energy of a quantum-mechanical system can be written as the sum of its kinetic energy T, Coulombic energy E_Coul, and exchange and electron-correlation contributions E_x and E_corr, respectively: E = T + E_Coul + E_x + E_corr (9.1). The only term in this expression that can be derived directly from the charge distribution is the Coulombic energy. It consists of nucleus–nucleus repulsion, nucleus–electron attraction, and electron–electron repulsion terms.
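One piece of E_Coul, the nucleus–nucleus repulsion, is a simple pairwise sum over point charges. A minimal sketch in atomic units follows; the geometry and charges are invented for illustration.

```python
# Pairwise Coulomb sum over point charges (atomic units: Hartree, bohr).
def coulomb_energy(charges, positions):
    """Sum of q_i * q_j / r_ij over distinct pairs."""
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = sum((a - b) ** 2
                    for a, b in zip(positions[i], positions[j])) ** 0.5
            e += charges[i] * charges[j] / r
    return e

# Two protons about 1.4 bohr apart (roughly the H2 bond length):
e_nn = coulomb_energy([1.0, 1.0], [(0.0, 0.0, 0.0), (1.4, 0.0, 0.0)])
print(e_nn)   # 1/1.4 Hartree of repulsion
```

The nucleus–electron and electron–electron terms replace the point charges with integrals over the continuous charge distribution, which is why E_Coul is the one term recoverable directly from experimental charge densities.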
R. M. Goody and Y. L. Yung
- Published in print:
- 1989
- Published Online:
- November 2020
- ISBN:
- 9780195051346
- eISBN:
- 9780197560976
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195051346.003.0004
- Subject:
- Earth Sciences and Geography, Atmospheric Sciences
In common with astrophysical usage, the word intensity will denote specific intensity of radiation, i.e., the flux of energy in a given direction per second per unit frequency (or wavelength) range per unit solid angle per unit area perpendicular to the given direction. In Fig. 2.1 the point P is surrounded by a small element of area dπs, perpendicular to the direction of the unit vector s. From each point on dπs a cone of solid angle dωs is drawn about the s vector. The bundle of rays, originating on dπs and contained within dωs, transports in time dt and in the frequency range ν to ν + dν the energy E_ν = I_ν(P, s) dπs dωs dν dt (2.1), where I_ν(P, s) is the specific intensity at the point P in the s-direction. If I_ν is not a function of direction the intensity field is said to be isotropic; if I_ν is not a function of position the field is said to be homogeneous.
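As a numerical illustration of eq. (2.1), the energy is just the product of the specific intensity with the four differential factors; the SI values below are arbitrary, chosen only for the example.

```python
# Energy carried by a narrow beam through an area element, per eq. (2.1).
def beam_energy(I_nu, dA, d_omega, d_nu, dt):
    """E_nu = I_nu * dA * d_omega * d_nu * dt (SI units throughout)."""
    return I_nu * dA * d_omega * d_nu * dt

E = beam_energy(I_nu=1.0e-12,    # specific intensity, W m^-2 Hz^-1 sr^-1
                dA=1.0e-4,       # area element, m^2
                d_omega=1.0e-3,  # solid angle, sr
                d_nu=1.0e9,      # frequency interval, Hz
                dt=1.0)          # time interval, s
print(E)   # 1e-10 J
```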
Gilles Doucet
- Published in print:
- 2020
- Published Online:
- January 2021
- ISBN:
- 9780197548684
- eISBN:
- 9780197548714
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197548684.003.0011
- Subject:
- Law, Public International Law, Philosophy of Law
This chapter describes a multilateral transparency measure requiring "notification of transfer of kinetic energy to an object in Earth orbit." This proposed transparency and confidence-building measure (TCBM) would address a number of stumbling blocks in space arms control, such as defining a weapon in outer space and the challenges of verification, and offers the potential of easing tensions, increasing trust, and achieving a more secure space operating environment. This TCBM is based on behavior, and its transparency will assist in differentiating commercial/civil R&D from military activities in the emerging fields of on-orbit servicing and active debris removal. This measure would make it more difficult for States to surreptitiously develop antisatellite weapons, and may also reduce the perceived need for such a capability.
Vasily Bulatov and Wei Cai
- Published in print:
- 2006
- Published Online:
- November 2020
- ISBN:
- 9780198526148
- eISBN:
- 9780191916618
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198526148.003.0006
- Subject:
- Computer Science, Software Engineering
Fundamentally, materials derive their properties from the interaction between their constituent atoms. These basic interactions make the atoms assemble in a particular crystalline structure. The same interactions also define how the atoms prefer to arrange themselves in the dislocation core. Therefore, to understand the behavior of dislocations, it is necessary and sufficient to study the collective behavior of atoms in crystals populated by dislocations. This chapter introduces the basic methodology of atomistic simulations that will be applied to the studies of dislocations in the following chapters. Section 1 discusses the nature of interatomic interactions and introduces empirical models that describe these interactions with various degrees of accuracy. Section 2 introduces the significance of the Boltzmann distribution that describes statistical properties of a collection of interacting atoms in thermal equilibrium. This section sets the stage for a subsequent discussion of basic computational methods to be used throughout this book. Section 3 covers the methods for energy minimization. Sections 4 and 5 give a concise introduction to Monte Carlo and molecular dynamics methods. When put close together, atoms interact by exerting forces on each other. Depending on the atomic species, some interatomic interactions are relatively easy to describe, while others can be very complicated. This variability stems from the quantum mechanical motion and interaction of electrons [15, 16]. Hence, rigorous treatment of interatomic interactions should be based on a solution of Schrödinger's equation for interacting electrons, which is usually referred to as the first principles or ab initio theory. Numerical calculations based on first principles are computationally very expensive and can only deal with a relatively small number of atoms.
In the context of dislocation modelling, relevant behaviors often involve many thousands of atoms and can only be approached using much less sophisticated but more computationally efficient models. Even though we do not use it in this book, it is useful to bear in mind that the first principles theory provides a useful starting point for constructing approximate but efficient models that are needed to study large-scale problems involving many atoms.
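As a sketch of one tool from this toolbox, a Metropolis Monte Carlo loop (Section 4) sampling the Boltzmann distribution (Section 2) for a single coordinate in a harmonic well; reduced units, and all parameters are arbitrary choices for the example, not taken from the book.

```python
# Metropolis Monte Carlo sampling of the Boltzmann distribution for one
# atom in a harmonic well U(x) = 0.5*k*x^2. Equipartition predicts the
# sampled mean of x^2 to approach kT/k.
import random
import math

random.seed(2)
k, kT = 4.0, 1.0              # spring constant and temperature (reduced units)
x, width = 0.0, 1.0           # current state and trial-move width
samples = []

for step in range(200000):
    x_new = x + random.uniform(-width, width)
    dU = 0.5 * k * (x_new ** 2 - x ** 2)
    if dU <= 0 or random.random() < math.exp(-dU / kT):
        x = x_new                          # accept by the Metropolis rule
    if step > 20000:                       # discard equilibration phase
        samples.append(x * x)

mean_x2 = sum(samples) / len(samples)
print(mean_x2)                # should be close to kT/k = 0.25
```

The same accept/reject rule, with U replaced by an empirical interatomic potential summed over many atoms, is what the Monte Carlo simulations of later chapters use.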