Raymond Brun
- Published in print:
- 2009
- Published Online:
- May 2009
- ISBN:
- 9780199552689
- eISBN:
- 9780191720277
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199552689.001.0001
- Subject:
- Physics, Theoretical, Computational, and Statistical Physics
High-enthalpy gas flows, which combine high velocities and high temperatures, are the scene of physical and chemical processes such as molecular vibrational excitation, dissociation, ionization, and various reactions. The characteristic times of these processes are of the same order of magnitude as the aerodynamic characteristic times, so these reactive media are generally in thermodynamic and chemical non-equilibrium. This book presents a general introductory study of these media. The first part describes their fundamental statistical aspects, starting from their discrete structure and taking into account the interactions between elementary particles: transport phenomena, relaxation, and kinetics, as well as their coupling, are analysed and illustrated with many examples. The second part of the work is devoted to the macroscopic aspects of reactive flows, including shock waves, hypersonic expansions, flows around bodies, and boundary layers. Experimental data on vibrational relaxation times, vibrational populations, and kinetic rate constants are also presented. Finally, experimental aspects of reactive flows, their simulation in shock tubes and shock tunnels, and their applications, particularly in the aerospace domain, are described.
Harvey R. Brown
- Published in print:
- 2005
- Published Online:
- September 2006
- ISBN:
- 9780199275830
- eISBN:
- 9780191603914
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0199275831.003.0008
- Subject:
- Philosophy, Philosophy of Science
This chapter begins with a discussion of Minkowski's geometrization of special relativity (SR). It then discusses Minkowski space-time, what absolute geometry explains, and special relativity. It is argued that Einstein's contribution was not to establish a clear-cut divide between kinematics and dynamics, but the demonstration (a) of the full operational significance of the Lorentz transformations, and (b) that the latter could be obtained by imposing simple phenomenological constraints on the nature of the fundamental interactions in physics.
William Taussig Scott and Martin X. Moleski
- Published in print:
- 2005
- Published Online:
- July 2005
- ISBN:
- 9780195174335
- eISBN:
- 9780199835706
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/019517433X.001.0001
- Subject:
- Religion, Philosophy of Religion
Michael Polanyi (1891–1976) was born to a Viennese family living in Hungary. After obtaining a medical degree, he served in the Austro-Hungarian army in World War I, then chose Austrian citizenship in the aftermath of the war. While on sick leave, he wrote an article on the adsorption of gases that became the foundation for his doctoral research in physical chemistry at Karlsruhe in Germany. In his later work at the Kaiser Wilhelm Institute in Berlin and the University of Manchester in England, Polanyi also worked on crystallography and reaction kinetics. After fleeing to England from Nazi Germany, Polanyi gradually turned away from physical chemistry to studies in economics, social and political analysis, philosophy, theology, and aesthetics. The biography traces the development of Polanyi's theory of tacit, personal knowledge and shows how his scientific career shaped his philosophy of science and his view of religion in general and Christianity and Judaism in particular.
William Taussig Scott and Martin X. Moleski
- Published in print:
- 2005
- Published Online:
- July 2005
- ISBN:
- 9780195174335
- eISBN:
- 9780199835706
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/019517433X.003.0003
- Subject:
- Religion, Philosophy of Religion
In preparation for a scientific career in Germany, Polanyi chose Austrian citizenship and was baptized as a Roman Catholic. During his doctoral studies at Karlsruhe, Polanyi completed his thesis on adsorption and began to work on reaction kinetics. He became engaged to Magda Kemeny, who was also working on a doctorate at the University.
Giovanni Zocchi
- Published in print:
- 2018
- Published Online:
- January 2019
- ISBN:
- 9780691173863
- eISBN:
- 9781400890064
- Item type:
- book
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691173863.001.0001
- Subject:
- Physics, Soft Matter / Biological Physics
This book presents a dynamic new approach to the physics of enzymes and DNA from the perspective of materials science. Unified around the concept of molecular deformability—how proteins and DNA stretch, fold, and change shape—the book describes the complex molecules of life from the innovative perspective of materials properties and dynamics, in contrast to structural or purely chemical approaches. It covers a wealth of topics, including nonlinear deformability of enzymes and DNA; the chemo-dynamic cycle of enzymes; supra-molecular constructions with internal stress; nano-rheology and viscoelasticity; and chemical kinetics, Brownian motion, and barrier crossing. Essential reading for researchers in materials science, engineering, and nanotechnology, the book also describes the landmark experiments that have established the materials properties and energy landscape of large biological molecules. The book gives graduate students a working knowledge of model building in statistical mechanics, making it an essential resource for tomorrow's experimentalists in this cutting-edge field. In addition, mathematical methods are introduced in the bio-molecular context. The result is a generalized approach to mathematical problem solving that enables students to apply their findings more broadly.
Erich H. Kisi and Christopher J. Howard
- Published in print:
- 2008
- Published Online:
- January 2009
- ISBN:
- 9780198515944
- eISBN:
- 9780191705663
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198515944.003.0001
- Subject:
- Physics, Condensed Matter Physics / Materials
This chapter opens with brief descriptions of neutrons, powders (polycrystalline materials), and diffraction, followed by an account of how the unique properties of neutrons and their interaction with matter define a role for neutron powder diffraction in the study of condensed matter. The concepts are refined within an historical account of the development of neutron powder diffraction and its applications. References are made to the earliest demonstration of neutron diffraction, to the development of neutron sources (research reactors and accelerator-based sources), and to applications ranging from early investigations of simple crystal and magnetic structures to the more recent investigations of kinetic processes and complex crystal structures (high-temperature superconductors, fullerenes).
Erich H. Kisi and Christopher J. Howard
- Published in print:
- 2008
- Published Online:
- January 2009
- ISBN:
- 9780198515944
- eISBN:
- 9780191705663
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198515944.003.0012
- Subject:
- Physics, Condensed Matter Physics / Materials
This chapter highlights some recent and forthcoming developments that impact on the practice of neutron powder diffraction. New and more powerful neutron sources are now in service or under construction. At the same time, there has been continuing development of critical components, such as neutron guides (supermirrors), compact collimators, focussing monochromators, and fast detection systems. These developments have been incorporated into new neutron powder diffractometers, for high resolution, high intensity, and residual stress applications. Advances in data analysis discussed include an increased focus on the use of group theory, the analysis of the total scattering, e.g., via pair distribution functions, mapping by the maximum entropy method, and rapid handling of extensive data sets. Frontier applications range from fast reaction kinetics (combustion synthesis) to the structure refinement of biological molecules. It is suggested that application of neutron powder diffraction for simultaneous investigation of structure and microstructure will assume increasing importance.
C. N. Hinshelwood
- Published in print:
- 2005
- Published Online:
- September 2007
- ISBN:
- 9780198570257
- eISBN:
- 9780191717659
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198570257.003.0019
- Subject:
- Physics, Condensed Matter Physics / Materials
This chapter discusses the role of energy and entropy in chemical reactions. Topics covered include chain reactions, branching chains, catalysis, and the development of chemical kinetics.
J. Klafter and I. M. Sokolov
- Published in print:
- 2011
- Published Online:
- December 2013
- ISBN:
- 9780199234868
- eISBN:
- 9780191775024
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199234868.001.0001
- Subject:
- Physics, Soft Matter / Biological Physics
The name “random walk” for the problem of the displacement of a point in a sequence of independent random steps was coined by Karl Pearson in 1905 in a question posed to readers of “Nature”. The same year, a similar problem was formulated by Albert Einstein in one of his Annus Mirabilis works. An even earlier version of the problem was posed by Louis Bachelier in his 1900 thesis on the theory of financial speculation. The theory of random walks has since proved useful in physics and chemistry (diffusion, reactions, mixing in flows), economics, and biology (from the spread of animals to the motion of subcellular structures), as well as in many other disciplines. The random walk approach serves not only as a model of simple diffusion but also of many complex sub- and superdiffusive transport processes. This book discusses the main variants of random walks and gives the most important mathematical tools for their theoretical description.
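As a minimal illustration of the simplest case described above (my own sketch, not code from the book), the following simulates a walk of independent ±1 steps and checks that the mean-squared displacement grows linearly with the number of steps, the ordinary diffusive behaviour that sub- and superdiffusive walks depart from.

```python
# A minimal sketch (my illustration, not code from the book): an unbiased walk
# of independent +/-1 steps, the problem Pearson posed in 1905.  The
# mean-squared displacement grows linearly with the number of steps, the
# hallmark of ordinary diffusion; sub- and superdiffusive walks deviate from
# this linear scaling.
import numpy as np

rng = np.random.default_rng(42)
n_walkers, n_steps = 5_000, 1_000

steps = rng.choice([-1, 1], size=(n_walkers, n_steps))   # independent random steps
positions = np.cumsum(steps, axis=1)                     # walker trajectories
msd = np.mean(positions.astype(float) ** 2, axis=0)      # average over walkers

# For unit steps, <x^2(n)> should be close to n.
for n in (10, 100, 1000):
    print(f"after {n:>4} steps: MSD = {msd[n - 1]:7.1f} (expected ~ {n})")
```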
Athel Cornish-Bowden
- Published in print:
- 2000
- Published Online:
- November 2020
- ISBN:
- 9780199638130
- eISBN:
- 9780191918179
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199638130.003.0010
- Subject:
- Chemistry, Organic Chemistry
All of chemical kinetics is based on rate equations, but this is especially true of steady-state enzyme kinetics: in other applications a rate equation can be regarded as a differential equation that has to be integrated to give the function of real interest, whereas in steady-state enzyme kinetics it is used as it stands. Although the early enzymologists tried to follow the usual chemical practice of deriving equations that describe the state of the reaction as a function of time, there were too many complications, such as loss of enzyme activity, effects of accumulating product, etc., for this to be a fruitful approach. Rapid progress only became possible when Michaelis and Menten (1) realized that most of the complications could be removed by extrapolating back to zero time and regarding the measured initial rate as the primary observation. Since then, of course, accumulating knowledge has made it possible to study time courses directly, and this has led to two additional subdisciplines of enzyme kinetics: transient-state kinetics, which deals with the time regime before a steady state is established, and progress-curve analysis, which deals with the slow approach to equilibrium during the steady-state phase. The former has achieved great importance but is regarded as more specialized; it is dealt with in later chapters of this book. Progress-curve analysis has never recovered the importance that it had at the beginning of the twentieth century. Nearly all steps that form parts of the mechanisms of enzyme-catalysed reactions either involve reactions of a single molecule, in which case they typically follow first-order kinetics, v = ka (1), or they involve two molecules (usually but not necessarily different from one another) and typically follow second-order kinetics, v = kab (2). In both cases v represents the rate of reaction, a and b are the concentrations of the molecules involved, and k is a rate constant. Because we shall be regarding the rate as a quantity in its own right, it is not usual in steady-state kinetics to represent it as a derivative such as −da/dt.
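To make the two rate laws concrete, here is a short sketch (mine, not the chapter's own code; the rate constant and concentrations are invented) that evaluates equations (1) and (2) and integrates the first-order case analytically, illustrating why the initial rate at zero time is a convenient primary observation.

```python
# A minimal sketch (mine, not the chapter's code): evaluating rate laws (1) and
# (2) quoted in the abstract, and integrating the first-order case analytically
# to show why the initial rate at zero time is a convenient primary observation.
# Rate constants and concentrations are hypothetical.
import numpy as np

def first_order_rate(a, k):
    """Equation (1): v = k a, a single molecule reacting."""
    return k * a

def second_order_rate(a, b, k):
    """Equation (2): v = k a b, two molecules reacting."""
    return k * a * b

k, a0 = 0.5, 1.0                       # s^-1 and mol L^-1 (invented values)
t = np.linspace(0.0, 10.0, 6)          # seconds
a_t = a0 * np.exp(-k * t)              # integrated first-order time course
v0 = first_order_rate(a0, k)           # initial rate, the primary observation

print("a(t) =", np.round(a_t, 4))
print("initial rate v0 =", v0, "mol L^-1 s^-1")
print("example second-order rate:", second_order_rate(0.5, 0.2, 2.0), "mol L^-1 s^-1")
```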
M. T. Wilson and J. Torres
- Published in print:
- 2000
- Published Online:
- November 2020
- ISBN:
- 9780199638130
- eISBN:
- 9780191918179
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199638130.003.0012
- Subject:
- Chemistry, Organic Chemistry
There was a time, fortunately some years ago now, when to undertake rapid kinetic measurements using a stopped-flow spectrophotometer verged on the heroic. One needed to be armed with knowledge of amplifiers, light sources, oscilloscopes, etc., and ideally one’s credibility was greatly enhanced were one to build one’s own instrument. Analysis of the data was similarly difficult. To obtain a single rate constant might involve a wide range of skills in addition to those required for the chemical/biochemical manipulation of the system, and could easily include photography, developing prints, and considerable mathematical agility. Now all this has changed and, from the point of view of the scientist attempting to solve problems through transient kinetic studies, a good thing too! Very high quality data can readily be obtained by anyone with a few hours’ training and the ability to use a mouse and ‘point and click’ programs. Excellent stopped-flow spectrophotometers can be bought which are reliable, stable, and sensitive, and which are controlled by computers able to signal-average and to analyse, in seconds, kinetic progress curves in a number of ways, yielding rate constants, amplitudes, residuals, and statistics. Because it is now so easy, from the technical point of view, to make measurements, and to do so without an apprenticeship in kinetic methods, it becomes important to make sure that one collects data that are meaningful and open to sensible interpretation. There are a number of pitfalls to avoid. The emphasis of this article is, therefore, somewhat different from that of the one written by Eccleston (1) in an earlier volume of this series. Less time will be spent on consideration of the hardware, although the general principles are given; the focus will be on making sure that the data collected mean what one thinks they mean, and then on how to be sure one is extracting kinetic parameters from them in a sensible way. With the advent of powerful, fast computers it has now become possible to process very large data sets quickly, and this has paved the way for the application of ‘rapid scan’ devices (usually, but not exclusively, diode arrays), which allow complete spectra to be collected at very short time intervals during a reaction.
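As an illustration of the data-analysis step discussed above, here is a hedged sketch (my example, not the authors' protocol) of the fit that stopped-flow software typically performs: a single-exponential progress curve fitted to a synthetic noisy trace to recover an observed rate constant; all numbers are invented.

```python
# A hedged sketch (my example, not the authors' protocol): the kind of analysis
# a stopped-flow instrument's software performs automatically, i.e. fitting a
# single-exponential progress curve A(t) = A_inf + dA * exp(-k_obs * t) to a
# noisy trace to recover the observed rate constant and amplitude.  The "data"
# here are synthetic and all numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, a_inf, delta_a, k_obs):
    """Absorbance versus time for a single first-order relaxation."""
    return a_inf + delta_a * np.exp(-k_obs * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.5, 200)                       # seconds
trace = single_exponential(t, 0.10, 0.25, 15.0)      # true k_obs = 15 s^-1
trace += rng.normal(0.0, 0.002, t.size)              # instrument noise

popt, pcov = curve_fit(single_exponential, t, trace, p0=(0.1, 0.2, 10.0))
a_inf, delta_a, k_obs = popt
print(f"fitted k_obs     = {k_obs:.1f} s^-1")
print(f"fitted amplitude = {delta_a:.3f} absorbance units")
```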
ANGELO GAVEZZOTTI
- Published in print:
- 2006
- Published Online:
- January 2010
- ISBN:
- 9780198570806
- eISBN:
- 9780191718779
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198570806.003.0015
- Subject:
- Physics, Atomic, Laser, and Optical Physics
In molecular crystallisation there are no first-rate laws, very few second-rate laws and many third-rate laws. The first-rate laws of thermodynamics become in such a context third-rate laws because the concept of phase is ill-defined in most if not all of the transformations involved in the evolution from a disperse molecular system to a molecular aggregate. There is a wide gap between the ever increasing ease with which the aggregation and crystallisation phenomenon can be studied thanks to calorimetry, X-ray diffraction, nuclear magnetic resonance, atomic force microscopy, molecular simulation, and the degree of understanding and control that may be gained from these experiments. If even laws are weak, the way to a theory seems even more problematic. This chapter examines whether a theory of crystallisation currently exists, laws and theories in chemistry, stages of molecular aggregation in oligomers, nanoparticles and mesoparticles, aggregation of macroscopic crystals, and the thermodynamics, kinetics, and symmetry of molecular aggregation.
Wolfgang Banzhaf and Lidia Yamamoto
- Published in print:
- 2015
- Published Online:
- September 2016
- ISBN:
- 9780262029438
- eISBN:
- 9780262329460
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262029438.003.0018
- Subject:
- Public Health and Epidemiology, Public Health
This chapter is devoted to one of the main applications of artificial chemistries, the modeling of biological systems. The chapter starts at the molecular level, with algorithms that model RNA and protein folding. An introduction to models of enzymatic reactions and the binding of proteins to genes then follows. Models of the dynamics of biochemical pathways are discussed next, with focus on metabolic networks. An overview of algorithms to simulate the large-scale reaction networks common in biology is presented afterwards. Genetic regulatory networks (GRNs) are examples of such large-scale reaction networks, and are discussed next. A treatment of cell differentiation, multicellularity and morphogenesis concludes the chapter.
Dmitri I. Svergun, Michel H. J. Koch, Peter A. Timmins, and Roland P. May
- Published in print:
- 2013
- Published Online:
- December 2013
- ISBN:
- 9780199639533
- eISBN:
- 9780191747731
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199639533.003.0008
- Subject:
- Physics, Crystallography: Physics
Following a brief historical introduction, the difference between dynamics and kinetics is explained to clarify that SAS produces kinetic information. Data interpretation methods are based on those developed for mixtures. An overview of the main perturbation methods (temperature, pressure, mixing, light and fields) and the relevant SAS instrumentation is presented. Examples of applications illustrate some of the results obtained for assembly and (un)folding phenomena induced by different rates of temperature or pressure change. An extensive survey of protein and RNA (un)folding studies, as well as studies of the kinetics of allosteric transitions and virus assembly relying mostly on fast mixing, is also made. Even if they miss the early stages of the transitions, time-resolved measurements have provided new insights into many of these processes. Further progress in time-resolution can be expected in cases where light-triggered fast pump–probe experiments are possible, as illustrated by recent experiments on CO-haemoglobin.
A. Ducruix and R. Giegé
- Published in print:
- 1999
- Published Online:
- November 2020
- ISBN:
- 9780199636792
- eISBN:
- 9780191918148
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199636792.003.0009
- Subject:
- Chemistry, Crystallography: Chemistry
There are many methods to crystallize biological macromolecules (for reviews see refs 1-3), all of which aim at bringing the solution of macromolecules to a supersaturation state (see Chapters 10 and 11). Although vapour-phase equilibrium and dialysis techniques are the two most favoured by crystallographers and biochemists, batch and interface diffusion methods will also be described. Many chemical and physical parameters influence nucleation and crystal growth of macromolecules (see Chapter 1, Table 1). Nucleation and crystal growth will in addition be affected by the method used. Thus it may be wise to try different methods, keeping in mind that protocols should be adapted (see Chapter 4). As solubility is dependent on temperature (it can increase or decrease depending on the protein), it is strongly recommended to work at constant temperature (unless temperature variation is part of the experiment), using commercially available thermoregulated incubators. Refrigerators can be used, but if the door is opened often the temperature will vary, impeding reproducibility. Also, vibrations from the refrigerating compressor can interfere with crystal growth; this drawback can be overcome by separating the refrigerator from the compressor. In this chapter, crystallization will be described and correlated with the solubility diagrams described in Chapter 10. Observation is an important step during a crystallization experiment. If you have a large number of samples to examine, this will be time-consuming, and a zoom lens will be an asset. The use of a binocular microscope generally means the presence of a lamp; use of a cold lamp avoids warming the crystals (which could dissolve them). If crystals are grown at 4°C and observations are made at room temperature, observation time should be minimized. Preparation of the solutions of all chemicals used for the crystallization of biological macromolecules should follow some common rules: when possible, use a hood (such as a laminar flow hood) to avoid dust; all chemicals must be of the purest chemical grade (ACS grade); and stock solutions should be prepared as concentrated as possible with double-distilled water. The solubility of most chemicals is given in the Merck Index. Filter solutions with a 0.22 μm minifilter.
Antony N. Beris and Brian J. Edwards
- Published in print:
- 1994
- Published Online:
- November 2020
- ISBN:
- 9780195076943
- eISBN:
- 9780197560341
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195076943.003.0017
- Subject:
- Chemistry, Thermochemistry and Chemical Thermodynamics
The industrial use of low-ambient-temperature, weakly ionized plasmas as a reaction environment is growing rapidly. This is primarily evident in the manufacturing technologies of advanced materials, such as the ones used in micro-electronic devices [Jensen, 1987]. The advantages of the plasma environment are due primarily to the presence of high energy electrons which allow high energy chemistry to take place at low ambient temperatures. An example is the successful plasma-enhanced chemical vapor deposition of silicon nitride at temperatures as low as 250-350°C versus temperatures in the range of 700-900°C required for thermal deposition [Reif, 1984]. Thus emerges a need for modeling of the reaction chemistry and the transport phenomena within complex, multicomponent, charged-particle systems, under the influence of externally-imposed electric and magnetic fields. The present chapter addresses this need within the framework of a multi-fluid reactive continuum [Woods, 1975, ch. 9]. Multi-fluid continuum descriptions have arisen as a natural generalization of multicomponent systems in order to account for the absence of momentum and/or energy equilibria between different species populations within the same system [Enz, 1974; Woods, 1975, ch. 9]. The key underlying assumption is that of interpenetrating continua: each one of the mutually interacting, constituent subsystems is characterized as a separate continuum with its own (macroscopic) state variables. Hidden within this assumption is the local equilibrium hypothesis, not between different subsystems—that would have resulted in the more traditional multicomponent description—but within each subsystem in order for the description of each subsystem using (equilibrium) state variables to be meaningful. This is both an asset and a liability of the multi-fluid approach: an asset, because the whole framework of equilibrium thermodynamics is still applicable at the subsystem level, resulting, among other things, in a description requiring only a few well-defined macroscopic state variables; a liability, because it places very stringent requirements on the type of systems to which this theory can be applied. The multi-fluid approach is valid only for phenomena with characteristic time scales much larger than the time scale for each subsystem to reach internal (local) thermodynamic equilibrium.
W. Mark Saltzman
- Published in print:
- 2001
- Published Online:
- November 2020
- ISBN:
- 9780195085891
- eISBN:
- 9780197560501
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195085891.003.0009
- Subject:
- Chemistry, Medicinal Chemistry
Drug diffusion is an essential mechanism for drug dispersion throughout biological systems. Diffusion is fundamental to the migration of agents in the body and, as we will see in Chapter 9, diffusion can be used as a reliable mechanism for drug delivery. The rate of diffusion (i.e., the diffusion coefficient) depends on the architecture of the diffusing molecule. In the previous chapter a hypothetical solute with a diffusion coefficient of 10⁻⁷ cm²/s was used to describe the kinetics of diffusional spread throughout a region. Therapeutic agents have a multitude of sizes and shapes and, hence, diffusion coefficients vary in ways that are not easily predictable. Variability in the properties of agents is not the only difficulty in predicting rates of diffusion. Biological tissues present diverse resistances to molecular diffusion. Resistance to diffusion also depends on architecture: tissue composition, structure, and homogeneity are important variables. This chapter explores the variation in diffusion coefficient for molecules of different size and structure in physiological environments. The first section reviews some of the most important methods used to measure diffusion coefficients, while subsequent sections describe experimental measurements in media of increasing complexity: water, membranes, cells, and tissues. Diffusion coefficients are usually measured by observing changes in solute concentration with time and/or position. In most situations, concentration changes are monitored in laboratory systems of simple geometry; equally simple models (such as the ones developed in Chapter 3) can then be used to determine the diffusion coefficient. However, in biological systems, diffusion almost always occurs in concert with other phenomena that also influence solute concentration, such as bulk motion of fluid or chemical reaction. Therefore, experimental conditions that isolate diffusion—by eliminating or reducing fluid flows, chemical reactions, or metabolism—are often employed. Certain agents are eliminated from a tissue so slowly that the rate of elimination is negligible compared to the rate of dispersion. These molecules can be used as “tracers” to probe mechanisms of dispersion in the tissue, provided that elimination is negligible during the period of measurement. Frequently used tracers include sucrose [1, 2], iodoantipyrene [3], inulin [1], and size-fractionated dextran [3, 4].
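As a rough worked example of what a diffusion coefficient of 10⁻⁷ cm²/s implies (a back-of-envelope sketch of my own, not taken from the chapter), the snippet below uses the three-dimensional random-walk relation ⟨r²⟩ = 6Dt to estimate the distance such a solute spreads over various times.

```python
# A back-of-envelope sketch (my illustration, not taken from the chapter): how
# far the hypothetical solute with D = 1e-7 cm^2/s spreads by diffusion alone,
# using the three-dimensional random-walk relation <r^2> = 6 D t.
import math

D = 1e-7  # cm^2/s, the hypothetical diffusion coefficient cited in the abstract

for label, t in [("1 second", 1.0), ("1 minute", 60.0),
                 ("1 hour", 3600.0), ("1 day", 86400.0)]:
    rms_cm = math.sqrt(6.0 * D * t)            # root-mean-square displacement
    print(f"{label:>9}: ~{rms_cm * 1e4:7.0f} micrometres")
```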
Joseph E. Earley, Sr.
- Published in print:
- 2016
- Published Online:
- November 2020
- ISBN:
- 9780190494599
- eISBN:
- 9780197559666
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190494599.003.0017
- Subject:
- Chemistry, Theoretical Chemistry
A main aim of chemical research is to understand how the characteristic properties of specific chemical substances relate to the composition and to the structure of those materials. Such investigations assume a broad consensus regarding basic aspects of chemistry. Philosophers generally regard widespread agreement on basic principles as a remote goal, not something already achieved. They do not agree on how properties stay together in ordinary objects. Some follow John Locke [1632–1704] and maintain that properties of entities inhere in substrates. The item that this approach considers to underlie characteristics is often called “a bare particular” (Sider 2006). However, others reject this understanding and hold that substances are bundles of properties—an approach advocated by David Hume [1711–1776]. Some supporters of Hume’s theory hold that entities are collections of “tropes” (property-instances) held together in a “compresence relationship” (Simons 1994). Recently several authors have pointed out the importance of “structures” for the coherence of substances, but serious questions have been raised about those proposals. Philosophers generally use a time-independent (synchronic) approach and do not consider how chemists understand properties of chemical substances and of dynamic networks of chemical reactions. This chapter aims to clarify how current chemical understanding relates to aspects of contemporary philosophy. The first section introduces philosophical debates, the second considers properties of chemical systems, the third part deals with theories of wholes and parts, the fourth segment argues that closure grounds properties of coherences, the fifth section introduces structural realism (SR), the sixth part considers contextual emergence and concludes that dynamic structures of processes may qualify as determinants (“causes”) of specific outcomes, and the final section suggests that ordinary items are based on closure of relationships among constituents additionally determined by selection for integration into more-extensive coherences. Ruth Garrett Millikan discussed the concept of substance in philosophy: . . . Substances . . . are whatever one can learn from given only one or a few encounters, various skills or information that will apply to other encounters. . . . Further, this possibility must be grounded in some kind of natural necessity. . . . The function of a substance concept is to make possible this sort of learning and use of knowledge for a specific substance. . . . (Millikan 2000, 33)
Grant Fisher
- Published in print:
- 2016
- Published Online:
- November 2020
- ISBN:
- 9780190494599
- eISBN:
- 9780197559666
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190494599.003.0022
- Subject:
- Chemistry, Theoretical Chemistry
Computational modeling in organic chemistry employs multiple methods of approximation and idealization. Coordinating and integrating methods can be challenging because, even if a common theoretical basis is assumed, the computational result can depend on the choice of method. This can result in epistemic dissent as practitioners draw incompatible inferences about the mechanisms of organic reactions. These problems arose in the latter part of the twentieth century as quantum chemists attempted to extend their models and methods to the study of pericyclic reactions. The Woodward-Hoffmann rules were introduced in the mid-1960s to rationalize and predict the energetic requirements of a number of reactions of considerable synthetic significance. Soon after, quantitative quantum chemical approaches developed apace. But alternative methods of approximation yielded divergent quantitative predictions of transition state geometries and energies. This chapter explores the difficulties facing quantum chemists in the late twentieth century as they attempted to construct computational models of pericyclic reactions. Divergent model predictions made the methods used to construct those models the focus of epistemic scrutiny and dissent. The failure to achieve robust quantitative results across methods prompted practitioners to scrutinize the consequences of pragmatic tradeoffs between computational manageability and predictive accuracy. I call the strategies employed to probe these pragmatic tradeoffs "diagnostics." Diagnostics provides the means to probe manageability-accuracy tradeoffs for sources of predictive divergence and to determine the reliability and applicability of approximation procedures, idealizations, and even techniques of parametrization. Furthermore, although computing power continues to increase, and although there is now a general consensus on the veracity of high-level ab initio and density functional methods applied to pericyclic reactions, diagnostics imposes non-contingent pragmatic constraints on computational modelling. What counts as a "manageable" model is characterized by two dimensions: computational tractability and cognitive accessibility. While the former is a contingent feature of technological development, the latter is not, because cognitive skills are an ineliminable feature of computational modelling in organic chemistry.
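The idea of probing predictive divergence across methods can be illustrated, in a loose and purely schematic way, by a routine that compares predictions of the same quantity (say, a transition-state barrier) obtained with different methods and reports their spread. This is not the chapter's actual procedure, only a minimal sketch of the underlying idea; the method labels and numbers in the usage example are hypothetical placeholders.

```python
from statistics import mean, pstdev

def prediction_spread(predictions):
    """Given {method_label: predicted_value} for one quantity computed with
    several methods, return (mean, population std dev, max-min range) as
    crude indicators of how robust the prediction is across methods."""
    values = list(predictions.values())
    return mean(values), pstdev(values), max(values) - min(values)

# Hypothetical labels and numbers, used only to show the interface:
barriers_kcal = {"method_A": 30.1, "method_B": 35.7, "method_C": 28.4}
avg, sd, rng = prediction_spread(barriers_kcal)
print(f"mean = {avg:.1f}, std dev = {sd:.1f}, range = {rng:.1f} kcal/mol")
```

A large spread across methods would, on this toy picture, flag the prediction as a candidate for the kind of diagnostic scrutiny the chapter describes.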
John Ross, Igor Schreiber, and Marcel O. Vlad
- Published in print:
- 2006
- Published Online:
- November 2020
- ISBN:
- 9780195178685
- eISBN:
- 9780197562277
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195178685.003.0004
- Subject:
- Chemistry, Physical Chemistry
It is useful to have a brief discussion of some kinetic processes that we shall treat in later chapters. Some, but not all, of the material in this chapter is presented in [1] in more detail. A macroscopic, deterministic chemical reacting system consists of a number of different species, each with a given concentration (molecules or moles per unit volume). The word "macroscopic" implies that the concentrations are of the order of Avogadro's number (about 6.02 × 10^23) per liter. At any given instant each concentration has a well-defined value; that is, thermal fluctuations away from the average concentration are negligibly small (more in section 2.3). In many cases, though far from all, the kinetics obeys mass-action rate expressions of the type dA/dt = k(T) A^α B^β (2.1), where T is temperature, A is the concentration of species A (and likewise for B and any other species indicated by further factors), and α and β are empirically determined "orders" of reaction. The rate coefficient k is generally a function of temperature and frequently a function of T only. The dependence of k on T is given empirically by the Arrhenius equation k(T) = C exp(−E_a/RT) (2.2), where C, the frequency factor, is either nearly constant or a weakly temperature-dependent function, and E_a is the activation energy. Rate coefficients are averages of reaction cross-sections, as measured for example by molecular beam experiments. The a priori calculation of cross-sections from quantum mechanical fundamentals is extraordinarily difficult and has been done to good accuracy only for the simplest triatomic systems (such as D + H2). A widely used alternative approach is based on activated complex theory. In its simplest form, two reactants collide and form an activated complex, which is assumed to be in equilibrium with the reactants. One degree of freedom of the complex, a vibration, is allowed to lead to the dissociation of the complex to products, and the rate of that dissociation is taken to be the rate of the reaction.
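As a concrete illustration of equations (2.1) and (2.2), the short Python sketch below evaluates an Arrhenius rate coefficient and a mass-action rate. The function names and all numerical inputs are hypothetical placeholders chosen only to show the form of the calculation; they are not values taken from the chapter.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_k(C, Ea, T):
    """Rate coefficient from eq. (2.2): k(T) = C * exp(-Ea / (R T))."""
    return C * math.exp(-Ea / (R * T))

def mass_action_rate(k, A, B, alpha, beta):
    """Mass-action rate from eq. (2.1): k(T) * A**alpha * B**beta."""
    return k * (A ** alpha) * (B ** beta)

# Hypothetical placeholder inputs (not values from the chapter):
k = arrhenius_k(C=1.0e13, Ea=8.0e4, T=1500.0)   # frequency factor, J/mol, K
rate = mass_action_rate(k, A=1.0e-3, B=2.0e-3, alpha=1, beta=1)  # mol/L, first order in each
print(f"k(T) = {k:.3e}, rate = {rate:.3e}")
```

For a species being consumed, the rate expression would of course carry a negative sign; the sketch simply mirrors the form quoted in the abstract.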