Douglas L. Dorset
- Published in print:
- 2004
- Published Online:
- September 2007
- ISBN:
- 9780198529088
- eISBN:
- 9780191712838
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198529088.001.0001
- Subject:
- Physics, Crystallography: Physics
This book describes the solid state behaviour of organic materials based on the polymethylene chain, i.e., the functional molecular component of polyethylenes, soaps, detergents, edible fats, lipids, oils, greases, and waxes. Along with chain unsaturation and branching, polydispersity, i.e., the aggregation of several polymethylene chain lengths, is shown to control various physical properties, including the preservation of metastable phases (polymorphic as well as ‘rotator’ forms). Using linear chain waxes as model materials, this book explores how solid solutions are stabilized and what structures are possible. Strictly linear molecules are compared to those functionalized with ‘head-groups’. The onset of fractionation, followed by formation of eutectic phases, is discussed, again describing the structures of favoured molecular assemblies. The rationale for polydisperse aggregation derives from the early work of A. I. Kitaigorodskii, demonstrating how certain homeomorphic parameters such as relative molecular shape and volume, as well as favoured crystalline polymorphs, lead to stable solid solutions. Relevant to high-molecular-weight polymers, the influence of chain-folding is also discussed. A comprehensive review of known linear chain single crystal structures, including the alkanes, cycloalkanes, perfluoroalkanes, fatty alcohols, fatty acids, fatty acid esters, and cholesteryl esters, is presented to show how molecular shape, including chain branching, influences layer packing and co-solubility. Finally, a critique of previously suggested models for petroleum and natural wax assemblies is given, based on current crystallographic and spectroscopic information. This includes single crystal structures based on electron diffraction data. Although constrained to single chain molecules in the examples discussed, the behaviour cited can be generalized to fats and lipids containing multiple chains.
Thomas S. Bianchi and Elizabeth A. Canuel
- Published in print:
- 2011
- Published Online:
- October 2017
- ISBN:
- 9780691134147
- eISBN:
- 9781400839100
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691134147.003.0003
- Subject:
- Biology, Ecology
This chapter discusses the basic principles surrounding the application of stable isotopes in natural ecosystems, which are based on variations in the relative abundance of lighter isotopes arising from chemical rather than nuclear processes. Due to the faster reaction kinetics of the lighter isotope of an element, reaction products in nature can be enriched in the lighter isotope. These fractionation processes can be complex, but have proven useful in geothermometry and paleoclimatology, as well as in determining sources of organic matter in ecological studies. The most common stable isotopes used in oceanic and estuarine studies are 18O, 2H, 13C, 15N, and 34S. The preference for using such isotopes is related to their low atomic mass, significant mass differences between isotopes, covalent character in bonding, multiple oxidation states, and sufficient abundance of the rare isotope. Living plants and animals in the biosphere contain a constant level of 14C, but when they die there is no further exchange with the atmosphere and the activity of 14C decreases with a half-life of 5730 ± 40 yr; this provides the basis for establishing the age of archeological objects and fossil remains.
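The radiocarbon relationship summarized above (constant 14C activity during life, exponential decay after death with a half-life of 5730 yr) can be turned directly into an age estimate. The following is a minimal illustrative sketch; the function and variable names are invented here, not taken from the chapter:

```python
import math

T_HALF = 5730.0  # 14C half-life in years (5730 ± 40 yr)

def radiocarbon_age(fraction_remaining):
    """Age in years from the fraction of the original 14C activity
    still present: t = -(T_half / ln 2) * ln(A / A0)."""
    return -(T_HALF / math.log(2)) * math.log(fraction_remaining)

# A sample retaining half of its original 14C activity is one half-life old.
print(round(radiocarbon_age(0.5)))   # 5730
print(round(radiocarbon_age(0.25)))  # 11460
```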
Holger Hintelmann
- Published in print:
- 2012
- Published Online:
- September 2012
- ISBN:
- 9780520271630
- eISBN:
- 9780520951396
- Item type:
- chapter
- Publisher:
- University of California Press
- DOI:
- 10.1525/california/9780520271630.003.0004
- Subject:
- Biology, Ecology
The determination of natural variations in mercury isotope ratios is a rapidly emerging area of research, opening new avenues for studying the fate of mercury in the environment. This chapter presents the basics of physical and chemical isotope fractionation mechanisms, including both mass-dependent and mass-independent mercury isotope fractionation. An overview of analytical techniques used for mercury isotope ratio measurements is followed by a review of mercury isotope fractionation in natural samples. The use of mercury isotope signatures as tracers for sources and processes is discussed in the context of source apportionment for mercury emissions and bioaccumulation in aquatic ecosystems, with particular emphasis on the role that mercury isotopes may play in understanding the global mercury cycle.
Alan Baddeley
- Published in print:
- 2002
- Published Online:
- May 2009
- ISBN:
- 9780195134971
- eISBN:
- 9780199864157
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195134971.003.0016
- Subject:
- Neuroscience, Behavioral Neuroscience, Molecular and Cellular Systems
This chapter shows that the frontal lobes play an important role in integrating information from many other areas of the brain, and are crucially involved in its manipulation for purposes such as learning, comprehension, and reasoning. Given that these are precisely the roles attributed to working memory, it seems likely that the functional and anatomical approaches will continue to develop synergistically, as the complex functions assigned to working memory are tackled using an increasingly sophisticated armory of new psychological and neurobiological techniques.
Peter Hoskin and Wendy Makin
- Published in print:
- 2003
- Published Online:
- November 2011
- ISBN:
- 9780192628114
- eISBN:
- 9780191730115
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192628114.003.0005
- Subject:
- Palliative Care, Patient Care and End-of-Life Decision Making, Pain Management and Palliative Pharmacology
This chapter provides an overview of radiotherapy, the treatment of disease using ionizing radiation. Normally, radiation is in the form of X-rays or gamma rays which, when directed through a cell, result in ionization and the destruction of DNA. Methods of delivering radiation include external X-ray beams and the use of radioisotopes. Radiation is potentially dangerous, hence radiotherapy is usually administered within a radiotherapy department. Radiotherapy plays a significant role in palliative care: it aids in the management of local symptoms such as pain, haemorrhage, and obstruction. Topics discussed in the chapter include the different kinds of radiation, radioisotope therapy, the biological effects of radiation, and fractionation. The chapter also discusses the practical aspects of delivering radiotherapy and its efficacy in palliative care.
Matthieu Roy-Barman and Catherine Jeandel
- Published in print:
- 2016
- Published Online:
- December 2016
- ISBN:
- 9780198787495
- eISBN:
- 9780191829604
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198787495.003.0003
- Subject:
- Physics, Geophysics, Atmospheric and Environmental Physics
Stable isotopes provide traceability throughout the ocean. The isotopes of a given chemical element have the same electronic structure and the same chemical behavior but slightly different weights. Therefore, slight variations of their relative abundance occur in nature because the atom diffusion speed and the atom bond strength in molecules are mass-dependent. This leads to shifts of the isotopic ratios between reactants and products during physical, chemical and biological processes. These isotopic fractionations are typically of the order of a few ppm. Light elements such as H, O, C, N, S and Si are most prone to these effects. Their stable isotopes provide signatures used to label the provenance of water and organic matter, to determine the extent of reactions or as isotopic thermometers. Recently developed “non-traditional” isotopes of transition metals, mass-independent fractionations and clumped isotopes are also presented. Fractionation and mixing equations are established and applied to ocean processes.
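Mixing equations of the kind mentioned at the end of the abstract take a simple mass-balance form for two endmembers. The sketch below is an illustrative example only; the function name, arguments, and example values are hypothetical and not taken from the chapter:

```python
def mix_delta(f1, c1, d1, c2, d2):
    """Isotopic composition (delta notation, per mil) of a mixture of two
    endmembers: f1 is the mass fraction of endmember 1 (f2 = 1 - f1),
    c1/c2 the element concentrations, d1/d2 the delta values.
    Concentration-weighted isotopic mass balance."""
    f2 = 1.0 - f1
    return (f1 * c1 * d1 + f2 * c2 * d2) / (f1 * c1 + f2 * c2)

# 50:50 mix of a freshwater endmember (delta = -10) and a marine
# endmember (delta = 0) with equal concentrations: the mixture plots midway.
print(mix_delta(0.5, 1.0, -10.0, 1.0, 0.0))  # -5.0
```

When the two endmembers differ in concentration, the mixture's delta is pulled toward the more concentrated endmember, which is what makes mixing curves on delta-versus-1/concentration plots diagnostic.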
Mark A. Nanny, Roger A. Minear, and Jerry A. Leenheer (eds)
- Published in print:
- 1997
- Published Online:
- November 2020
- ISBN:
- 9780195097511
- eISBN:
- 9780197560853
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195097511.003.0025
- Subject:
- Chemistry, Environmental Chemistry
This chapter is the result of a panel discussion held at the end of the symposium “NMR Spectroscopy in Environmental Science and Technology” that was presented at the ACS National Meeting in Denver, Colorado, March 28–April 2, 1993. The intention of the panel discussion was to examine and make recommendations for the future of environmental NMR research. This chapter is a general synopsis of the answers and comments from the panelists and members of the audience to three posed questions. The six panelists were:
- Dr. Roger A. Minear (Moderator), University of Illinois, Urbana, IL
- Dr. H.-D. Lüdemann, Institut für Biophysik & Physikalische Biochemie, Regensburg, Germany
- Dr. Robert Wershaw, United States Geological Survey, Denver, CO
- Dr. Jerry A. Leenheer, United States Geological Survey, Denver, CO
- Dr. Gary Maciel, Colorado State University, Fort Collins, CO
- Dr. Leo Condron, Lincoln University, Canterbury, New Zealand
It was generally agreed that the area in which environmental NMR research will be the most influential is the examination of chemical and physical interactions between contaminants and the environmental matrix, especially for heterogeneous and complex matrices. This is because NMR can be used as an in-situ and non-invasive probe. One advantage of NMR for environmental studies is that it can specifically follow the chemistry occurring in complex environments and matrices. In addition, the wide range of NMR-accessible nuclei creates significant potential for research in this area. A specific area where NMR could be useful is the examination of chemicals and their transformation in soils and sediments, both biotic and abiotic, without having to use extraction methods. This could provide information regarding precursors, reaction products, and changes occurring in soils, without jeopardizing sample integrity by extraction methods.
Tracking reactions and reaction by-products in such matrices can be carried one step further by labeling compounds with NMR-sensitive nuclei and following the concurrent reactions. It will also be useful to use NMR in this fashion to examine the influence of the biota upon the reaction and the reaction products, which will in turn advance studies examining bioavailability and bioremediation processes.
Karl S. Matlin
- Published in print:
- 2018
- Published Online:
- September 2018
- ISBN:
- 9780226520483
- eISBN:
- 9780226520650
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226520650.003.0011
- Subject:
- Biology, Biochemistry / Molecular Biology
In his introduction to Edmund Cowdry’s General Cytology, published in 1924, E. B. Wilson celebrated the broadening of cytology into a true “cellular biology” through the combination of traditional morphology and physiology with biophysics and biochemistry. The technologies that made the development of modern cell biology possible, electron microscopy and cell fractionation, were not, however, effectively used until the 1940s and 1950s in the work of Albert Claude, Keith Porter, and George Palade. Their most important contribution was not just the successful application of these techniques, but the integration of these methods into a kind of epistemic strategy dependent on using microscopic representations of cellular form to constrain the scope of mechanistic hypotheses and to provide a biological context for cycles of decomposition and analysis. Günter Blobel used the same strategy to extend their work to the molecular level, yielding a detailed mechanism of protein secretion. Analysis of current research into cell polarization, which uses fluorescence-cell imaging and computational modeling instead of electron microscopy and cell fractionation, suggests that the same epistemic strategy used by earlier cell biologists remains a productive approach.
John A. Ripmeester and L. S. Kotlyar
- Published in print:
- 1997
- Published Online:
- November 2020
- ISBN:
- 9780195097511
- eISBN:
- 9780197560853
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195097511.003.0017
- Subject:
- Chemistry, Environmental Chemistry
The two oil sands plants operated by Syncrude Canada Ltd. and Suncor Canada Ltd. near Fort McMurray, Alberta, use a hot water process for the separation of bitumen from oil sands. In brief, hot water and oil sands, with caustic soda as dispersing agent, are mixed thoroughly, and bitumen is floated to the top of the resulting slurry by streams of air. After secondary bitumen recovery, the remaining tailings are carried to ponds, where the coarse sands are used to form dikes, the fine tails are left to settle, and freed water is recycled. Typical production figures for the Syncrude plant are 390 000 barrels of diluted bitumen per day produced from 325 000 tonnes of oil sand. One complicating factor is that the fine tails dewater only to a solids content of ~30%, requiring ponds of ever increasing size (the Syncrude pond is 22 km²) to store the resulting sludge. As the ponded material is toxic to wildlife, it poses a considerable local environmental hazard. In addition, there is the potential hazard of contamination of surface water and a major river system as a result of seepage or potential dike failure. The work reported here was carried out as part of a major project initiated to address the problem of the existing tailings ponds, and also to modify the currently used separation process so as not to produce sludge. Starting with the recognition that the very stable fine tails, consisting of water, silt, clay and residual bitumen, have gel-like properties, we employed the strategy of fractionating the fine tails with the hope of identifying a specific fraction which might show gel-forming propensity. This was done by breaking the gel, and collecting fractions according to sedimentation behavior during centrifugation. Fractions consisting of the coarser solids (>0.5 μm) settled rapidly, whereas fractions with smaller particle sizes (termed ultrafines) gave suspensions which set into stiff, thixotropic gels on standing.
Gel formation and the sol-gel transition in colloidal clay suspensions are classical problems which have received much attention over the years; however, much remains to be learned. NMR techniques have shown considerable promise in understanding clay-water interactions at a microscopic level.
Jerry A. Leenheer
- Published in print:
- 1997
- Published Online:
- November 2020
- ISBN:
- 9780195097511
- eISBN:
- 9780197560853
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195097511.003.0019
- Subject:
- Chemistry, Environmental Chemistry
Natural organic matter (NOM) is a major intermediate in the global carbon, nitrogen, sulfur, and phosphorus cycles. NOM is also the environmental matrix that frequently controls binding, transport, degradation, and toxicity of many organic and inorganic contaminants. Despite its importance, NOM is poorly understood at the structural chemistry level because of its molecular complexity and heterogeneity. Nuclear magnetic resonance (NMR) spectroscopy is one of the most useful spectrometric methods for investigating NOM structure because qualitative and quantitative organic structure information for certain organic elements can be generated by NMR for NOM in both the solution and solid states under nondegradative conditions. However, NMR spectroscopy is not as sensitive as infrared or ultraviolet-visible spectroscopy; it is not at present applicable to organic oxygen and sulfur, and quantification of NMR spectra is difficult under certain conditions. The purpose of this overview is to present briefly the “state of the art” of NMR characterization of NOM, and to suggest future directions for NMR research into NOM. More comprehensive texts concerning the practice of NMR spectroscopy and its application to NOM in various environments have been produced by Wilson and by Wershaw and Mikita. Carbon, hydrogen, and oxygen are the major elements of NOM; together they comprise about 90% of the mass. The minor elements that constitute the remainder are nitrogen, sulfur, phosphorus, and trace amounts of the various halogen elements. With the exception of coal, in which carbon is the most abundant element, the order of relative abundance in NOM on an atomic basis is H > C > O > N > S > P = halogens. The optimum NMR-active nuclei for these elements are 1H, 13C, 17O, 15N, 33S, 31P, and 19F. The natural abundances and receptivities of these nuclei relative to 1H are given in Table 12.1.
Quadrupolar effects for 17O, 33S, and halogen elements other than 19F lead to line broadening that greatly limits resolution in NMR studies of these elements in NOM.
Natural organic matter (NOM) is a major intermediate in the global carbon, nitrogen, sulfur, and phosphorus cycles. NOM is also the environmental matrix that frequently controls binding, transport, degradation, and toxicity of many organic and inorganic contaminants. Despite its importance, NOM is poorly understood at the structural chemistry level because of its molecular complexity and heterogeniety. Nuclear magnetic resonance (NMR) spectroscopy is one of the most useful spectrometric methods used to investigate NOM structure because qualitative and quantitative organic structure information for certain organic elements can be generated by NMR for NOM in both the solution and solid states under nondegradative conditions. However, NMR spectroscopy is not as sensitive as infrared or ultraviolet-visible spectroscopy; it is not at present applicable to organic oxygen and sulfur, and quantification of NMR spectra is difficult under certain conditions. The purpose of this overview is to present briefly the “state of the art” of NMR characterization of NOM, and to suggest future directions for NMR research into NOM. More comprehensive texts concerning the practice of NMR spectroscopy and its application to NOM in various environments have been produced by Wilson and by Wershaw and Mikita. Carbon, hydrogen, and oxygen are the major elements of NOM; together they comprise about 90% of the mass. The minor elements that constitute the remainder are nitrogen, sulfur, phosphorus, and trace amounts of the various halogen elements. With the exception of coal, in which carbon is the most abundant element, the order of relative abundance in NOM on an atomic basis is H > C > O > N > S > P = halogens. The optimum NMR-active nuclei for these elements are 1H, 13C, 17O, 15N, 33S, 31P, and 19F. The natural abundances and receptivities of these nuclei relative to 1H are given in Table 12.1. 
Quadrupolar effects for 17O, 33S, and halogen elements other than 19F lead to line broadening that greatly limits resolution in NMR studies of these elements in NOM.
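The receptivity figures referred to in Table 12.1 follow from a standard proportionality: receptivity scales with the cube of the gyromagnetic ratio, the natural abundance, and the spin factor I(I + 1). A minimal sketch of that relation, with the function name and the approximate 13C input values chosen here for illustration:

```python
# Relative receptivity of nucleus X vs. 1H, assuming the standard
# proportionality D ∝ (gamma_X / gamma_H)**3 * abundance_X * I_X*(I_X + 1).
def relative_receptivity(gamma_ratio, abundance, spin, spin_ref=0.5):
    """gamma_ratio: gyromagnetic ratio of X divided by that of 1H;
    abundance: natural abundance of X as a fraction; spin: nuclear spin I."""
    return gamma_ratio**3 * abundance * (spin * (spin + 1)) / (spin_ref * (spin_ref + 1))

# 13C: gyromagnetic ratio roughly 0.2515 of 1H, ~1.07% natural abundance, I = 1/2
d_13c = relative_receptivity(0.2515, 0.0107, 0.5)
print(f"13C receptivity relative to 1H: {d_13c:.2e}")  # on the order of 1.7e-4
```

The low 13C value (about four orders of magnitude below 1H) is why 13C spectra of NOM demand long acquisition times or signal-enhancement techniques.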
Thomas Tacke, Stefan Wieland, and Peter Panster
- Published in print:
- 2004
- Published Online:
- November 2020
- ISBN:
- 9780195154832
- eISBN:
- 9780197561935
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195154832.003.0020
- Subject:
- Chemistry, Environmental Chemistry
As described in other chapters of this book and elsewhere (Jessop, 1999), a wide range of catalytic reactions can be carried out in supercritical fluids, such as Fischer–Tropsch synthesis, isomerization, hydroformylation, CO2 hydrogenation, synthesis of fine chemicals, hydrogenation of fats and oils, biocatalysis, and polymerization. In this chapter, we describe experiments aimed at assessing the potential of supercritical carbon dioxide (and carbon dioxide/propane mixtures) for the hydrogenation of vegetable oils and free fatty acids. Supercritical fluids, particularly carbon dioxide, offer a number of potential advantages for chemical processing, including (1) continuously tunable density, (2) high solubilities for many solids and liquids, (3) complete miscibility with gases (e.g., hydrogen, oxygen), (4) excellent heat and mass transfer, and (5) ease of separation of product and solvent. The low viscosity and excellent thermal and mass transport properties of supercritical fluids are particularly attractive for continuous catalytic reactions (Harrod and Moller, 1996; Hutchenson and Foster, 1995; Kiran and Levelt Sengers, 1994; Perrut and Brunner, 1994; Tacke et al., 1998). There are a number of reports on hydrogenation reactions in supercritical fluids using homogeneous and heterogeneous catalysts (Baiker, 1999; Harrod and Moller, 1996; Hitzler and Poliakoff, 1997; Hitzler et al., 1998; Jessop et al., 1999; Meehan et al., 2000; van den Hark et al., 1999). We have investigated the selective hydrogenation of vegetable oils and the complete hydrogenation of free fatty acids for oleochemical applications, since there are some disadvantages associated with the current industrial process and the currently used supported nickel catalyst. The hydrogenation of fats and oils is a very old technology (Veldsink et al., 1997).
It was invented in 1901 by Normann in order to increase the melting point and the oxidation stability of fats and oils through selective hydrogenation. Since the melting point increases during hydrogenation, the reaction is also referred to as hardening. The melting behavior of the hydrogenated product is determined by the reaction conditions (temperature, hydrogen pressure, agitation, hydrogen uptake). Vegetable oils (edible oils) are hydrogenated selectively for application in the food industry, whereas free fatty acids are completely hydrogenated for oleochemical applications (e.g., detergents).
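Whether CO2 is supercritical depends only on exceeding both critical coordinates. A minimal sketch of that check, using the well-established critical point of CO2 (about 304.13 K and 7.377 MPa); the function name and the example operating condition are illustrative, not taken from the chapter:

```python
# CO2 critical point (well-established values): Tc ≈ 304.13 K, Pc ≈ 7.377 MPa.
# A fluid is supercritical when both T and P exceed their critical values.
def is_supercritical(T_K, P_MPa, Tc=304.13, Pc=7.377):
    return T_K > Tc and P_MPa > Pc

# An illustrative hydrogenation condition: 50 °C (323.15 K) at 15 MPa
print(is_supercritical(323.15, 15.0))   # True: above both Tc and Pc
print(is_supercritical(298.15, 6.0))    # False: room temperature, subcritical
```

The modest critical temperature is part of CO2's appeal here: supercritical conditions are reachable without the thermal stress that would degrade edible oils.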
Lara Vanessa Jefferson, Marcello Pennacchio, and Kayri Havens
- Published in print:
- 2014
- Published Online:
- May 2015
- ISBN:
- 9780199755936
- eISBN:
- 9780190267834
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:osobl/9780199755936.003.0002
- Subject:
- Biology, Ecology
This chapter examines the scientific experiments aimed at identifying the germination stimulants and compounds present in certain plant forms. It surveys the effects of different methods, namely wood charring and bioassay-driven fractionation, that led to the observation that germination-affecting chemicals may be water soluble, thermostable, and active at low concentrations.
Thomas S. Bianchi
- Published in print:
- 2006
- Published Online:
- November 2020
- ISBN:
- 9780195160826
- eISBN:
- 9780197562048
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195160826.003.0015
- Subject:
- Earth Sciences and Geography, Geochemistry
There is a broad spectrum (approximately 1700) of radioactive isotopes (or radionuclides) that are useful tools for measuring rates of processes on Earth. The term nuclide is commonly used interchangeably with atom. The major sources of radionuclides are: (1) primordial (e.g., 238U, 235U, and 232Th-series radionuclides); (2) anthropogenic or transient (e.g., 137Cs, 90Sr, 239Pu); and (3) cosmogenic (e.g., 7Be, 14C, 32P). These isotopes can be further divided into two general groups, the particle-reactive and non-particle-reactive radionuclides. Transport pathways of non-particle-reactive radionuclides in aquatic systems are simpler, controlled primarily by water masses. Conversely, particle-reactive radionuclides adsorb onto particles, making their fate inextricably linked with the particle. Consequently, these particle-bound radionuclides are very useful in determining sedimentation and mixing rates, as well as the overall fate of important elements in estuarine and coastal biogeochemical cycles. Radioactivity is defined as the spontaneous adjustment of nuclei of unstable nuclides to a more stable state. Radiation (e.g., alpha, beta, and gamma rays) is released in different forms as a direct result of changes in the nuclei of these nuclides. An atom is characterized by its atomic number, the number of protons (Z) in the nucleus. The mass number (A) is the number of neutrons (N) plus protons in a nucleus (A = Z + N). Isotopes are different forms of an element that have the same Z value but a different N. Instability in nuclei is generally caused by having an inappropriate number of neutrons relative to the number of protons.
Some of the pathways by which a nucleus can spontaneously transform are as follows: (1) alpha decay, or loss of an alpha particle (nucleus of a 4He atom) from the nucleus, which results in a decrease in the atomic number by two (two protons) and the mass number by four units (two protons and two neutrons); (2) beta (negatron) decay, which occurs when a neutron changes to a proton and a negatron (negatively charged electron) is emitted, thereby increasing the atomic number by one unit; (3) emission of a positron (positively charged electron) which results in a proton becoming a neutron and a decrease in the atomic number by one unit; and (4) electron capture, where a proton is changed to a neutron after combining with the captured extranuclear electron (from the K shell)—the atomic number is decreased by one unit.
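The four decay pathways above amount to simple bookkeeping on Z and A (with N = A − Z following automatically). A minimal sketch of that bookkeeping; the helper function is hypothetical, not from the chapter:

```python
# Sketch of the four decay pathways described above, tracking only the
# atomic number Z and mass number A (N = A - Z follows automatically).
def decay(Z, A, mode):
    if mode == "alpha":                 # emit a 4He nucleus: Z - 2, A - 4
        return Z - 2, A - 4
    elif mode == "beta":                # n -> p, negatron emitted: Z + 1, A unchanged
        return Z + 1, A
    elif mode == "positron":            # p -> n, positron emitted: Z - 1, A unchanged
        return Z - 1, A
    elif mode == "electron_capture":    # p + e- -> n: Z - 1, A unchanged
        return Z - 1, A
    raise ValueError(f"unknown decay mode: {mode}")

# 238U (Z = 92) alpha-decays to 234Th (Z = 90, A = 234)
print(decay(92, 238, "alpha"))  # (90, 234)
# 14C (Z = 6) beta-decays to 14N (Z = 7, A = 234 stays 14)
print(decay(6, 14, "beta"))     # (7, 14)
```

Note that positron emission and electron capture produce the same daughter nuclide; they differ only in the mechanism by which a proton becomes a neutron.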