Christopher Hammond
- Published in print:
- 2015
- Published Online:
- August 2015
- ISBN:
- 9780198738671
- eISBN:
- 9780191801938
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198738671.003.0013
- Subject:
- Physics, Crystallography: Physics, Condensed Matter Physics / Materials
This chapter starts with the Fourier series and Fourier transforms, and the representation of periodic functions. It then looks at Fourier analysis in crystallography. It details electron density, the derivation of relations between Fourier coefficients and structure factors, the X-ray resolution of a crystal structure, and the structural analysis of crystals and molecules (trial and error methods, the Patterson function, interpretation of Patterson maps, heavy atom and isomorphous replacement techniques, direct methods, and charge flipping). Finally, it provides an analysis of the Fraunhofer diffraction pattern from a grating and the Abbe theory of image formation.
Bas Edixhoven and Jean-Marc Couveignes (eds)
- Published in print:
- 2011
- Published Online:
- October 2017
- ISBN:
- 9780691142012
- eISBN:
- 9781400839001
- Item type:
- book
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691142012.001.0001
- Subject:
- Mathematics, Number Theory
Modular forms are tremendously important in various areas of mathematics, from number theory and algebraic geometry to combinatorics and lattices. Their Fourier coefficients, with Ramanujan's tau-function as a typical example, have deep arithmetic significance. Prior to this book, the fastest known algorithms for computing these Fourier coefficients took exponential time, except in some special cases. This book gives an algorithm for computing coefficients of modular forms of level one in polynomial time. For example, Ramanujan's tau of a prime number p can be computed in time bounded by a fixed power of the logarithm of p. Such fast computation of Fourier coefficients is itself based on the main result of the book: the computation, in polynomial time, of Galois representations over finite fields attached to modular forms by the Langlands program. Because these Galois representations typically have a nonsolvable image, this result is a major step forward from explicit class field theory, and it could be described as the start of the explicit Langlands program. The book begins with a concise and concrete introduction that makes it accessible to readers without an extensive background in arithmetic geometry, and it includes a chapter that describes actual computations.
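The gap between exponential and polynomial time here can be made concrete. The straightforward way to reach Ramanujan's tau is to expand the discriminant form Δ(q) = q·∏(1 − qⁿ)²⁴ as a power series; the sketch below (illustrative only, and emphatically not the book's algorithm) must compute on the order of p coefficients to obtain τ(p), which is exponential in the number of digits of p:

```python
def tau_coefficients(N):
    # Coefficients of Delta(q) = q * prod_{n>=1} (1 - q^n)^24, exact up to q^N.
    # Naive series expansion: cost grows with N itself, i.e. exponentially
    # in log(p) -- the bottleneck the book's polynomial-time algorithm removes.
    coeffs = [0] * (N + 1)
    coeffs[1] = 1  # the leading factor q
    for n in range(1, N + 1):
        for _ in range(24):
            # Multiply the truncated series by (1 - q^n), in place:
            # descending k keeps coeffs[k - n] at its old value.
            for k in range(N, n - 1, -1):
                coeffs[k] -= coeffs[k - n]
    return coeffs
```

For instance, `tau_coefficients(5)[5]` returns τ(5) = 4830, and the first few coefficients 1, −24, 252, −1472 match the classical expansion of Δ.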
David Blow
- Published in print:
- 2002
- Published Online:
- November 2020
- ISBN:
- 9780198510512
- eISBN:
- 9780191919244
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198510512.003.0016
- Subject:
- Chemistry, Crystallography: Chemistry
When everything has been done to make the phases as good as possible, the time has come to examine the image of the structure in the form of an electron-density map. The electron-density map is the Fourier transform of the structure factors (with their phases). If the resolution and phases are good enough, the electron-density map may be interpreted in terms of atomic positions. In practice, it may be necessary to alternate between study of the electron-density map and the procedures mentioned in Chapter 10, which may allow improvements to be made to it. Electron-density maps contain a great deal of information, which is not easy to grasp. Considerable technical effort has gone into methods of presenting the electron density to the observer in the clearest possible way. The Fourier transform is calculated as a set of electron-density values at every point of a three-dimensional grid labelled with fractional coordinates x, y, z. These coordinates each go from 0 to 1 in order to cover the whole unit cell. To present the electron density as a smoothly varying function, values have to be calculated at intervals that are much smaller than the nominal resolution of the map. Say, for example, there is a protein unit cell 50 Å on a side, at a routine resolution of 2 Å. This means that some of the waves included in the calculation of the electron density go through a complete wave cycle in 2 Å. As a rule of thumb, to represent this properly, the spacing of the points on the grid for calculation must be less than one-third of the resolution. In our example, this spacing might be 0.6 Å. To cover the whole of the 50 Å unit cell, about 80 values of x are needed; and the same number of values of y and z. The electron density therefore needs to be calculated on an array of 80×80×80 points, which is over half a million values. Although our world is three-dimensional, our retinas are two-dimensional, and we are good at looking at pictures and diagrams in two dimensions.
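The grid arithmetic in the worked example is easy to check. A minimal sketch (the function name is illustrative) applying the one-third-of-resolution rule of thumb:

```python
import math

def grid_points(cell_edge, spacing):
    # Number of grid divisions along one axis of the unit cell, and the
    # total number of points at which the electron density is evaluated.
    n = math.ceil(cell_edge / spacing)
    return n, n ** 3

# A 50 Angstrom cell sampled at 0.6 Angstrom intervals
# (under one-third of the 2 Angstrom resolution):
n, total = grid_points(50.0, 0.6)  # about 80 values per axis, over half a million points
```

With these figures the exact count is 84 divisions per axis, 84³ = 592,704 points, in line with the "about 80" and "over half a million" of the text.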
John Marra
- Published in print:
- 1994
- Published Online:
- November 2020
- ISBN:
- 9780195068436
- eISBN:
- 9780197560235
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195068436.003.0014
- Subject:
- Earth Sciences and Geography, Oceanography and Hydrology
There are primarily three ways in which the ocean can be sampled. First, depth profiles of water properties can be collected. The sampling resolution for depth profiles can be very high (<1 m), and time resolution can be good under some circumstances. But since relatively few stations can be completed, geographic coverage is generally poor. Variability in space can be optimized if data can be collected while the ship is underway. In this second sampling mode, water is pumped aboard for sampling, or else sensing instruments are towed behind the ship. This method vastly improves the sampling of horizontal variability; however, depth resolution is compromised, and measurements cannot be ordered in time. The third method is to place instruments in the ocean, either tethered to moorings or on drifters. While depth resolution is only moderately good (typically, tens of meters), and spatial coverage nonexistent, this method has the advantage, unobtainable with the other modes, of high resolution in time. While moorings and drifters have been in the repertoire of physical oceanographic sampling for some time, it is only recently that they have been used to sample biological and optical properties of the sea. In this chapter, I discuss the capabilities of this kind of sampling from the point of view of a recent program, the BIOWATT Mooring Experiment in 1987. One of the express purposes of this experiment was to expand the range of variables that can be measured from moored instrumentation. Here, I will show how the time resolution made possible with moored sensors allows the measurement of parameters of phytoplankton production on diurnal time scales, as well as allowing a look at seasonal variability. The BIOWATT Mooring Experiment was a collaboration among a large number of people, all of whom contributed to its success.
It was the first deployment of a mooring with a variety of sensors and whose goal was to record the optical, biological, and physical variability over a seasonal cycle. The idea for this type of experiment for BIOWATT originated with Tom Dickey and his (then) graduate student, Dave Siegel.
Ilya Polyak
- Published in print:
- 1996
- Published Online:
- November 2020
- ISBN:
- 9780195099997
- eISBN:
- 9780197560938
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195099997.003.0003
- Subject:
- Computer Science, Mathematical Theory of Computation
In this chapter, several systems of digital filters are presented. The first system consists of regressive smoothing filters, which are a direct consequence of the least squares polynomial approximation to equally spaced observations. Descriptions of some particular univariate cases of these filters have been published and applied (see, for example, Anderson, 1971; Berezin and Zhidkov, 1965; Kendall and Stuart, 1963; Lanczos, 1956), but the study presented in this chapter is more general, more elaborate in detail, and more fully illustrated. It gives exhaustive information about classical smoothing, differentiating, one- and two-dimensional filtering schemes with their representation in the spaces of time, lags, and frequencies. The results are presented in the form of algorithms, which can be directly used for software development as well as for theoretical analysis of their accuracy in the design of an experiment. The second system consists of harmonic filters, which are a direct consequence of a Fourier approximation of the observations. These filters are widely used in the spectral and correlation analysis of time series. The foundation for developing regressive filters is the least squares polynomial approximation (of equally spaced observations), a principal notion that will be considered briefly.
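The regressive smoothing filters described here can be sketched concretely. A minimal illustration (the function name is ours) derives the filter weights by least-squares fitting a polynomial to 2m+1 equally spaced samples and evaluating the fit at the central point, the construction underlying the classical (Savitzky–Golay-type) smoothing schemes:

```python
import numpy as np

def smoothing_weights(half_width, degree):
    # Least-squares polynomial smoothing filter for equally spaced data:
    # project the window of 2*half_width + 1 samples onto polynomials of
    # the given degree and read off the fitted value at the central point.
    x = np.arange(-half_width, half_width + 1)
    A = np.vander(x, degree + 1, increasing=True)  # design matrix [1, x, x^2, ...]
    H = A @ np.linalg.pinv(A)                      # least-squares projection (hat matrix)
    return H[half_width]                           # central row = filter weights
```

For a quadratic fit over five points this reproduces the well-known weights (−3, 12, 17, 12, −3)/35; the weights always sum to 1, so a constant signal passes through unchanged.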