Jacqueline A. Stedall
- Published in print: 2003
- Published Online: September 2007
- ISBN: 9780198524953
- eISBN: 9780191711886
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198524953.003.0006
- Subject: Mathematics, History of Mathematics
At the beginning of his mathematical career, John Wallis embarked on the work that was to be published in 1656 as the Arithmetica infinitorum. The book was his masterpiece and over the following ten or twenty years was to have a profound influence on the course of English mathematics. It contains the result for which Wallis is now best remembered, his infinite fraction for 4/π, but to Wallis and his contemporaries this was not the book's only, or most remarkable, feature: its real importance lay in the new methods Wallis devised to solve age-old problems of quadrature and cubature. Wallis was well aware of the importance of his work and later devoted the final quarter of A treatise of algebra to describing the contents and implications of the Arithmetica infinitorum, as developed in the book itself and by Newton and others in the years following its publication. This chapter revisits the Arithmetica infinitorum and reviews its significance.
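For orientation only (this standard modern statement is added here and does not reproduce Wallis's own notation), the result referred to above is usually written as the infinite product

\[
\frac{4}{\pi} \;=\; \frac{3\cdot 3\cdot 5\cdot 5\cdot 7\cdot 7\cdots}{2\cdot 4\cdot 4\cdot 6\cdot 6\cdot 8\cdots}
\;=\; \prod_{n=1}^{\infty}\frac{(2n+1)^{2}}{2n\,(2n+2)}.
\]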
Craig Burnside
- Published in print: 2001
- Published Online: November 2003
- ISBN: 9780199248278
- eISBN: 9780191596605
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/0199248273.003.0005
- Subject: Economics and Finance, Macro- and Monetary Economics
A number of numerical methods are discussed for solving dynamic stochastic general equilibrium models that fall within the common category of discrete state‐space methods. These methods can be applied in situations where the state space of the model in question is given by a finite set of discrete points; in these cases the methods provide an ‘exact’ solution to the model in question. However, they are frequently applied in situations where the model's state space is continuous, in which case the discrete state space can be viewed as an approximation to the continuous one. Discrete state‐space methods are discussed in the context of two well‐known examples: a simple one‐asset version of Lucas's (1978) consumption‐based asset pricing model and the one‐sector neoclassical growth model. The discussion does not aim to exhaust the list of possible discrete state‐space methods, as they are very numerous; rather, it describes several examples that illustrate the basic principles involved. The main sections of the chapter describe the basic principles of numerical quadrature underlying most discrete state‐space methods, show how they can be applied in a very straightforward way to problems in which the state space consists entirely of exogenous state variables, and describe methods that can be used when there are endogenous state variables. The last section notes the several MATLAB files associated with the chapter.
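As a small illustration of the quadrature idea mentioned above (a sketch only, not taken from the chapter or its MATLAB files; the Python routine, the lognormal endowment, and the parameter values are assumptions for this example), Gauss–Hermite nodes and weights can approximate an expectation over a normally distributed exogenous state:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_hermite_expectation(f, mu, sigma, n=7):
    """Approximate E[f(x)] for x ~ N(mu, sigma^2) with an n-node Gauss-Hermite rule."""
    z, w = hermgauss(n)                        # nodes/weights for the weight exp(-z^2)
    x = mu + np.sqrt(2.0) * sigma * z          # change of variables to N(mu, sigma^2)
    return np.sum(w * f(x)) / np.sqrt(np.pi)

# Toy ingredient of a Lucas-style pricing calculation: expected marginal utility
# of a lognormal endowment (purely illustrative parameter values).
mu, sigma, gamma = 0.0, 0.1, 2.0
value = gauss_hermite_expectation(lambda x: np.exp(x) ** (-gamma), mu, sigma)
print(value)   # close to exp(-gamma*mu + 0.5*gamma**2*sigma**2) = exp(0.02)
```

The same weighted sum over a finite set of nodes is, in essence, what a discrete state‐space approximation of a continuous shock amounts to.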
J. C. Garrison and R. Y. Chiao
- Published in print: 2008
- Published Online: September 2008
- ISBN: 9780198508861
- eISBN: 9780191708640
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198508861.003.0018
- Subject: Physics, Atomic, Laser, and Optical Physics
This chapter begins with a review of classical tomography, which reconstructs hidden structures inside an object by successive transmissions of an X-ray beam at different angles and lateral displacements. By interpreting the Wigner distribution as an analogue of the density distribution in a physical object, the mathematical methods of classical tomography — the Radon transform, the Fourier slice theorem, and the inverse Radon transform — are adapted to perform a high-resolution determination of a quantum state of light. In optical homodyne tomography, the classical transmission angle is replaced by the adjustable phase of a local oscillator, and the successive lateral displacements are replaced by a series of measurements of the quadrature operator defined by the local oscillator phase. Experimental data are presented showing various properties of the Wigner distribution for a coherent state.
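A minimal sketch of the classical reconstruction step reviewed here, assuming scikit-image is available (the phantom, angle grid, and default ramp filter are illustrative choices, not the chapter's own example):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Classical tomography in miniature: project a test object at many angles
# (Radon transform), then recover it by filtered back-projection (inverse Radon).
image = shepp_logan_phantom()                         # standard test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)  # projection angles (degrees)
sinogram = radon(image, theta=theta)                  # one projection per angle
reconstruction = iradon(sinogram, theta=theta)        # filtered back-projection

rms = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms:.4f}")
```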
Klaus Boehmer
- Published in print: 2010
- Published Online: January 2011
- ISBN: 9780199577040
- eISBN: 9780191595172
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199577040.001.0001
- Subject: Physics, Theoretical, Computational, and Statistical Physics
Nonlinear elliptic problems play an increasingly important role in mathematics, science and engineering, and create an exciting interplay. Other books discuss nonlinearity through only a few important examples. This is the first and only book to prove, in a systematic and unifying way, stability and convergence results, and to give methods for solving the nonlinear discrete equations via discrete Newton methods, for the different numerical methods for all these problems. The proofs use linearization, compact perturbation of the coercive principal parts, or monotone operator techniques, and approximation theory. This is exemplified for linear to fully nonlinear problems (highest derivatives occur nonlinearly) and for the most important space discretization methods: conforming and nonconforming finite element, discontinuous Galerkin, finite difference and wavelet methods. The proof of stability for nonconforming methods employs the anticrime operator as an essential tool. For all these methods, approximate evaluation of the discrete equations and eigenvalue problems are discussed. The numerical methods are based upon analytic results for this wide class of problems, guaranteeing existence, uniqueness and regularity of the exact solutions. In the next book, spectral and mesh‐free methods, and convergence for bifurcation and center manifolds, are proved for all these combinations. Specific long-open problems solved here include numerical methods for fully nonlinear elliptic problems, wavelet and mesh‐free methods for nonlinear problems, and more general nonlinear boundary conditions. Adaptivity is discussed for finite element and wavelet methods with totally different techniques.
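The phrase "discrete Newton methods for nonlinear discrete equations" can be illustrated with a toy finite-difference example (a sketch under assumed problem data, not code from the book):

```python
import numpy as np

# Hypothetical illustration: a discrete Newton iteration for the finite-difference
# discretization of -u'' + u**3 = f on (0, 1) with u(0) = u(1) = 0.
def newton_fd(f, n=100, tol=1e-10, max_iter=20):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)           # interior grid points
    u = np.zeros(n)                           # initial guess
    # Second-difference matrix (Dirichlet boundary conditions)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f(x)
    for _ in range(max_iter):
        F = A @ u + u**3 - rhs                # discrete nonlinear residual
        J = A + np.diag(3.0 * u**2)           # Jacobian of the discrete system
        du = np.linalg.solve(J, -F)           # Newton correction
        u += du
        if np.linalg.norm(du, np.inf) < tol:
            break
    return x, u

x, u = newton_fd(lambda x: np.ones_like(x))
print("max |u| =", np.abs(u).max())
```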
Walter Gautschi
- Published in print: 2004
- Published Online: November 2020
- ISBN: 9780198506720
- eISBN: 9780191916571
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198506720.003.0003
- Subject: Computer Science, Mathematical Theory of Computation
This introductory chapter presents a quick review of material on orthogonal polynomials that is particularly relevant to computation. Proofs of most results are included; for those requiring more extensive analytic treatments, references are made to the literature.
Klaus Böhmer
- Published in print: 2010
- Published Online: January 2011
- ISBN: 9780199577040
- eISBN: 9780191595172
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199577040.003.0005
- Subject: Physics, Theoretical, Computational, and Statistical Physics
Nonconforming FEMs avoid the strong restrictions of conforming FEMs. So discontinuous ansatz and test functions, approximate test integrals, and strong forms are admitted. This allows the proof of convergence for the full spectrum of linear to fully nonlinear equations and systems of orders 2 and 2m. General fully nonlinear problems only allow strong forms and enforce new techniques and C¹ FEs. Variational crimes for FEs violating regularity and boundary conditions are studied in ℝ² for linear and quasilinear problems. Essential tools are the anticrime transformations. The relations between the strong and weak form of the equations allow the usually technical proofs for consistency. Due to the dominant role of FEMs, numerical solutions for five classes of problems are only presented for FEMs. Most remain valid for the other methods as well: variational methods for eigenvalue problems, convergence theory for monotone operators (quasilinear problems), FEMs for fully nonlinear elliptic problems and for nonlinear boundary conditions, and quadrature approximate FEMs. We thus close several gaps in the literature.
Jon Barwise and John Etchemendy
- Published in print: 1996
- Published Online: November 2020
- ISBN: 9780195104271
- eISBN: 9780197560983
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195104271.003.0014
- Subject: Computer Science, Computer Architecture and Logic Design
A major concern to the founders of modern logic—Frege, Peirce, Russell, and Hilbert—was to give an account of the logical structure of valid reasoning. Taking valid reasoning in mathematics as paradigmatic, these pioneers led the way in developing the accounts of logic that we teach today and that underwrite the work in model theory, proof theory, and definability theory. The resulting notions of proof, model, formal system, soundness, and completeness are things that no one claiming familiarity with logic can fail to understand, and they have also played an enormous role in the revolution known as computer science. The success of this model of inference led to an explosion of results and applications. But it also led most logicians—and those computer scientists most influenced by the logic tradition—to neglect forms of reasoning that did not fit well within this model. We are thinking, of course, of reasoning that uses devices like diagrams, graphs, charts, frames, nets, maps, and pictures. The attitude of the traditional logician to these forms of representation is evident in the quotation from Neil Tennant in Chapter I, which expresses the standard view of the role of diagrams in geometrical proofs. One aim of our work, as explained there, is to demonstrate that this dogma is misguided. We believe that many of the problems people have putting their knowledge of logic to work, whether in machines or in their own lives, stem from the logocentricity that has pervaded its study for the past hundred years. Recently, some researchers outside the logic tradition have explored uses of diagrams in knowledge representation and automated reasoning, finding inspiration in the work of Euler, Venn, and especially C. S. Peirce. This volume is a testament to this resurgence of interest in nonlinguistic representations in reasoning. While we applaud this resurgence, the aim of this chapter is to strike a cautionary note or two. Enchanted by the potential of nonlinguistic representations, it is all too easy to overreact and so to repeat the errors of the past.
R. M. Goody and Y. L. Yung
- Published in print: 1989
- Published Online: November 2020
- ISBN: 9780195051346
- eISBN: 9780197560976
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195051346.003.0010
- Subject: Earth Sciences and Geography, Atmospheric Sciences
The source function for scattering, (2.32), is more complicated than a thermal source function on two accounts: it is not a function of local conditions alone, but involves conditions throughout the atmosphere, through the local radiation field, and the phase function, $P_{ij}(s, d)$, may be an extremely complex function of the directions, s and d, and the states of polarization, i and j. The general solution, (2.87), is still valid, but it is now an integral equation, involving the intensity both on the left-hand side and under the integral on the right-hand side. Successive approximations, starting with the first-order scattering term [third term on the right-hand side of (2.116)], are an obvious approach, and would lead to a solution, but there are more efficient and more accurate ways to solve the problem. Many methods are available because their fundamental theory has proved to be mathematically interesting and because there are important applications in neutron diffusion theory and astrophysics. These motivations are extraneous to atmospheric science, but the availability of the methodology has led to its adoption and extension to atmospheric problems. Solutions to scattering problems can be elaborate and mathematically elegant; they can also be numerically onerous, but, with access to modern computers, “exact” solutions are feasible, given the input parameters $\tau_\nu$, $a_\nu$ ($= s_\nu/e_\nu$), and $P_{ij}$. For monochromatic calculations with simple phase functions, numerical solutions present few difficulties. Nevertheless, the combination of unfamiliar formalism with inaccessible and undocumented algorithms can be daunting for those with only a peripheral interest in radiation calculations. It is, therefore, relevant to note that available data are imprecise and virtually never require the accuracy available from exact methods. Easily visualized two-stream approximations, combined with similarity relations to handle complex phase functions (see §§8.4.4 and 8.5.6), are often more than adequate, and some angular information can be added, if required, from the use of Eddington’s second approximation (§ 2.4.5).
Gleb L. Kotkin and Valeriy G. Serbo
- Published in print: 2020
- Published Online: October 2020
- ISBN: 9780198853787
- eISBN: 9780191888236
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198853787.003.0012
- Subject: Physics, Condensed Matter Physics / Materials
This chapter discusses the motion of particles which are scattered by and fall towards the center of the dipole, the motion of a particle in the Coulomb and the constant electric fields, and a particle inside a smooth elastic ellipsoid. The chapter also addresses the trajectory of a particle moving in the field of two Coulomb centres and a beam of electrons inside a short magnetic lens.
Ilya Polyak
- Published in print: 1996
- Published Online: November 2020
- ISBN: 9780195099997
- eISBN: 9780197560938
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195099997.003.0005
- Subject: Computer Science, Mathematical Theory of Computation
In this chapter, the nonparametric methods of estimating the spectra and correlation functions of stationary processes and homogeneous fields are considered. It is assumed that the principal concepts and definitions of the corresponding theory are known (see Anderson, 1971; Box and Jenkins, 1976; Jenkins and Watts, 1968; Kendall and Stuart, 1967; Loeve, 1960; Parzen, 1966; Yaglom, 1986); therefore, only questions connected with the construction of numerical algorithms are studied. The basic results, ranging from the univariate process to the multidimensional field, are presented in Tables 3.1 and 3.2. These formulas make it possible to compare and trace the formal character of the estimation procedures as the dimensionality increases. The schemes in these tables, as well as the formulas in the previous chapters, can be used for software development without any rearrangement. In part, this approach presents the application of the methods of Chapters 1 and 2 in evaluating random function characteristics. Of course, the final identification of the algorithm parameters (for example, the spectral window widths) can be made only through trial and error and by taking into account the character of the problem under study, that is, the physical properties of the processes and fields observed. The last section of this chapter presents results of the application of these methods to the analysis of some climatological fields. Here the basic results of the univariate spectral analysis are briefly discussed in order to develop algorithms for a multidimensional case by analogous reasoning. The complete description of the estimation procedures of spectral and correlation analysis for a univariate stationary process can be found, for example, in Jenkins and Watts (1968).
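A minimal sketch of the kind of nonparametric spectral estimate discussed here, assuming SciPy is available (the AR(1) test process, sample size, and segment length — which plays the role of the spectral-window width — are illustrative choices, not values from the chapter):

```python
import numpy as np
from scipy.signal import welch

# Nonparametric spectral estimate of a stationary process: average periodograms
# of overlapping segments (a smoothed periodogram).
rng = np.random.default_rng(0)
n, fs = 4096, 1.0
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):                # AR(1) process with a known spectrum
    x[t] = 0.7 * x[t - 1] + e[t]

freqs, psd = welch(x, fs=fs, nperseg=256)   # smoothed periodogram estimate
print(freqs[:3], psd[:3])
```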
Robert J Marks II
- Published in print: 2009
- Published Online: November 2020
- ISBN: 9780195335927
- eISBN: 9780197562567
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195335927.003.0015
- Subject: Computer Science, Mathematical Theory of Computation
The literature on the recovery of signals and images is vast (e.g., [23, 110, 112, 257, 391, 439, 791, 795, 933, 934, 937, 945, 956, 1104, 1324, 1494, 1495, 1551]). In this chapter, the specific problem of recovering lost signal intervals from the remaining known portion of the signal is considered. Signal recovery is also a topic of Chapter 11 on POCS. To this point, sampling has been discrete. Bandlimited signals, we will show, can also be recovered from continuous samples. Our definition of continuous sampling is best presented by illustration. A signal, f(t), is shown in Figure 10.1a, along with some possible continuous samples. Regaining f(t) from knowledge of $g_e(t) = f(t)\,\Pi(t/T)$ in Figure 10.1b is the extrapolation problem, which has applications in a number of fields. In optics, for example, extrapolation in the frequency domain is termed super resolution [2, 40, 367, 444, 500, 523, 641, 720, 864, 1016, 1099, 1117]. Reconstructing f(t) from its tails [i.e., $g_i(t) = f(t)\{1 - \Pi(t/T)\}$] is the interval interpolation problem. Prediction, shown in Figure 10.1d, is the problem of recovering a signal with knowledge of that signal only for negative time. Lastly, illustrated in Figure 10.1e, is periodic continuous sampling. Here, the signal is known in sections periodically spaced at intervals of T. The duty cycle is α. Reconstruction of f(t) from this data includes a number of important reconstruction problems as special cases. (a) By keeping αT constant, we can approach the extrapolation problem by letting T go to ∞. (b) Redefine the origin in Figure 10.1e to be centered in a zero interval. Under the same assumption as (a), we can similarly approach the interpolation problem. (c) Redefine the origin as in (b). Then the interpolation problem can be solved by discarding data to make it periodically sampled. (d) Keep T constant and let α → 0. The result is reconstructing f(t) from discrete samples as discussed in Chapter 5. Indeed, this model has been used to derive the sampling theorem [246]. Figures 10.1b–e all illustrate continuously sampled versions of f(t).
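The continuous-sampling patterns defined above can be written down directly; the following sketch builds them for a toy bandlimited signal (the sinc signal, T, and α are arbitrary illustrative choices, not the book's figures):

```python
import numpy as np

# Continuous-sampling masks for a toy bandlimited signal f(t) = sinc(t).
t = np.linspace(-10.0, 10.0, 2001)
f = np.sinc(t)                                    # bandlimited test signal

def rect(u):
    """Unit rectangle Π(u): one for |u| <= 1/2, zero otherwise."""
    return (np.abs(u) <= 0.5).astype(float)

T, alpha = 4.0, 0.5
g_e = f * rect(t / T)                    # known only on |t| <= T/2: extrapolation problem
g_i = f * (1.0 - rect(t / T))            # known only on the tails: interval interpolation
g_p = f * (((t / T) % 1.0) < alpha)      # periodic continuous sampling, duty cycle alpha

print(g_e.sum(), g_i.sum(), g_p.sum())   # crude check that the three masks differ
```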
Atsushi Shimojima
- Published in print: 1996
- Published Online: November 2020
- ISBN: 9780195104271
- eISBN: 9780197560983
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195104271.003.0006
- Subject: Computer Science, Computer Architecture and Logic Design
Diagrammatic reasoning is reasoning whose task is partially taken over by operations on diagrams. It consists of two kinds of activities: (i) physical operations, such as drawing and erasing lines, curves, figures, patterns, symbols, through which diagrams come to encode new information (or discard old information), and (ii) extractions of information from diagrams, such as interpreting Venn diagrams, statistical graphs, and geographical maps. Given particular tasks of reasoning, different types of diagrams show different degrees of suitedness. For example, Euler diagrams are superior in handling certain problems concerning inclusion and membership among classes and individuals, but they cannot be generally applied to such problems without special provisos. Diagrams make many proofs in geometry shorter and more intuitive, while they require certain precautions on the reasoner’s part to be used validly. Tables with particular configurations are better suited than other tables to reason about the train schedule of a station. Different types of geographical maps support different tasks of reasoning about a single mountain area. Mathematicians find from experience that coming up with the “right” sorts of diagrams is more than half-way to the solution of most complicated problems. Perhaps many of these phenomena can be explained with reference to aspect (ii) of diagrammatic reasoning, because some types of diagrams let a reasoner retrieve a kind of information that others do not, or let the reasoner retrieve it more “easily” than others. In fact, this is the approach that psychologists have traditionally taken. In this chapter, we take a different path and focus on aspect (i) of diagrammatic reasoning. Namely, we look closely at the process in which a reasoner applies operations to diagrams and in which diagrams come to encode new information through these operations. It seems that this process is different in some crucial points from one type of diagrams to another, and that these differences partially explain why some types of diagrams are better suited than others to particular tasks of reasoning.
R. M. Goody and Y. L. Yung
- Published in print: 1989
- Published Online: November 2020
- ISBN: 9780195051346
- eISBN: 9780197560976
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195051346.003.0004
- Subject: Earth Sciences and Geography, Atmospheric Sciences
In common with astrophysical usage the word intensity will denote specific intensity of radiation, i.e., the flux of energy in a given direction per second per unit frequency (or wavelength) range per unit solid angle per unit area perpendicular to the given direction. In Fig. 2.1 the point P is surrounded by a small element of area $d\pi_s$, perpendicular to the direction of the unit vector s. From each point on $d\pi_s$ a cone of solid angle $d\omega_s$ is drawn about the s vector. The bundle of rays, originating on $d\pi_s$ and contained within $d\omega_s$, transports in time $dt$ and in the frequency range $\nu$ to $\nu + d\nu$ the energy
\[
E_\nu = I_\nu(P, \mathbf{s})\, d\pi_s\, d\omega_s\, d\nu\, dt, \qquad (2.1)
\]
where $I_\nu(P, \mathbf{s})$ is the specific intensity at the point P in the s-direction. If $I_\nu$ is not a function of direction the intensity field is said to be isotropic; if $I_\nu$ is not a function of position the field is said to be homogeneous.
Jörg Liesen and Zdenek Strakos
- Published in print: 2012
- Published Online: January 2013
- ISBN: 9780199655410
- eISBN: 9780191744174
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199655410.003.0003
- Subject: Mathematics, Applied Mathematics, Algebra
The projected system matrix in Krylov subspace methods consists of moments of the original system matrix with respect to the initial residual(s). This hints that Krylov subspace methods can be viewed as matching moments model reduction. Through the simplified Stieltjes moment problem, orthogonal polynomials, continued fractions, and Jacobi matrices, we thus obtain the Gauss–Christoffel quadrature representation of the conjugate gradient method (CG). It is described how generalisations to the non-Hermitian case can easily be achieved using the Vorobyev method of moments. Finally, the described results and their historical roots are linked with the model reduction of large-scale dynamical systems. The chapter demonstrates the strong connection between Krylov subspace methods used in state-of-the-art numerical calculations and classical topics of analysis and approximation theory. Since moments represent very general objects, this suggests that Krylov subspace methods might have much wider applications beyond their immediate context of solving algebraic problems.
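A sketch of the CG–quadrature connection described above, under one common indexing convention for assembling the Jacobi matrix from the CG coefficients (the test matrix and iteration count are arbitrary; this is not code from the chapter):

```python
import numpy as np

# Plain conjugate gradient for Ax = b, recording its coefficients.  In the
# moment-matching view, the alpha/beta coefficients define a Jacobi (tridiagonal)
# matrix whose eigenvalues are the Gauss quadrature nodes of the measure
# determined by A and the initial residual (Golub-Meurant-style relations;
# the index convention below is an assumption of this sketch).
def cg_with_jacobi(A, b, n_iter):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    alphas, betas = [], []
    rr = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        beta = rr_new / rr
        alphas.append(alpha)
        betas.append(beta)
        p = r + beta * p
        rr = rr_new
    T = np.zeros((n_iter, n_iter))
    T[0, 0] = 1.0 / alphas[0]
    for j in range(1, n_iter):
        T[j, j] = 1.0 / alphas[j] + betas[j - 1] / alphas[j - 1]
        off = np.sqrt(betas[j - 1]) / alphas[j - 1]
        T[j - 1, j] = T[j, j - 1] = off
    return x, T

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)        # symmetric positive definite test matrix
b = rng.standard_normal(30)
x, T = cg_with_jacobi(A, b, 8)
print(np.linalg.eigvalsh(T))          # Gauss quadrature nodes (Ritz values)
```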
Niccolò Guicciardini
- Published in print: 2009
- Published Online: August 2013
- ISBN: 9780262013178
- eISBN: 9780262258869
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262013178.003.0012
- Subject: History, History of Science, Technology, and Medicine
This chapter presents proof that Newton developed advanced quadrature techniques, and also examines Corollary 3, Proposition 41, Book 1, and Corollary 2, Proposition 91, Book 1, of his Principia. In the two propositions, Newton reduced the problem to the calculation of the area bounded by a curve, and in the subsequent corollaries and scholia he applied the general solution to particular cases, each of which requires calculating a specific quadrature. The solutions to these corollaries and scholia rely heavily on such calculations, but Newton did not provide details of how to perform them. The chapter then explores some of the examples of the use of the new analysis in the Principia.
- Published in print: 2006
- Published Online: March 2013
- ISBN: 9780226409542
- eISBN: 9780226409566
- Item type: chapter
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226409566.003.0006
- Subject: History, History of Science, Technology, and Medicine
This chapter considers how Leibniz came to maintain that symbolic expression could constitute legitimate knowledge. It discusses how he produced his quadrature of the circle, the terms in which he defended his solution as legitimate knowledge, and some of the considerations about many larger philosophical and practical questions that he drew from it. The chapter suggests that Leibniz's great mathematical discovery of the quadrature and his defense of symbolic expression as legitimate mathematical knowledge became possible in part because of his practical attempts to create new symbolic and optical technologies that would permit human beings to see many things all at once. By tracking Leibniz's interest in these concrete techniques, we can better reconstruct how he developed some central concepts and practices in his mathematics and early philosophy, and we can understand less anachronistically the importance he attached to them. Drawing in part upon his mathematical solution of the quadrature and his arguments that this solution really was mathematical knowledge, Leibniz came to argue that bringing the soul and mind closer to God required a sophisticated deployment and involvement in the material processes of notation.
Ciaran McMorran
- Published in print: 2020
- Published Online: September 2020
- ISBN: 9780813066288
- eISBN: 9780813065267
- Item type: chapter
- Publisher: University Press of Florida
- DOI: 10.5744/florida/9780813066288.003.0005
- Subject: Literature, 20th-century Literature and Modernism
This chapter explores how geometry is presented as a language for describing both visual and nonvisual spaces in Finnegans Wake. It demonstrates how the Wake’s Protean visual landscape is shaped by its polyphonic narrative, and how the Wakean landscape’s boundaries expand and contract in accordance with the movement and breathing of human bodies. With reference to Bruno’s notion that “the infinite straight line […] becomes the infinite circle,” it illustrates how straight lines and rectilinear thought processes veer off course as they are projected onto the uneven bodily, textual, and terrestrial surfaces which record the Wake’s ouroboric narrative. This chapter also investigates how James Joyce incorporates the notion of a “4d universe” in Finnegans Wake, in which time constitutes the fourth dimension of space, and how the “fourdimmansions” of Wakean space-time are framed by the quadrilateral gaze of its four historians as they chart the Wake’s territories using crisscrossing lines of sight. By examining the four old men’s attempts to describe Mr. and Mrs. Porter’s “sleepingchambers” in cycles around the four bedposts in III.4, this chapter considers how the penultimate chapter of Finnegans Wake reflects Joyce’s own concerns with the quadrature of the circle in his writing of the Wake.
Mark Ladd
- Published in print: 2016
- Published Online: May 2016
- ISBN: 9780198729945
- eISBN: 9780191818783
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198729945.003.0007
- Subject: Physics, Condensed Matter Physics / Materials
This chapter introduces the various programs that have been devised to accompany the text, using both Python and Fortran programs, some of which are interactive. The topics that are programmed and discussed include, inter alia, graphing, contouring, HMO calculations, Madelung constants, linear least squares, matrix operations, radial and angular wavefunctions, quadrature, and roots of polynomials. In many cases, example data sets are provided in order to ensure correct working of the programs.
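As a stand-in for the kind of quadrature program the chapter describes (the book's own Python and Fortran listings are not reproduced here; the routine below is an independent sketch):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss_legendre(f, a, b, n=8):
    """n-point Gauss-Legendre quadrature of f over [a, b]."""
    x, w = leggauss(n)                        # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)     # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

print(gauss_legendre(np.exp, 0.0, 1.0))       # ≈ e - 1 = 1.718281828...
```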
Sauro Succi
- Published in print: 2018
- Published Online: June 2018
- ISBN: 9780199592357
- eISBN: 9780191847967
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199592357.003.0015
- Subject: Physics, Theoretical, Computational, and Statistical Physics, Condensed Matter Physics / Materials
This chapter describes the side-up approach to Lattice Boltzmann, namely the formal derivation from the continuum Boltzmann-(BGK) equation via Hermite projection and subsequent evaluation of the kinetic moments via Gauss–Hermite quadrature. From a slightly different angle, one may also interpret the Gauss–Hermite quadrature as an optimal sampling of velocity space, or, better still, an exact sampling of the bulk of the distribution function, the one contributing most to the lowest-order kinetic moments (frequent events). Capturing higher-order moments, beyond hydrodynamics (rare events), requires an increasing number of nodes and weights.
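A tiny numerical check of the quadrature property described above (an independent sketch, not the book's code): a 3-point Gauss–Hermite rule reproduces the velocity moments of the Gaussian weight exactly up to degree five, which is why a handful of discrete velocities suffices for the low-order, hydrodynamic moments.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from math import factorial

# 3-point Gauss-Hermite rule (weight exp(-v^2)); exact for polynomials up to degree 5.
v, w = hermgauss(3)

def moment(n):
    return np.sum(w * v**n)

def exact(n):
    # ∫ v^n exp(-v^2) dv = sqrt(pi) * n! / ((n/2)! * 4^(n/2)) for even n, 0 for odd n
    return 0.0 if n % 2 else np.sqrt(np.pi) * factorial(n) / (factorial(n // 2) * 4**(n // 2))

for n in range(6):
    print(n, moment(n), exact(n))   # agree for n = 0..5; they first differ at n = 6
```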
Gleb L. Kotkin and Valeriy G. Serbo
- Published in print: 2020
- Published Online: October 2020
- ISBN: 9780198853787
- eISBN: 9780191888236
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198853787.003.0025
- Subject: Physics, Condensed Matter Physics / Materials
This chapter discusses the motion of particles which are scattered by and fall towards the center of the dipole, the motion of a particle in the Coulomb and the constant electric fields, and a particle inside a smooth elastic ellipsoid. The chapter also addresses the trajectory of a particle moving in the field of two Coulomb centres and a beam of electrons inside a short magnetic lens.