Sergiu Klainerman and Jérémie Szeftel
- Published in print:
- 2020
- Published Online:
- May 2021
- ISBN:
- 9780691212425
- eISBN:
- 9780691218526
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691212425.003.0008
- Subject:
- Mathematics, Geometry / Topology
This chapter focuses on the proofs of Theorem M6 concerning initialization, Theorem M7 concerning extension, and Theorem M8 concerning the improvement of higher order weighted energies. It first improves the bootstrap assumptions on decay estimates. The chapter then improves the bootstrap assumptions on energies and weighted energies for R and Γ, relying on an iterative procedure which recovers derivatives one by one. It also outlines the norms for measuring weighted energies for curvature components and Ricci coefficients. To prove Theorem M8, the chapter relies on Propositions 8.11, 8.12, and 8.13. Among these propositions, only the last two involve the dangerous boundary term.
J. Durbin and S.J. Koopman
- Published in print:
- 2012
- Published Online:
- December 2013
- ISBN:
- 9780199641178
- eISBN:
- 9780191774881
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199641178.003.0002
- Subject:
- Mathematics, Probability / Statistics
This chapter discusses the basic techniques of state space analysis — such as filtering, smoothing, initialization, and forecasting — in terms of a simple example of a state space model, the local level model. It presents results from both the classical and Bayesian perspectives, assuming normality, and also from the standpoint of minimum variance linear unbiased estimation when the normality assumption is dropped.
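The filtering recursion for the local level model described above can be sketched in a few lines. This is a minimal illustration, not the book's notation or code; the variances and the short series below are made up for the example.

```python
# Kalman filter for the local level model:
#   y_t = alpha_t + eps_t,   eps_t ~ N(0, sigma2_eps)
#   alpha_{t+1} = alpha_t + eta_t,   eta_t ~ N(0, sigma2_eta)
def local_level_filter(y, sigma2_eps, sigma2_eta, a1=0.0, p1=1e7):
    """Return one-step-ahead state means a_t and variances P_t.

    A large p1 approximates a diffuse initial condition.
    """
    a, p = a1, p1
    means, variances = [], []
    for obs in y:
        v = obs - a                # prediction error
        f = p + sigma2_eps         # prediction-error variance
        k = p / f                  # Kalman gain
        a = a + k * v              # updated level estimate
        p = p * (1.0 - k) + sigma2_eta  # variance for the next step
        means.append(a)
        variances.append(p)
    return means, variances

filtered, _ = local_level_filter([4.2, 5.1, 4.8, 5.5], 1.0, 0.5)
```

With the near-diffuse start, the first filtered value essentially reproduces the first observation, after which the gain settles toward its steady-state value.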
J. Durbin and S.J. Koopman
- Published in print:
- 2012
- Published Online:
- December 2013
- ISBN:
- 9780199641178
- eISBN:
- 9780191774881
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199641178.003.0005
- Subject:
- Mathematics, Probability / Statistics
Computational algorithms in state space analyses are mainly based on recursions, that is, formulae in which the value at time t + 1 is calculated from earlier values for t, t − 1, …, 1. This chapter deals with the question of how these recursions are started up at the beginning of the series, a process called initialisation. It provides a general treatment in which some elements of the initial state vector have known distributions while others are diffuse, that is, treated as random variables with infinite variance, or are treated as unknown constants to be estimated by maximum likelihood. The discussions cover the exact initial Kalman filter; exact initial state smoothing; exact initial disturbance smoothing; exact initial simulation smoothing; examples of initial conditions for some models; and augmented Kalman filter and smoother.
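For the local level model the effect of a diffuse initial state can be worked out in closed form, which makes a small sketch possible. The snippet below contrasts the exact limiting result for the first step (letting P_1 → ∞) with the common large-kappa approximation; it is an illustration of the idea, not the chapter's exact initial Kalman filter algorithm, and the numbers are assumptions.

```python
# Diffuse initialisation, local level model: after one observation the
# exact limit P_1 -> infinity gives a_2 = y_1 and P_2 = s2_eps + s2_eta.
def exact_diffuse_start(y1, s2_eps, s2_eta):
    """One exact initialisation step (limit of infinite prior variance)."""
    a2 = y1                 # the first observation pins down the level
    p2 = s2_eps + s2_eta    # limiting one-step-ahead variance
    return a2, p2

def approx_diffuse_start(y1, s2_eps, s2_eta, kappa=1e7):
    """The same step with a large-but-finite prior variance kappa."""
    f = kappa + s2_eps
    k = kappa / f
    a2 = k * y1                        # a_1 = 0 assumed
    p2 = kappa * (1.0 - k) + s2_eta
    return a2, p2
```

As kappa grows the approximation converges to the exact values, but the exact treatment avoids both the arbitrary constant and the associated rounding error, which is the motivation for the exact initial filter.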
T. N. Krishnamurti, H. S. Bedi, and V. M. Hardiker
- Published in print:
- 1998
- Published Online:
- November 2020
- ISBN:
- 9780195094732
- eISBN:
- 9780197560761
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195094732.003.0011
- Subject:
- Earth Sciences and Geography, Meteorology and Climatology
In this chapter we describe two of the most commonly used initialization procedures. These are the dynamic normal mode initialization and the physical initialization methods. Historically, initialization for primitive equation models started from a hierarchy of static initialization methods. These include balancing the mass and the wind fields using a linear or nonlinear balance equation (Charney 1955; Phillips 1960), variational techniques for such adjustments satisfying the constraints of the model equations (Sasaki 1958), and dynamic initialization involving forward and backward integration of the model over a number of cycles to suppress high-frequency gravity oscillations before the start of the integration (Miyakoda and Moyer 1968; Nitta and Hovermale 1969; Temperton 1976). A description of these classical methods can be found in textbooks such as Haltiner and Williams (1980). Basically, these methods invoke a balanced relationship between the mass and motion fields. However, it was soon realized that significant departures from the balance laws do occur over the tropics and the upper-level jet stream region. It was also noted that such departures can be functions of the heat sources and sinks and dynamic instabilities of the atmosphere. The procedure called nonlinear normal mode initialization with physics overcomes some of these difficulties. Physical initialization is a powerful method that permits the incorporation of realistic rainfall distribution in the model's initial state. Normal mode initialization, in turn, is an elegant and successful procedure based on selective damping of the normal modes of the atmosphere, where the high-frequency gravity modes are suppressed while the slow-moving Rossby modes are left untouched. Williamson (1976) used the normal modes of a shallow water model for initialization by setting the initial amplitudes of the high-frequency gravity modes equal to zero.
Machenhauer (1977) and Baer (1977) developed the procedure for nonlinear normal mode initialization (NMI), which takes into account the nonlinearities in the model equations. Kitade (1983) incorporated the effect of physical processes in this initialization procedure. We describe here the normal mode initialization procedure. Essentially following Kasahara and Puri (1981), we first derive the equations for vertical and horizontal modes of the linearized form of the model equations.
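The linear, Williamson-style step described above can be sketched as follows: diagonalize the linearized model operator, zero the amplitudes of the fast gravity modes, and transform back. This is a toy illustration on an arbitrary linear system, not the book's derivation of the vertical and horizontal modes, and the frequency threshold is an assumption of the example.

```python
import numpy as np

def linear_nmi(state, L, fast_threshold):
    """Remove fast modes from `state`, where dx/dt = L x is the
    linearized model and modes with |frequency| > fast_threshold are
    treated as gravity modes to be suppressed."""
    freqs, modes = np.linalg.eig(L)          # modal frequencies/structures
    amps = np.linalg.solve(modes, state)     # project onto normal modes
    amps[np.abs(freqs.imag) > fast_threshold] = 0.0  # zero gravity modes
    return (modes @ amps).real               # back to physical space
```

Applied to a system with one slow oscillation (a stand-in for a Rossby mode) and one fast one (a stand-in for a gravity mode), only the slow components of the state survive. The nonlinear Machenhauer scheme iterates a similar projection so that the gravity-mode *tendencies*, rather than amplitudes, vanish.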
T. N. Krishnamurti, H. S. Bedi, and V. M. Hardiker
- Published in print:
- 1998
- Published Online:
- November 2020
- ISBN:
- 9780195094732
- eISBN:
- 9780197560761
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195094732.003.0010
- Subject:
- Earth Sciences and Geography, Meteorology and Climatology
In this chapter we present some of the physical processes that are used in numerical weather prediction modeling. Grid-point models, based on finite differences, and spectral models both generally treat the physical processes in the same manner. The vertical columns above the horizontal grid points (the transform grid for the spectral models) are the ones along which estimates of the effects of the physical processes are made. In this chapter we present a treatment of the planetary boundary layer, including a discussion of the surface similarity theory. Also covered is the cumulus parameterization problem in terms of the Kuo scheme and the Arakawa-Schubert scheme. Large-scale condensation and radiative transfer in clear and cloudy skies are the final topics reviewed. There are at least three types of fluxes that one deals with, namely momentum, sensible heat, and moisture. Furthermore, one needs to examine separately the land and ocean regions. In this section we present the so-called bulk aerodynamic methods as well as the similarity analysis approach for the estimation of the surface fluxes. The radiation code in a numerical weather prediction model is usually coupled to the calculation of the surface energy balance. This will be covered later in Section 8.5.6. This surface energy balance is usually carried out over land areas, where one balances the net radiation against the surface fluxes of heat and moisture for the determination of soil temperature. Over oceans, the sea-surface temperatures are prescribed and the surface energy balance is implicit. Thus it is quite apparent that what one does in the parameterization of the planetary boundary layer has to be integrated with the radiative parameterization in a consistent manner.
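The bulk aerodynamic estimates of the surface fluxes mentioned above take a simple standard form: each flux is proportional to the wind speed times the surface-air difference of the transported quantity. The sketch below uses that common textbook form with illustrative exchange coefficients; the constants are assumptions of the example, not values from the book.

```python
# Bulk aerodynamic surface fluxes (positive upward):
#   H  = rho * cp * C_H * |V| * (T_s - T_a)   sensible heat, W m^-2
#   LE = rho * Lv * C_Q * |V| * (q_s - q_a)   latent heat,   W m^-2
RHO = 1.2      # air density, kg m^-3 (illustrative near-surface value)
CP = 1004.0    # specific heat of dry air, J kg^-1 K^-1
LV = 2.5e6     # latent heat of vaporization, J kg^-1

def bulk_fluxes(wind, t_sfc, t_air, q_sfc, q_air,
                c_h=1.3e-3, c_q=1.3e-3):
    """Sensible and latent heat fluxes from bulk formulae.

    wind in m s^-1, temperatures in K, specific humidities in kg kg^-1;
    c_h and c_q are dimensionless exchange coefficients (assumed here).
    """
    sensible = RHO * CP * c_h * wind * (t_sfc - t_air)
    latent = RHO * LV * c_q * wind * (q_sfc - q_air)
    return sensible, latent
```

Over a warm sea surface the two flux formulae give the familiar result that the latent heat flux dominates the sensible one, since the surface-air humidity contrast outweighs the temperature contrast once scaled by Lv.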
J. V. Tucker and J. I. Zucker
- Published in print:
- 2001
- Published Online:
- November 2020
- ISBN:
- 9780198537816
- eISBN:
- 9780191916663
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198537816.003.0005
- Subject:
- Computer Science, Computer Architecture and Logic Design
The theory of the computable functions is a mathematical theory of total and partial functions of the form f : Nⁿ → N, and sets of the form S ⊆ Nⁿ, that can be defined by means of algorithms on the set N = {0, 1, 2, …} of natural numbers. The theory establishes what can and cannot be computed in an explicit way using finitely many simple operations on numbers. The set of naturals and a selection of these simple operations together form an algebra. A mathematical objective of the theory is to develop, analyse and compare a variety of models of computation and formal systems for defining functions over a range of algebras of natural numbers. Computability theory on N is of importance in science because it establishes the scope and limits of digital computation. The numbers are realised as concrete symbolic objects and the operations on the numbers can be carried out explicitly, in finitely many concrete symbolic steps. More generally, the numbers can be used to represent or code any form of discrete data. However, the question arises: can we develop theories of functions that can be defined by means of algorithms on other sets of data? The obvious examples of numerical data are the integer, rational, real and complex numbers; and associated with these numbers there are data such as matrices, polynomials, power series and various types of functions. In addition, there are geometric objects that are represented using the real and complex numbers, including algebraic curves and manifolds. Examples of syntactic data are finite and infinite strings, terms, formulae, trees and graphs. For each set of data there are many choices for a collection of operations from which to build algorithms. How specific to the set of data and the chosen operations are these computability theories? What properties do the computability theories over different sets of data have in common?
The theory of the computable functions on N is stable, rich and useful; will the theory of computable functions on the sets of real and complex numbers, and the other data sets also be so? The theory of computable functions on arbitrary many-sorted algebras will answer these questions.
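The idea of building all computation from finitely many simple operations on N can be made concrete with a tiny example: starting from zero and the successor operation alone, addition and multiplication are obtained by primitive recursion. This sketch is illustrative of the general idea, not the chapter's formalism.

```python
# Computable functions on N built from zero and successor only
# (a minimal primitive-recursion sketch).
def succ(n):
    """The successor operation, the sole 'simple operation' used here."""
    return n + 1

def add(m, n):
    # add(m, 0) = m;  add(m, succ(n)) = succ(add(m, n))
    return m if n == 0 else succ(add(m, n - 1))

def mul(m, n):
    # mul(m, 0) = 0;  mul(m, succ(n)) = add(mul(m, n), m)
    return 0 if n == 0 else add(mul(m, n - 1), m)
```

Each definition uses only previously defined functions and recursion on the second argument, which is exactly the pattern that generalizes, in the book's setting, from the algebra (N; 0, succ) to computation over arbitrary many-sorted algebras.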