*Michael Metcalf, John Reid, and Malcolm Cohen*

- Published in print:
- 2018
- Published Online:
- October 2018
- ISBN:
- 9780198811893
- eISBN:
- 9780191850028
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198811893.003.0007
- Subject:
- Mathematics, Logic / Computer Science / Mathematical Philosophy


A complete description of the use of array processing is offered. Assumed-shape and automatic arrays are described, and the concept of elemental is introduced for operations, assignments, and procedures. Array-valued functions and the where construct are described, along with the notion of pure procedures. Array subobjects, aliasing, and array constructors are considered. The performance-enhancing features of the do concurrent construct and the contiguous property are included.

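The elemental operations, masked assignment, and array constructors summarized above have close analogues in NumPy. A rough illustration follows — in Python rather than the chapter's Fortran, with made-up data, so it sketches the concepts rather than the book's own examples:

```python
import numpy as np

a = np.array([1.0, 4.0, 9.0, -1.0])

# Elemental-style operation: applied element by element to a whole array,
# much as a Fortran elemental function applies to conformable arrays.
b = np.sqrt(np.abs(a))

# Masked assignment, analogous to Fortran's where construct:
#   where (a > 0) c = sqrt(a); elsewhere c = 0
c = np.where(a > 0, np.sqrt(np.clip(a, 0, None)), 0.0)

# Array constructor analogue: build an array from an implied-do-like expression.
d = np.array([i**2 for i in range(5)])
```

As in Fortran, the point is that no explicit loop appears in the user's code; the elementwise semantics are carried by the operations themselves.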

*T. N. Krishnamurti, H. S. Bedi, and V. M. Hardiker*

- Published in print:
- 1998
- Published Online:
- November 2020
- ISBN:
- 9780195094732
- eISBN:
- 9780197560761
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195094732.003.0009
- Subject:
- Earth Sciences and Geography, Meteorology and Climatology


Since the 1970s, the spectral method has become an increasingly popular technique for global numerical weather prediction. Global numerical models formulated using the spectral technique are used worldwide for both research and operational purposes. The success of the spectral technique can be attributed to the spectral transform technique developed independently by Eliasen et al. (1970) and Orszag (1970), and later refined by Bourke (1972). Prior to the introduction of the transform technique, the nonlinear terms were computed using a very tedious process called the interaction coefficients method. This method required large amounts of computer resources as well as enormous bookkeeping. The transform technique facilitates the computation of the nonlinear terms, as discussed later in Section 7.4. Furthermore, the Galerkin method discussed in Chapter 4 is widely used in most spectral models and provides us with alias-free computation of the nonlinear terms. The transform technique enables the current spectral models to be competitive in terms of computational overhead with respect to their grid-point counterparts. The transform technique is also well-suited for incorporating the terms dealing with physics in the prediction scheme. There are a number of advantages to using the spectral technique over the conventional grid-point method. However, we will not get into this discussion here. It should be noted that the model truncation limit specifies the scale of the shortest wavelength that can be resolved by the model. In the following section, we discuss the two most widely used truncations in a spectral model.

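The contrast between the interaction-coefficients method and the transform technique can be sketched in one dimension with Fourier series standing in for spherical harmonics. This is an illustrative analogue only, not the chapter's own spherical formulation:

```python
import numpy as np

# Quadratic nonlinear term u*v on a periodic 1D domain.
N = 32
x = 2 * np.pi * np.arange(N) / N
u = np.sin(x)        # wavenumber-1 field
v = np.cos(2 * x)    # wavenumber-2 field

# The interaction-coefficients method would convolve all pairs of spectral
# coefficients, an O(M^2) bookkeeping exercise per nonlinear term. The
# transform method instead multiplies pointwise in grid space and
# transforms the product back to spectral space:
w_hat = np.fft.rfft(u * v)
w = np.fft.irfft(w_hat, N)

# Here u*v contains only wavenumbers up to 3 (< N/2), so the spectral
# representation of the product is exact and alias-free; in practice a
# truncation rule (e.g. the 2/3 rule) enforces this for general fields.
```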

*J. C. Kaimal and J. J. Finnigan*

- Published in print:
- 1994
- Published Online:
- November 2020
- ISBN:
- 9780195062397
- eISBN:
- 9780197560167
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195062397.003.0010
- Subject:
- Earth Sciences and Geography, Atmospheric Sciences


Much of what we know about the structure of the boundary layer is empirical, the result of painstaking analysis of observational data. As our understanding of the boundary layer evolved, so did our ability to define more clearly the requirements for sensing atmospheric variables and for processing that information. Decisions regarding choice of sampling rates, averaging time, detrending, ways to minimize aliasing, and so on, became easier to make. We find we can even standardize most procedures for real-time processing. The smaller, faster computers, now within the reach of most boundary layer scientists, offer virtually unlimited possibilities for processing and displaying results even as an experiment is progressing. The information we seek, for the most part, falls into two groups: (1) time-averaged statistics such as the mean, variance, covariance, skewness, and kurtosis and (2) spectra and cospectra of velocity components and scalars such as temperature and humidity. We discuss them separately because of different sampling and processing requirements for the two. A proper understanding of these requirements is essential for the successful planning of any experiment. In this chapter we discuss these considerations in some detail with examples of methods used in earlier applications. We will assume that sensors collecting the data have adequate frequency response, precision, and long-term stability and that the sampling is performed digitally at equally spaced intervals. We also assume that the observation heights are chosen with due regard to sensor response and terrain roughness. For calculations of means and higher order moments we need time series that are long enough to include all the relevant low-frequency contributions to the process, sampled at rates fast enough to capture all the high-frequency contributions the sensors are able to measure. Improper choices of averaging times and sampling rates can indeed compromise our statistics. We need to understand how those two factors affect our measurements in order to make sensible decisions on how long and how fast to sample.

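The first group of quantities named above — time-averaged statistics from a digitally sampled, equally spaced series — can be sketched as follows. The signals, sampling rate, and averaging period here are synthetic placeholders chosen for illustration, not values from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 600, 1 / fs)   # a 10-minute averaging period (assumed)

# Synthetic vertical-velocity and temperature fluctuations; T is built to
# correlate with w so the covariance (eddy flux) comes out nonzero.
w = rng.normal(0.0, 0.5, t.size)
T = 0.8 * w + rng.normal(0.0, 0.2, t.size)

# Deviations from the time mean (Reynolds decomposition).
wp = w - w.mean()
Tp = T - T.mean()

variance = np.mean(wp**2)
covariance = np.mean(wp * Tp)              # the eddy flux w'T'
skewness = np.mean(wp**3) / variance**1.5
kurtosis = np.mean(wp**4) / variance**2
```

For the second group (spectra and cospectra) one would go on to apply an FFT to the detrended series, which is where the chapter's concerns about sampling rate, record length, and aliasing bite hardest.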