David G. Hankin, Michael S. Mohr, and Ken B. Newman
- Published in print: 2019
- Published Online: December 2019
- ISBN: 9780198815792
- eISBN: 9780191853463
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198815792.003.0012
- Subject: Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies, Ecology
In many ecological and natural resource settings, there may be a high degree of spatial structure or pattern to the distribution of target variable values across the landscape. For example, the number of trees per hectare killed by a bark beetle infestation may be exceptionally high in one region of a national forest and near zero elsewhere. In such circumstances it may be highly desirable or even required that a sample survey directed at estimation of total tree mortality across a forest be based on selection of random locations that have good spatial balance, i.e., locations are well spread over the landscape with relatively even distances between them. A simple random sample cannot guarantee good spatial balance. We present two methods that have been proposed for selection of spatially balanced samples: GRTS (Generalized Random Tessellation Stratified Sampling) and BAS (Balanced Acceptance Sampling). Selection of samples using the GRTS approach involves a complicated series of sequential steps that allows generation of spatially balanced samples selected from finite populations or from infinite study areas. Selection of samples using BAS relies on the Halton sequence, is conceptually simpler, and produces samples that generally have better spatial balance than those produced by GRTS. Both approaches rely on software that is available in the R statistical/programming environment. Estimation relies on the Horvitz–Thompson estimator. Illustrative examples of running the spsurvey software package (used for GRTS) and links to the SDraw package (used for BAS) are provided at http://global.oup.com/uk/companion/hankin.
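
The two building blocks named in the abstract, the Halton sequence behind BAS and the Horvitz–Thompson estimator, are simple enough to sketch directly. The R fragment below is a minimal illustration only, not the SDraw or spsurvey implementations; the helper names (radical_inverse, halton, bas_points, ht_total), the bases 2 and 3, the random-start device, and the rectangular study area are all assumptions made for the sketch. BAS on an irregular region additionally discards Halton points that fall outside the region.

```r
# Radical-inverse (van der Corput) digit reversal of integer k in a given base
radical_inverse <- function(k, base) {
  r <- 0
  f <- 1 / base
  while (k > 0) {
    r <- r + f * (k %% base)
    k <- k %/% base
    f <- f / base
  }
  r
}

# 2-D Halton points in the unit square, using coprime bases 2 and 3
halton <- function(n, skip = 0) {
  idx <- seq_len(n) + skip
  cbind(x = sapply(idx, radical_inverse, base = 2),
        y = sapply(idx, radical_inverse, base = 3))
}

# A BAS-style draw over a rectangular study area: Halton points from a
# randomly chosen starting index, scaled to the bounding box. (An irregular
# region would additionally reject points outside the region; omitted here.)
bas_points <- function(n, xlim = c(0, 1), ylim = c(0, 1)) {
  start <- sample.int(10^6, 1)   # random start into the sequence
  h <- halton(n, skip = start)
  cbind(x = xlim[1] + h[, "x"] * diff(xlim),
        y = ylim[1] + h[, "y"] * diff(ylim))
}

# Horvitz-Thompson estimator of a population total: sum over sampled units
# of y_i / pi_i, where pi_i is the unit's inclusion probability
ht_total <- function(y, pi) sum(y / pi)

set.seed(1)
pts <- bas_points(20, xlim = c(0, 100), ylim = c(0, 50))
```

For an equal-probability design of fixed size n drawn from N frame units, every pi_i = n/N, so ht_total(y, rep(n/N, n)) reduces to N times the sample mean, as expected.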
Bernt P. Stigum
- Published in print: 2014
- Published Online: September 2015
- ISBN: 9780262028585
- eISBN: 9780262323109
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262028585.003.0005
- Subject: Economics and Finance, Econometrics
Chapter V begins with a discussion of formal theory-data confrontations in which the sample population plays a significant role. The formalism differs from the theory-data confrontations in Chapters III and IV, but the fundamental ideas of the empirical analysis are the same. The chapter presents an example in which the formal theory-data confrontation prescribes a factor-analytic test of Milton Friedman’s Permanent Income Hypothesis. This test is then contrasted with a factor-analytic test of Friedman’s hypothesis based on ideas that Ragnar Frisch developed in his 1934 treatise on confluence analysis. In Frisch’s confluence analysis, Friedman’s permanent components of income and consumption become so-called systematic variates, and Friedman’s transitory components become accidental variates. Both the systematic and the accidental variates are unobservables that live and function in the real world (here, the data universe). The two tests provide an extraordinary example of how different the treatment of errors in variables and errors in equations in present-day econometrics is from formal econometrics’ treatment of inaccurate observations of variables in Frisch’s model world.
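
The errors-in-variables point in the closing sentence can be made concrete with a short simulation. The R sketch below assumes a stylized permanent-income setup in which measured income and consumption are permanent ("systematic") components plus transitory ("accidental") noise; all variable names and parameter values are illustrative. It is not Stigum’s factor-analytic test or Frisch’s confluence analysis, only the textbook attenuation result: regressing measured consumption on error-contaminated measured income biases the slope toward zero.

```r
# Stylized PIH setup: only y (measured income) and cons (measured
# consumption) are observed; the permanent component y_p is not.
set.seed(42)
n    <- 10000
k    <- 0.9                            # propensity to consume out of permanent income
y_p  <- rnorm(n, mean = 50, sd = 10)   # permanent income (unobserved), var = 100
y_t  <- rnorm(n, sd = 8)               # transitory income, var = 64
c_t  <- rnorm(n, sd = 4)               # transitory consumption
y    <- y_p + y_t                      # measured income
cons <- k * y_p + c_t                  # measured consumption (PIH: c_p = k * y_p)

# The OLS slope converges to k * var(y_p) / (var(y_p) + var(y_t)),
# here 0.9 * 100 / 164, roughly 0.55 rather than the structural 0.9.
coef(lm(cons ~ y))["y"]
```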