Fred Campano and Dominick Salvatore
- Published in print:
- 2006
- Published Online:
- May 2006
- ISBN:
- 9780195300918
- eISBN:
- 9780199783441
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0195300912.001.0001
- Subject:
- Economics and Finance, Development, Growth, and Environmental
Intended as an introductory textbook for advanced undergraduates and first year graduate students, this book leads the reader from familiar basic micro- and macroeconomic concepts in the introduction to not so familiar concepts relating to income distribution in the subsequent chapters. The income concept and household sample surveys are examined first, followed by descriptive statistics techniques commonly used to present the survey results. The commonality found in the shape of the income density function leads to statistical modeling, parameter estimation, and goodness of fit tests. Alternative models are then introduced along with the related summary measures of income distribution, including the Gini coefficient. This is followed by a sequence of chapters that deal with normative issues such as inequality, poverty, and country comparisons. The remaining chapters cover an assortment of topics including: economic development and globalization and their impact on income distribution, redistribution of income, and integrating macroeconomic models with income distribution models.
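As a purely illustrative aside (not taken from the book), the sketch below computes the Gini coefficient, one of the summary measures of income distribution the abstract mentions, from a sample of incomes using the standard sorted-sample formula; the example data are hypothetical.

```python
import numpy as np

def gini(incomes):
    """Sample Gini coefficient from non-negative incomes.

    Uses G = (2 * sum_i i * x_(i)) / (n * sum_i x_(i)) - (n + 1) / n,
    where x_(1) <= ... <= x_(n) are the incomes sorted in ascending order.
    """
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

# Perfect equality gives G = 0; concentration pushes G towards 1.
print(gini([10, 10, 10, 10]))   # 0.0
print(gini([1, 1, 1, 97]))      # about 0.72
```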
Ray Chambers and Robert Clark
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780198566625
- eISBN:
- 9780191738449
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566625.001.0001
- Subject:
- Mathematics, Probability / Statistics
This book is an introduction to the model-based approach to survey sampling. It consists of three parts, with Part I focusing on estimation of population totals. Chapters 1 and 2 introduce survey sampling and the model-based approach, respectively. Chapter 3 considers the simplest possible model, the homogeneous population model, which is then extended to stratified populations in Chapter 4. Chapter 5 discusses simple linear regression models for populations, and Chapter 6 considers clustered populations. The general linear population model is then used to integrate these results in Chapter 7. Part II of this book considers the properties of estimators based on incorrectly specified models. Chapter 8 develops robust sample designs that lead to unbiased predictors under model misspecification, and shows how flexible modelling methods like non-parametric regression can be used in survey sampling. Chapter 9 extends this development to misspecification-robust prediction variance estimators, and Chapter 10 completes Part II of the book with an exploration of outlier-robust sample survey estimation. Chapters 11 to 17 constitute Part III of the book and show how model-based methods can be used in a variety of problem areas of modern survey sampling. They cover (in order) prediction of non-linear population quantities, sub-sampling approaches to prediction variance estimation, design and estimation for multipurpose surveys, prediction for domains, small area estimation, efficient prediction of population distribution functions, and the use of transformations in survey inference. The book is designed to be accessible to undergraduate and graduate-level students with a good grounding in statistics, and to applied survey statisticians seeking an introduction to model-based survey design and estimation.
Myoung-jae Lee
- Published in print:
- 2005
- Published Online:
- February 2006
- ISBN:
- 9780199267699
- eISBN:
- 9780191603044
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0199267693.001.0001
- Subject:
- Economics and Finance, Econometrics
This book brings to the fore recent advances in econometrics for treatment effect analysis. It aims to put together various economic treatment effect models in a coherent fashion, determine those that can be parameters of interest, and show how these can be identified and estimated under weak assumptions. The emphasis throughout the book is on semi- and non-parametric estimation methods, but traditional parametric approaches are also discussed. This book is ideally suited to researchers and graduate students with a basic knowledge of econometrics.
Donna Harrington
- Published in print:
- 2008
- Published Online:
- January 2009
- ISBN:
- 9780195339888
- eISBN:
- 9780199863662
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195339888.003.0006
- Subject:
- Social Work, Research and Evaluation
This chapter discusses the information that should be included when presenting CFA results, including model specification, input data, model estimation, model evaluation, and substantive conclusions. Longitudinal measurement invariance and equivalent models are briefly shown. Finally, multilevel confirmatory factor analysis models are also mentioned.
Raymond L. Chambers and Robert G. Clark
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780198566625
- eISBN:
- 9780191738449
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566625.003.0011
- Subject:
- Mathematics, Probability / Statistics
Inference for non-linear population parameters develops model-based prediction theory for target parameters that are not population totals or means. The development initially is for the case where the target parameter can be expressed as a differentiable function of finite population means, and a Taylor series linearisation argument is used to get a large sample approximation to the prediction variance of the substitution-based predictor. This Taylor linearisation approach is then generalised to target parameters that can be expressed as solutions of estimating equations. An application to inference about the median value of a homogeneous population serves to illustrate the basic approach, and this is then extended to the stratified population case.
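A minimal Python sketch of the linearisation idea for one simple non-linear parameter, the ratio of two population means, under simple random sampling; this is an assumed illustrative example, not the chapter's own estimator or notation.

```python
import numpy as np

def ratio_and_linearised_variance(y, x, N):
    """Substitution estimate of R = Ybar / Xbar and its Taylor-linearised variance.

    The non-linear target R is a differentiable function of two means; a
    first-order expansion gives linearised residuals u_i = (y_i - R * x_i) / xbar,
    and the variance of the mean of u approximates the variance of R_hat.
    """
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = y.size
    ybar, xbar = y.mean(), x.mean()
    R_hat = ybar / xbar                          # substitution-based predictor
    u = (y - R_hat * x) / xbar                   # linearised variable
    var_hat = (1 - n / N) * u.var(ddof=1) / n    # with finite-population correction
    return R_hat, var_hat

rng = np.random.default_rng(0)
x = rng.uniform(10, 50, size=200)
y = 2.0 * x + rng.normal(0, 5, size=200)
print(ratio_and_linearised_variance(y, x, N=5000))
```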
Anthony Garratt, Kevin Lee, M. Hashem Pesaran, and Yongcheol Shin
- Published in print:
- 2006
- Published Online:
- September 2006
- ISBN:
- 9780199296859
- eISBN:
- 9780191603853
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0199296855.003.0009
- Subject:
- Economics and Finance, Econometrics
This chapter describes the empirical work underlying the construction of the UK model, discusses the results obtained from testing its long-run properties, and compares the model with benchmark univariate models of the variables. The description of the modelling work not only provides one of the first examples of the use of the long-run structural cointegrating VAR techniques in an applied context, but it also includes a discussion of bootstrap experiments designed to investigate the small-sample properties of the tests employed.
Željko Ivezić, Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691151687
- eISBN:
- 9781400848911
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691151687.003.0004
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter introduces the main concepts of statistical inference, or drawing conclusions from data. There are three main types of inference: point estimation, confidence estimation, and hypothesis testing. There are two major statistical paradigms which address the statistical inference questions: the classical, or frequentist paradigm, and the Bayesian paradigm. While most of statistics and machine learning is based on the classical paradigm, Bayesian techniques are being embraced by the statistical and scientific communities at an ever-increasing pace. The chapter begins with a short comparison of classical and Bayesian paradigms, and then discusses the three main types of statistical inference from the classical point of view.
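A minimal illustration of the three classical inference tasks named in the abstract, sketched in Python on simulated data; the sample size, parameter values, and the choice of a one-sample t procedure are assumptions of the example only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=5.3, scale=2.0, size=100)   # simulated measurements

# 1. Point estimation: the sample mean as an estimator of the population mean.
mean_hat = x.mean()

# 2. Confidence estimation: 95% t-based confidence interval for the mean.
ci = stats.t.interval(0.95, x.size - 1, loc=mean_hat, scale=stats.sem(x))

# 3. Hypothesis testing: test H0: mu = 5 against a two-sided alternative.
t_stat, p_value = stats.ttest_1samp(x, popmean=5.0)

print(mean_hat, ci, t_stat, p_value)
```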
Lawrence McNamara
- Published in print:
- 2007
- Published Online:
- January 2009
- ISBN:
- 9780199231454
- eISBN:
- 9780191710858
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199231454.003.0010
- Subject:
- Law, Law of Obligations
The question this study set out to answer was: if reputation is the interest to be protected by defamation law then what should be the test(s) for what is defamatory? The stated aim was to fill a gap in the common law by providing a principled, theoretically coherent statement of law regarding what is defamatory. This chapter proposes a new legal framework that aims to meet that goal. Only the principal test for what is defamatory should be retained because it is the only one that meaningfully protects reputation. However, the common law should dispose of the traditional, exclusive presumptions that form the content of ‘the right-thinking person’ and instead use inclusive presumptions that are premised upon an acceptance of equal moral worth. Any displacement of these presumptions should be controversial. A departure from the commitment to equal moral worth should be made only with great care and caution.
Raymond L. Chambers and Robert G. Clark
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780198566625
- eISBN:
- 9780191738449
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566625.003.0014
- Subject:
- Mathematics, Probability / Statistics
Inference for domains considers an important aspect of sample survey inference, where estimates are required not for the population actually surveyed but for subgroups of it typically referred to as domains. The main emphasis is on homogeneous domains whose population memberships, and hence sizes, are unknown, since this is the most common case. The case where the domain size is known is also discussed, as is linear domain estimation based on multipurpose sample weights.
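A rough sketch of the situation described: predicting a domain total when domain membership is observed only for sampled units. The expansion-type form below is a simple illustration under a homogeneous working model, not necessarily the predictor developed in the chapter, and the simulated data are invented.

```python
import numpy as np

def domain_total_predictor(y, in_domain, N):
    """Predict a domain total when membership is only observed in the sample.

    Under a homogeneous working model the sampled domain mean is extrapolated
    to the unknown number of domain units in the population, itself predicted
    by N * (n_d / n).  This reduces to the expansion form (N / n) * sum of the
    sampled domain values.
    """
    y = np.asarray(y, float)
    z = np.asarray(in_domain, bool)
    n = y.size
    return (N / n) * y[z].sum()

rng = np.random.default_rng(1)
y = rng.gamma(shape=2.0, scale=10.0, size=300)    # sampled survey values
in_domain = rng.random(300) < 0.25                # sampled domain indicators
print(domain_total_predictor(y, in_domain, N=12000))
```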
Lars Peter Hansen and Thomas J. Sargent
- Published in print:
- 2013
- Published Online:
- October 2017
- ISBN:
- 9780691042770
- eISBN:
- 9781400848188
- Item type:
- book
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691042770.001.0001
- Subject:
- Economics and Finance, History of Economic Thought
A common set of mathematical tools underlies dynamic optimization, dynamic estimation, and filtering. This book uses these tools to create a class of econometrically tractable models of prices and quantities. The book presents examples from microeconomics, macroeconomics, and asset pricing. The models are cast in terms of a representative consumer. While the book demonstrates the analytical benefits acquired when an analysis with a representative consumer is possible, it also characterizes the restrictiveness of assumptions under which a representative household justifies a purely aggregative analysis. The book unites economic theory with a workable econometrics while going beyond and beneath demand and supply curves for dynamic economies. It constructs and applies competitive equilibria for a class of linear-quadratic-Gaussian dynamic economies with complete markets. The book, based on the 2012 Gorman lectures, stresses heterogeneity, aggregation, and how a common structure unites what superficially appear to be diverse applications. An appendix describes MATLAB programs that apply to the book's calculations.
Rein Taagepera
- Published in print:
- 2008
- Published Online:
- September 2008
- ISBN:
- 9780199534661
- eISBN:
- 9780191715921
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199534661.003.0016
- Subject:
- Political Science, Comparative Politics, Political Economy
The results of existing statistical analysis can sometimes be used to estimate the parameters in quantitatively predictive logical models. This is important, because it expands the value of previously published work in social sciences. Inferring logical model parameters in this way, however, may require more involved mathematics than direct testing.
Martin Schönfeld
- Published in print:
- 2000
- Published Online:
- May 2006
- ISBN:
- 9780195132182
- eISBN:
- 9780199786336
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0195132181.003.0003
- Subject:
- Philosophy, History of Philosophy
This chapter explores the text and contentions of Kant’s first book, Thoughts on the True Estimation of Living Forces (1747). Section 1 describes how Kant’s debut turned into a debacle. Section 2 discusses Kant’s dynamic ontology, such as his ideas on substantial interaction and energetic space. Section 3 analyzes Kant’s experimental and kinematic appraisals, which form the bulk of his first book. Section 4 describes Kant’s proposed synthesis of Cartesian momentum and Leibnizian energy as “true estimation” of force.
Martin Schönfeld
- Published in print:
- 2000
- Published Online:
- May 2006
- ISBN:
- 9780195132182
- eISBN:
- 9780199786336
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0195132181.003.0004
- Subject:
- Philosophy, History of Philosophy
This chapter explores the role that True Estimation of Living Forces (1747) played in Kant’s intellectual development. Section 1 describes Kant’s way of dealing with controversies and discusses his appropriation of Bilfinger’s heuristic strategy. Section 2 describes how quantities differ from physical objects for Kant, traces the Pietist roots of this view, and examines how this view of mathematics hamstrung Crusius and initially also Kant. Section 3 discusses the differences between Kantian dynamics and Newtonian mechanics, and details the reasons for Kant’s distance from Newton.
Darrell Duffie
- Published in print:
- 2011
- Published Online:
- September 2011
- ISBN:
- 9780199279234
- eISBN:
- 9780191728419
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199279234.003.0003
- Subject:
- Economics and Finance, Financial Economics
This chapter presents the theory underlying the maximum likelihood estimation of term structures of survival probabilities, for example the dependence of default probability on time horizon. The methodology allows the events of concern to be censored by disappearance of corporations from the data, due for instance to merger or acquisition. The idea is to estimate the parameter vector determining the default intensity as well as the parameter vector determining the transition probabilities of the covariate process, and then to use the maximum likelihood estimator of these parameters to estimate the survival probabilities of the corporations, for a range of choices of the survival horizon. The results show that the joint estimation of the parameters is relatively tractable under the doubly-stochastic property.
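The chapter's methodology involves covariate-driven intensities and a doubly-stochastic assumption; the sketch below strips this down to the simplest censoring-aware case, a constant intensity estimated by maximum likelihood, purely to illustrate how censored horizons enter the estimate and how survival probabilities follow. All data values are made up.

```python
import numpy as np

def constant_intensity_mle(times, defaulted):
    """MLE of a constant default intensity from possibly censored horizons.

    times: time at risk for each firm (time to default, or time to censoring,
           e.g. by merger/acquisition or end of sample).
    defaulted: 1 if the firm defaulted at `times`, 0 if it was censored.
    The exponential log-likelihood gives lambda_hat = defaults / total exposure.
    """
    times = np.asarray(times, float)
    d = np.asarray(defaulted, int)
    return d.sum() / times.sum()

def survival_prob(lam, horizon):
    """Term structure of survival probabilities under a constant intensity."""
    return np.exp(-lam * np.asarray(horizon, float))

lam = constant_intensity_mle(times=[2.0, 5.0, 1.5, 7.0, 3.0],
                             defaulted=[1, 0, 1, 0, 0])
print(lam, survival_prob(lam, [1, 5, 10]))
```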
Raymond L. Chambers and Robert G. Clark
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780198566625
- eISBN:
- 9780191738449
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566625.003.0009
- Subject:
- Mathematics, Probability / Statistics
Robust estimation of the prediction variance discusses the issues that arise when model misspecification is second order, that is, when the second-order moments of the working model for the population are incorrect, as is typically the case. Here balanced sampling is of no avail, and alternative, more robust methods of prediction variance estimation must be used. This chapter focuses on the development of these methods for the case where the working population model is the ratio model, as well as for the case where a general linear predictor is used and the working model has quite general first- and second-order moments. The case of a clustered population with unknown within-cluster heteroskedasticity is also discussed, and the ultimate cluster variance estimator is derived.
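A hedged illustration of the general idea: under the ratio working model, the prediction variance of the ratio predictor of a total can be estimated from squared sample residuals without imposing a particular second-order moment structure. The calculation below is a generic residual-based sketch for illustration only, not the specific estimator derived in the chapter, and the simulated inputs are invented.

```python
import numpy as np

def ratio_total_and_robust_variance(y_s, x_s, x_total, N):
    """Ratio predictor of a population total with a residual-based variance.

    Working (ratio) model: y_i = beta * x_i + e_i with independent errors of
    unspecified variance.  With b = sum(y_s) / sum(x_s), the total is predicted
    by sum(y_s) + b * (x_total - sum(x_s)).  The prediction variance
    x_r^2 * Var(b) + (sum of non-sample error variances) is estimated from the
    squared sample residuals; the non-sample variance sum is approximated by
    scaling up the average squared sample residual.
    """
    y_s, x_s = np.asarray(y_s, float), np.asarray(x_s, float)
    n = y_s.size
    xs_tot = x_s.sum()
    x_r = x_total - xs_tot                     # non-sample total of x
    b = y_s.sum() / xs_tot
    t_hat = y_s.sum() + b * x_r                # predicted population total
    resid2 = (y_s - b * x_s) ** 2
    var_b = resid2.sum() / xs_tot ** 2         # residual-based variance of b
    var_hat = x_r ** 2 * var_b + (N - n) / n * resid2.sum()
    return t_hat, var_hat

rng = np.random.default_rng(3)
x_s = rng.uniform(5, 50, size=100)
y_s = 1.8 * x_s * (1 + 0.2 * rng.normal(size=100))
print(ratio_total_and_robust_variance(y_s, x_s, x_total=60000.0, N=2000))
```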
Darrell Duffie
- Published in print:
- 2011
- Published Online:
- September 2011
- ISBN:
- 9780199279234
- eISBN:
- 9780191728419
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199279234.001.0001
- Subject:
- Economics and Finance, Financial Economics
This book addresses the empirical estimation of corporate default risk. The book addresses the measurement of corporate default risk based on the empirical estimation of default intensity processes, and their correlation. The default intensity of a borrower is the mean rate of arrival of default, conditional on the available information. For example, a default intensity of 0.1 means an expected arrival rate of one default per ten years, given all current information. Default intensities change with the arrival of new information about the borrower and its economic environment. The main focus here is on methodologies for estimating default intensities and on some key empirical properties of corporate default risk. The book pays special attention to the correlation of default risk across firms, and unobserved “frailty” factors that increase this correlation.
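The abstract's numerical example can be made concrete under the simplest assumption of a constant intensity: with an intensity of 0.1 per year, the mean waiting time to default is ten years, and survival probabilities over any horizon follow exp(-lambda * t). A tiny Python check, purely illustrative:

```python
import numpy as np

lam = 0.10                               # default intensity: 0.1 events per year
expected_wait = 1.0 / lam                # mean time to default = 10 years
p_default_1y = 1 - np.exp(-lam * 1.0)    # one-year default probability ~ 9.5%
p_survive_5y = np.exp(-lam * 5.0)        # five-year survival probability ~ 60.7%
print(expected_wait, p_default_1y, p_survive_5y)
```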
M. Vidyasagar
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691133157
- eISBN:
- 9781400850518
- Item type:
- book
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691133157.001.0001
- Subject:
- Mathematics, Probability / Statistics
This book explores important aspects of Markov and hidden Markov processes and the applications of these ideas to various problems in computational biology. It starts from first principles, so that no previous knowledge of probability is necessary. However, the work is rigorous and mathematical, making it useful to engineers and mathematicians, even those not interested in biological applications. A range of exercises is provided, including drills to familiarize the reader with concepts and more advanced problems that require deep thinking about the theory. Biological applications are taken from post-genomic biology, especially genomics and proteomics. The topics examined include standard material such as the Perron–Frobenius theorem, transient and recurrent states, hitting probabilities and hitting times, maximum likelihood estimation, the Viterbi algorithm, and the Baum–Welch algorithm. The book contains discussions of extremely useful topics not usually seen at the basic level, such as ergodicity of Markov processes, Markov Chain Monte Carlo (MCMC), information theory, and large deviation theory for both i.i.d and Markov processes. It also presents state-of-the-art realization theory for hidden Markov models. Among biological applications, it offers an in-depth look at the BLAST (Basic Local Alignment Search Technique) algorithm, including a comprehensive explanation of the underlying theory. Other applications such as profile hidden Markov models are also explored.
Željko Ivezić, Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691151687
- eISBN:
- 9781400848911
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691151687.003.0006
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
Inferring the probability density function (pdf) from a sample of data is known as density estimation. The same methodology is often called data smoothing. Density estimation in the one-dimensional case has been discussed in the previous chapters. This chapter extends it to multidimensional cases. Density estimation is one of the most critical components of extracting knowledge from data. For example, given a pdf estimated from point data, we can generate simulated distributions of data and compare them against observations. If we can identify regions of low probability within the pdf, we have a mechanism for the detection of unusual or anomalous sources. If our point data can be separated into subsamples using provided class labels, we can estimate the pdf for each subsample and use the resulting set of pdfs to classify new points: the probability that a new point belongs to each subsample/class is proportional to the pdf of each class evaluated at the position of the point.
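A short sketch of the classification-by-class-densities idea described above, using one Gaussian kernel density estimate per class via scikit-learn's KernelDensity; the bandwidth and the simulated two-class data are assumptions of the example, not choices made in the book.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_class_kdes(X, labels, bandwidth=0.5):
    """Fit one kernel density estimate per class label and record class priors."""
    kdes, priors = {}, {}
    for c in np.unique(labels):
        Xc = X[labels == c]
        kdes[c] = KernelDensity(bandwidth=bandwidth).fit(Xc)
        priors[c] = Xc.shape[0] / X.shape[0]
    return kdes, priors

def classify(X_new, kdes, priors):
    """Assign each point to the class whose prior-weighted pdf is largest there."""
    classes = sorted(kdes)
    log_post = np.column_stack([
        kdes[c].score_samples(X_new) + np.log(priors[c]) for c in classes
    ])
    return np.array(classes)[np.argmax(log_post, axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
labels = np.repeat([0, 1], 200)
kdes, priors = fit_class_kdes(X, labels)
print(classify(np.array([[0.2, -0.1], [2.8, 3.1]]), kdes, priors))   # -> [0 1]
```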
Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691151687
- eISBN:
- 9781400848911
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691151687.003.0007
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
With the dramatic increase in data available from a new generation of astronomical telescopes and instruments, many analyses must address the question of the complexity as well as size of the data set. This chapter deals with how we can learn which measurements, properties, or combinations thereof carry the most information within a data set. It describes techniques that are related to concepts discussed when describing Gaussian distributions, density estimation, and the concepts of information content. The chapter begins with an exploration of the problems posed by high-dimensional data. It then describes the data sets used in this chapter, and introduces perhaps the most important and widely used dimensionality reduction technique, principal component analysis (PCA). The remainder of the chapter discusses several alternative techniques which address some of the weaknesses of PCA.
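A compact sketch of PCA computed from the singular value decomposition of the centred data matrix; the simulated low-rank data set is invented for the example and is unrelated to the astronomical data sets used in the chapter.

```python
import numpy as np

def pca(X, n_components):
    """Principal component analysis via the SVD of the centred data matrix.

    Returns the projected data, the component directions, and the fraction of
    total variance explained by each retained component.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]        # principal directions (rows)
    projected = Xc @ components.T         # coordinates in the new basis
    explained = (s ** 2) / np.sum(s ** 2)
    return projected, components, explained[:n_components]

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.05 * rng.normal(size=(500, 10))
proj, comps, frac = pca(X, n_components=2)
print(frac)   # the two leading components capture almost all the variance
```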
Paul G. Voillequé
- Published in print:
- 2008
- Published Online:
- September 2008
- ISBN:
- 9780195127270
- eISBN:
- 9780199869121
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195127270.003.0002
- Subject:
- Biology, Ecology, Biochemistry / Molecular Biology
This chapter addresses the problem of estimating historic or future releases of radionuclides. The process of defining the “source term” is often the first step in a risk assessment. When detailed information about environmental or personal contamination levels is available, source term estimation may not be helpful; however, such data are never available for prospective assessments and are frequently unavailable for historic releases. Reconstruction of historical measurements of releases has been performed for several facilities, and the resulting estimates have been used to estimate environmental contamination and health risks. Effluent discharges from new facilities are now treated to reduce radionuclide releases and meet environmental dose standards that have been established. Prospective release estimates, therefore, strongly depend upon the cleanup options selected and their performance. A wide variety of nuclear fuel cycle facilities is considered in the discussion.