Peter Mörters, Roger Moser, Mathew Penrose, Hartmut Schwetlick, and Johannes Zimmer (eds)
- Published in print: 2008
- Published Online: September 2008
- ISBN: 9780199239252
- eISBN: 9780191716911
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199239252.001.0001
- Subject: Mathematics, Probability / Statistics, Analysis
There has recently been a significant increase in activity at the interface between applied analysis and probability theory. With the potential of a combined approach to the study of various physical systems in view, this book is a collection of topical survey articles by leading researchers in both fields, working on the mathematical description of growth phenomena in the broadest sense. The main aim of the book is to foster interaction between researchers in probability and analysis, and to inspire joint efforts to attack important physical problems. Mathematical methods discussed in the book include large deviation theory, the lace expansion, harmonic analysis, multi-scale techniques, and homogenization of partial differential equations. Models based on the physics of individual particles are discussed alongside models based on the continuum description of large collections of particles, and the mathematical theories are used to describe physical phenomena such as droplet formation, Bose–Einstein condensation, Anderson localization, Ostwald ripening, and the formation of the early universe.
Ludwig Fahrmeir and Thomas Kneib
- Published in print: 2011
- Published Online: September 2011
- ISBN: 9780199533022
- eISBN: 9780191728501
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199533022.001.0001
- Subject: Mathematics, Probability / Statistics, Biostatistics
Several recent advances in smoothing and semiparametric regression are presented in this book from a unifying Bayesian perspective. Simulation-based full Bayesian Markov chain Monte Carlo (MCMC) inference, as well as empirical Bayes procedures closely related to penalized likelihood estimation and mixed models, are considered here. Throughout, the focus is on semiparametric regression and smoothing based on basis expansions of unknown functions and effects, in combination with smoothness priors for the basis coefficients. The book begins with a review of basic methods for smoothing and mixed models; longitudinal data, spatial data, and event history data are then treated in separate chapters. Worked examples from various fields such as forestry, development economics, medicine, and marketing illustrate the statistical methods covered in this book. Most of these examples have been analysed using implementations in the Bayesian software BayesX, and some with R code.
José M. Bernardo, M. J. Bayarri, James O. Berger, A. P. Dawid, David Heckerman, Adrian F. M. Smith, and Mike West (eds)
- Published in print: 2011
- Published Online: January 2012
- ISBN: 9780199694587
- eISBN: 9780191731921
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199694587.001.0001
- Subject: Mathematics, Probability / Statistics
The Valencia International Meetings on Bayesian Statistics – established in 1979 and held every four years – have been the forum for a definitive overview of current concerns and activities in Bayesian statistics. These are the edited proceedings of the ninth meeting, and contain the invited papers, each followed by its discussion and a rejoinder by the author(s). In the tradition of the earlier editions, this volume encompasses an enormous range of theoretical and applied research, highlighting the breadth, vitality, and impact of Bayesian thinking in interdisciplinary research across many fields, as well as the corresponding growth and vitality of core theory and methodology. The Valencia 9 invited papers cover a broad range of topics, including foundational and core theoretical issues in statistics, the continued development of new and refined computational methods for complex Bayesian modelling, substantive applications of flexible Bayesian modelling, and new developments in the theory and methodology of graphical modelling. They also describe advances in methodology for specific applied fields, including financial econometrics and portfolio decision making, public policy applications for drug surveillance, studies in the physical and environmental sciences, astronomy and astrophysics, climate change studies, molecular biosciences, statistical genetics, and stochastic dynamic networks in systems biology.
Paul Damien, Petros Dellaportas, Nicholas G. Polson, and David A. Stephens (eds)
- Published in print: 2013
- Published Online: May 2013
- ISBN: 9780199695607
- eISBN: 9780191744167
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199695607.001.0001
- Subject: Mathematics, Probability / Statistics
The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. The book takes the reader on a statistical journey that begins with the basic structure of Bayesian theory and then details most of the past and present advances in the field. It honours the contributions of Sir Adrian F. M. Smith, one of the seminal Bayesian researchers, known for his work on hierarchical models, sequential Monte Carlo, and Markov chain Monte Carlo, and for his mentoring of numerous graduate students.
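As a reminder of what the MCMC machinery discussed here boils down to, the following is a minimal random-walk Metropolis sampler. It is a generic textbook sketch, not code from the book; the standard-normal target and all tuning parameters are illustrative choices:

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D unnormalized log density
    (a generic sketch of the algorithm, not code from the book)."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, step)              # symmetric proposal
        lp_y = log_target(y)
        # Accept with probability min(1, target(y)/target(x)).
        if lp_y >= lp or math.log(rng.random()) < lp_y - lp:
            x, lp = y, lp_y
        samples.append(x)
    return samples

# Target: a standard normal "posterior", log density -x^2/2 up to a constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
burned = draws[5000:]                             # discard burn-in
mean = sum(burned) / len(burned)                  # should be near 0
```

After burn-in, the empirical mean and variance of the draws approximate the target's moments (0 and 1 here), which is the basic guarantee hierarchical-model MCMC builds on.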
Steven J. Miller (ed.)
- Published in print: 2015
- Published Online: October 2017
- ISBN: 9780691147611
- eISBN: 9781400866595
- Item type: book
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691147611.001.0001
- Subject: Mathematics, Probability / Statistics
Benford's law states that the leading digits of many data sets are not uniformly distributed from one through nine, but rather exhibit a profound bias. This bias is evident in everything from electricity bills and street addresses to stock prices, population numbers, mortality rates, and the lengths of rivers. This book demonstrates the many useful techniques that arise from the law, showing how truly multidisciplinary it is, and encouraging collaboration. Beginning with the general theory, the chapters explain the prevalence of the bias, highlighting explanations for when systems should and should not follow Benford's law and how quickly such behavior sets in. The book goes on to discuss important applications in disciplines ranging from accounting and economics to psychology and the natural sciences. The book describes how Benford's law has been successfully used to expose fraud in elections, medical tests, tax filings, and financial reports. Additionally, numerous problems, background materials, and technical details are available online to help instructors create courses around the book. Emphasizing common challenges and techniques across the disciplines, this book shows how Benford's law can serve as a productive meeting ground for researchers and practitioners in diverse fields.
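The logarithmic bias is easy to check empirically. A small sketch (not from the book) tallies the leading digits of the first 1000 powers of 2 and compares them with the Benford probabilities log10(1 + 1/d):

```python
import math
from collections import Counter

def leading_digit_from_log(t):
    # Leading digit of a positive number whose base-10 log is t,
    # recovered from the fractional part of t.
    return int(10 ** (t - math.floor(t)))

# Leading digits of 2, 4, 8, ..., 2^1000, computed via n * log10(2).
counts = Counter(leading_digit_from_log(n * math.log10(2))
                 for n in range(1, 1001))
empirical = {d: counts[d] / 1000 for d in range(1, 10)}
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}  # P(digit = d)
```

For this sequence the empirical frequencies track the Benford probabilities closely: digit 1 leads about 30% of the powers, digit 9 under 5%.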
A. C. Davison, Yadolah Dodge, and N. Wermuth (eds)
- Published in print: 2005
- Published Online: September 2007
- ISBN: 9780198566540
- eISBN: 9780191718038
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198566540.001.0001
- Subject: Mathematics, Probability / Statistics
Sir David Cox is among the most important statisticians of the past half-century, having made pioneering and highly influential contributions to a wide range of topics in statistics and applied probability. This book contains summaries of the invited talks at a meeting held at the University of Neuchâtel in July 2004 to celebrate David Cox’s 80th birthday. The chapters describe current developments across a wide range of topics, from statistical theory and methods, through applied probability and modelling, to applications in areas including finance, epidemiology, hydrology, medicine, and social science. With chapters by numerous well-known statisticians, the book provides a summary of current statistical thinking across a wide front.
Geoffrey Grimmett and Colin McDiarmid (eds)
- Published in print: 2007
- Published Online: September 2007
- ISBN: 9780198571278
- eISBN: 9780191718885
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198571278.001.0001
- Subject: Mathematics, Probability / Statistics
Professor Dominic Welsh has made significant contributions to the fields of combinatorics and discrete probability, including matroids, complexity, and percolation. He has taught, influenced, and inspired generations of students and researchers in mathematics. This book summarizes and reviews the consistent themes from his work through a series of articles written by renowned experts. These articles, presented as chapters, contain original research work, set in a broader context by the inclusion of review material.
Stéphane Boucheron, Gábor Lugosi, and Pascal Massart
- Published in print: 2013
- Published Online: May 2013
- ISBN: 9780199535255
- eISBN: 9780191747106
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199535255.001.0001
- Subject: Mathematics, Probability / Statistics, Applied Mathematics
This monograph presents a mathematical theory of concentration inequalities for functions of independent random variables. The basic phenomenon under investigation is that if a function of many independent random variables does not depend too much on any of them, then it is concentrated around its expected value. This book offers a host of inequalities to quantify this statement. The authors describe the interplay between the probabilistic structure (independence) and a variety of tools ranging from functional inequalities and transportation arguments to information theory. Applications to the study of empirical processes, random projections, random matrix theory, and threshold phenomena are presented. The book offers a self-contained introduction to concentration inequalities, including a survey of concentration of sums of independent random variables, variance bounds, the entropy method, and the transportation method. Deep connections with isoperimetric problems are revealed. Special attention is paid to applications to the supremum of empirical processes.
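The concentration phenomenon can be illustrated in the simplest setting covered by such a theory, a sum of independent bounded variables, where Hoeffding's inequality bounds the deviation probability of the sample mean. A minimal simulation sketch (the parameter choices n, t, and the trial count are arbitrary):

```python
import math
import random

rng = random.Random(42)
n, trials, t = 100, 2000, 0.1  # arbitrary illustrative parameters

# Empirical probability that the mean of n Uniform(0,1) draws deviates
# from its expectation 1/2 by at least t.
hits = sum(
    abs(sum(rng.random() for _ in range(n)) / n - 0.5) >= t
    for _ in range(trials)
)
empirical_tail = hits / trials

# Hoeffding: P(|mean - 1/2| >= t) <= 2 * exp(-2 * n * t^2) for
# [0,1]-valued independent variables.
hoeffding_bound = 2 * math.exp(-2 * n * t * t)
```

In this run the bound evaluates to about 0.27, while the simulated tail probability is far smaller: the mean of 100 bounded variables is already tightly concentrated around 1/2.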
Charles L. Epstein and Rafe Mazzeo
- Published in print: 2013
- Published Online: October 2017
- ISBN: 9780691157122
- eISBN: 9781400846108
- Item type: book
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691157122.001.0001
- Subject: Mathematics, Probability / Statistics
This book provides the mathematical foundations for the analysis of a class of degenerate elliptic operators defined on manifolds with corners, which arise in a variety of applications such as population genetics, mathematical finance, and economics. The results discussed in this book prove the uniqueness of the solution to the martingale problem and therefore the existence of the associated Markov process. The book uses an “integral kernel method” to develop mathematical foundations for the study of such degenerate elliptic operators and the stochastic processes they define. The precise nature of the degeneracies of the principal symbol for these operators leads to solutions of the parabolic and elliptic problems that display novel regularity properties. Dually, the adjoint operator allows for rather dramatic singularities, such as measures supported on high-codimension strata of the boundary. The book establishes the uniqueness, existence, and sharp regularity properties for solutions to the homogeneous and inhomogeneous heat equations, as well as a complete analysis of the resolvent operator acting on Hölder spaces. It shows that the semigroups defined by these operators have holomorphic extensions to the right half-plane. The book also demonstrates precise asymptotic results for the long-time behavior of solutions to both the forward and backward Kolmogorov equations.
Florence Merlevède, Magda Peligrad, and Sergey Utev
- Published in print: 2019
- Published Online: April 2019
- ISBN: 9780198826941
- eISBN: 9780191865961
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198826941.001.0001
- Subject: Mathematics, Probability / Statistics
This book has its origin in the need to develop and analyze mathematical models for phenomena that evolve in time and influence one another, and it aims at a better understanding of the structure and asymptotic behavior of stochastic processes. The monograph has a twofold aim: first, to present tools for dealing with dependent structures directed toward obtaining normal approximations; second, to apply those normal approximations to various examples. The main tools consist of inequalities for dependent sequences of random variables, leading to limit theorems, including the functional central limit theorem (CLT) and the functional moderate deviation principle (MDP). The results point out large classes of dependent random variables which satisfy invariance principles, making possible the statistical study of data coming from stochastic processes with both short and long memory. Over the course of the book different types of dependence structures are considered, ranging from the traditional mixing structures to martingale-like structures and to weakly negatively dependent structures, which link the notion of mixing to the notions of association and negative dependence. Several applications have been carefully selected to exhibit the importance of the theoretical results. They include random walks in random scenery and determinantal processes. In addition, due to their importance in analyzing new data in economics, linear processes with dependent innovations are also considered and analyzed.
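The kind of normal approximation studied here can be previewed in the simplest dependent setting, a stationary AR(1) sequence (a geometrically mixing process): normalized partial sums S_n/√n have variance close to the long-run variance σ²/(1−φ)². A simulation sketch with arbitrary parameters, not code from the book:

```python
import random

# AR(1): X_t = phi * X_{t-1} + eps_t, eps_t ~ N(0, 1). The process is
# geometrically mixing, so a CLT holds for its partial sums, with
# limiting variance equal to the long-run variance sigma^2 / (1 - phi)^2.
rng = random.Random(1)
phi, n, reps = 0.5, 2000, 500
long_run_var = 1.0 / (1.0 - phi) ** 2  # = 4 for these parameters

norm_sums = []
for _ in range(reps):
    x, s = 0.0, 0.0
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        s += x
    norm_sums.append(s / n ** 0.5)  # S_n / sqrt(n)

mean = sum(norm_sums) / reps
var = sum((v - mean) ** 2 for v in norm_sums) / reps  # near long_run_var
```

Note that the limiting variance is the sum of all autocovariances, not the marginal variance: dependence inflates the fluctuations of the partial sums, which is exactly what the inequalities for dependent sequences must control.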
M. Vidyasagar
- Published in print: 2014
- Published Online: October 2017
- ISBN: 9780691133157
- eISBN: 9781400850518
- Item type: book
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691133157.001.0001
- Subject: Mathematics, Probability / Statistics
This book explores important aspects of Markov and hidden Markov processes and the applications of these ideas to various problems in computational biology. It starts from first principles, so that no previous knowledge of probability is necessary. However, the work is rigorous and mathematical, making it useful to engineers and mathematicians, even those not interested in biological applications. A range of exercises is provided, including drills to familiarize the reader with concepts and more advanced problems that require deep thinking about the theory. Biological applications are taken from post-genomic biology, especially genomics and proteomics. The topics examined include standard material such as the Perron–Frobenius theorem, transient and recurrent states, hitting probabilities and hitting times, maximum likelihood estimation, the Viterbi algorithm, and the Baum–Welch algorithm. The book contains discussions of extremely useful topics not usually seen at the basic level, such as ergodicity of Markov processes, Markov chain Monte Carlo (MCMC), information theory, and large deviation theory for both i.i.d. and Markov processes. It also presents state-of-the-art realization theory for hidden Markov models. Among biological applications, it offers an in-depth look at the BLAST (Basic Local Alignment Search Tool) algorithm, including a comprehensive explanation of the underlying theory. Other applications such as profile hidden Markov models are also explored.
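As an illustration of one of the standard topics listed, the Viterbi algorithm, here is a minimal decoder for a two-state HMM. This is a generic textbook sketch; the fair/biased-coin parameters are invented for the example, not taken from the book:

```python
import math

def viterbi(obs, states, log_pi, log_A, log_B):
    """Most probable hidden-state path by dynamic programming.
    log_pi[s]: initial log prob; log_A[r][s]: transition log prob r -> s;
    log_B[s][o]: log prob that state s emits observation o."""
    V = [{s: log_pi[s] + log_B[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        prev, col, ptr = V[-1], {}, {}
        for s in states:
            r_best = max(states, key=lambda r: prev[r] + log_A[r][s])
            col[s] = prev[r_best] + log_A[r_best][s] + log_B[s][o]
            ptr[s] = r_best
        V.append(col)
        back.append(ptr)
    # Trace the best path backwards through the stored pointers.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Hypothetical two-coin HMM: a fair coin "F" and a biased coin "B".
LOG = math.log
states = ["F", "B"]
log_pi = {"F": LOG(0.5), "B": LOG(0.5)}
log_A = {"F": {"F": LOG(0.9), "B": LOG(0.1)},
         "B": {"F": LOG(0.1), "B": LOG(0.9)}}
log_B = {"F": {"H": LOG(0.5), "T": LOG(0.5)},
         "B": {"H": LOG(0.9), "T": LOG(0.1)}}
path = viterbi(list("HHHHHHTTTTTT"), states, log_pi, log_A, log_B)
```

Working in log space avoids underflow on long sequences; here the decoder attributes the run of heads to the biased coin and the run of tails to the fair one.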
Jon Williamson
- Published in print: 2010
- Published Online: September 2010
- ISBN: 9780199228003
- eISBN: 9780191711060
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199228003.001.0001
- Subject: Mathematics, Probability / Statistics, Logic / Computer Science / Mathematical Philosophy
Arno Berger and Theodore P. Hill
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691163062
- eISBN:
- 9781400866588
- Item type:
- book
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691163062.001.0001
- Subject:
- Mathematics, Probability / Statistics
This book provides the first comprehensive treatment of Benford's law, the surprising logarithmic distribution of significant digits discovered in the late nineteenth century. Establishing the mathematical and statistical principles that underpin this intriguing phenomenon, the text combines up-to-date theoretical results with overviews of the law's colorful history, rapidly growing body of empirical evidence, and wide range of applications. The book begins with basic facts about significant digits, Benford functions, sequences, and random variables, including tools from the theory of uniform distribution. After introducing the scale-, base-, and sum-invariance characterizations of the law, the book develops the significant-digit properties of both deterministic and stochastic processes, such as iterations of functions, powers of matrices, differential equations, and products, powers, and mixtures of random variables. Two concluding chapters survey the finitely additive theory and the flourishing applications of Benford's law. Carefully selected diagrams, tables, and close to 150 examples illuminate the main concepts throughout. The book includes many open problems, in addition to dozens of new basic theorems and all the main references. A distinguishing feature is the emphasis on the surprising ubiquity and robustness of the significant-digit law. The book can serve as both a primary reference and a basis for seminars and courses.
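The "logarithmic distribution of significant digits" the abstract refers to is concrete enough to demonstrate directly. A short sketch, independent of the book itself, computes the Benford probabilities and compares them with a classic Benford sequence, the powers of 2:

```python
import math
from collections import Counter

# Benford's law: the leading significant digit d occurs with probability
# log10(1 + 1/d), so 1 leads about 30.1% of the time and 9 only about 4.6%.
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Powers of 2 are a standard deterministic example obeying the law: the
# empirical leading-digit frequencies of 2**n approach the logarithmic
# distribution as n grows (a consequence of log10(2) being irrational).
counts = Counter(int(str(2 ** n)[0]) for n in range(1, 10001))
empirical = {d: counts[d] / 10000 for d in range(1, 10)}

for d in range(1, 10):
    print(d, round(benford[d], 4), round(empirical[d], 4))
```

For 10,000 powers the empirical frequencies already match the theoretical values to within about a part in a thousand.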
Ray Chambers and Robert Clark
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780198566625
- eISBN:
- 9780191738449
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566625.001.0001
- Subject:
- Mathematics, Probability / Statistics
This book is an introduction to the model-based approach to survey sampling. It consists of three parts, with Part I focusing on estimation of population totals. Chapters 1 and 2 introduce survey sampling, and the model-based approach, respectively. Chapter 3 considers the simplest possible model, the homogeneous population model, which is then extended to stratified populations in Chapter 4. Chapter 5 discusses simple linear regression models for populations, and Chapter 6 considers clustered populations. The general linear population model is then used to integrate these results in Chapter 7. Part II of this book considers the properties of estimators based on incorrectly specified models. Chapter 8 develops robust sample designs that lead to unbiased predictors under model misspecification, and shows how flexible modelling methods like non-parametric regression can be used in survey sampling. Chapter 9 extends this development to misspecification robust prediction variance estimators and Chapter 10 completes Part II of the book with an exploration of outlier robust sample survey estimation. Chapters 11 to 17 constitute Part III of the book and show how model-based methods can be used in a variety of problem areas of modern survey sampling. They cover (in order) prediction of non-linear population quantities, sub-sampling approaches to prediction variance estimation, design and estimation for multipurpose surveys, prediction for domains, small area estimation, efficient prediction of population distribution functions and the use of transformations in survey inference. The book is designed to be accessible to undergraduate and graduate level students with a good grounding in statistics and applied survey statisticians seeking an introduction to model-based survey design and estimation.
Steve Selvin
- Published in print:
- 2019
- Published Online:
- May 2019
- ISBN:
- 9780198833444
- eISBN:
- 9780191872280
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198833444.001.0001
- Subject:
- Mathematics, Probability / Statistics, Applied Mathematics
The Joy of Statistics consists of a series of 42 “short stories,” each illustrating how elementary statistical methods are applied to data to produce insight and solutions to the questions data are collected to answer. The text contains brief histories of the evolution of statistical methods and a number of brief biographies of the most famous statisticians of the 20th century. Also included throughout are a few statistical jokes, puzzles, and traditional stories. The Joy of Statistics is elementary in level and explores a variety of statistical applications using graphs and plots, along with detailed and intuitive descriptions, occasionally drawing on a bit of 10th-grade mathematics. Examples of the topics include gambling games such as roulette, blackjack, and lotteries, as well as more serious subjects such as comparisons of black/white infant mortality rates, coronary heart disease risk, and ethnic differences in Hodgkin’s disease. The statistical descriptions of these methods and topics are accompanied by easy-to-understand explanations labeled “how it works.”
Peter Grindrod
- Published in print:
- 2014
- Published Online:
- March 2015
- ISBN:
- 9780198725091
- eISBN:
- 9780191792526
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198725091.001.0001
- Subject:
- Mathematics, Analysis, Probability / Statistics
This book presents analytics within a framework of mathematical theory and concepts, building upon firm theory and foundations of probability theory, graphs, and networks, random matrices, linear algebra, optimization, forecasting, discrete dynamical systems, and more. Following on from the theoretical considerations, applications are given to data from commercially relevant interests: supermarket baskets; loyalty cards; mobile phone call records; smart meters; ‘omic’ data; sales promotions; social media; and microblogging. Each chapter tackles a topic in analytics: social networks and digital marketing; forecasting; clustering and segmentation; inverse problems; Markov models of behavioural changes; multiple hypothesis testing and decision-making; and so on. Chapters start with background mathematical theory explained with a strong narrative and then give way to practical considerations and then to exemplar applications.
Russell Cheng
- Published in print:
- 2017
- Published Online:
- September 2017
- ISBN:
- 9780198505044
- eISBN:
- 9780191746390
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198505044.001.0001
- Subject:
- Mathematics, Probability / Statistics
This book discusses the fitting of parametric statistical models to data samples. Emphasis is placed on (i) how to recognize situations where the problem is non-standard, when parameter estimates behave unusually, and (ii) the use of parametric bootstrap resampling methods in analysing such problems. Simple and practical model building is an underlying theme. A frequentist viewpoint based on likelihood is adopted, for which there is a well-established and very practical theory. The standard situation is where certain widely applicable regularity conditions hold. However, there are many apparently innocuous situations where standard theory breaks down, sometimes spectacularly. Most of the departures from regularity are described geometrically in the book, with mathematical detail only sufficient to clarify the non-standard nature of a problem and to allow formulation of practical solutions. The book is intended for anyone with a basic knowledge of statistical methods typically covered in a university statistical inference course who wishes to understand or study how standard methodology might fail. Simple, easy-to-understand statistical methods are presented which overcome these difficulties, and illustrated by detailed examples drawn from real applications. Parametric bootstrap resampling is used throughout for analysing the properties of fitted models, illustrating its ease of implementation even in non-standard situations. Distributional properties are obtained numerically for estimators or statistics not previously considered in the literature because their distributional properties are too hard to obtain theoretically. Bootstrap results are presented mainly graphically in the book, providing easy-to-understand demonstration of the sampling behaviour of estimators.
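The parametric bootstrap emphasized in this abstract has a simple core recipe: fit the model, simulate fresh samples from the fitted model, and refit to see how the estimator varies. The following generic sketch is not code from the book; the exponential model, sample size, and rate are invented for illustration:

```python
import random
import statistics

# Parametric bootstrap sketch: estimate the sampling variability of the
# sample mean under a fitted exponential model.
random.seed(1)
data = [random.expovariate(0.5) for _ in range(200)]  # "observed" sample
rate_hat = 1 / statistics.mean(data)                  # MLE of the rate

boot_means = []
for _ in range(1000):
    # draw a fresh sample from the *fitted* model and recompute the statistic
    resample = [random.expovariate(rate_hat) for _ in range(len(data))]
    boot_means.append(statistics.mean(resample))

se_boot = statistics.stdev(boot_means)  # bootstrap standard error
print(round(se_boot, 3))
```

For this model the bootstrap answer can be sanity-checked against theory: the standard error of the mean of n exponential observations with rate λ is 1/(λ√n), roughly 0.14 here.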
Christopher G. Small and Jinfang Wang
- Published in print:
- 2003
- Published Online:
- September 2007
- ISBN:
- 9780198506881
- eISBN:
- 9780191709258
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198506881.001.0001
- Subject:
- Mathematics, Probability / Statistics
Nonlinearity arises in statistical inference in various ways, with varying degrees of severity, as an obstacle to statistical analysis. More entrenched forms of nonlinearity often require intensive numerical methods to construct estimators. Root search algorithms and one-step estimators are standard methods of solution. This book provides a comprehensive study of nonlinear estimating equations and artificial likelihoods for statistical inference. It provides extensive coverage and comparison of hill climbing algorithms which, when started at points of nonconcavity, often have very poor convergence properties. For additional flexibility, a number of modifications to these standard methods are proposed. The book also goes beyond simple root search algorithms to include a discussion of the testing of roots for consistency and the modification of available estimating functions to provide greater stability in inference. A variety of examples from practical applications are included to illustrate the problems and possibilities.
Thomas A. Weber
- Published in print:
- 2011
- Published Online:
- August 2013
- ISBN:
- 9780262015738
- eISBN:
- 9780262298483
- Item type:
- book
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262015738.001.0001
- Subject:
- Mathematics, Probability / Statistics
This book bridges optimal control theory and economics, discussing ordinary differential equations (ODEs), optimal control, game theory, and mechanism design in one volume. Technically rigorous and largely self-contained, it provides an introduction to the use of optimal control theory for deterministic continuous-time systems in economics. The theory of ordinary differential equations is the backbone of the theory developed in the book, and Chapter 2 offers a detailed review of basic concepts in the theory of ODEs, including the solution of systems of linear ODEs, state-space analysis, potential functions, and stability analysis. Following this, the book covers the main results of optimal control theory, in particular necessary and sufficient optimality conditions; game theory, with an emphasis on differential games; and the application of control-theoretic concepts to the design of economic mechanisms. Appendices provide a mathematical review and full solutions to all end-of-chapter problems. The material is presented at three levels: single-person decision making; games, in which a group of decision makers interact strategically; and mechanism design, which is concerned with a designer’s creation of an environment in which players interact to maximize the designer’s objective. The book focuses on applications; the problems are an integral part of the text.
Raphaël Mourad (ed.)
- Published in print:
- 2014
- Published Online:
- December 2014
- ISBN:
- 9780198709022
- eISBN:
- 9780191779619
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198709022.001.0001
- Subject:
- Mathematics, Probability / Statistics, Biostatistics
At the crossroads between statistics and machine learning, probabilistic graphical models provide a powerful formal framework to model complex data. Probabilistic graphical models are probabilistic models whose graphical components denote conditional independence structures between random variables. The probabilistic framework makes it possible to deal with data uncertainty while the conditional independence assumption helps process high dimensional and complex data. Examples of probabilistic graphical models are Bayesian networks and Markov random fields, which represent two of the most popular classes of such models. With the rapid advancements of high-throughput technologies and the ever-decreasing costs of these next generation technologies, a fast-growing volume of biological data of various types—the so-called omics—is in need of accurate and efficient methods for modeling, prior to further downstream analysis. Network reconstruction from gene expression data represents perhaps the most emblematic area of research where probabilistic graphical models have been successfully applied. However, these models have also created renewed interest in genetics, in particular in association genetics, causality discovery, prediction of outcomes, detection of copy number variations, and epigenetics. For all these reasons, it is foreseeable that such models will have a prominent role to play in advances in genome-wide analyses.
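The defining property mentioned in this abstract, that a graph encodes conditional independence between random variables, can be made concrete with a tiny Bayesian network. The sketch below is not from the book; the three binary variables and their probability tables are invented for illustration:

```python
from itertools import product

# Hypothetical Bayesian network A -> B, A -> C: the joint distribution
# factorizes as P(a, b, c) = P(a) P(b|a) P(c|a), which encodes the
# conditional independence of B and C given A.
P_a = {0: 0.6, 1: 0.4}
P_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
P_c_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    """Joint probability read off the network's factorization."""
    return P_a[a] * P_b_given_a[a][b] * P_c_given_a[a][c]

# Verify the encoded independence: P(b, c | a) == P(b | a) * P(c | a)
for a, b, c in product((0, 1), repeat=3):
    p_bc_given_a = joint(a, b, c) / P_a[a]
    assert abs(p_bc_given_a - P_b_given_a[a][b] * P_c_given_a[a][c]) < 1e-12

print("B and C are conditionally independent given A")
```

The point of the factorization is economy: the full joint over three binary variables needs 7 free numbers, while the network above needs only 5, and the gap grows dramatically in the high-dimensional omics settings the abstract describes.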