S. N. Afriat
- Published in print: 1987
- Published Online: November 2003
- ISBN: 9780198284611
- eISBN: 9780191595844
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/0198284616.001.0001
- Subject: Economics and Finance, Microeconomics
This book approaches various aspects of economics that have to do with choice, and the opportunity for it, such as individual and social choice, production, optimal programming, and the market. The topics belong mostly to microeconomics, but they also have other connections. The object is to state a view about choice and value and to give an account of the logical apparatus involved. Alongside this, the aim is to present limited matters fairly completely and unproblematically and, where there is some issue about their nature, to consider that as well. The book consists of six parts, each containing several chapters.
Parts I–IV deal with generalities about choice, individual or social, and with representative economic topics. The remaining parts are concerned more with straightforwardly mathematical subjects, which have an application or interpretation in economics but need not be exclusively connected to it. Chapters are often fairly self‐contained or belong to sequences that can be taken more or less on their own. The topics lie in the main fabric of economic theory, and most students encounter them. A preamble at the start of every chapter says what it is about; from this, and perhaps some further scanning, the main ideas should be easily gathered by readers who are not concerned with every detail. Expository material and reworkings of published fragments have been joined with unpublished work from past and recent years. In all, there is a view about choice and ‘the optimum’ in economics that is surely acceptable to some, and perhaps what they have always thought, but undoubtedly not to everyone.
John Geweke and Garland Durham
- Published in print: 2020
- Published Online: December 2020
- ISBN: 9780190636685
- eISBN: 9780190636722
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190636685.003.0015
- Subject: Economics and Finance, Microeconomics
Rényi divergence is a natural way to measure the rate of information flow in contexts like Bayesian updating. This chapter shows how Monte Carlo integration can be used to measure Rényi divergence when (as is often the case) only kernels of the relevant probability densities are available. The chapter further demonstrates that Rényi divergence is central to the convergence and efficiency of Monte Carlo integration procedures in which information flow is controlled. It uses this perspective to develop more flexible approaches to the controlled introduction of information; in the limited set of examples considered here, these alternatives enhance efficiency.
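The abstract does not spell out an estimator, but the core idea can be illustrated with a self-normalized importance-sampling sketch (all function and variable names below are illustrative, not taken from the chapter): with only unnormalized kernels of p and q and draws from q, the Rényi divergence of order α can still be estimated, because the unknown normalizing constants cancel once the importance weights are normalized by their sample mean.

```python
import numpy as np

def renyi_divergence_mc(p_kernel, q_kernel, q_sampler, alpha, n=100_000, seed=0):
    """Self-normalized Monte Carlo estimate of the Renyi divergence
    D_alpha(p || q) = 1/(alpha - 1) * log E_q[(p/q)^alpha],
    using only unnormalized density kernels p_kernel and q_kernel.
    The normalizing constants cancel because the weights are
    normalized by their sample mean."""
    rng = np.random.default_rng(seed)
    x = q_sampler(rng, n)                       # draws from q
    w = p_kernel(x) / q_kernel(x)               # unnormalized weight ratios
    ratio = np.mean(w ** alpha) / np.mean(w) ** alpha
    return np.log(ratio) / (alpha - 1.0)

# Illustrative example: p = N(0, 1), q = N(1, 1.5^2), each given only as a kernel.
p_kernel = lambda x: np.exp(-0.5 * x**2)                   # unnormalized N(0, 1)
q_kernel = lambda x: np.exp(-0.5 * ((x - 1.0) / 1.5)**2)   # unnormalized N(1, 1.5^2)
q_sampler = lambda rng, n: rng.normal(1.0, 1.5, size=n)

print(renyi_divergence_mc(p_kernel, q_kernel, q_sampler, alpha=0.5))
```

This is only a minimal sketch of the general technique the abstract refers to; the chapter's own procedures and the way it controls information flow are not reproduced here.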
Min Chen
- Published in print: 2020
- Published Online: December 2020
- ISBN: 9780190636685
- eISBN: 9780190636722
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190636685.003.0016
- Subject: Economics and Finance, Microeconomics
The core of data science is our fundamental understanding of the data intelligence processes that transform data into decisions. One aspect of this understanding is how to analyze the cost-benefit of data intelligence workflows. This work builds on the information-theoretic metric proposed by Chen and Golan for this purpose, and on several recent studies and applications of that metric. We present a set of extended interpretations of the metric by relating it to encryption, compression, model development, perception, cognition, languages, and media.
Robert G. Chambers
- Published in print: 2021
- Published Online: December 2020
- ISBN: 9780190063016
- eISBN: 9780190063047
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190063016.003.0002
- Subject: Economics and Finance, Econometrics, Microeconomics
Mathematical tools necessary to the argument are presented and discussed. The focus is on concepts borrowed from the convex analysis and variational analysis literatures. The chapter starts by introducing the notions of a correspondence, upper hemi-continuity, and lower hemi-continuity. Superdifferential and subdifferential correspondences for real-valued functions are then introduced, and their essential properties and their role in characterizing global optima are surveyed. Convex sets are introduced and related to functional concavity (convexity). The relationship between functional concavity (convexity), superdifferentiability (subdifferentiability), and the existence of (one-sided) directional derivatives is examined. The theory of convex conjugates and essential conjugate duality results are discussed. Topics treated include Berge's Maximum Theorem, cyclical monotonicity of superdifferential (subdifferential) correspondences, concave (convex) conjugates and biconjugates, Fenchel's Inequality, the Fenchel-Rockafellar Conjugate Duality Theorem, support functions, superlinear functions, sublinear functions, the theory of infimal convolutions and supremal convolutions, and Fenchel's Duality Theorem.
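For readers skimming the abstract, the central conjugacy objects it lists can be recalled in their standard textbook form (these are the generic definitions, not necessarily the chapter's exact notation, which works extensively with the concave case):

```latex
% Convex (Legendre--Fenchel) conjugate and biconjugate
f^{*}(p)  = \sup_{x}\,\{\langle p, x\rangle - f(x)\}, \qquad
f^{**}(x) = \sup_{p}\,\{\langle p, x\rangle - f^{*}(p)\}, \qquad
f^{**} \le f,
% with equality when f is closed (lower semicontinuous) and convex.

% Fenchel's Inequality, immediate from the definition of f^{*}:
f(x) + f^{*}(p) \;\ge\; \langle p, x\rangle \quad \text{for all } x,\, p.

% The concave case mirrors this with an infimum in the conjugate
% and the inequalities reversed.
```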
Robert G. Chambers
- Published in print: 2021
- Published Online: December 2020
- ISBN: 9780190063016
- eISBN: 9780190063047
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190063016.003.0004
- Subject: Economics and Finance, Econometrics, Microeconomics
Three generic economic optimization problems (expenditure (cost) minimization, revenue maximization, and profit maximization) are studied using the mathematical tools developed in Chapters 2 and 3. Conjugate duality results are developed for each. The resulting dual representations (E(q;y), R(p,x), and π(p,q)) are shown to characterize all of the economically relevant information in, respectively, V(y), Y(x), and Gr(≽(y)). The implications of different restrictions on ≽(y) for the dual representations are examined.
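In their generic textbook form, and following the abstract's notation with V(y) the input requirement set and Y(x) the producible output set (the chapter's exact statements may differ), the three problems read:

```latex
E(q; y)   = \min_{x}\,\{\, q \cdot x \;:\; x \in V(y) \,\}              % expenditure (cost) minimization
R(p, x)   = \max_{y}\,\{\, p \cdot y \;:\; y \in Y(x) \,\}              % revenue maximization
\pi(p, q) = \max_{x, y}\,\{\, p \cdot y - q \cdot x \;:\; y \in Y(x) \,\}
          = \max_{x}\,\{\, R(p, x) - q \cdot x \,\}                      % profit maximization
```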
Pieter Adriaans
- Published in print: 2020
- Published Online: December 2020
- ISBN: 9780190636685
- eISBN: 9780190636722
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190636685.003.0002
- Subject: Economics and Finance, Microeconomics
A computational theory of meaning tries to understand the phenomenon of meaning in terms of computation. Here we give an analysis in the context of Kolmogorov complexity. This theory measures the complexity of a data set in terms of the length of the smallest program that generates the data set on a universal computer. As a natural extension, the set of all programs that produce a data set on a computer can be interpreted as the set of meanings of the data set. We give an analysis of the Kolmogorov structure function and of some other attempts to formulate a mathematical theory of meaning in terms of two-part optimal model selection. We show that such theories will always be context dependent: the invariance conditions that make Kolmogorov complexity a valid theory of measurement fail for this more general notion of meaning. One cause is polysemy: one data set (i.e., a string of symbols) can have different programs, with no mutual information, that compress it. Another cause is the existence of recursive bijections between ℕ and ℕ² for which the two-part code is always more efficient; these generate vacuous optimal two-part codes. We introduce a formal framework to study such contexts, in the form of a theory that generalizes the concept of Turing machines to learning agents that have a memory and have access to each other’s functions, in terms of a possible-world semantics. In such a framework, the notions of randomness and informativeness become agent dependent. We show that such a rich framework explains many of the anomalies of the current theory of algorithmic complexity. It also provides perspectives for, among other things, the study of cognitive and social processes. Finally, we sketch some application paradigms of the theory.
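One ingredient the abstract mentions, a recursive bijection between ℕ and ℕ², is easy to make concrete: the classical Cantor pairing function is a standard example. The sketch below is purely illustrative and is not drawn from the chapter.

```python
from math import isqrt

def pair(x: int, y: int) -> int:
    """Cantor pairing: a bijection from N x N onto N."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple[int, int]:
    """Inverse of the Cantor pairing function."""
    w = (isqrt(8 * z + 1) - 1) // 2   # largest w with w(w+1)/2 <= z
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

# Every natural number codes exactly one pair, and vice versa.
assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
assert all(pair(*unpair(z)) == z for z in range(2000))
```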
Paul Stoneman, Eleonora Bartoloni, and Maurizio Baussola
- Published in print: 2018
- Published Online: March 2018
- ISBN: 9780198816676
- eISBN: 9780191858321
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198816676.003.0012
- Subject: Economics and Finance, Microeconomics
This chapter addresses the impact of product innovation on economic welfare, initially defined as the sum of consumer and producer surplus. In a static framework, it is shown how product innovation can increase welfare via additions to consumer surplus and increased firm profits; an estimate from the literature is reported that the value of the increase for a typical product innovation might equal 2.5 per cent of the innovator’s revenue. Problems with measuring welfare by the sum of consumer and producer surplus are raised, especially because of changes in producers’ incentives to innovate. In an intertemporal framework, it is further shown that the optimal diffusion path could arise under either monopoly supply or competitive supply, depending on how buyers form their price expectations. It is also argued that variety itself may generate welfare, and whether free markets would generate optimal variety is discussed; the literature suggests not.
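As a reminder of the static welfare arithmetic behind the chapter's opening point, here is a textbook illustration (not the chapter's own numbers): with linear inverse demand and constant marginal cost, consumer and producer surplus can be computed directly under both market structures.

```latex
% Linear inverse demand p(q) = a - bq, constant marginal cost c < a.

% Competitive supply: p = c, \; q_c = (a - c)/b
W_{\text{comp}}
  = \underbrace{\tfrac{(a-c)^2}{2b}}_{\text{consumer surplus}}
  + \underbrace{0}_{\text{producer surplus}}

% Monopoly supply: q_m = (a - c)/(2b), \; p_m = (a + c)/2
W_{\text{mon}}
  = \underbrace{\tfrac{(a-c)^2}{8b}}_{\text{consumer surplus}}
  + \underbrace{\tfrac{(a-c)^2}{4b}}_{\text{producer surplus}}
  = \tfrac{3(a-c)^2}{8b}

% A product innovation that shifts demand outward (raises a) increases both
% terms, which is the static welfare gain the chapter discusses; the gap
% (a-c)^2/(8b) between the two totals is the usual monopoly deadweight loss.
```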