Hans Fehr and Fabian Kindermann
- Published in print:
- 2018
- Published Online:
- November 2020
- ISBN:
- 9780198804390
- eISBN:
- 9780191917202
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198804390.003.0013
- Subject:
- Computer Science, Programming Languages
Dynamic optimization is widely used in many fields of economics, finance, and business management. Typically one searches for the optimal time path of one or several variables that maximizes the value of a specific objective function given certain constraints. While there exist some analytical solutions to deterministic dynamic optimization problems, things become much more complicated as soon as the environment in which we are searching for optimal decisions becomes uncertain. In such cases researchers typically rely on the technique of dynamic programming. This chapter introduces the principles of dynamic programming and provides a couple of solution algorithms that differ in accuracy, speed, and applicability. Chapters 8 to 11 show how to apply these dynamic programming techniques to various problems in macroeconomics and finance. To get things started we want to lay out the basic idea of dynamic programming and introduce the language that is typically used to describe it. The easiest way to do this is with a very simple example that we can solve both ‘by hand’ and with the dynamic programming technique. Let’s assume an agent owns a certain resource (say a cake or a mine) which has the size a0. In every period t = 0, 1, 2, ..., ∞ the agent can decide how much to extract from this resource and consume, i.e. how much of the cake to eat or how many resources to extract from the mine. We denote his consumption in period t as ct. At each point in time the agent derives some utility from consumption, which we express by the so-called instantaneous utility function u(ct). We furthermore assume that the agent’s utility is additively separable over time and that the agent is impatient, meaning that he derives more utility from consuming in period t than in any later period. We describe the extent of his impatience with the time discount factor 0 < β < 1.
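A compact statement of the cake-eating problem sketched above, written as a Bellman equation. The resource constraint a_{t+1} = a_t − c_t is the natural one for the setup described (no return on the remaining stock); the chapter's own formulation may differ in details.

\[
\max_{\{c_t\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t u(c_t)
\quad \text{s.t.} \quad a_{t+1} = a_t - c_t, \; c_t \ge 0, \; a_t \ge 0, \; a_0 \text{ given},
\]
\[
V(a) = \max_{0 \le c \le a} \bigl\{ u(c) + \beta V(a - c) \bigr\}.
\]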
Ulf Grenander and Michael I. Miller
- Published in print:
- 2006
- Published Online:
- November 2020
- ISBN:
- 9780198505709
- eISBN:
- 9780191916564
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198505709.003.0010
- Subject:
- Computer Science, Programming Languages
This chapter studies second-order and Gaussian fields on the background spaces which are the continuum limits of the finite graphs. To this end, random processes in Hilbert spaces are examined. Orthogonal expansions such as the Karhunen–Loève expansion are examined, and spectral representations of the processes are established. Gaussian processes induced by differential operators representing physical processes in the world are studied.
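For reference, the standard Karhunen–Loève expansion of a zero-mean, second-order process X(t), t ∈ T, with covariance C(s, t) takes the form below; this is textbook material rather than the chapter's own notation.

\[
X(t) = \sum_{k=1}^{\infty} \sqrt{\lambda_k}\, Z_k \varphi_k(t),
\qquad \int_T C(s,t)\, \varphi_k(s)\, ds = \lambda_k \varphi_k(t),
\]
where the \varphi_k are orthonormal eigenfunctions of the covariance operator and the Z_k are uncorrelated, zero-mean, unit-variance random variables (independent standard normals when X is Gaussian).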
Amy J. Ruggles and Richard L. Church
- Published in print:
- 1996
- Published Online:
- November 2020
- ISBN:
- 9780195085754
- eISBN:
- 9780197560495
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195085754.003.0012
- Subject:
- Archaeology, Archaeological Methodology and Techniques
The general interest of linking GIS capabilities and location-allocation (L-A) techniques to investigate certain spatial problems should be evident. The techniques and the technology are often complementary. A GIS can provide, manage, and display data that L-A models require; in turn, L-A models can enhance GIS analytic capabilities. This combination of information management and analysis should have wide appeal. The technique and technology may be especially well matched when one considers many of the special requirements of archaeological applications of L-A models. We intend to investigate and illustrate the value of such a combined approach through the example of a regional settlement analysis of the Late Horizon Basin of Mexico. Geographic information systems are increasingly common in archaeology. Their ability to manage, store, manipulate, and present spatial data is of real value, since the spatial relationship between objects is often an archaeological artifact in its own right. Space is central to both archaeological data (Spaulding 1960; Savage 1990a) and theory (Green 1990). Although GIS may not always offer intrinsically new and different manipulations or analyses of the data, they can make certain techniques easier to apply. There is a wide spectrum of GIS-based modeling applications in archaeology (Allen 1990; Savage 1990a). The anchors of this spectrum range from the use of GIS in the public sector in cultural resource management settings to more research-oriented applications. The strongest development of GIS-based archaeological modeling is probably in the former context. Models developed here are predominantly what Warren (1990) identifies as “inductive” predictive models where patterns in the empirical observations are recognized, usually using statistical methods or probability models. This type of application is usually identified with “site location” modeling (Savage 1990a). As defined, these models do not predict the probable locations of individual sites but rather calculate the probability that a geographic area will contain a site, given its environmental characteristics (Carmichael 1990: 218). The primary role of GIS in many of these applications is to manage and integrate spatial information and feed it to some exterior model.
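As a toy illustration of the “inductive” site-location idea described above (not a method taken from the chapter), the sketch below fits a logistic regression that returns the probability a grid cell contains a site given its environmental covariates. All variable names and numbers are invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical environmental covariates for surveyed grid cells:
# slope (degrees), distance to water (m), elevation (m); label 1 = site present.
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.uniform(0, 30, n),         # slope
    rng.uniform(0, 5000, n),       # distance to water
    rng.uniform(2200, 2600, n),    # elevation
])
# Synthetic labels: sites favour gentle slopes near water (illustrative only).
logit = 1.5 - 0.08 * X[:, 0] - 0.0006 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
# Probability that a new cell contains a site, given its characteristics.
print(model.predict_proba([[5.0, 300.0, 2400.0]])[0, 1])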
Robin Detterman, Jenny Ventura, Lihi Rosenthal, and Ken Berrick
- Published in print:
- 2019
- Published Online:
- November 2020
- ISBN:
- 9780190886516
- eISBN:
- 9780197559901
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190886516.003.0014
- Subject:
- Education, Care and Counseling of Students
By now you are likely aware that unconditional education (UE) is a practice of optimization. That is, the aim is to provide just the right amount of intervention to get the job done, but never unnecessary excess. Chapter 1 introduced the key principles that drive UE: efficiency, intentional relationship building, cross-sector responsibility, and local decision-making. Much of the rest of this book has addressed what happens in schools when these principles are absent. However, in reviewing early UE implementation pitfalls, most, if not all, missteps can be traced back to an overzealous application of these principles without adequate consideration for a just-right approach. This chapter will explore these common missteps and trace the surprising ways in which an over-application of the principles of UE can unintentionally replicate the very practices of exclusion it was designed to address. The previous chapters have proposed that healthy and trusting relationships play a central role when it comes to both personal and organizational learning. While the cultivation of relationships takes time, once established, the presence of relational trust can accelerate efforts. In schools highly impacted by trauma, an initial investment in relationship building is in fact a prerequisite for any successful transformation to take hold. The work of creating trauma-informed schools necessitates that we acknowledge these experiences and create plans to address the vicarious trauma often felt by school staff themselves. In some cases, even this is not enough. Organizational trauma—in which interactions within the entire building or district itself evidence the weight of working in resource-strapped environments—is common in public schools. It is often the case that years of unhealthy competition, inadequate funding, and failed initiatives and promises have overwhelmed an organization’s protective structures and rendered it less resilient for the hard work required to bring about the exact change the organization needs in order to heal and thrive (Vickers & Kouzmin, 2001). Not all public schools operate as traumatized systems, yet the conditions within many schools, particularly those serving a high percentage of students who belong to systematically oppressed groups, make them the most vulnerable.
Douglas Schenck and Peter Wilson
- Published in print:
- 1994
- Published Online:
- November 2020
- ISBN:
- 9780195087147
- eISBN:
- 9780197560532
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195087147.003.0010
- Subject:
- Computer Science, Software Engineering
Each information model is unique, as is the process of developing that model. In this Chapter we provide some broad guidelines to assist you in creating a quality model. We are basically recommending a policy of progressive refinement when modeling but the actual process usually turns out to be iterative. So, although one might start out with good intentions of using a top-down approach, one often ends up with a mixture of top-down, bottom-up, and middle-out strategies. The recommendations are principally cast in the form of check lists and give a skeleton outline of the process. Chapter 4 provides a complete worked example which puts some flesh on the bones. An information model may be created by a single person, given sufficient knowledge, or preferably and more likely by a team of people. An information model represents some portion of the real world. In order to produce such a model an obvious requirement is knowledge of the particular real world aspects that are of interest. People with this knowledge are called domain experts. The other side of the coin is that knowledge of information modeling is required in order to develop an information model. These people are called modeling experts. Typically, the domain experts are not conversant with information modeling and the modeling experts are not conversant with the subject. Hence the usual need for at least two parties to join forces. Together the domain and modeling experts can produce an information model that satisfies their own requirements. However, an information model is typically meant to be used by a larger audience than just its creators. There is a need to communicate the model to those who may not have the skills and knowledge to create such a model but who do have the background to utilize it. Thus the requirement for a third group to review the model during its formative stages to ensure that it is understandable by the target audience. This is the review team who act somewhat like the editors in a publishing house, or like friendly quality control inspectors.
D. Brynn Hibbert
- Published in print:
- 2007
- Published Online:
- November 2020
- ISBN:
- 9780195162127
- eISBN:
- 9780197562093
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195162127.003.0008
- Subject:
- Chemistry, Analytical Chemistry
Although thoroughly convinced that no laboratory can function without proper regard to quality in all its myriad forms, the question remains, What do we do? As a quality control manager with a budget and the best aspirations possible, what are the first steps in providing your laboratory or company with an appropriate system (other than buying this book, of course)? Each laboratory is unique, and what is important for one may be less important for another. So before buying software or nailing control charts to your laboratory door, sit down and think about what you hope to achieve. Consider how many different analyses are done, the volume of test items, the size of the operation, what level of training your staff have, whether the laboratory is accredited or seeking accreditation, specific quality targets agreed upon with a client, and any particular problems. This chapter explains how to use some of the standard quality tools, including ways to describe your present system and methods and ongoing statistical methods to chart progress to quality. Being in “statistical control” in an analytical laboratory is a state in which the results are without uncorrected bias and vary randomly with a known and acceptable standard deviation. Statistical control is held to be a good and proper state because once we are dealing with a random variable, future behavior can be predicted and therefore risk is controlled. Having results that conform to the normal or Gaussian distribution (see chapter 2) means that about 5 in every 100 results will fall outside ± 2 standard deviations of the population mean, and 3 in 1000 will fall outside ± 3 standard deviations. By monitoring results to discover if this state is violated, something can be done about the situation before the effects become serious (i.e., expensive). If you are in charge of quality control laboratories in manufacturing companies, it is important to distinguish between the variability of a product and the variability of the analysis. When analyzing tablets on a pharmaceutical production line, variability in the results of an analysis has two contributions: from the product itself and from the analytical procedure.
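The two tail probabilities quoted above follow directly from the normal distribution; the short sketch below checks them and computes Shewhart-style warning and action limits for a run of results (the numbers in the array are illustrative, not from the book).

import numpy as np
from scipy.stats import norm

# Fraction of results expected outside +/- 2 and +/- 3 standard deviations.
outside_2sd = 2 * (1 - norm.cdf(2))   # ~0.0455, about 5 in every 100 results
outside_3sd = 2 * (1 - norm.cdf(3))   # ~0.0027, about 3 in 1000 results
print(outside_2sd, outside_3sd)

# Warning (2 sd) and action (3 sd) limits around the mean of a run of results.
results = np.array([10.1, 9.9, 10.2, 10.0, 9.8, 10.3, 10.1])
mean, sd = results.mean(), results.std(ddof=1)
print("warning limits:", mean - 2 * sd, mean + 2 * sd)
print("action limits: ", mean - 3 * sd, mean + 3 * sd)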
James Wei
- Published in print:
- 2007
- Published Online:
- November 2020
- ISBN:
- 9780195159172
- eISBN:
- 9780197561997
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195159172.003.0012
- Subject:
- Chemistry, Physical Chemistry
The reverse search starts from a set of desired properties and asks for substances that possess them. Theoretical knowledge and past experience should be relied upon to suggest where to look, since it is the fastest and least expensive approach. When theoretical knowledge and past experience have been exhausted, then random searches may be the only way to make progress, if the problem is sufficiently important and there is enough budget and patience. Table 7.1 compares some of the requirements and the pros and cons of the guided search and the random search. The best strategy on how to spend resources of time and money most efficiently can be considered a problem in operations research, under the topic of “optimal resource allocation.” The best way to use the limited resources of money and time effectively may be a mixed strategy, with some guided and some random searches. Even a random search has to start somewhere. At the beginning, there should be a plan on what territories to cover and how to cover them. The plan can be deterministic, which is completely planned out in advance and executed accordingly. The plan can also be adaptive: after the arrival of each batch of results and preliminary evaluations, the plan would evolve to take advantage of the new information and understanding gained. Even a random search must begin at a starting point and stake out the most promising directions for initial explorations. In most cases, there is a lead compound that has some of the desired properties, but which is deficient in others, and serves as the starting point of the random search to find better compounds in this neighborhood. The historic cases in section 1.2 involve the modification of an existing product, such as vulcanizing raw rubber and adding an acetyl group to salicylic acid. One explores around the lead compound by using small amounts of additives, blending with other material, changing processing conditions and temperature, and changing structure by chemical reactions.
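The adaptive plan described above can be made concrete with a small sketch: explore in batches around the current best point (the lead compound), and use each batch of results to decide whether to move or to narrow the neighbourhood searched. The two-dimensional "composition space" and the toy property function are stand-ins invented for this illustration.

import numpy as np

rng = np.random.default_rng(1)

def measured_property(x):
    # Stand-in for a slow, expensive measurement of the desired property.
    return -np.sum((x - np.array([0.3, -0.7])) ** 2)

lead = np.zeros(2)                     # the lead compound / starting point
best_x, best_y = lead, measured_property(lead)
radius = 1.0
for batch in range(10):                # adaptive plan: revise after each batch
    candidates = best_x + radius * rng.normal(size=(20, 2))
    results = np.array([measured_property(c) for c in candidates])
    i = int(results.argmax())
    if results[i] > best_y:            # exploit the new information ...
        best_x, best_y = candidates[i], results[i]
    else:
        radius *= 0.7                  # ... or narrow the search neighbourhood
print(best_x, best_y)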
L. K. Doraiswamy
- Published in print:
- 2001
- Published Online:
- November 2020
- ISBN:
- 9780195096897
- eISBN:
- 9780197560822
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195096897.003.0010
- Subject:
- Chemistry, Organic Chemistry
When a reactant or a set of reactants undergoes several reactions (at least two) simultaneously, the reaction is said to be a complex reaction. The total conversion of the key reactant, which is used as a measure of reaction in simple reactions, has little meaning in complex reactions, and what is of primary interest is the fraction of reactant converted to the desired product. Thus the more pertinent quantity is product distribution, from which the conversion to the desired product can be calculated. This is usually expressed in terms of the yield or selectivity of the reaction with respect to the desired product. From the design point of view, an equally important consideration is the analysis and quantitative treatment of complex reactions, a common example of which is the dehydration of alcohol. We call such a set of simultaneous reactions a complex multiple reaction. It is also important to note that many organic syntheses involve a number of steps, each carried out under different conditions (and sometimes in different reactors), leading to what we designate as multistep reactions (normally called a synthetic scheme by organic chemists). This could, for example, be a sequence of reactions like dehydration, oxidation, Diels-Alder, and hydrogenation. This chapter outlines simple procedures for the treatment of complex multiple and multistep reactions and explains the concepts of selectivity and yield. For a more detailed treatment of multiple reactions, the following books may be consulted: Aris (1969) and Nauman (1987). We conclude the chapter by considering a reaction with both catalytic and noncatalytic steps, which also constitutes a kind of complex reaction. Because both chemists and chemical engineers are involved in formulating a practical strategy for accomplishing an organic synthesis, it is important to appreciate the roles of each.
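For concreteness, one common convention for the two measures named above, written for a key reactant A and desired product D; the chapter's precise definitions may differ (for example, in the basis chosen for the yield).

\[
X_A = \frac{\text{moles of } A \text{ converted}}{\text{moles of } A \text{ fed}}, \qquad
S_D = \frac{\text{moles of } A \text{ converted to } D}{\text{moles of } A \text{ converted}}, \qquad
Y_D = S_D \, X_A .
\]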
Michael A. Lones and Andy M. Tyrrell
- Published in print:
- 2004
- Published Online:
- November 2020
- ISBN:
- 9780195155396
- eISBN:
- 9780197561942
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195155396.003.0007
- Subject:
- Computer Science, Mathematical Theory of Computation
Programming is a process of optimization: taking a specification, which tells us what we want, and transforming it into an implementation, a program, which causes the target system to do exactly what we want. Conventionally, this optimization is achieved through manual design. However, manual design can be slow and error-prone, and recently there has been increasing interest in automatic programming: using computers to semiautomate the process of refining a specification into an implementation. Genetic programming is a developing approach to automatic programming, which, rather than treating programming as a design process, treats it as a search process. However, the space of possible programs is infinite, and finding the right program requires a powerful search process. Fortunately for us, we are surrounded by a monotonous search process capable of producing viable systems of great complexity: evolution. Evolution is the inspiration behind genetic programming. Genetic programming copies the process and genetic operators of biological evolution but does not take any inspiration from the biological representations to which they are applied. It can be argued that the program representation that genetic programming does use is not well suited to evolution. Biological representations, by comparison, are a product of evolution and, a fact to which this book is testament, describe computational structures. This chapter is about enzyme genetic programming, a form of genetic programming that mimics biological representations in an attempt to improve the evolvability of programs. Although it would be an advantage to have a familiarity with both genetic programming and biological representations, concise introductions to both these subjects are provided. According to modern biological understanding, evolution is solely responsible for the complexity we see in the structure and behavior of biological organisms. Nevertheless, evolution itself is a simple process that can occur in any population of imperfectly replicating entities where the right to replicate is determined by a process of selection. Consequently, given an appropriate model of such an environment, evolution can also occur within computers.
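The evolutionary process the passage describes can be sketched in a few lines: a population of imperfectly replicating entities in which the right to replicate is decided by selection. The sketch below evolves bit strings toward an arbitrary target; it is generic evolutionary search, not the enzyme genetic programming representation the chapter develops.

import random

TARGET = [1] * 20

def fitness(genome):
    # Number of positions that already match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Imperfect replication: each bit flips with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]                                         # selection
    population = [mutate(random.choice(parents)) for _ in range(50)]  # replication with error
print(generation, fitness(population[0]))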
Graham Brack, Penny Franklin, and Jill Caldwell
- Published in print:
- 2013
- Published Online:
- November 2020
- ISBN:
- 9780199697878
- eISBN:
- 9780191918490
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199697878.003.0013
- Subject:
- Clinical Medicine and Allied Health, Nursing
By the end of this chapter you should be able to:
● Understand the responsibilities and accountability of the student and trained nurse with regards to medicines management
● Understand the reasons for policy to support medicines management
● Interpret the role of the nurse in relation to policies and standards for medicines management
● Understand the role of the nurse in relation to key standards and drivers for the safer administration and management of medicines
The aims of this chapter are to support you to interpret the responsibility that you already carry as a student and will carry as a registrant when giving medicines to patients and to help you to understand what is meant by accountability and how this relates to your role now and in the future in the management of medicines. Medicines management occurs wherever there is a patient and is carried out in a variety of settings which include:
● acute hospitals
● community hospitals
● care homes, both residential care homes and nursing homes
● the patient’s own home
● schools
● community clinics
The National Patient Safety Agency (NPSA, 2004) has produced guidance for organizations on supporting patient safety. They suggested the implementation of seven steps as follows:
1 Build a safety culture.
2 Lead and support your staff.
3 Integrate your risk management activity.
4 Promote reporting.
5 Involve and communicate with patients and the public.
6 Learn and share safety lessons.
7 Implement solutions to prevent harm.
When interpreted in relation to medicines management and nursing care this means that the employing organization has a duty of care to its employees and patients to ensure that medicines are dispensed, supplied, and administered safely and that procedures are in place to support this. Managers need to be made aware of anything that might prevent this, and must ensure that checks are in place to prevent harm from occurring. The clear and prompt reporting of concerns, risk, and errors to management is pivotal to patient safety and medicines management in nursing and, from an organizational point of view, patient consultation and involvement is vital. Lessons must be shared in a ‘low blame culture’ and changes made to support the reduction of risk and potential harm. For more on communication and on risk reduction please see Chapters 1 and 10.
Hans Fehr and Fabian Kindermann
- Published in print:
- 2018
- Published Online:
- November 2020
- ISBN:
- 9780198804390
- eISBN:
- 9780191917202
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198804390.003.0005
- Subject:
- Computer Science, Programming Languages
In this chapter we develop simple methods for solving numerical problems. We start with linear equation systems, continue with nonlinear equations and finally talk about optimization, interpolation, and integration methods. Each section starts with a motivating example from economics before we discuss some of the theory and intuition behind the numerical solution method. Finally, we present some Fortran code that applies the solution technique to the economic problem. This section mainly addresses the issue of solving linear equation systems. As a linear equation system is usually defined by a matrix equation, we first have to talk about how to work with matrices and vectors in Fortran. After that, we will present some linear equation system solving techniques.
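The book presents Fortran routines for these tasks; as a language-neutral illustration of what a linear equation system solver does, the short Python/NumPy sketch below solves A x = b with numbers that are illustrative rather than taken from the text.

import numpy as np

# Solve the linear equation system A x = b.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])

x = np.linalg.solve(A, b)        # LU factorization with partial pivoting
print(x)
print(np.allclose(A @ x, b))     # verify the solution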
Hans Fehr and Fabian Kindermann
- Published in print:
- 2018
- Published Online:
- November 2020
- ISBN:
- 9780198804390
- eISBN:
- 9780191917202
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198804390.003.0009
- Subject:
- Computer Science, Programming Languages
The discussion in Chapters 3 and 4 centred on static optimization problems. The static general equilibrium model of Chapter 3 features an exogenous capital stock and Chapter 4 discusses investment decisions with risky assets, but in a static context. In this chapter we take a first step towards the analysis of dynamic problems. We introduce the life-cycle model and analyse the intertemporal choice of consumption and individual savings. We start with discussing the most basic version of this model and then introduce labour-income uncertainty to explain different motives for saving. In later sections, we extend the model by considering alternative savings vehicles and explain portfolio choice and annuity demand. Throughout this chapter we follow a partial equilibrium approach, so that factor prices for capital and labour are specified exogenously and not determined endogenously as in Chapter 3. This section assumes that households can only save in one asset. Since we abstract from bequest motives in this chapter, households save because they need resources to consume in old age or because they want to provide a buffer stock in case of uncertain future outcomes. The first motive is the so-called old-age savings motive while the second is the precautionary savings motive. In order to derive savings decisions it is assumed in the following that a household lives for three periods. In the first two periods the agent works and receives labour income w, while in the last period the agent lives from his accumulated previous savings. In order to derive the optimal asset structure a2 and a3 (i.e. the optimal savings), the agent maximizes the utility function U(c1, c2, c3) = u(c1) + βu(c2) + β²u(c3), where β denotes a time discount factor and u(c) = c^(1−1/γ)/(1−1/γ) describes the preference function, with γ ≥ 0 measuring the intertemporal elasticity of substitution.
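A minimal numerical sketch of the three-period problem just described. The parameter values and the budget structure (a gross return R on savings, wage w in the first two periods) are assumptions made for this illustration; the chapter's exact constraints and solution method may differ.

from scipy.optimize import minimize

beta, gamma, w, R = 0.96, 0.5, 1.0, 1.0   # assumed parameter values

def u(c):
    # Preference function u(c) = c^(1-1/gamma) / (1-1/gamma), gamma != 1.
    return c ** (1 - 1 / gamma) / (1 - 1 / gamma)

def negative_lifetime_utility(x):
    c1, c2 = x
    a2 = w - c1                 # savings carried into period 2
    a3 = R * a2 + w - c2        # savings carried into period 3
    c3 = R * a3                 # old-age consumption financed by savings
    if min(c1, c2, c3) <= 0:
        return 1e10             # crude penalty keeps consumption positive
    return -(u(c1) + beta * u(c2) + beta ** 2 * u(c3))

res = minimize(negative_lifetime_utility, x0=[0.6, 0.6], method="Nelder-Mead")
c1, c2 = res.x
print("c1, c2, a2, a3:", c1, c2, w - c1, R * (w - c1) + w - c2)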
M. Riès-Kautt and A. Ducruix
- Published in print:
- 1999
- Published Online:
- November 2020
- ISBN:
- 9780199636792
- eISBN:
- 9780191918148
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199636792.003.0014
- Subject:
- Chemistry, Crystallography: Chemistry
Biological macromolecules follow the same thermodynamic rules as inorganic or organic small molecules concerning supersaturation, nucleation, and crystal growth (1). Nevertheless macromolecules present particularities, because the intramolecular interactions responsible for their tertiary structure, the intermolecular interactions involved in the crystal contacts, and the interactions necessary to solubilize them in a solvent are similar. Therefore these different interactions may become competitive with each other. In addition, the biological properties of biological macromolecules may be conserved although the physico-chemical properties, such as the net charge, may change depending on the crystallization conditions (pH, ionic strength, etc.). A charged biological macromolecule requires counterions to maintain the electroneutrality of the solution; therefore it should be considered as a protein (or nucleic acid) salt with its own physico-chemical properties, depending on the nature of the counterions. To crystallize a biological macromolecule, its solution must have reached supersaturation, which is the driving force for crystal growth. An understanding of the influence of the crystallization parameters on the solubility of model proteins is necessary to guide the preparation and manipulation of crystals of new proteins. Only the practical issues are developed in this chapter, and the reader should refer to recent reviews (2-4) for a description of the fundamental physical chemistry underlying crystallogenesis. The solubilization of a solute (e.g. a biological macromolecule) in an efficient solvent requires solvent-solute interactions, which must be similar to the solvent-solvent interactions and to the solute-solute interactions of the compound to be dissolved. All of the compounds of a protein solution (protein, water, buffer, crystallizing agents, and others) interact with each other via various, often weak, types of interactions: monopole-monopole, monopole-dipole, dipole-dipole, van der Waals and hydrophobic interactions, and hydrogen bonds. Solubility is defined as the amount of solute dissolved in a solution in equilibrium with its crystal form at a given temperature. For example, crystalline ammonium sulfate dissolves at 25°C until its concentration reaches 4.1 moles per litre of water, the excess remaining non-dissolved. More salt can be dissolved when raising the temperature, but if the temperature is brought back to 25°C, the solution becomes supersaturated, and the excess of salt crystallizes until its concentration again reaches its solubility value at 25°C (4.1 moles per litre of water).
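Two commonly used measures of how far a solution is from equilibrium, stated here for reference (the chapter may use a different notation): the supersaturation ratio S and the relative supersaturation σ,

\[
S = \frac{c}{s}, \qquad \sigma = \frac{c - s}{s},
\]
where c is the actual solute (or protein) concentration and s its solubility at the given temperature; nucleation and crystal growth require S > 1 (σ > 0).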
John Geweke and Garland Durham
- Published in print:
- 2020
- Published Online:
- December 2020
- ISBN:
- 9780190636685
- eISBN:
- 9780190636722
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190636685.003.0015
- Subject:
- Economics and Finance, Microeconomics
Rényi divergence is a natural way to measure the rate of information flow in contexts like Bayesian updating. This chapter shows how Monte Carlo integration can be used to measure Rényi divergence when (as is often the case) only kernels of the relevant probability densities are available. The chapter further demonstrates that Rényi divergence is central to the convergence and efficiency of Monte Carlo integration procedures in which information flow is controlled. It uses this perspective to develop more flexible approaches to the controlled introduction of information; in the limited set of examples considered here, these alternatives enhance efficiency.
Rényi divergence is a natural way to measure the rate of information flow in contexts like Bayesian updating. This chapter shows how Monte Carlo integration can be used to measure Rényi divergence when (as is often the case) only kernels of the relevant probability densities are available. The chapter further demonstrates that Rényi divergence is central to the convergence and efficiency of Monte Carlo integration procedures in which information flow is controlled. It uses this perspective to develop more flexible approaches to the controlled introduction of information; in the limited set of examples considered here, these alternatives enhance efficiency.
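The abstract does not reproduce the chapter's procedures, so the following is only a minimal sketch of one standard way to estimate Rényi divergence by Monte Carlo when only unnormalized kernels k_p and k_q are available and one can draw from q; the self-normalized ratio used here is a textbook device and is not claimed to be the authors' estimator.

```python
# Minimal sketch (not the authors' procedure): self-normalized Monte Carlo
# estimate of the Renyi divergence D_alpha(p || q), alpha != 1, when only
# unnormalized kernels k_p and k_q are known and draws from q are available.
#
#   D_alpha(p || q) = 1/(alpha - 1) * log E_q[(p/q)^alpha]
#
# With w_i = k_p(x_i) / k_q(x_i) and x_i ~ q, the unknown normalizing
# constants cancel in  mean(w^alpha) / mean(w)^alpha.

import numpy as np

def renyi_divergence_from_kernels(log_kp, log_kq, draws_from_q, alpha):
    x = np.asarray(draws_from_q)
    log_w = log_kp(x) - log_kq(x)                    # log of kernel ratio
    log_num = np.logaddexp.reduce(alpha * log_w) - np.log(x.size)
    log_den = alpha * (np.logaddexp.reduce(log_w) - np.log(x.size))
    return (log_num - log_den) / (alpha - 1.0)

# Example with two equal-variance Gaussians, where the exact value is known.
rng = np.random.default_rng(0)
mu_p, mu_q, s = 1.0, 0.0, 1.0
x_q = rng.normal(mu_q, s, size=200_000)
log_kp = lambda x: -0.5 * ((x - mu_p) / s) ** 2      # unnormalized log-kernels
log_kq = lambda x: -0.5 * ((x - mu_q) / s) ** 2
alpha = 0.5
est = renyi_divergence_from_kernels(log_kp, log_kq, x_q, alpha)
exact = alpha * (mu_p - mu_q) ** 2 / (2 * s ** 2)    # closed form for this case
print(f"estimate {est:.4f}  vs exact {exact:.4f}")
```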
Min Chen
- Published in print:
- 2020
- Published Online:
- December 2020
- ISBN:
- 9780190636685
- eISBN:
- 9780190636722
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190636685.003.0016
- Subject:
- Economics and Finance, Microeconomics
The core of data science is our fundamental understanding of the data intelligence processes that transform data into decisions. One aspect of this understanding is how to analyze the cost-benefit of data intelligence workflows. This work builds on the information-theoretic metric proposed by Chen and Golan for this purpose, as well as on several recent studies and applications of the metric. We present a set of extended interpretations of the metric by relating it to encryption, compression, model development, perception, cognition, languages, and media.
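The metric itself is not spelled out in the abstract, so the sketch below only illustrates a generic information-theoretic cost-benefit ratio of the form (entropy reduction minus distortion) over cost for one step of a workflow; the specific form, the toy distributions, and the cost figure are all assumptions for illustration, not the chapter's definition.

```python
# Hedged sketch only: the abstract does not reproduce the Chen-Golan metric,
# so this illustrates a generic information-theoretic cost-benefit ratio of
# the form (entropy reduction - distortion) / cost for one workflow step.
# The chapter's actual definition may differ.

import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))          # bits

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# A toy step that maps 4 raw categories onto 2 decision labels.
p_input = [0.4, 0.3, 0.2, 0.1]               # distribution over raw data values
p_output = [0.7, 0.3]                        # distribution over decision labels
p_reconstructed = [0.35, 0.35, 0.15, 0.15]   # what a viewer infers back about the input
cost = 2.0                                   # e.g. seconds of analyst time (made up)

entropy_reduction = entropy(p_input) - entropy(p_output)
distortion = kl_divergence(p_input, p_reconstructed)
print((entropy_reduction - distortion) / cost)
```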
Robert G. Chambers
- Published in print:
- 2021
- Published Online:
- December 2020
- ISBN:
- 9780190063016
- eISBN:
- 9780190063047
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190063016.003.0002
- Subject:
- Economics and Finance, Econometrics, Microeconomics
Mathematical tools necessary to the argument are presented and discussed. The focus is on concepts borrowed from the convex analysis and variational analysis literatures. The chapter starts by introducing the notions of a correspondence, upper hemi-continuity, and lower hemi-continuity. Superdifferential and subdifferential correspondences for real-valued functions are then introduced, and their essential properties and their role in characterizing global optima are surveyed. Convex sets are introduced and related to functional concavity (convexity). The relationship between functional concavity (convexity), superdifferentiability (subdifferentiability), and the existence of (one-sided) directional derivatives is examined. The theory of convex conjugates and essential conjugate duality results are discussed. Topics treated include Berge's Maximum Theorem, cyclical monotonicity of superdifferential (subdifferential) correspondences, concave (convex) conjugates and biconjugates, Fenchel's Inequality, the Fenchel-Rockafellar Conjugate Duality Theorem, support functions, superlinear functions, sublinear functions, the theory of infimal convolutions and supremal convolutions, and Fenchel's Duality Theorem.
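As a reminder of the standard objects involved, stated here for the convex case (the concave case is symmetric, and the chapter's own notation may differ):

```latex
% Standard definitions only; the notation in the chapter itself may differ.
\begin{align*}
  f^{*}(p) &= \sup_{x}\{\langle p, x\rangle - f(x)\}
    && \text{(convex conjugate)}\\
  f(x) + f^{*}(p) &\ge \langle p, x\rangle
    && \text{(Fenchel's inequality)}\\
  \partial f(x) &= \{\, p : f(y) \ge f(x) + \langle p, y - x\rangle \ \forall y \,\}
    && \text{(subdifferential)}\\
  f^{**}(x) &= \sup_{p}\{\langle p, x\rangle - f^{*}(p)\} \le f(x),
    && \text{with equality when } f \text{ is closed, proper, and convex.}
\end{align*}
```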
Robert G. Chambers
- Published in print:
- 2021
- Published Online:
- December 2020
- ISBN:
- 9780190063016
- eISBN:
- 9780190063047
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190063016.003.0004
- Subject:
- Economics and Finance, Econometrics, Microeconomics
Three generic economic optimization problems (expenditure (cost) minimization, revenue maximization, and profit maximization) are studied using the mathematical tools developed in Chapters 2 and 3. Conjugate duality results are developed for each. The resulting dual representations (E(q;y), R(p,x), and π(p,q)) are shown to characterize all of the economically relevant information in, respectively, V(y), Y(x), and Gr(≽(y)). The implications of different restrictions on ≽(y) for the dual representations are examined.
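For readers who want the three programs in symbols, the standard formulations are given below, with V(y) the input requirement set and Y(x) the producible output set referred to in the abstract; the chapter's exact statements may differ in detail.

```latex
% Standard formulations of the three programs named in the abstract.
\begin{align*}
  E(q; y)   &= \min_{x}\{\, q^{\top} x : x \in V(y) \,\}
    && \text{(expenditure / cost minimization)}\\
  R(p, x)   &= \max_{y}\{\, p^{\top} y : y \in Y(x) \,\}
    && \text{(revenue maximization)}\\
  \pi(p, q) &= \max_{x,\, y}\{\, p^{\top} y - q^{\top} x : x \in V(y) \,\}
    && \text{(profit maximization)}
\end{align*}
```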
Pieter Adriaans
- Published in print:
- 2020
- Published Online:
- December 2020
- ISBN:
- 9780190636685
- eISBN:
- 9780190636722
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190636685.003.0002
- Subject:
- Economics and Finance, Microeconomics
A computational theory of meaning tries to understand the phenomenon of meaning in terms of computation. Here we give an analysis in the context of Kolmogorov complexity. This theory measures the complexity of a data set in terms of the length of the smallest program that generates the data set on a universal computer. As a natural extension, the set of all programs that produce a data set on a computer can be interpreted as the set of meanings of the data set. We give an analysis of the Kolmogorov structure function and some other attempts to formulate a mathematical theory of meaning in terms of two-part optimal model selection. We show that such theories will always be context dependent: the invariance conditions that make Kolmogorov complexity a valid theory of measurement fail for this more general notion of meaning. One cause is the notion of polysemy: one data set (i.e., a string of symbols) can have different programs, sharing no mutual information, that compress it. Another cause is the existence of recursive bijections between ℕ and ℕ² for which the two-part code is always more efficient. This generates vacuous optimal two-part codes. We introduce a formal framework to study such contexts in the form of a theory that generalizes the concept of Turing machines to learning agents that have a memory and have access to each other’s functions in terms of a possible world semantics. In such a framework, the notions of randomness and informativeness become agent dependent. We show that such a rich framework explains many of the anomalies of the correct theory of algorithmic complexity. It also provides perspectives for, among other things, the study of cognitive and social processes. Finally, we sketch some application paradigms of the theory.
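The recursive bijections between ℕ and ℕ² mentioned above can be made concrete with the classical Cantor pairing function; the sketch below is a textbook construction, not one taken from the chapter.

```python
# The Cantor pairing function: a classical recursive bijection between
# N^2 and N, of the kind the abstract refers to. Textbook construction,
# not taken from the chapter itself.

from math import isqrt

def pair(k1: int, k2: int) -> int:
    """Encode the pair (k1, k2) as a single natural number."""
    s = k1 + k2
    return s * (s + 1) // 2 + k2

def unpair(z: int) -> tuple:
    """Decode z back into the unique pair (k1, k2) with pair(k1, k2) == z."""
    w = (isqrt(8 * z + 1) - 1) // 2          # index of the diagonal containing z
    t = w * (w + 1) // 2                     # first code on that diagonal
    k2 = z - t
    k1 = w - k2
    return k1, k2

# Round-trip check over a small grid.
assert all(unpair(pair(a, b)) == (a, b) for a in range(50) for b in range(50))
print(pair(3, 4), unpair(pair(3, 4)))        # 32 (3, 4)
```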
Christopher Tsoukis
- Published in print:
- 2020
- Published Online:
- November 2020
- ISBN:
- 9780198825371
- eISBN:
- 9780191912498
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198825371.003.0005
- Subject:
- Economics and Finance, Macro- and Monetary Economics
This chapter offers an introduction to the methods and main models used in dynamic macroeconomics. After reviewing key concepts such as lifetime utility maximization and the period-by-period and intertemporal budget constraints, first-order conditions for intertemporal optimization (the Euler equation and the labour-leisure choice) are developed. These methods are applied in developing the workhorse Ramsey model, with discussion of related concepts such as dynamic efficiency and market equilibrium versus the command optimum. An extension of the Ramsey model incorporates adjustment costs in investment and develops the user cost of capital. Furthermore, the Sidrauski model, with its implications for monetary economies, is reviewed. Finally, the discussion turns to another workhorse dynamic model, the overlapping-generations model and its implications. As an application of this model, the properties of various methods of funding social insurance are discussed.
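As a small, self-contained illustration of the kind of intertemporal optimality condition the chapter develops, the sketch below computes the steady state of a discrete-time Ramsey model with Cobb-Douglas technology from the standard Euler equation; the parameter values are illustrative and nothing here is taken from the book.

```python
# Illustrative sketch (standard textbook formulas, not the chapter's code):
# steady state of a discrete-time Ramsey model with Cobb-Douglas technology
# f(k) = k**alpha, depreciation delta, and discount factor beta.
#
# Euler equation:   u'(c_t) = beta * u'(c_{t+1}) * (1 + f'(k_{t+1}) - delta)
# In steady state:  1 = beta * (1 + f'(k*) - delta)  =>  f'(k*) = 1/beta - 1 + delta

alpha, beta, delta = 0.33, 0.96, 0.08          # illustrative parameter values

f_prime_target = 1.0 / beta - 1.0 + delta      # modified golden rule
k_star = (alpha / f_prime_target) ** (1.0 / (1.0 - alpha))
c_star = k_star ** alpha - delta * k_star      # steady-state consumption

print(f"k* = {k_star:.3f}, c* = {c_star:.3f}")
# Check that the Euler equation holds with constant consumption at (k*, c*):
assert abs(beta * (1.0 + alpha * k_star ** (alpha - 1.0) - delta) - 1.0) < 1e-12
```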