Jan Modersitzki
- Published in print:
- 2003
- Published Online:
- September 2007
- ISBN:
- 9780198528418
- eISBN:
- 9780191713583
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198528418.003.0007
- Subject:
- Mathematics, Applied Mathematics
This chapter summarizes the techniques discussed so far in this book. The techniques are all based on the minimization of a certain distance measure, and the distance measure is based on image features or directly on image intensities. Image features can be user supplied (e.g., landmarks) or may be deduced automatically from the image intensities (e.g., principal axes). Typical examples of intensity-based distance measures are the sum of squared differences, correlation or mutual information. For all proposed techniques, the transformation is parametric, i.e., it can be expanded in terms of some parameters and basis functions. The desired transformation is a minimizer of the distance measure in the space spanned by the basis functions. The minimizer can be obtained from algebraic equations or by applying appropriate optimization tools.
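As a toy illustration of the intensity-based approach the chapter summarizes (this is not code from the book; the names `ssd` and `affine_warp` are hypothetical), the sum of squared differences between a reference image and a parametrically (here, affinely) transformed template can be sketched as:

```python
import numpy as np

def affine_warp(T, A, b):
    """Warp image T by sampling it at affine-transformed grid points
    (nearest-neighbour interpolation, zero outside the domain)."""
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()])   # 2 x N grid coordinates
    warped = A @ pts + b[:, None]              # transformed coordinates
    xi = np.rint(warped[0]).astype(int)
    yi = np.rint(warped[1]).astype(int)
    inside = (0 <= xi) & (xi < w) & (0 <= yi) & (yi < h)
    out = np.zeros(h * w)
    out[inside] = T[yi[inside], xi[inside]]
    return out.reshape(h, w)

def ssd(R, T, A, b):
    """Sum-of-squared-differences distance between R and the warped T."""
    diff = R - affine_warp(T, A, b)
    return 0.5 * np.sum(diff ** 2)

# The identity transformation gives zero distance between an image and itself.
R = np.arange(16.0).reshape(4, 4)
print(ssd(R, R, np.eye(2), np.zeros(2)))  # 0.0
```

Registration then amounts to minimizing `ssd` over the parameters `(A, b)`, exactly the kind of parametric minimization the chapter describes.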
Bijan Mohammadi and Olivier Pironneau
- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780199546909
- eISBN:
- 9780191720482
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199546909.001.0001
- Subject:
- Mathematics, Mathematical Physics
The fields of computational fluid dynamics (CFD) and optimal shape design (OSD) have received considerable attention in the recent past, and are of practical importance for many engineering applications. This book deals with shape optimization problems for fluids, with the equations needed for their understanding (Euler and Navier–Stokes, but also those for microfluids), and with the numerical simulation of these problems. It presents the state of the art in shape optimization for an extended range of applications involving fluid flows. Automatic differentiation, approximate gradients, unstructured mesh adaptation, multi-model configurations, and time-dependent problems are introduced, and their implementation in the industrial environments of the aerospace and automotive equipment industries is explained and illustrated. With the increases in the power of computers in industry since the first edition of this book, methods which were previously infeasible have begun giving results, namely evolutionary algorithms, topological optimization methods, and level set algorithms. In this edition, these methods are treated in separate chapters, but the book remains primarily one on differential shape optimization.
Fabio-Cesare Bagliano and Giuseppe Bertola
- Published in print:
- 2004
- Published Online:
- January 2005
- ISBN:
- 9780199266821
- eISBN:
- 9780191601606
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0199266824.001.0001
- Subject:
- Economics and Finance, Macro- and Monetary Economics
Introduces methodological tools for dynamic analysis of macroeconomic phenomena: consumption and investment choices, employment, and unemployment outcomes, and economic growth. Discrete‐time dynamic optimization under uncertainty is introduced in Chapter 1 and applied to intertemporal consumption theory, with particular attention to empirical implementation. Chapter 2 focuses on continuous‐time optimization techniques and discusses the relevant insights in the context of partial equilibrium investment models. Chapter 3 applies previous chapters’ tools to dynamic labour demand, deriving the labour market equilibrium when both firms and workers face dynamic adjustment problems. Chapter 4 studies continuous‐time equilibrium dynamics of representative‐agent economies featuring both consumption and investment choices, with applications to long‐run growth issues. The role of externalities in more recent models of endogenous growth is carefully discussed. Chapter 5 studies the determination of aggregate equilibria in markets with decentralized trading, discussing the possibility of coordination failures and multiple equilibria. A search model of the labour market, focused on the flows into and out of unemployment, is then analyzed and the dynamics of frictional unemployment are discussed. Many exercises can be found both within and at the ends of chapters, with extended solutions.
Gregory C. Chow
- Published in print:
- 1997
- Published Online:
- October 2011
- ISBN:
- 9780195101928
- eISBN:
- 9780199855032
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195101928.001.0001
- Subject:
- Economics and Finance, Financial Economics
This work provides a unified and simple treatment of dynamic economics, using dynamic optimization as the main theme and the method of Lagrange multipliers to solve dynamic economic problems. The book presents the optimization framework for dynamic economics to foster an understanding of the approach. It uses the method of Lagrange multipliers rather than dynamic programming in the analysis of dynamic optimization, because it is easier and more efficient than dynamic programming and gives a better understanding of the substance of dynamic economics. The book treats a number of topics in economics, including economic growth, macroeconomics, microeconomics, finance, and dynamic games. It also teaches by examples, using concepts to solve simple problems before moving to general propositions.
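As a generic sketch of the flavor of the Lagrange-multiplier approach to a dynamic problem (standard textbook notation, not necessarily the book's): maximizing discounted utility subject to a capital accumulation constraint attaches a multiplier to each period's constraint, and the first-order conditions deliver the Euler equation directly.

```latex
\max_{\{c_t\}} \; \mathbb{E}\sum_{t=0}^{\infty}\beta^{t}u(c_t)
\quad\text{s.t.}\quad k_{t+1} = f(k_t) - c_t .
% Lagrangian with a multiplier \lambda_t on each period's constraint:
\mathcal{L} = \mathbb{E}\sum_{t=0}^{\infty}\beta^{t}
  \bigl[u(c_t) + \lambda_t\,\bigl(f(k_t) - c_t - k_{t+1}\bigr)\bigr] .
% FOC for c_t gives \lambda_t = u'(c_t); FOC for k_{t+1} gives
% \lambda_t = \beta\,\mathbb{E}_t[\lambda_{t+1} f'(k_{t+1})], hence the Euler equation:
u'(c_t) = \beta\,\mathbb{E}_t\!\left[u'(c_{t+1})\,f'(k_{t+1})\right].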
Jerome L. Stein
- Published in print:
- 2006
- Published Online:
- May 2006
- ISBN:
- 9780199280575
- eISBN:
- 9780191603501
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0199280576.003.0002
- Subject:
- Economics and Finance, Financial Economics
Data on the credit rating of bonds issued in the first half of the 1990s suggest that investors in emerging market securities paid little attention to credit risk, or that they were comfortable with the high level of credit risk that they were incurring. This chapter develops a paradigm for intertemporal optimization under uncertainty in a finite horizon discrete time context, with the constraint that there be no default on short-term foreign currency denominated debt. The object is to select consumption, investment, and the resulting short-term debt in the first period to maximize the expected present value of the utility of consumption over both periods. The constraint is that regardless of the state of nature in the second period, there will be no default on the debt.
Jerome L. Stein
- Published in print:
- 2006
- Published Online:
- May 2006
- ISBN:
- 9780199280575
- eISBN:
- 9780191603501
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0199280576.003.0003
- Subject:
- Economics and Finance, Financial Economics
This chapter answers the following technical questions: In a stochastic environment, where the return on capital and the interest rate are stochastic, what are the optimal (1) long-term debt, (2) expected current account, (3) consumption, and (4) expected growth rate? The mathematical techniques necessary to answer these questions, concerning intertemporal optimization in continuous time over an infinite horizon, involve dynamic programming. A mean-variance interpretation is given for the dynamic programming solution.
Graham Priest
- Published in print:
- 2005
- Published Online:
- May 2006
- ISBN:
- 9780199263288
- eISBN:
- 9780191603631
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0199263280.003.0008
- Subject:
- Philosophy, Logic/Philosophy of Mathematics
This chapter argues that the common view that believing a contradiction is the nadir of rationality should be rejected, and that rational considerations may require one to believe contradictions. An informal model of rationality as an optimization procedure under constraint is given.
Graham Priest
- Published in print:
- 2005
- Published Online:
- May 2006
- ISBN:
- 9780199263288
- eISBN:
- 9780191603631
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0199263280.003.0009
- Subject:
- Philosophy, Logic/Philosophy of Mathematics
This chapter discusses an account of belief-revision that is compatible with the rational belief of contradictions. In the process, a formal account of the model of rationality of the preceding chapter is provided. The account of belief-revision is contrasted with the familiar AGM account.
Gøsta Esping‐Andersen
- Published in print:
- 1999
- Published Online:
- November 2003
- ISBN:
- 9780198742005
- eISBN:
- 9780191599163
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0198742002.003.0009
- Subject:
- Political Science, Comparative Politics
The introduction to this chapter discusses the question of why nations respond so differently to a set of social risks that are similar over various countries, and analyses three typical homines: Homo liberalisimus, Homo familias, and Homo socialdemocraticus. When the instincts of these ideal typical homines are combined, moral conflicts result, although a sufficient mass manages to profile itself in collective expression and sways society towards its preferred welfare regime. Old risks may fade and new ones emerge, but the response of a welfare regime will be, more likely than not, normatively path dependent. It is argued that since core institutional traits appear to be so unyielding to change, it is unlikely that the contemporary welfare state crisis will produce revolutionary change: there may be a blueprint for an ideal post‐industrial regime, but unless it is compatible with existing welfare regime practice, it may not be practicable. The author argues that, nonetheless, optimizing welfare in a post‐industrial setting will require radical departures, and these are discussed under the following headings: What is to be Optimized; Rival Reform Strategies; The Market Strategy; A Third Way?; and Equality with Inequality?
Ralph Hertwig, Ulrich Hoffrage, and ABC Research Group (eds)
- Published in print:
- 2012
- Published Online:
- January 2013
- ISBN:
- 9780195388435
- eISBN:
- 9780199950089
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195388435.001.0001
- Subject:
- Psychology, Social Psychology
This book invites readers to discover the simple heuristics that people use to navigate the complexities and surprises of environments populated with others. The social world is a terrain where humans and other animals compete with conspecifics for myriad resources, including food, mates, and status, and where rivals grant the decision maker little time for deep thought, protracted information search, or complex calculations. The social world also encompasses domains, however, where social animals such as humans learn from one another how to deal with the vagaries of a natural world that both inflicts unforeseeable hazards and presents useful opportunities, and dare to trust and forge alliances with one another to boost their chances of success. According to the book's thesis, the undeniable complexity of the social world does not dictate cognitive complexity, as many scholars of rationality argue. Rather, it entails circumstances that render optimization impossible or computationally arduous: intractability, the existence of incommensurable considerations, and competing goals. With optimization beyond reach, less can be more. That is, heuristics—simple strategies for making decisions when time is pressing and careful deliberation an unaffordable luxury—become indispensable mental tools. As accurate as, or even more accurate than, complex methods when used in the appropriate environments, these heuristics are good descriptive models of how people make many decisions and inferences, but their impressive performance also poses a normative challenge for optimization models. In short, the homo socialis may prove to be a homo heuristicus whose intelligence reflects ecological rather than logical rationality.
Michael J. North and Charles M. Macal
- Published in print:
- 2007
- Published Online:
- September 2007
- ISBN:
- 9780195172119
- eISBN:
- 9780199789894
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195172119.003.0005
- Subject:
- Business and Management, Strategy
This chapter uses a supply chain example to compare and contrast agent-based modeling and simulation with other modeling techniques, including system dynamics, discrete-event simulation, participatory simulation, statistical modeling, risk analysis, and optimization. It also discusses why businesses and government agencies do modeling and simulation.
Rolf Niedermeier
- Published in print:
- 2006
- Published Online:
- September 2007
- ISBN:
- 9780198566076
- eISBN:
- 9780191713910
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566076.003.0001
- Subject:
- Mathematics, Combinatorics / Graph Theory / Discrete Mathematics
This chapter discusses three introductory examples for studying exact and fixed-parameter algorithms. It starts with the boolean Satisfiability problem and its numerous parameters, then discusses an application problem from railway optimization, and concludes with a communication problem in tree networks (Multicut in Trees). It briefly summarizes the leitmotif of parameterized algorithm design and analysis.
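In the spirit of the parameterized view the chapter introduces (an illustrative sketch, not code from the book), even the naive decision procedure for Satisfiability has cost exponential only in the number of variables n, a natural parameter: roughly 2^n times the formula size. Here a clause is a list of signed integers, with literal +i meaning variable i is true and -i meaning it is false.

```python
from itertools import product

def satisfiable(clauses, n):
    """Brute-force SAT check over all 2^n assignments of n variables.
    Cost is exponential only in the parameter n, linear in formula size."""
    for assignment in product([False, True], repeat=n):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2 or x3): satisfiable, e.g. x2 = x3 = True
print(satisfiable([[1, 2], [-1, 2], [-2, 3]], 3))  # True
# (x1) and (not x1): unsatisfiable
print(satisfiable([[1], [-1]], 1))                 # False
```

Fixed-parameter algorithms aim to confine the combinatorial explosion to such a parameter, rather than to the overall input size.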
Rolf Niedermeier
- Published in print:
- 2006
- Published Online:
- September 2007
- ISBN:
- 9780198566076
- eISBN:
- 9780191713910
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566076.003.0002
- Subject:
- Mathematics, Combinatorics / Graph Theory / Discrete Mathematics
This chapter introduces the basic mathematical formalism and discusses concepts used throughout the book. Among other things, it looks at decision problems versus optimization problems, Random Access Machines, big-O notation, and strings and graphs. It concludes with the basics of computational complexity theory.
J. C. Gower and G. B. Dijksterhuis
- Published in print:
- 2004
- Published Online:
- September 2007
- ISBN:
- 9780198510581
- eISBN:
- 9780191708961
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198510581.003.0009
- Subject:
- Mathematics, Probability / Statistics
This chapter is concerned with generalizations where the two sets of configurations X1 and X2 are replaced by K sets X1, X2, ..., XK, each with its own transformation matrix T1, ..., TK. All the variants of the two-sets Procrustes problems generalize. Different choices of the Tk, scaling, weighting, and the optimization criteria are discussed.
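The two-set building block that the K-set generalization iterates can be sketched as follows (an illustrative sketch, not code from the book): for the orthogonal case, the rotation T minimizing the Frobenius norm ||X1 T - X2|| has the closed form T = U V^T, where U S V^T is the SVD of X1^T X2.

```python
import numpy as np

def orthogonal_procrustes(X1, X2):
    """Orthogonal T minimizing ||X1 @ T - X2||_F, via the SVD of X1.T @ X2."""
    U, _, Vt = np.linalg.svd(X1.T @ X2)
    return U @ Vt

# Sanity check: if X2 is an exact rotation of X1, the rotation is recovered.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((10, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
X2 = X1 @ R
T = orthogonal_procrustes(X1, X2)
print(np.allclose(X1 @ T, X2))  # True
```

The generalized (K-set) problem typically alternates such per-set solutions against a group average until the criterion stops decreasing.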
Moody T. Chu and Gene H. Golub
- Published in print:
- 2005
- Published Online:
- September 2007
- ISBN:
- 9780198566649
- eISBN:
- 9780191718021
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566649.003.0007
- Subject:
- Mathematics, Applied Mathematics
This chapter shows that the problems of computing least squares approximations for various types of real and symmetric matrices subject to spectral constraints share a common structure. A general framework using the projected gradient method is described. A broad range of applications, including the Toeplitz inverse eigenvalue problem, the simultaneous reduction problem, and the nearest normal matrix approximation, are discussed.
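The chapter's projected gradient framework handles these problems in generality; for the simplest spectrally constrained least-squares problem a classical closed form exists and makes a useful sanity check (an illustrative sketch under that assumption, not the book's code): the symmetric matrix with prescribed spectrum closest to A keeps the eigenvectors of A's symmetric part and swaps in the target eigenvalues, matched in sorted order.

```python
import numpy as np

def nearest_with_spectrum(A, lam):
    """Least-squares symmetric matrix with prescribed eigenvalues lam,
    closest in Frobenius norm to the symmetric part of A."""
    S = 0.5 * (A + A.T)              # symmetric part of A
    w, Q = np.linalg.eigh(S)         # eigenvalues ascending, matching order
    return Q @ np.diag(np.sort(lam)) @ Q.T

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
X = nearest_with_spectrum(A, [1.0, 4.0])
print(np.sort(np.linalg.eigvalsh(X)))  # [1. 4.]
```

A projected gradient method reaches the same kind of solution iteratively, which is what makes it applicable to the harder structured cases (Toeplitz, simultaneous reduction) where no closed form is available.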
Jan Modersitzki
- Published in print:
- 2003
- Published Online:
- September 2007
- ISBN:
- 9780198528418
- eISBN:
- 9780191713583
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198528418.003.0006
- Subject:
- Mathematics, Applied Mathematics
This chapter investigates the question of how to find an optimal linear transformation based on a distance measure. Popular choices for distance measures such as the sum of squared differences, correlation, and mutual information are discussed. Particular attention is paid to the differentiability of the distance measures. The desired transformation is restricted to a parameterizable space, and as such can be expanded in terms of a linear combination of some basis functions. The registration task is considered as an optimization problem, where the objective is to find the optimal coefficients in the expansion while minimizing the distance measure. The well-known Gauss-Newton method is described and used for numerical optimization. Different examples are used to identify similarities and differences of the distance measures.
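A minimal Gauss-Newton iteration of the kind the chapter employs can be sketched in one dimension (illustrative code, not from the book): recover a translation p by minimizing the SSD residual r(p) = T(x - p) - R(x), using the linearization r(p + dp) ≈ r(p) + J dp.

```python
import numpy as np

def template(x, p):
    """A smooth 1-D 'image': a Gaussian bump centred at p."""
    return np.exp(-(x - p) ** 2)

x = np.linspace(-5, 5, 201)
R = template(x, 1.5)          # reference: the template shifted by 1.5
p = 0.0                       # initial guess for the translation
for _ in range(20):
    r = template(x, p) - R                                       # residual
    J = (template(x, p + 1e-6) - template(x, p - 1e-6)) / 2e-6   # dr/dp
    dp = -(J @ r) / (J @ J)   # Gauss-Newton step (scalar parameter)
    p += dp
print(round(p, 4))  # 1.5
```

For a vector of parameters the scalar division becomes the solve of the normal equations J^T J dp = -J^T r, but the structure of the iteration is the same.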
Eric A. Gaucher
- Published in print:
- 2007
- Published Online:
- September 2008
- ISBN:
- 9780199299188
- eISBN:
- 9780191714979
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199299188.003.0014
- Subject:
- Biology, Evolutionary Biology / Genetics
Approximately twenty studies have emerged in which specific molecular systems from extinct organisms have been resurrected for study in the laboratory. These include digestive proteins (ribonucleases, proteases, and lysozymes) in ruminants and primates, which are used to illustrate how digestive function arose from non-digestive function in response to a changing global ecosystem; fermentative enzymes from fungi, which are used to illustrate how molecular adaptation supported mammals as they displaced dinosaurs as the dominant large land animals; pigments in the visual system adapting to different environments; steroid hormone receptors adapting to changing function in the steroid-based regulation of metazoans; fluorescent proteins from ocean-dwelling invertebrates; enzyme cofactor evolution; and proteins from very ancient bacteria that help define the environments in which the earliest forms of bacterial life lived. This chapter summarizes the different approaches exploited by these studies: the strategies used to build ancient genes in the laboratory, the systems used to express the proteins encoded by those genes, and the types of functional assay used to characterize the behaviors of the ancient biomolecules.
Naomi E. Chayen
- Published in print:
- 2007
- Published Online:
- September 2007
- ISBN:
- 9780198520979
- eISBN:
- 9780191706295
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198520979.003.0003
- Subject:
- Biology, Biochemistry / Molecular Biology
The availability of high-quality crystals is crucial to the structure determination of proteins by X-ray diffraction. It is still not understood why some proteins crystallize with ease while others stubbornly refuse to produce suitable crystals. Producing high-quality crystals has always been the bottleneck to structure determination, and with the advent of structural genomics this problem is becoming increasingly acute. In spite of impressive advances in throughput, the crystallization problem has not been solved, and better crystallization techniques need to be designed to overcome this hurdle. Finding favourable conditions for crystallization is usually achieved by screening the protein solution against numerous crystallizing agents to find ‘hits’ that indicate which conditions may be suitable for crystal growth. Optimization of the crystallization conditions is done either by fine-tuning the parameters involved (precipitant, pH, temperature, additives, etc.) or by manipulating the crystallization phase diagram so as to guide the experiment in the direction that will produce the desired results. This chapter highlights a variety of non-standard experimental methods of screening and optimization, with a focus on those that have been automated and can be adapted to high-throughput trials.
R. J. Morris, A. Perrakis, and V. S. Lamzin
- Published in print:
- 2007
- Published Online:
- September 2007
- ISBN:
- 9780198520979
- eISBN:
- 9780191706295
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198520979.003.0011
- Subject:
- Biology, Biochemistry / Molecular Biology
This chapter casts the high-throughput automation efforts being developed to meet the needs of Structural Genomics initiatives in the framework of an optimization problem. It gives a general overview of optimization techniques, with particular emphasis on the problem of crystallographic refinement. This picture is then extended to model building, program flow control, decision-making, validation, and automation. Finer details of the different approaches are provided in a concluding review of some popular software packages and pipelines.
Ward C. Wheeler
- Published in print:
- 2006
- Published Online:
- September 2007
- ISBN:
- 9780199297306
- eISBN:
- 9780191713729
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199297306.003.0005
- Subject:
- Biology, Evolutionary Biology / Genetics
Two properties have been used to differentiate sequence data from other sorts of information: simplicity of states and length variation. Unlike complex anatomical features (e.g., a limb or wing) that can express themselves in a myriad of forms, nucleotides exhibit only four conditions. Complexity and difference imply that states (e.g., presence/absence, or conditions) are not comparable across characters. Nucleotide states, on the other hand, are identical no matter where they occur. Nucleotide sequences may also differ in length. These two aspects of molecular sequence data remove the complexity and positional information so often used in establishing primary homologies in anatomical systems. Two approaches have been developed to deal with the absence of preordained homologies and analyse sequence data. On one hand, methods broadly referred to as multiple alignment have been devised to create the missing primary homology statements, which are then analysed by standard techniques. Traditionally, sequence data have undergone this pre-phylogenetic analysis step to permit familiar procedures akin to those used with anatomical characters. A second approach is to directly optimize sequence variation during cladogram searching. This methodology requires no notions of primary character homology, nor any global (topology-independent) homology statements whatsoever, other than that the compared sequences themselves be homologous.
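The alignment approach described in this abstract rests on pairwise global alignment as its building block. A minimal Needleman-Wunsch sketch (the scoring values match=1, mismatch=-1, gap=-1 are illustrative assumptions, not taken from the chapter) shows how inserting gaps turns length-variable sequences into column-wise correspondences, i.e., candidate primary homology statements:

```python
def needleman_wunsch(s, t, match=1, mismatch=-1, gap=-1):
    n, m = len(s), len(t)
    # score[i][j] = best score aligning the prefix s[:i] with the prefix t[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if s[i - 1] == t[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # trace back to recover the aligned strings, writing gaps as '-'
    i, j, a, b = n, m, [], []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
            match if s[i - 1] == t[j - 1] else mismatch
        ):
            a.append(s[i - 1]); b.append(t[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            a.append(s[i - 1]); b.append('-'); i -= 1
        else:
            a.append('-'); b.append(t[j - 1]); j -= 1
    return ''.join(reversed(a)), ''.join(reversed(b)), score[n][m]

aligned_s, aligned_t, best = needleman_wunsch("ACGT", "AGT")
# e.g. "ACGT" vs "AGT" aligns as ACGT / A-GT: each column is a
# positional correspondence that downstream analysis treats as homology
```

Direct optimization, by contrast, folds this scoring into the cladogram search itself instead of fixing one alignment in advance.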