Rolf Niedermeier
- Published in print: 2006
- Published Online: September 2007
- ISBN: 9780198566076
- eISBN: 9780191713910
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198566076.003.0002
- Subject: Mathematics, Combinatorics / Graph Theory / Discrete Mathematics
This chapter introduces the basic mathematical formalism and discusses concepts used throughout the book. Among other things, it looks at decision problems versus optimization problems, Random Access Machines, big-O notation, strings, and graphs. It concludes with the basics of computational complexity theory.
Hidetoshi Nishimori
- Published in print: 2001
- Published Online: January 2010
- ISBN: 9780198509417
- eISBN: 9780191709081
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198509417.003.0009
- Subject: Physics, Theoretical, Computational, and Statistical Physics
A decision-making problem is often formulated as the minimization or maximization of a multivariable function, that is, an optimization problem. This chapter shows that the methods of statistical mechanics are useful for studying several types of optimization problems, including number partitioning, graph partitioning, the knapsack problem, and the satisfiability problem. All of these problems can be formulated and solved using the theory of spin glasses, in particular the replica method. The chapter then discusses the mathematical properties of simulated annealing, an approximate numerical method for generic optimization problems.
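The simulated annealing method mentioned above can be illustrated with a minimal sketch. The following Python code is not from the chapter; the instance, the exponential cooling schedule, and all parameter values are illustrative assumptions. It applies a Metropolis-style annealing loop to a small number-partitioning instance, encoding the partition as Ising-like spins s_i ∈ {−1, +1}:

```python
import math
import random

def number_partition_cost(numbers, signs):
    # Cost is the absolute difference between the two subset sums,
    # encoded by Ising-like spins s_i in {-1, +1}.
    return abs(sum(n * s for n, s in zip(numbers, signs)))

def simulated_annealing(numbers, steps=20000, t_start=10.0, t_end=0.01, seed=0):
    rng = random.Random(seed)
    signs = [rng.choice((-1, 1)) for _ in numbers]
    cost = number_partition_cost(numbers, signs)
    best_signs, best_cost = list(signs), cost
    for k in range(steps):
        # Exponential cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (k / steps)
        i = rng.randrange(len(numbers))
        signs[i] = -signs[i]  # propose a single spin flip
        new_cost = number_partition_cost(numbers, signs)
        # Metropolis acceptance rule: always accept downhill moves,
        # accept uphill moves with probability exp(-(delta cost)/t).
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best_signs, best_cost = list(signs), cost
        else:
            signs[i] = -signs[i]  # reject: undo the flip
    return best_signs, best_cost

# A tiny instance with a perfect partition: {7, 8} vs {4, 5, 6}.
numbers = [4, 5, 6, 7, 8]
signs, diff = simulated_annealing(numbers)
```

On such a small instance the schedule details barely matter; it is for hard instances that the mathematical properties of the cooling rate, which the chapter analyzes, become essential.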
Paul Milgrom
- Published in print: 2006
- Published Online: January 2009
- ISBN: 9780199298839
- eISBN: 9780191711480
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199298839.003.0020
- Subject: Economics and Finance, History of Economic Thought
This chapter examines Samuelson's Le Chatelier principle of how the market responds to a change in the parameters of demand and supply curves. It uses examples from demand theory, economic policy, and empirical research to illustrate the principle, and it stresses the principle's flexibility in adapting to changing assumptions. Changes from optimizing agents to equilibrium systems, whose primary use is to provide a foundation for understanding multipliers, are observed. The chapter evaluates how the principle performs when confronted with local optimization problems, as in production function settings, and in positive feedback systems, as in gaming situations. The principle is found to be progressive in that it is able to capitalize on symmetric relations among substitutes and complements. In that regard, the principle has extended research into multiplier analysis, a research area that will continue into the 21st century.
Thorsten Joachims
- Published in print: 2006
- Published Online: August 2013
- ISBN: 9780262033589
- eISBN: 9780262255899
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262033589.003.0006
- Subject: Computer Science, Machine Learning
This chapter discusses the transductive learning setting proposed by Vapnik where predictions are made only at a fixed number of known test points. Transductive support vector machines (TSVMs) implement the idea of transductive learning by including test points in the computation of the margin. This chapter provides some examples for why the margin on the test examples can provide useful prior information for learning, in particular for the problem of text classification. The resulting optimization problems, however, are difficult to solve. The chapter reviews exact and approximate optimization methods and discusses their properties. Finally, the chapter discusses connections to other related semi-supervised learning approaches such as co-training and methods based on graph cuts, which can be seen as solving variants of the TSVM optimization problem.
Tijl De Bie and Nello Cristianini
- Published in print: 2006
- Published Online: August 2013
- ISBN: 9780262033589
- eISBN: 9780262255899
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262033589.003.0007
- Subject: Computer Science, Machine Learning
This chapter discusses an alternative approach that is based on a convex relaxation of the optimization problem associated with support vector machine transduction. The result is a semi-definite programming (SDP) problem which can be optimized in polynomial time, the solution of which is an approximation of the optimal labeling as well as a bound on the true optimum of the original transduction objective function. To further decrease the computational complexity, this chapter proposes an approximation that allows solving transduction problems of up to 1,000 unlabeled samples. Finally, the formulation is extended to more general settings of semi-supervised learning, where equivalence and inequivalence constraints are given on labels of some of the samples.
Hans Fehr and Fabian Kindermann
- Published in print: 2018
- Published Online: November 2020
- ISBN: 9780198804390
- eISBN: 9780191917202
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198804390.003.0013
- Subject: Computer Science, Programming Languages
Dynamic optimization is widely used in many fields of economics, finance, and business management. Typically one searches for the optimal time path of one or several variables that maximizes the value of a specific objective function given certain constraints. While there exist some analytical solutions to deterministic dynamic optimization problems, things become much more complicated as soon as the environment in which we are searching for optimal decisions becomes uncertain. In such cases researchers typically rely on the technique of dynamic programming. This chapter introduces the principles of dynamic programming and provides a couple of solution algorithms that differ in accuracy, speed, and applicability. Chapters 8 to 11 show how to apply these dynamic programming techniques to various problems in macroeconomics and finance. To get things started we want to lay out the basic idea of dynamic programming and introduce the language that is typically used to describe it. The easiest way to do this is with a very simple example that we can solve both ‘by hand’ and with the dynamic programming technique. Let’s assume an agent owns a certain resource (say a cake or a mine) which has the size a0. In every period t = 0, 1, 2, …, ∞ the agent can decide how much to extract from this resource and consume, i.e. how much of the cake to eat or how many resources to extract from the mine. We denote his consumption in period t as ct. At each point in time the agent derives some utility from consumption, which we express by the so-called instantaneous utility function u(ct). We furthermore assume that the agent’s utility is additively separable over time and that the agent is impatient, meaning that he derives more utility from consuming in period t than in any later period. We describe the extent of his impatience with the time discount factor 0 < β < 1.
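The cake-eating setup above can be solved numerically by dynamic programming. The following Python sketch is illustrative and not the book's own code: it assumes log utility u(c) = log c (the text leaves u general) and applies value function iteration on a discretized grid of cake sizes. With log utility the optimal rule is known in closed form, c = (1 − β)a, which the discretized solution should approximate:

```python
import numpy as np

# Cake-eating problem: V(a) = max_{0 <= a' <= a} u(a - a') + beta * V(a'),
# where consumption is c = a - a'. We assume u(c) = log(c), an
# illustrative choice with a known closed-form policy c = (1 - beta) * a.
beta = 0.95              # time discount factor, 0 < beta < 1
a0 = 1.0                 # initial cake size
n = 301                  # number of grid points for the cake size
grid = np.linspace(1e-6, a0, n)

V = np.zeros(n)          # value function guess
policy = np.zeros(n)     # optimal consumption at each cake size
for _ in range(2000):    # value function iteration
    V_new = np.empty(n)
    for i, a in enumerate(grid):
        feasible = grid <= a                       # next-period cake a' <= a
        c = np.maximum(a - grid[feasible], 1e-10)  # implied consumption
        values = np.log(c) + beta * V[feasible]
        j = int(np.argmax(values))
        V_new[i] = values[j]
        policy[i] = c[j]
    if np.max(np.abs(V_new - V)) < 1e-7:           # sup-norm convergence check
        V = V_new
        break
    V = V_new

# At a = a0 = 1 the analytic rule says the agent eats (1 - beta) = 5%
# of the cake; the grid solution reproduces this up to discretization error.
```

The grid-restricted policy matches the closed-form rule only up to discretization error; the solution algorithms the chapter goes on to present differ precisely in how they trade such accuracy against speed.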
Hyunjung Shin and Koji Tsuda
- Published in print: 2006
- Published Online: August 2013
- ISBN: 9780262033589
- eISBN: 9780262255899
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262033589.003.0020
- Subject: Computer Science, Machine Learning
This chapter describes an algorithm to assign weights to multiple graphs within graph-based semi-supervised learning. Both predicting class labels and searching for weights for combining multiple graphs are formulated into one convex optimization problem. The graph-combining method is applied to functional class prediction of yeast proteins. When compared with individual graphs, the combined graph with optimized weights performs significantly better than any single graph. When compared with the semi-definite programming-based support vector machine (SDP/SVM), it shows comparable accuracy in a remarkably short time. Compared with a combined graph with equal-valued weights, this method could select important graphs without loss of accuracy, which implies the desirable property of integration with selectivity.
Massimiliano Di Ventra
- Published in print: 2022
- Published Online: March 2022
- ISBN: 9780192845320
- eISBN: 9780191937521
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780192845320.003.0007
- Subject: Physics, Theoretical, Computational, and Statistical Physics
This chapter introduces terminal-agnostic gates, namely gates that do not distinguish between traditional input and output terminals. Memory is the physical property that allows the realization of these self-organizing gates. The chapter also discusses the mathematical properties of these gates when assembled into circuits, and their possible hardware realization. It gives some examples of such circuits applied to the solution of factorization problems.
Massimiliano Di Ventra
- Published in print: 2022
- Published Online: March 2022
- ISBN: 9780192845320
- eISBN: 9780191937521
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780192845320.001.0001
- Subject: Physics, Theoretical, Computational, and Statistical Physics
From the originator of MemComputing comes the very first book on this new computing paradigm, which employs time non-locality (memory) to both process and store information. The book discusses the rationale behind MemComputing, its theoretical foundations, and its wide-ranging applicability to combinatorial optimization problems, Machine Learning, and Quantum Mechanics. The book is ideal for graduate students in Physics, Computer Science, Electrical Engineering, and Mathematics, as well as researchers in both academia and industry interested in unconventional computing. The author relies on extensive margin notes, important remarks, and several artworks to better explain the main concepts and clarify all the jargon, making the book as self-contained as possible. The reader will be guided from the basic notions to the more advanced ones with a writing style that is always clear and engaging. Along the way, the reader will appreciate the advantages of this computing paradigm and the major differences that set it apart from the prevailing Turing model of computation, and even Quantum Computing.
Habib Ammari, Elie Bretin, Josselin Garnier, Hyeonbae Kang, Hyundae Lee, and Abdul Wahab
- Published in print: 2015
- Published Online: October 2017
- ISBN: 9780691165318
- eISBN: 9781400866625
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691165318.003.0010
- Subject: Mathematics, Applied Mathematics
This chapter describes the use of time-reversal imaging techniques for the optimal control of extended inclusions. It first considers the problem of reconstructing shape deformations of an extended elastic target before reconstructing the perturbations from boundary measurements of the displacement field. As for small-volume inclusions, direct imaging algorithms based on an asymptotic expansion for the perturbations in the data due to small shape deformations are introduced. The chapter also presents a concept equivalent to the polarization tensor for small-volume inclusions, as well as the nonlinear optimization problem of reconstructing the shape of an extended inclusion from boundary displacement measurements. Finally, it develops iterative algorithms to address the nonlinearity of the problem.
Massimiliano Di Ventra
- Published in print: 2022
- Published Online: March 2022
- ISBN: 9780192845320
- eISBN: 9780191937521
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780192845320.003.0004
- Subject: Physics, Theoretical, Computational, and Statistical Physics
This chapter clarifies the meaning of memory as time non-locality and shows that time non-locality induces spatial non-locality. It then introduces the notion of circuit elements with memory, memprocessors, and discusses the need for active elements in MemComputing. It finally highlights that MemComputing is not just ‘in-memory computing’.