*Moody T. Chu and Gene H. Golub*

- Published in print:
- 2005
- Published Online:
- September 2007
- ISBN:
- 9780198566649
- eISBN:
- 9780191718021
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566649.003.0007
- Subject:
- Mathematics, Applied Mathematics

This chapter shows that the problems of computing least squares approximations for various types of real and symmetric matrices subject to spectral constraints share a common structure. A general framework based on the projected gradient method is described. A broad range of applications is discussed, including the Toeplitz inverse eigenvalue problem, the simultaneous reduction problem, and the nearest normal matrix approximation.
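The projected gradient idea the abstract refers to can be illustrated on the simplest spectrally constrained least squares problem: find the symmetric matrix closest to a given A among all matrices with a prescribed spectrum. The sketch below is not the chapter's algorithm, only a minimal illustration of the technique; it parametrizes the feasible set as Q diag(lam) Qᵀ over orthogonal Q, projects the Euclidean gradient onto the tangent space of the orthogonal group, and retracts via a QR factorization. All names and step sizes are illustrative.

```python
import numpy as np

def nearest_with_spectrum(A, lam, steps=500, eta=1e-2):
    """Projected gradient sketch for min_Q ||Q L Q^T - A||_F with L = diag(lam)."""
    n = len(lam)
    L = np.diag(lam)
    # Random orthogonal starting point
    Q = np.linalg.qr(np.random.default_rng(0).normal(size=(n, n)))[0]
    for _ in range(steps):
        X = Q @ L @ Q.T
        # Euclidean gradient of 0.5*||X - A||_F^2 with respect to Q
        G = 2.0 * (X - A) @ Q @ L
        # Project onto the tangent space of O(n): keep the skew-symmetric part
        W = Q.T @ G
        G_tan = Q @ (W - W.T) / 2.0
        # Retract back onto the orthogonal group via QR
        Q, _ = np.linalg.qr(Q - eta * G_tan)
    return Q @ L @ Q.T
```

By construction every iterate has exactly the prescribed spectrum; the iteration only moves within that constraint set to reduce the distance to A.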

*Robert B. Gramacy and Herbert K. H. Lee*

- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199694587
- eISBN:
- 9780191731921
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199694587.003.0008
- Subject:
- Mathematics, Probability / Statistics

Optimization of complex functions, such as the output of computer simulators, is a difficult task that has received much attention in the literature. A less studied problem is that of optimization under unknown constraints, *i.e*., when the simulator must be invoked both to determine the typical real‐valued response *and* to determine if a constraint has been violated, either for physical or policy reasons. We develop a statistical approach based on Gaussian processes and Bayesian learning to *both* approximate the unknown function and estimate the probability of meeting the constraints. A new integrated improvement criterion is proposed to recognize that responses from inputs that violate the constraint may still be informative about the function, and thus could potentially be useful in the optimization. The new criterion is illustrated on synthetic data, and on a motivating optimization problem from health care policy.
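A simpler relative of the integrated improvement criterion described above is expected improvement weighted by the posterior probability of constraint satisfaction. The sketch below is an illustration of that idea, not the chapter's criterion: two small Gaussian process posteriors (fixed RBF kernel, no hyperparameter learning) model the objective and the constraint value, and candidate inputs are scored by EI over the best feasible observation times P(c(x) ≤ 0). All function names and kernel settings are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(X1, X2, ls=0.3):
    # Squared-exponential kernel on 1-D inputs
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-6):
    # Standard GP regression posterior mean and pointwise std deviation
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    Kss = rbf(Xte, Xte)
    mu = Ks.T @ np.linalg.solve(K, ytr)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.maximum(np.diag(cov), 1e-12))

_norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))

def _norm_pdf(z):
    return np.exp(-0.5 * z * z) / sqrt(2.0 * pi)

def constrained_ei(Xtr, ytr, ctr, Xte):
    """EI over the best feasible point, weighted by P(constraint satisfied)."""
    mu_f, sd_f = gp_posterior(Xtr, ytr, Xte)   # objective surrogate
    mu_c, sd_c = gp_posterior(Xtr, ctr, Xte)   # constraint surrogate (c <= 0 feasible)
    feasible = ctr <= 0
    best = ytr[feasible].min() if feasible.any() else ytr.min()
    z = (best - mu_f) / sd_f
    ei = (best - mu_f) * _norm_cdf(z) + sd_f * _norm_pdf(z)
    return ei * _norm_cdf(-mu_c / sd_c)
```

Note that, unlike the chapter's integrated criterion, this weighting discards most of the information in infeasible responses; it is only meant to show the two-surrogate structure of the problem.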

*Howard C. Elman, David J. Silvester, and Andrew J. Wathen*

- Published in print:
- 2014
- Published Online:
- September 2014
- ISBN:
- 9780199678792
- eISBN:
- 9780191780745
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199678792.003.0006
- Subject:
- Mathematics, Numerical Analysis, Computational Mathematics / Optimization

This chapter is an introduction to techniques of optimization with constraints defined by partial differential equations. It discusses optimization and discretization methods together with solution algorithms for the resulting saddle-point problems.
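The saddle-point structure mentioned above can be made concrete on a standard model problem: distributed control of the 1-D Poisson equation, min ½‖u − u_d‖² + (β/2)‖f‖² subject to −u″ = f with zero boundary values. Discretizing and writing out the first-order optimality conditions in the state u, control f, and adjoint λ gives a 3×3 block KKT system. The sketch below assembles and solves it directly with finite differences and a lumped mass matrix; it is a minimal illustration of the saddle-point form, not the discretization or solvers used in the chapter, and all choices (grid size, β, target state) are illustrative.

```python
import numpy as np

def control_kkt(n=50, beta=1e-2):
    """Assemble and solve the KKT system of a 1-D Poisson control problem."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # Finite-difference stiffness matrix for -u'' with zero Dirichlet values
    K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    M = h * np.eye(n)                    # lumped mass matrix
    u_d = np.sin(np.pi * x)              # target state (illustrative)
    Z = np.zeros((n, n))
    # Block saddle-point (KKT) system in (u, f, lambda):
    #   M(u - u_d) + K^T lam = 0,  beta*M f - M lam = 0,  K u - M f = 0
    A = np.block([[M, Z, K.T],
                  [Z, beta * M, -M],
                  [K, -M, Z]])
    b = np.concatenate([M @ u_d, np.zeros(n), np.zeros(n)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:2 * n], sol[2 * n:], u_d
```

The second block row reproduces the familiar adjoint relation λ = βf, and the indefinite block matrix A is exactly the kind of saddle-point system whose iterative solution the chapter addresses.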