*Bijan Mohammadi and Olivier Pironneau*

- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780199546909
- eISBN:
- 9780191720482
- Item type:
- book

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199546909.001.0001
- Subject:
- Mathematics, Mathematical Physics

The fields of computational fluid dynamics (CFD) and optimal shape design (OSD) have received considerable attention in the recent past, and are of practical importance for many engineering applications. This book deals with shape optimization problems for fluids, with the equations needed for their understanding (Euler and Navier-Stokes, but also those of microfluidics), and with the numerical simulation of these problems. It presents the state of the art in shape optimization for an extended range of applications involving fluid flows. Automatic differentiation, approximate gradients, unstructured mesh adaptation, multi-model configurations, and time-dependent problems are introduced, and their implementation in the industrial environments of the aerospace and automotive equipment industries is explained and illustrated. With the increase in industrial computing power since the first edition of this book, methods that were previously unfeasible have begun giving results, namely evolutionary algorithms, topological optimization methods, and level set algorithms. In this edition these methods are treated in separate chapters, but the book remains primarily one on differential shape optimization.
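
The approximate gradients mentioned in the abstract can be illustrated with a toy sketch: a single "shape" parameter whose cost is minimized by gradient descent, with the gradient estimated by central finite differences. The cost function and every name below are invented for illustration and are not taken from the book.

```python
def finite_difference_gradient(J, theta, h=1e-6):
    """Central-difference approximation of dJ/dtheta."""
    return (J(theta + h) - J(theta - h)) / (2 * h)

def optimize(J, theta0, step=0.1, iters=100):
    """Plain gradient descent driven by the approximate gradient."""
    theta = theta0
    for _ in range(iters):
        theta -= step * finite_difference_gradient(J, theta)
    return theta

# Toy cost: quadratic penalty around an "ideal" shape parameter 2.0.
J = lambda theta: (theta - 2.0) ** 2
theta_star = optimize(J, theta0=0.0)
```

In a real OSD problem, evaluating J means running a flow solver, which is why cheap but accurate gradients (exact, approximate, or automatically differentiated) matter so much.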

*Bijan Mohammadi and Olivier Pironneau*

- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780199546909
- eISBN:
- 9780191720482
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199546909.003.0005
- Subject:
- Mathematics, Mathematical Physics

This chapter describes sensitivity analysis and automatic differentiation (AD). It covers the theory, then an object-oriented library for AD by operator overloading, and finally the authors' experience with AD systems based on code generation, operating in both direct and reverse modes. The chapter describes the different possibilities and, through simple programs, gives a comprehensive survey of direct AD by operator overloading and, for the reverse mode, of the adjoint code method. Several elementary and more advanced examples aid the understanding of this central concept.
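
As a hint of what direct-mode AD by operator overloading looks like, here is a minimal dual-number sketch in Python. This is a self-contained toy, not the authors' library: the value of a variable is carried together with its derivative, and each overloaded operator applies the chain rule.

```python
class Dual:
    """Minimal dual number for direct-mode AD by operator overloading."""
    def __init__(self, val, dot=0.0):
        self.val = val   # the value of the expression
        self.dot = dot   # its derivative, carried alongside

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x):
    return x * x + 3 * x + 1   # f'(x) = 2x + 3

x = Dual(2.0, 1.0)   # seed the input derivative with 1
y = f(x)
# y.val is f(2) = 11, y.dot is f'(2) = 7
```

The same unmodified function f yields both the value and the derivative, which is the essential appeal of operator overloading for the direct mode.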

*L. Hascoët*

- Published in print:
- 2014
- Published Online:
- March 2015
- ISBN:
- 9780198723844
- eISBN:
- 9780191791185
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198723844.003.0015
- Subject:
- Physics, Geophysics, Atmospheric and Environmental Physics

This chapter describes how adjoint algorithms can be created by automatic differentiation (AD). Data assimilation makes intensive use of gradients. In many situations the so-called adjoint approach is the most efficient way to compute gradients, propagating derivatives backwards from the result of the given model or function. Writing an adjoint algorithm by hand is a complex, error-prone task. When the given model is provided in the form of a computer algorithm, AD can build its adjoint algorithm mechanically, for instance by program transformation. This chapter presents the principles of AD, focusing on the adjoint mode, and provides a brief panorama of existing AD tools and of the program analysis and compiler technology they employ to produce efficient adjoint algorithms.
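
The backward propagation of derivatives described above can be sketched with a tiny recording class in Python. Production AD tools of the kind surveyed in the chapter work by source-to-source program transformation; this toy instead records each operation's local partial derivatives and then sweeps the graph in reverse topological order, accumulating adjoints.

```python
class Var:
    """Minimal reverse-mode (adjoint) AD sketch."""
    def __init__(self, val, parents=()):
        self.val = val
        self.parents = parents   # pairs: (parent Var, local partial derivative)
        self.grad = 0.0          # adjoint, filled in by backward()

    def __add__(self, other):
        return Var(self.val + other.val, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.val * other.val,
                   ((self, other.val), (other, self.val)))

    def backward(self):
        # Visit in topological order so each adjoint is complete
        # before it is propagated further back.
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, partial in node.parents:
                parent.grad += partial * node.grad

# f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x
x, y = Var(3.0), Var(4.0)
z = x * y + x
z.backward()
# x.grad is 5.0, y.grad is 3.0
```

One backward sweep yields the gradient with respect to every input at once, which is why the adjoint mode dominates when, as in data assimilation, there are many inputs and a single scalar output.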

*Thomas P. Trappenberg*

- Published in print:
- 2019
- Published Online:
- January 2020
- ISBN:
- 9780198828044
- eISBN:
- 9780191883873
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198828044.003.0005
- Subject:
- Neuroscience, Behavioral Neuroscience

This chapter returns to the more theoretical embedding of machine learning in regression. Prior chapters have shown that writing machine learning programs is easy using high-level computer languages and good machine learning libraries. However, applying such algorithms appropriately and with superior performance requires considerable experience and a deeper knowledge of the underlying ideas and algorithms. This chapter takes a step back to consider basic regression in more detail, which in turn forms the foundation for discussing probabilistic models in the following chapters. This includes the important discussion of gradient descent as a learning algorithm.
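
Gradient descent as a learning algorithm can be sketched for the simplest regression case, a line fitted by mean squared error. The data and hyperparameters below are invented for illustration and do not come from the chapter.

```python
def fit_line(xs, ys, lr=0.05, epochs=2000):
    """Fit y = w*x + b by gradient descent on the mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Noiseless data generated from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
w, b = fit_line(xs, ys)
```

The same loop, with the loss and gradient swapped out, underlies the training of far larger models, which is why the chapter singles out gradient descent for discussion.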
