*Rob H. Bisseling*

- Published in print:
- 2004
- Published Online:
- September 2007
- ISBN:
- 9780198529392
- eISBN:
- 9780191712869
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198529392.003.0002
- Subject:
- Mathematics, Applied Mathematics

This chapter discusses parallel dense matrix computations, in particular the solution of linear systems by LU decomposition with partial pivoting. A general Cartesian scheme is presented for the distribution of matrices. Based on BSP cost analysis, the square cyclic distribution is proposed as particularly suitable for matrix computations such as LU decomposition or Gaussian elimination. The chapter introduces two-phase broadcasting of vectors, which is a useful collective communication method for sending copies of matrix rows or columns to a group of processors. These techniques are demonstrated in the specific case of LU decomposition, but they are applicable to almost all parallel matrix computations. The performance of the parallel LU program is examined in detail using a graphical BSP profiler.
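As an illustrative sketch (not code from the chapter), the cyclic distribution the abstract refers to can be written down in a few lines: entry a(i, j) of the matrix is assigned to processor (i mod M, j mod N) on an M × N processor grid, and the square cyclic case takes M = N = √p. The helper names below are hypothetical.

```python
# Sketch of the cyclic matrix distribution: entry (i, j) is owned by
# processor (i mod M, j mod N) on an M x N processor grid.  The "square
# cyclic" distribution is the special case M = N = sqrt(p).

def owner(i, j, M, N):
    """Grid coordinates of the processor owning matrix entry (i, j)."""
    return (i % M, j % N)

def local_entries(s, t, n, M, N):
    """Global indices (i, j) of an n x n matrix held by processor (s, t)."""
    return [(i, j) for i in range(s, n, M) for j in range(t, n, N)]
```

For n = 4 on a 2 × 2 grid, processor (0, 0) holds the entries with even row and even column index; every processor owns entries spread over the whole matrix, which keeps the load balanced as LU decomposition shrinks the active submatrix.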

*I. S. Duff, A. M. Erisman, and J. K. Reid*

- Published in print:
- 2017
- Published Online:
- April 2017
- ISBN:
- 9780198508380
- eISBN:
- 9780191746420
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198508380.003.0003
- Subject:
- Mathematics, Numerical Analysis

We review the fundamental operations in the direct solution of linear equations without concern for rounding error caused by computer arithmetic. We consider the relationship between Gaussian elimination and LU factorization. We compare the use of different computational sequences including left-looking and right-looking. We show that advantage can be taken of symmetry and we consider the use of blocking. Our choice of material is based on what will be useful in the sparse case.
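A minimal sketch of the right-looking computational sequence mentioned in the abstract (illustrative only, assuming a small matrix and no pivoting):

```python
def lu_right_looking(A):
    """In-place right-looking LU factorization without pivoting.

    Afterwards the strict lower triangle of A holds L (unit diagonal
    implied) and the upper triangle holds U.  Assumes no zero pivot arises.
    """
    n = len(A)
    for k in range(n):                 # for each pivot column in turn
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]         # multipliers: column k of L
            for j in range(k + 1, n):  # rank-1 update of the trailing block
                A[i][j] -= A[i][k] * A[k][j]
    return A
```

A left-looking variant performs the same arithmetic in a different order: instead of updating the whole trailing submatrix at step k, it applies all updates from columns 0..k-1 to column k just before factorizing it. The choice matters for memory traffic, which is why the chapter compares the sequences.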

*I. S. Duff, A. M. Erisman, and J. K. Reid*

- Published in print:
- 2017
- Published Online:
- April 2017
- ISBN:
- 9780198508380
- eISBN:
- 9780191746420
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198508380.003.0005
- Subject:
- Mathematics, Numerical Analysis

This chapter provides an introduction to the use of Gaussian elimination for solving sets of linear equations that are sparse. We examine the three principal phases of most computer programs for this task: ANALYSE, FACTORIZE, and SOLVE. We stress the importance of acceptable overheads and of numerical stability, but are not concerned with algorithmic or implementation details, which are the subjects of later chapters.
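The division of labour between the three phases can be sketched as follows. This toy (not code from the book) works on dense lists and uses a crude "fewest nonzeros first" ordering in place of a real fill-reducing heuristic; the key point is that ANALYSE sees only the sparsity pattern, FACTORIZE does the numeric work once an ordering is fixed, and SOLVE reuses the factors.

```python
def analyse(pattern):
    """ANALYSE: choose a pivot order from the sparsity pattern alone.
    Toy heuristic: order rows/columns by ascending nonzero count."""
    counts = [sum(row) for row in pattern]
    return sorted(range(len(pattern)), key=lambda i: counts[i])

def factorize(A, order):
    """FACTORIZE: numeric LU (no pivoting) of the symmetrically permuted
    matrix; L (strict lower, unit diagonal) and U are packed together."""
    n = len(A)
    B = [[A[order[i]][order[j]] for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            B[i][k] /= B[k][k]
            for j in range(k + 1, n):
                B[i][j] -= B[i][k] * B[k][j]
    return B

def solve(LU, order, b):
    """SOLVE: forward substitution with L, back substitution with U."""
    n = len(LU)
    y = [b[order[i]] for i in range(n)]
    for i in range(n):                 # L y = P b  (unit diagonal L)
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    for i in reversed(range(n)):       # U z = y
        for j in range(i + 1, n):
            y[i] -= LU[i][j] * y[j]
        y[i] /= LU[i][i]
    x = [0.0] * n
    for i in range(n):
        x[order[i]] = y[i]             # undo the symmetric permutation
    return x
```

In practice ANALYSE is run once per sparsity pattern, FACTORIZE once per matrix, and SOLVE once per right-hand side, which is why keeping their overheads acceptable matters.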

*I. S. Duff, A. M. Erisman, and J. K. Reid*

- Published in print:
- 2017
- Published Online:
- April 2017
- ISBN:
- 9780198508380
- eISBN:
- 9780191746420
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198508380.003.0001
- Subject:
- Mathematics, Numerical Analysis

The use of graph theory to ‘visualize’ the relationship between sparsity patterns and Gaussian elimination is introduced. The potential of significant savings from the exploitation of sparsity is illustrated by one example. The effect of computer hardware on efficient computation is discussed. Realization of sparsity means more than faster solutions; it affects the formulation of mathematical models and the feasibility of solving them.
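The graph view can be made concrete with a small sketch (illustrative, not code from the book): the pattern of a symmetric matrix is an undirected graph, and eliminating a vertex joins all of its remaining neighbours into a clique. Any edge so created is fill-in, i.e. a new nonzero in the factor.

```python
def eliminate(adj, v):
    """Eliminate vertex v from the graph adj (dict: vertex -> set of
    neighbours), cliquing its neighbours.  Returns the set of fill edges."""
    nbrs = sorted(adj.pop(v))
    for u in nbrs:
        adj[u].discard(v)
    fill = set()
    for a in nbrs:                # connect every pair of former neighbours
        for b in nbrs:
            if a < b and b not in adj[a]:
                adj[a].add(b)
                adj[b].add(a)
                fill.add((a, b))
    return fill
```

For an "arrow" pattern in which vertex 0 is joined to every other vertex, eliminating vertex 0 first fills in the entire trailing block, while eliminating it last creates no fill at all: the elimination order alone decides whether sparsity is preserved.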