F07 Class

This chapter provides methods for the solution of systems of simultaneous linear equations, and associated computations. It provides methods for
  • matrix factorizations;
  • solution of linear equations;
  • estimating matrix condition numbers;
  • computing error bounds for the solution of linear equations;
  • matrix inversion;
  • computing scaling factors to equilibrate a matrix.
Methods are provided for both real and complex data.
The methods in this chapter (F07 class) handle only dense and band matrices (not matrices with more specialized structures, or general sparse matrices).
The methods in this chapter have all been derived from the LAPACK project (see Anderson et al. (1999)). They have been designed to be efficient on a wide range of high-performance computers, without compromising efficiency on conventional serial machines.

Syntax

C#
public static class F07
Visual Basic (Declaration)
Public NotInheritable Class F07
Visual C++
public ref class F07 abstract sealed
F#
[<AbstractClassAttribute>]
[<SealedAttribute>]
type F07 = class end

Background to the Problems

Notation

Matrix Factorizations

Solution of Systems of Equations

Sensitivity and Error Analysis

Normwise error bounds

Frequently, in practical problems the data A and b are not known exactly, and it is then important to understand how uncertainties or perturbations in the data can affect the solution.
Similar considerations apply when we study the effects of rounding errors introduced by computation in finite precision. The effects of rounding errors can be shown to be equivalent to perturbations in the original data, such that ‖δA‖/‖A‖ and ‖δb‖/‖b‖ are usually at most p(n)ε, where ε is the machine precision and p(n) is an increasing function of n which is seldom larger than 10n (although in theory it can be as large as 2^(n−1)).
In other words, the computed solution x̂ is the exact solution of a perturbed system (A + δA)x̂ = b + δb which is close to the original system in a normwise sense.
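One standard measure of this is the normwise relative backward error η(x̂) = ‖b − Ax̂‖∞ / (‖A‖∞ ‖x̂‖∞ + ‖b‖∞): if η(x̂) is of order ε, then x̂ is the exact solution of a nearby system in the sense above. The sketch below computes this quantity directly for a dense system; it is purely illustrative, and the class and method names are hypothetical rather than part of the library.

C#
using System;

// Hypothetical helper, not part of the NAG Library: computes the normwise
// backward error eta = ||b - A*xhat||_inf / (||A||_inf * ||xhat||_inf + ||b||_inf).
// A small value (of order machine precision) certifies that xhat exactly
// solves a nearby system (A + dA) xhat = b + db.
static class BackwardError
{
    public static double NormwiseEta(double[,] a, double[] xhat, double[] b)
    {
        int n = b.Length;
        double rNorm = 0.0, aNorm = 0.0, xNorm = 0.0, bNorm = 0.0;
        for (int i = 0; i < n; i++)
        {
            double ri = b[i], rowSum = 0.0;
            for (int j = 0; j < n; j++)
            {
                ri -= a[i, j] * xhat[j];           // residual component (b - A*xhat)_i
                rowSum += Math.Abs(a[i, j]);       // row sum for ||A||_inf
            }
            rNorm = Math.Max(rNorm, Math.Abs(ri));
            aNorm = Math.Max(aNorm, rowSum);
            xNorm = Math.Max(xNorm, Math.Abs(xhat[i]));
            bNorm = Math.Max(bNorm, Math.Abs(b[i]));
        }
        return rNorm / (aNorm * xNorm + bNorm);
    }
}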

Estimating condition numbers

The previous section has emphasised the usefulness of the quantity κ(A) in understanding the sensitivity of the solution of Ax = b. To compute the value of κ(A) from equation (3) is more expensive than solving Ax = b in the first place. Hence it is standard practice to estimate κ(A), in either the 1-norm or the ∞-norm, by a method which only requires O(n²) additional operations, assuming that a suitable factorization of A is available.
The method used in this chapter is Higham's modification of Hager's method (see Higham (1988)). It yields an estimate which is never larger than the true value, but which seldom falls short by more than a factor of 3 (although artificial examples can be constructed where it is much smaller). This is acceptable since it is the order of magnitude of κ(A) which is important rather than its precise value.
Because κ(A) is infinite if A is singular, the methods in this chapter actually return the reciprocal of κ(A).
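The structure of Hager's estimator is easy to sketch: it estimates ‖A⁻¹‖₁ by a steepest-ascent search over unit vectors, using only solves with A and Aᵀ, which are cheap once A has been factorized. The code below is an illustrative outline, not the library's implementation; the delegate and method names are assumptions.

C#
using System;

// Illustrative sketch of Hager's 1-norm estimator. solveA(v) must return
// A^{-1} v and solveAT(v) must return A^{-T} v, e.g. by forward and back
// substitution with an existing LU factorization of A.
static class ConditionEstimate
{
    public static double InverseOneNorm(int n,
        Func<double[], double[]> solveA, Func<double[], double[]> solveAT)
    {
        var x = new double[n];
        for (int i = 0; i < n; i++) x[i] = 1.0 / n;       // starting vector
        double estimate = 0.0;
        for (int iter = 0; iter < 5; iter++)              // a few sweeps suffice in practice
        {
            double[] y = solveA(x);                       // y = A^{-1} x
            estimate = 0.0;
            for (int i = 0; i < n; i++) estimate += Math.Abs(y[i]);   // ||y||_1
            var xi = new double[n];
            for (int i = 0; i < n; i++) xi[i] = y[i] >= 0.0 ? 1.0 : -1.0;
            double[] z = solveAT(xi);                     // z = A^{-T} sign(y)
            int jMax = 0;
            double zInf = 0.0, zTx = 0.0;
            for (int i = 0; i < n; i++)
            {
                zTx += z[i] * x[i];
                if (Math.Abs(z[i]) > zInf) { zInf = Math.Abs(z[i]); jMax = i; }
            }
            if (zInf <= zTx) break;                       // gradient test: at a local maximum
            Array.Clear(x, 0, n);
            x[jMax] = 1.0;                                // restart from the unit vector e_j
        }
        return estimate;                                  // estimate of ||A^{-1}||_1
    }
}

The reciprocal condition number then follows as 1 / (‖A‖₁ × the returned estimate), which corresponds to the value the methods in this chapter return.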

Scaling and Equilibration

Componentwise error bounds

Iterative refinement of the solution

Matrix Inversion

Packed Storage

Band and Tridiagonal Matrices

The LU factorization for general matrices, and the Cholesky factorization for symmetric and Hermitian positive-definite matrices both preserve bandedness. Hence methods are provided which take advantage of the band structure when solving systems of linear equations.
The Cholesky factorization preserves bandedness in a very precise sense: the factor U or L has the same number of superdiagonals or subdiagonals as the original matrix. In the LU factorization, the row interchanges modify the band structure: if A has kl subdiagonals and ku superdiagonals, then L is not a band matrix but still has at most kl nonzero elements below the diagonal in each column, and U has at most kl + ku superdiagonals.
The Bunch–Kaufman factorization does not preserve bandedness, because of the need for symmetric row-and-column permutations; hence no methods are provided for symmetric indefinite band matrices.
The inverse of a band matrix does not in general have a band structure, so no methods are provided for computing inverses of band matrices.
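The band methods rely on the LAPACK band-storage convention, in which each diagonal of the matrix occupies one row of a compact (kl + ku + 1)-by-n array. The sketch below, with illustrative names and 0-based indexing, packs a dense matrix into that layout.

C#
using System;

// Sketch of LAPACK-style band storage: element a[i,j] of an n-by-n matrix
// with kl subdiagonals and ku superdiagonals is stored at ab[ku + i - j, j].
// Entries outside the band are simply never stored.
static class BandStorage
{
    public static double[,] Pack(double[,] a, int n, int kl, int ku)
    {
        var ab = new double[kl + ku + 1, n];              // one row per diagonal
        for (int j = 0; j < n; j++)
            for (int i = Math.Max(0, j - ku); i <= Math.Min(n - 1, j + kl); i++)
                ab[ku + i - j, j] = a[i, j];
        return ab;
    }
}

Note that, because of the fill-in described above, the underlying LAPACK LU factorization routines for band matrices require the packed array to provide kl extra rows of workspace (2 kl + ku + 1 rows in all).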

Block Partitioned Algorithms

The performance of a block partitioned algorithm varies to some extent with the blocksize – that is, the number of rows or columns per block. This is a machine-dependent parameter, which is set to a suitable value when the library is implemented on each range of machines. You do not normally need to be aware of what value is being used. Different block sizes may be used for different methods. Values in the range 16 to 64 are typical.
On some machines there may be no advantage from using a block partitioned algorithm, in which case the methods use an unblocked algorithm (effectively a blocksize of 1), relying solely on calls to the Level 2 BLAS (see F06 class).
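The effect of block partitioning is easiest to see in a toy kernel. The sketch below (not library code) restructures a matrix product into nb-by-nb tiles; with nb = 1 it reduces to the unblocked triple loop, while a larger nb keeps each tile in cache while it is reused, which is how blocked algorithms cast most of their work onto the Level 3 BLAS.

C#
using System;

// Toy illustration of block partitioning: C += A * B computed tile by tile.
// nb plays the role of the machine-dependent blocksize discussed above.
static class Blocked
{
    public static void MultiplyBlocked(double[,] a, double[,] b, double[,] c,
                                       int n, int nb)
    {
        for (int ii = 0; ii < n; ii += nb)
            for (int kk = 0; kk < n; kk += nb)
                for (int jj = 0; jj < n; jj += nb)
                    // Update tile C[ii.., jj..] with A[ii.., kk..] * B[kk.., jj..].
                    for (int i = ii; i < Math.Min(ii + nb, n); i++)
                        for (int k = kk; k < Math.Min(kk + nb, n); k++)
                        {
                            double aik = a[i, k];
                            for (int j = jj; j < Math.Min(jj + nb, n); j++)
                                c[i, j] += aik * b[k, j];
                        }
    }
}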

Mixed Precision LAPACK Routines

References

Inheritance Hierarchy

See Also