OOPS linear equation solvers¶
This note describes the linear equation solvers implemented in OOPS. The solvers are designed to be as generic as possible, to allow their use with a variety of different vector and matrix classes.
Generic code design¶
The solvers are implemented as generic (templated) functions. The functions are templated on a type VECTOR and one or more matrix types (AMATRIX, PMATRIX, etc.). Thus, we require that all vector arguments are of the same class, but the matrices do not have to be derived from a single class.
In addition to the vector and matrix arguments, there are two arguments specifying the maximum number of iterations to be performed and the required reduction in residual norm. Iteration stops when either the iteration limit is reached or the residual norm has been reduced by the required factor. The return value of every solver is the reduction in residual norm that was actually achieved.
As a typical example, consider the IPCG solver:
template <typename VECTOR,
          typename AMATRIX,
          typename PMATRIX>
double IPCG(VECTOR & x,
            const VECTOR & b,
            const AMATRIX & A,
            const PMATRIX & precond,
            const int maxiter,
            const double tolerance);
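As an illustration of the calling convention, the sketch below wraps a call to IPCG in a generic helper. The helper name and the choice of 50 iterations and a factor-100 residual reduction are arbitrary, and any concrete types satisfying the interface described below can be used for the template parameters.

// Illustrative only: request at most 50 iterations or a 100-fold reduction
// of the residual norm, then inspect the reduction actually achieved.
template <typename VECTOR, typename AMATRIX, typename PMATRIX>
double solveExample(VECTOR & x, const VECTOR & b,
                    const AMATRIX & A, const PMATRIX & precond) {
  const double requested = 0.01;   // required reduction in residual norm
  const double achieved = IPCG(x, b, A, precond, 50, requested);
  if (achieved > requested) {
    // The iteration limit was reached before the requested reduction.
  }
  return achieved;
}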
For all the solvers, VECTOR is expected to implement basic linear algebra operations:
double dot_product(VECTOR &, VECTOR &);
operator=
operator+=
operator-=
operator*=   // multiplication of a VECTOR by a double
axpy         // u.axpy(a, v) sets u = u + a*v
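For concreteness, a minimal vector class satisfying these requirements might look as follows. This is purely illustrative; OOPS supplies its own vector classes, and all error handling and size checks are omitted here.

#include <cstddef>
#include <vector>

// A minimal vector satisfying the interface above (illustrative only).
class SimpleVector {
 public:
  explicit SimpleVector(std::size_t n, double val = 0.0) : data_(n, val) {}

  // Copy construction and copy assignment (operator=) use the
  // compiler-generated defaults.

  SimpleVector & operator+=(const SimpleVector & rhs) {
    for (std::size_t i = 0; i < data_.size(); ++i) data_[i] += rhs.data_[i];
    return *this;
  }
  SimpleVector & operator-=(const SimpleVector & rhs) {
    for (std::size_t i = 0; i < data_.size(); ++i) data_[i] -= rhs.data_[i];
    return *this;
  }
  SimpleVector & operator*=(const double a) {           // scaling by a double
    for (std::size_t i = 0; i < data_.size(); ++i) data_[i] *= a;
    return *this;
  }
  void axpy(const double a, const SimpleVector & v) {   // *this += a * v
    for (std::size_t i = 0; i < data_.size(); ++i) data_[i] += a * v.data_[i];
  }

  friend double dot_product(const SimpleVector & u, const SimpleVector & v) {
    double sum = 0.0;
    for (std::size_t i = 0; i < u.data_.size(); ++i) sum += u.data_[i] * v.data_[i];
    return sum;
  }

 private:
  std::vector<double> data_;
};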
The matrix classes are expected to implement:
void apply(const VECTOR&, VECTOR&) const
This function represents application (multiplication) of the matrix to the first argument, with the result returned in the second. For all the current algorithms, application of the preconditioner may be approximate, for example as the result of an iterative solution method.
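Similarly, a matrix class only needs to provide apply. The sketch below gives two illustrative examples that work with any conforming VECTOR: a scaled identity used as the system matrix, and an identity preconditioner (i.e. no preconditioning). In practice the preconditioner would typically encapsulate an approximate inverse of the system matrix, possibly computed iteratively.

// Illustrative matrix classes written purely in terms of the VECTOR interface.
template <typename VECTOR>
class ScaledIdentity {                 // represents A = a * I
 public:
  explicit ScaledIdentity(const double a) : a_(a) {}
  void apply(const VECTOR & x, VECTOR & y) const {
    y = x;                             // uses operator=
    y *= a_;                           // uses operator*=
  }
 private:
  double a_;
};

template <typename VECTOR>
class IdentityPrecond {                // no preconditioning
 public:
  void apply(const VECTOR & x, VECTOR & y) const { y = x; }
};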
The algorithms¶
The following algorithms are available:
IPCG: Inexact-Preconditioned Conjugate Gradients.
DRIPCG: “Derber-Rosati” Inexact-Preconditioned Conjugate Gradients.
GMRESR.
DRGMRESR: A “Derber-Rosati” version of GMRESR.
IPCG¶
Inexact Preconditioned Conjugate Gradients [GY99] is a slight variation on the well-known Preconditioned Conjugate Gradients (PCG) algorithm. Given an initial vector $x_0$, the algorithm sets $r_0 = b - A x_0$, $s_0 = P_0 r_0$ and $d_0 = s_0$, and then iterates, for $k = 0, 1, 2, \ldots$:

$\alpha_k = \frac{r_k^T s_k}{d_k^T A d_k}, \qquad x_{k+1} = x_k + \alpha_k d_k, \qquad r_{k+1} = r_k - \alpha_k A d_k,$

$s_{k+1} = P_{k+1} r_{k+1}, \qquad \beta_{k+1} = \frac{(r_{k+1} - r_k)^T s_{k+1}}{r_k^T s_k}, \qquad d_{k+1} = s_{k+1} + \beta_{k+1} d_k,$

where $P_k$ denotes the preconditioner applied at iteration $k$.

The algorithm differs from PCG only in the definition of $\beta_{k+1}$: PCG uses $\beta_{k+1} = (r_{k+1}^T s_{k+1}) / (r_k^T s_k)$, whereas IPCG uses the difference of successive residuals, as above. This slight modification requires additional storage for the vector $r_k$ (the residual from the preceding iteration), but allows the preconditioner to vary from iteration to iteration and to be applied only approximately.
Convergence results for IPCG are presented by [GY99]. Further results are given by [KL08].
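For illustration, these recurrences can be coded directly against the generic interface described above. The sketch below is not the OOPS implementation; it additionally assumes that VECTOR is copy-constructible.

#include <cmath>

// A sketch of the IPCG recurrences (illustration only, not the OOPS code).
template <typename VECTOR, typename AMATRIX, typename PMATRIX>
double ipcgSketch(VECTOR & x, const VECTOR & b,
                  const AMATRIX & A, const PMATRIX & precond,
                  const int maxiter, const double tolerance) {
  VECTOR w(b);
  A.apply(x, w);
  VECTOR r(b);
  r -= w;                                           // r0 = b - A x0

  VECTOR s(r);
  precond.apply(r, s);                              // s0 = P r0
  VECTOR d(s);                                      // d0 = s0
  VECTOR rold(r);                                   // previous residual
  VECTOR dr(r);                                     // r_{k+1} - r_k

  double dotRS = dot_product(r, s);
  const double r0norm = std::sqrt(dot_product(r, r));

  double reduction = 1.0;
  for (int k = 0; k < maxiter && reduction > tolerance; ++k) {
    A.apply(d, w);                                  // w = A d
    const double alpha = dotRS / dot_product(d, w);
    x.axpy(alpha, d);                               // x += alpha d
    rold = r;
    r.axpy(-alpha, w);                              // r -= alpha A d

    precond.apply(r, s);                            // s = P r (P may change with k)
    dr = r;
    dr -= rold;
    // IPCG: beta uses the difference of successive residuals, which is what
    // permits an inexact, iteration-dependent preconditioner.
    const double beta = dot_product(dr, s) / dotRS;

    d *= beta;
    d += s;                                         // d = s + beta d

    dotRS = dot_product(r, s);
    reduction = std::sqrt(dot_product(r, r)) / r0norm;
  }
  return reduction;                                 // achieved residual-norm reduction
}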
DRIPCG¶
In many applications in variational data assimilation, the matrix $A$ has the particular form

$A = B^{-1} + C,$

where $B$ is a symmetric, positive-definite (background-error covariance) matrix and $C$ is symmetric and positive semi-definite.
Furthermore, the matrix $B^{-1}$ may not be readily available (for example, $B$ may be defined only through its action as an operator), so it is desirable to avoid applications of $B^{-1}$ during the solution.
To avoid these applications, [DR89] modified PCG. However, their approach can be applied more widely, to a range of linear equation solvers. The essence of the approach is to introduce auxiliary (“hatted”) vectors:

$\hat{x}_k = B^{-1} x_k, \qquad \hat{d}_k = B^{-1} d_k .$

Defining $\hat{s}_k = \hat{P}_k r_k$, where $\hat{P}_k$ is the preconditioner supplied to the algorithm, and setting $s_k = B \hat{s}_k$, the IPCG recurrences become

$A d_k = \hat{d}_k + C d_k, \qquad r_{k+1} = r_k - \alpha_k (\hat{d}_k + C d_k),$

$\hat{d}_{k+1} = \hat{s}_{k+1} + \beta_{k+1} \hat{d}_k, \qquad \hat{x}_{k+1} = \hat{x}_k + \alpha_k \hat{d}_k,$

with the updates for $x_k$ and $d_k$ unchanged. Note that no applications of $B^{-1}$ are required: the hatted vectors are updated with the same coefficients as their unhatted counterparts, and the residual may be evaluated as $r_k = b - \hat{x}_k - C x_k$.
The Derber-Rosati algorithm is sometimes called “Double” PCG. We have adopted this nomenclature for algorithms that include similar modifications. Thus, we call the algorithm described above Derber-Rosati Inexact-Preconditioned Conjugate Gradients, or DRIPCG. The algorithm is closely related to CGMOD (Gratton, personal communication).
DRIPCG is algebraically equivalent to IPCG provided that the preconditioners are related by $P_k = B \hat{P}_k$, where $P_k$ is the preconditioner used by IPCG and $\hat{P}_k$ is the preconditioner supplied to DRIPCG. In particular, the choice $\hat{P}_k = I$ corresponds to IPCG preconditioned with $B$.
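To make the bookkeeping concrete, the sketch below shows how the residual can be evaluated without applying $B^{-1}$, assuming the hatted copy of the solution is maintained alongside it. The function and variable names are illustrative only.

// r = b - A x = b - B^{-1} x - C x, evaluated as b - xh - C x,
// where xh is the hatted copy of x (xh = B^{-1} x) carried by the algorithm.
template <typename VECTOR, typename CMATRIX>
void drResidual(const VECTOR & b, const VECTOR & x, const VECTOR & xh,
                const CMATRIX & C, VECTOR & r) {
  VECTOR cx(x);
  C.apply(x, cx);   // cx = C x
  r = b;
  r -= xh;          // subtract B^{-1} x without applying B^{-1}
  r -= cx;          // subtract C x
}

// Both copies of each vector are updated with the same coefficients, e.g.
//   x.axpy(alpha, d);   xh.axpy(alpha, dh);
// so xh remains equal to B^{-1} x throughout the iteration.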
GMRESR¶
GMRESR [VdVV94] is a robust algorithm for square, non-symmetric systems. Like IPCG, it allows the preconditioner to vary from iteration to iteration. The algorithm starts with $r_0 = b - A x_0$. At each iteration $k$ it generates a tentative search direction $u_k = P_k r_{k-1}$ by applying the (possibly approximate) preconditioner to the current residual, together with $c_k = A u_k$. This pair is explicitly orthogonalized against the pairs from all previous iterations and $c_k$ is normalized, after which the solution and residual are updated as

$x_k = x_{k-1} + (c_k^T r_{k-1}) \, u_k, \qquad r_k = r_{k-1} - (c_k^T r_{k-1}) \, c_k .$
For a symmetric matrix and constant symmetric positive definite (SPD) preconditioner, GMRESR is algebraically equivalent to PCG. In this case, the explicit orthogonalization of each new direction against all previous directions is redundant in exact arithmetic; its benefit is additional robustness when the preconditioner varies, is applied only approximately, or when rounding errors accumulate.
The storage requirements of GMRESR are significant, since the vectors $u_k$ and $c_k$ must be retained for all iterations: two additional vectors are stored for every iteration performed.
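For illustration, the outer GMRESR loop can be sketched against the same generic interface. Again, this is not the OOPS implementation; it assumes that VECTOR is copy-constructible so that the growing sets of vectors can be kept in standard containers.

#include <cmath>
#include <cstddef>
#include <vector>

// A sketch of the GMRESR outer loop (illustration only, not the OOPS code).
template <typename VECTOR, typename AMATRIX, typename PMATRIX>
double gmresrSketch(VECTOR & x, const VECTOR & b,
                    const AMATRIX & A, const PMATRIX & precond,
                    const int maxiter, const double tolerance) {
  VECTOR w(b);
  A.apply(x, w);
  VECTOR r(b);
  r -= w;                                    // r0 = b - A x0

  const double r0norm = std::sqrt(dot_product(r, r));
  double reduction = 1.0;

  std::vector<VECTOR> us;                    // search directions u_i
  std::vector<VECTOR> cs;                    // c_i = A u_i, kept orthonormal

  for (int k = 0; k < maxiter && reduction > tolerance; ++k) {
    VECTOR u(r);
    precond.apply(r, u);                     // u ~ A^{-1} r (may be approximate)
    VECTOR c(u);
    A.apply(u, c);                           // c = A u

    // Explicitly orthogonalize c against all previous c_i,
    // applying the same updates to u.
    for (std::size_t i = 0; i < cs.size(); ++i) {
      const double proj = dot_product(cs[i], c);
      c.axpy(-proj, cs[i]);
      u.axpy(-proj, us[i]);
    }
    const double cnorm = std::sqrt(dot_product(c, c));
    c *= 1.0 / cnorm;
    u *= 1.0 / cnorm;

    const double gamma = dot_product(c, r);
    x.axpy(gamma, u);                        // update the solution
    r.axpy(-gamma, c);                       // update the residual

    cs.push_back(c);                         // both vectors must be retained,
    us.push_back(u);                         // hence the storage cost

    reduction = std::sqrt(dot_product(r, r)) / r0norm;
  }
  return reduction;
}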
DRGMRESR¶
A “Derber-Rosati” version of GMRESR is easy to derive. As in the case of DRIPCG, we write $A = B^{-1} + C$ and define hatted vectors $\hat{u}_k = B^{-1} u_k$. Applying the preconditioner in hatted space, so that $\hat{u}_k = \hat{P}_k r_{k-1}$ and $u_k = B \hat{u}_k$, we have:

$c_k = A u_k = \hat{u}_k + C u_k ,$

and the orthogonalization coefficients are applied to $u_k$, $\hat{u}_k$ and $c_k$ alike.
As in the case of DRIPCG, no applications of $B^{-1}$ are required, and DRGMRESR is algebraically equivalent to GMRESR when the preconditioners are related by $P_k = B \hat{P}_k$.
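A sketch of the corresponding step inside the GMRESR loop (illustrative names only, not the OOPS implementation):

// With A = B^{-1} + C, the preconditioner is applied in "hatted" space and
// B recovers the unhatted direction, so B^{-1} is never applied.
template <typename VECTOR, typename BMATRIX, typename CMATRIX, typename PMATRIX>
void drGmresrDirection(const VECTOR & r, const BMATRIX & B, const CMATRIX & C,
                       const PMATRIX & precond,
                       VECTOR & u, VECTOR & uh, VECTOR & c) {
  precond.apply(r, uh);   // uh: hatted search direction
  B.apply(uh, u);         // u = B uh
  C.apply(u, c);          // c = C u
  c += uh;                // c = A u = B^{-1} u + C u
}

// The subsequent orthogonalization and normalization steps are applied to
// u, uh and c with the same coefficients, so uh remains equal to B^{-1} u.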