Many applications in scientific computing require the solution of a system of linear equations Ax=b, where A is a large, real, sparse, nonsingular matrix with positive real part. Such discrete linear problems commonly arise when computing numerical solutions to steady-state, time-dependent, nonlinear, and optimization problems based on partial differential equation (PDE) and other models. For many applications, the computation within the algorithms that solve these linear problems dominates the total simulation time, so more efficient algorithms can dramatically enhance the usefulness of the simulation.
Multigrid methods are known to provide optimal solvers for a large class of these problems. The favorable convergence properties of these methods stem from combining two complementary error-reduction processes: a local relaxation process, and a coarse-grid correction. It is expected that the relaxation process efficiently attenuates most of the error components. The remaining error is referred to as algebraically smooth error with respect to the given relaxation. The aim is then to construct coarser spaces with much smaller dimension and sparse bases to accurately represent the algebraically smooth error. As a result, an adequate two-grid method is obtained. Recursive application of relaxation and coarse-grid correction results in an efficient multigrid method, provided the sparsity of coarser problems is controlled. Choosing a relaxation with good smoothing properties and constructing coarse subspaces with adequate approximation properties (and the associated multigrid intergrid transfer operators) are the general goals when designing a multigrid method.
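For illustration, the two complementary processes described above can be sketched as a minimal two-grid cycle: a few sweeps of weighted-Jacobi relaxation, a Galerkin coarse-grid correction through a prolongator P, and post-relaxation. This is a dense NumPy sketch under our own naming and parameter choices (ω = 2/3, two sweeps per side), not a specific method from the thesis:

```python
import numpy as np

def two_grid(A, b, x, P, nu=2, omega=2.0 / 3.0):
    """One two-grid cycle: weighted-Jacobi relaxation plus a
    Galerkin coarse-grid correction through the prolongator P.
    (Illustrative sketch; parameter choices are ours.)"""
    Dinv = 1.0 / np.diag(A)
    for _ in range(nu):                        # pre-relaxation
        x = x + omega * Dinv * (b - A @ x)
    Ac = P.T @ A @ P                           # Galerkin coarse operator
    xc = np.linalg.solve(Ac, P.T @ (b - A @ x))
    x = x + P @ xc                             # coarse-grid correction
    for _ in range(nu):                        # post-relaxation
        x = x + omega * Dinv * (b - A @ x)
    return x
```

Relaxation damps the oscillatory error, while the correction through P removes the remaining smooth components; applying the cycle recursively to the coarse problem Ac gives a multigrid method.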
Instead of making assumptions about the geometric smoothness of algebraically smooth error, the coarsening techniques considered in this thesis construct coarse spaces within the smoothed aggregation (SA) framework, where interpolation operators are chosen to accurately approximate a given set of algebraically smooth prototypes. Previously, adaptive smoothed aggregation (αSA) was applied to symmetric problems for which a representative set of algebraically smooth error vectors is neither obvious nor supplied. In this thesis we present two extensions to the αSA framework: efficient eigensolvers that develop algebraically smooth error components for SPD problems, and the design of αSA-based solvers for nonsymmetric problems.
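The basic SA construction of an interpolation operator from a smooth prototype can be sketched as follows: a tentative prolongator injects the prototype (here the constant vector) over each aggregate, and one damped-Jacobi step smooths its columns. This is a dense illustrative sketch with our own names, not the implementation used in the thesis:

```python
import numpy as np

def sa_prolongator(A, aggregates, omega=2.0 / 3.0):
    """Build an SA prolongator: a tentative prolongator carrying the
    constant prototype over each aggregate, followed by one
    damped-Jacobi smoothing step. (Illustrative sketch.)"""
    n = A.shape[0]
    nc = max(aggregates) + 1
    P_tent = np.zeros((n, nc))
    for i, agg in enumerate(aggregates):
        P_tent[i, agg] = 1.0                  # constant over each aggregate
    P_tent /= np.linalg.norm(P_tent, axis=0)  # normalize columns
    Dinv = np.diag(1.0 / np.diag(A))
    return (np.eye(n) - omega * Dinv @ A) @ P_tent
```

Smoothing the tentative prolongator widens the support of its basis functions, which improves the approximation of smooth error at a modest cost in coarse-operator sparsity.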
We claim that approximate solutions to certain eigenproblems, obtained with O(n) setup cost, represent algebraically smooth error well enough to build an optimal SA solver. Such eigenproblems are available within the αSA framework for problems that demand this enhancement. We then discuss the design of a generalized eigensolver based on smoothed aggregation (GES-SA), a sparse iterative eigensolver that uses the same data structures as the SA framework. We report numerical results for simple problems, which suggest that GES-SA is an efficient way to build an optimal SA multigrid hierarchy.
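The underlying adaptive idea, developing a smooth prototype by relaxation rather than assuming one, can be illustrated by relaxing on the homogeneous system Ax = 0 from a random start: whatever relaxation cannot remove is, by definition, algebraically smooth. This sketch shows only that basic setup step, not the GES-SA algorithm itself:

```python
import numpy as np

def smooth_prototype(A, sweeps=20, omega=2.0 / 3.0, seed=0):
    """Expose algebraically smooth error by relaxing on A x = 0 from
    a random initial guess (the basic adaptive-setup idea; this is a
    sketch, not GES-SA)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    Dinv = 1.0 / np.diag(A)
    for _ in range(sweeps):
        x = x - omega * Dinv * (A @ x)   # weighted Jacobi toward zero
    return x / np.linalg.norm(x)
```

For an SPD matrix, the resulting vector is rich in eigenvectors associated with small eigenvalues, which is exactly the error the coarse space must capture.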
Additionally, we design a nonsymmetric SA framework, based on the singular value decomposition of A, that reduces to the original SA framework for SPD problems. Within this framework, we provide a nonsymmetric version of the strong approximation property, which is commonly employed in convergence analysis for SPD matrices. Preliminary two-level convergence analysis is presented, suggesting that more relaxation should be applied at each level of a multigrid solver for highly nonsymmetric problems. Several features new to the SA framework are explored and developed in the design of a nonsymmetric version of αSA; these are also presented. We report numerical results showing that our framework successfully creates optimal SA solvers for certain nonsymmetric problems.
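The role of the SVD in a nonsymmetric setting can be illustrated with a Petrov-Galerkin coarse pair: the right singular vectors associated with the smallest singular values span the range of interpolation, and the matching left singular vectors define restriction. This dense sketch is purely illustrative; the framework in the thesis works with sparse SA-style bases, not full SVDs:

```python
import numpy as np

def svd_coarse_pair(A, nc):
    """Illustrative Petrov-Galerkin coarse pair from the SVD of A:
    right singular vectors with the smallest singular values span
    range(P); the matching left singular vectors build R. (Sketch
    only -- a practical method would use sparse approximations.)"""
    U, s, Vt = np.linalg.svd(A)
    P = Vt[-nc:].T            # smallest right singular vectors
    R = U[:, -nc:].T          # matching left singular vectors
    Ac = R @ A @ P            # Petrov-Galerkin coarse operator
    return P, R, Ac
```

For an SPD matrix the left and right singular vectors coincide, so R = P^T and the construction reduces to the usual Galerkin coarse operator.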