Abstract

This paper continues and further develops ideas introduced in Part I of this work. In particular, it is shown that the main parallel solution technique developed in Part I may be generalized to allow the parallel solution of a linear system with an arbitrary sparse coefficient matrix. This generalization requires the matrix to be partitioned into p blocks and then coarsened (preferably in parallel) so that each of p processors stores one entire submatrix plus a coarsened representation of the rest of the matrix. The linear systems involving these new matrices may then be solved concurrently to obtain approximations to the solution of the full problem, and these approximations may in turn be combined in an appropriate way to define a general parallel preconditioner. As well as providing an overview of this new algorithm, the paper addresses the issues associated with partitioning the sparse matrix and coarsening certain blocks of its rows and columns. The paper concludes with the presentation and discussion of some preliminary numerical results.