Qin Lu <[email protected]> writes:

> Jed,
>
> What about MatCreateSeqAIJWithArrays? Is it also implemented by looping
> over the rows calling MatSetValues?
No, it uses the arrays directly because that format makes sense in serial.

> My CRS matrix is constructed in the master processor when using a
> parallel solver.

This will absolutely cripple your scalability. If you're just trying to
use a few cores, then fine, but if you care about scalability, you need
to rethink your design.

> Do I have to manually partition it (using metis, for instance) and
> distribute it to all processors using MPI, or does PETSc have any
> subroutines to do this job?

You should really partition the *mesh*, then assemble the local parts in
parallel. The alternative is to partition the matrix entries (renumbering
from your "native" ordering) and then call MatSetValues (mapping the
indices) from rank 0. That part is not scalable and may only make sense
if you have a difficult problem or many systems to solve. Better to do
it right and assemble in parallel. It's not difficult.
