Jed,
 
What about MatCreateSeqAIJWithArrays? Is it also implemented by looping over
the rows and calling MatSetValues?
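
For concreteness, I mean a call like this (a rough sketch; n is the
matrix size and ia, ja, va are my program's 0-based CRS arrays):

    Mat A;
    /* As I understand the docs, the arrays are used in place, so they
       must stay valid until the matrix is destroyed. */
    MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, n, n, ia, ja, va, &A);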
 
My CRS matrix is constructed on the master processor when using a parallel
solver. Do I have to partition it manually (using METIS, for instance) and
distribute it to all processors using MPI, or does PETSc have subroutines to
do this job?
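
For example, would something like the following sketch work, where only
the master inserts entries and PETSc forwards the off-process rows
during assembly? (A rough sketch with error checking omitted; n is the
global size and ia, ja, va are the 0-based CRS arrays on the master.)

    Mat         A;
    PetscMPIInt rank;

    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);
    if (rank == 0) {                      /* master holds the whole CRS matrix */
      for (PetscInt i = 0; i < n; i++) {  /* insert one row at a time */
        PetscInt ncols = ia[i+1] - ia[i];
        MatSetValues(A, 1, &i, ncols, ja + ia[i], va + ia[i], INSERT_VALUES);
      }
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); /* off-process rows communicated here */
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);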
 
Thanks,
Qin



On Thursday, October 31, 2013 3:50 PM, Jed Brown <[email protected]> wrote:
  
Qin Lu <[email protected]> writes:


> Jed,
>
> Thanks a lot for your nice suggestions. The CRS matrix has already
> been created by the program and I don't want to change that. Do you
> mean I should read the arrays (i, j, a) and set coefficients row by
> row using MatSetValues? Will it be much slower than passing the
> arrays directly to MatCreateSeqAIJWithArrays or
> MatCreateMPIAIJWithArrays, especially when the matrix is big?

Are your *parallel* matrices already assembled in that form (which is
not suitable to compute with)?

In any case, MatCreateMPIAIJWithArrays is *implemented* by looping over
the rows and calling MatSetValues, roughly as in the sketch below.  The
best approach is to generate the matrix by row or element and insert at
that time, but that is mostly for memory reasons.  Copying the entries
by row is not that expensive.
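
Schematically (an untested sketch, not the actual source: assume
petscmat.h is included, N is the global size, each rank owns m rows
starting at global row rstart, and ia, ja, va are that rank's 0-based
local CRS arrays):

    Mat A;
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, m, PETSC_DECIDE, N, N);
    MatSetFromOptions(A);
    MatSetUp(A);
    for (PetscInt i = 0; i < m; i++) {   /* copy the local rows one at a time */
      PetscInt row = rstart + i, ncols = ia[i+1] - ia[i];
      MatSetValues(A, 1, &row, ncols, ja + ia[i], va + ia[i], INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);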
