Please check out the manual page for MatSetSizes():
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetSizes.html
Basically you have two choices:

1/ Define the global size of the matrix and use PETSC_DECIDE for the local sizes. In this case, PETSc will choose the local row sizes so that each process gets approximately the same number of rows.

2/ Define the local sizes yourself and use PETSC_DETERMINE for the global size. This gives you full control over the parallel layout.

The functions described on these pages

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetSize.html
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetLocalSize.html
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetOwnershipRanges.html
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetOwnershipRangesColumn.html

might also be useful for double-checking what the matrix decomposition looks like (see the short sketch at the end of this message).

Cheers,
Dave

On 8 January 2014 12:26, mary sweat <[email protected]> wrote:
> My goal is the following. I have a huge linear system with a huge sparse
> matrix, nothing to do with PDEs. How is the system split between
> processes? Does the suggested book answer this?
> Thanks again
>
>
> On Tuesday, 7 January 2014 at 17:34, Jed Brown <[email protected]>
> wrote:
> mary sweat <[email protected]> writes:
>
> > Hi all, I need to know how KSP separates and distributes the domain
> > between processes, and how the processes share and communicate
> > intermediate results. Is there any good documentation about it?
>
> The communication is in the Mat and Vec functions. You can see it
> summarized in -log_summary. For the underlying theory, see Barry's
> book.
>
> http://www.mcs.anl.gov/~bsmith/ddbook.html
>
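For illustration, here is a minimal sketch, assuming a square matrix with a made-up global size N = 100; it uses the first choice (PETSC_DECIDE for the local sizes) and then queries the resulting decomposition. MatGetOwnershipRange() is the per-process variant of the MatGetOwnershipRanges() linked above. This is just an assumed example, not code from the thread.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       N = 100;                       /* assumed global size */
  PetscInt       M, Ncols, m, n, rstart, rend;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);

  /* Choice 1: give the global size, let PETSc decide the local row split */
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
  /* Choice 2 would instead be:
     MatSetSizes(A, mlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE); */

  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);

  /* Double-check the decomposition PETSc chose */
  ierr = MatGetSize(A, &M, &Ncols);CHKERRQ(ierr);
  ierr = MatGetLocalSize(A, &m, &n);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_SELF,
                     "global %d x %d, local %d x %d, owned rows [%d, %d)\n",
                     (int)M, (int)Ncols, (int)m, (int)n, (int)rstart, (int)rend);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Run with, e.g., mpiexec -n 4 ./example and each process prints its own row range; adding -log_summary (as Jed mentions above) summarizes the communication performed in the Mat and Vec operations.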
