"R. Oğuz Selvitopi" <[email protected]> writes:

> Hello,
>
> Is it possible to partition a parallel matrix in PETSc as follows:
>
> A B C
> D E F
> G H I
>
> The blocks A-D-G belong to processor 0 (A is the diagonal block, D and
> G are off-diagonal blocks)
>
> The blocks B-E-H belong to processor 1 (E is the diagonal block, B and
> H are off-diagonal blocks)
>
> The blocks C-F-I belong to processor 2 (I is the diagonal block, C and
> F are off-diagonal blocks)

PETSc's parallel AIJ matrices are distributed by contiguous blocks of
rows, so the column partition you describe is exactly the row
partition of the transpose.  Assemble the transpose, then either use
MatMultTranspose or MatCreateTranspose.
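
Something like this (an untested sketch against a recent PETSc; the
global size N = 9 and the entries are illustrative):

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat      At, A; /* explicitly assembled A^T, implicit A */
    Vec      x, y;
    PetscInt N = 9; /* illustrative global size */

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    /* Assemble A^T as an ordinary row-distributed AIJ matrix: row i of
       A^T is column i of A, so the usual contiguous row partition of
       A^T gives each rank exactly the column block of A you describe. */
    PetscCall(MatCreate(PETSC_COMM_WORLD, &At));
    PetscCall(MatSetSizes(At, PETSC_DECIDE, PETSC_DECIDE, N, N));
    PetscCall(MatSetType(At, MATAIJ));
    PetscCall(MatSetUp(At));
    /* ... MatSetValues(At, ...) with the entries of A^T ... */
    PetscCall(MatAssemblyBegin(At, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(At, MAT_FINAL_ASSEMBLY));

    /* Option 1: wrap A^T so it applies as A without copying data. */
    PetscCall(MatCreateTranspose(At, &A));
    PetscCall(MatCreateVecs(A, &x, &y));
    PetscCall(VecSet(x, 1.0));
    PetscCall(MatMult(A, x, y)); /* y = A x */

    /* Option 2: skip the wrapper and call MatMultTranspose directly. */
    PetscCall(MatMultTranspose(At, x, y)); /* also y = A x */

    PetscCall(VecDestroy(&x));
    PetscCall(VecDestroy(&y));
    PetscCall(MatDestroy(&A));
    PetscCall(MatDestroy(&At));
    PetscCall(PetscFinalize());
    return 0;
  }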

> Or, is it possible to have nine processors, each owning one block of
> the matrix above? Block A belongs to processor 0, block B to
> processor 1, and so on...

Not with sparse matrices (except using MatShell, in which case you are
responsible for the implementation).  You can do it for dense matrices
(see MatElemental).
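
For the MatShell route, the skeleton is below (a sketch only: BlockCtx
and MyMatMult are hypothetical names, and the gather/reduce
communication for x and y is entirely yours to implement):

  #include <petscmat.h>

  /* Hypothetical context holding the single block this rank owns. */
  typedef struct {
    Mat block; /* this rank's block of the global matrix */
    /* ... scatters for gathering x and reducing partial y ... */
  } BlockCtx;

  /* User-defined y = A x: each rank applies its own block, then the
     ranks combine partial results (communication omitted here). */
  static PetscErrorCode MyMatMult(Mat A, Vec x, Vec y)
  {
    BlockCtx *ctx;
    PetscFunctionBeginUser;
    PetscCall(MatShellGetContext(A, &ctx));
    /* ... gather the needed piece of x, apply ctx->block, reduce into y ... */
    PetscCall(VecSet(y, 0.0)); /* placeholder for the real reduction */
    PetscFunctionReturn(PETSC_SUCCESS);
  }

  int main(int argc, char **argv)
  {
    Mat      A;
    BlockCtx ctx;
    PetscInt N = 9; /* illustrative global size */

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCall(MatCreateShell(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE,
                             N, N, &ctx, &A));
    PetscCall(MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMatMult));
    /* A now works with MatMult, KSP, etc.; the semantics are all yours. */
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }

For the dense case, MatSetType(A, MATELEMENTAL) gives you Elemental's
2D element-cyclic distribution instead.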
