On Sep 16, 2014, at 6:37 AM, Florian Lindner <[email protected]> wrote:

> Hello,
> 
> I'm currently replacing an RBF implementation with petsc linear algebra. The 
> program itself runs in parallel using MPI, but the piece of code I work on runs 
> strictly sequentially without making any use of MPI, just the same code on 
> every node. Right now we're more interested in petsc's sparse matrix abilities 
> than in its parallelization, though parallelization is certainly interesting 
> later....
> 
> What is the best way to run petsc sequentially?
> 
> 1) MatSetType the matrix to MATSEQSBAIJ, e.g. -> expects an MPI communicator of 
> size 1.
> 2) MatSetSizes(matrix, n, n, n, n) does not work.

    This should certainly work on one process

> 3) MatCreate not with PETSC_COMM_WORLD but with a communicator of size 1. 
> Where do I get it from? (probably MPI_Comm_create and friends)

  Just use PETSC_COMM_SELF
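
  For example, a minimal sketch of creating a purely sequential matrix on each
  rank (error checking omitted for brevity; MATSEQAIJ and the sizes used here
  are just for illustration, the same pattern applies to MATSEQSBAIJ):

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat         A;
    PetscInt    n = 10, i;
    PetscScalar one = 1.0;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* PETSC_COMM_SELF is the built-in size-1 communicator: every MPI rank
       builds its own independent sequential matrix. */
    MatCreate(PETSC_COMM_SELF, &A);
    /* On a communicator of size 1 the local and global sizes coincide,
       so passing n for all four size arguments is fine. */
    MatSetSizes(A, n, n, n, n);
    MatSetType(A, MATSEQAIJ);
    MatSetUp(A);

    /* Insert a simple diagonal, then assemble as usual. */
    for (i = 0; i < n; i++) MatSetValues(A, 1, &i, 1, &i, &one, INSERT_VALUES);
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    MatDestroy(&A);
    PetscFinalize();
    return 0;
  }

  This runs under any number of MPI ranks; since the matrix lives on
  PETSC_COMM_SELF, each rank simply builds its own copy with no communication.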

> 
> Is there another more petsc like way?
> 
> Thanks,
> Florian
