Dear PETSc developers,

I am writing my own code to assemble the FEM matrix. The following is my
general framework:

DMPlexCreateGmsh();
MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
DMPlexDistribute(.., .., &dmDist);

dm = dmDist;
// This creates a separate dm on each processor (after redistribution/reordering).

MatCreate(PETSC_COMM_WORLD, &A);
// Loop over every tetrahedral element to compute the element matrix on each
// processor, so each processor ends up with its own local matrix A.
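
For concreteness, a rough sketch of the mesh part as I currently understand it
(the file name is just a placeholder, and I may have some arguments wrong):

DM dm, dmDist;
DMPlexCreateGmshFromFile(PETSC_COMM_WORLD, "mesh.msh", PETSC_TRUE, &dm);
DMPlexDistribute(dm, 0, NULL, &dmDist);
if (dmDist) { DMDestroy(&dm); dm = dmDist; }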

*My question is: it seems we should build a global matrix B (assembling all
the A's from each partition) and then pass B to KSP. KSP will then handle
the parallelization correctly, right?*

If that is right, I should define a whole-domain matrix B before the
partitioning (MatCreate(PETSC_COMM_WORLD, &B);), and then use a
local-to-global map (which PETSc function should I use? Do you have any
examples?) to add A into B at the right positions (MatSetValues)?
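
To make the question concrete, this is the kind of thing I have in mind,
assuming DMCreateMatrix and MatSetValuesLocal are the right tools and that
the dof layout (section) has already been set on the dm; Ke, idx, and nIdx
stand for the element matrix, its local dof indices, and their count:

Mat B;
DMCreateMatrix(dm, &B); /* parallel layout and local-to-global map from the dm */

/* inside the element loop */
MatSetValuesLocal(B, nIdx, idx, nIdx, idx, Ke, ADD_VALUES);

MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);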

Does that make sense?

Thanks,

Xiaodong
