You may be able to mimic what you want by not using PETSC_DECIDE but instead 
computing up front how many rows of each matrix you want stored on each MPI 
process. You can use 0 on certain MPI processes for certain matrices if you 
don't want any rows of that particular matrix stored on that particular MPI 
process.
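
For example, here is a rough, untested sketch of that approach for the 4x4 
blocks in your code below, assuming 2 MPI ranks. The program name and the 
nlocA/nlocB/Nglob/nblk variables are only illustrative, and the particular 
split (rank 0 owning no rows of B) is just an example of passing 0 on a rank:

program nest_layout
#include <petsc/finclude/petscmat.h>
  use petscmat
  implicit none

  Mat            :: A, B, C
  PetscInt       :: nlocA, nlocB, Nglob, nblk
  PetscMPIInt    :: rank
  PetscScalar    :: diagA, diagB
  PetscErrorCode :: ierr

  Call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
  Call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)

  Nglob = 4          ! global size of each diagonal block
  nblk  = 2          ! 2x2 arrangement of blocks in the nest

  ! Example layout: rank 0 owns rows 0-1 of A and no rows of B,
  ! rank 1 owns rows 2-3 of A and all 4 rows of B.
  ! Pass 0 on any rank that should own nothing of a given block.
  if (rank == 0) then
     nlocA = 2
     nlocB = 0
  else
     nlocA = 2
     nlocB = 4
  end if

  diagA = 2.0
  diagB = 1.0

  ! Explicit local sizes replace PETSC_DECIDE, so the row distribution of
  ! each block (and hence of the nest) is exactly what you computed above.
  Call MatCreateConstantDiagonal(PETSC_COMM_WORLD, nlocA, nlocA, Nglob, Nglob, diagA, A, ierr)
  Call MatCreateConstantDiagonal(PETSC_COMM_WORLD, nlocB, nlocB, Nglob, Nglob, diagB, B, ierr)
  Call MatCreateNest(PETSC_COMM_WORLD, nblk, PETSC_NULL_INTEGER, nblk, PETSC_NULL_INTEGER, &
                     (/A, PETSC_NULL_MAT, PETSC_NULL_MAT, B/), C, ierr)

  Call MatDestroy(A, ierr)
  Call MatDestroy(B, ierr)
  Call MatDestroy(C, ierr)
  Call PetscFinalize(ierr)
end program nest_layout

The same idea works for any number of ranks: compute the per-rank local row 
count for every block up front, then pass those counts where PETSC_DECIDE was 
used before.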

  Barry


> On Mar 17, 2023, at 10:10 AM, Berger Clement <[email protected]> 
> wrote:
> 
> Dear all,
> 
> I want to construct a matrix by blocks, each block having a different size 
> and being distributed over multiple processes. If I am not mistaken, the 
> right way to do so is by using the MATNEST type. However, the following code
> 
> Call MatCreateConstantDiagonal(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, &
>                                4, 4, 2.0E0_wp, A, ierr)
> Call MatCreateConstantDiagonal(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, &
>                                4, 4, 1.0E0_wp, B, ierr)
> Call MatCreateNest(PETSC_COMM_WORLD, 2, PETSC_NULL_INTEGER, 2, PETSC_NULL_INTEGER, &
>                    (/A, PETSC_NULL_MAT, PETSC_NULL_MAT, B/), C, ierr)
> 
> does not generate the same matrix depending on the number of processes. It 
> seems that the ownership starts with everything owned by the first process 
> for both A and B, then moves on to the second process, and so on (I hope I 
> am being clear).
> 
> Is it possible to change that?
> 
> Note that I am coding in Fortran, if that has any consequence.
> 
> Thank you,
> 
> Sincerely,
> 
> -- 
> Clément BERGER
> ENS de Lyon
