Carlos,
It is not clear what you want. As Matt noted, you can use the lx, ly, lz
arguments to control the sizes of the subdomains, if that is what you want.
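
For example, here is a minimal (untested) sketch, written against a recent
PETSc API (older releases spell the boundary type DMDA_BOUNDARY_NONE and do
not need DMSetUp()), that puts a 12 by 8 grid on a 3 by 2 process grid with
user-chosen subdomain sizes:

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM       da;
  /* lx/ly give the number of grid points owned by each process column/row;
     they must sum to the global sizes 12 and 8 */
  PetscInt lx[3] = {6, 3, 3};   /* uneven split in the i direction */
  PetscInt ly[2] = {4, 4};      /* even split in the j direction   */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_STAR, 12, 8, 3, 2, 1, 1, lx, ly, &da));
  PetscCall(DMSetFromOptions(da));
  PetscCall(DMSetUp(da));
  /* ... solve on the DMDA ... */
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}

Run it with exactly 6 MPI processes, since the 3 by 2 process grid is
hard-coded.
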
PETSc always uses the "natural ordering of processes" for the MPI
communicator passed into the DMDA creation routine. In 2d, MPI rank 0 gets i
coordinates 0 to m_0 - 1 and j coordinates 0 to n_0 - 1; rank 1 gets i
coordinates m_0 to m_0 + m_1 - 1 and j coordinates 0 to n_0 - 1; etc., where
m_k and n_k are the local sizes of the subdomains in the i and j directions.
I believe the users manual has a graphic describing this. Say we have 6 MPI
processes and have partitioned them for the DMDA as 3 in the i direction and
2 in the j direction; this looks like
3 4 5
0 1 2
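
You can check which piece of the global grid each rank owns with a small
sketch like the following (assuming the DMDA above is called da and lives on
PETSC_COMM_WORLD):

PetscInt    xs, ys, xm, ym;
PetscMPIInt rank;

/* print the owned index range on each rank; with the 3 by 2 layout above,
   ranks 0,1,2 report the bottom row of subdomains and 3,4,5 the top row */
PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
PetscCall(DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL));
PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD,
          "rank %d: i = %d .. %d, j = %d .. %d\n", rank,
          (int)xs, (int)(xs + xm - 1), (int)ys, (int)(ys + ym - 1)));
PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));
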
If you wish to have different MPI ranks for each subdomain, you need to write
some MPI code. You construct a new MPI communicator with the same processes as
the MPI communicator passed to the DMDA create routine, but with the other
ordering you want to use. Now you just use MPI_Comm_rank() on the new MPI
communicator to "label" each subdomain. So you could (if you wanted to) label
them as
1 3 5
0 2 4
where the positions are the same as in the picture above but the numbers are
the ranks in the new communicator.
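
For example, an (untested) sketch that relabels the 3 by 2 process grid above
column by column, using MPI_Comm_split() with the desired label as the key so
that the new communicator orders the processes by it:

MPI_Comm  labelcomm;
int       rank, label;
const int m = 3, n = 2;   /* process grid used for the DMDA */

/* natural ordering is row major: rank = j*m + i; using the column-major
   index i*n + j as the key gives the labeling shown above */
MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
MPI_Comm_split(PETSC_COMM_WORLD, 0, (rank % m) * n + rank / m, &labelcomm);
MPI_Comm_rank(labelcomm, &label);   /* the label of this process's subdomain */
/* ... use label ... */
MPI_Comm_free(&labelcomm);

The DMDA itself still lives on the original communicator with its natural
ordering; the new communicator only supplies the alternative numbering.
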
Barry
On Apr 3, 2013, at 4:24 PM, Carlos Pachajoa <cpachaj at gmail.com> wrote:
> Hello,
> I'm working on a CFD simulator that will be used
> for educational purposes.
>
> I'm using PETSc to solve a Poisson equation for the pressure, using DM and a
> 5-point stencil. I would like to extend it to work in parallel.
>
> Since the code will also be used to teach some fundamental concepts of
> parallel programming, it would be ideal if the user can set which rank deals
> with each subdomain. Out of the DM, one can obtain the corners of the region
> corresponding to a rank; however, I have not found a function to assign a
> processor to a region. I know that you can assign a set of rows to any
> processor when explicitly setting the columns of a matrix, but I wonder if
> this is also possible using DM.
>
> Thank you for your time and your hints.
>
> Best regards,
>
> Carlos