On Mon, Nov 18, 2013 at 3:24 AM, Michael Lange <[email protected]> wrote:
> Hi Matt,
>
> I think there is a misunderstanding here. I am referring to the case where
> DMPlexDistribute() is run with overlap=1 (which is not the case in SNES
> ex12) and vertices in the overlap/halo region are assigned to the wrong
> rank. This can lead to a case where a proc may own a vertex that is not in
> its original (non-overlapping) partition, although the attached cell is not
> owned and will be marked as "ghost" by DMPlexConstructGhostCells().
>
> To illustrate this, I have attached an example consisting of a unit square
> with 3 faces in each dimension and a section with only vertex dofs. If run
> with two ranks, rank 1 will own all its vertices (13 roots), whereas rank 0
> only owns vertices not in the overlap/halo of rank 1 (3 roots). My
> understanding is that, since the original partition splits the square along
> its diagonal, the vertex distribution should be 10 to 6 with the 4 diagonal
> vertices assigned to rank 1 and all other vertices assigned according to
> the original partition. Is this correct, or am I missing something here?
>
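For reference, the call sequence Michael describes, trimmed from the attached
example into a small wrapper (the wrapper name DistributeWithOverlap is mine; it
assumes the mesh already carries the "exterior_facets" label, and error handling
follows the attachment), is roughly:

#include <petscdmplex.h>

/* Sketch only: distribute with the given overlap, then add ghost cells
   next to the labelled exterior facets (the attachment does the second
   step only when -ghost is set). */
static PetscErrorCode DistributeWithOverlap(DM *dm, PetscInt overlap)
{
  DM             pardm, ghostdm;
  PetscInt       numGhostCells;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = DMPlexDistribute(*dm, NULL, overlap, NULL, &pardm);CHKERRQ(ierr);
  if (pardm) {
    /* DMPlexDistribute() may return NULL here (e.g. a uniprocessor run) */
    ierr = DMDestroy(dm);CHKERRQ(ierr);
    *dm  = pardm;
  }
  ierr = DMPlexConstructGhostCells(*dm, "exterior_facets", &numGhostCells, &ghostdm);CHKERRQ(ierr);
  ierr = DMDestroy(dm);CHKERRQ(ierr);
  *dm  = ghostdm;
  PetscFunctionReturn(0);
}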
I have simplified the example so that I can easily do things in my head.
Now we just have 2 faces per side. Here is the run with overlap = 0:
next *$:/PETSc3/petsc/petsc-dev$
/PETSc3/petsc/petsc-dev/arch-c-exodus-next/bin/mpiexec -host localhost -n 2
/PETSc3/petsc/petsc-dev/arch-c-exodus-next/lib/plex_overlap-obj/plex_overlap
-dm_view -overlap 0
Parallel Mesh in 2 dimensions:
0-cells: 6 6
1-cells: 9 9
2-cells: 4 4
Labels:
depth: 3 strata of sizes (6, 9, 4)
exterior_facets: 1 strata of sizes (4)
marker: 2 strata of sizes (9, 3)
PetscSF Object: 2 MPI processes
type: basic
sort=rank-order
[0] Number of roots=19, leaves=5, remote ranks=1
[0] 4 <- (1,6)
[0] 5 <- (1,8)
[0] 7 <- (1,9)
[0] 10 <- (1,13)
[0] 11 <- (1,17)
[1] Number of roots=19, leaves=0, remote ranks=0
Each partition gets 4 cells and 6 vertices since it is split along the
diagonal. The overlap
region contains the 3 vertices and 2 faces that lie on the diagonal, and
they are all owned by proc 1.
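As an aside, the ownership split above can be tallied programmatically. A
minimal sketch (the helper name CountPointOwnership is mine; it only assumes an
interpolated DMPlex and its point SF) buckets the SF leaves by depth stratum:

#include <petscdmplex.h>

/* Sketch only: for each depth stratum, count how many local points are
   owned here (roots of the point SF) and how many are leaves, i.e. owned
   by another process. */
static PetscErrorCode CountPointOwnership(DM dm)
{
  PetscSF            sf;
  const PetscInt    *ilocal;
  const PetscSFNode *iremote;
  PetscInt           nroots, nleaves, depth, d, l;
  PetscMPIInt        rank;
  PetscErrorCode     ierr;

  PetscFunctionBegin;
  ierr = MPI_Comm_rank(PetscObjectComm((PetscObject) dm), &rank);CHKERRQ(ierr);
  ierr = DMGetPointSF(dm, &sf);CHKERRQ(ierr);
  ierr = PetscSFGetGraph(sf, &nroots, &nleaves, &ilocal, &iremote);CHKERRQ(ierr);
  ierr = DMPlexGetDepth(dm, &depth);CHKERRQ(ierr);
  for (d = 0; d <= depth; ++d) {
    PetscInt pStart, pEnd, owned, remote = 0;

    ierr = DMPlexGetDepthStratum(dm, d, &pStart, &pEnd);CHKERRQ(ierr);
    for (l = 0; l < nleaves; ++l) {
      const PetscInt p = ilocal ? ilocal[l] : l;
      if (p >= pStart && p < pEnd) ++remote;
    }
    owned = (pEnd - pStart) - remote;
    ierr  = PetscPrintf(PETSC_COMM_SELF, "[%d] depth %d: %d owned, %d remote\n",
                        (int) rank, (int) d, (int) owned, (int) remote);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

On the overlap-0 run above this should report, for rank 0, 3 of its 6 vertices
and 2 of its 9 edges as remote, matching the five leaves in the SF dump.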
Now if we run with an overlap of 1:
next *$:/PETSc3/petsc/petsc-dev$
/PETSc3/petsc/petsc-dev/arch-c-exodus-next/bin/mpiexec -host localhost -n 2
/PETSc3/petsc/petsc-dev/arch-c-exodus-next/lib/plex_overlap-obj/plex_overlap
-dm_view -overlap 1
Parallel Mesh in 2 dimensions:
0-cells: 8 8
1-cells: 13 13
2-cells: 6 6
Labels:
depth: 3 strata of sizes (8, 13, 6)
exterior_facets: 1 strata of sizes (6)
marker: 2 strata of sizes (13, 5)
PetscSF Object: 2 MPI processes
type: basic
sort=rank-order
[0] Number of roots=27, leaves=19, remote ranks=1
[0] 0 <- (1,1)
[0] 2 <- (1,4)
[0] 6 <- (1,7)
[0] 7 <- (1,8)
[0] 8 <- (1,9)
[0] 9 <- (1,10)
[0] 10 <- (1,11)
[0] 11 <- (1,12)
[0] 12 <- (1,13)
[0] 14 <- (1,17)
[0] 15 <- (1,18)
[0] 16 <- (1,19)
[0] 17 <- (1,20)
[0] 18 <- (1,21)
[0] 19 <- (1,22)
[0] 20 <- (1,23)
[0] 21 <- (1,24)
[0] 25 <- (1,25)
[0] 26 <- (1,26)
[1] Number of roots=27, leaves=2, remote ranks=1
[1] 3 <- (0,1)
[1] 5 <- (0,5)
Each process gets 2 more cells (those whose faces lie on the diagonal), 2 more
vertices, and 4 more edges. This is correct. The two overlap cells are ghost
for proc 1, but the 4 edges and 2 vertices are owned. So you are correct: I
need to mark all those overlap points as "unownable" by the original process.
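For completeness, checking whether a particular point ended up owned locally or
remotely comes down to scanning the same SF graph; a rough sketch (the helper
name PointIsRemote is mine):

#include <petscdmplex.h>

/* Sketch only: report whether local point p is a leaf of the point SF,
   i.e. owned by another process, and if so by whom. */
static PetscErrorCode PointIsRemote(DM dm, PetscInt p, PetscBool *isRemote)
{
  PetscSF            sf;
  const PetscInt    *ilocal;
  const PetscSFNode *iremote;
  PetscInt           nroots, nleaves, l;
  PetscErrorCode     ierr;

  PetscFunctionBegin;
  *isRemote = PETSC_FALSE;
  ierr = DMGetPointSF(dm, &sf);CHKERRQ(ierr);
  ierr = PetscSFGetGraph(sf, &nroots, &nleaves, &ilocal, &iremote);CHKERRQ(ierr);
  for (l = 0; l < nleaves; ++l) {
    if ((ilocal ? ilocal[l] : l) == p) {
      *isRemote = PETSC_TRUE;
      ierr = PetscPrintf(PETSC_COMM_SELF, "point %d <- (%d,%d)\n",
                         (int) p, (int) iremote[l].rank, (int) iremote[l].index);CHKERRQ(ierr);
      break;
    }
  }
  PetscFunctionReturn(0);
}

Applied to the overlap-1 output above, points 3 and 5 on proc 1 are the only
ones that should come back as remote, i.e. the two ghost cells mentioned above.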
Thanks for finding this,
Matt
> Many thanks for all your help
> Michael
>
> On 16/11/13 13:54, Matthew Knepley wrote:
>
> On Sat, Nov 16, 2013 at 7:22 AM, Michael Lange <[email protected]> wrote:
>
>> Hi,
>>
>> I notice that, when creating the point SF for the parallel partition in
>> DMPlexDistribute, cells are assigned to procs according to the original
>> partition but vertices aren't. Was this done by design or is this a bug?
>>
>
> If this were true, there would be no communication for the P1 test of SNES
> ex12. Here is a run with -interpolate 1 and -dm_view ::ascii_info_detail:
>
> PetscSF Object: 2 MPI processes
> type: basic
> sort=rank-order
> [0] Number of roots=19, leaves=5, remote ranks=1
> [0] 4 <- (1,6)
> [0] 5 <- (1,8)
> [0] 7 <- (1,9)
> [0] 10 <- (1,13)
> [0] 11 <- (1,17)
> [1] Number of roots=19, leaves=0, remote ranks=0
> [0] Roots referenced by my leaves, by rank
> [0] 1: 5 edges
> [0] 4 <- 6
> [0] 5 <- 8
> [0] 7 <- 9
> [0] 10 <- 13
> [0] 11 <- 17
> [1] Roots referenced by my leaves, by rank
>
> So there are 3 vertices and 2 edges in the point SF.
>
> Matt
>
>
>> In case it is a bug, I have attached a patch that fixes this by using the
>> closure of the original partition instead.
>>
>> Thanks and kind regards
>> Michael
>>
>
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
static char help[] = "Small test problem: distribute a triangulated unit square and view the point SF.\n";

#include <petscdmplex.h>
#include <petscsnes.h>

#undef __FUNCT__
#define __FUNCT__ "main"
int main(int argc, char **argv)
{
  DM             dm, boundary, pardm, ghostdm;
  PetscSF        sf;
  PetscSection   section;
  MPI_Comm       comm;
  PetscMPIInt    comm_size, comm_rank;
  PetscInt       numGhostCells;
  DMLabel        lbl;
  PetscInt       dim      = 2, overlap = 1;
  PetscReal      lower[2] = {0.0, 0.0};
  PetscReal      upper[2] = {1.0, 1.0};
  PetscInt       faces[2] = {2, 2};
  PetscInt       comp[1]  = {1};
  PetscInt       dof[4]   = {1, 0, 0, 0}; /* one dof per vertex, none on edges or cells */
  PetscBool      ghost    = PETSC_FALSE;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, help);CHKERRQ(ierr);
  comm = PETSC_COMM_WORLD;
  ierr = MPI_Comm_size(comm, &comm_size);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(comm, &comm_rank);CHKERRQ(ierr);
  ierr = PetscOptionsGetInt(NULL, "-overlap", &overlap, NULL);CHKERRQ(ierr);
  ierr = PetscOptionsGetBool(NULL, "-ghost", &ghost, NULL);CHKERRQ(ierr);
  /* Build a triangulated unit square with faces[] segments per side (2x2 by default) */
  ierr = DMPlexCreate(comm, &boundary);CHKERRQ(ierr);
  ierr = DMSetType(boundary, DMPLEX);CHKERRQ(ierr);
  ierr = DMPlexSetDimension(boundary, dim-1);CHKERRQ(ierr);
  ierr = DMPlexCreateSquareBoundary(boundary, lower, upper, faces);CHKERRQ(ierr);
  ierr = DMPlexGenerate(boundary, NULL, PETSC_TRUE, &dm);CHKERRQ(ierr);
  ierr = DMDestroy(&boundary);CHKERRQ(ierr);
  /* Label exterior facets */
  ierr = DMPlexCreateLabel(dm, "exterior_facets");CHKERRQ(ierr);
  ierr = DMPlexGetLabel(dm, "exterior_facets", &lbl);CHKERRQ(ierr);
  ierr = DMPlexMarkBoundaryFaces(dm, lbl);CHKERRQ(ierr);
  /* Distribute with the requested overlap */
  ierr = DMPlexDistribute(dm, NULL, overlap, NULL, &pardm);CHKERRQ(ierr);
  if (pardm) {
    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    dm   = pardm;
  }
  if (ghost) {
    /* Add ghost cells along the labelled exterior facets */
    ierr = DMPlexConstructGhostCells(dm, "exterior_facets", &numGhostCells, &ghostdm);CHKERRQ(ierr);
    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    dm   = ghostdm;
  }
  ierr = DMSetFromOptions(dm);CHKERRQ(ierr);
  /* Create a section with vertex dofs only and attach it to the DM */
  ierr = DMPlexCreateSection(dm, dim, 1, comp, dof, 0, NULL, NULL, &section);CHKERRQ(ierr);
  ierr = DMSetDefaultSection(dm, section);CHKERRQ(ierr);
  ierr = PetscSectionDestroy(&section);CHKERRQ(ierr);
  /* View the point SF that describes the parallel overlap */
  /* ierr = DMGetDefaultSF(dm, &sf);CHKERRQ(ierr); */
  ierr = DMGetPointSF(dm, &sf);CHKERRQ(ierr);
  ierr = PetscSFView(sf, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}