Thank you, Tobin. I was actually more interested in repartitioning after
the mesh has been dynamically changed (say, after refinement in certain
portions of the mesh).
On 03/22/2018 07:10 AM, Tobin Isaac wrote:
On March 21, 2018 11:26:35 AM MDT, Saurabh Chawdhary <[email protected]> wrote:
Hello team,
I haven't used PETSc since DMFOREST was released, but I have a question
regarding repartitioning of a DMFOREST mesh. How is the repartitioning of
the mesh over processors done after some mesh refinement is carried out? Is
it done by calling a p4est function, or is the partitioning done in PETSc?
I was using p4est (natively) a couple of years ago, and I remember that
when I tried to partition the grid I could only use the serial METIS, and
not ParMETIS, with p4est (using a native function
/p4est_connectivity_reorder/). So what I want to know is whether
DMFOREST repartitioning is done in parallel or in serial.
You can use the following workflow:
- Create or read in an unstructured hexahedral mesh as a DMPlex.
- Use ParMETIS to repartition it: there is a ParMETIS implementation of
PetscPartitioner, which creates the index sets and communication patterns for
DMPlexDistribute.
- Convert the DMPlex to DMP4est or DMP8est.
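The three steps above might be sketched as follows. This is only a sketch, not code from the thread: it assumes a PETSc build configured with p4est and ParMETIS support, the exact API signatures have shifted across PETSc releases, and "mesh.msh" is a placeholder file name.

```c
/* Sketch of the Plex -> ParMETIS -> forest workflow described above.
 * Assumes PETSc configured with --download-p4est --download-parmetis;
 * "mesh.msh" is a placeholder for a hexahedral mesh file. */
#include <petscdmplex.h>
#include <petscdmforest.h>

int main(int argc, char **argv)
{
  DM               plex, dist, forest;
  PetscPartitioner part;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* 1. Read an unstructured hexahedral mesh as a DMPlex */
  PetscCall(DMPlexCreateFromFile(PETSC_COMM_WORLD, "mesh.msh", NULL,
                                 PETSC_TRUE, &plex));

  /* 2. Repartition it with the ParMETIS PetscPartitioner */
  PetscCall(DMPlexGetPartitioner(plex, &part));
  PetscCall(PetscPartitionerSetType(part, PETSCPARTITIONERPARMETIS));
  PetscCall(DMPlexDistribute(plex, 0, NULL, &dist));
  if (dist) {                 /* distribution is a no-op on one rank */
    PetscCall(DMDestroy(&plex));
    plex = dist;
  }

  /* 3. Convert the distributed DMPlex into a forest (DMP8EST in 3D) */
  PetscCall(DMConvert(plex, DMP8EST, &forest));

  PetscCall(DMDestroy(&plex));
  PetscCall(DMDestroy(&forest));
  PetscCall(PetscFinalize());
  return 0;
}
```

For a 2D quadrilateral mesh, DMP4EST would replace DMP8EST in the conversion step.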
This does not avoid the fundamental limitations of p4est: the distributed mesh
will be redundantly serialized behind the scenes, and the coarse-mesh ordering
derived from ParMETIS will be static for the life of the forest mesh.
I am not sure I understand how the distributed mesh is redundantly
serialized in p4est. Do you mean that the partitioning is done serially?
Thank you.
Saurabh