Thanks a lot, Stefano.
I tried DMPlexGetGatherDM and DMPlexDistributeField, and they give what we
expected. The final gathered DM is listed below: rank 0 has all the
information (which is correct) while rank 1 has nothing.
Then I tried to feed this gathered DM into adaptMMG on rank 0 only (MMG seems
to work better than ParMMG, which is why I want to try MMG first). However, it
gets stuck at collective PETSc functions in DMAdaptMetric_Mmg_Plex(),
presumably because those calls are collective over the gathered DM's
communicator and the other rank never enters them. By the way, the whole
workflow runs fine on 1 rank.
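For reference, here is a minimal sketch of the gather/field-migration pattern
I described above (the helper name GatherMeshAndField and the variables
dm/errVec are just for illustration; I assume the error field is a local Vec
laid out by the DM's local section, and error checking is trimmed):

#include <petscdmplex.h>

/* Sketch: gather the mesh onto rank 0 and migrate a field with the same SF,
   so point n of the gathered DM carries the value of the matching point of
   the distributed DM. errVec is assumed to be a local vector laid out by
   the DM's local section. */
static PetscErrorCode GatherMeshAndField(DM dm, Vec errVec, DM *gatherDM, Vec *gatherVec)
{
  PetscSF      sf;
  PetscSection sec, gatherSec;
  MPI_Comm     comm;

  PetscFunctionBeginUser;
  PetscCall(PetscObjectGetComm((PetscObject)dm, &comm));
  /* The gathered DM still lives on the original communicator; only rank 0 owns points */
  PetscCall(DMPlexGetGatherDM(dm, &sf, gatherDM));
  PetscCall(DMGetLocalSection(dm, &sec));
  PetscCall(PetscSectionCreate(comm, &gatherSec));
  PetscCall(VecCreate(comm, gatherVec));
  /* Reuse the gather SF so the field values follow the mesh points */
  PetscCall(DMPlexDistributeField(dm, sf, sec, errVec, gatherSec, *gatherVec));
  PetscCall(DMSetLocalSection(*gatherDM, gatherSec));
  PetscCall(PetscSectionDestroy(&gatherSec));
  PetscCall(PetscSFDestroy(&sf));
  PetscFunctionReturn(PETSC_SUCCESS);
}

Reusing the SF returned by DMPlexGetGatherDM in DMPlexDistributeField is what
keeps the field ordering consistent with the gathered mesh points.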

Do you have any suggestions? Should I build a truly serial DM (on a
single-rank communicator)?

Thanks a lot.
Xiaodong

DM Object: Parallel Mesh 2 MPI processes
  type: plex
Parallel Mesh in 3 dimensions:
  Number of 0-cells per rank: 56 0
  Number of 1-cells per rank: 289 0
  Number of 2-cells per rank: 452 0
  Number of 3-cells per rank: 216 0
Labels:
  depth: 4 strata with value/size (0 (56), 1 (289), 2 (452), 3 (216))
  celltype: 4 strata with value/size (0 (56), 1 (289), 3 (452), 6 (216))
  Cell Sets: 2 strata with value/size (29 (152), 30 (64))
  Face Sets: 3 strata with value/size (27 (8), 28 (40), 101 (20))
  Edge Sets: 1 strata with value/size (10 (10))
  Vertex Sets: 5 strata with value/size (27 (2), 28 (6), 29 (2), 101 (4), 106 (4))
Field Field_0:
  adjacency FEM



On Fri, Apr 18, 2025 at 10:09 AM Stefano Zampini <stefano.zamp...@gmail.com>
wrote:

> If you have a vector distributed on the original mesh, then you can use
> the SF returned by DMPlexGetGatherDM and use that in a call to
> DMPlexDistributeField
>
> On Fri, Apr 18, 2025 at 17:02 neil liu <liufi...@gmail.com> wrote:
>
>> Dear PETSc developers and users,
>>
>> I am currently exploring the integration of MMG3D with PETSc. Since MMG3D
>> supports only serial execution, I am planning to combine parallel and
>> serial computing in my workflow. Specifically, after solving the linear
>> systems in parallel using PETSc:
>>
>>    1. I intend to use DMPlexGetGatherDM to collect the entire mesh on the
>>    root process for input to MMG3D.
>>    2. Additionally, I plan to gather the error field onto the root process
>>    using VecScatter.
>>
>> However, I am concerned that the nth value in the gathered error vector
>> (step 2) may not correspond to the nth element in the gathered mesh (step
>> 1). Is this a valid concern?
>>
>> Do you have any suggestions or recommended practices for ensuring correct
>> correspondence between the solution fields and the mesh when switching from
>> parallel to serial mode?
>>
>> Thanks,
>>
>> Xiaodong
>>
>
>
> --
> Stefano
>
