Hi,

Oh, that's great news!

In our case, we have our own home-made file format, invariant to the number of processes (thanks to MPI_File_set_view), that uses native collective, asynchronous MPI I/O calls for unstructured hybrid meshes and fields.
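
For the record, the read pattern is roughly the one below (a minimal sketch
only; the function name, the header-derived offset and the use of MPI_INT
are made up for the example):

    #include <mpi.h>

    /* Collective read of this rank's slice of the connectivity section.
       myByteOffset and localCount come from our own file header. */
    static void read_connectivity(MPI_Comm comm, const char *fname,
                                  MPI_Offset myByteOffset,
                                  int localCount, int *localConn)
    {
      MPI_File fh;

      MPI_File_open(comm, fname, MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
      /* The view is what makes the format invariant to the number of
         processes: each rank only sees the bytes it owns. */
      MPI_File_set_view(fh, myByteOffset, MPI_INT, MPI_INT, "native",
                        MPI_INFO_NULL);
      /* Collective read; MPI_File_read_all_begin/end or MPI_File_iread_all
         would be the asynchronous (split-collective) variants. */
      MPI_File_read_all(fh, localConn, localCount, MPI_INT, MPI_STATUS_IGNORE);
      MPI_File_close(&fh);
    }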

So our need is not for reading meshes, but only for filling a hybrid DMPlex with DMPlexBuildFromCellListParallel (or something else to come?) in order to exploit the PETSc partitioners and parallel overlap computation...
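
Concretely, once the hybrid DMPlex is filled, what we would like to run is
roughly the following (a sketch only; dm is assumed to be the DMPlex already
built from our in-memory cell list, and ParMETIS is just an example choice
of partitioner):

    PetscPartitioner part;
    PetscSF          sfDist, sfOvl;
    DM               dmDist = NULL, dmOvl = NULL;
    PetscErrorCode   ierr;

    ierr = DMPlexGetPartitioner(dm, &part);CHKERRQ(ierr);
    ierr = PetscPartitionerSetType(part, PETSCPARTITIONERPARMETIS);CHKERRQ(ierr);
    /* Repartition the mesh (no overlap requested here) */
    ierr = DMPlexDistribute(dm, 0, &sfDist, &dmDist);CHKERRQ(ierr);
    if (dmDist) {
      /* Compute one layer of overlap in parallel */
      ierr = DMPlexDistributeOverlap(dmDist, 1, &sfOvl, &dmOvl);CHKERRQ(ierr);
    }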

Thanks for the follow-up! :)

Eric


On 2021-09-22 7:20 a.m., Matthew Knepley wrote:
On Wed, Sep 22, 2021 at 3:04 AM Karin&NiKo <[email protected]> wrote:

    Dear Matthew,

    This is great news!
    For my part, I would be mostly interested in the parallel input
    interface. Sorry for that...
    Indeed, in our application, we already have a parallel mesh data
    structure that supports hybrid meshes with parallel I/O and
    distribution (based on the MED format). We would like to use a
    DMPlex to perform parallel mesh adaptation.
    As a matter of fact, all our meshes are in the MED format. We
    could also contribute to extending the DMPlex interface to MED
    (if you consider it could be useful).


An MED interface does exist. I stopped using it for two reasons:

  1) The code was not portable and the build was failing on different architectures. I had to manually fix it.

  2) The boundary markers did not provide global information, so parallel reading was much harder.

Feel free to update my MED reader to a better design.

  Thanks,

     Matt

    Best regards,
    Nicolas


    On Tue, Sep 21, 2021 at 21:56, Matthew Knepley <[email protected]> wrote:

        On Tue, Sep 21, 2021 at 10:31 AM Karin&NiKo
        <[email protected]> wrote:

            Dear Eric, dear Matthew,

            I share Eric's desire to be able to manipulate meshes
            composed of different types of elements in a PETSc DMPlex.
            Since this discussion, is there anything new on this
            feature for the DMPlex object, or am I missing something?


        Thanks for finding this!

        Okay, I did a rewrite of the Plex internals this summer. It
        should now be possible to interpolate a mesh with any
        number of cell types, partition it, redistribute it, and
        perform many other manipulations.
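
        As an illustration, a hybrid mesh simply shows up as cells with
        different cone sizes (and cell types); on an uninterpolated Plex
        a quick check would look something like this (sketch only; dm
        and ierr are assumed declared elsewhere):

            PetscInt cStart, cEnd, c;

            ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr);
            for (c = cStart; c < cEnd; ++c) {
              PetscInt coneSize;

              ierr = DMPlexGetConeSize(dm, c, &coneSize);CHKERRQ(ierr);
              /* e.g. 3 for a triangle, 4 for a quadrilateral in 2D */
              ierr = PetscPrintf(PETSC_COMM_SELF, "cell %D: %D vertices\n", c, coneSize);CHKERRQ(ierr);
            }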

        You can read in some formats that support hybrid meshes. If
        you let me know how you plan to read it in, we can make it work.
        Right now, I don't want to make input interfaces that no one
        will ever use. We have a project, joint with Firedrake, to
        finalize
        parallel I/O. This will make parallel reading and writing for
        checkpointing possible, supporting topology, geometry, fields and
        layouts, for many meshes in one HDF5 file. I think we will
        finish in November.
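
        From the user side, checkpointing would then look roughly like
        the sketch below (a sketch under the assumption that PETSc is
        configured with HDF5; the exact topology-format options may
        differ):

            PetscViewer viewer;
            DM          dm2;

            /* Write topology, geometry and labels to a single HDF5 file */
            ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "mesh.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
            ierr = DMView(dm, viewer);CHKERRQ(ierr);
            ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

            /* ... later, possibly on a different number of ranks ... */
            ierr = DMCreate(PETSC_COMM_WORLD, &dm2);CHKERRQ(ierr);
            ierr = DMSetType(dm2, DMPLEX);CHKERRQ(ierr);
            ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "mesh.h5", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
            ierr = DMLoad(dm2, viewer);CHKERRQ(ierr);
            ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);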

          Thanks,

             Matt

            Thanks,
            Nicolas

            On Wed, Jul 21, 2021 at 04:25, Eric Chamberland
            <[email protected]> wrote:

                Hi,

                On 2021-07-14 3:14 p.m., Matthew Knepley wrote:
                On Wed, Jul 14, 2021 at 1:25 PM Eric Chamberland
                <[email protected]> wrote:

                    Hi,

                    while playing with DMPlexBuildFromCellListParallel,
                    I noticed we have to specify "numCorners", which is
                    a fixed value and thus gives a fixed number of nodes
                    for a series of elements.

                    How can I then add, for example, triangles and
                    quadrangles into a DMPlex?


                You can't with that function. It would be much, much
                more complicated if you could, and I am not sure
                it is worth it for that function. The reason is that
                you would need index information to offset into the
                connectivity list, and that would need to be
                replicated to some extent so that all processes know what
                the others are doing. Possible, but complicated.

                Maybe I can suggest something for what you are
                trying to do?

                Yes: we are trying to partition our parallel mesh with
                PETSc functions. The mesh has been read in parallel,
                so each process owns a part of it, but we have to
                manage mixed element types.

                When we directly use ParMETIS_V3_PartMeshKway, we give
                two arrays to describe the elements, which allows mixed
                element types (see the sketch below).
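
                For example, for a local piece of mesh with one triangle
                and one quadrangle, those two arrays (this is exactly the
                offset indexing mentioned above) look like this, with
                hypothetical vertex numbers:

                    #include <parmetis.h>  /* for idx_t */

                    /* CSR-style element description as ParMETIS_V3_PartMeshKway
                       takes it: eptr[i]..eptr[i+1] delimits element i's vertices
                       in eind, so elements of different sizes can be mixed. */
                    idx_t eptr[] = {0, 3, 7};         /* 2 elements: 3 then 4 vertices */
                    idx_t eind[] = {10, 11, 12,       /* triangle   (10, 11, 12)       */
                                    11, 12, 21, 20};  /* quadrangle (11, 12, 21, 20)   */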

                So, how would I read my mixed mesh in parallel and
                give it to a PETSc DMPlex so I can use a
                PetscPartitioner with DMPlexDistribute?

                A second goal we have is to use PETSc to compute the
                overlap, which is something I can't find in ParMETIS
                (or in any other partitioning library?).

                Thanks,

                Eric



                  Thanks,

                      Matt

                    Thanks,

                    Eric

                    --
                    Eric Chamberland, ing., M. Ing
                    Research Professional
                    GIREF/Université Laval
                    (418) 656-2131 ext. 41 22 42



                --
                What most experimenters take for granted before they
                begin their experiments is infinitely more
                interesting than any results to which their
                experiments lead.
                -- Norbert Wiener

                https://www.cse.buffalo.edu/~knepley/

                --
                Eric Chamberland, ing., M. Ing
                Research Professional
                GIREF/Université Laval
                (418) 656-2131 ext. 41 22 42



        --
        What most experimenters take for granted before they begin
        their experiments is infinitely more interesting than any
        results to which their experiments lead.
        -- Norbert Wiener

        https://www.cse.buffalo.edu/~knepley/



--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

--
Eric Chamberland, ing., M. Ing
Research Professional
GIREF/Université Laval
(418) 656-2131 ext. 41 22 42
