On Tue, 13 Mar 2018, Vasileios Vavourakis wrote:

I would like to run, in parallel, a piece of libMesh code where a
libMesh::Mesh is initialised and stored locally on each processor, and
the EquationSystems is subsequently initialised accordingly,
independently on each processor.

The FE mesh may be the same on all processors or different, but in
principle the information needs to be stored independently (and
partitioned into a single subdomain/partition) without any problems
when running in parallel.

I have tried to enforce the partitioning of the mesh via, e.g.:

   #include "libmesh/libmesh.h"
   #include "libmesh/mesh.h"
   #include "libmesh/equation_systems.h"
   using namespace libMesh;

   // ...inside main(int argc, char ** argv)...
   LibMeshInit init(argc, argv);
   Mesh msh(init.comm(), 1);
   // ...import the mesh...
   EquationSystems es(msh);
   // ...add a system here...
   // ...solve the system, etc...

However, I noticed that if I run the code with as many processes as
there are elements in the mesh, then it is OK; otherwise it freezes
(especially at the point where I call "update_global_solution").
Does this make sense to you?

Yes, I'm afraid so.  Although you might have told the mesh to give
every element to processor 0, when you created the mesh with
init.comm(), you made every processor on that communicator (i.e. every
processor in this case, since init defaults to MPI_COMM_WORLD) an
owner of that mesh.  When you do collective operations on a mesh, even
processors who don't own any elements on the mesh must be involved if
they're part of that mesh's communicator.
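
To see why, in plain MPI terms (just a generic sketch, nothing
libMesh-specific): a collective call only returns once every rank on
its communicator has entered it, so any rank that skips the call
leaves the others blocked.

   #include <mpi.h>
   #include <cstdio>

   int main (int argc, char ** argv)
   {
     MPI_Init (&argc, &argv);

     int rank;
     MPI_Comm_rank (MPI_COMM_WORLD, &rank);

     int local = 1, sum = 0;
     if (rank == 0)
       // Only rank 0 enters the collective; when run on more than one
       // rank the others never call it, so rank 0 blocks here forever.
       MPI_Allreduce (&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

     std::printf ("rank %d past the collective\n", rank); // rank 0 never prints

     MPI_Finalize ();
     return 0;
   }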

Any suggestions / tips?

I am not sure whether there is an easy way to do it, but I wanted to
get your opinion about it.

I *think* the thing to do would be to create a new Parallel::Communicator
wrapper around MPI_COMM_SELF, then use that to create a local Mesh.

I've never done that before, though.  If you do it and it works, we'd
love to have a unit test to make sure it *stays* working through
future library updates.  If you do it and it doesn't work, let us know
and (especially if you can set up a failing test case) I'll try to
help figure out what's wrong.
