Thanks, Roy, for the quick turnaround.

My comments to your reply are inline below…

> On 13 Mar 2018, at 21:02, Roy Stogner <> wrote:
> On Tue, 13 Mar 2018, Vasileios Vavourakis wrote:
>> I would like to run (in parallel mode) a piece of libMesh code where, on
>> each processor, a libMesh::Mesh is initialised and stored locally, and the
>> EquationSystems is subsequently initialised accordingly, independently on
>> each processor.
>> The FE mesh may be the same for all processors, or different, but in
>> principle the information needs to be stored independently (and partitioned
>> into a single subdomain/partition) without any problems when running in
>> parallel.
>> I have tried to enforce the partitioning of the mesh via, e.g.:
>>   LibMeshInit init(argc, argv);
>>   Mesh msh(init.comm(), 1);
>>   //...import the mesh...
>>   msh.prepare_for_use();
>>   msh.partition(1);
>>   //
>>   EquationSystems es(msh);
>>   // ...add a system here...
>>   es.init();
>>   // ...solve the system, etc...
>> However, I noticed that if I run the code with as many processes as there
>> are elements in the mesh, then it is OK; otherwise it freezes (in
>> particular at the point where I call "update_global_solution").
>> Does this make sense to you?
> Yes, I'm afraid so.  Although you might have told the mesh to give
> every element to processor 0, when you created the mesh with
> init.comm(), you made every processor on that communicator (i.e. every
> processor in this case, since init defaults to MPI_COMM_WORLD) an
> owner of that mesh.  When you do collective operations on a mesh, even
> processors who don't own any elements on the mesh must be involved if
> they're part of that mesh's communicator.
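
Right, if I understand this correctly, the freeze is the usual collective-call
mismatch; a minimal sketch of the failure mode, just my own illustration:

// Any collective on msh.comm() must be entered by every rank of that
// communicator, even by ranks that own zero elements of the mesh.
if (msh.comm().rank() == 0)
  msh.comm().barrier(); // other ranks never enter, so rank 0 hangs here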

Oh, I thought that the default for the “communicator” in the
Parallel::Communicator default constructor is MPI_COMM_SELF, unless it gets
initialised to MPI_COMM_WORLD somewhere in LibMeshInit. Apologies, I’m not
always successful at finding such details in the library documentation :(
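
To double-check that at runtime, I suppose one can simply query the size of
the communicator the mesh actually uses; a quick sketch:

// With init.comm() this prints the total number of MPI ranks;
// if the mesh really lived on MPI_COMM_SELF it would print 1.
libMesh::out << "mesh communicator size = " << msh.comm().size() << std::endl;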

>> Any suggestions / tips?
>> There might well be an easy way to do it; however, I wanted to have your
>> opinion about it first.
> I *think* the thing to do would be to create a new Parallel::Communicator
> wrapper around MPI_COMM_SELF, then use that to create a local Mesh.

Done; I will create a separate Communicator object around MPI_COMM_SELF, e.g.:

#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"
#include "libmesh/equation_systems.h"
#include "libmesh/parallel.h"

LibMeshInit init(argc, argv);
Parallel::Communicator lcomm(MPI_COMM_SELF); // rank-local communicator
Mesh msh(lcomm, 1);                          // mesh now lives on this rank only
// ...import the mesh...
EquationSystems es(msh);
// ...add a system here...
es.init();
// ...solve the system, etc...
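
For instance, each processor could then import its own serial mesh; a sketch,
where the per-rank file naming is just my own illustration:

// Hypothetical naming scheme: one mesh file per MPI rank (needs <string>).
const std::string filename =
  "local_mesh_" + std::to_string(init.comm().rank()) + ".xda";
msh.read(filename); // collective only over lcomm, i.e. this rank alone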

I will give it a spin and update libmesh-users asap...


> I've never done that before, though.  If you do it and it works, we'd
> love to have a unit test to make sure it *stays* working through
> future library updates.  If you do it and it doesn't work, let us know
> and (especially if you can set up a failing test case) I'll try to
> help figure out what's wrong.
> ---
> Roy
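
For the record, I imagine such a unit test would boil down to something like
the following assertion-level check (a sketch on my side; the generated 4x4
square and its element count are my own choice of example):

// Build a mesh on a self-communicator and verify that every rank sees a
// single-rank communicator and its own full copy of the elements.
// (build_square requires "libmesh/mesh_generation.h".)
Parallel::Communicator self(MPI_COMM_SELF);
Mesh local_mesh(self, 2);
MeshTools::Generation::build_square(local_mesh, 4, 4);
libmesh_assert_equal_to(local_mesh.comm().size(), 1);
libmesh_assert_equal_to(local_mesh.n_elem(), 16);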
