Re: [Libmesh-users] enforce to partition mesh on each processor locally

2018-03-13 Thread Roy Stogner


On Tue, 13 Mar 2018, Vasileios Vavourakis wrote:


oh, I thought that the default setting in the Parallel::Communicator (i.e. see
the default constructor:
http://libmesh.github.io/doxygen/classlibMesh_1_1Parallel_1_1Communicator.html#a697f8e599333609a45761828e14659c1)
for the "communicator" is MPI_COMM_SELF


It is, but init.comm() isn't a default-constructor communicator, it's
a default-for-most-users communicator.  Typically when someone runs on
N processors it's because they want everything parallelized between N
processors, so we make it easy to get an All-N-Processors
communicator.
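[For anyone reading along in the archives: a minimal sketch of the distinction, assuming a libMesh build configured with MPI; the variable names are illustrative, not from the original code.]

```cpp
#include "libmesh/libmesh.h"
#include "libmesh/parallel.h"

int main (int argc, char ** argv)
{
  libMesh::LibMeshInit init (argc, argv);

  // init.comm() wraps the communicator holding all N launched ranks
  // (MPI_COMM_WORLD by default), so its size equals the job size.
  const unsigned int world_size = init.comm().size();

  // A Communicator wrapped around MPI_COMM_SELF sees only one rank,
  // which is what a default-constructed Communicator also gives you.
  libMesh::Parallel::Communicator self_comm (MPI_COMM_SELF);
  const unsigned int self_size = self_comm.size(); // always 1

  libMesh::out << "world: " << world_size
               << ", self: " << self_size << std::endl;
  return 0;
}
```

Run under e.g. "mpiexec -np 4": world_size should report 4 while self_size stays 1 on every rank.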


unless it gets initialised to MPI_COMM_WORLD somewhere in the
LibMeshInit - apologies, I'm not always successful at finding
details in the library documentation :(


That's a very polite way to say "Why don't you even have a single line
of documentation for LibMeshInit::comm()?"  I would have been tempted
to phrase that with much more cursing.

I'll put together a PR with better comments now, so it'll get into the
online Doxygen eventually.


I will give it a spin and update libmesh-users asap...


Thanks,
---
Roy
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users


Re: [Libmesh-users] enforce to partition mesh on each processor locally

2018-03-13 Thread Vasileios Vavourakis
thanks Roy for the quick turnaround.

my comments below to your reply…

> On 13 Mar 2018, at 21:02, Roy Stogner  wrote:
> 
> 
> On Tue, 13 Mar 2018, Vasileios Vavourakis wrote:
> 
>> I would like to run (in parallel) a piece of libMesh code where, on each
>> processor, a libMesh::Mesh is initialised and stored locally, and the
>> EquationSystems is then initialised accordingly, independently on each
>> processor.
>> 
>> The FE mesh may be the same for all processors, or different, but in
>> principle the information needs to be stored independently (and
>> partitioned into a single subdomain/partition) without any problems when
>> running in parallel.
>> 
>> I have tried to enforce the partitioning of the mesh via, e.g.:
>> 
>>   LibMeshInit init(argc, argv);
>>   Mesh msh(init.comm(), 1);
>>   //...import the mesh...
>>   msh.prepare_for_use();
>>   msh.partition(1);
>>   //
>>   EquationSystems es(msh);
>>   // ...add a system here...
>>   es.init();
>>   // ...solve the system, etc...
>> 
>> However, I noticed that if I run the code with as many processes as there
>> are elements in the mesh, then it is OK; otherwise it freezes
>> (specifically at a point where I call "update_global_solution").
>> Does this make sense to you?
> 
> Yes, I'm afraid so.  Although you might have told the mesh to give
> every element to processor 0, when you created the mesh with
> init.comm(), you made every processor on that communicator (i.e. every
> processor in this case, since init defaults to MPI_COMM_WORLD) an
> owner of that mesh.  When you do collective operations on a mesh, even
> processors who don't own any elements on the mesh must be involved if
> they're part of that mesh's communicator.

oh, I thought that the default setting in the Parallel::Communicator (i.e. see
the default constructor:
http://libmesh.github.io/doxygen/classlibMesh_1_1Parallel_1_1Communicator.html#a697f8e599333609a45761828e14659c1)
for the "communicator" is MPI_COMM_SELF

unless it gets initialised to MPI_COMM_WORLD somewhere in the LibMeshInit -
apologies, I'm not always successful at finding details in the library
documentation :(


> 
>> Any suggestions / tips?
>> 
>> I suspect there might be an easy way to do this, but I wanted to have
>> your opinion about it.
> 
> I *think* the thing to do would be to create a new Parallel::Communicator
> wrapper around MPI_COMM_SELF, then use that to create a local Mesh.

done; will create a separate Communicator object, e.g.:
    LibMeshInit init(argc, argv);
    Communicator lcomm(MPI_COMM_SELF);
    Mesh msh(lcomm, 1);
    //...import the mesh...
    msh.prepare_for_use();
    //
    EquationSystems es(msh);
    // ...add a system here...
    es.init();
    // ...solve the system, etc...

I will give it a spin and update libmesh-users asap...

cheers,
Vasileios


> 
> I've never done that before, though.  If you do it and it works, we'd
> love to have a unit test to make sure it *stays* working through
> future library updates.  If you do it and it doesn't work, let us know
> and (especially if you can set up a failing test case) I'll try to
> help figure out what's wrong.
> ---
> Roy



Re: [Libmesh-users] enforce to partition mesh on each processor locally

2018-03-13 Thread Roy Stogner


On Tue, 13 Mar 2018, Vasileios Vavourakis wrote:


I would like to run (in parallel) a piece of libMesh code where, on each
processor, a libMesh::Mesh is initialised and stored locally, and the
EquationSystems is then initialised accordingly, independently on each
processor.

The FE mesh may be the same for all processors, or different, but in
principle the information needs to be stored independently (and partitioned
into a single subdomain/partition) without any problems when running in
parallel.

I have tried to enforce the partitioning of the mesh via, e.g.:

    LibMeshInit init(argc, argv);
    Mesh msh(init.comm(), 1);
    //...import the mesh...
    msh.prepare_for_use();
    msh.partition(1);
    //
    EquationSystems es(msh);
    // ...add a system here...
    es.init();
    // ...solve the system, etc...

However, I noticed that if I run the code with as many processes as there
are elements in the mesh, then it is OK; otherwise it freezes (specifically
at a point where I call "update_global_solution").
Does this make sense to you?


Yes, I'm afraid so.  Although you might have told the mesh to give
every element to processor 0, when you created the mesh with
init.comm(), you made every processor on that communicator (i.e. every
processor in this case, since init defaults to MPI_COMM_WORLD) an
owner of that mesh.  When you do collective operations on a mesh, even
processors who don't own any elements on the mesh must be involved if
they're part of that mesh's communicator.
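[To restate the failure mode in plain MPI terms - a hedged sketch of the general pattern, not the actual libMesh internals; mpi.h and an MPI launcher are assumed:]

```cpp
#include <mpi.h>

// Every rank in a communicator must enter a collective call, or the
// ranks that did enter it block forever.  The reported "freeze" in
// update_global_solution() presumably looks like this at the MPI level.
void demo ()
{
  int rank;
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);

  int value = rank;
  if (rank == 0)
    {
      // Only rank 0 calls the collective: with more than one rank on
      // MPI_COMM_WORLD, this Allreduce never completes (it hangs).
      MPI_Allreduce (MPI_IN_PLACE, &value, 1, MPI_INT, MPI_SUM,
                     MPI_COMM_WORLD);
    }

  // On MPI_COMM_SELF the same call always completes, because each
  // rank is the only member of its own communicator.
  MPI_Allreduce (MPI_IN_PLACE, &value, 1, MPI_INT, MPI_SUM,
                 MPI_COMM_SELF);
}
```

This is why a mesh built on MPI_COMM_SELF can be partitioned and solved on each processor independently: no other rank is ever expected to participate in its collectives.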


Any suggestions / tips?

I suspect there might be an easy way to do this, but I wanted to have your
opinion about it.


I *think* the thing to do would be to create a new Parallel::Communicator
wrapper around MPI_COMM_SELF, then use that to create a local Mesh.

I've never done that before, though.  If you do it and it works, we'd
love to have a unit test to make sure it *stays* working through
future library updates.  If you do it and it doesn't work, let us know
and (especially if you can set up a failing test case) I'll try to
help figure out what's wrong.
---
Roy
