On Fri, 11 Mar 2016, Manav Bhatia wrote:

> While we are at it, I have one more observation:
>
> Consider the global comm with ranks {0, 1, 2, 3}.
>
> Then consider the intended split {0} | {1, 2, 3}: rank 0 requests
> the split with a valid color, while ranks 1, 2, and 3 pass
> MPI_UNDEFINED. The resulting communicator on the latter set will
> have the value MPI_COMM_NULL.
>
> The method Communicator::assign() compares the value of the new
> comm against MPI_COMM_NULL. If a valid comm is found, the rank and
> size are set consistently; otherwise they default to 0 and 1. So,
> in the simple case above, calls to comm.rank() and comm.size() on
> the excluded ranks will always return 0 and 1 for the new subcomm.

Whereas 0 and 0 is what we really want?

> Of course, the user can explicitly check comm.get() == MPI_COMM_NULL
> to see whether a communicator is valid. But I am not sure this is
> the intended behavior. In any case, I wanted to point it out.

Huh.  No, this isn't what was intended.  I'd just thought of
MPI_COMM_NULL as an "unused communicator" placeholder, and hadn't
worried about keeping those placeholders consistent.  I wonder whether
it would have made more sense to use MPI_COMM_SELF as the default...
but at this point it's probably not a good idea to switch.
---
Roy

_______________________________________________
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users