On Jan 10, 2014, at 10:04 AM, George Bosilca <bosi...@icl.utk.edu> wrote:

>> MPI_Comm comm;
>> // comm is set up as an hcoll-enabled communicator
>> if (rank == x) {
>>   MPI_Send(..., y, tag, MPI_COMM_WORLD);
>>   MPI_Comm_free(&comm);
>> } else if (rank == y) {
>>   MPI_Comm_free(&comm);
>>   MPI_Recv(..., x, tag, MPI_COMM_WORLD);
>> }
> 
> Based on today’s MPI standard, this code is incorrect: MPI_Comm_free is 
> collective, and you can’t have matching blocking communications crossing a 
> collective line.


I don't know exactly what you mean by "crossing a collective line", but 
communicating on one communicator while a collective is in progress on 
another communicator is certainly valid.  I.e., this is valid (and won't 
deadlock):

-----
MPI_Comm comm;
// comm is set up as an hcoll-enabled communicator
MPI_Barrier(comm);
if (rank == x) {
  MPI_Send(..., y, tag, MPI_COMM_WORLD);
  MPI_Comm_free(&comm);
} else if (rank == y) {
  MPI_Recv(..., x, tag, MPI_COMM_WORLD);
  MPI_Comm_free(&comm);
} else {
  MPI_Comm_free(&comm);
}
-----

My point (which I guess I didn't make well) is that COMM_FREE is collective, 
which we all know does not necessarily mean synchronizing.  If hcoll teardown 
adds synchronization during COMM_FREE, then codes like the one quoted at the 
top could deadlock (if OMPI doesn't already synchronize during COMM_FREE, 
which is why I asked the question).
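
To make the hazard concrete, here's a minimal, self-contained sketch (my own 
repro, not anything hcoll-specific): it uses MPI_Comm_dup as a stand-in for 
the hcoll-enabled communicator, ranks 0/1 as x/y, and a message assumed to be 
large enough that MPI_Send can't complete eagerly.  If COMM_FREE synchronizes, 
rank 1 blocks in MPI_Comm_free waiting for rank 0, rank 0 blocks in MPI_Send 
waiting for rank 1's MPI_Recv, and neither makes progress.

-----
/* Hypothetical repro sketch -- not hcoll-specific.  MPI_Comm_dup stands in
 * for the hcoll-enabled communicator; ranks 0 and 1 play x and y; the
 * message is assumed large enough that MPI_Send takes the rendezvous path
 * (i.e., blocks until the matching receive is posted). */
#include <mpi.h>

int main(int argc, char *argv[])
{
    static char buf[1 << 20];   /* 1 MiB: assumed to be past the eager limit */
    int rank;
    MPI_Comm comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_dup(MPI_COMM_WORLD, &comm);  /* stand-in for hcoll-enabled comm */

    if (rank == 0) {
        /* Blocks until rank 1 posts the matching receive. */
        MPI_Send(buf, (int) sizeof(buf), MPI_CHAR, 1, 123, MPI_COMM_WORLD);
        MPI_Comm_free(&comm);
    } else if (rank == 1) {
        /* If COMM_FREE synchronizes, this waits for rank 0 (stuck in
         * MPI_Send above): deadlock.  If COMM_FREE completes locally,
         * everything finishes. */
        MPI_Comm_free(&comm);
        MPI_Recv(buf, (int) sizeof(buf), MPI_CHAR, 0, 123, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else {
        /* All other ranks just free their handle. */
        MPI_Comm_free(&comm);
    }

    MPI_Finalize();
    return 0;
}
-----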

Sorry if I just muddled the conversation...  :-\

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
