Hi,
It seems to be impossible to cancel buffered sends with pml/cm.
On the one hand, pml/cm completes the buffered send immediately
(MCA_PML_CM_HVY_SEND_REQUEST_START):
if(OMPI_SUCCESS == ret && \
Hi,
We get occasional hangs with MTL/MXM during finalize, because a global
synchronization is needed before calling del_procs.
E.g., rank A may call del_procs() and disconnect from rank B while rank B is
still working.
What do you think about adding an MPI barrier on COMM_WORLD before calling
del_procs?
-
From: Nathan Hjelm [mailto:hje...@lanl.gov]
Sent: Monday, July 21, 2014 8:01 PM
To: Open MPI Developers
Cc: Yossi Etigin
Subject: Re: [OMPI devel] barrier before calling del_procs
I should add that it is an rte barrier and not an MPI barrier for technical
reasons.
-Nathan
On Mon, Jul 21,
infrastructure. Thus, we need to rely on an rte_barrier not because it
guarantees the correctness of the code, but because it gives all processes
enough time to flush all HPC traffic.
George.
On Mon, Jul 21, 2014 at 1:10 PM, Yossi Etigin <yos...@mellanox.com>