There is a particular use-case that is not currently supported, but will be
fixed as time permits. Jobs launched by the same mpirun can currently execute
MPI_Comm_connect/accept.
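For readers unfamiliar with the feature under discussion, the connect/accept pattern looks roughly like the sketch below (server side only; error handling and the port-name exchange between server and client, e.g. via a file or ompi-server, are omitted, and the communication step is illustrative). It requires an MPI installation to compile and an mpirun launch to exercise:

```c
/* Minimal server-side sketch of the MPI_Comm_accept pattern.
 * A client would call MPI_Comm_connect() with the same port name.
 * Compile with mpicc, launch with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);

    MPI_Open_port(MPI_INFO_NULL, port);   /* obtain a port name */
    printf("port name: %s\n", port);      /* hand this to the client out of band */

    /* Block until one client connects to the port. */
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

    /* ... communicate over the 'client' intercommunicator ... */

    MPI_Comm_disconnect(&client);         /* tear down the connection */
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}
```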
> On Apr 4, 2017, at 5:33 AM, Kawashima, Takahiro wrote:
I filed a PR against v1.10.7, though v1.10.7 may not be released.
https://github.com/open-mpi/ompi/pull/3276
I'm not aware of a v2.1.x issue, sorry. Other developers may be
able to answer.
Takahiro Kawashima,
MPI development team,
Fujitsu
Bullseye!
Thank you, Takahiro, for your quick answer. Brief tests with 1.10.6 show
that this did indeed solve the problem! I will look at this in more
detail, but it looks really good now.
About MPI_Comm_accept in 2.1.x: I've seen a thread here by Adam
Sylvester, where it essentially says
Hi,
I encountered a similar problem using MPI_COMM_SPAWN last month.
Your problem may be the same.
The problem was fixed by commit 0951a34 in Open MPI master and
backported to v2.1.x and v2.0.x, but not backported to v1.8.x and
v1.10.x.
https://github.com/open-mpi/ompi/commit/0951a34
Please try the
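The MPI_COMM_SPAWN case mentioned above goes through the same dynamic-process machinery as connect/accept, which is why the same fix can cover both. A minimal parent-side sketch of that pattern (the "./worker" binary name and child count are illustrative; requires mpicc/mpirun to build and exercise):

```c
/* Minimal sketch of the MPI_Comm_spawn pattern discussed above.
 * Spawns two copies of a worker binary, then disconnects.
 * "./worker" is an illustrative name, not from the thread. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm children;
    int spawn_errs[2];

    MPI_Init(&argc, &argv);

    /* Rank 0 of MPI_COMM_SELF is the root of the spawn. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &children, spawn_errs);

    /* ... communicate with the children over the intercommunicator ... */

    MPI_Comm_disconnect(&children);   /* same teardown path as accept/connect */
    MPI_Finalize();
    return 0;
}
```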
Dear Developers,
This is an old problem, which I described in an email to the users list
in 2015, but I continue to struggle with it. In short, the MPI_Comm_accept /
MPI_Comm_disconnect combo causes any communication over the openib btl
(e.g., even a barrier) to hang after a few clients connect and