Hi Chris
As you said, pending prior communication is a candidate.
Another case I saw is an MPI_Finalize inside a conditional,
where the condition may or may not be met by all ranks:
if (condition) {
    MPI_Finalize();
}
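To make the failure mode concrete, here is a minimal sketch (the rank-parity condition is purely illustrative, not taken from your code): the ranks that take the branch finalize early, while the remaining ranks block in a collective that those ranks will never join.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Illustrative condition: only even-numbered ranks take this branch. */
    if (rank % 2 == 0) {
        MPI_Finalize();          /* these ranks finalize early... */
        return 0;
    }

    /* ...while the odd ranks still expect to communicate. This barrier
     * can never complete because the even ranks have already finalized,
     * so the job typically hangs (or aborts, depending on the MPI). */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```

Launched with more than one rank (e.g. mpirun -np 4 ./a.out), this should reproduce the hang.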
Regardless of the cause,
to check the ranks that reach MPI_Finalize,
did you try
Greetings,
I am using OpenMPI 1.4.3-1.1.el6 on RedHawk Linux 6.0.1 (Glacier) / Red Hat
Enterprise Linux Workstation release 6.1 (Santiago). I am currently working
through some issues that I encountered after upgrading from RedHawk 5.2 / RHEL
5.2 and OpenMPI 1.4.3-1 (openmpi-gcc_1.4.3-1). It
Hi Ralph
Thank you.
I switched back to memlock unlimited, rebooted the nodes,
and after that OpenMPI is working correctly over InfiniBand.
As for why the problem happened in the first place,
I can only think that somehow the InfiniBand kernel modules and
driver didn't like my reducing the memlock limit,
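For the record, the setting involved is the locked-memory limit in /etc/security/limits.conf (assuming the stock pam_limits location); the Open MPI documentation recommends leaving registered memory unlimited for InfiniBand:

```
* soft memlock unlimited
* hard memlock unlimited
```

InfiniBand transfers go through registered (pinned) memory, so capping memlock below what the driver needs is a known way to break the openib transport.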
John,
I don't think such a tool exists, or at least not to the extent you expect. I
know of two testing applications that might qualify (to a lesser extent) for
what you're looking for:
1. The IMB (Intel MPI Benchmarks) test suite.
2. The MPI_test_suite developed at HLRS for PACX-MPI and then for Open MPI. I
couldn't
Hi All
I'm the QA Manager at Allinea, a small company that produces the DDT
Parallel Debugger and MAP Parallel Profiler, and we spend a lot of
time manipulating MPI environments to get our debugging and profiling
to work. In particular for MAP, which is a sampling profiler,
we want to