Thank you, Gilles, for the pointer.
However, that package "openmpi-gnu-ohpc-1.10.6-23.1.x86_64.rpm" has other
dependencies from OpenHPC; basically, it is strongly tied to the whole
OpenHPC ecosystem.
I did, however, follow your suggestion and rebuilt the OpenMPI RPM package
from Red Hat, adding the "--with-tm" configure option.
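For reference, the rebuild went roughly like this (the package names and the
exact spec edit are from memory, so treat it as a sketch rather than the
literal commands):

    # Rebuild the stock RHEL7 openmpi SRPM with Torque (tm) support.
    # Assumes torque-devel (from EPEL) and rpm-build are installed.
    yumdownloader --source openmpi         # fetch openmpi-*.src.rpm
    rpm -ivh openmpi-*.src.rpm             # unpack into ~/rpmbuild
    # Edit ~/rpmbuild/SPECS/openmpi.spec and add --with-tm to the
    # %configure line, then rebuild the binary packages:
    rpmbuild -ba ~/rpmbuild/SPECS/openmpi.spec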
Anthony,
a few things ...
- Open MPI v1.10 is no longer supported
- you should at least use v2.0, preferably v2.1, or even the newly released 3.0
- if you need to run under Torque/PBS, then Open MPI should be built
with tm support (see the sketch after this list)
- openhpc.org provides Open MPI 1.10.7 with tm support
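Building from source with tm support looks something like this (the Torque
install path and the prefix below are only examples, adjust them to your
setup):

    # Configure Open MPI against the Torque (tm) libraries.
    ./configure --with-tm=/usr/local/torque --prefix=/opt/openmpi
    make -j 8 && make install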
Cheers,
This is not explained in the manual for the case where a hostfile is given
(though I suspected that was the case).
However, running one process on each node listed WAS the default behaviour
in the past. In fact, that is still the default behaviour of the old
version 1.5.4 of OpenMPI I have on an old cluster.
That is correct. If you don’t specify a slot count, we auto-discover the
number of cores on each node and set #slots to that number. If a resource
manager (RM) is involved, then we use what it gives us.
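For illustration (the hostnames and slot counts here are placeholders), a
hostfile can pin the slot count explicitly:

    # one entry per node; without "slots=" the core count is auto-detected
    node01 slots=4
    node02 slots=4

Running "mpirun -np 8 --hostfile myhosts ./a.out" against that file would
then place four processes on each of the two nodes.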
Sent from my iPad
> On Sep 26, 2017, at 8:11 PM, Anthony Thyssen wrote:
I have been having problems with OpenMPI on a new cluster of machines
using stock RHEL7 packages.
ASIDE: This will be used with Torque-PBS (from the EPEL archives), though
OpenMPI (currently) does not have the "tm" resource manager configured to
use PBS, as you will be able to see in the debug output below.
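A quick way to confirm whether a given build has tm support is to query
ompi_info, e.g.:

    # lists the tm components (ras, plm) only if built with tm support
    ompi_info | grep tm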