I don't know about .deb packages, but at least in the RPMs there is a
post-install scriptlet that re-runs ldconfig to ensure the new libs are
in the ldconfig cache.
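For illustration, this is the conventional way such a scriptlet looks in a .spec file (a sketch; the exact scriptlets in the Open MPI packages may differ):

```
# Re-run ldconfig after install and after uninstall so the
# dynamic linker cache picks up (or drops) the shared libraries.
%post -p /sbin/ldconfig
%postun -p /sbin/ldconfig
```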
On 16/05/2016 18:04, Dave Love wrote:
"Rob Malpass" writes:
Almost in desperation, I cheated:
Why is that cheating? Unless
For provisioning, I personally use xCAT, which just started
supporting Docker:
http://xcat-docs.readthedocs.io/en/stable/advanced/docker/lifecycle_management.html
Together with Slurm's elastic computing feature.
This is just a guess, but is it possible that after building Slurm
against PMIx, you should build Open MPI against Slurm's PMIx
instead of using the external one directly? I would assume Slurm's
PMI server now "knows" PMIx.
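A sketch of that build order (the /opt install prefixes are assumptions; point them at wherever PMIx and Slurm actually live on your system):

```shell
# 1. Build Slurm against an external PMIx install (assumed under /opt/pmix).
cd slurm-src && ./configure --prefix=/opt/slurm --with-pmix=/opt/pmix && make install

# 2. Build Open MPI against the *same* PMIx, and with Slurm support,
#    so both sides speak the same PMIx version.
cd ompi-src && ./configure --prefix=/opt/openmpi \
    --with-slurm --with-pmix=/opt/pmix && make install

# 3. Launch through Slurm's PMIx plugin:
#    srun --mpi=pmix ./hello
```

The point of the shared `--with-pmix=/opt/pmix` is that mixing two different PMIx builds between the server (Slurm) and the client (Open MPI) is a common source of wire-up failures.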
On 06/08/2017 16:14, Charles A Taylor wrote:
Hi List,
I've encountered an issue today: building openmpi 1.6.4 from the
source rpm, on a machine which has cuda-5 (latest) installed,
resulted in openmpi always picking up the CUDA headers and libs.
I should mention that I have added the cuda libs dir to ldconfig,
a
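If the goal is to rebuild the source RPM without the CUDA support it auto-detected, something along these lines may work (a sketch; `configure_options` is a macro the Open MPI spec file has historically honored, and I'm assuming the 1.6 series accepts `--without-cuda` — check your .spec):

```shell
# Rebuild the source RPM, passing extra configure flags so the build
# does not use the CUDA headers/libs it finds via ldconfig.
rpmbuild --rebuild openmpi-1.6.4-1.src.rpm \
    --define 'configure_options --without-cuda'
```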
Hi,
I've encountered strange issues when trying to run a simple mpi job
on a single host which has IB.
The complete errors:
-> mpirun -n 1 hello
--
WARNING: Failed to open "ofa-v2-mlx4
Don't include udapl - that code may well be stale
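One way to act on that advice, either at run time or at build time (the component name assumes it is the uDAPL BTL being pulled in):

```shell
# Exclude the udapl BTL at run time so the stale uDAPL code is never opened:
mpirun --mca btl ^udapl -n 1 hello

# Or exclude it at build time so it is never compiled at all:
./configure --enable-mca-no-build=btl-udapl ...
```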
Sent from my iPhone
On Jun 23, 2013, at 3:42 AM, dani <d...@letai.org.il> wrote:
Hi,
I've encountered strange
…documentation for the current
parameters of mlx4_core? I can't locate it on the Mellanox or OFED
sites.
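Even without vendor docs, the module itself lists its parameters; standard kernel tooling (not Mellanox-specific) can show them:

```shell
# List every parameter mlx4_core accepts, with its description:
modinfo -p mlx4_core

# Show the current values of the loaded module's parameters:
grep -H . /sys/module/mlx4_core/parameters/*
```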
On 24/06/2013 18:02, Jeff Squyres (jsquyres) wrote:
On Jun 23, 2013, at 3:21 PM, dani wrote:
See this Open MPI FAQ
But that is not a requirement on ssh.
That is a requirement on the install base on the second node - both
nodes must have the same environment variables set, using the same
paths on each machine.
either install openmpi on each node, and set up
/etc/profile.d/openmpi.{c,}sh and
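A minimal /etc/profile.d/openmpi.sh along those lines (the install prefix is an assumption; use whatever prefix openmpi was actually installed under, identically on every node):

```shell
# /etc/profile.d/openmpi.sh - make the same Open MPI visible with
# the same paths on every node for all login shells.
MPI_HOME=/usr/lib64/openmpi          # assumed install prefix
export PATH="$MPI_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$MPI_HOME/lib:$LD_LIBRARY_PATH"
export MANPATH="$MPI_HOME/share/man:$MANPATH"
```

The matching openmpi.csh would set the same variables with csh syntax for csh/tcsh logins.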
I might be off-base here, but I think what was implied is that
you've built openmpi with --with-pmi without supplying the path
that holds the pmi2 libs.
First build Slurm with PMI, then build openmpi with the path to the
PMI .so in Slurm.
That might not provide pmix, but it
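A sketch of that build order (the Slurm prefix is an assumption; `--with-pmi` should point at the tree that actually contains the pmi2 headers and libraries):

```shell
# 1. Build Slurm; its PMI/PMI2 libraries land under the install prefix.
cd slurm-src && ./configure --prefix=/opt/slurm && make install

# 2. Point Open MPI's configure at the tree holding the PMI2 libs:
cd ompi-src && ./configure --with-pmi=/opt/slurm && make install

# 3. Launch with Slurm's PMI2 plugin:
#    srun --mpi=pmi2 ./hello
```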