Andrej,
a load average of 700 is very curious.
I guess you already made sure the load average is zero when the system is idle ...
are you running a hybrid app (e.g. MPI + OpenMP) ?
one possible explanation is that you run 48 MPI tasks and each task spawns 48
OpenMP threads, and that kills performance.
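if that is the case, a quick sanity check is to force a single OpenMP thread per
task and see whether the load and the timings become reasonable again. a rough
sketch (./mycode is just a placeholder for your binary):

  # assumption: hybrid MPI + OpenMP binary on a 48-core node
  # 48 tasks x 48 threads = 2304 runnable threads on 48 cores -> huge load
  # force one OpenMP thread per MPI task instead:
  mpirun -np 48 -x OMP_NUM_THREADS=1 ./mycode

(the -x option tells mpirun to export that environment variable to all tasks.)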
Hi Gilles,
Thanks for your reply!
> by "running on the head node", shall i understand you mean
> "running mpirun command *and* all mpi tasks on the head node" ?
Precisely.
> by "running on the compute node", shall i understand you mean
> "running mpirun on the compute node *and* all mpi tasks o
Andrej,
by "running on the head node", shall i understand you mean
"running mpirun command *and* all mpi tasks on the head node" ?
by "running on the compute node", shall i understand you mean
"running mpirun on the compute node *and* all mpi tasks on the *same*
compute node" ?
or do you mean running the mpirun command on the head node and all the mpi
tasks on the compute node(s) ?
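just to make sure we are talking about the same thing, here is roughly what I
have in mind (headnode and node01 are made-up host names, ./mycode a
placeholder for your binary):

  # case 1: mpirun and all tasks on the head node (run from the head node)
  mpirun -np 48 -host headnode ./mycode

  # case 2: mpirun and all tasks on the same compute node (run from node01)
  mpirun -np 48 -host node01 ./mycode

  # case 3: mpirun on the head node, tasks on a compute node (run from the head node)
  mpirun -np 48 -host node01 ./mycode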
Hi everyone,
We have a small cluster of 6 identical 48-core nodes for astrophysical
research. We are struggling to get openmpi to run efficiently on
the nodes. The head node is running Ubuntu and openmpi-1.6.5 on a local
disk. All worker nodes are booting from an NFS-exported root that resides
on