Hello,

I'm trying to use Open MPI 1.8.8 on a cluster managed by OAR; however, I'm
having some trouble with the default number of slots.

I have reserved one core on two nodes (each has 12 cores):

# cat $OAR_NODEFILE
nef097.inria.fr
nef098.inria.fr

but:
mpirun -np 2 --mca plm_rsh_agent oarsh -hostfile $OAR_NODEFILE ./NPmpi

runs only on the first node:
0: nef097
1: nef097
Now starting the main loop
  0:       1 bytes      7 times -->      0.00 Mbps in   12571.35 usec
[skip]
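
(If it helps, I believe mpirun in the 1.8 series has a --display-allocation
option that shows how it parsed the hostfile; I can post that output if
useful:

mpirun --display-allocation -np 2 --mca plm_rsh_agent oarsh -hostfile $OAR_NODEFILE ./NPmpi
)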


If I use a nodefile like this, it works:
nef097.inria.fr slots=1
nef098.inria.fr slots=1
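
As a stopgap I can generate that slots-annotated file from $OAR_NODEFILE at
job start; a minimal sketch, assuming $OAR_NODEFILE lists one line per
reserved core (the usual OAR convention):

sort $OAR_NODEFILE | uniq -c | awk '{print $2 " slots=" $1}' > ./hostfile.slots
mpirun -np 2 --mca plm_rsh_agent oarsh -hostfile ./hostfile.slots ./NPmpi

But I'd rather understand why the plain nodefile behaves differently.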

The documentation says, however, that the default number of slots is 1, and
Open MPI 1.6.4 works fine (the OS is CentOS 7, by the way).

Am I missing something?

-- 
Nicolas NICLAUSSE                          Service DREAM
INRIA Sophia Antipolis                     http://www-sop.inria.fr/
2004 route des lucioles - BP 93            Tel: (33/0) 4 92 38 76 93
06902  SOPHIA-ANTIPOLIS cedex (France)     Fax: (33/0) 4 92 38 76 02
