Hello,

   When I submit an MPI job in this way:

     sbatch -N 5 -n 17 --ntasks-per-node=4 --partition=nodo.q ./myscript.sh

   I "think" I'm requesting 5 nodes to execute 17 processes, with 4
   tasks per node...

   My "myscript.sh" is:

     #!/bin/bash
     source /soft/modules-3.2.10/Modules/3.2.10/init/bash
     module load openmpi/1.10.2
     mpirun ./mpihello
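   To double-check what Slurm actually grants, one can print the
   allocation from inside the batch script before calling mpirun
   (SLURM_JOB_NODELIST and SLURM_TASKS_PER_NODE are standard Slurm
   output environment variables; the srun line is just one way to see
   where each rank lands):

```shell
# Print the allocation Slurm actually granted (runs inside the batch script).
echo "Nodes:          $SLURM_JOB_NODELIST"
echo "Tasks per node: $SLURM_TASKS_PER_NODE"   # e.g. "4(x2),3(x3)"
srun hostname | sort | uniq -c                 # one line per task, grouped by node
```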


   I "supposed" that the 17 processes would be allocated in this way:

     4 in the first node
     4 in the second node
     4 in the third node
     4 in the fourth node
     1 in the last node

   However, they are allocated in this other way:

     4 in the first node
     4 in the second node
     3 in the third node
     3 in the fourth node
     3 in the last node
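
   The layout I observe looks like Slurm is spreading the tasks evenly
   across the 5 nodes instead of filling each node up to
   --ntasks-per-node before moving on. A quick sketch of both layouts
   (plain Python, just to illustrate the arithmetic; the function names
   are mine, not Slurm's):

```python
# Illustrative only: compare a fill-first layout (what I expected)
# with a balanced layout (what I actually observe).

def fill_first(tasks, nodes, per_node):
    """Fill each node up to per_node tasks before using the next node."""
    layout = []
    for _ in range(nodes):
        n = min(per_node, tasks)
        layout.append(n)
        tasks -= n
    return layout

def balanced(tasks, nodes):
    """Spread tasks as evenly as possible; the first (tasks % nodes)
    nodes get one extra task."""
    base, extra = divmod(tasks, nodes)
    return [base + 1 if i < extra else base for i in range(nodes)]

print(fill_first(17, 5, 4))  # [4, 4, 4, 4, 1]  <- what I expected
print(balanced(17, 5))       # [4, 4, 3, 3, 3]  <- what I observe
```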


   My slurm.conf is:

     [...]
     SwitchType=switch/none
     TaskPlugin=task/none,task/affinity,task/cgroup
     DebugFlags=CPU_Bind,Gres

     # SCHEDULING
     FastSchedule=1
     SchedulerType=sched/backfill
     #SchedulerPort=7321
     SelectType=select/cons_res
     SelectTypeParameters=CR_Core

     # COMPUTE NODES
     NodeName=clus[01-12] CPUs=12 SocketsPerBoard=2 CoresPerSocket=6 ThreadsPerCore=1 RealMemory=7806 TmpDisk=81880
     NodeName=clus-login CPUs=4 SocketsPerBoard=2 CoresPerSocket=2 ThreadsPerCore=1 RealMemory=15886 TmpDisk=30705

     # PARTITIONS
     PartitionName=nodo.q Nodes=clus[01-12] Default=YES MaxTime=8:00:00 State=UP AllocNodes=clus-login MaxCPUsPerNode=12
     PartitionName=test.q Nodes=clus-login MaxTime=10:00 State=UP AllocNodes=clus-login MaxCPUsPerNode=12
     [...]


   Why are the tasks being distributed this way? What is wrong with my
   SLURM configuration?

   Thanks.
