Actually, it seems --exclusive, in combination with --mem=0 or
--mem-per-cpu=0, might give you more or less what you want:

Without --exclusive:

18 (0) $ salloc --mem-per-cpu=0 --nodes=1 -A staff --time=1:0:0
salloc: Pending job allocation 148064
salloc: job 148064 queued and waiting for resources
salloc: job 148064 has been allocated resources
salloc: Granted job allocation 148064
1 (0) $ echo $SLURM_
$SLURM_CLUSTER_NAME       $SLURM_JOB_NODELIST       $SLURM_NODE_ALIASES
$SLURM_JOB_CPUS_PER_NODE  $SLURM_JOB_NUM_NODES      $SLURM_NODELIST
$SLURM_JOBID              $SLURM_JOB_PARTITION      $SLURM_SUBMIT_DIR
$SLURM_JOB_ID             $SLURM_MEM_PER_NODE       $SLURM_SUBMIT_HOST
$SLURM_JOB_NAME           $SLURM_NNODES             $SLURM_TASKS_PER_NODE
1 (0) $ echo $SLURM_JOB_CPUS_PER_NODE 
1
2 (0) $ echo $SLURM_MEM_PER_NODE 
22528
3 (0) $ echo $SLURM_TASKS_PER_NODE 
1
4 (0) $ srun hostname
compute-1-16.local
5 (0) $ srun -n8 hostname
srun: error: Unable to create job step: More processors requested than permitted
6 (0) $ export SLURM_NTASKS=8
7 (0) $ srun hostname
srun: error: Unable to create job step: More processors requested than permitted

So you are not allowed to run more than one task.

With --exclusive:

19 (0) $ salloc --mem-per-cpu=0 --nodes=1 --exclusive -A staff --time=1:0:0
salloc: Pending job allocation 148065
salloc: job 148065 queued and waiting for resources
salloc: job 148065 has been allocated resources
salloc: Granted job allocation 148065
1 (0) $ echo $SLURM_
$SLURM_CLUSTER_NAME       $SLURM_JOB_NODELIST       $SLURM_NODE_ALIASES
$SLURM_JOB_CPUS_PER_NODE  $SLURM_JOB_NUM_NODES      $SLURM_NODELIST
$SLURM_JOBID              $SLURM_JOB_PARTITION      $SLURM_SUBMIT_DIR
$SLURM_JOB_ID             $SLURM_MEM_PER_NODE       $SLURM_SUBMIT_HOST
$SLURM_JOB_NAME           $SLURM_NNODES             $SLURM_TASKS_PER_NODE
1 (0) $ echo $SLURM_JOB_CPUS_PER_NODE 
8
2 (0) $ echo $SLURM_MEM_PER_NODE 
22528
3 (0) $ echo $SLURM_TASKS_PER_NODE 
8
4 (0) $ srun hostname
compute-1-16.local
5 (0) $ srun -n$SLURM_TASKS_PER_NODE hostname
compute-1-16.local
compute-1-16.local
compute-1-16.local
compute-1-16.local
compute-1-16.local
compute-1-16.local
compute-1-16.local
compute-1-16.local
6 (0) $ srun -n8 whereami
I am on host compute-1-16.local
Allowed CPUs: 2 10
I am on CPU 10
I am on host compute-1-16.local
Allowed CPUs: 0 8
I am on CPU 8
I am on host compute-1-16.local
Allowed CPUs: 6 14
I am on CPU 6
I am on host compute-1-16.local
Allowed CPUs: 4 12
I am on CPU 4
I am on host compute-1-16.local
Allowed CPUs: 5 13
I am on CPU 13
I am on host compute-1-16.local
Allowed CPUs: 3 11
I am on CPU 11
I am on host compute-1-16.local
Allowed CPUs: 1 9
I am on CPU 9
I am on host compute-1-16.local
Allowed CPUs: 7 15
I am on CPU 15
7 (0) $ export SLURM_NTASKS=$SLURM_TASKS_PER_NODE 
8 (0) $ srun hostname
compute-1-16.local
compute-1-16.local
compute-1-16.local
compute-1-16.local
compute-1-16.local
compute-1-16.local
compute-1-16.local
compute-1-16.local

So to get one task per allocated CPU, you must either use
srun -n$SLURM_TASKS_PER_NODE or export
SLURM_NTASKS=$SLURM_TASKS_PER_NODE (if you specify --ntasks=8 explicitly
to salloc, SLURM_NTASKS will be set for you).

(The "whereami" is a small c program I wrote that tests which host and
cpu it is running on, and which cpus it is allowed to use.)
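
A minimal sketch of such a program, assuming Linux/glibc (it relies on
gethostname(), sched_getaffinity() and sched_getcpu()), could look like
this:

/* whereami.c -- minimal sketch of the program described above.
 * Compile with: gcc -o whereami whereami.c
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char host[256];
    cpu_set_t mask;

    /* Which host are we on? */
    gethostname(host, sizeof(host));
    printf("I am on host %s\n", host);

    /* Which CPUs is this task allowed to use (its affinity mask)? */
    if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
        printf("Allowed CPUs:");
        for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
            if (CPU_ISSET(cpu, &mask))
                printf(" %d", cpu);
        printf("\n");
    }

    /* Which CPU are we executing on right now? */
    printf("I am on CPU %d\n", sched_getcpu());

    return 0;
}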

This is on Slurm 15.08.12.  YMMV.

-- 
Cheers,
Bjørn-Helge Mevik, dr. scient,
Department for Research Computing, University of Oslo