You need a multinode allocation but want to run srun on only one node at a
time. In Slurm 2.5 this works:

david@bokis ~/slurm/work>salloc -N 2
salloc: Granted job allocation 29

SLURM->david@bokis ~/slurm/work>squeue
JOBID  PARTITION  USER   ST  GRES    CPUS  NODES  NODELIST   FEATURES  DEPENDENCY  TIME
29     belatrix   david  R   (null)  2     2      dario,joe  (null)                0:02

SLURM->david@bokis ~/slurm/work>

SLURM->david@bokis ~/slurm/work>srun -l -N 1 --ntasks-per-node=1 hostname
0: dario
SLURM->david@bokis ~/slurm/work>srun -l -N 1 hostname
0: dario
SLURM->david@bokis ~/slurm/work>srun -l -N 2 hostname
0: dario
1: joe
SLURM->david@bokis ~/slurm/work>srun -l hostname
1: joe
0: dario
SLURM->david@bokis ~/slurm/work>
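Within the same allocation you can also target a specific node by name with
srun's -w/--nodelist option; a sketch using the node names from the transcript
above:

```shell
# Inside the salloc session: run a one-task step on the second node only.
# -w/--nodelist restricts the step to the named node(s) of the allocation.
srun -l -N 1 -w joe hostname
```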



/David


On Wed, Mar 6, 2013 at 9:21 AM, Bjørn-Helge Mevik <[email protected]> wrote:

>
> Carles Fenoy <[email protected]> writes:
>
> > Can't you use directly srun?
>
> Hm...  I can't see how.  srun will start as many instances of the job
> script as the number of tasks you specify (with --ntasks, --nodes,
> and/or --ntasks-per-node).
>
> Typically, a job script will first do some file management, then
> launch the main program, and then perhaps do some cleanup afterwards.
> Thus one wouldn't want the job script itself to be run in parallel.
>
> --
> Regards,
> Bjørn-Helge Mevik, dr. scient,
> Research Computing Services, University of Oslo
>
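The job-script structure described above (serial setup, parallel launch,
serial cleanup) might be sketched as follows; the file names and program are
hypothetical, only the srun launch pattern comes from the discussion:

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1

# Serial file management: the script body runs once, not per task.
mkdir -p scratch
cp input.dat scratch/

# Only the main program is launched in parallel, via srun,
# using the tasks/nodes granted to the job.
srun ./main_program scratch/input.dat

# Serial cleanup afterwards.
rm -rf scratch
```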
