Hi, I'm not sure I understand the problem, but you can specify -N (--nodes),
task counts, and so on for each srun. That way you can control how many nodes
and tasks each srun gets:
srun -N 1 --gres=gpu:1 ...
srun -N 1 --gres=gpu:1 ...
from your original example should work.
-Doug
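A minimal sketch of how those per-srun node counts might be combined inside one batch allocation (the node count, program names, and use of --exclusive are assumptions, not taken from the original thread):

```shell
#!/bin/bash
#SBATCH -N 2                 # hypothetical: allocate two nodes for the job
#SBATCH --gres=gpu:2

# Each srun is limited to one node and one GPU; "--exclusive" keeps the
# job steps from sharing the same resources, "&" runs them concurrently,
# and "wait" keeps the batch script alive until both steps finish.
srun -N 1 -n 1 --gres=gpu:1 --exclusive ./task_a &
srun -N 1 -n 1 --gres=gpu:1 --exclusive ./task_b &
wait
```

With this layout each srun launches on its own node instead of every srun running across the whole allocation.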
Could anyone please help with how to achieve this very basic kind of job
distribution? This problem has not been solved yet.
On Fri, Oct 2, 2015 at 12:49 PM, John Hearns wrote:
> I stand corrected.
>
> I find myself in a maze of twisty little passages, all alike.
>
> All the examples for SBATCH (in the SLURM manual) use 'srun' for execution
> of runs. There are lots of other websites which give SBATCH examples, and
> all of them use srun, unless using some version of MPI.
>
> I wouldn't do an srun in the middle of a batch job… Why not just submit
> four separate batch jobs?
>
> Or you could use a small job array: http://slurm.schedmd.com/job_array.html
>
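The job array suggested above replaces several near-identical submissions with one script; a minimal sketch (the array range, program name, and input naming are hypothetical):

```shell
#!/bin/bash
#SBATCH --array=0-3          # hypothetical: four independent array tasks
#SBATCH -N 1
#SBATCH --gres=gpu:1

# Each array task gets its own one-node allocation; SLURM sets
# SLURM_ARRAY_TASK_ID, which selects that task's input file here.
srun ./my_program input_${SLURM_ARRAY_TASK_ID}
```

Each array element is scheduled independently, so the tasks spread across nodes without any srun bookkeeping inside the script.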
So far I have tried SRUN, SBATCH, and SALLOC, and thought SBATCH would do
what I am looking for. However, SBATCH starts by assigning the requested
resource configuration, but then runs every srun command on every node. For
instance, if my script looks like:
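A minimal sketch of the kind of script being described, with hypothetical program names (the original script did not survive in the thread):

```shell
#!/bin/bash
#SBATCH -N 2                 # hypothetical: request two nodes

# With no per-step node count, each srun below launches across the
# whole two-node allocation rather than on a single node.
srun --gres=gpu:1 ./job_a
srun --gres=gpu:1 ./job_b
```

This reproduces the behavior complained about: both job steps run on every allocated node instead of being distributed one per node.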