Hi,

On 25.10.2016 at 13:20, Patrice Peterson wrote:
> is there a built-in way to queue LoadLeveler-like job steps in SLURM?
> Something like this:
> 
>     #!/bin/bash
>     #SBATCH --ntasks=1
>     echo "prepping data, simple stuff"
>     #SBATCH --- END STEP
>     
>     #SBATCH --ntasks=4
>     echo "main run, needs lots of resources"
>     srun main_program
>     #SBATCH --- END STEP
>     
>     #SBATCH --ntasks=1
>     echo "cleaning up data, simple stuff"
> 
> which would then submit three more-or-less "separate" jobs with
> dependencies on each other?

Hmm, interesting question!
It made me rethink how dependencies are modeled here...
And I found that
https://hpc.nih.gov/docs/job_dependencies.html
gives a better overview than the Slurm docs -- but perhaps you already know all of that?
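The usual workaround I'm aware of is to split the steps into separate batch scripts and chain them at submit time. A minimal sketch, assuming hypothetical scripts prep.sh, main.sh and cleanup.sh exist (requires a running Slurm cluster):

```shell
#!/bin/bash
# Chain three separate jobs with dependencies.
# --parsable makes sbatch print only the job ID, so it can be captured.

# Step 1: lightweight data preparation.
prep_id=$(sbatch --parsable --ntasks=1 prep.sh)

# Step 2: main run, only starts if the prep job exited cleanly.
main_id=$(sbatch --parsable --ntasks=4 --dependency=afterok:${prep_id} main.sh)

# Step 3: cleanup runs whether the main job succeeded or not (afterany).
sbatch --ntasks=1 --dependency=afterany:${main_id} cleanup.sh
```

Note the afterok vs. afterany distinction: using afterany for the cleanup job makes it run even after a failed main run, which is exactly the dedicated-cleanup-job behavior mentioned below.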

What I'm even less sure about is whether the subtleties of (not) using
per-job prolog/epilog scripts are something to consider too... e.g. a
possible requeue after a failed prolog, and the difference between a
dedicated cleanup job and an epilog that doesn't care too much about errors.

Not really an answer, just an idea I think might be related.
If not: sorry for the noise.

Regards,
Benjamin
-- 
FSU Jena | JULIELab.de/Staff/Benjamin+Redling.html
vox: +49 3641 9 44323 | fax: +49 3641 9 44321
