Hi,

We are getting our feet wet with slurm job arrays. It is nice to have that
feature.

One comment: it was a bit surprising that, for the first job of a job array,
SLURM_JOBID == SLURM_ARRAY_JOB_ID.

It also seems generally true that if one submits a 100-index job array, the
100 SLURM_JOBIDs are allocated sequentially. Can we count on that being
true?
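A toy array makes it easy to see what each task gets (a sketch; the array
size here is arbitrary and not tied to our real workload):

```shell
#!/bin/bash
#SBATCH --array=0-3

# Each task prints its own job ID alongside the shared array job ID.
# For the first array job, SLURM_JOBID equals SLURM_ARRAY_JOB_ID.
echo "task ${SLURM_ARRAY_TASK_ID}: SLURM_JOBID=${SLURM_JOBID} SLURM_ARRAY_JOB_ID=${SLURM_ARRAY_JOB_ID}"
```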

Does anyone have example sbatch scripts where the jobs' input, output,
and working directories don't trivially map to the job array indexes?
That is, imagine a job array where someone already has many directories
of various names, with input and output already prepared, whose names
are not trivial constructions of the task ID. We quickly hacked a list
of those directories into a file and then used the job array index to
pick a line, which in turn corresponds to the working directory for
that job; the input and output files are already chosen at that higher
level.
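For reference, our quick hack looks roughly like this (a sketch: dirs.txt
and run_job are placeholder names, and the directories are assumed to
contain their input/output already):

```shell
#!/bin/bash
#SBATCH --array=1-100
#SBATCH --output=slurm-%A_%a.out

# dirs.txt holds one pre-existing working directory per line.
# Pick the line matching this task's 1-based array index.
WORKDIR=$(sed -n "${SLURM_ARRAY_TASK_ID}p" dirs.txt)
cd "$WORKDIR" || exit 1

# Input and output were already staged in the directory at a higher
# level; run_job stands in for the actual application.
./run_job
```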

Anyway, if you have some examples I'd love to see them.

Does anyone have a feel for whether, or to what extent, job arrays can be
used to increase the number of pending jobs slurmctld can handle before
message timeouts are hit?

Thanks in advance,
Chris
