Hi Chris,

You're right that the job array size is limited to 64k in 14.03 and
before. With the upcoming 14.11 this limit is raised to 4M IIRC. You
could check this year's SLURM user group presentations
(http://slurm.schedmd.com/publications.html) where this was mentioned.

As far as I know there is no limit to the number of steps you can have
in your job script.
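For illustration, here is a minimal batch script sketch combining both: a job array (capped at 64k entries in 14.03 and earlier) where each array task launches several job steps via srun. The array range, step count, and the ./short_job executable are made up for the example:

```shell
#!/bin/bash
#SBATCH --array=0-999      # one array task per index; 14.03 caps array size at 64k
#SBATCH --ntasks=1

# Each srun invocation below is a separate job step; Slurm itself
# imposes no fixed limit on how many steps a batch script launches.
for step in 1 2 3; do
    srun --ntasks=1 ./short_job "$SLURM_ARRAY_TASK_ID" "$step"
done
```

Submitted with sbatch, this schedules the resources once per array task, so the individual steps start without going back through the scheduler.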

Regards,

        Uwe

Am 14.10.2014 03:54, schrieb Holmes, Christopher (CMU):
> Thanks Moe.
> 
> For further clarification, they are single-threaded jobs, and they may be in 
> a job array. I noticed that the job array size is limited by its data type:
> 
> ctl_conf_ptr->max_array_sz              = (uint16_t) NO_VAL;
> 
> Does anyone know if jobsteps in a batch script are limited in size? I’m 
> guessing that they are unlimited.
> 
> I’m also guessing that job arrays and job steps have faster throughput 
> capability since the resources are already scheduled. Is there much 
> difference between jobsteps and job arrays in terms of throughput or scale?
> 
> Thanks,
> --Chris
> 
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] 
> Sent: Monday, October 13, 2014 12:45 PM
> To: slurm-dev
> Subject: [slurm-dev] Re: SLURM experience with high throughout of 
> short-running jobs?
> 
> 
> That is highly configuration dependent. It is also notable that each major 
> release of Slurm over the past few years has been significantly faster than 
> the previous release. Generally you should be able to sustain a few hundred 
> jobs per second.
> 
> Quoting "Holmes, Christopher (CMU)" <[email protected]>:
> 
>> Hello,
>>
>> Can anyone provide some information or experience with using SLURM to 
>> manage a high volume of short-running jobs (<60 seconds) on a large 
>> (2000+ node) cluster? Any rough numbers on throughput (ex.
>> 1000+jobs scheduled per second) would be appreciated!
>>
>> Thanks,
>> --Chris
> 
> 
> --
> Morris "Moe" Jette
> CTO, SchedMD LLC
> 
