A great deal depends upon your hardware and configuration. Slurm
should be able to handle a few hundred jobs per second when tuned for
high throughput as described here:
http://slurm.schedmd.com/high_throughput.html
If it is not tuned for high throughput (for example, with verbose
logging enabled or running on a virtual machine), then the slurmctld
daemon will definitely bog down. What sort of throughput were you
seeing? Did the jobs just exit right away?
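As a rough illustration (not from the thread itself, and the exact
values are site-dependent, so treat them as a sketch to be checked
against the high-throughput guide above), a slurm.conf tuned for short
jobs tends to reduce logging and keep finished-job bookkeeping small:

```
# Illustrative slurm.conf fragment for high job throughput.
# All values here are example settings, not recommendations from this thread.

# Keep daemon logging terse; verbose logging slows slurmctld under load
SlurmctldDebug=info
SlurmdDebug=info

# Purge completed jobs from slurmctld memory quickly (seconds)
MinJobAge=10

# Allow more time for RPCs before clients retry and add load (seconds)
MessageTimeout=30

# Batch scheduling decisions rather than scheduling on every submit
SchedulerParameters=batch_sched_delay=10
```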
Moe Jette
SchedMD
Quoting Paul Edmon <[email protected]>:
So I've found that if someone submits a ton of jobs that have a
very short runtime, slurm tends to thrash as jobs are launching and
exiting pretty much constantly. Is there an easy way to enforce a
minimum runtime?
-Paul Edmon-
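One common way to enforce a minimum runtime like Paul asks about
(not something suggested in the thread itself) is a job_submit Lua
plugin, which slurmctld calls for every submission. The sketch below
assumes JobSubmitPlugins=lua is set in slurm.conf; MIN_MINUTES is a
hypothetical site policy value:

```
-- job_submit.lua: reject jobs whose time limit is below a site minimum.
-- MIN_MINUTES is an assumed policy value, not from this thread.
local MIN_MINUTES = 10

function slurm_job_submit(job_desc, part_list, submit_uid)
    -- job_desc.time_limit is in minutes; NO_VAL means no limit was given
    if job_desc.time_limit ~= slurm.NO_VAL
            and job_desc.time_limit < MIN_MINUTES then
        slurm.log_user("Minimum time limit is %d minutes", MIN_MINUTES)
        return slurm.ERROR
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    return slurm.SUCCESS
end
```

This rejects offending jobs at submit time; a gentler variant could
instead raise job_desc.time_limit to the minimum rather than erroring.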