I've got a cluster with about 39 nodes, each with 8 to 12 cores. When I
submit a job array of, say, 15,000 elements, about 300 of those jobs start
up across the cluster. Once that first batch completes, though, only one
node's worth of jobs (say 8) runs at a time from then on, always on the
same node, and I don't see the other nodes getting used at all. Would
anyone have an idea why my other nodes stop getting jobs placed on them?
Here are some pertinent settings from my slurm.conf (my submission command
and a few checks I've run follow below):
FastSchedule=0
SchedulerType=sched/builtin
SelectType=select/serial
SchedulerParameters=default_queue_depth=300
JobCompType=jobcomp/none
JobAcctGatherType=jobacct_gather/none
MaxJobCount=50000
TaskPlugin=task/none
InactiveLimit=0
KillWait=30
MinJobAge=2
SlurmctldTimeout=120
SlurmdTimeout=300
WaitTime=0
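
For reference, I'm submitting the array roughly like this (the script name
here is just a placeholder for my real job script):

    sbatch --array=1-15000 array_job.sh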
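
While it's stuck in the one-node-only state, I've been checking the queue
and node state with the commands below (the node name is a placeholder);
I'm happy to send along the output if that would help:

    squeue -r -t RUNNING -o "%i %N"
    sinfo -N -l
    scontrol show node node01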
Thanks,
Tony