This will be in the next major release, Slurm v13.12, due in December.

Note that this maximizes fragmentation of resources and adversely impacts resource allocation for parallel jobs, so you would only want to do this for a serial workload.
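For what it's worth, a sketch of the slurm.conf change (assuming the feature in question is the CR_LLN "least loaded nodes" option, and keeping the CR_SOCKET_MEMORY setting from the configuration quoted below):

```
# Hedged sketch -- adjust to your site's configuration.
# CR_LLN schedules jobs onto the least-loaded nodes first,
# spreading a serial workload across the partition.
SelectType           = select/cons_res
SelectTypeParameters = CR_SOCKET_MEMORY,CR_LLN

# Alternatively, enable it for a single partition only:
# PartitionName=mypart Nodes=vm[3-6] LLN=YES
```

The per-partition LLN=YES form limits the fragmentation effect to the partition that actually runs serial jobs.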

Quoting Roman Sirokov <[email protected]>:


Hello there,
How would you configure Slurm for basic load balancing? Currently we
are running a Slurm cluster of four nodes in one partition with the
following settings:

   AllocNodes=ALL AllowGroups=ALL Default=YES
   DefaultTime=NONE DisableRootJobs=NO GraceTime=0 Hidden=NO
   MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=1
   Nodes=vm[3-6]
   Priority=1 RootOnly=NO Shared=YES:4 PreemptMode=OFF
   State=UP TotalCPUs=96 TotalNodes=4 DefMemPerNode=UNLIMITED
   MaxMemPerNode=UNLIMITED


SchedulerType           = sched/backfill
SelectType              = select/cons_res
SelectTypeParameters    = CR_SOCKET_MEMORY


Submitted jobs always end up on the first node until it is fully
booked, and only then do the other nodes start receiving jobs. How
would you fill the nodes in a more uniform manner?
Many thanks in advance


Cheers,
Roman

