Hello, I have the following Slurm node/partition configuration:
SelectType=select/linear
NodeName=x[0-3] Feature=GPU Weight=2
NodeName=x[4-15] Weight=1
PartitionName=graph Nodes=x[0-15]
I want to start a job on 4 nodes, with at least one GPU node allocated (assume all nodes x[0-15] are idle), e.g.:
srun -p graph -N 4 -C "GPU*1" hostname -s
slurm-2.1.15 allocates nodes x[0,4-6]. That is expected: 3 nodes with the lower Weight=1, plus 1 node with the GPU feature. After upgrading Slurm to version 2.2.0, we discovered that it unexpectedly allocates nodes x[0-3] for the same node configuration and the same job.
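To make the expectation concrete, here is a toy model (not Slurm code) of the selection we expected from 2.1.15: satisfy the feature constraint first, then fill the remaining slots with the lowest-weight idle nodes. The function and node tuples are purely illustrative.

```python
# Toy model of weight-based node selection (illustrative only, not Slurm's
# actual algorithm): pick the required GPU node(s) first, then fill the
# remaining slots with the lowest-weight idle nodes.

def select_nodes(nodes, n_total, n_gpu):
    """nodes: list of (name, weight, has_gpu) tuples; returns chosen names."""
    # Satisfy the feature constraint with the cheapest GPU nodes.
    gpu_nodes = sorted((n for n in nodes if n[2]), key=lambda n: n[1])
    chosen = [n[0] for n in gpu_nodes[:n_gpu]]
    # Fill the rest of the allocation by ascending weight.
    rest = sorted((n for n in nodes if n[0] not in chosen), key=lambda n: n[1])
    chosen += [n[0] for n in rest[: n_total - n_gpu]]
    return sorted(chosen)

# x[0-3]: Weight=2, Feature=GPU; x[4-15]: Weight=1, no GPU.
nodes = [(f"x{i}", 2 if i < 4 else 1, i < 4) for i in range(16)]
print(select_nodes(nodes, 4, 1))  # -> ['x0', 'x4', 'x5', 'x6']
```

This reproduces the x[0,4-6] allocation that 2.1.15 gave; 2.2.0's x[0-3] result looks as if weight ordering is no longer applied to the non-constrained nodes.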
Is this the correct Slurm behaviour, or is it a bug?
What part of the Slurm code may be responsible for this behaviour?

--
Best regards, Dennis.
