Hi,

We have nodes with different amounts of memory in the same partition.
At the moment the job with the highest priority can only run on a fat
node and has to wait, because all the fat nodes are busy.
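For illustration, the job's resource request looks roughly like this
(the numbers and the program name are made up; the point is that the
memory request exceeds what a thin node offers, so only a fat node
qualifies):

  #!/bin/bash
  #SBATCH --ntasks=1
  #SBATCH --mem=90000      # more memory than any thin node has
  srun ./my_program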

Other jobs with a lower priority, which could run on the thin nodes,
don't start either.
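To sketch the layout, our partition looks something like the following
(node names, counts and memory sizes are placeholders, not our actual
slurm.conf):

  NodeName=thin[01-10]  CPUs=8  RealMemory=24576   # thin nodes
  NodeName=fat[01-02]   CPUs=8  RealMemory=98304   # fat nodes
  PartitionName=all Nodes=thin[01-10],fat[01-02] Default=YES State=UP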

Is this the intended behaviour?  If so, can SLURM be configured to
behave differently?

This is observed with version 2.2.7, so things may well have changed.

Regards

Loris

-- 
This signature is currently under construction.
