I might look at these options:
*preempt_reorder_count=#*
Specify how many attempts should be made at reordering preemptable
jobs to minimize the count of jobs preempted. The default value is
1. High values may adversely impact performance. The logic to
support this option is only available in the select/cons_res and
select/cons_tres plugins.
*preempt_strict_order*
If set, then execute extra logic in an attempt to preempt only the
lowest priority jobs. It may be desirable to set this configuration
parameter when there are multiple priorities of preemptable jobs.
The logic to support this option is only available in the
select/cons_res and select/cons_tres plugins.
*preempt_youngest_first*
If set, then the preemption sorting algorithm will be changed to
sort by the job start times to favor preempting younger jobs over
older. (Requires preempt/partition_prio or preempt/qos plugins.)
In general, Slurm will try not to preempt if it can avoid it. These
options can help guide that behavior a bit more intelligently.
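For reference, the first two options go in the *SchedulerParameters* list in slurm.conf. A minimal sketch (the specific values here are illustrative, not a recommendation for your cluster):

```
# slurm.conf -- illustrative sketch only
# preempt_reorder_count and preempt_strict_order require
# select/cons_res or select/cons_tres.
SelectType=select/cons_tres
SchedulerParameters=preempt_reorder_count=2,preempt_strict_order

# preempt_youngest_first also goes in SchedulerParameters, but needs
# the preempt/partition_prio or preempt/qos plugin to be in effect:
PreemptType=preempt/partition_prio
```

After changing these, the slurmctld configuration needs to be re-read (e.g. via scontrol reconfigure) for the new parameters to take effect.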
-Paul Edmon-
On 5/29/19 8:53 AM, Mike Harvey wrote:
I am relatively new to Slurm, and am having difficulty configuring our
scheduling to behave as we'd like.
Partition based job preemption is configured as follows:
PreemptType=preempt/partition_prio
PreemptMode=suspend,gang
This has been working fine. However, we recently added an older server
to the cluster, and tried to give it a higher weight than our other
servers, with the intent that it would only be used when resources on
other lower-weighted nodes are unavailable. What seems to happen,
though, is that a newly submitted job will preempt/suspend a
lower-priority job on a lower-weighted node before running on the
higher-weighted node. What we would like is for preemption to only
occur when no resources are available.
Is this possible?
Thanks,
Mike Harvey
Systems Administrator
Bucknell University
har...@bucknell.edu