After revisiting `PropagatePrioProcess`, I solved this by setting `PropagatePrioProcess=1`, so that jobs are scheduled by nice value. I also believe `--priority` of sbatch is meant for the multifactor priority plugin, which would explain why a plain `--priority=NUMBER` had no effect here. Please correct me if I still understand it wrong. Thanks!
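For anyone landing on this thread later, here is a minimal sketch of the steps described above. Assumptions: your slurm.conf lives at the usual `/etc/slurm/slurm.conf`, `sleep.sh` is any test batch script, and the `squeue -o` format string is just one way to display priorities; adjust to your site.

```shell
# slurm.conf (assumed path /etc/slurm/slurm.conf) -- the two relevant lines:
#   PriorityType=priority/basic
#   PropagatePrioProcess=1
# After editing, reconfigure (some parameters may require a slurmctld restart):
scontrol reconfigure

# Submit test jobs with different nice values (a higher nice value lowers priority):
sbatch --nice=100 sleep.sh
sbatch --nice=500 sleep.sh

# Inspect the resulting priorities; %Q prints the job priority as an integer:
squeue -o "%.8i %.10Q %j %R"

# Privileged fallback from the original question: bump one job's priority by hand.
sudo scontrol update jobid=89 priority=100
```

These commands only make sense on a host with Slurm installed and a running controller, so treat this as a cluster-side recipe rather than something to run locally.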
On Thu, Feb 23, 2017 at 3:16 PM, Shenglai Li <lyell...@gmail.com> wrote:
> Hi all,
>
> I have a naive question about setting priority with sbatch
> (PriorityType=priority/basic in slurm.conf). For example:
>
>   JOBID PARTITION     NAME   USER ST  TIME NODES NODELIST(REASON)
>      84 wolverine sleep.sh ubuntu PD  0:00     1 (Resources)   # sbatch sleep.sh
>      85 wolverine sleep.sh ubuntu PD  0:00     1 (Priority)    # sbatch --priority=100 sleep.sh
>      86 wolverine sleep.sh ubuntu PD  0:00     1 (Priority)    # sbatch --nice=100 sleep.sh
>      87 wolverine sleep.sh ubuntu PD  0:00     1 (Priority)    # sbatch --nice=500 sleep.sh
>      88 wolverine sleep.sh ubuntu PD  0:00     1 (Priority)    # sbatch --nice=300 sleep.sh
>      89 wolverine sleep.sh ubuntu PD  0:00     1 (Priority)    # sbatch --priority=1000 sleep.sh
>
> The output of `scontrol show job` shows `Priority=1 Nice=0` when simply
> using `sbatch sleep.sh`.
> It also shows `Priority=1 Nice=0` when using `sbatch --priority=XXX sleep.sh`.
> The priority order also doesn't change if I use different nice values
> (`sbatch --nice=XXX sleep.sh`), although the nice value does change in the
> `scontrol show job` output.
> However, it works when I update the priority with `sudo scontrol update
> jobid=89 priority=100`:
>
>   JOBID PARTITION     NAME   USER ST  TIME NODES NODELIST(REASON)
>      89 wolverine sleep.sh ubuntu PD  0:00     1 (Resources)
>      84 wolverine sleep.sh ubuntu PD  0:00     1 (Priority)
>      85 wolverine sleep.sh ubuntu PD  0:00     1 (Priority)
>      86 wolverine sleep.sh ubuntu PD  0:00     1 (Priority)
>      87 wolverine sleep.sh ubuntu PD  0:00     1 (Priority)
>      88 wolverine sleep.sh ubuntu PD  0:00     1 (Priority)
>
> I'm just wondering whether updating job priority after submission is the
> best practice, or whether I'm misunderstanding something here.
> Thank you all!