Hi All,
In our environment we have GPU nodes. What I found is that when a user with
high priority has a job in the queue waiting for GPU resources, which are
almost full and not available, jobs submitted by other users that do not
require GPU resources also sit in the queue, even though lots
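
If the goal is to let those non-GPU jobs start while the high-priority GPU
job waits, backfill scheduling is the usual mechanism: lower-priority jobs
are started as long as they will not delay the expected start time of the
blocked job. A minimal slurm.conf sketch (the parameter values here are only
illustrative):

SchedulerType=sched/backfill
SchedulerParameters=bf_continue,bf_window=1440,bf_resolution=60

Backfill can only plan around jobs with realistic time limits, so it helps to
set DefaultTime/MaxTime on the partitions and have users pass --time to
sbatch.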
Hi Team,
After my analysis I found that the user used the qdel command (a Torque
compatibility wrapper shipped with Slurm), and the job was not killed
properly, which left the slurmstepd process in a kind of hung state. So when
I tried to start slurmd, the process would not start. After killing those
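
In case it helps, the cleanup sequence would look something like this,
assuming a systemd-managed slurmd (the PID and node name are placeholders):

pgrep -a slurmstepd                           # list the leftover step daemons
kill -9 <pid>                                 # force-kill any that will not exit
systemctl restart slurmd                      # slurmd should now start cleanly
scontrol update NodeName=<node> State=RESUME  # clear a DOWN/DRAIN state if needed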
I think that’s correct. From the notes I’ve got on how we want to handle our
fairshare in the future:
Setting up a funded account (which can be assigned a fairshare):
sacctmgr add account member1 Description="Member1 Description" FairShare=N
Adding/removing a user to/from the funded account:
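
For a funded account like the one above, those commands would look something
like this (alice is a placeholder user name):

sacctmgr add user alice Account=member1
sacctmgr remove user alice where Account=member1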
Hello all,
Is it possible to configure Slurm so that the fairshare calculation on one
partition does not impact the calculation on a different one?
We'd need to have different "priorities" on the "postprocessing" nodes than
the ones on the "parallel" nodes, so that even if a user has already used
up all his "quota" on
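
As far as I know, fairshare usage is tracked per association
(cluster/account/user), not per partition, so there is no direct per-partition
switch. One possible workaround, assuming TRESBillingWeights feeds the
fairshare usage as documented, is to zero the billing weights on the
partition that should not consume quota (node lists and weights below are
placeholders):

PriorityType=priority/multifactor
PriorityWeightFairshare=10000
PartitionName=parallel Nodes=p[01-32] TRESBillingWeights="CPU=1.0,Mem=0.25G"
PartitionName=postprocessing Nodes=pp[01-04] TRESBillingWeights="CPU=0.0,Mem=0.0G"

With that, jobs on postprocessing would accrue no billable usage and so would
not eat into the quota that drives priority on the parallel partition.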