I’ve had a hard time translating some per-user limits from Moab to Slurm.  As 
far as I can tell, there isn’t really a way to set a default combined limit 
across the cluster on the number of jobs a particular user can run, and soft 
limits like you describe don’t really exist.

You can get something like a hard limit on the number of jobs a user can run 
by forcing all jobs into a QOS with MaxJobsPerUser set.  That limits the 
number of jobs a user can have running in that QOS at any one time.  I think 
I could probably make that work well enough.
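
As a rough sketch of that setup (the QOS name ‘throttle’, the limit of 10, and 
the cluster name are just placeholder values, and this assumes accounting is 
already configured):

  # Create a QOS that caps the number of running jobs per user
  sacctmgr add qos throttle
  sacctmgr modify qos where name=throttle set MaxJobsPerUser=10

  # Make it the only QOS (and the default) on every user's association,
  # so jobs can't opt out of the limit
  sacctmgr modify user where cluster=mycluster set QOS=throttle DefaultQOS=throttle

and then in slurm.conf, make sure the limits actually get enforced:

  AccountingStorageEnforce=limits,qos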

Because soft limits aren’t really there, I wrote some code to add a 
max_resv_jobs parameter to the associations, which limits the number of jobs 
that can be running before the backfill process stops reserving slots for jobs 
in that association.  That way, jobs that can’t run immediately sit in the 
queue without a reservation while other jobs reserve slots.  However, my code 
doesn’t do anything to the job priority, so it’s not quite the same as Moab 
putting the jobs into the ‘blocked’ list and only checking them once the 
‘eligible’ jobs have been checked.
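
The closest thing I’ve found in stock Slurm is the bf_max_job_user option in 
SchedulerParameters, which caps how many jobs per user the backfill scheduler 
will attempt to start (and reserve resources for) in each scheduling cycle.  
It’s a per-cycle scheduler throttle rather than a true soft limit, but it does 
keep one user from claiming all of the backfill reservations.  For example 
(the value here is arbitrary):

  SchedulerType=sched/backfill
  SchedulerParameters=bf_max_job_user=10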
I’d love to see some other options to accomplish this kind of thing.

-----
Gary Skouson



From: Liam Forbes [mailto:[email protected]]
Sent: Thursday, February 18, 2016 10:04 AM
To: slurm-dev <[email protected]>
Subject: [slurm-dev] max user jobs limit?

Good morning,

I am using SLURM 15.08.0 and need to implement a maximum number of jobs per 
user. The chosen scheduler is backfill, and I found the bf_max_job_user 
parameter in the slurm.conf man page. Is it possible to have a soft limit, 
like in Maui/MOAB, where a user can have multiple jobs running, up to a hard 
limit, as long as no other user's jobs are in the system?

Regards,
-liam

-There are uncountably more irrational fears than rational ones. -P. Dolan
Liam Forbes     [email protected]     ph: 907-450-8618 fax: 907-450-8601
UAF Research Computing Systems Senior HPC Engineer            LPIC1, CISSP
