Untested, but you should be able to use a job_submit.lua file to detect 
whether the job was started with srun or sbatch:

  *   Check with (job_desc.script == nil or job_desc.script == '')
  *   Adjust job_desc.time_limit accordingly (see the sketch below)
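
Untested as well, but a minimal sketch of what that job_submit.lua could look 
like (the 120-minute cap and the log message are only placeholders; 
job_desc.time_limit is expressed in minutes):

-- Example cap for interactive (srun/salloc) submissions, in minutes.
local INTERACTIVE_MAX_MINUTES = 120

function slurm_job_submit(job_desc, part_list, submit_uid)
    -- Batch jobs carry a script; interactive srun/salloc submissions do not.
    if job_desc.script == nil or job_desc.script == '' then
        -- An unset limit typically shows up as a very large value (NO_VAL),
        -- so the cap applies to it as well.
        if job_desc.time_limit == nil
           or job_desc.time_limit > INTERACTIVE_MAX_MINUTES then
            job_desc.time_limit = INTERACTIVE_MAX_MINUTES
            slurm.log_info("job_submit.lua: capped interactive job from uid %u to %u minutes",
                           submit_uid, INTERACTIVE_MAX_MINUTES)
        end
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    return slurm.SUCCESS
end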

Here, I just gave people a shell function, "hpcshell", which automatically drops 
them into a time-limited partition. Easier for them, fewer idle resources for 
everyone:

hpcshell ()
{
    # Run an interactive shell in the time-limited partition, passing any
    # extra srun options through; quoting "$@" preserves arguments with spaces.
    srun --partition=interactive "$@" --pty "$SHELL" -i
}
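
For completeness, the actual enforcement comes from the partition definition; 
a minimal slurm.conf sketch, where the node list, MaxTime and DefaultTime are 
only example values:

PartitionName=interactive Nodes=node[01-04] MaxTime=08:00:00 DefaultTime=01:00:00 State=UP
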
________________________________
From: slurm-users <slurm-users-boun...@lists.schedmd.com> on behalf of Jaekyeom 
Kim <bta...@gmail.com>
Sent: Tuesday, August 4, 2020 5:35 AM
To: slurm-us...@schedmd.com <slurm-us...@schedmd.com>
Subject: [slurm-users] Correct way to give srun and sbatch different MaxTime 
values?


Hi,

I'd like to prevent my Slurm users from tying up resources with idle shell 
jobs left running, whether inadvertently or intentionally.
To that end, I simply want to impose a stricter maximum time limit on srun only.
One possible way might be to wrap the srun binary.
But could someone tell me if there is any proper way to do it, please?

Best,
Jaekyeom
