Sorry for the delay. I have tried those parameters, and while they do prevent propagation of ulimits, they do not seem to work for srun or for MPI jobs launched with srun inside an sbatch script. Therefore it seems any users who start jobs with srun are limited to the ulimits of my login nodes.
After some consideration I think I may be able to set something up with a prolog script, which I will test tomorrow.
Thanks!

-------------------
Nicholas McCollum
HPC Systems Administrator
Alabama Supercomputer Authority

On Sat, 4 Jun 2016, Pär Lindfors wrote:
> On 06/03/2016 08:28 PM, Nicholas McCollum wrote:
>> Slurm uses a totally different process to execute the jobs, which is
>> fine... except the jobs inherit the ulimits from my login nodes. I have
>> found that this can be circumvented by using --propagate=NONE while
>> submitting the sbatch command, although I haven't found a way to force
>> this upon all submitted jobs. I've tried using /etc/sysconfig/slurm and
>> it appears this file is ignored. I would even be happy if this is
>> something that I could set in the job_submit.lua plugin, but I have not
>> seen a variable for something like this. Any ideas?
>
> Use the options PropagateResourceLimits or PropagateResourceLimitsExcept.
> Both are documented in the slurm.conf(5) man page.
>
> Pär Lindfors, NSC
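
For the archives, the suggested slurm.conf settings would look something like the fragment below. This is only a sketch: the two options are mutually exclusive, and which limits to exempt (MEMLOCK is a common example for MPI over InfiniBand) depends on the site.

```
# slurm.conf: do not propagate any resource limits from the submit host;
# jobs then inherit the limits of the slurmd on the compute nodes.
PropagateResourceLimits=NONE

# ...or, alternatively, propagate everything EXCEPT the listed limits
# (uncomment instead of the line above, not in addition to it):
# PropagateResourceLimitsExcept=MEMLOCK
```

After changing slurm.conf, the file needs to be distributed to all nodes and the daemons reconfigured (e.g. with scontrol reconfigure) for the new setting to take effect.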