On 10/01/17 10:57, Christopher Samuel wrote:
> If you are unlucky enough to have SSH based job launchers then you would
> also look at the BYU contributed pam_slurm_adopt
Actually this is useful even without that, as it allows users to SSH into
a node where they have a job while being confined to the cores that job
was allocated.
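For reference, enabling pam_slurm_adopt usually amounts to one line in the
sshd PAM stack; this is a sketch (file location and stack ordering vary by
distribution, so check the pam_slurm_adopt README for your setup):

```
# /etc/pam.d/sshd (path varies by distro) -- a sketch, assuming
# pam_slurm_adopt has been built and installed from the Slurm contribs
# tree. It denies SSH logins from users with no job on the node and
# adopts permitted sessions into the job's cgroup.
account    required     pam_slurm_adopt.so
```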
On 06/01/17 15:08, Vicker, Darby (JSC-EG311) wrote:
> Among other things we want the prolog and epilog scripts to clean up any
> stray processes.
I would argue that a much better way to do that is to use Slurm's
cgroups support, which will contain a job's processes inside a cgroup,
allowing Slurm to track and clean them up when the job ends.
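For anyone wanting to try this, the relevant settings look roughly like
the following; this is a sketch only, and the exact options should be
verified against the slurm.conf and cgroup.conf man pages for your Slurm
version:

```
# slurm.conf -- track and contain job processes with cgroups
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup

# cgroup.conf -- constrain jobs to their allocated resources
ConstrainCores=yes
ConstrainRAMSpace=yes
```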
Reply-To: slurm-dev <slurm-dev@schedmd.com>
Date: Friday, January 6, 2017 at 6:39 AM
To: slurm-dev <slurm-dev@schedmd.com>
Subject: [slurm-dev] Re: Prolog behavior with and without srun
Hi Darby,
I think the discrepancy in behaviour you're seeing in the prolog is because of
this:
NOTE: By default the Prolog script is ONLY run on any individual node
when it first sees a job step from a new allocation; it does not run
the Prolog immediately when an allocation is granted.
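If you need the Prolog to run as soon as the allocation is granted (rather
than at the first job step), slurm.conf's PrologFlags option changes this
behaviour. A sketch; double-check the slurm.conf man page for your version:

```
# slurm.conf
# Alloc:   run the Prolog on each allocated node when the allocation is
#          granted, not at the first job step.
# Contain: also create the job's "extern" step container at allocation
#          time (required if you use pam_slurm_adopt).
PrologFlags=Alloc,Contain
```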