On 10/01/17 10:57, Christopher Samuel wrote:

> If you are unlucky enough to have SSH based job launchers then you would
> also look at the BYU contributed pam_slurm_adopt

Actually this is useful even without SSH-based job launchers, as it lets
users SSH into a node where they have a job running while confining them
to the cores allocated to that job, so they cannot disturb other jobs on
the node.
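For reference, enabling pam_slurm_adopt is typically a one-line addition
to the sshd PAM stack (a sketch only; the exact path, position in the
stack, and module options are site- and distribution-specific):

```
# /etc/pam.d/sshd  (sketch; placement in the account stack varies by site)
# Deny SSH logins from users with no running job on this node, and
# adopt permitted sessions into one of their jobs' cgroups:
account    required    pam_slurm_adopt.so
```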

You could argue, though, that it is more elegant to add an interactive
shell job step to a running job:

[samuel@barcoo ~]$ srun --jobid=6522365  --pty -u ${SHELL} -i -l
[samuel@barcoo010 ~]$
[samuel@barcoo010 ~]$ cat /proc/$$/cgroup
4:cpuacct:/slurm/uid_500/job_6522365/step_1/task_0
3:memory:/slurm/uid_500/job_6522365/step_1
2:cpuset:/slurm/uid_500/job_6522365/step_1
1:freezer:/slurm/uid_500/job_6522365/step_1


-- 
 Christopher Samuel        Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: sam...@unimelb.edu.au Phone: +61 (0)3 903 55545
 http://www.vlsci.org.au/      http://twitter.com/vlsci
