[slurm-dev] Re: Job Accounting for sstat

2016-08-29 Thread Christopher Samuel
On 30/08/16 12:39, Lachlan Musicman wrote: > Oh! Thanks. > > I presume that includes sruns that are in an sbatch file. Yup, that's right. cheers! Chris -- Christopher Samuel, Senior Systems Administrator, VLSCI - Victorian Life Sciences Computation Initiative Email:
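
As a small illustration of that point (script and step names below are placeholders, not from the thread), each srun inside an sbatch script becomes its own numbered job step, and sstat can report live usage per step while the job runs:

    #!/bin/bash
    #SBATCH --job-name=steps-demo
    #SBATCH --ntasks=4
    # Each srun launched from the batch script is a separate step: <jobid>.0, <jobid>.1, ...
    srun ./stage_one
    srun ./stage_two

While the job is running, something like "sstat -j <jobid>.0 --format=JobID,MaxRSS,AveCPU" then shows the accounting for the first step.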

[slurm-dev] RE: slurm license counter

2016-08-29 Thread Raymond Norris
Correct, NNU would allow a named user to run unlimited MATLAB on up to two machines concurrently, so I could see how things might get out of sync. As Stack pointed out, this is for MDCS which, similar to MATLAB concurrent, requires a single license per running instance (i.e. 32
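
For context, Slurm can guard such a pool with its own local license counters; a rough sketch, with a made-up license name and counts (these counters are purely internal to Slurm and do not query the license server, which is one way the two can drift apart):

    # slurm.conf: declare a pool of 32 MDCS worker licenses (the name is arbitrary)
    Licenses=mdcs:32

    # In a job script: reserve 8 of them, so Slurm will not start the job
    # until that many are free
    #SBATCH --licenses=mdcs:8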

[slurm-dev] Combating idle interactive sessions

2016-08-29 Thread T Friddy
Hi All, I'm having an issue with idle interactive sessions taking up resources for many hours. I was hoping I could restrict all interactive sessions to a single partition and then lower the time limit on that partition, but I haven't been able to find anything about partition specification
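
One hedged sketch of that approach (the partition name, node names, and time limit are placeholders; a partition created with scontrol does not survive a slurmctld restart, so the lasting definition belongs in slurm.conf):

    # Dedicated partition with a short time limit for interactive work
    scontrol create PartitionName=interactive Nodes=node[01-04] MaxTime=04:00:00 State=UP

    # An interactive session requested against that partition
    srun --partition=interactive --pty bash

Forcing every interactive session into that partition automatically, rather than relying on users to request it, is usually done with a job_submit plugin that routes submissions arriving without a batch script.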

[slurm-dev] Multiple simultaneous jobs on a single node on SLURM 15.08.

2016-08-29 Thread Luis Torres
Hello Folks, We have recently deployed SLURM v 15.08.7-build1 on Ubuntu 16.04 submission and execution nodes with apt-get; we built and installed the source packages of the same release on Ubuntu 14.04 for the controller. Our primary issue is that we’re not able to run multiple jobs in a single
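
For what it's worth, the usual reason a fresh Slurm install refuses to co-schedule jobs on one node is the default node-exclusive selector; a hedged slurm.conf sketch of the consumable-resource setup (the parameter choice is only an example, and the daemons need restarting after the change):

    # The default SelectType=select/linear allocates whole nodes exclusively.
    # Consumable resources let several jobs share a node at core/memory granularity.
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core_Memory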

[slurm-dev] Question about Prolog and Epilog

2016-08-29 Thread Jason Hurlburt
Hello, We have one user who needs a filesystem mounted with a localflock option when he runs jobs. Can we just add a mount/unmount command to the Prolog and Epilog scripts, like this?

    if [ $SLURM_JOB_USER = "username" ]; then
        umount /mnt/mountpoint
        mount -t nfs -o
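
A hedged reconstruction of that fragment, only to show its shape (the server, export path, mount point, and username are placeholders; the exact mount options were cut off in the archive, and on Linux NFS the usual spelling for local flock semantics is local_lock=flock):

    #!/bin/bash
    # Prolog sketch: remount a filesystem with local flock semantics for one user.
    # The Prolog runs as root on the compute node; a matching Epilog would unmount.
    if [ "$SLURM_JOB_USER" = "username" ]; then
        umount /mnt/mountpoint 2>/dev/null
        mount -t nfs -o local_lock=flock nfsserver:/export/path /mnt/mountpoint
    fi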

[slurm-dev] Re: The canonical way to write to user's output (stderr) log file on end of job

2016-08-29 Thread Dr. Thomas Orgis
On Sun, 21 Aug 2016 17:29:39 -0700, Christopher Samuel wrote: > Unfortunately you can only do this as part of the taskprolog, so > prepending to the user's stdout. Thanks for confirming this (for me) puzzling state of affairs. Since I need to append info about the
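
For context on the mechanism quoted above: a TaskProlog communicates with Slurm through its stdout, and lines beginning with "print " are copied to the user's output before the task starts, which is why it can prepend but not append. A minimal sketch (the message text is arbitrary):

    #!/bin/bash
    # TaskProlog sketch: "print <text>" lines go to the task's stdout;
    # "export VAR=value" lines set environment variables for the task.
    echo "print Job $SLURM_JOB_ID starting on $(hostname) at $(date)"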

[slurm-dev] NODE_FAIL on slurm_submit_batch_job()

2016-08-29 Thread Stephen Barrett
I'm having difficulty using the Slurm API, specifically the batch submit call. When submitted through sbatch, a bare-bones script runs fine. When submitted through the API, the same script fails with a NODE_FAIL status. slurm_submit_batch_job returns 0, and the response message contains a valid
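
A hedged, shell-level way to narrow this down (the job IDs and log path are placeholders): dump what the controller recorded for an sbatch-submitted job and an API-submitted job, diff the two, and check the slurmctld log for the NODE_FAIL reason.

    # Compare the controller's view of the working and the failing submission
    scontrol -dd show job <sbatch_jobid> > sbatch_job.txt
    scontrol -dd show job <api_jobid>    > api_job.txt
    diff sbatch_job.txt api_job.txt

    # Look for why the allocation was torn down (log location varies by site)
    grep "<api_jobid>" /var/log/slurm/slurmctld.log

One difference worth checking is that sbatch fills in parts of the job descriptor, such as the submission environment and working directory, that a direct slurm_submit_batch_job() caller has to set itself.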