On 30/08/16 12:39, Lachlan Musicman wrote:
> Oh! Thanks.
>
> I presume that includes sruns that are in an sbatch file.
Yup, that's right.
cheers!
Chris
--
Christopher Samuel        Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email:
Correct, NNU would allow a named user to run unlimited MATLAB on up to two
machines concurrently, so I could see how things might get out of sync. As
Stack pointed out, this is for MDCS which, similar to MATLAB concurrent,
requires a single license for a single running instance (i.e. 32
Hi All,
I'm having an issue with idle interactive sessions taking up resources for
many hours. I was hoping I could restrict all interactive sessions to a
single partition and then lower the time limit on that partition, but I
haven't been able to find anything about partition specification
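Slurm has no per-partition flag that admits only interactive jobs, so a common workaround is a dedicated partition with a short time limit, combined with a job_submit plugin or a site wrapper that routes interactive sessions into it. A minimal slurm.conf sketch, with the partition name and limits assumed, not taken from the thread:

```
# slurm.conf sketch -- partition name and limits are assumptions
PartitionName=interactive Nodes=ALL MaxTime=04:00:00 DefaultTime=01:00:00 State=UP
```

Jobs in this partition are then bounded by MaxTime; getting salloc/srun sessions into it automatically still requires a job_submit plugin or wrapper, since plain salloc/srun will otherwise use the default partition.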
Hello Folks,
We have recently deployed Slurm 15.08.7-build1 on our Ubuntu 16.04 submission
and execution nodes via apt-get; for the controller, we built and installed
the source packages of the same release on Ubuntu 14.04.
Our primary issue is that we’re not able to run multiple jobs in a single
Hello,
We have one user who needs a filesystem mounted with a localflock
option when he runs jobs. Can we just add a mount/unmount command to
the Prolog and Epilog scripts, like this?
if [ "$SLURM_JOB_USER" = "username" ]; then
umount /mnt/mountpoint
mount -t nfs -o
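That general shape should work, though it is worth guarding the variable and checking that the umount succeeds before remounting. A hedged sketch along those lines, where the username, mount point, export path, and mount options are all placeholders rather than values from the thread (note that for NFS, local flock semantics are spelled `local_lock=flock` in nfs(5)):

```shell
#!/bin/bash
# Prolog sketch -- "username", /mnt/mountpoint, and server:/export
# are hypothetical placeholders, not values from the original post.

needs_localflock() {
    # Succeed only for the one user who needs the special mount.
    [ "${1:-}" = "username" ]
}

if needs_localflock "${SLURM_JOB_USER:-}"; then
    # Remount with local flock semantics; the exact option string
    # is an assumption and should be checked against nfs(5).
    umount /mnt/mountpoint &&
        mount -t nfs -o local_lock=flock server:/export /mnt/mountpoint
fi
```

One caveat with this approach: the Prolog runs as root on every node of every job, so the umount will fail (and the old mount will linger) if another job on the node still has files open on that filesystem.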
Am Sun, 21 Aug 2016 17:29:39 -0700
schrieb Christopher Samuel :
> Unfortunately you can only do this as part of the taskprolog, so
> prepending to the user's stdout.
Thanks for confirming the (for me) puzzling state of affairs. Since I
need to append info about the
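For reference, the TaskProlog mechanism Chris describes works by having slurmd interpret the script's stdout: lines beginning with `print ` are prepended to the task's output, and lines beginning with `export ` set variables in the task's environment. A minimal sketch (the job tag is an invented example):

```shell
#!/bin/bash
# TaskProlog sketch: slurmd parses this script's stdout.
task_prolog_lines() {
    # "print <text>" => <text> is prepended to the task's stdout
    echo "print Job ${SLURM_JOB_ID:-unknown} starting on $(hostname)"
    # "export NAME=value" => exported into the task's environment
    # (JOB_TAG is a hypothetical example variable)
    echo "export JOB_TAG=demo"
}
task_prolog_lines
```

As noted above, this only allows prepending; there is no corresponding hook for appending to the user's output after the task finishes.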
I'm having difficulty using the Slurm API, specifically the batch-submit
call. A bare-bones script runs fine when submitted through sbatch, but the
same script fails with a NODE_FAIL status when submitted through the API.
slurm_submit_batch_job returns 0, and the response message contains a valid