We are pleased to announce the availability of Slurm version
17.11.0-0rc3 (release candidate 3).
The release candidate series reflects the end of feature development for
each release, the finalization of the RPC layer, and - except for
bug fixes developed during the RC time frame - will
Hi Gennaro,
I tried that. It doesn't even queue the job; it fails with an error:
sbatch: unrecognized option '--array=1-24'
sbatch: error: Try "sbatch --help" for more information.
Best,
Renat.
From: slurm-users [slurm-users-boun...@lists.schedmd.com] On Behalf Of Gennaro
Hi Renat,
On Thu, Nov 09, 2017 at 03:09:17PM +0100, Yakupov, Renat /DZNE wrote:
> I would like some suggestions on how to spread out in time the start
> of multiple parallel jobs with srun.
I would use:
sbatch --array=...
As far as I know srun doesn't support arrays.
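For reference, a minimal job-array submission script along those lines might look like this (the script name `myjob` and the per-task input naming are placeholders, not from the thread):

```shell
#!/bin/bash
#SBATCH --job-name=myjob-array
#SBATCH --array=1-24        # 24 independent array tasks
#SBATCH --ntasks=1          # each array task runs a single task

# Slurm sets SLURM_ARRAY_TASK_ID (1..24 here) in each array task's
# environment, so every task can select its own input file:
srun ./myjob "input.${SLURM_ARRAY_TASK_ID}"
```

Submitted with `sbatch myjob.sh`, this queues 24 array tasks that Slurm schedules independently, rather than one job that launches everything at once.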
> Is there a way to get a
Thanks, I didn't know about that parameter!
On 09/11/2017 at 14:45, Brian W. Johanson wrote:
man slurm.conf
SchedulerParameters
The interpretation of this parameter varies by
SchedulerType. Multiple options may be comma separated.
max_script_size=#
Hi,
I'd be interested to know in what circumstances a multiple-MB-sized
batch script would be sensible and/or necessary.
Cheers,
Loris
"Brian W. Johanson" writes:
> man slurm.conf
>
> SchedulerParameters
> The interpretation of this parameter varies by
Dear SLURM users,
I would like some suggestions on how to spread out in time the start of
multiple parallel jobs with srun.
I have a very basic script which specifies the number of nodes and tasks, with
just one command: srun myjob. The problem is that 10-20 tasks start accessing
files at the same
man slurm.conf
SchedulerParameters
The interpretation of this parameter varies by SchedulerType.
Multiple options may be comma separated.
max_script_size=#
Specify the maximum size of a batch script, in bytes. The
default value is 4
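As an illustration of that parameter (the 16 MB figure here is arbitrary, and the value is given in bytes), raising the limit in slurm.conf on the controller would look like:

```
# slurm.conf on the slurmctld host
SchedulerParameters=max_script_size=16777216   # 16 MB, in bytes
```

followed by the usual reconfiguration of the controller so the new setting takes effect.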
Hello, I have a problem with my features.
I declared the features in my slurm.conf, but when I run slurmd on a compute
node I see this problem in my log:
FeaturesAvail=(null) FeaturesActive=(null)
The problem is that slurmd does not call my function node_features_p_node_set,
and I cannot get my
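For what it's worth, and if I'm reading the plugin API right: statically declared features don't go through node_features_p_node_set() at all; they are listed on the node definition in slurm.conf, and the plugin entry point is only invoked when a NodeFeatures plugin is configured. A sketch (node and feature names are illustrative):

```
# slurm.conf - static features are listed per node:
NodeName=node[01-04] Features=intel,gpu

# node_features_p_node_set() is only called when a NodeFeatures
# plugin is configured, e.g.:
# NodeFeaturesPlugins=node_features/knl_cray
```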
I'll certainly produce documentation as soon as I understand how the whole
cluster works. (It was something like "Here's the root password
and the key to the room. You don't need anything else, do you?" :) )
Thanks to your precious suggestions I was able to work out that the common shared
space