Paul Edmon via slurm-users writes:
> https://slurm.schedmd.com/upgrades.html#compatibility_window
>
> Looks like no. You have to be within 2 major releases.
Also, the server must be the same version as, or newer than, the clients.
--
B/H
Ole Holm Nielsen via slurm-users writes:
> Whether or not to enable Hyper-Threading (HT) on your compute nodes
> depends entirely on the properties of applications that you wish to
> run on the nodes. Some applications are faster without HT, others are
> faster with HT. When HT is enabled, the
Sandor via slurm-users writes:
> I am working out the details of scrontab. My initial testing is giving me
> an unsolvable question
If you have an unsolvable problem, you don't have a problem, you have a
fact of life. :)
> Within scrontab editor I have the following example from the slurm
>
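Since the quoted example got cut off in the digest, here is a minimal scrontab entry sketch for reference (the script path and the options are placeholders, not from the original mail): scrontab lines use `#SCRON` directives for sbatch options, followed by an ordinary cron line.

```
# Recurring job sketch (placeholder script path and options):
#SCRON --partition=batch
#SCRON --time=00:10:00
30 2 * * * /home/user/nightly_report.sh
```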
Tim Wickberg via slurm-users writes:
> [1] Slinky is not an acronym (neither is Slurm [2]), but loosely
> stands for "Slurm in Kubernetes".
And not at all inspired by Slinky Dog in Toy Story, I guess. :D
--
Cheers,
Bjørn-Helge Mevik, dr. scient,
Department for Research Computing, University of Oslo
Jeffrey T Frey via slurm-users writes:
>> AFAIK, the fs.file-max limit is a node-wide limit, whereas "ulimit -n"
>> is per user.
>
> The ulimit is a frontend to rusage limits, which are per-process restrictions
> (not per-user).
You are right; I sit corrected. :)
(Except for number of procs
Ole Holm Nielsen writes:
> Hi Bjørn-Helge,
>
> That sounds interesting, but which limit might affect the kernel's
> fs.file-max? For example, a user already has a narrow limit:
>
> ulimit -n
> 1024
AFAIK, the fs.file-max limit is a node-wide limit, whereas "ulimit -n"
is per user.
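For anyone comparing the two on a node, a quick shell check (note the correction elsewhere in this thread: RLIMIT_NOFILE is actually per-process, not per-user):

```shell
# Compare the kernel-wide and per-process open-file limits.
# fs.file-max: ceiling on open file handles for the whole node.
# ulimit -n (RLIMIT_NOFILE): ceiling per process.
node_wide=$(cat /proc/sys/fs/file-max)
per_proc=$(ulimit -n)
# /proc/sys/fs/file-nr fields: allocated handles, unused, max
in_use=$(awk '{print $1}' /proc/sys/fs/file-nr)
echo "node-wide max: $node_wide  per-process: $per_proc  in use: $in_use"
```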
Now that I
Ole Holm Nielsen via slurm-users writes:
> Therefore I believe that the root cause of the present issue is user
> applications opening a lot of files on our 96-core nodes, and we need
> to increase fs.file-max.
You could also set a limit per user, for instance in
/etc/security/limits.d/. Then
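A minimal limits.d fragment might look like this (the filename and values are placeholders, and pam_limits must be active for the settings to apply at login):

```
# /etc/security/limits.d/90-nofile.conf  (placeholder values)
# <domain>  <type>  <item>   <value>
*           soft    nofile   4096
*           hard    nofile   65536
```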
We've been running one cluster with SlurmdTimeout = 1200 sec for a
couple of years now, and I haven't seen any problems due to that.
--
Regards,
Bjørn-Helge Mevik, dr. scient,
Department for Research Computing, University of Oslo
Amjad Syed via slurm-users writes:
> I need to submit a sequence of up to 400 jobs where the even jobs depend on
> the preceding odd job to finish and every odd job depends on the presence
> of a file generated by the preceding even job (availability of the file for
> the first of those 400
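The even-after-odd half of this maps directly onto `--dependency=afterok`; sbatch has no built-in "wait for a file" dependency type, so the file check for the odd jobs usually goes inside the job script itself. A dry-run sketch of the submission loop, assuming hypothetical step scripts (it prints the commands instead of submitting):

```shell
# Dry-run sketch of chaining dependent jobs. step_*.sh and JOBID_* are
# placeholders; in a real run, parse the job ID from sbatch's output.
prev_jobid=""
for i in 1 2 3 4; do                 # extend the range to 400 in practice
  if [ $((i % 2)) -eq 1 ]; then
    # Odd job: gate on the file inside the job script itself, since
    # sbatch has no built-in file-existence dependency type.
    dep=""
  else
    # Even job: start only after the preceding odd job finished OK.
    dep="--dependency=afterok:${prev_jobid}"
  fi
  cmd="sbatch ${dep} step_${i}.sh"
  echo "$cmd"                        # dry run: print instead of submitting
  prev_jobid="JOBID_${i}"            # stand-in for the real job ID
done
```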
This isn't answering your question, but I strongly suggest you build
Slurm from source. You can use the provided slurm.spec file to make
rpms (we do) or use "configure + make". Apart from being able to
upgrade whenever a new version is out (especially important for
security!), you can tailor the