On Thu, May 17, 2018 at 4:49 PM, Matthieu Hautreux <matthieu.hautr...@gmail.com> wrote:
Hi,
Communications in Slurm are not only performed from the controller to slurmd
and from slurmd to the controller. You need to ensure that your login nodes
can reach the controller and the slurmd nodes, and that the slurmd daemons
on the various nodes can contact each other. This last requirement is
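The reachability matrix described above (login node to controller, login node to slurmd, and slurmd to slurmd) can be sanity-checked with a small script. This is a sketch under assumptions: the hostnames are placeholders, and 6817/6818 are only the *default* SlurmctldPort/SlurmdPort values — check slurm.conf for the actual ones.

```python
# Sketch: test whether a Slurm daemon's TCP port is reachable from this host.
# Port numbers are assumptions: 6817 is the default SlurmctldPort and 6818
# the default SlurmdPort; check slurm.conf for the values actually in use.
import socket

SLURMCTLD_PORT = 6817
SLURMD_PORT = 6818

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it from the login node against the controller (`reachable("ctl", SLURMCTLD_PORT)`) and against each compute node (`reachable("node01", SLURMD_PORT)`), then from one compute node against the others to cover the slurmd-to-slurmd paths.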
On Thu, May 17, 2018 at 11:28, Mahmood Naderan wrote:
> Hi,
> For an interactive job via srun, I see that after opening the GUI, the
> session is terminated automatically, which is weird.
>
> [mahmood@rocks7 ansys_test]$ srun --x11 -A y8 -p RUBY --ntasks=10
> --mem=8GB --pty

Hi,
At the time the MCS logic was added to Slurm, the filtering of slurmdbd
information based on the MCS label was deferred, because it requires adding
a new field (mcs_label) to the slurmdbd job/step records. The addition of
this label to the main branch took time and only appears in 17.11.
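For context, the MCS labels in question come from the MCS plugin configured in slurm.conf. A minimal sketch, with placeholder group names — the exact MCSParameters syntax should be checked against the slurm.conf man page for your version:

```
# Assign MCS labels from Unix group membership (sketch; groupA/groupB
# are placeholders for real groups on the cluster).
MCSPlugin=mcs/group
MCSParameters=enforced,select,privatedata:groupA|groupB
```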

Hi,
Your login node may be under heavy load while starting such a large number
of independent sruns. This may trigger issues not seen under normal load,
such as partial reads/writes on sockets, exposing bugs in Slurm functions
that are not properly protected against such events.
Quickly looking at the
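The failure mode alluded to above can be made concrete. This is an illustrative sketch, not Slurm's actual code: a single `recv()` may legally return fewer bytes than requested under load, so a correct reader must loop until the full count has arrived.

```python
# Sketch: defensive socket reading. A single recv(n) may return fewer than
# n bytes; code that assumes one recv() == one message breaks under load.
import socket

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from sock, looping over partial reads."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:  # peer closed the connection before n bytes arrived
            raise ConnectionError(
                "short read: got %d of %d bytes" % (len(buf), n))
        buf += chunk
    return bytes(buf)
```

The same discipline applies on the write side (`sendall()` rather than a bare `send()`), and any function not written this way can appear to work for years until a loaded machine delivers the first partial read.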

Hi Kevin,
Based on my understanding and a discussion with the Slurm dev team on that
subject, here is some information about the new support of X11 in
slurm-17.11:
- slurm's native support of X11 forwarding is based on libssh2
- slurm's native support of X11 can be disabled at