Moe,
All worked as suggested, thanks. Just a side note: on login nodes, without further options, the slurm daemon stops because it is unable to find the local node name. Running slurmd -N <nodename> did the trick.
Appreciated, thanks!
Carlos
On 11/18/2011 07:22 PM, Moe Jette wrote:
Quoting Carlos Aguado Sanchez <[email protected]>:
Dear all,
We would like to use slurm to start managing the compute resources of a
small cluster of nodes, O(10). Please let me check with you the correct
way to set up a cluster where users log in to a number of separate nodes
to submit their jobs. Ideally, those login nodes are not part of the
compute pool.
Do not define the login nodes in slurm.conf, but do install SLURM on
those nodes.
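For instance (just a sketch; the hostnames, GPU counts, and partition name below are made up), the compute nodes could be described in slurm.conf while the login hosts simply never appear in a NodeName line:

    GresTypes=gpu
    NodeName=node[01-10] Gres=gpu:2 State=UNKNOWN
    PartitionName=batch Nodes=node[01-10] Default=YES State=UP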
Additionally, gres is used to control GPU resources. The hardware of the
compute and login pools differs (e.g. the login nodes lack GPUs).
We have a working configuration where all compute nodes share the same
slurm.conf file. On the login nodes, slurm.conf is slightly modified
because gres fails to load in the absence of the GPU device files; that
is, the login nodes have the GresTypes option commented out.
That was fixed very recently, perhaps in version 2.3.2, or it may not yet
be in a release. For now you can configure a gres.conf file like this on
those nodes, so that the gres plugin loads even though no GPU devices are
present:
name=gpu count=0
I have seen the NO_CONF_HASH option to suppress the logging of
configuration-related error messages. I'm not sure this is desired,
though. Could you please shed some light on this?
I would recommend defining different gres.conf files and using the same
slurm.conf, without NO_CONF_HASH.
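As a sketch of what that could look like (the device paths here are hypothetical):

    # gres.conf on the compute nodes
    Name=gpu File=/dev/nvidia0
    Name=gpu File=/dev/nvidia1

    # gres.conf on the login nodes
    Name=gpu Count=0

That way every node keeps an identical slurm.conf (including GresTypes=gpu), and only the small per-node gres.conf differs between the two pools.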
I have also seen the --enable-front-end option, but I'm not sure it
applies to this case. Does it?
That would normally be used only on IBM BlueGene or Cray computers.
Thank you!
Carlos