Hi Neile,
I agree that you can run a firewall to block off all non-cluster nodes.
The requirement is that, between compute nodes, all ports must be open
in the firewall (if you use one).
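For example, with firewalld one way to do this is to put the cluster
subnet into the trusted zone, which accepts all traffic from those
addresses (10.0.0.0/24 below is just a placeholder for your cluster
subnet):

  sudo firewall-cmd --permanent --zone=trusted --add-source=10.0.0.0/24
  sudo firewall-cmd --reload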
/Ole
On 10/27/2016 05:11 PM, Neile Havens wrote:
Can anyone confirm that Moe's statement is still valid with the current
Slurm version?
Conclusion: Compute nodes must have their Linux firewall disabled.
FWIW, I still run a firewall on my compute nodes. The firewall is open to any
traffic from other compute nodes or the head node, but blocks traffic from
elsewhere on our network (unfortunately, we don't have a dedicated network for
our cluster environment). Here are my notes from my install of SLURM 16.05 on
CentOS 7.
- head node
- NOTE: port 6817/tcp is for slurmctld, port 6819/tcp is for slurmdbd
- NOTE: opening to anything from cluster nodes, so that srun works (per Moe Jette's comment in the link you sent)
- sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="a.b.c.d/XX" accept'
- sudo firewall-cmd --runtime-to-permanent
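- NOTE: to confirm the rule is active, the rich rules can be listed:
- sudo firewall-cmd --list-rich-rules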
- compute nodes
- NOTE: port 6818/tcp is for slurmd
- NOTE: opening to anything from cluster nodes makes it simpler to work with MPI, although it should be possible to configure specific port ranges in /etc/openmpi-x86_64/openmpi-mca-params.conf (see the sketch after this list)
- sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="a.b.c.d/XX" accept'
- sudo firewall-cmd --runtime-to-permanent
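I haven't actually tried pinning MPI to a fixed port range, but from my
reading of the Open MPI docs it would look something like the following
in /etc/openmpi-x86_64/openmpi-mca-params.conf (treat the parameter
names and the 10000-10999 range as assumptions to verify with
ompi_info --all):

  # MPI point-to-point traffic (TCP BTL): use ports 10000-10999
  btl_tcp_port_min_v4 = 10000
  btl_tcp_port_range_v4 = 1000
  # Open MPI's out-of-band channel: same range
  oob_tcp_dynamic_ipv4_ports = 10000-10999

With something like that in place, the compute-node firewall could open
just that range plus 6818/tcp for slurmd, instead of accepting
everything from the cluster subnet.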