On Tue, 29 Sep 2015 23:09:59 -0700,
Marcin Stolarek <[email protected]> wrote: 

> As far as I remember the easy way was to modify auth/munge not to trust
> root from particular host.

That is the point. While we trust all admins who have root access, we
maintain a separation of trust between management nodes and compute
nodes. Only the latter host the hordes of users able to get local
shells.

When someone has a local shell, there is always a chance that this
someone (or someone else who captured the login credentials/SSH key)
can become root. That chance is far higher than the chance of someone
breaking into a system without prior shell access; there are always
root exploits.

Thus, we indeed don't want to trust root from a wide range of cluster
nodes. Now, the potential trouble I see with modifying munge (on the
slurmctld/slurmdbd hosts):

1. Will everything needed for normal operation still work? I fear that
   things might break since this seems to be a non-standard way to run
   things.

2. This also removes some "safe" functionality like just querying the
   account limits for all users. But that is not really a problem as
   long as users can get at the information relevant to them.


> On Wednesday, 30 September 2015, Christopher Samuel <[email protected]>
> wrote:

> > You probably want an optional configurable list of multiple CIDR IP
> > ranges in slurmdbd.conf as there's no guarantee that slurmdbd (which is
> > what sacctmgr will be talking to) is running on the same host as slurmctld.

Right, that is a possible complication. But if communication goes
either to slurmdbd or to slurmctld, one can use ssh wrappers around the
respective commands and provide the set of privileged administration
nodes with the ssh keys for that, right? Or does sacctmgr also need
privileged communication to slurmctld?


> > For instance we have many clusters talking back to a central host
> > running slurmdbd (and only slurmdbd) and have a variety of
> > systems that talk to it for different tasks.

Ah, you employ the Great Solution of managing many clusters with a
common Slurm instance. We're just starting with a single cluster with
some partitions. Wouldn't even that case be covered by a wrapper akin to

        ssh $slurmdbhost sacctmgr ...

? I mean, ssh already provides a solution for trusting a set of
machines / root accounts, and one that does not rely on the assumption
that nobody attempts IP spoofing.
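To spell the wrapper idea out a bit: a minimal sketch, assuming a
placeholder host name "slurmdb.example.org" (invented here, not from
this thread) and that the admin nodes' ssh keys are authorized for a
suitable account on that host:

```shell
#!/bin/sh
# Host running slurmdbd; "slurmdb.example.org" is an invented
# placeholder, override via the SLURMDB_HOST environment variable.
: "${SLURMDB_HOST:=slurmdb.example.org}"

# Forward all arguments unchanged to sacctmgr on the slurmdbd host,
# relying on ssh key trust instead of IP-based trust.
sacctmgr_remote() {
    ssh "$SLURMDB_HOST" sacctmgr "$@"
}
```

One would then call, e.g., `sacctmgr_remote -n show associations
user="$USER"` from a privileged admin node.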

I'll see if the path via munge works. Thanks for the comments, both of
you.


Alrighty then,

Thomas

-- 
Dr. Thomas Orgis
Universität Hamburg
RRZ / Zentrale Dienste / HPC
Schlüterstr. 70
20146 Hamburg
Tel.: 040/42838 8826
Fax: 040/428 38 6270
