Marco,

My configuration is as follows (on the compute node(s)) and it works for
me.

There is no need to set 'UsePAM=1' in slurm.conf, and I do not have a
/etc/pam.d/slurm file.
Remove pam_slurm.so from the system-auth file and place it in the
password-auth file instead.

Edit /etc/pam.d/password-auth:

     account     required      pam_unix.so
     account     required      pam_slurm.so   <======== make sure this is above
                                              the other "account required ..."
                                              lines (except pam_unix.so)
     account     sufficient    pam_localuser.so
     account     sufficient    pam_succeed_if.so uid < 500 quiet
     account     required      pam_permit.so
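As a sanity check, the ordering requirement above can be verified with a short shell snippet. This is only a sketch: it runs against an inline copy of the stack so it works anywhere, and the PAMFILE variable and "ordering OK" message are my own, not part of any Slurm tooling. On a compute node you would set PAMFILE=/etc/pam.d/password-auth instead.

```shell
# Sketch: check that pam_slurm.so sits above the other "account"
# entries (pam_unix.so excepted) in the password-auth stack.
# PAMFILE here is a temp copy of the example stack, so this runs anywhere.
PAMFILE=$(mktemp)
cat > "$PAMFILE" <<'EOF'
account     required      pam_unix.so
account     required      pam_slurm.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so
EOF
# Line number of the pam_slurm.so entry ...
slurm_line=$(grep -n 'pam_slurm\.so' "$PAMFILE" | cut -d: -f1)
# ... and of the first other account entry (pam_unix.so excluded)
first_other=$(grep -n '^account' "$PAMFILE" \
    | grep -v 'pam_unix\.so' | grep -v 'pam_slurm\.so' \
    | head -n 1 | cut -d: -f1)
if [ "$slurm_line" -lt "$first_other" ]; then
    echo "ordering OK"
else
    echo "pam_slurm.so is too low in the stack"
fi
rm -f "$PAMFILE"
```

Against the stack shown above this prints "ordering OK".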

Edit /etc/security/access.conf (last 2 lines):

     # All other users should be denied to get access from all sources.
     + : root : ALL    <======== uncomment this line
     - : ALL  : ALL    <======== uncomment this line
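The uncommenting step can likewise be scripted. A minimal sketch, operating on a temporary copy of the file so it is safe to run anywhere (on a real node you would edit /etc/security/access.conf, keeping a backup; GNU sed's -i option is assumed):

```shell
# Sketch: uncomment the two access rules in a copy of access.conf.
# ACCESSFILE is a temp copy; the commented-out rule lines below are
# an assumption about how the stock file looks.
ACCESSFILE=$(mktemp)
cat > "$ACCESSFILE" <<'EOF'
# All other users should be denied to get access from all sources.
#+ : root : ALL
#- : ALL  : ALL
EOF
# Strip the leading '#' from the two rule lines
sed -i 's/^#\(+ : root : ALL\)/\1/; s/^#\(- : ALL  : ALL\)/\1/' "$ACCESSFILE"
# Show the now-active (non-comment) rules
rules=$(grep -E '^[+-] :' "$ACCESSFILE")
echo "$rules"
rm -f "$ACCESSFILE"
```

This prints the two uncommented rules: root is allowed from all sources, everyone else is denied.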

Regards,
Doug

From:   Marco Passerini <[email protected]>
To:     "slurm-dev" <[email protected]>,
Date:   03/06/2013 06:39 AM
Subject:        [slurm-dev] Slurm Pam to block access to nodes, not working


Hi,

I'm configuring a new cluster with the latest development version of
Slurm. I'd like to have PAM configured to normally prevent users from
logging into the compute nodes, allowing them in only when they have a
valid allocation. I tried to configure Slurm-PAM but it didn't work.

The compute nodes run CentOS 6.3 and are configured as follows:

[root@c2 ~]# rpm -qa | grep slurm
slurm-devel-2.6.0-0pre1.el6.x86_64
slurm-lua-2.6.0-0pre1.el6.x86_64
slurm-sql-2.6.0-0pre1.el6.x86_64
slurm-slurmdbd-2.4.3-1.el6.x86_64
slurm-plugins-2.6.0-0pre1.el6.x86_64
slurm-pam_slurm-2.6.0-0pre1.el6.x86_64
slurm-munge-2.6.0-0pre1.el6.x86_64
slurm-spank-x11-debuginfo-0.2.5-1.x86_64
slurm-2.6.0-0pre1.el6.x86_64
slurm-sjobexit-2.6.0-0pre1.el6.x86_64
slurm-sjstat-2.6.0-0pre1.el6.x86_64
slurm-perlapi-2.6.0-0pre1.el6.x86_64
slurm-torque-2.6.0-0pre1.el6.x86_64
slurm-spank-x11-0.2.5-1.x86_64

[root@c2 ~]# rpm -ql slurm-pam_slurm
/lib64/security/pam_slurm.so

[root@c2 ~]# cat /etc/pam.d/slurm
auth     required  pam_localuser.so
account  required  pam_unix.so
session  required  pam_limits.so


[root@c2 ~]# cat /etc/pam.d/system-auth
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_unix.so try_first_pass nullok
auth        required      pam_deny.so

account     required      pam_unix.so broken_shadow
account     required      pam_slurm.so

password    requisite     pam_cracklib.so try_first_pass retry=3 type=
password    sufficient    pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so


[root@c2 ~]# ls -lah /etc/pam.d/slurm
-rw-r--r-- 1 root root 101 Aug  8  2012 /etc/pam.d/slurm

[root@c2 ~]# ls -lah /etc/pam.d/system-auth
-rw-r--r-- 1 root root 745 Aug  8  2012 /etc/pam.d/system-auth


[root@c2 ~]# cat /etc/slurm/slurm.conf | grep -i pam
UsePAM=1

[root@c2 ~]# cat /etc/slurm/slurm.conf | grep -i PropagateRes
PropagateResourceLimitsExcept=MEMLOCK,RLIMIT_AS,RLIMIT_CPU,RLIMIT_NPROC,RLIMIT_CORE,RLIMIT_DATA,RLIMIT_RSS,STACK


There's a copy of my ssh-key in the .ssh/authorized_keys in my home folder.

On the nodes my user identity is present in /etc/passwd and /etc/group,
but there is no shadow file.

If I login with my account to a node I can enter with no problems and
/var/log/secure says the following:

Mar  6 15:22:35 c2 sshd[64542]: Accepted publickey for myusername from 10.10.0.13 port 54821 ssh2
Mar  6 15:22:35 c2 sshd[64542]: pam_unix(sshd:session): session opened for user myusername by (uid=0)

So, how can I prevent normal users from logging into the nodes when they
have no allocation? Am I doing something wrong?

Thanks in advance,
Marco
