Hi Christoph,

We do have it working with AUKS, which is mostly outside GE.

When a user runs 'qsub', a JSV checks whether the user has a ticket cache and, if so, submits it to the AUKS server (otherwise it prints a message warning that the job won't be able to access AFS). The AUKS server renews the credentials it keeps in its cache.
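
For a rough idea, the ticket-cache check boils down to something like this (a hypothetical sketch, not our actual JSV; 'klist -s' is standard MIT Kerberos, but check the 'auks -a' flag against your installed AUKS version):

```shell
#!/bin/sh
# Sketch of the qsub-side check (hypothetical helpers, not our exact JSV).
# Assumes MIT Kerberos 'klist' and the 'auks' client; the 'auks -a'
# submission flag is an assumption, see your AUKS version's man page.

have_ticket() {
    # klist -s exits 0 iff the user has a valid (non-expired) ticket cache
    klist -s 2>/dev/null
}

push_cred_to_auks() {
    # Hand the current credential cache to auksd for safekeeping/renewal
    auks -a
}

check_and_submit() {
    if have_ticket; then
        push_cred_to_auks
    else
        echo "WARNING: no Kerberos ticket; job will not be able to access AFS" >&2
        return 1
    fi
}
```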

When the job is launched, a prolog script uses a local service principal to check the user's Kerberos credential out of the AUKS server's cache. Another script renews the credential locally while the job runs.

The epilog script kills the local renewal processes. auksd itself can keep renewing the credentials up to the maximum renewable lifetime set by policy (7 days for us, configured on the Kerberos master servers).
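
In outline, the prolog/epilog pair looks something like the following (a sketch only; the 'auks -g' options, the file locations, and the one-hour renewal interval are assumptions to adapt to your AUKS version and GE setup):

```shell
#!/bin/sh
# Sketch of the execution-host side. JOB_ID and SGE_O_LOGNAME are set by
# Grid Engine; the 'auks -g' flags below are an assumption, check your
# AUKS man page. AFS sites also need 'aklog' after each renewal.

JOB_CCACHE="/tmp/krb5cc_${JOB_ID:-0}"
RENEWER_PID_FILE="/tmp/auks_renew_${JOB_ID:-0}.pid"

prolog() {
    # Fetch the submitting user's credential from auksd into a
    # job-private cache, authenticated via the host service principal.
    auks -g -u "$SGE_O_LOGNAME" -C "$JOB_CCACHE" || return 1
    export KRB5CCNAME="FILE:$JOB_CCACHE"
    aklog    # AFS: turn the Kerberos ticket into an AFS token

    # Background renewer: refresh the local copy well before it expires.
    ( while sleep 3600; do kinit -R -c "$JOB_CCACHE"; aklog; done ) &
    echo $! > "$RENEWER_PID_FILE"
}

epilog() {
    # Kill the renewer and discard the job's credential cache.
    if [ -f "$RENEWER_PID_FILE" ]; then
        kill "$(cat "$RENEWER_PID_FILE")" 2>/dev/null || true
    fi
    rm -f "$RENEWER_PID_FILE" "$JOB_CCACHE"
}
```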

Let me know if you need more help. Also, the author of auks, Matthieu Hautreux, was very responsive via e-mail.

http://sourceforge.net/projects/auks/
http://workshop.openafs.org/afsbpw10/wed_3_2.html

We use it with AFS, so there are also some 'aklog' commands in our scripts.

Regards,
Alex

On 10/02/2012 12:00 AM, Christoph Müller wrote:
Dear all,

I have been researching the web for some time, but have not yet found a
definite answer to the question of whether SGE can be used with Kerberos
authentication. My questions are: How can I forward the user's ticket
from the submit hosts? Does SGE provide any built-in means for that?
Otherwise, could it be done using startup scripts? Is there any support
for automatically renewing tickets for long-running jobs?

In detail: my boss decided that it would no longer be acceptable to live
with the well-known security issues inherent to NFS. We think that
kerberised NFS is probably the most user-friendly solution. However,
this will also affect our cluster and forces us to enable KRB5 here,
too. At the moment, users are authenticated using KRB5 on the submit
host, i.e. they have a ticket there. They could also acquire a ticket on
the execution hosts by SSH'ing there. However, afaik this cannot be
exploited for SGE, because the job script is executed by the shepherd on
the first execution host assigned by the scheduler. I.e. the job is
started by the shepherd spawning a process as the user and not by the
user starting a session with his own credentials. Is that correct? How
can I then transport the user's ticket to the execution host and assign
it to the job's process?

If I have the ticket on the host that runs the job script, the problem
should be solved for MPI as its children are started using SSH, and I
could just change the login method of SSH from pubkey to KRB5. Is that
correct?
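
For reference, switching those MPI-over-SSH hops from pubkey to Kerberos amounts to the standard OpenSSH GSSAPI options (a sketch; exact placement depends on your distribution's config layout):

```text
# client side (ssh_config or ~/.ssh/config on the execution hosts)
GSSAPIAuthentication yes
GSSAPIDelegateCredentials yes   # forward the ticket to the next host

# server side (sshd_config on the execution hosts)
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes    # remove forwarded creds at logout
```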

Another problem is the fact that jobs can be long-running, i.e. the
lifetime of ten hours of a ticket might not be sufficient. Does SGE
provide any means to periodically renew tickets? If not, does anyone
know of a successful hack? I think it would suffice if the job could
fork off a shell that periodically runs kinit -R.
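
For reference, that hack does work with MIT Kerberos, provided the initial ticket was acquired renewable (kinit -r <lifetime>); renewal is then 'kinit -R', which only succeeds until the renewable lifetime runs out. A minimal sketch, with an illustrative interval and helper name:

```shell
#!/bin/sh
# In-job renewer sketch (MIT Kerberos). The interval is illustrative;
# pick something comfortably below your realm's ticket lifetime.

half_life() {
    # seconds until renewal, given the ticket lifetime in seconds
    echo $(( $1 / 2 ))
}

INTERVAL=$(half_life 36000)   # 10-hour tickets -> renew every 5 hours

renew_loop() {
    while sleep "$INTERVAL"; do
        kinit -R || break    # -R renews; fails once renewable life is gone
        aklog                # AFS only: refresh the token too
    done
}

# In the job script:
#   renew_loop & RENEW_PID=$!
#   ... real work ...
#   kill "$RENEW_PID"
```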

If anyone knows about some web resources on this issue, I would be
grateful for the links.

Thanks in advance,
Christoph

Sent from my Windows Phone


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users


--
Alex Chekholko [email protected]
