Re: some cross-realm trust questions
On Tue, Dec 28, 2010 at 05:02:45PM +, Victor Sudakov wrote:
> Russ Allbery wrote:
> > You use a password.  Enter the same password on both sides when
> > creating the key, and then be sure to remove any extraneous enctypes
> > on the Heimdal side that AD isn't configured to provide.
> Do you mean to say that the key derivation algorithm is the same in
> Heimdal and in MS AD?  The same password will yield the same key
> anywhere, in any Kerberos implementation?

Of course: that's part of the standard, else there'd be no interop.

> And BTW how do I figure out what enctypes AD is configured to provide?
> Is there anything like "kadmin get" for AD?

Our adjoin[0] script (which was referenced in a BigAdmin paper by Baban
Kenkre[1]) implements a heuristic to detect what enctypes are available,
based on, IIRC, trying to add an LDAP attribute named
msDS-SupportedEncryptionTypes to the machine account object.  Failure
denotes older AD supporting 1DES and RC4 only; success denotes support
for AES-128 and AES-256.  (The script then sets the userAccountControl
and msDS-SupportedEncryptionTypes attributes to configure the use of
the intersection of the enctypes offered by AD and the enctypes
available and enabled on the host being joined to AD.)

You can probably port adjoin to work with Heimdal with relatively
little work.

[0] http://hub.opensolaris.org/bin/view/Project+winchester/files?viewer=history&language=en
[1] http://www.sun.com/bigadmin/features/articles/kerberos_s10.jsp

Nico
--
Kerberos mailing list           Kerberos@mit.edu
https://mailman.mit.edu/mailman/listinfo/kerberos
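[For readers porting adjoin: msDS-SupportedEncryptionTypes is a bitmask,
and the intersection step the script performs can be sketched in C as
below.  The bit values follow Microsoft's published assignments; the
function name and fallback behavior are illustrative, not adjoin's
actual code.]

```c
#include <assert.h>

/* Enctype bits as used by AD's msDS-SupportedEncryptionTypes attribute. */
#define KERB_ENCTYPE_DES_CBC_CRC             0x01
#define KERB_ENCTYPE_DES_CBC_MD5             0x02
#define KERB_ENCTYPE_RC4_HMAC_MD5            0x04
#define KERB_ENCTYPE_AES128_CTS_HMAC_SHA1_96 0x08
#define KERB_ENCTYPE_AES256_CTS_HMAC_SHA1_96 0x10

/* Intersect the enctypes AD offers with those the host has enabled.
 * When the attribute could not be added (pre-2008 AD), fall back to
 * the old implicit set: single-DES plus RC4. */
static unsigned int
negotiated_enctypes(int attr_present, unsigned int ad_offered,
                    unsigned int host_enabled)
{
    unsigned int offered = attr_present ? ad_offered
        : (KERB_ENCTYPE_DES_CBC_CRC | KERB_ENCTYPE_DES_CBC_MD5 |
           KERB_ENCTYPE_RC4_HMAC_MD5);
    return offered & host_enabled;
}
```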
Re: some cross-realm trust questions
On Tue, Dec 28, 2010 at 01:34:17PM -0800, Wilper, Ross A wrote:
> > Our adjoin[0] script (which was referenced in a BigAdmin paper by
> > Baban Kenkre[1]) implements a heuristic to detect what enctypes are
> > available based on, IIRC, trying to add an LDAP attribute named
> > msDS-SupportedEncryptionTypes to the machine account object.
> > Failure denotes older AD supporting 1DES and RC4 only; success
> > denotes support for AES-128 and AES-256.
> This is actually a bit dangerous.  If an Active Directory has the
> schema upgraded to Windows 2008 or later, but not all domain
> controllers have been upgraded to Windows 2008 or later, then this
> will give the wrong response.

I did say "heuristic".  There are, potentially, if not actually, other
ways in which it could fail.

Nico
Re: some cross-realm trust questions
On Mon, Dec 27, 2010 at 05:20:19AM +, Victor Sudakov wrote:
> Nicolas Williams wrote:
> > > 1. If a cross-realm trust is configured, do the realms' KDCs ever
> > > have to exchange any traffic between each other?
> > No, they do not.
> That's great, but at least at the initialization stage, how is a
> shared key for the corresponding krbtgt principals transferred
> between the two KDCs?  The Windows "New Trust" wizard just asks for a
> password and never offers to export a keytab or anything.

True, but this is a step that must be executed locally on each realm
(with the same exact password).  There's no standard protocol to help
realms agree on shared x-realm keys, not yet anyways.

Nico
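[Why entering the same password on both sides yields the same key: with
the default salt, string-to-key depends only on the password and the
principal's name, and both are identical on the two KDCs.  The default
salt (RFC 4120) is simply the realm name followed by the principal's
name components, concatenated.  A sketch; the helper is illustrative,
and real code would use something like krb5_principal2salt():]

```c
#include <string.h>

/* Build the default Kerberos salt for a principal: the realm name
 * followed by each name component, concatenated with no separators
 * (RFC 4120, section 4).  For krbtgt/B.EXAMPLE@A.EXAMPLE this is
 * "A.EXAMPLEkrbtgtB.EXAMPLE" -- the same on both KDCs, which is why
 * the same password produces the same cross-realm key everywhere. */
static void
default_salt(const char *realm, const char *const *components, int ncomp,
             char *out, size_t outlen)
{
    out[0] = '\0';
    strncat(out, realm, outlen - 1);
    for (int i = 0; i < ncomp; i++)
        strncat(out, components[i], outlen - strlen(out) - 1);
}
```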
Re: some cross-realm trust questions
On Sat, Dec 25, 2010 at 07:10:53AM +, Victor Sudakov wrote:
> 1. If a cross-realm trust is configured, do the realms' KDCs ever have
> to exchange any traffic between each other?

No, they do not.

Nico
Re: ssh to IP literal
On Mon, Dec 13, 2010 at 01:03:17PM -0500, Greg Hudson wrote:
> On Mon, 2010-12-13 at 00:34 -0500, Russ Allbery wrote:
> > Well, it poses a problem for domain to realm mappings, as you've
> > seen.
> What Russ says is true, but on top of that, the Kerberos library also
> needs to know what service ticket to ask for.  It's likely that the
> client tried to get tickets for host/10.14.13...@defaultrealm before
> falling back to guessing 14.134.5 as the realm.
> The proximal issue is that you need a reverse DNS entry for
> 10.14.134.5.  (Reliance on DNS for this purpose is a long-standing
> security issue, but we still do it.)

When an app resolves a user-given IP address to a name which is then
used for authentication purposes, the app should prompt the user as to
whether the name is the one the user had intended.  Most non-browser
apps don't really do that.

Nico
Re: UDP and fragmentation
On Tue, Sep 14, 2010 at 04:45:25AM +, Victor Sudakov wrote:
> Greg Hudson wrote:
> > > BTW what can make Kerberos packets so big?  Microsoft says:
> > > "Depending on a variety of factors including security identifier
> > > (SID) history and group membership, some accounts will have larger
> > > Kerberos authentication packet sizes."  What's there inside?  Long
> > > principal names?  Long keys?
> > An Active Directory KDC will include authorization data within a
> > Kerberos ticket which includes the set of groups you are a member
> > of.  If that's a lot of groups, then your ticket will be large.
> It is very interesting.  Where is there room in a Kerberos ticket for
> such data?

In the authorization-data field [of EncTicketPart].  See RFC 4120.

Nico
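[To get a feel for why group membership inflates tickets: the Windows
PAC rides in that authorization-data field, and per MS-PAC each
same-domain group membership adds an 8-byte RID-plus-attributes pair,
while SIDs from other domains cost a full SID each.  A rough,
illustrative estimator; the base-size constant is a made-up ballpark,
not a value from the specification:]

```c
#include <stddef.h>

/* Rough, illustrative estimate of how the PAC's logon-info buffer grows
 * with group count.  Per MS-PAC, each GROUP_MEMBERSHIP entry is two
 * 4-byte fields (RelativeId, Attributes); "extra SID" entries carry a
 * full SID and cost considerably more.  PAC_BASE is a made-up ballpark
 * for the fixed parts of the structure. */
#define PAC_BASE          512
#define GROUP_ENTRY_BYTES 8
#define EXTRA_SID_BYTES   68   /* approx. max-length SID */

static size_t
pac_logon_info_estimate(size_t ngroups, size_t nextra_sids)
{
    return PAC_BASE + ngroups * GROUP_ENTRY_BYTES
                    + nextra_sids * EXTRA_SID_BYTES;
}
```

A user in hundreds of groups can therefore push the AS-REP or TGS-REP
past a typical UDP MTU, which is what forces the TCP fallback discussed
in this thread.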
Re: bug: krb5_get_host_realm() no longer uses DNS
On Thu, May 20, 2010 at 03:23:51PM -0400, Greg Hudson wrote:
> On Wed, 2010-05-19 at 18:29 -0400, Richard Silverman wrote:
> > in my system, DNS TXT records *are* explicit local configuration.
> They're explicit configuration, but not local to the host machine.
> (They're local to your organization, but that's not the sense in
> which I was using the term.)

Most importantly: lookups of DNS TXT RRs for host2realm mapping are not
DNSSEC-protected in the current code base, nor is it likely that DNSSEC
will be deployed in most Kerberos use cases today (though we all hope
DNSSEC will be widely deployed soon enough).

In the case of host2realm lookups for AS exchanges there's little harm
that could come from using DNS TXT RRs if you're also using SRV RRs to
find the KDCs for that realm.  In the case of TGS exchanges to get
service tickets for local host-based principals there's no harm at all
from using host2realm lookups.  In all other cases there can definitely
be harm.

> > Perhaps dns_lookup_realm could have more values: no,
> > on_referral_failure, and always.  I would set yes to mean always for
> > backward compatibility, but you might want it to be
> > on_referral_failure.
> I think this is more complexity than you need.  You don't actually
> need DNS records to take priority over referrals; you just want them
> to apply to acceptors--or, more generally, for acceptors to work
> without local configuration of the server realm.

A more complete generalization can be based on what I wrote above.

> I could see us making either of the following changes to address use
> cases like yours:
>
> 1. Bringing the krb5_get_fallback_host_realm() logic back into the
> acceptor case somehow, effectively restoring the 1.5 behavior for
> acceptors.  The most obvious way to do this would, unfortunately, be a

Via krb5_context flags abstracted via accessor functions!

> pretty clear layering violation--krb5_kt_get_entry should not be
> invoking host-to-realm logic--so this would probably be done within
> the krb5 GSS mechanism code.

I'd expect this to be triggered from the mech, yes.

> 2. Making the referral realm match any realm within keytabs.  With
> this behavior, your servers wouldn't need to consult DNS TXT records
> to determine their own realms (though you'd presumably still need the
> records for the sake of initiators), eliminating a denial-of-service/
> spoofing attack.  Sam did not like this option back in 1996, although
> I don't fully understand his objection:

This doesn't work for non-privileged apps, but it'd work for sshd.

> > One option would be to match any realm if you get a null realm
> > principal in the keytab code.  I dislike this option a lot even
> > though I think Doug would like it.  The problem is that you get
> > behavior that might encourage some people to remove domain_realm
> > mappings.

I don't follow that concern.

Nico
Re: bug: krb5_get_host_realm() no longer uses DNS
On Wed, May 19, 2010 at 05:58:41PM -0400, Greg Hudson wrote:
> The design of referrals support assumes that referrals from the local
> realm are less reliable than explicit local configuration, and more
> reliable than DNS-based or heuristic mechanisms.  Per that design,
> the following changes are intentional:
>
> * krb5_get_host_realm() returns the referral (empty) realm if there
> is no explicit local configuration.

This _could_ look in the keytab...  If there are keys for the hostname
in one realm, that must be it (or if there are keys for the thing in
multiple realms, then pick the first, or last).  It could also validate
the keytab entries found by attempting to get a TGT.  This would only
work for privileged apps or unprivileged apps with keytabs, but there
could be a service whose job it is to record the host's realm in
/var/run where unprivileged apps could get at it.

> * krb5_sname_to_principal() uses the referral realm if there is no
> explicit local configuration for the host.

Note that this implicitly uses a ccache.  This is annoying.  It'd be
nice if there were a way to explicitly pass in a credential (TGT) or,
rather, a ccache to use for canonicalization.  (The same applies to
gss_canonicalize_name(); we really need gss_canonicalize_name_with_cred()
and the krb5 API equivalent, possibly as a function to set the default
cred/ccache in a krb5_context to use for operations that use an implied
cred/ccache.)

> > To be clear: I have thousands of hosts in a single DNS domain, which
> > are in varying realms.  I do not have the option of renaming all the
> > hosts to align with their realm membership, and static configuration
> > is impractical; the DNS is necessary.
> You appear to know your options reasonably well; for what it's worth,
> I would recommend either:
>
> 1. Setting GSSAPIStrictAcceptorCheck false on your servers, and not
> worrying too much about the potential for a client to use the wrong
> service to authenticate to sshd.

+1.  It's not likely that you'll have multiple sshd services running
where the name of the service is security-significant to the client
such that the client could be misled into connecting to the wrong one.
Though, technically speaking, misdirection is a potential problem
resulting from not checking that the acceptor principal matches the
expected name.

> 2. Configuring each server to know what realm it's in (via the
> default realm setting in krb5.conf).  Based on your statements, this
> is more configuration complexity than you want in your environment,
> but it's more secure than having your servers perform a spoofable DNS
> TXT query to find out where they live.

+1.  One could certainly write a boot script that determines the realm
and updates krb5.conf at boot time...

Nico
Re: using a ssh key for krb5 mount
On Mon, May 17, 2010 at 05:02:31PM +0200, Richard Smits wrote:
> But my question is: is this possible?  Obtaining a krb5 ticket with an
> ssh public/private key mechanism?

SSHv2 supports the use of Kerberos via the GSS-API.  PuTTY, OpenSSH,
SunSSH, Van Dyke, and various other implementations all support that,
and that is what you should use (plus credential delegation).

The only way to do what you actually propose would be by having PKINIT
user certificates whose subject public keys are also the users' SSH
public keys, or by adding a PKIX-agent to go with ssh-agent.  That is
not a common usage, and so not supported by any software that I know
of, but it is technically doable.

The more complex issue is: how to authenticate to a remote server using
SSH public keys, forward an ssh-agent, and get the remote server to
automatically obtain a TGT using PKINIT and your forwarded agent.
Nothing supports that, to my knowledge.

Nico
Re: bug: krb5_get_host_realm() no longer uses DNS
On Mon, May 17, 2010 at 04:32:51PM -0400, Richard Silverman wrote:
> On Mon, 17 May 2010, Greg Hudson wrote:
> > > If a server determines its realm via a TXT record, e.g. for
> > > gss_acquire_cred(), then it now fails where it worked in earlier
> > > versions (this has bitten me with OpenSSH).
> > Is there a reason your server needs to use gss_acquire_cred with a
> > specified name, as opposed to just passing null credentials to
> > gss_accept_sec_context, or a null name to gss_acquire_cred?
> Well, often this is not possible; many servers have determination of
> their service principal hard-wired.  In this particular case it is
> possible, since my OpenSSH build has a GSSAPIStrictAcceptorCheck
> parameter.  However, then ideally I should place a copy of the host
> principal in an OpenSSH-specific keytab.  OpenSSH does not have a
> parameter for keytab location, so I have to modify the startup
> process to set KRB5_KTNAME... and so on.

You can always use GSS_C_NO_CREDENTIAL and then inquire the established
security context's acceptor principal name to see that it matches what
you expected.  In fact, that's what I recommend.

Ideally the GSS-API should allow adding elements for multiple
principals to credential handles, but we're not there.  The krb5
mechanism should get fixed, and it could be fixed without fixing the
underlying krb5 API.

Nico
Re: bug: krb5_get_host_realm() no longer uses DNS
On Mon, May 17, 2010 at 11:00:36PM +0100, Simon Wilkinson wrote:
> On 17 May 2010, at 22:07, Nicolas Williams wrote:
> > You can always use GSS_C_NO_CREDENTIAL and then inquire the
> > established security context's acceptor principal name to see that
> > it matches what you expected.
> When I added StrictAcceptorCheck support to my OpenSSH patches (left
> to rot in their bugzilla) I thought about doing this.  But I never
> managed to find a mechanism- and GSSAPI-implementation-independent
> way of getting a name out of the GSSAPI in a format that I could
> check against the expected name (host@something).  If that now
> exists, I'd be happy to revisit this.

There are, technically, two ways to do this comparison:

Method #1: Use gss_compare_name() to compare a name obtained by calling
gss_import_name() on host@hostname to the acceptor name returned by
gss_inquire_context().

Method #2: Use memcmp() to compare exported name tokens obtained by
applying gss_export_name() to the acceptor name returned by
gss_inquire_context() and to the MN returned by applying
gss_canonicalize_name() to the name returned by
gss_import_name(host@hostname).  (Obviously, if the two exported name
tokens have different lengths then the names are not equal and there's
no need to memcmp().)

In _this_ case I think method #1 is best.  (Normally, for name-based
access controls, method #2 is better.)

(The method that might seem more obvious to newcomers -- trying to
compare display forms of names -- is neither correct nor portable
across mechanisms.)

> Bear in mind that the OpenSSH GSSAPI code is designed to work with
> mechanisms other than Kerberos, and with implementations other than
> MIT.  Changes that require mechanism- or implementation-specific
> hacks are not desirable.

The above is intended to be portable across mechanisms.  GSS-API
principal name comparison is described in RFCs 2743 (section 1.1.5) and
2744 (section 3.10).

Nico
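[The final step of method #2 -- comparing the two exported-name tokens
-- reduces to a length check plus memcmp, since RFC 2743 guarantees
that equal MNs export to identical octet strings.  A sketch of just
that step, with plain buffers standing in for the tokens that
gss_export_name() would produce:]

```c
#include <string.h>

/* Compare two exported-name tokens (as produced by gss_export_name()).
 * Per RFC 2743, the exported names of equal MNs are octet-for-octet
 * identical, so equality is a length check followed by memcmp(). */
static int
name_tokens_equal(const void *tok1, size_t len1,
                  const void *tok2, size_t len2)
{
    if (len1 != len2)
        return 0;                       /* different lengths: not equal */
    return memcmp(tok1, tok2, len1) == 0;
}
```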
Re: bug: krb5_get_host_realm() no longer uses DNS
On Mon, May 17, 2010 at 06:38:48PM -0400, Greg Hudson wrote:
> On Mon, 2010-05-17 at 18:21 -0400, Nicolas Williams wrote:
> > Method #1: Use gss_compare_name() to compare a name obtained by
> > calling gss_import_name() on host@hostname to the acceptor name
> > returned by gss_inquire_context().
> One of the reasons not to specify a desired name in an acceptor is
> that you don't know the hostname used by the client (because of
> aliases).  Neither method #1 nor method #2 will work if you don't
> have a hostname value.  You really just want to verify the host part.

True, but you can just iterate over all the known canonical hostnames
of the host.  (This feature is usually desired for virtualization
reasons.)
Re: RFC 4121 (Kerberos 5 V2 - GSSAPI) - RRC
On Thu, May 06, 2010 at 04:07:03PM +0530, Srinivas Cheruku wrote:
> The Wrap token should be rotated to the right by the count specified
> in the RRC field, whereas it looks like MIT Kerberos (1.8.1) is
> rotating to the left (when gss_unwrap() is called).  Is this right?

It has to be to the right in one case (wrap) and to the left in the
other (unwrap).

Nico
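[The symmetry Nico describes can be sketched as a pair of byte-rotation
helpers.  Illustrative only: a real implementation rotates within the
RFC 4121 token layout, and this sketch assumes buffers of at most 256
bytes.]

```c
#include <string.h>

/* Rotate buf right by rrc bytes, as the wrap side does per the RRC
 * field (RFC 4121 section 4.2.5).  Sketch: assumes len <= 256. */
static void
rotate_right(unsigned char *buf, size_t len, size_t rrc)
{
    unsigned char tmp[256];

    rrc %= len;
    if (rrc == 0)
        return;
    memcpy(tmp, buf + len - rrc, rrc);   /* save the tail that wraps */
    memmove(buf + rrc, buf, len - rrc);  /* shift the rest right */
    memcpy(buf, tmp, rrc);               /* tail becomes the new head */
}

/* The unwrap side undoes it: a left rotation by rrc, which is the
 * complementary right rotation. */
static void
rotate_left(unsigned char *buf, size_t len, size_t rrc)
{
    rotate_right(buf, len, len - (rrc % len));
}
```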
Re: mixed case hostname issue
Hostnames are always case folded (to lower case) in principal names.

Nico
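[A minimal sketch of that folding step, as applied when forming
host-based principal names:]

```c
#include <ctype.h>

/* Lower-case a hostname in place, as is done when forming a
 * host/<fqdn> principal name (e.g. HOST.Example.COM becomes
 * host.example.com), so that clients and keytabs agree on the name
 * regardless of how the hostname was typed or returned by DNS. */
static void
fold_hostname(char *h)
{
    for (; *h != '\0'; h++)
        *h = (char)tolower((unsigned char)*h);
}
```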
Re: pam_krenew ?
Solaris' pam_krb5 talks to a daemon (ktkt_warnd) that renews TGTs in the background. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Maximum size of a Unix MIT Kerberos database backend
On Tue, Nov 10, 2009 at 11:14:40AM -0600, John Washington wrote:
> Our backend was last counted at over 200,000 principals and the only
> noticeable impact (at this time) is that propagation time is around
> two minutes.

My previous experience was with ~100K principals, and indeed, it scales
fine.  I suspect it scales just fine to much larger sizes.

Things to keep in mind:

 - The MIT krb5 KDC (and so the Solaris one) is single-threaded, and
   demand for KDC exchanges matters more than the number of principals
   in the KDB.  But you're likely to have multi-core/multi-threaded-CPU
   hardware, so you may want to create a VM/zone/jail per core or per
   hardware thread and run a KDC in as many as you need to scale to
   demand.  You'll probably want to measure how many KDC exchanges you
   can get per hardware thread and decide how many KDCs you need based
   on expected demand.  Estimating demand requires knowledge of what
   kerberized services you will have.  In any case, if you will deploy
   incrementally, then you can add KDCs as you deploy.

 - Incremental propagation helps; I recommend it.

Nico
Re: MS IWA - extended protection - SSPI - channel binding
On Tue, Sep 22, 2009 at 09:50:19AM -0700, Peter wrote:
> From what I can tell, this change was not pushed as a critical
> update; I had to install a patch manually to get channel binding
> capability for Windows XP (http://support.microsoft.com/kb/968389).
> I've done some experimenting with both Windows 7 and Windows XP and
> channel binding definitely behaves differently on the two platforms.
> With Windows 7, IWA authentication appears to provide channel binding
> regardless of whether the application requests extended protection.
> Actually, this is causing a runtime failure in my Java application
> using JGSS without any channel bindings defined on the acceptor:
>
> GSSException: Channel binding mismatch (Mechanism level:
> ChannelBinding not provided!)

The JGSS issue is CR #6851973:

  6851973 ignore incoming channel binding if acceptor does not set one

The fix will be in the October 2009 updates.  (The fix was integrated
into build b64.)

Nico
Re: Long-running jobs with renewal of krb5 tickets and AFS tokens
On Sat, Feb 28, 2009 at 11:40:26PM -0500, Jason Edgecombe wrote:
> I guess setting things for renewable tickets longer than 7 days or
> running the jobs on local disk will be easiest.
>
> We have a 7 day normal/renewable lifetime.  What length do other
> sites have?

I have seen sites use on the order of months for the renewable ticket
lifetime, but still hours for the normal ticket lifetime.  If you
already use seven days for the renew life you might as well double it
-- whatever your threat model is, if you can accept seven days then
chances are you can accept fourteen.

Nico
Re: Long-running jobs with renewal of krb5 tickets and AFS tokens
On Mon, Mar 02, 2009 at 09:02:59PM -0500, Jason Edgecombe wrote:
> Nicolas Williams wrote:
> > I have seen sites use on the order of months for the renewable
> > ticket lifetime, but still hours for the normal ticket lifetime.
> > If you already use seven days for the renew life you might as well
> > double it -- whatever your threat model is, if you can accept seven
> > days then chances are you can accept fourteen.
> Doubling it wouldn't really help.  It would probably need to be on
> the order of a month.  If I were to change the renewable lifetime, I
> would need to change all principals, the client krb5.conf and the
> server kdc.conf.  Is that correct?

Hmmm, not sure.  The client ought to ask for infinity, but I don't
think that's the default, sadly.  The kdc.conf parameters in question
are best not used -- you can use kadmin policies instead.  Also, IIRC,
the TGS principal's renewable life puts a bound on all the others, so
generally you might want to set principals' renewable ticket life to be
very long and use the TGS principal as a big hammer.

Nico
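[For concreteness, the knobs involved look roughly like this; the
values are illustrative, and per-principal maximum renewable life
(including that of the krbtgt principal, as Nico notes) bounds these
as well:]

```ini
# krb5.conf on clients: ask for a long renewable life by default.
[libdefaults]
    renew_lifetime = 30d

# kdc.conf on the KDC: cap what the realm will actually grant.
[realms]
    EXAMPLE.COM = {
        max_renewable_life = 30d
    }
```

The effective renewable life of a ticket is the minimum of what the
client requests, the realm-wide cap, and the per-principal limits on
both the client principal and the krbtgt principal.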
Re: FIPS certification
On Sat, Feb 28, 2009 at 01:07:50PM -0500, Ken Raeburn wrote:
> On Feb 28, 2009, at 12:43, Theodore Tso wrote:
> > It might be possible to dispatch on krb5_keyblock->magic to
> > determine whether the new fields are there, and in places where a
> > passed-in krb5_keyblock is allocated on the stack, the called
> > function could allocate a new-style krb5_keyblock and import the
> > key.  (How many such places are there?  I didn't think there would
> > be that many.)  It wouldn't be that pretty, yes, but if it's
> > considered important to preserve the ABI, it's probably doable...
> Yeah, that's been considered.  It's a little risky in that sometimes
> the magic field just isn't initialized (especially in an
> application-provided keyblock), and adding a dependence on it (at
> least on it

Actually, is it ever initialized when allocated on the stack?  I
suspect not.

It's been pointed out to me that it's not necessary to change
krb5_keyblock just to use OpenSSL, and I think one could argue the same
for PKCS#11.  However, leaving krb5_keyblock unchanged is sub-optimal
and, most importantly for performance, means you can't cache derived
keys in the keyblock itself (you could have a hash table).

> *not* having a certain 32-bit value that indicates the extended form)
> would be a minor ABI change.  I think the risk is probably low, and
> it'd probably be worth the extra ugliness to get the benefits.
>
> We'd also still need to handle the krb5_keyblock structure embedded
> in krb5_creds; in that instance it wouldn't be extensible.
>
> It'd be so nice to be able to do a new API for a v2.0 someday. :-)

Yes.
Re: FIPS certification
On Sat, Feb 28, 2009 at 01:07:50PM -0500, Ken Raeburn wrote:
> We'd also still need to handle the krb5_keyblock structure embedded
> in krb5_creds; in that instance it wouldn't be extensible.

I suspect we can handle that by having a new krb5_keyblock for all
non-krb5_creds uses of it, and a krb5_keyblock_old for krb5_creds.
It's only the auth_context and the GSS mech where we need to be able
to cache derived keys and whatnot (crypto library handles).

Nico
Re: FIPS certification
On Sat, Feb 28, 2009 at 01:56:22PM -0600, Nicolas Williams wrote:
> On Sat, Feb 28, 2009 at 01:07:50PM -0500, Ken Raeburn wrote:
> > We'd also still need to handle the krb5_keyblock structure embedded
> > in krb5_creds; in that instance it wouldn't be extensible.
> I suspect we can handle that by having a new krb5_keyblock for all
> non-krb5_creds uses of it, and a krb5_keyblock_old for krb5_creds.
> It's only the auth_context and the GSS mech where we need to be able
> to cache derived keys and whatnot (crypto library handles).

There is another way...  If we only care about performance in the GSS
mechanism, then there's no need to change krb5_keyblock.  That means
crypto in raw krb5 API apps will not be as good, mostly because of the
lack of derived key caching and because of the lack of caching of
crypto library handles (including key schedules).  But MIT krb5
already suffers from this anyways.

Nico
Re: FIPS certification
On Fri, Feb 27, 2009 at 09:29:15PM -0800, Randy Turner wrote:
> I haven't completely analyzed MIT Kerberos, but I was wondering if it
> would be possible to get the MIT Kerberos subsystem to use the
> OpenSSL crypto API for any cryptographic support needed for Kerberos?

MIT Kerberos has its own crypto code, yes.  Solaris Kerberos is based
on MIT Kerberos and replaced the crypto with calls to PKCS#11 (in
user-land).  I believe the Solaris Kerberos team wants to integrate
these changes (challenging though it is) into MIT krb5, but I don't
know when it will happen.  That would be your best bet.

The Solaris Kerberos stack is open source, like most things in
OpenSolaris (though some parts are under the CDDL, which MIT has in
the past considered incompatible with its aims, so Sun has donated
code to MIT in the past -- meaning, placed it under MIT's license).

If you're interested we can talk about the challenges in revamping MIT
krb5 to not use its own crypto code.

Nico
Re: FIPS certification
On Fri, Feb 27, 2009 at 09:45:21PM -0800, Russ Allbery wrote:
> Randy Turner rtur...@amalfisystems.com writes:
> > I haven't completely analyzed MIT Kerberos, but I was wondering if
> > it would be possible to get the MIT Kerberos subsystem to use the
> > OpenSSL crypto API for any cryptographic support needed for
> > Kerberos?
> I believe it would be extremely difficult (although maybe someone has
> made changes on this front and I've missed them).  If you want
> Kerberos libraries that use OpenSSL crypto, you'll probably find it
> easier to just use Heimdal, which already does so, than trying to
> change MIT Kerberos to do so.

Wyllys Ingersoll did it for Solaris a few years ago.  In Solaris the
Kerberos stack uses PKCS#11 (in user-land -- the kernel-land crypto
interfaces are different, but in the kernel Kerberos still uses the
Solaris crypto framework instead of the MIT krb5 crypto code).

The biggest challenge _by far_ is krb5_keyblock.  The size of that
structure is part of the ABI because it was always in a public header
and code used (and still might) allocate krb5_keyblock variables as
automatics.  IIRC its layout too is part of the ABI.

Solaris at the time did not expose a krb5 API, so it was trivial for us
(Wyllys) to change krb5_keyblock and to add initializers for it.  But
when it comes to contributing these changes to MIT we'll run into this
problem.  There are solutions that preserve compatibility with code
that allocates krb5_keyblock on the stack, but they aren't pretty.
Breaking the ABI could be considered -- it'd be a smallish break -- but
it won't be Sun deciding that; it will be the MIT Kerberos community.

Sun would love for MIT to adopt changes to make MIT krb5 use PKCS#11.
Even using OpenSSL might work for us because we have the OpenSSL
PKCS#11 ENGINE.

Nico
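[The "dispatch on magic" workaround discussed elsewhere in this thread
could look roughly like this.  Every name and the sentinel value here
are hypothetical illustrations, not actual MIT krb5 ABI:]

```c
#include <stdint.h>

/* Hypothetical sketch of an ABI-compatible keyblock extension: an
 * extended struct carries the legacy layout as its first member plus a
 * sentinel, so library code can tell whether the extra (cached crypto
 * state) fields are present.  The risk Ken describes remains: a
 * stack-allocated legacy keyblock may leave `magic` uninitialized,
 * and garbage could in principle collide with the sentinel. */
#define KB_MAGIC_EXTENDED 0x6b626c32   /* arbitrary sentinel ("kbl2") */

typedef struct old_keyblock {
    int32_t magic;           /* often uninitialized on the stack! */
    int32_t enctype;
    unsigned int length;
    unsigned char *contents;
} old_keyblock;

typedef struct ext_keyblock {
    old_keyblock kb;         /* must be first: layout-compatible prefix */
    void *derived_key_cache; /* e.g. cached derived keys or a PKCS#11
                              * object handle */
} ext_keyblock;

/* Treat the keyblock as extended only when the sentinel matches. */
static int
has_extended_fields(const old_keyblock *k)
{
    return k->magic == KB_MAGIC_EXTENDED;
}
```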
Re: Establishing client credentials (TGT etc.) with GSSAPI
On Mon, Feb 23, 2009 at 02:00:55PM -0800, Chris wrote:
> FWIW, I was slightly confused with the language in the GSSAPI RFC
> which seems to indicate that an implementation of a mechanism (e.g.
> Kerberos) is not necessarily compatible with that mechanism used on
> its own. [...]

I suspect that may have been a reference to how the Kerberos V GSS-API
mechanism is not wire-compatible with raw Kerberos V.  Do you remember
what specific text you're referring to, and can you point me at it?
Re: Establishing client credentials (TGT etc.) with GSSAPI
On Fri, Feb 20, 2009 at 01:24:06PM -0800, Chris wrote:
> I'm working on implementing Kerberos authentication from a C++ client
> to a Java service.  The Java service wants a GSSAPI context.  Is it
> correct that, if you can't rely on default GSSAPI credentials (i.e.
> login identity and pre-cached TGT), then a client should use
> gss_acquire_credentials() to establish this?  I have tried this but
> haven't had success and just want to make sure I'm on the right path.

The GSS-API does not give you a way to acquire initial credentials
(i.e., anything involving interaction with the user to obtain things
like principal name, password, smartcard/token PIN, ...).  That's out
of scope for the GSS-API.  IIRC JAAS does give you a way to do that,
but I don't remember exactly.

What the GSS_Acquire_cred() and GSS_Add_cred() functions allow you to
do is to choose a credential to use when many are available.

Nico
Re: Solved: Kerberised NFS
On Fri, Feb 13, 2009 at 08:56:43AM +, Peter Eriksson wrote:
> Edward Irvine eirv...@tpg.com.au writes:
> > I also did a little experiment.  After logging in to the target
> > machine (with the GSSAPIDelegateCredentials working and all), I ran
> > the kdestroy command.  As expected, my home directory became
> > immediately unreadable until I got a new TGT with the kinit
> > command.  Cool...

Sorry I'm late to this thread (and thanks Doug!).

> Next you'll discover the fun side effects of having a Secure NFS'd
> home directory (I've been running with that for about a year now).

I've been running with one for a long time also.  Most things work
just as expected, but then there are the warts...

> Firefox: When Firefox loses access to $HOME (for example if you are
> away from your computer long enough for the ticket to expire) then
> the Google search box will magically stop working.  Solution: Restart
> Firefox.

I've never noticed this.  Partly that's because I have renewable TGTs
with a fairly long renewable lifetime, so that ktkt_warnd does the
right thing, and either I'm never away for too long or I logout if I
will be.

> Thunderbird: When Thunderbird loses access to $HOME due to expiring
> tickets then it will prevent you from deleting new mail in your IMAP
> inboxes.  New mail will show up fine though...  Solution: Restart
> Thunderbird.

I use mutt :)

> xscreensaver: When $HOME goes away then xscreensaver will fail to
> launch the password dialog application when you wish to log in again
> (since it can't read the .Xauthority file in your $HOME, so it will
> not be allowed access to your X server).  Blank window forever...
> Solution: ssh in from another machine and 'kill' xscreensaver.

Never had this problem on Solaris.

> crontab jobs, Grid Engine jobs: You'd better make sure you have
> tickets on the machines where they are going to start your jobs and
> that the tickets won't expire while the jobs are running.
> Solution: ?

Yup, this is a problem.  Arguably you shouldn't have cron jobs if they
will need to use authentication mechanisms that either require
interaction every time or which use credentials that can expire such
that interaction is required to obtain fresh ones.  Or you need to be
very aware of the issue.  Or the system needs to give you a way to
cache your password/keys for cron jobs.  None of those options is very
satisfying.

> ssh with S/Key (one-time password): Sure, you are let in after a
> successful authentication.  But you will still need to enter your
> password to get the ticket -- allowing someone to sniff it...

I'm not sure I get this one.  But ssh with pubkey userauth does fail
if your home directory can't be accessed on the remote system
(Solaris' sshd does a seteuid(your-UID) before accessing your
authorized_keys file, IIRC).

Nico
Re: krb5_sendauth vs NAGLE vs DelayedAck
On Wed, Jan 14, 2009 at 04:52:34PM -0500, Ken Raeburn wrote: On Jan 14, 2009, at 15:22, John Hascall wrote: My solution was just to do: int on = 1; setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)); before calling krb5_sendauth(), but a better approach might be for krb5_write_message to end up calling writev so it does one write instead of two, I think. Yes, I think that's probably best -- maybe via a helper function to run a loop and manage the bookkeeping in case of short writes. Or setsockopt() TCP_CORK around krb5_sendauth(). Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: non-KDC replay cache problems?
On Mon, Dec 22, 2008 at 01:11:50PM -0500, Tom Yu wrote: Has anyone experienced problems due to false positive conditions on an application replay cache? [...] Yes, this happens with Windows clients, where the Kerberos stack may re-use a seconds and microseconds value, if multiple AP-REQs are initiated in the same second, but with a different sub-session key. If it turns out that almost all of the problems are due to the KDC replay cache, we can consider turning off the KDC replay cache, as we believe that doing so poses negligible security consequences, and is substantially easier. The KDC replay cache is not an issue, although the replay cache for TGS-REQs needs to behave similarly to the AP-REQ replay cache. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Sequence numbering after export and import of context
On Sun, Oct 05, 2008 at 11:13:00PM +0100, Markus Moeller wrote: Thank you for the replies. I get a GSS error: The token was a duplicate of an earlier token, and debugging on the client shows that it received seq 0 but expected 1. So I need to dig a bit further into what my server processes do. Is the following OK:
client - server main process establishes context - export_context
client - child 1 import_context - unwrap + wrap (seq 0) - export_context
client - child 2 import_context - unwrap + wrap (seq 1) - cleanup
Do I understand correctly that you're importing a given exported security context token twice? If so, then no, that's not supported. RFC2743 is quite clear on this. And it makes sense too: there may be no way for child 1 and child 2 to keep their sequence number windows in sync and perform as well as if they did not even try to keep them in sync. Also, the spec allows the second GSS_Import_sec_context() function call to fail, and it is possible to imagine implementations where such a failure would occur. Heck, even if an implementation supported multiple imports of one exported security context token you'd still have problems because whatever the per-message token sequence number window size is, if one process consumes/produces per-message tokens at a sufficiently different rate than the other then you'll still get sequencing errors. You could cheat and not request sequencing, but there's no guarantee that that will work either -- as long as you're importing the same exported security context token more than once then you're in trouble, and if it works it will be an accident of the mechanism's implementation and so your application will not be portable. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Sequence numbering after export and import of context
On Mon, Oct 06, 2008 at 12:01:16AM -0400, Michael B Allen wrote: Personally I think the whole export / import of security contexts is a little awkward. Instead of moving the context we just put all IO buffers in shared memory and have one process running the muxer loop (although the reason for doing this has nothing to do with GSSAPI). In Solaris, secure NFS can deal with mechanisms that don't support security context import/export, but for such mechanisms the price paid is an upcall to user-land for every GSS per-message token. The security context import/export feature definitely has its place. In the case of the original poster, however, I agree that there is a better solution. But that mostly follows from the OP's application design being incompatible with security context import/export, and the only solution is to change the application design. At least IIUC. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris Pam_krb5.so.1 problem after installing MIT 1.6.3
On Wed, Sep 10, 2008 at 02:14:19PM -0500, Douglas E. Engert wrote: Chavez, James R. wrote: Doug, Thanks for the reply. I am actually using kerberos for authenticating logins through ssh. Because I had no DNS entry for this Solaris box I was getting the following debug output from pam_krb5. Aug 26 10:24:21 solaris1.example.com sshd[1147]: [ID 537602 auth.error] PAM-KRB5 (auth): krb5_verify_init_creds failed: Hostname cannot be canonicalized. This sounds like the sshd can not determine its FQDN. A host should be able to determine its name without DNS. This is coming from krb5_sname_to_principal(), which is called from krb5_verify_init_creds(), which is called from pam_krb5:pam_sm_authenticate(). Solaris Kerberos specifically requires DNS to be configured. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: ktutil get
On Wed, Aug 06, 2008 at 03:38:27AM +, Victor Sudakov wrote: Victor Sudakov wrote: It is a pity I cannot check it out because Solaris' kadmin seems to be incompatible with FreeBSD's kadmind: $ kadmin kadmin: unable to get host based service name for realm SIBPTUS.TOMSK.RU I see, Solaris kadmin looks for _kerberos-adm._udp.SIBPTUS.TOMSK.RU What gives? FreeBSD's kadmind (Heimdal) does not listen on udp, it uses 749/tcp. Is there a way to make them work together, or is it hopeless? The kadmin protocol is not standard. Heimdal's kadmin protocol and MIT's (from which Solaris' derives) are incompatible. That said, later today I'll send out program source that might help you. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: ktutil get
On Wed, Aug 06, 2008 at 10:18:01AM -0500, Nicolas Williams wrote: On Wed, Aug 06, 2008 at 03:38:27AM +, Victor Sudakov wrote: Victor Sudakov wrote: It is a pity I cannot check it out because Solaris' kadmin seems to be incompatible with FreeBSD's kadmind: $ kadmin kadmin: unable to get host based service name for realm SIBPTUS.TOMSK.RU I see, Solaris kadmin looks for _kerberos-adm._udp.SIBPTUS.TOMSK.RU What gives? FreeBSD's kadmind (Heimdal) does not listen on udp, it uses 749/tcp. Is there a way to make them work together, or is it hopeless? The kadmin protocol is not standard. Heimdal's kadmin protocol and MIT's (from which Solaris' derives) are incompatible. That said, later today I'll send out program source that might help you. A while back I wrote a utility for building keytab files when using Active Directory as the KDC; it uses the RFC3244 protocol to set the password of the given principal, so it should work with Heimdal. You can find it here: http://www.sun.com/bigadmin/features/articles/kerberos_s10.jsp Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: ktutil get
On Tue, Aug 05, 2008 at 04:44:54AM +, Victor Sudakov wrote: Victor Sudakov wrote: There is a very useful command ktutil get in Heimdal. It allows you to conveniently join a host to a Kerberos realm, without bothering about transferring the keytab. What is the analogous command in the Solaris Kerberos implementation? No Solaris Kerberos experts here? Well, what is the analogous command in MIT Kerberos? Am I asking something stupid? How do you securely transfer a keytab for the host principal to the host? ktutil get does just that. kadmin(1M) is the tool to use to set principal keys and maintain keytab files. The kadmin protocol uses RPCSEC_GSS and Kerberos for transport protection. If you want to move keytab files around securely then use ssh/sftp or any other secure file transfer or remote filesystem protocol. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Creating an MIT style keytab for an existing Windows AD member computer
On Wed, Jul 23, 2008 at 02:01:43PM -0400, Michael B Allen wrote: Extracting the keys from AD is not possible [1]. Nor is it possible to extract them from MIT krb5 KDCs. However, the ktpass utility from MS can set the password, generate the corresponding key separately and put it into a keytab file. You can build keytabs directly on MIT krb5 systems using the MIT krb5 API, or even interactively with kpasswd and ktutil (an early version of adjoin [see below] did just that). Or you could probably just use or adapt Sun's adjoin/ksetpw tools to your purposes: http://www.sun.com/bigadmin/features/articles/kerberos_s10.jsp http://www.sun.com/bigadmin/features/articles/kerberos_s10.pdf http://opensolaris.org/os/project/winchester/files/adjoin-s10u4.tar.gz http://opensolaris.org/os/project/winchester/files/adjoin-s10u5.tar.gz Note that you must have at least account operator privilege to set a password in AD. Indeed. Mike [1] There is a freeware utility called ktexport that can extract the keys from a DC and dump them into a keytab but it is only (sometimes) useful for debugging purposes with Wireshark. The resulting keytab is not valid for use with any kind of service. Sure, if you have direct, privileged access to a KDC you could always extract its keys. Portions of the KDC could run directly in a hardware keystore, making it really hard to get to the keys, but that's not the case here. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Creating an MIT style keytab for an existing Windows AD member computer
On Wed, Jul 23, 2008 at 05:55:20PM -0700, Russ Allbery wrote: Nicolas Williams [EMAIL PROTECTED] writes: On Wed, Jul 23, 2008 at 02:01:43PM -0400, Michael B Allen wrote: Extracting the keys from AD is not possible [1]. Nor ist it possible to extract them from MIT krb5 KDCs. It is as of 1.6 using kadmin.local (not that this changes the rest of your point). Right, it doesn't -- running kadmin.local on the KDC with sufficient privilege qualifies as privileged access to a KDC :) Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Proposal to change the meaning of -allow_tix +allow_svr aka KRB5_KDB_DISALLOW_ALL_TIX !KRB5_KDB_DISALLOW_SVR
On Wed, Jun 18, 2008 at 04:54:04PM -0400, Ken Raeburn wrote: On Jun 18, 2008, at 16:33, Jeffrey Altman wrote: I believe that the meaning of allow_tix should be altered such that it only applies to the client in a TGS or AS request. This would permit -allow_tix to be applied to a service principal and ensure that no client ticket requests can be satisfied for that service principal while at the same time permitting other principals to obtain service tickets. Organizations that wish to disable the issuance of service tickets for the service principal would apply -allow_svr to the principal in addition to -allow_tix. I think it should be pointed out that such a change would allow tickets to start being issued where currently they would not when the KDC software gets updated -- even if the latter really was the intent of the realm administrator. Because of that, we might instead want to create a new flag with the semantics Jeff wants, and leave the existing flag with its current (suboptimal) behavior. Or provide a migration script. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: sendmail as MSA and client side GSSAPI
On Sun, Apr 06, 2008 at 02:52:43PM +, Victor Sudakov wrote: Nicolas Williams wrote: Now how do I enable GSSAPI authentication for local users? What should I put into the /etc/mail/authinfo file so that each local user who has a Kerberos ticket could authenticate herself to the mailhub? The users send mail from mutt, pine etc by calling /usr/sbin/sendmail. Am I asking something extraordinary? fetchmail works fine as GSSAPI client, so there is no more need to store a password in the config for receiving mail. I wish we could do the same for sending. Actually, I want to know about this too. I'll ask Sun's sendmail contact. Nicolas, any results? I followed up on March 19th on the list. I seem to recall my e-mails to you bouncing, so see the list archives. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: sendmail as MSA and client side GSSAPI
On Mon, Apr 07, 2008 at 01:48:31PM -0500, Nicolas Williams wrote: I followed up on March 19th on the list. I seem to recall my e-mails to you bouncing, so see the list archives. Right, because your sender address is obfuscated. Guess what: when I post my reply including the non-obfuscated form of your address then all will be able to see it. Please don't obfuscate your sender address. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: sendmail as MSA and client side GSSAPI
On Tue, Apr 08, 2008 at 01:49:02AM +, Victor Sudakov wrote: Nicolas Williams wrote: I followed up on March 19th on the list. I seem to recall my e-mails to you bouncing, so see the list archives. Sorry, what list? I posted the question to the Usenet newsgroup comp.protocols.kerberos, so I expected a reply there. Bah, I forgot about comp.protocols.kerberos (it's bidirectionally gatewayed to kerberos@mit.edu). Is the gateway having trouble again? Anyways, the list archives are here: http://mailman.mit.edu/mailman/listinfo/kerberos Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: sendmail as MSA and client side GSSAPI
On Wed, Mar 19, 2008 at 02:52:41AM +, Victor Sudakov wrote: In comp.mail.sendmail Victor Sudakov [EMAIL PROTECTED] wrote: Now how do I enable GSSAPI authentication for local users? What should I put into the /etc/mail/authinfo file so that each local user who has a Kerberos ticket could authenticate herself to the mailhub? The users send mail from mutt, pine etc by calling /usr/sbin/sendmail. Am I asking something extraordinary? fetchmail works fine as GSSAPI client, so there is no more need to store a password in the config for receiving mail. I wish we could do the same for sending. See: http://www.sendmail.org/~ca/email/auth.html under Using sendmail as a client with AUTH. It doesn't really address how to use this with Kerberos. It's not clear if you just have to give sendmail your Kerberos password (I doubt that will work, much less be acceptable), or if sendmail is able to somehow find your ccache and tickets. My guess: it just doesn't work, at least when sendmail is running in queue mode. To make it work will require enough changes that one could be forgiven for wondering why mutt et al. shouldn't just learn how to talk SMTP/SUBMIT to the real MSA anyways -- the way Thunderbird, Evolution and all other MUAs do it. Or, alternatively, why a standalone, non-queueing (or per-user queue daemon) mail submission program isn't the right answer. Or you might argue that sendmail just needs an option to work as described above (no queueing, no privs, or per-user queueing). BTW, on Solaris it wouldn't work anyways pending this: 6481399 sendmail needs to ship /etc/sasl/Sendmail.conf Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: sendmail as MSA and client side GSSAPI
On Wed, Mar 19, 2008 at 12:29:55PM -0500, Nicolas Williams wrote: To make it work will require enough changes that one could be forgiven may Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: sendmail as MSA and client side GSSAPI
On Wed, Mar 19, 2008 at 03:17:29PM -0400, Sam Hartman wrote: MIT does have a configuration where this works with sendmail for foreground delivery to a mailhub. I don't have details though. Good to know. Could you cajole someone into posting the details? Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: sendmail as MSA and client side GSSAPI
On Wed, Mar 19, 2008 at 02:52:41AM +, Victor Sudakov wrote: In comp.mail.sendmail Victor Sudakov [EMAIL PROTECTED] wrote: Now how do I enable GSSAPI authentication for local users? What should I put into the /etc/mail/authinfo file so that each local user who has a Kerberos ticket could authenticate herself to the mailhub? The users send mail from mutt, pine etc by calling /usr/sbin/sendmail. Am I asking something extraordinary? fetchmail works fine as GSSAPI client, so there is no more need to store a password in the config for receiving mail. I wish we could do the same for sending. Actually, I want to know about this too. I'll ask Sun's sendmail contact. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Account lockout support in Solaris 10 when authenticating against Kerberos
On Mon, Dec 10, 2007 at 08:32:57PM -0500, Yu, Ming wrote: But I am still not clear how to lock out an account after n failed logins. Are you saying there is no way to do it in the current version of MIT kerberos? I'm saying that the MIT and Solaris KDCs do not support that feature. BUT, you can write a script to scrape (i.e., tail) the KDC log files, keep a per-principal count of failed logins, and disable principals with too many consecutive failed logins. Doug's comment about /etc/passwd was about how you might lock out an account that you know you want to lock out, but Doug should really have told you to either disable the principal[*] or to use the passwd(1) command with the -l option. [*] Disabling the principal will cause the account to be locked IF AND ONLY IF Kerberos V is the only way to authenticate the account (e.g., because the passwd field of the account is NP, as Doug suggests). Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Account lockout support in Solaris 10 when authenticating againstKerberos
On Tue, Dec 11, 2007 at 08:35:07AM -0600, Douglas E. Engert wrote: But using PAM to lock out a user is per machine. If you are trying to avoid password guesses, the user could try another machine, and get another N guesses. Better than nothing, but maybe not what you really want. As Russ points out below, maybe some intrusion detection system might also be in order, with PAM notifying the IDS. Then compromised clients can DoS your whole domain. But then, if you're implementing an N-strikes-you're-locked policy then they could anyways (which is why account lockout after N failed logins is a bad idea, particularly if you don't unlock the account automatically after a short period of time). Slowing down folks who are trying to guess passwords is a good thing. Letting them lock out all your user accounts is not. The folks in charge of writing corporate security policies need to take this into account. N-strikes-you're-locked is bad. N-strikes-we-slow-you-down is good. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Account lockout support in Solaris 10 when authenticating against Kerberos
On Mon, Dec 10, 2007 at 05:11:21PM -0600, Douglas E. Engert wrote: Yu, Ming wrote: Does anybody know how to implement account lockout features on Solaris 10 when the user authenticates against Kerberos? See man shadow. /etc/passwd, NIS or LDAP can have *LK* to indicate it is locked. I think it is the pam_unix_account that checks for this. For a Kerberos account without a local password use something like NP for the password. Right, but what the poster was asking, effectively, was how to make the KDC lock out the user after N failed [pre-]authentication attempts. The answer is that an MIT KDC with plain old db2 backend can't do it. An MIT KDC with an LDAP backend could do it, but it doesn't yet. The user should scrape logs on the KDC and lock accounts (principals) accordingly. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Adding supported enctypes to kdc
On Fri, Nov 16, 2007 at 03:50:16PM -0800, Russ Allbery wrote: John Washington [EMAIL PROTECTED] writes: I would definitely add aes128-cts-hmac-sha1-96 and aes256-cts-hmac-sha1-96, as Microsoft is adding these to AD (and I prefer good encryption, not really broken encryption) Is there any reason to add the 128-bit keys? So far, it seems like everyone who can do 128-bit can also do 256-bit, but maybe that isn't true of the upcoming Windows release? (They're both equally export-controlled, so far as I know.) It isn't true for Solaris 10 without the supplemental cryptography packages -- I don't recall if this changed in S10U4 or will change in U5, but we're definitely moving towards delivering 256-bit key length support by default. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris 10 sshd + GSSAPI = where's my cred cache?
On Mon, Nov 05, 2007 at 12:06:14PM -0500, Jeff Blaine wrote: Solved. Had to force client-side -o GSSAPIStoreDelegatedCredentials yes even though it was not defined anywhere as no (although probably a default for some reason). The manpage (ssh_config(4)) says: GSSAPIDelegateCredentials Enables/disables GSS-API credential forwarding. The default is no. ^ Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Now with PAM? Solaris 10 sshd and ticket forwarding
On Mon, Nov 05, 2007 at 02:43:56PM -0500, Jeff Blaine wrote: Those 3 lines make it work. Thanks again, Doug. I can't really imagine where I'd be with this unless someone had figured out all of this esoterica before me. Sheesh. The default other stack should have worked just fine. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Now with PAM? Solaris 10 sshd and ticket forwarding
On Mon, Nov 05, 2007 at 04:14:21PM -0500, Jeff Blaine wrote: Very likely. One heads down roads like these and the default 'other' stack are the last things to consider (for me at least). If we shipped a default PAM configuration for every application then modifying the other one wouldn't be a way to shoot yourself in the foot. But then you'd have to do that much more editing of pam.conf... Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: gss_accept_sec_context
On Fri, Nov 02, 2007 at 01:54:07PM -0400, Kevin Coffman wrote: default_tkt_enctypes = des-cbc-crc default_tgs_enctypes = des-cbc-crc ktadd does not look at those enctype definitions on the local machine where you run ktadd. What is used is the supported_enctypes defined for the realm in the kdc configuration. If your service doesn't support all the enctypes listed there, then you must limit the list with the -e option when doing the ktadd. Er, it's a bit more complicated than that. kadmin ktadd without a -e argument lets kadmind pick an enctype list, namely, the supported_enctypes list (note: that's the KDC-side setting of supported_enctypes). kadmin ktadd with a -e argument specifies which enctypes to use. On Solaris 10 and up it's a bit more complicated still: without a -e argument kadmin ktadd behaves as if you had used -e with the list of permitted_enctypes (note: that's the client-side setting of permitted_enctypes). And the Solaris 10 and up kadmind uses 1DES enctypes only for clients that use the randkey-without-enctypes RPC. Bottom line:
- when doing ktadd you really want to specify what enctypes to use or else default to the *local* permitted_enctypes value, and of the enctypes you do specify, if you do, at least one should be listed in the local permitted_enctypes;
- if you're using straight MIT krb5's kadmin client then you should just always use the -e argument to ktadd.
I think MIT should change kadmin's ktadd command to work more or less as the Solaris one does. The above applies only to ktadd, not chpass. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
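For example, an explicit enctype selection with MIT's kadmin might look like this (the admin principal, keytab path, service principal, and enctype:salt list are all illustrative, not from the original thread):

```sh
kadmin -p admin/admin -q "ktadd -k /etc/krb5/host.keytab \
    -e aes256-cts-hmac-sha1-96:normal,aes128-cts-hmac-sha1-96:normal \
    host/server.example.com"
```

Each -e entry is an enctype:salttype pair; listing the enctypes explicitly keeps kadmind from writing keytab entries for enctypes the host cannot use.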
Re: SSH1 - gss-api - kerberos - java
On Fri, Nov 02, 2007 at 04:42:56PM -0400, Ranga Samudrala wrote: I am trying to develop a Java SSH client targeting a version of Kerberised SSH1 server talking GSS-API. Does anybody know of anybody else dealing with this scenario? Is there a place I can find SSH1 Java API that support communication using GSS-API? The Kerberized SSHv1 that floated about some time back is really not something that you want to use. Besides being non-standard, there were issues with it (I don't recall the details). Also, it does not use the GSS-API, so you'd need a Java implementation of raw Kerberos. You could probably use the underlying raw Kerberos V implementation in JGSS, but you may have to hack on the [fortunately now open source] JDK. I urge you to upgrade to SSHv2. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: SSH1 - gss-api - kerberos - java
On Fri, Nov 02, 2007 at 05:20:37PM -0400, Ranga Samudrala wrote: I have no control over the version of SSH we have to use. I am trying to support a client whose Kerberized SSH servers are v1.5-1.2.26 (which is very bad) and have been hacked to communicate using GSS- API. So, I am looking to see how I can come up with an SSH1 client that talks GSS-API. If it really is using the GSS-API then you have everything you need in the JDK and the rest is a matter of finding a spec or reverse engineering the protocol, plus a small matter of programming. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris 10 sshd + GSSAPI = where's my cred cache?
On Thu, Nov 01, 2007 at 02:34:12PM -0400, Jeff Blaine wrote: I apologize for the general nature of this post. Maybe it's better posted to the secureshell list which is loaded with spam and is often choked up sitting on some server somewhere, but... I can ssh with GSSAPI auth to a Solaris 10 box fine. When I'm in though, klist says I have no credential cache and there's nothing useful in /tmp. Has anyone come across this and found an answer? Did you delegate a credential? Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris 10 sshd + GSSAPI = where's my cred cache?
On Thu, Nov 01, 2007 at 04:05:55PM -0400, Roberto C. Sánchez wrote: On Thu, Nov 01, 2007 at 02:34:12PM -0400, Jeff Blaine wrote: Has anyone come across this and found an answer? $ grep GSSAPI ~/.ssh/config GSSAPIAuthentication yes GSSAPIDelegateCredentials yes You also need to kinit -f or set forwardable = true in the [libdefaults] section of krb5.conf(4). Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
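The [libdefaults] setting mentioned above looks like this in krb5.conf; kinit -f is the equivalent per-invocation flag:

```
[libdefaults]
        forwardable = true
```

With this set, every TGT obtained by kinit (or by PAM at login) is forwardable by default, which is what GSSAPIDelegateCredentials needs in order to delegate a credential to the server.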
Re: Solaris 10 sshd + GSSAPI = where's my cred cache?
On Thu, Nov 01, 2007 at 04:31:39PM -0400, Jeff Blaine wrote: Douglas E. Engert wrote: Jeff Blaine wrote: I apologize for the general nature of this post. Maybe it's better posted to the secureshell list which is loaded with spam and is often choked up sitting on some server somewhere, but... I can ssh with GSSAPI auth to a Solaris 10 box fine. When I'm in though, klist says I have no credential cache and there's nothing useful in /tmp. What does your /etc/pam.conf look like? Doug, that should have little or nothing to do with this in S10. I was using the sshd non-PAM GSSAPIAuthentication (enabled by default). OK, really specific instructions: 1) On the server make sure that you are not setting the following sshd_config(4) parameters or that you set them as follows: # One or both of GSSAPIAuthentication and GSSAPIKeyExchange must be on GSSAPIAuthentication yes GSSAPIKeyExchange yes GSSAPIStoreDelegatedCredentials yes Restart the ssh service if you had to change this. 2) On the client side make sure that you have credentials to delegate (klist -f should show a forwardable TGT in your ccache). 3) On the client make sure that you're not disabling the relevant ssh_config(4) parameters in /etc/ssh/ssh_config or in ~/.ssh/config, particularly GSSAPIDelegateCredentials. To debug this try running ssh -vvv. If that does not produce enough information then try running sshd in debug mode as well: # /usr/lib/ssh/sshd -dddp ... % ssh -p ... ... Capture the output and send it to me. We force ssh via PAM to be a session based cred, and get AFS token too: # Used by GSS, but ssh has bug about saving creds, so we use session based creds. That kind of explains things then. I guess it's a bug, eh? It's not. Doug is doing something that is very specific to his site. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Bug in krb5_keyblock_data function on Solaris 10/Opensolaris
Markus, Ken, Is this bug present in MIT krb5? Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Bug in krb5_keyblock_data function on Solaris 10/Opensolaris
On Mon, Oct 15, 2007 at 11:44:30PM +0100, Markus Moeller wrote: You are right and some calling functions like krb5_copy_keyblock do allocate, but not krb5_get_credentials(_core) if I now read the code right. Whether it's a bug at all depends on what the krb5_get_credentials() API docs say about increds->keyblock. The lack of MIT krb5 API docs doesn't help. Now that you know the calling convention for krb5_copy_keyblock_data(), you should be able to fix your test program to properly initialize the keyblock field of the creds passed to krb5_get_credentials() as input creds. [I'll try to refrain from getting into the problems with encoding krb5_keyblock internals knowledge into your apps.] Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Bug in krb5_keyblock_data function on Solaris 10/Opensolaris
On Tue, Oct 16, 2007 at 12:33:43AM +0100, Markus Moeller wrote: Maybe I miss something but I am not in control of the initialisation of the keyblock. The problem is mcreds->keyblock->contents in krb5_copy_keyblock_data, which is not allocated in any function before and not provided by the user. Yeah, I forgot. Solaris has a krb5_init_allocated_keyblock() function for this purpose. I suppose you could call krb5_init_keyblock() and do a struct copy, but that'd be asking for trouble (depending on what MIT wants to do in the future about caching derived keys, which Solaris does because we were able to modify krb5_keyblock before its layout and size became part of the ABI when we exposed the krb5 API). I'm either missing something else or you're right that there's a bug in krb5_get_credentials_core(). Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: GSSAPI Key Exchange Patch for OpenSSH 4.7p1
On Fri, Sep 28, 2007 at 04:26:14PM -0500, Douglas E. Engert wrote: Sounds interesting. And yes, I would be interested in the cascading credentials delegation code. Does the delegation code depend on the key exchange code? Protocol-wise, yes, it does. There's two ways to use the GSS-API in SSHv2: - userauth only, but this happens once at the start of the session, so you can't delegate credentials after that - key exchange (and optionally userauth), which can be done again and again over the lifetime of the session Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: MIT Incremental Propagation
On Fri, Sep 21, 2007 at 03:08:26PM -0500, John Hascall wrote: Does MIT's current implementation of the Kerberos KDC include incremental propagation? I know it didn't a long time ago, then there were CITI patches for it, then those didn't work for awhile. I don't seem to be able to pinpoint an answer to it. There is no incremental propagation distributed with MIT Kerberos. I haven't studied it all that extensively, so correct me if I am wrong, but with the new DAL stuff there is now an opportunity to do a 'proper' job of multi-master KDCs (dare I say it) in a ubik-like or AD-like manner. There are plenty of LDAP servers suitable for backending the KDC that support incremental and/or multi-master replication. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: MIT Incremental Propagation
On Fri, Sep 21, 2007 at 03:54:22PM -0400, Jeffrey Altman wrote: John Harris wrote: Greetings, Does MIT's current implementation of the Kerberos KDC include incremental propagation? I know it didn't a long time ago, then there were CITI patches for it, then those didn't work for awhile. I don't seem to be able to pinpoint an answer to it. Thanks, John There is no incremental propagation distributed with MIT Kerberos. Not to spam, but the OpenSolaris KDC does have incremental propagation. The source for that is CDDL'ed. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: MIT Incremental Propagation
On Fri, Sep 21, 2007 at 03:29:16PM -0500, John Hascall wrote: There are plenty of LDAP servers suitable for backending the KDC that support incremental and/or multi-master replication. That, I suppose, depends on your definition of suitable. It certainly isn't suitable to me. The size of the KDC codebase is big enough to worry about, throwing something like an entire LDAP server into the mix is a whole 'nother kettle of fish. Maybe. If you run the LDAP servers for the KDC backend such that only the KDCs can be clients of it, with proper packet filtering, then there won't be much room for new attack vectors. Whereas if you use an LDAP server infrastructure that's also used for other things, like name services, then you'd be exposing the KDCs to attack via hostile (p0wned) directory services. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: MIT Incremental Propagation
On Fri, Sep 21, 2007 at 04:46:40PM -0500, John Hascall wrote: I'm not sure that model works well with the KDC's single-threadedness. Which, really, should be multi-threaded... Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: recent kadmin vulnernability and changing passwords
On Thu, Sep 06, 2007 at 08:55:47AM -0400, Edgecombe, Jason wrote: Hi All, Does kpasswd use the kadmin protocol? I'm just looking at options for mitigating the vulnerability. The Solaris kpasswd will use either the kadmin protocol or the kpasswd protocol. I don't recall if the same is true for the MIT kpasswd. But both protocols are served by the same kadmind binary. To mitigate the issue you can set up a packet filter that blocks connections to the kadmin port. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
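The packet-filter mitigation could, for example, be expressed as a Solaris IP Filter rule set. This is an illustrative sketch only (the interface name bge0 is an assumption), not a tested configuration:

```
# /etc/ipf/ipf.conf fragment (illustrative; interface name assumed):
# drop the kadmin port (tcp/749) while leaving the kpasswd
# change-password port (464) reachable.
block in quick on bge0 proto tcp from any to any port = 749
pass  in quick on bge0 proto tcp from any to any port = 464
pass  in quick on bge0 proto udp from any to any port = 464
```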
Re: confusion in ank.
On Mon, Apr 23, 2007 at 11:27:22AM -0400, Kevin Coffman wrote: I haven't looked at the code, but I think this is probably done on purpose and is not a bug. When you create a keytab, you create a new random key for the account. There is no password associated with that key, and there is no longer a reason for a password expiration. Password quality policies certainly shouldn't apply to randomly- generated keys, but that does not mean that there cannot be a key expiration policy. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: GSS-API routine for renewing credentials
On Wed, Apr 18, 2007 at 08:25:39PM +0200, Robert wrote: Does anyone know whether there is a routine in GSS-API to renew (forwarded) client credentials? I'm unable to locate such a routine in GSS-API, but maybe I'm overlooking it. There's no such thing. In SSHv2 we deal with this by re-keying the SSHv2 session and, in the process, establishing a new GSS-API security context, which is an opportunity to delegate a new credential. I.e., you have to establish a new security context. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: GSS-API routine for renewing credentials
On Wed, Apr 18, 2007 at 11:41:03PM +0200, Robert wrote: On Wed, Apr 18, 2007 at 08:25:39PM +0200, Robert wrote: Does anyone know whether there is a routine in GSS-API to renew (forwarded) client credentials? I'm unable to locate such a routine in GSS-API, but maybe I'm overlooking it. There's no such thing. In SSHv2 we deal with this by re-keying the SSHv2 session and, in the process, establishing a new GSS-API security context, which is an opportunity to delegate a new credential. I.e., you have to establish a new security context. Thanks Nico. I'm just thinking how that would work (if that would work for my situation). I'm looking at this from a client - gateway - backend server perspective. The client should actually not be bothered by the need to initiate a new security context with the gateway. That's what you indicate, right? (The gateway may need the delegated credentials to initiate a new security context to a second backend server (silent failover)). Do you have control over the protocol that your application is using, or is it a standard protocol (or de facto standard from your point of view)? If the former, then just add an option to re-authenticate (establish a new security context). If the latter and the protocol is SSHv2, just do what I described earlier. If the latter and the protocol is something like IKE/KINK, then just establish a new SA or equivalent. If the latter and the protocol is something like ONC RPC w/ RPCSEC_GSS then just establish a new context (but you need to make sure that you map the new context to the correct session at the application protocol, if there is such a concept). If the latter and the protocol is something like FTP, or if it uses SASL (like IMAP), then you lose: you have to tear down the connection and start over if you really want to delegate a new credential. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: GSS-API routine for renewing credentials
On Thu, Apr 19, 2007 at 12:10:12AM +0200, Robert wrote: I do have control over the protocol (That is, in one instance. Another instance will probably make use of SASL). Thanks for your elaborate answer. It's much appreciated. I'll go and play around with it a bit. Even if you're using SASL, if you have control over the application protocol you may still be able to signal the peer to tear down the SASL security layers and start over with new SASL authentication and security layers. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: One Time Identification, a request for comments/testing.
On Fri, Feb 02, 2007 at 10:16:28AM -0800, John Rudd wrote: It seems to me that if you're talking about a simple dumb USB thumb drive/data stick, that you're not going to be able to do anything to prevent an adversary from copying that data to a local host, and then brute-forcing the data over time. So, essentially, the only advantage over just putting a non-protected keytab on a USB drive and any other dumb-data-stick process is some amount of time it takes to overcome whatever encryption you've done on the keytab. The advantage of softtokens over hardtokens is that they are software-based, and when you don't have a smartcard around they can be useful in debugging, testing, or even as a cheap alternative to smartcards. And yes, softtokens are susceptible to offline dictionary and brute-force attacks by any attacker that can get their hands on them. But have you ever used passphrase-protected ssh private key files? I bet you have. It's darn useful because there's no need to buy a new piece of hardware -- you just have to be more careful than you might have to be with a smartcard. There's not much new here. This thread is starting to repeat itself. The only new questions here are: - should MIT krb5 have softtoken support? (And note that if it has PKCS#11 support for PKINIT and/or PA-ENC-TIMESTAMP long-term symmetric keys then it will have softtoken support wherever PKCS#11 softtoken providers are available.) - should there be a standard for softtoken formats? Since there are at least two PKCS#11 softtoken providers this is an interesting question. Where should such a thing be standardized, if at all? Perhaps informally would be best. I think a more interesting approach would be a non-dumb data stick approach. It might start to sound like a variation of a smartcard, but why not think about a new USB device that's perhaps about the size of a USB data stick. It might present itself to the host computer as 2 devices: This stuff exists. Google it. 
And it is just a smartcard. Using biometrics instead of PINs is interesting and a subject for another thread, probably on a different forum. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: One Time Identification, a request for comments/testing.
On Thu, Feb 01, 2007 at 06:47:43PM -0500, Sam Hartman wrote: OK, so the requirements you are trying to meet are: 1) soft token support for flash drives. 2) Support for central password management. 3) Allow minimal or no identifying information on the token. Any more? 4) Interoperability (meaning that you can use the softtoken on any OS). Which to me means a standard, de facto or otherwise, for the softtoken format. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: One Time Identification, a request for comments/testing.
On Thu, Feb 01, 2007 at 07:51:47AM +1100, Andrew Bartlett wrote: I think developing a cross-platform USB 'thumb drive' based soft token would be an immense benefit. It could make PKINIT real for many small sites that do not yet wish to invest in a token stack, and perhaps more importantly, make PKINIT and smart-card login something that developers and interested technical users can test with resources to hand. What do you mean by cross-platform? OpenSolaris has an OSS (CDDL'ed) PKCS#11 softtoken provider that does pretty much what you want. It stores its files in a filesystem, by default in a sub-directory of the user's home directory; filesystem type does not matter. Since you can put filesystems on a USB flash drive that should suffice for a cross-platform softtoken. The specifics of the Solaris softtoken's directory layout and file formats are project private interfaces IIRC, but if there's interest I imagine that we could document them, make them committed public interfaces and help establish a standard for a cross-platform softtoken. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: One Time Identification, a request for comments/testing.
On Thu, Feb 01, 2007 at 08:21:49AM +1100, Andrew Bartlett wrote: What do you mean by cross-platform? Works with windows desktops too :-) But I think this means that you want the format of the softtoken to be open and implementable by multiple implementors. Love also has a PKCS#11 softtoken. The details that I think might need work are integration so that the logon systems on various platforms 'know' that the token is there, and the softtoken driver should be used. Certainly those details should be worked out. But if my softtoken should work when plugged into a Solaris system, a Linux system or a Windows system then the format must be agreed by all, or else users will have to resort to installing one cross-platform implementation on all those systems. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: One Time Identification, a request for comments/testing.
On Wed, Jan 31, 2007 at 08:42:43AM -0600, Douglas E. Engert wrote: What keeps a user from copying the identity token from the USB device to a local or shared file system to avoid having to insert the USB device all the time? What are the security implications if the identity token is stolen? How does this compare to using cert and key on the USB device with PKINIT rather than your identity token? How does this compare to using a smart card or USB equivalent of a smartcard with PKINIT? To the user they still have to insert the card or USB device, and have to enter a pin or password? You're correct -- softtokens aren't a replacement for real smartcards. That doesn't stop a softtoken from being useful though. Compare softtokens to passphrase-protected ssh private key files in users' home directories :) Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: SSH with Multiple Interfaces
Give your server host/f.q.d.n principals and keytab entries for all its interfaces' canonical names. And get a client that knows how to decode the SSH_MSG_KEXGSS_ERROR message :) Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
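Concretely, with an MIT-style kadmin client the setup described above might look like the following sketch (the hostnames are placeholders, not from the thread):

```
# One host/ principal per interface's canonical name, all extracted
# into the same server keytab (illustrative hostnames).
kadmin: addprinc -randkey host/server-net1.example.com
kadmin: addprinc -randkey host/server-net2.example.com
kadmin: ktadd -k /etc/krb5/krb5.keytab host/server-net1.example.com
kadmin: ktadd -k /etc/krb5/krb5.keytab host/server-net2.example.com
```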
Re: LDAP Schema Design Suggestions?
On Wed, Oct 25, 2006 at 08:22:42AM -0400, Edgecombe, Jason wrote: What about making positions as owners? people - positions - machines. People may have multiple positions/jobs and the job is responsible for the machine. Groups give you the same functionality without inventing something new. (Yes, ngroups_max...) Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: LDAP Schema Design Suggestions?
On Tue, Oct 24, 2006 at 06:19:04PM -0700, Henry B. Hotz wrote: No, I'm not talking about using LDAP to store the back-end for a KDC. I'm wondering if there are any thoughts or wisdom related to RFC 2307 (or successors) about how to store meta-information about Kerberos principals. That RFC defines schemas for machines and things with IP numbers. I also need to associate an owner for non-people principals. Users don't make good owners. They change job descriptions, go on extended vacations/sabbaticals, leave, die, are laid off, are fired... IMO groups make much better owners. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Openssh, kerberos and Solaris 10
On Wed, Aug 09, 2006 at 09:52:51AM -0500, Douglas E. Engert wrote: Markus Moeller wrote: There shouldn't be the need of compiling openssh with Kerberos as the Solaris 10 version supports GSSAPI authentication. Yes and no. Until you want to store the delegated credential or do a krb5_userok test. Solaris' sshd does this using __gss_userok() and gss_store_cred(). With OpenSSH-4.1 at least ssh_gssapi_krb5_storecreds and ssh_gssapi_krb5_userok make krb5 API calls as gss never had a simple authz function or a way to save the delegated creds. Solaris 10's sshd uses PAM to do these. OpenSSH should look at that approach too, then it would not need Kerberos specific code either. No, Solaris 10's sshd does not use PAM to do these two tasks. OpenSolaris' sshd will, however, soon enough. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Openssh, kerberos and Solaris 10
On Wed, Aug 09, 2006 at 02:55:05PM -0500, Douglas E. Engert wrote: Nicolas Williams wrote: gss_store_cred() is a KITTEN WG work item. __gss_userok() is not; should it be? I would say yes. Every service needs to do this, and use the GSS creds to test if it can use the local resource. So in that regard it is generic. Hmmm. We're working to push authorization of GSS-API principals and handling of delegated credentials to PAM. So, we're working to make public gss_userok() and gss_store_cred() interfaces unnecessary... Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Java 1.5 and name-type mismatch
On Wed, Jun 21, 2006 at 11:18:06AM -0700, Salil Dangi wrote: How do you match two names that have different name-type attributes (UNKNOWN and NT_PRINCIPAL)? You ignore the name-type. Kerberos' name types do not partition the principal namespace and are entirely advisory. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Is Kerberos V5 i18n ready?
Subject: Re: Is Kerberos V5 i18n ready? Answer: no. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: NFSv4 with sec=krb5 mounts not working under Solaris
On Thu, May 25, 2006 at 04:23:26PM -0700, Erich Weiler wrote: kadmin: addprinc -randkey nfs/solaris10host.domain.com kadmin: ktadd -e des-cbc-crc:normal nfs/solaris10host.domain.com /etc/krb5.keytab file was created successfully. Then, as root on solaris10host: % mount -F nfs -o vers=4 -o sec=krb5 nfs4server:/ /mnt nfs mount: mount: /mnt: Permission denied Can't figure out where I'm going wrong. Does anyone have any ideas? Yes: the client doesn't need a nfs/... principal -- it needs a root/... or host/... principal. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
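Following the same kadmin workflow as in the original post, the client-side fix Nico describes would look roughly like this (whether root/ or host/ is needed depends on the NFS implementation; hostname as in the quoted session):

```
# Create and extract a client principal for the NFS client host,
# mirroring the poster's addprinc/ktadd steps (illustrative).
kadmin: addprinc -randkey root/solaris10host.domain.com
kadmin: ktadd -e des-cbc-crc:normal root/solaris10host.domain.com
```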
Re: NFSv4 with sec=krb5 mounts not working under Solaris
On Thu, May 25, 2006 at 07:08:30PM -0500, Will Fiveash wrote: Did you install a version of NFS that uses the MIT Kerberos? [...] No such thing exists for Solaris, to my knowledge. On Solaris you can only use the native krb5 implementation for NFS. You can deploy MIT krb5 for other things, but, which? Why? Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: NFSv4 with sec=krb5 mounts not working under Solaris
On Fri, May 26, 2006 at 07:38:52AM -0700, Erich Weiler wrote: I'll blow out my dev box and re-install using Sun's SEAM krb5 and see if that helps. I have a feeling it will. Just so we're absolutely clear: you cannot just replace Solaris' implementation of anything. You can install alternatives alongside (i.e., in different locations, such as /opt, /usr/local, etc...). This probably applies to other OSes generally. So, if there's some functionality in MIT krb5 that you just have to have that Solaris 10 doesn't have, then you could configure/build/install MIT krb5 into, say, /usr/local (i.e., ./configure --prefix=/usr/local ...) and then configure/build/install whatever applications needed MIT krb5 and be happy. But really, you should understand why you want to do this at all. This isn't a very good reason, for example: We were using MIT krb5 because all of the other platforms on our network (mostly different flavors of linux) were using MIT krb5, so I thought we should use it on the Suns as well just for the sake of homogeneity. Solaris 10's krb5 support is very good, and it's integrated with the Solaris cryptographic framework, and what not. Sun's version of LDAP had a very tough time reading our OpenLDAP server so we had to build/use OpenLDAP on Solaris instead of the Solaris native LDAP. I thought a similar line of thinking would work for krb5. It looks like I thought wrong. :) Do you mean that Solaris 10's nss_ldap didn't work against your directory? Or something else? If the former, did you replace nss_ldap, and what with? 
We'd love to hear much more about this, though an OpenSolaris mailing list, specifically the sparks-discuss list, would be a better forum (we don't need to spam kerberos@mit.edu readers): http://www.opensolaris.org/jive/forum.jspa?forumID=119 http://mail.opensolaris.org/mailman/listinfo/sparks-discuss or the opensolaris-bugs list: http://www.opensolaris.org/jive/forum.jspa?forumID=11 http://mail.opensolaris.org/mailman/listinfo/opensolaris-bugs Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
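The coexisting-install approach sketched in the previous message might look like the following (the version number and prefix are illustrative, not a recommendation):

```
# Build and install MIT krb5 under /usr/local so it coexists with,
# rather than replaces, the native Solaris krb5 (paths illustrative).
cd krb5-1.4.3/src
./configure --prefix=/usr/local
make
make install
# Applications that need MIT krb5 are then built against that prefix,
# e.g. with --with-krb5=/usr/local or the equivalent configure option.
```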
Re: Ticket forwarding failure
On Mon, May 22, 2006 at 03:24:55PM -0400, Jeff Blaine wrote: *NOW* what am I doing wrong? :) Why are my other tickets not being forwarded? MIT Kerberos 1.4.3 telnet and telnetd in use. Nothing. Only TGTs are forwarded. The other tickets in your ccache, client-side, may not be useful to your experience on the server side and, besides, can be obtained using the forwarded TGT as necessary. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris 9, stock sshd, pam_krb5, MIT 1.4.3 KDC
On Thu, May 18, 2006 at 04:12:00PM -0700, Henry B. Hotz wrote: On May 16, 2006, at 2:32 PM, [EMAIL PROTECTED] wrote: On Heimdal you would normally create the entry and then delete the unwanted encryption key types (if necessary). I think the mechanism is different for Sun or MIT servers: you specify the enc type you want as part of the add? Correct. I wouldn't prohibit des3 across the board just because you have some Sun machines that haven't been upgraded to Solaris 10. Me either. If you move your KDC to Solaris 10 you'll get the benefit of that kadmind heuristic and never (mostly) notice this problem. (The heuristic, IIRC, is that the randkey operation assumes only 1DES is desired -- kadmin/ktadd on S10 always uses the randkey_3 operation, while on S8/9 it always uses randkey.) Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris 9, stock sshd, pam_krb5, MIT 1.4.3 KDC
On Tue, May 16, 2006 at 06:40:29PM -0400, Jeff Blaine wrote: Yes, MIT k5 1.4.3 The only Solaris piece I ever expect to use is pam_krb5.so And secure NFS? (kgssapi/kmech_krb5, gssd/mech_krb5) I've yet to touch/test Linux + K5, but it will be promptly after I find most of the hiccups with Solaris + MIT for now. Then it's on to Cyrus IMAP integration and other fun stuff. Would you consider running a Solaris 10 KDC? Maybe I'm just sore about it, but perhaps something should be mentioned about this in the docs? Which part? That Solaris 9 only supports the Kerberos V 1DES enctypes should be clear from the krb5.conf man page. As for the Solaris 10 kadmind heuristic I described, I'm not sure where that's documented. I'll find out. I can't really wrap my head around how this bit me and there wasn't a pile of mailing list archive chatter by other people being bitten (when I searched before posting...). That is, I don't see that I am doing anything rare here. You're mixing two Kerberos V implementations on the same host. This is not so rare for Solaris 8 and 9 systems, actually, but when one does this one should be careful about possibly disjoint feature sets of the two implementations. I'm trying to use MIT K5 as a KDC in a homogeneous environment. Out of the box, I got bit the first time I touched anything that didn't come from MIT. If nobody finds that bad, so be it -- I'm not going to drag it out further. See above. And now, I cannot get kadmin.local to NOT make 3DES keys. I have tried: The MIT and Solaris 10 kadmin/kadmin.local have a -e option to ktadd that you should use. The enctype names include a salt type (for your purposes always :normal). That the salt type is not optional is just awful, IMO. No dice. It appears to be blindly ignoring everything EXCEPT '-e des-cbc-crc:normal' as part of ktadd (which I should not have to do when set up this way). 
Here's a bug, too :) kadmin.local: ktadd -e des-cbc-crc host/noodle.foo.com ktadd: Invalid argument while parsing keysalts de ^^ This is about the time I start getting really worried. As has been pointed out you didn't include the :normal (though you included it in your e-mail). Worried that either I am *really* stupid, or... wow :( No, the interface isn't very friendly. Perhaps we need to get this behaviour into MIT krb5, since you're using it alongside Solaris' krb5 support. I assume you're using MIT's KDC software. Above - and I think that's a great idea. I'll file a report in the MIT krb5 RT. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
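Put together, the invocation that should succeed is the one with the mandatory salt type spelled out (principal name as in the quoted session):

```
# The enctype argument must include the salt type; omitting ":normal"
# is what triggers the keysalt parse error shown above.
kadmin.local: ktadd -e des-cbc-crc:normal host/noodle.foo.com
```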
Re: Solaris 9, stock sshd, pam_krb5, MIT 1.4.3 KDC
On Tue, May 16, 2006 at 02:23:16PM -0400, Jeff Blaine wrote: authentication failed: Bad encryption type
bash-2.05# /export/home/krb5/sbin/ktutil
ktutil: rkt /etc/krb5.keytab
ktutil: list
slot KVNO Principal
---- ---- ---------------------------------------------------------
   1    4 host/[EMAIL PROTECTED]
   2    4 host/[EMAIL PROTECTED]
   3    4 host/[EMAIL PROTECTED]
   4    4 host/[EMAIL PROTECTED]
What does klist -ke /etc/krb5/krb5.keytab say? It's possible that your host principal has keys of enctypes other than des-cbc-crc or des-cbc-md5 -- since those are the only enctypes that Solaris 9 supports this would be a misconfiguration. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris 9, stock sshd, pam_krb5, MIT 1.4.3 KDC
On Tue, May 16, 2006 at 03:10:04PM -0400, Jeff Blaine wrote: Nicolas Williams wrote: What does klist -ke /etc/krb5/krb5.keytab say?
bash-2.05# /export/home/krb5/bin/klist -ke /etc/krb5/krb5.keytab
Keytab name: FILE:/etc/krb5/krb5.keytab
KVNO Principal
---- ---------------------------------------------------------------
   4 host/[EMAIL PROTECTED] (Triple DES cbc mode with HMAC/sha1)
   4 host/[EMAIL PROTECTED] (DES cbc mode with CRC-32)
   4 host/[EMAIL PROTECTED] (Triple DES cbc mode with HMAC/sha1)
   4 host/[EMAIL PROTECTED] (DES cbc mode with CRC-32)
   3 cvs/[EMAIL PROTECTED] (Triple DES cbc mode with HMAC/sha1)
   3 cvs/[EMAIL PROTECTED] (DES cbc mode with CRC-32)
   3 cvs/[EMAIL PROTECTED] (Triple DES cbc mode with HMAC/sha1)
   3 cvs/[EMAIL PROTECTED] (DES cbc mode with CRC-32)
bash-2.05#
It's possible that your host principal has keys of enctypes other than des-cbc-crc or des-cbc-md5 -- since those are the only enctypes that Solaris 9 supports this would be a misconfiguration. That's exactly it then. Solaris 9 does not support the 3DES enctypes. Change your host principal's keys to be only des-cbc-crc. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
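One way to spot this kind of mismatch mechanically is to filter a captured `klist -ke` listing for enctypes outside the single-DES set Solaris 9 supports. This sketch runs over sample text (the principal name and KVNO are made up), not a live keytab:

```shell
# Write a sample keytab listing, then flag any key entry whose enctype
# is not one of the single-DES types (CRC-32 or RSA-MD5 variants).
cat > /tmp/keytab-listing.txt <<'EOF'
   4 host/noodle.foo.com@FOO.COM (Triple DES cbc mode with HMAC/sha1)
   4 host/noodle.foo.com@FOO.COM (DES cbc mode with CRC-32)
EOF

# Any line printed names a key a Solaris 9 client cannot use.
grep '(' /tmp/keytab-listing.txt | grep -v -e 'CRC-32' -e 'RSA-MD5'
```

Against a real keytab you would feed `klist -ke /etc/krb5/krb5.keytab` output through the same filter.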
Re: Solaris 9, stock sshd, pam_krb5, MIT 1.4.3 KDC
On Tue, May 16, 2006 at 04:01:11PM -0400, Jeff Blaine wrote: I'm confused, then, Nicolas. As I read the output, there are 2 keys stored for these principals: 1 using Triple DES cbc mode with HMAC/sha1 1 using DES cbc mode with CRC-32 And the first matching enctype is supposed to be used, which would be des-cbc-crc (and des3-hmac-sha1 would not, as it is not common to the client and server). What does kadmin -q getprinc host/[EMAIL PROTECTED] say? I bet the des3-hmac-sha1 key comes before the des-cbc-crc key. That means that when the stock pam_krb5/mech_krb5 do a TGS-REQ to get a service ticket [for the PAM_USER with host/[EMAIL PROTECTED] as the service principal name] with which to validate the user's TGT the ticket will come back encrypted in host/[EMAIL PROTECTED]'s 3DES key (because the KDC will select that long-term key because it's first in the KDB entry), which, sadly, the Solaris 9 mech_krb5 doesn't support. You could upgrade to Solaris 10 and get support for AES (in addition to 3DES and HMAC-RC4)... Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris 9, stock sshd, pam_krb5, MIT 1.4.3 KDC
On Tue, May 16, 2006 at 05:32:45PM -0400, Jeff Blaine wrote: Nicolas Williams wrote: What does kadmin -q getprinc host/[EMAIL PROTECTED] say? I bet the des3-hmac-sha1 key comes before the des-cbc-crc key. Yes, it does. Well, that's it then. Switch to des-cbc-crc. Yes, the krb5 team at Sun greatly upgraded enctype support in Solaris 10. No, this can't be easily backported to Solaris 9. That means that when the stock pam_krb5/mech_krb5 do a TGS-REQ to get a service ticket [for the PAM_USER with host/[EMAIL PROTECTED] as the service principal name] with which to validate the user's TGT the ticket will come back encrypted in host/[EMAIL PROTECTED]'s 3DES key (because the KDC will select that long-term key because it's first in the KDB entry), which, sadly, the Solaris 9 mech_krb5 doesn't support. I guess this is what I want: http://www.ietf.org/internet-drafts/draft-zhu-kerb-enctype-nego-04.txt No, this is not applicable to your situation. This helped just now though. What a mess. http://learningsolaris.com/docs/krb_enctypes_so10.pdf Looks like I'll redo my existing stuff to only ever allow 1DES enctype (boggles my mind) via 'supported_enctypes' in kdc.conf. Hmmm, OK, this is complicated, and I'd rather not go into all these details, but: - the Solaris 10 kadmind has a heuristic to detect Solaris 8 and 9 kadmin clients so that changing a service principal's keys results in getting only 1DES keys, - while for changing user passwords results in all supported_enctypes being allowed for the user. - at the same time, the Solaris 10 kadmin client's ktadd sub-command acts as though the -e all permitted_enctypes option had been given, if it wasn't. So that if you have a Solaris 10 KDC and Solaris 8, 9 and 10 systems deployed you should not normally notice this 1DES vs. other enctypes issue. Perhaps we need to get this behaviour into MIT krb5, since you're using it alongside Solaris' krb5 support. I assume you're using MIT's KDC software. MIT? 
That seems a real shame -- Use 1DES in any homogeneous environment or you may really hurt yourself. Sadly, it also doesn't appear one can remove just *one* enctype instance of a key (the 3DES one in my case). You could ktadd again, with -e des-cbc-crc:normal,... but though this is better than not having 3DES keys at all, it doesn't really buy you much security. I'm glad I am finding all of this out now on a testbed machine :O You could upgrade to Solaris 10 and get support for AES (in addition to 3DES and HMAC-RC4)... Not an option. :( Thanks for your help, Nico and Doug. NP. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
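For reference, the kdc.conf change Jeff describes (limiting the realm to single DES via 'supported_enctypes') would be a stanza along these lines; the realm name is a placeholder:

```
[realms]
    EXAMPLE.COM = {
        supported_enctypes = des-cbc-crc:normal des-cbc-md5:normal
    }
```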
Re: Solaris 9, stock sshd, pam_krb5, MIT 1.4.3 KDC
On Tue, May 16, 2006 at 04:57:29PM -0500, Nicolas Williams wrote: Hmmm, OK, this is complicated, and I'd rather not go into all these details, but: ^ right now Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: SRV records and canonicalization
On Thu, Apr 13, 2006 at 01:12:36PM +0100, Simon Wilkinson wrote: I'm interested in what people feel the 'correct' approach is to the following situation. See: draft-ietf-kitten-gssapi-domain-based-names-01.txt draft-ietf-kitten-krb5-gssapi-domain-based-names-01.txt You have found a third example use case for domain-based service principal names. Which reminds me, I need to jump on that... Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris ssh pam_krb
On Tue, Apr 04, 2006 at 12:29:04PM -0500, [EMAIL PROTECTED] wrote: On Mar 31, 8:22pm, Jeffrey Hutzelman wrote: } Subject: Re: Solaris ssh pam_krb But in a multi-application PAG world, _no_ application can directly use the real PAG ID as an identifier, because it changes too much. Instead they need an application-specific identifier. That applies to encrypted filesystems, to AFS, and, I suspect, to NFS as well, though you might not yet recognize that. An interesting comment. I would relax the above a little, in light of other comments in this thread: In a multi-application PAG world applications MAY use the PAG ID directly but MUST NOT change it directly and MUST be able to cope with PAG IDs changing under their feet. In a multi-application PAG world applications SHOULD construct their own session/process group/whatever IDs, while the multi-application PAG framework, in turn, MUST support association of arbitrary application-specific IDs with PAGs (and PIDs, or something, for make-before-break). Particularly given the notion that our open authorization architecture was predicated on each 'service' having its own unique identity. Remember, PAGs do not provide process group separation _locally_, not in the AFS model and not in the multi-application variant we've been discussing. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris ssh pam_krb
Let's uplevel a bit. To me PAGs provide a useful distinction between processes in some sort of session, sharing some common characteristics, one that is better than environment variables in that it is easily (cheaply) observable from the IPC peers. PAGs have, for me, at least these uses: - As an Identity Selection Problem tool. This is how I understand you use PAGs. Different sessions of the same user can be associated with different network authentication credentials (or ID selection hints/preferences), possibly representing slightly different views of the same user's identity(ies) (e.g., joe vs. joe/admin) or very different identities altogether. - As a link from cred_t to user-land that can be used to extend cred_t. Specifically, Solaris RBAC authorization and profile assignments to users are almost like group memberships, except that RBAC is implemented purely in user-land (e.g., pfexec(1) relies on being set-uid 0 and evaluates requests according to the caller's RBAC profile). I can see a useful extension for RBAC whereby additional profiles and authorizations can be granted su-like after logging in, or when logging in on console but not remotely, etc... And for this, tracking granted authorizations and profiles via PAGs could be useful. - As a better point for tracking extant references to network authentication credentials than UIDs. I'm not too sure that this is correct though. Bill Sommerfeld and/or Casper Dik (IIRC) have suggested tracking references to UIDs by cred_t's and to heck with PAGs, and as seductive as PAGs are I'm not sure that the above two uses really add all that much value, and perhaps they are right. Finally, PAGs, whether AFS-like or not, do not make a good process group separation facility. Unless they are modelled as labels (as in Trusted Solaris), which, I think, *may* be out of the question. 
PAGs, like group memberships, do not, IMO, make a good access control on process tracing, and they are orthogonal to [local] filesystem access controls. It would be rather surprising if one could not trace/debug one's processes from different login sessions. Yet if PAGs have no impact on process tracing authorization decisions then I don't think one should claim that PAGs provide *any* inter- session protection for network authentication credentials, and the like. Here we have an all-or-nothing choice: either PAGs really do provide a process group separation feature, or they don't. I'm pretty sure that, taking backwards compatibility requirements into account, and maybe even regardless, they cannot. Cheers, Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris ssh pam_krb
On Mon, Apr 03, 2006 at 01:23:48PM -0400, Jeffrey Hutzelman wrote: On Monday, April 03, 2006 11:11:14 AM -0500 Nicolas Williams [EMAIL PROTECTED] wrote: Let's uplevel a bit. To me PAGs provide a useful distinction between processes in some sort of session, sharing some common characteristics, one that is better than environment variables in that it is easily (cheaply) observable from the IPC peers. PAGs have, for me, at least these uses: - As an Identity Selection Problem tool. Yes. - As a link from cred_t to user-land that can be used to extend cred_t. Yes. - As a better point for tracking extant references to network authentication credentials than UIDs. It's unclear to me what you mean here. That I'd rather count references to network credentials from sessions than from processes that might have done a seteuid() to temporarily be like you. But maybe this is wrong anyways. PAGs, like group memberships, do not, IMO, make a good access control on process tracing, and they are orthogonal to [local] filesystem access controls. It would be rather surprising if one could not trace/debug one's processes from different login sessions. It would not be surprising if that's what you were expecting. No one expects the Spanish Inquisition today... But let's leave the issue of paranoid people aside for a moment, and concentrate on PAG's as an identity selection mechanism. You're right; that's essentially how they are used in AFS today, though in a roundabout way. Essentially, we use PAG's to separate management and use of credentials between sessions. So a user can have multiple sessions with different PAG's, and they don't interfere with each other. He can create a new PAG and set credentials for a different identity, and processes in the new PAG get the new identity while processes in the old PAG get the old one. 
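The AFS behaviour Jeff describes, new PAG plus new credentials, children inheriting the parent's PAG, sessions not interfering, can be modelled in a few lines. The names (newpag, klog) echo the AFS tools but this is a toy, not the real kernel mechanism:

```python
# Toy model of AFS PAG semantics: a PAG is just a number inherited across
# fork; credentials hang off the PAG, so sessions in different PAGs see
# different identities without interfering.

class Proc:
    def __init__(self, pag):
        self.pag = pag

pag_creds = {}   # PAG ID -> credentials (identity) for that session
_next = [1]

def newpag(proc):
    """Put the process in a fresh, empty PAG (like AFS setpag())."""
    proc.pag = _next[0]
    _next[0] += 1
    pag_creds[proc.pag] = None

def klog(proc, principal):
    """Set credentials for the process's PAG (like klog acquiring a token)."""
    pag_creds[proc.pag] = principal

def fork(proc):
    """Children inherit the parent's PAG, like other process attributes."""
    return Proc(proc.pag)
```

Two sessions of the same user end up in distinct PAGs with distinct identities, and a child forked in one session keeps that session's identity.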
Now, the issue is that when you're talking about a caching distributed filesystem, your identity affects not only what credentials are used to establish connections to fileservers on your behalf, but also what you are allowed to do with cached data and connections. For example... Yes, clearly, but this doesn't make PAGs a process group separation feature *locally*. On the wire your PAGs look as though they were different identities; locally they are not. Now, the thing that makes PAG's more than just identity selection is that you can't arbitrarily select a PAG -- you can use the one you have, or ask for a new (empty) one, but you can't pick up one arbitrarily and use it. It's trivial to implement this, but we could also allow one to join arbitrary PAGs that one also owns (i.e., label PAGs with the euid [or ruid?] of the process that creates them and let the same user's other processes join any PAGs that user owns). I.e., because of this: Now, I know this has no real security value as long as it is trivial to cross PAG boundaries, [...] I'm willing to consider being able to join PAGs one owns (and then dismiss this as an unnecessary complication). but the people who are really paranoid will do something about it, perhaps by disabling tracing altogether (along with dtrace, and the ability to load kernel modules or touch kernel memory in any way, etc, etc, etc). Interestingly, while PAG's don't directly provide process group separation, it occurs to me that given the ability for in-kernel code to determine a process's PAG, they could be used to _implement_ stronger session separation. I don't know enough about the internals of Solaris, but I bet I could write a security module for Linux that did exactly this, using AFS PAG's. A module? On Solaris you'd have to change a variety of existing functions, like secpolicy*(). Would it be much easier to do this in Linux? But again, process group separation isn't really the point. 
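The "join PAGs one owns" extension floated above reduces to labelling each PAG with the euid of its creator and checking that label on join. A minimal sketch, with invented names (newpag, joinpag), just to pin down the proposed ownership rule:

```python
# Sketch of owner-labelled PAGs: a process may join an existing PAG only
# if its euid matches the euid recorded when the PAG was created.

pag_owner = {}   # PAG ID -> euid of the creating process
_ctr = [1]

def newpag(euid):
    """Create a fresh PAG, labelled with the creator's euid."""
    pag = _ctr[0]
    _ctr[0] += 1
    pag_owner[pag] = euid
    return pag

def joinpag(euid, pag):
    """Join an existing PAG, allowed only for its owner."""
    if pag_owner.get(pag) != euid:
        raise PermissionError("not the PAG owner")
    return pag
```

The same user can hop between their own sessions' PAGs; a different uid is refused, which is the extension's entire access-control story.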
OK, then I won't pursue that :) (But you knew I wouldn't anyways). The point, for AFS, is making identity selection work. Yes. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Solaris ssh pam_krb
On Mon, Apr 03, 2006 at 02:27:36PM -0400, Jeffrey Hutzelman wrote: Now, the issue is that when you're talking about a caching distributed filesystem, your identity affects not only what credentials are used to establish connections to fileservers on your behalf, but also what you are allowed to do with cached data and connections. For example... Yes, clearly, but this doesn't make PAGs a process group separation feature *locally*. On the wire your PAGs look as though they were different identities; locally they are not. Sure. But it turns out to be incredibly useful to have weak separation, in which you have to go to some effort to use an identity other than the one associated with your own PAG. When you involve human users or complex software build processes, the principle of least privilege becomes fairly important. I won't insist on having it both ways, and I wish you didn't either :) Since you've agreed that PAGs are not a session separation feature I'll just put you firmly in that column and ignore any further protestations that PAGs are a form of weak separation ;) :] My point is that in order to make this work right, the (kernel-mode) cache manager must be able to find out what AFS PAG a process belongs to. If you have simple PAG's, then we make AFS PAG's congruent to those and we're done. Sure. If you have multi-app PAG's, then it gets harder. How? A little bit of IPC to talk to the right daemon that keeps the PAG-network-authentication-credential association doesn't seem hard to me... I just don't want it to be so hard that I have to do at least one upcall to user-mode for every vnode or file op on a file in AFS. Oh no, not for every vnode or file op -- for every authentication attempt. Surely AFS does not do an AP exchange for every vnode or file op, right? Right?! Please tell me that it doesn't... 
If not I may have to laugh at AFS :) Now, the thing that makes PAG's more than just identity selection is that you can't arbitrarily select a PAG -- you can use the one you have, or ask for a new (empty) one, but you can't pick up one arbitrarily and use it. It's trivial to implement this, but we could also allow one to join arbitrary PAGs that one also owns (i.e., label PAGs with the euid [or ruid?] of the process that creates them and let the same user's other processes join any PAGs that user owns). Yes, you could do that. Those wouldn't be the same semantics as AFS, but that's not necessarily a problem. It _would_ be very similar to the semantics of Kerberos file ccaches, which can also be useful. Since you agree that PAGs are not a session separation feature I don't see how the semantics of this join PAGs I own feature would be incompatible with the semantics of AFS PAGs, but an extension. I'm willing to consider being able to join PAGs one owns (and then dismiss this as an unnecessary complication). I'm not sure what to think about this. Personally, I make fairly heavy use of the idea that I can pick up an existing ccache owned by me and use it. I should correct myself here -- the right way to implement join PAGs one owns in the scheme I've proposed is to find the associations of the PAG you want to join, join a new PAG instead, and then establish all those associations for the new PAG. (To keep continuity I'd make associations to PID before joining a new PAG -- make before break.) Interestingly, while PAG's don't directly provide process group separation, it occurs to me that given the ability for in-kernel code to determine a process's PAG, they could be used to _implement_ stronger session separation. I don't know enough about the internals of Solaris, but I bet I could write a security module for Linux that did exactly this, using AFS PAG's. A module? On Solaris you'd have to change a variety of existing functions, like secpolicy*(). 
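The corrected "join" described above, copy the target PAG's associations into a brand-new PAG rather than entering the target, can be sketched directly. Names are hypothetical; the pin-to-PID step is the make-before-break continuity trick:

```python
# Sketch of "join by copy": find the associations of the PAG you want to
# join, pin them to the PID (make), switch the process to a fresh PAG
# (break), then establish the saved associations on the new PAG.

assoc = {}     # PAG ID -> {app: app-specific ID}
pid_pin = {}   # pid -> associations saved across the switch
proc_pag = {}  # pid -> current PAG ID

def join_by_copy(pid, target_pag, fresh_pag):
    pid_pin[pid] = dict(assoc.get(target_pag, {}))  # make ...
    proc_pag[pid] = fresh_pag                       # ... then break
    assoc[fresh_pag] = pid_pin.pop(pid)             # re-establish
    return fresh_pag
```

The target PAG is never entered or modified; the joiner ends up in a new PAG that merely carries the same associations, so AFS-style "can't pick up an arbitrary PAG" semantics are preserved.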
Would it be much easier to do this in Linux? Yes, to some extent. Linux has a pluggable security module framework, in which the registered security module is called in various places to determine if an operation is allowed, what its effects will be, and/or whether there are side-effects. I'm afraid the placement of such hooks is a little haphazard and probably sparser than it should be, but I believe there are enough that I could prevent processes in different PAG's from tracing each other. Well, good luck. I think one should be rigorous, not haphazard in designing a kernel-level extensible authorization scheme. I've not looked at these hooks in Linux, so I'll take your word as far as their placement. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
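The hook Jeff has in mind would live in kernel C behind the Linux security-module framework (or Solaris's secpolicy*() functions); as a user-space illustration of just the decision logic, one can model a ptrace access check that refuses tracing across PAG boundaries. The function name mimics the LSM hook's role but this is not kernel code:

```python
# User-space model of an LSM-style ptrace check keyed on PAGs: allow
# tracing only within one's own PAG, following the kernel convention of
# returning 0 for "allowed" and a negative errno for "denied".

EPERM = 1

def ptrace_access_check(tracer_pag, target_pag):
    """0 if the tracer may attach to the target, -EPERM otherwise."""
    return 0 if tracer_pag == target_pag else -EPERM
```

This is exactly the all-or-nothing choice discussed earlier in the thread: with such a hook installed, PAGs really would separate sessions; without it, they provide no inter-session protection at all.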
Re: Solaris ssh pam_krb
On Mon, Apr 03, 2006 at 02:27:36PM -0400, Jeffrey Hutzelman wrote: On Monday, April 03, 2006 12:56:34 PM -0500 Nicolas Williams [EMAIL PROTECTED] wrote: That I'd rather count references to network credentials from sessions than from processes that might have done a seteuid() to temporarily be like you. But maybe this is wrong anyways. I guess I'm not sure what you mean by references here. PAG's are intended as a better way to select which credentials to use than looking at the UID, since UID's have rather narrow meaning and a user can't just decide he wants a new one for this session. :-) But if you're talking about reference counting on credentials (I am), then what you do depends on your model. If you want to tie credentials to an open file, you need to refcount based on the file, not the UID or PAG. [...] File descriptors in Solaris already retain a reference to the cred_t used to open the file. So UID or PAG is not relevant here. Neither are processes with that UID or PAG. What is relevant is references to that UID or PAG from cred_t instances. Incidentally, AFS refcounts credentials, but only with regards to active use, like establishing a new connection (actually, just for creating the new connection. The security object which contains the ticket has its own refcounting that the RPC layer does). So does Solaris. I believe one must in order to support various standard behaviours (e.g., file descriptor passing over IPC + distributed filesystems [NFS, AFS, CIFS, whatever]). On most platforms we couldn't get notified on every process creation or exit even if we wanted to, and the OS doesn't know about our PAG's so it can't tell us when a PAG is no longer in use. So, we can't refcount credentials, and instead we do a periodic mark-and-sweep, destroying any credentials belonging to a PAG which no longer contains any processes. The PAG itself has no associated data structure; it's just a number. Right. 
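The mark-and-sweep Jeff describes is easy to model: with no per-PAG refcount available from the OS, a periodic pass destroys credentials for any PAG that no longer has a live process. A minimal illustrative sketch, not the actual AFS cache-manager code:

```python
# Model of AFS-style credential garbage collection: periodically walk the
# process table, mark the PAGs still in use, and sweep away credentials
# belonging to any PAG with no remaining processes.

def sweep(creds_by_pag, live_procs):
    """creds_by_pag: PAG ID -> credentials; live_procs: (pid, pag) pairs.
    Mutates and returns creds_by_pag."""
    live_pags = {pag for _pid, pag in live_procs}
    for pag in list(creds_by_pag):
        if pag not in live_pags:
            del creds_by_pag[pag]  # no process left in this PAG
    return creds_by_pag
```

The cost of this design is the thread's complaint: credentials linger until the next sweep rather than being destroyed the moment the last process exits.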
But I'd like the OS to provide a "fall to zero" refcount facility for either cred_t instances referencing some UID or cred_t instances referencing some PAG. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
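The wished-for "fall to zero" facility amounts to refcounting cred_t references to a PAG (or UID) and notifying user-land when the count reaches zero, so credentials can be destroyed eagerly instead of swept periodically. A hypothetical interface sketch (no such Solaris API exists):

```python
# Sketch of a "fall to zero" notification: each cred_t referencing a PAG
# holds the count; when the last reference is released, a registered
# callback fires so credentials can be torn down immediately.

class PagRefcount:
    def __init__(self, pag, on_zero):
        self.pag = pag
        self.count = 0
        self.on_zero = on_zero  # called once the count falls to zero

    def hold(self):
        """A cred_t starts referencing this PAG."""
        self.count += 1

    def release(self):
        """A cred_t drops its reference; notify on the last one."""
        self.count -= 1
        if self.count == 0:
            self.on_zero(self.pag)
```

This is the eager counterpart to the mark-and-sweep above: the callback replaces the periodic scan.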