[Attempting to combine Jeff's two latest mails into one reply]
On Thu, 1 Nov 2012, Jeffrey Hutzelman wrote:
On Wed, 2012-10-31 at 22:59 -0400, Benjamin Kaduk wrote:
On Mon, 29 Oct 2012, Jeffrey Hutzelman wrote:
A CM wanting to combine its token with a user's knows both keys, but the
key in the user's token is also known to the user. We want the combined
token to have primarily the user's identity, so that filesystem accesses
are done as the user. But we want the key to be one the user doesn't
know, so he cannot poison the cache.
Right.
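To make that concrete, here's a minimal sketch in Python of why the
combined key is safe from a user who only knows their own key. HMAC-SHA256
here is just a stand-in for whatever PRF/KDF the spec ends up mandating;
the construction is illustrative, not the normative one:

    import hashlib
    import hmac
    import os

    def combine_keys(k0, k1):
        # Illustrative stand-in for the spec's key-combination function:
        # the output depends on *both* inputs, so knowing only one of them
        # (e.g. the user's k1) tells you nothing about kn.
        return hmac.new(k0, k1, hashlib.sha256).digest()

    k0 = os.urandom(32)   # CM's key, unknown to the user
    k1 = os.urandom(32)   # user's key, known to the user
    kn = combine_keys(k0, k1)
    # The CM knows both k0 and k1 and can compute kn; the user, lacking
    # k0, cannot, and so cannot forge traffic under kn to poison the cache.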
Furthermore, we don't want the user to be able to unduly influence the
properties of the combined token, either through his knowledge of one of
the input keys or through his ability to influence or control some of
the properties of the input token. For example, contrary to Simon's
comment, we don't want to take the lowest security level of all tokens,
because that would allow a client to force a weaker level than the CM is
willing to live with (this is not as bad as with rxkad, because rxgk's
weakest level is still fairly strong, but it's still worth considering).
The same applies to enctype selection.
I'm starting to think that there may be scenarios where either the
stronger or the weaker level and/or enctype is desirable. A CM must not be
poisoned by a user, yes, but perhaps there is a single one-time data
transfer that is desired at clear or auth. I'll expand on these thoughts
below.
matters. Since we will be generating a new key anyway, the right answer
is probably to have the client send a list of supported enctypes and
have the server issuing the combined token select a suitable one.
It is tempting, as GSSNegotiate() already does so. However, we need to
come up with the same enctype on both client and server, and communicating
the result back from the server to the client could get complicated.
Huh? Why? The server already has to communicate the new token back to
the client; why can't it also communicate the enctype used?
It changes the RPC signature, which is by no means a blocking objection
(but might, say, hurt interoperability with already-deployed
implementations).
It also may present something of a future-proofing problem: currently, we
pass two tokens in and get a token out. If we need to start passing more
information back, we might get into trouble if an application has put
app-specific data into the token that also needs information passed back.
If we can keep the interface as tokens-in, tokens-out only, that situation
is much cleaner.
Though, if we're going to be defining app-specific RPCs anyway
(AFSCombineTokens), maybe this is not a huge concern.
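Purely to illustrate what "changing the RPC signature" versus keeping a
tokens-in, tokens-out interface means, the two shapes under discussion
look roughly like this (hypothetical names, not the real rxgk XDR):

    from dataclasses import dataclass

    # Hypothetical result shapes, for illustration only.

    @dataclass
    class CombineTokensOutTokensOnly:
        new_token: bytes          # tokens in, tokens out; nothing else

    @dataclass
    class CombineTokensOutWithEnctype:
        new_token: bytes
        chosen_enctype: int       # extra out-parameter: the signature
                                  # grows, and any app-specific reply data
                                  # would need the same treatment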
(I don't have a good reason for unease, feel free to tell me I'm
wrong); can we get away with requiring that the enctypes must be the same?
No, I don't think we can, and I see no reason to do so. The construct
in question is well-formed and uses the primitives exactly as intended.
Right, random bits are random bits.
If we can't get away with requiring identical enctypes, I think I'd rather
just say "take the enctype of K0" than try a negotiation. With some text
like "the client SHOULD ensure that if K0 and K1 have different enctypes,
the stronger enctype is presented as K0", that would probably be okay.
Not if you also have rules about where the client identity in the
combined token comes from.
What form would/could such rules take? afs3-rxgk-afs does mention
ordering, but I did not see anything about how to combine identities.
Back to negotiation, a brainstorming thought: rather than have the server
send back a selected enctype (which would want to be protected under one
or the other of K0/K1), we could have the client send a nonce and the
server send it back encrypted in Kn. The client would then build Kn for
each enctype in the list and see if the nonce decrypts properly, to tell
which enctype was used. Still kind of ugly, huh.
Ugh, that sounds expensive. I see no reason the server's enctype
indication needs to be protected -- either it will match the session key
type in the token or it won't, and if it doesn't, it's not going to
work.
I suppose it would not be too great a DoS risk to drop something if the
indicated enctype does not match the token (which does not match the key
generated by the client). We would not be able to check the parenthetical
until we try to use the token, but we already have similar behavior.
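A rough sketch of the client side under that approach (the names and the
derive_kn() helper are hypothetical; the point is just that an unprotected
enctype indication is effectively self-verifying):

    def accept_combined_token(offered_enctypes, indicated_enctype,
                              k0, k1, derive_kn):
        # The server's enctype indication arrives in the clear.  If it
        # names an enctype we never offered, reject outright; otherwise
        # derive Kn for that enctype and use it.  If the server lied about
        # which enctype it used, the first RPC protected under Kn simply
        # fails -- the same failure mode we already have when a token and
        # its key get out of sync.
        if indicated_enctype not in offered_enctypes:
            raise ValueError("server chose an enctype we did not offer")
        return derive_kn(indicated_enctype, k0, k1)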
On Thu, 1 Nov 2012, Jeffrey Hutzelman wrote:
On Thu, 2012-11-01 at 11:43 +0000, Simon Wilkinson wrote:
On 1 Nov 2012, at 02:59, Benjamin Kaduk wrote:
I'm not sure you understand the model in play here - perhaps we need
more text explaining the purpose of CombineTokens. It isn't intended
that CombineTokens be used for tokens acquired for two different
servers. Instead, it's supposed to combine multiple user tokens (For
example, if Alice and Bob join forces to access a particular system)
into a single authentication entity.
I think that was my understanding; I may have expressed myself poorly.
(In the AFS case, 'Alice' might be the cache manager and not a user.)
Secondly, the encryption types of the server principal
(afs3-bos/myserver.example.org or afs3-rxgk/_afs.example.org) don't
have any relevance here. Instead, what you care about is the encryption
type that has been negotiated between the server and client as part of
the earlier GSSNegotiate operation, which will be chosen from the
intersection of the list of rxgk encryption types supported by the
client and the server. Bear in mind that rxgk is designed to be used as
part of a GSSAPI negotiation - there's no requirement that the
underlying mechanism is Kerberos based.
Sure; I think I worded things badly in my mail. (Wrongly, really.)
But, as long as the two keys/tokens being combined were not part of a
negotiation between the same client/server, using the same settings, they
could have negotiated different values. There's not a good out-of-band
way to ensure that the negotiated values are always the same (and arguably
there should not be one).
Anyway, Jeff has assuaged any concerns I had that were in the text you
were replying to (which I trimmed, oops).
Things do get complicated when we look at AFS specifically, where there
is an extended AFSCombineTokens RPC. This serves a number of purposes.
Secondly, it allows user key material to be combined with cache manager
key material to avoid the cache poisoning attack.
This, however, is just a semi-special case of the functionality offered
by the generic combine-tokens operation. While the user/CM thing is
specific to AFS, I think the generic combine-tokens needs to be flexible
enough to meet this need directly. Ultimately, I think that means...
The caller needs to know all of the input keys.
The output key should be a function of all input keys, so that no one
who holds only some of the input tokens can know, choose, or unduly
influence the selection of the output key.
Right. I don't think anyone disagrees with either of these two
statements.
The caller should be able to assert any identity ordering, presumably by
the order in which the input tokens are provided. That is, if Alice and
Bob "combine forces", the caller gets to decide whether the identity
expressed by the output token is [Alice,Bob] or [Bob,Alice], and the
spec says how to do so. Of course, the meaning of multiple identities
in a token is up to the application, and in any case, might end up being
considerably more complex than just a list.
Do I need to go read brashear-afs3-pts-extended-names to have context for
this, or are these from previous discussions elsewhere? I'm not sure I
see why there would need to be a possibility for different treatment of
different orders of an identity list.
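For what it's worth, the simplest reading I can come up with is that the
combined identity list is just the input identities in the order the
caller presented the tokens; a toy sketch (field names hypothetical, and
the real representation may well be richer than a flat list):

    from collections import namedtuple

    Token = namedtuple("Token", ["identities", "key"])   # toy model only

    def combined_identities(tokens):
        # Preserve the caller's ordering: (alice_tok, bob_tok) yields
        # [Alice, Bob]; swapping the arguments yields [Bob, Alice].
        ids = []
        for tok in tokens:
            for ident in tok.identities:
                if ident not in ids:
                    ids.append(ident)
        return ids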
Restrictions such as lifetime must be constrained by the values in all
input tokens. This is separate from the question of how the lifetime of
an rxgk token relates to that of the credentials from which it was
originally derived, and also from the question of whether and to what
extent an "expired" can still be used. An rxgk token derived from
another rxgk token must not have a lifetime longer than the original.
I agree, and hope this is less controversial than the credential
lifetime->token lifetime question. We do have explicit text in
afs3-rxgk-afs mentioning that the CM should update its token frequently,
"to avoid combined tokens having unnecessarily close expiration times".
Choice of enctype should be by selecting the strongest available
enctype, where "available" means those enctypes both supported by the
server and known to be supported by the client, and "strongest" is up to
the server to determine. The list of enctypes could be taken from all of
the provided tokens, but I think it would be better for the caller to
provide an explicit list, as this allows upgrade to an enctype that is
supported by the caller and server but may not have been supported by
the original token issuers (or by the clients that got the tokens).
I agree that there is sense in allowing an enctype other than the two
present in the provided tokens, but it's not immediately obvious
that it must always be the stronger one. Certainly the server can reject
the request if it doesn't like any of the enctypes in question, but I feel
like the client should still have some input -- maybe it is an embedded
system and has hardware acceleration for some enctype but others are very
expensive.
Simon says in other mail that "From an encryption type perspective, the
intent was that the server should pick the enctype which appears highest
within its preference list." That seems fine to me, but we should clarify
which set of enctypes the server is choosing from. I think that the only
real options on the table are:
(1) choose from the enctypes of the two presented tokens
(2) choose from a list of enctypes presented by the client
(2) would give the ability to combine two DES keys into an AES key, which
seems appealing. It may not be the most realistic scenario to consider
for general use, though.
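Here is a sketch of what option (2) could look like on the server side,
with the server's preference order deciding among whatever the client
offered. The preference list and enctype numbers are purely illustrative
(18 = aes256-cts-hmac-sha1-96, 17 = aes128-cts, 1/3 = single DES):

    SERVER_PREFERENCE = [18, 17]   # strongest first, per server policy

    def choose_enctype(client_offered):
        # Pick the first server-preferred enctype the client also offered;
        # reject if there is no overlap.
        for etype in SERVER_PREFERENCE:
            if etype in client_offered:
                return etype
        raise ValueError("no mutually acceptable enctype")

    # Because the client's list need not be limited to the enctypes of the
    # two presented tokens, choose_enctype([1, 3, 18]) returns 18: two
    # DES-keyed tokens can be combined into an AES-keyed one.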
Choice of level probably wants to be the strongest of the levels of the
input tokens, or possibly a stronger (but not weaker) level requested by
the caller. I'm not entirely sure about how this should work, though,
and I think it bears some discussion. The issue here is you don't want
the holder of one of the tokens to be able to unilaterally reduce the
level (Bob forces clear when Alice wants crypt). And, I'm not sure you
want the caller to be able to reduce the level either -- should a CM be
able to request cleartext communication when the user clearly wanted
encryption?
I agree there is more discussion needed here.
Hmm, Simon wanted to hold off on talk of negotiation until we get to
AFSCombineTokens() (though maybe that was just for level and not
enctype?). (Maybe I was too aggressive in starting a separate thread for
CombineToken and security level, as it's leaking over here.)
It somehow feels like using a lower level is more "legitimate" than
wanting to use weaker crypto -- if CPU time is a concern, then one can
still use a strong enctype and just fall back to auth or clear.
But does that make sense in this particular case?
We have (1) user + server/CM and (2) user + user.
(2) seems like, absent a trusted third party, it requires Alice to give
her token+key to Bob, thereby allowing all file access. In principle, Bob
could go grab Alice's data and retransmit in the clear wherever he wanted.
Now, there may be super-sensitive data that is only accessible to the
combined token of Alice+Bob, in which case that identity should require
active consent from both Alice and Bob, and we should be concerned if Bob
could reduce the level without Alice's consent. (If the data is so
sensitive as to require two people to be present to access
it, it probably ought to be encrypted on the wire!) But, if Bob wants to
be evil, he already has "all" of Alice's data from her token+key.
So, there does not seem to be much security loss from using the lower of
the two levels (if they differ). It seems like there would need to be a
trusted third party to combine the tokens+keys in order for the
requires-both-users ACLing to be particularly useful.
(1) can be split into (1a) where the CM protects itself against an evil
user, and (1b) where there is an ACL to user+machine.
In (1a), we assume that the CM is the one performing the CombineTokens
operation; an evil CM can already do all sorts of nasty things and need
not be considered. The CM should then respect the user's preference;
we could have the CM's token be 'crypt' and then allow the combined token
to use the lower level, as a possible implementation. For (1b), it is
less clear that using the lower level of the two is reasonable. If data
is ACLd to a machine, then that machine('s CM) should have a say in
whether that data can go over the wire in the clear. In this case, it is
the CM's preference that should take priority, and using the lower of the
two levels could compromise the security of some data.
Unless we want to claim that if data is ACLd to a machine, then untrusted
users should not be allowed on the machine at all...which is not entirely
unreasonable.
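To make the open question concrete, the two policies on the table look
roughly like this; the level names/ordering are assumptions, and which
policy applies may well depend on whether we are in case (1a), (1b), or
(2):

    # Assumed ordering of rxgk security levels, weakest to strongest.
    LEVELS = ["clear", "auth", "crypt"]

    def combine_level_strongest(levels):
        # Jeff's suggestion: no single token holder can unilaterally
        # weaken the result.
        return max(levels, key=LEVELS.index)

    def combine_level_weakest(levels):
        # Simon's suggestion (and the analysis above, for cases (1a) and
        # (2)): take the lower level, trusting the party doing the
        # combining.
        return min(levels, key=LEVELS.index)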
Does the above analysis seem reasonably complete and correct? If so, it
would seem to support Simon's opinion of using the lower level of the two
tokens.
-Ben
_______________________________________________
AFS3-standardization mailing list
[email protected]
http://lists.openafs.org/mailman/listinfo/afs3-standardization