Pardon my intrusion into the discussion, but I've been following this with mild interest and am wondering about the scope of what is proposed. For example, if we just want to encrypt so that a single user's files are protected with a key held by that user (or the user's client), how is this really different from, for instance, just using EncFS on top of AFS? I currently do this with no problems, and the admins, or anyone else for that matter, certainly see nothing but gobbledygook when they look at my files.
If the scope of this effort is intended to be much broader than this, then it seems the real issue is the (auto)magic management of keys and policies. I certainly agree with what has already been stated about the clients doing any necessary encryption. It would, on the other hand, be pretty cool if the key management were implemented such that the user could specify other users/groups that can have access to the encrypted data. I think someone already suggested something like this, where the common encryption key for the file contents is encrypted for each user with their own public key (speaking in PKI terms).
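
To make that concrete, here is the sort of thing I mean, sketched with the Python "cryptography" package purely for illustration. The function names and the blob layout are invented for the example; this is not a proposal for how the metadata would actually be stored in AFS:

import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(plaintext, user_public_keys):
    """Encrypt the contents once, then wrap the content key per user."""
    file_key = AESGCM.generate_key(bit_length=256)   # fresh random key per file
    nonce = os.urandom(12)
    ciphertext = AESGCM(file_key).encrypt(nonce, plaintext, None)

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    # One wrapped copy of the same content key per authorised user; only
    # the holder of the matching private key can unwrap it.
    wrapped = {user: pub.encrypt(file_key, oaep)
               for user, pub in user_public_keys.items()}
    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_keys": wrapped}

def decrypt_file(blob, user, private_key):
    """Unwrap this user's copy of the content key, then decrypt."""
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    file_key = private_key.decrypt(blob["wrapped_keys"][user], oaep)
    return AESGCM(file_key).decrypt(blob["nonce"], blob["ciphertext"], None)

if __name__ == "__main__":
    # Toy demonstration with freshly generated keys for two users.
    users = {u: rsa.generate_private_key(public_exponent=65537, key_size=2048)
             for u in ("alice", "bob")}
    data = b"contents of some file in my home directory"
    blob = encrypt_file(data, {u: k.public_key() for u, k in users.items()})
    assert decrypt_file(blob, "bob", users["bob"]) == data

The nice property is that granting another user access is just one more wrapped copy of the same content key; the file contents never have to be re-encrypted, and the server only ever sees ciphertext plus opaque wrapped keys.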

Spencer Olson

On Wednesday 31 March 2010 04:26, Derrick Brashear wrote:
> On Wed, Mar 31, 2010 at 12:06 AM, Tom Keiser <[email protected]> wrote:
> > On Thu, Mar 25, 2010 at 5:44 AM, Simon Wilkinson <[email protected]> wrote:
> >> On 25 Mar 2010, at 08:54, Rod Widdowson wrote:
> >>>> I'll step back and ask: what's your threat model? What are you
> >>>> trying to protect against?
> >>
> >> The threat model is pretty clear, I think. It's for an environment where
> >> users want to be able to store files in a way that a server
> >> administrator cannot read them. That is, they trust the server to store
> >> the data they give it (and to back it up, etc) but they don't trust it
> >> not to eavesdrop on those contents, or to not disclose them to a third
> >> party.
> >>
> >> In GSoC, the problem I think is tractable is the single user case,
> >> modelled around a user who wishes to encrypt their home directory so
> >> that it cannot be read without access to their key. In my environment,
> >> this is functionality that is regularly requested. It has the additional
> >> benefit that it allows some of the harder issues around key management
> >> to be deferred.
> >
> > I'll third Derek and Rod's calls for a detailed discussion of the
> > threat model. While this use case is quite compelling, it also raises
> > a number of questions that still feel open to me.
>
> Assuming I as an end user don't want the server admin to have access
> to my data, what's a detailed way to express that?
>
> Yes, that sounds excessively snarky. But during the time that I was
> one, that was a question I got.
> "Can you see my files?"
> "Yes"
>
> > Is the plan to get the encryption layer working as part of GSoC, while
> > deferring the issues regarding policy and key management until a later
> > date? The reason I ask is it seems that a particularly hard part of
> > this problem, in addition to those issues broached by Derek and Rod,
> > is going to be policy enforcement.
> >
> > For example, if we stipulate that specific volume IDs or FIDs are
> > encrypted, how are we going to protect against malicious servers
> > performing security downgrade attacks via policy metadata spoofing?
>
> If this is a case of the user specifying that they want their data
> encrypted, e.g. a user-activated function rather than an
> admin-activated function, the policy engine would seem to be "I tell
> my client to encrypt (some subset of) my files". If you have access to
> my client, you can downgrade me. If you have access to my client, I
> have already lost.
>
> > I'll certainly grant that feeding the file server ciphertext is an
> > excellent step in the right direction; will the new threat model
> > address casual attackers, while explicitly side-stepping active
> > attackers who attempt downgrade attacks?
> >
> > My core concern is that the AFS security model has always assumed that
> > servers are trustworthy. It seems inevitable that we must change that
> > model (e.g. rxgk departmental servers). However, I think we need to
> > be careful about thinking through the implications of these changes.
> > For example, changing the trustworthy server assumption has
> > potentially far-reaching implications for other ongoing development
> > efforts (e.g. XCB, cooperative caching, ...).
>
> I don't think I'd want to use *this* project to address the
> departmental fileserver problem. At all, in fact.
>
> > I think others have brought up some of the following before. For
> > completeness, I'll note that I think we eventually need to discuss:
> >
> > * data key lifetime/byte lifetime/key rotation/byte range keys(?)
>
> I think this can be deferred, because:
> > * if file data keys are not immutable, cache coherence concerns
>
> If you rekey, you are responsible for rewriting the file back to disk.
> Old DV data is now stale, and no new code is required.
> > * would it be worthwhile to also support checksumming/HMACs?
>
> -only, e.g. clear payload?
>
> > * finding a good, extensible, performant means of storing the keys as
> > metadata
> > * what, if any, block chaining modes are acceptable (and
> > associated implications on chunk size and protocol semantic changes to
> > enforce writes along block boundaries)
>
> (agreed)
>
> > * key escrow
>
> none. my client my data.
>
> > * required volume dump format changes (or are we considering that a
> > separable xattr issue?)
>
> i'd think so.
>
> > * do we need to address directory objects?
>
> i'd propose not.
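
One more thought on the checksumming/HMAC bullet quoted above: an integrity-only mode (clear payload plus a keyed MAC that only the client can compute) is cheap to sketch, here with just the Python standard library. The chunk/FID framing is made up for illustration; where the tags would actually live is exactly the open metadata question:

import hashlib
import hmac

def tag_chunk(mac_key, fid, offset, data):
    # Bind the tag to the file identity and chunk offset so a server
    # cannot transplant or reorder chunks without the client noticing.
    msg = fid + offset.to_bytes(8, "big") + data
    return hmac.new(mac_key, msg, hashlib.sha256).digest()

def verify_chunk(mac_key, fid, offset, data, tag):
    # Constant-time comparison of the stored tag against a recomputed one.
    return hmac.compare_digest(tag, tag_chunk(mac_key, fid, offset, data))

The payload stays readable to the server (so backups, dedup and the like keep working) and the client can detect modification by a casual attacker, but it does nothing about rollback, i.e. a server handing back an older chunk together with its older, still-valid tag.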
