On Thu, Mar 25, 2010 at 5:44 AM, Simon Wilkinson <[email protected]> wrote:
>
> On 25 Mar 2010, at 08:54, Rod Widdowson wrote:
>
>>> I'll step back and ask:  what's your threat model?  What are you trying
>>> to protect against?
>
> The threat model is pretty clear, I think. It's for an environment where 
> users want to be able to store files in a way that a server administrator 
> cannot read them. That is, they trust the server to store the data they give 
> it (and to back it up, etc) but they don't trust it not to eavesdrop on those 
> contents, or to not disclose them to a third party.
>
> In GSoC, the problem I think is tractable is the single user case, modelled 
> around a user who wishes to encrypt their home directory so that it cannot be 
> read without access to their key. In my environment, this is functionality 
> that is regularly requested. It has the additional benefit that it allows 
> some of the harder issues around key management to be deferred.
>

I'll third Derek and Rod's calls for a detailed discussion of the
threat model.  While this use case is quite compelling, it also raises
a number of questions that still feel open to me.

Is the plan to get the encryption layer working as part of GSoC, while
deferring the issues regarding policy and key management until a later
date?  The reason I ask is that, in addition to the issues Derek and
Rod have broached, a particularly hard part of this problem seems to
be policy enforcement.

For example, if we stipulate that specific volume IDs or FIDs are
encrypted, how are we going to protect against malicious servers
performing security downgrade attacks via policy metadata spoofing?
I'll certainly grant that feeding the file server only ciphertext is
an excellent step in the right direction; but will the new threat
model address only casual attackers, while explicitly side-stepping
active attackers who attempt such downgrades?
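To make the concern concrete, here is a toy sketch of one way a client
could detect policy spoofing: bind the encryption policy to a key the
server never sees, so a server-rewritten "this volume is plaintext"
flag fails verification.  All of the names here (the policy string
format, sign_policy, verify_policy) are hypothetical illustrations,
not OpenAFS interfaces.

```python
# Hypothetical sketch: client-side authentication of encryption-policy
# metadata.  The policy encoding and function names are invented for
# illustration; this is not an OpenAFS API.
import hashlib
import hmac

# Assumption: the user holds a key that the file server never sees.
USER_KEY = b"user-held key the server never sees"

def sign_policy(policy: bytes) -> bytes:
    """Client binds the policy to its own key when the object is created."""
    return hmac.new(USER_KEY, policy, hashlib.sha256).digest()

def verify_policy(policy: bytes, tag: bytes) -> bool:
    """On fetch, reject any policy the server may have altered."""
    return hmac.compare_digest(sign_policy(policy), tag)

# Honest case: the stored policy says this FID is encrypted.
policy = b"fid=536870918.1.1;cipher=aes-256"
tag = sign_policy(policy)
assert verify_policy(policy, tag)

# Downgrade attempt: a malicious server rewrites the policy to plaintext.
spoofed = b"fid=536870918.1.1;cipher=none"
assert not verify_policy(spoofed, tag)
```

The point of the sketch is only that the policy metadata itself needs
integrity protection under a client-held key; where that tag lives and
how it interacts with the metadata storage question below is exactly
the open design work.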

My core concern is that the AFS security model has always assumed that
servers are trustworthy.  It seems inevitable that we must change that
model (e.g. rxgk departmental servers).  However, I think we need to
think carefully through the implications of these changes.
For example, changing the trustworthy server assumption has
potentially far-reaching implications for other ongoing development
efforts (e.g. XCB, cooperative caching, ...).

I think others have brought up some of the following before.  For
completeness, I'll note that I think we eventually need to discuss:

* data key lifetime/byte lifetime/key rotation/byte range keys(?)
* if file data keys are not immutable, cache coherence concerns
* would it be worthwhile to also support checksumming/HMACs?
* finding a good, extensible, performant means of storing the keys as metadata
* what, if any, block chaining modes are acceptable (and associated
implications on chunk size and protocol semantic changes to enforce
writes along block boundaries)
* key escrow
* required volume dump format changes (or are we considering that a
separable xattr issue?)
* do we need to address directory objects?
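On the chaining-mode item in particular, a toy example of why mode
choice constrains write semantics: with CBC-style chaining, each
ciphertext block depends on the previous one, so a one-byte write
early in a chunk changes every ciphertext block after it.  This is a
deliberately fake "block cipher" (XOR with the key) used only to show
the chaining structure; it is not real cryptography and not an
OpenAFS proposal.

```python
# Toy CBC-style chaining (NOT real crypto: the block "cipher" is a
# plain XOR with the key, chosen only because it is invertible and
# keeps the example short).
BLOCK = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    """Chain blocks CBC-style: C_i = E_k(P_i xor C_{i-1})."""
    out, prev = [], iv
    for i in range(0, len(data), BLOCK):
        ct = xor(xor(data[i:i + BLOCK], prev), key)
        out.append(ct)
        prev = ct
    return b"".join(out)

key = b"K" * BLOCK
iv = b"\x00" * BLOCK

p1 = b"A" * 64              # a 4-block "chunk"
p2 = b"B" + b"A" * 63       # same chunk with one byte changed up front
c1 = cbc_encrypt(key, iv, p1)
c2 = cbc_encrypt(key, iv, p2)

# Every ciphertext block from the modified one onward differs, so a
# small write forces re-encrypting (and rewriting) the rest of the chunk.
assert all(c1[i:i + BLOCK] != c2[i:i + BLOCK] for i in range(0, 64, BLOCK))
```

That rewrite-amplification is what I mean by implications for chunk
size and for protocol changes that force writes along block (or
chunk) boundaries; a non-chaining mode trades that away for other
properties, which is why the mode question deserves explicit
discussion.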

Cheers,

-Tom
_______________________________________________
OpenAFS-devel mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-devel