Please forgive my organization's utter reliance on an email client that can't 
even quote correctly. Not to mention it keeps trying to make hyperlinks in a 
plain-text message.

> The technical specs are in a separate draft. We modified S/MIME but there is 
> nothing to stop this being applied to PGP.
> https://datatracker.ietf.org/doc/draft-schaad-plasma-cms/

Thanks. I'll put it on my list. Is this the correct place to provide feedback 
or is there another IETF list?

> Plasma does not attempt DRM in that it places no technical obligations on 
> the receipt when we release the decryption key. 
> ...
> So if you wanted to send content covered by a policy, you would get the same 
> protection and policy enforcement as if you published via the web.

I'm having a hard time wrapping my head around the notion of enforcement 
commingling with "no obligations on receipt" of the key. Enforcement is an 
obligation.

> Not all polices require a receipt list.

Why would you send something to someone if you weren't going to allow them to 
decrypt it? If you need finer-grained control than an email list affords, 
perhaps an email list is not an appropriate solution. Every email-related 
policy should require a recipient list, and in addition the policy's list 
should be kept synchronized with the email's recipient list (though the policy 
may refer to the same people by non-email identifiers). It will probably be 
hard enough just making sure that everyone's identifier somehow tracks back to 
their email address, and that you aren't giving away access to someone you 
didn't send the message to.

Section 4.1 seems too centered on the end user's ISP. I have a few email 
accounts, none of which are from my ISP (Gmail, Yahoo, work, etc.). I suggest 
changing this to "mail provider" or equivalent.

From section 4.4.1: 

  (5)  Frank clicks the "send email" button. The client signs the email
       using his smart card private key and includes the certificate
       with the appropriate public key for verification of the signature
       by recipients. The client then encrypts the message and obtains a
       token for the message from a server that will enable the
       recipients' servers to enforce the access control requirements
       for Frank, and sends the email to his email server.

Unless the client is encrypting with Frank's smart card private key, I think it 
needs to contact "a server that will enable the recipients' servers to enforce 
the access control requirements" to obtain the encryption key. 
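Concretely, the order of operations I'd expect in step (5) is: obtain or 
register the CEK with the policy server first, then encrypt, then send. A toy 
sketch of that flow under my own assumptions (the class, method names, and 
token scheme are invented, and the cipher is a throwaway stand-in, not real 
cryptography):

```python
import hashlib
import os

class PolicyServer:
    """Stand-in for the thing that escrows CEKs and hands back tokens
    that recipients' servers can later present for policy enforcement."""
    def __init__(self):
        self._escrow = {}

    def register(self, cek, policy_id):
        token = os.urandom(16).hex()
        self._escrow[token] = (cek, policy_id)
        return token

def toy_cipher(cek, data):
    # Throwaway symmetric keystream purely for illustration; applying it
    # twice with the same key round-trips the data.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(cek + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def send_flow(message, server, policy_id):
    cek = os.urandom(32)                     # fresh content-encryption key
    token = server.register(cek, policy_id)  # server escrows the CEK first...
    ciphertext = toy_cipher(cek, message)    # ...then the client encrypts
    return token, ciphertext                 # both travel with the email
```

The point being that the client cannot encrypt "and then" obtain the token: 
the key the server releases to recipients has to be the key the message was 
actually encrypted under.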

Editorial note: I think it would be helpful to define how many servers there 
are, what they are called, and the role for each one then use that nomenclature 
consistently throughout the use cases. I know you outlined a "generic access 
control model", and include a vocabulary, but the list appears to be missing 
the thing that makes and releases keys, which is kind of a cornerstone of this 
architecture. I'm also somewhat unclear as to how many of the "xx xx Point" 
items are servers, and which are clients.


  (7)  Once Grace has shown she passes the policy requirements, the PDEP
       releases the message Content Encryption Key (CEK) to Grace using
       her level 3 encryption certificate.

Does the PDEP do all the key management? (e.g., does Frank contact the PDEP to 
get a new key for his message?)

  (9)  The CEK Grace received has a Time To Live (TTL)  value which
       defines when Grace must discard the CEK and reapply for a new
       CEK.  

I'd like to point out that an adversary would just keep the CEK. Or make a 
plaintext/alternate ciphertext copy of the content while the CEK is still 
valid. 

Even with well behaved actors, in the case of attachments, it is likely that 
the attachment will have to be decrypted, saved on disk in plaintext, and 
opened with an external application. They have a "forever" copy even when the 
CEK expires. Why would they even notice that a CEK expired?

It is probably worth stressing in the security considerations that the only 
rational assumption is "once decrypted, always accessible". While the example 
is designed to show that confidentiality policies change over time, the 
solution proposed by this system only allows for an expansion of the pool of 
actors in possession of your sensitive data, never a reduction. Ever. 
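The unenforceability of the TTL fits in a few lines. This is my own 
illustration, not anything from the draft; the class names are invented:

```python
# A CEK TTL lives entirely in the client. A compliant client discards the
# key on expiry; a hostile (or merely lazy) one simply keeps it.

import time

class CompliantClient:
    def __init__(self, cek, ttl_seconds):
        self.cek = cek
        self.expires = time.time() + ttl_seconds

    def key(self):
        if time.time() >= self.expires:
            self.cek = None          # honors the TTL
        return self.cek

class HostileClient(CompliantClient):
    def key(self):
        return self.cek              # nothing forces it to forget

cek = b"\x00" * 32
good = CompliantClient(cek, ttl_seconds=0)
bad = HostileClient(cek, ttl_seconds=0)
time.sleep(0.01)

assert good.key() is None   # well-behaved client loses access on expiry
assert bad.key() == cek     # adversary keeps the "expired" key forever
```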

> Another aspect is Plasma allows for device attributes so before we 
> release the decryption key we can requires information about the 
> state of the environment. That way if the recipient is infected, we 
> can block decryption which is another plus for late binding. Again 
> his is option so for low assurance polices you probably not care. 

For high assurance policies, I would assume infected/malicious clients to be 
lying. Particularly if they were not administered by me, and probably even 
then. Should you ever expect any response other than "I'm healthy and you can 
trust me!"? Am I missing something? 

Thanks,
Bryce

_______________________________________________
Endymail mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/endymail
