[ https://issues.apache.org/jira/browse/HDFS-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14050320#comment-14050320 ]

Andrew Wang commented on HDFS-6605:
-----------------------------------

Hey Yi, thanks for the review! I'll handle it in the next patch rev, just a few 
replies:

bq.  I see there is no place calling FSDirectory#setFileEncryptionInfo...

Sure, I can do that. I was planning to do it later, since KeyProvider 
integration for generating EDEKs isn't finished pending HADOOP-10719, but I 
can add some stub usage for now.

bq. We forgot the algorithm block size?

Looking at the standard Ciphers in javax.crypto.Cipher [1], the key size is 
hardcoded for each of the String constants (e.g. AES/CBC/NoPadding (128)). 
Wikipedia also says that the AES block size is always 16 bytes regardless of 
key size [2]. I'm not sure how all of this generalizes to other ciphers, so 
I'm open to suggestions on how best to model it.

[1] http://docs.oracle.com/javase/7/docs/api/javax/crypto/Cipher.html
[2] http://en.wikipedia.org/wiki/Block_size_(cryptography)
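To illustrate the point above: in JCE the block size is a property of the 
algorithm, not the key, and can be read off the Cipher object directly. A 
minimal sketch (standalone, not HDFS code; the class name is made up):

```java
import javax.crypto.Cipher;

public class BlockSizeDemo {
    public static void main(String[] args) throws Exception {
        // The "(128)" annotations in the Cipher javadoc refer to the key
        // sizes every JRE must support; the block size comes from the
        // algorithm itself and does not vary with key length.
        Cipher aes = Cipher.getInstance("AES/CBC/NoPadding");
        System.out.println("AES block size: " + aes.getBlockSize()); // 16

        // DES shows a different algorithm yielding a different block size.
        Cipher des = Cipher.getInstance("DES/CBC/NoPadding");
        System.out.println("DES block size: " + des.getBlockSize()); // 8
    }
}
```

So a CipherSuite enum could carry the block size per algorithm rather than 
per key length.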

> Client server negotiation of cipher suite
> -----------------------------------------
>
>                 Key: HDFS-6605
>                 URL: https://issues.apache.org/jira/browse/HDFS-6605
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: security
>    Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>         Attachments: hdfs-6605.001.patch
>
>
> For compatibility purposes, the client and server should negotiate what 
> cipher suite to use based on their respective capabilities. This is also a 
> way for the server to reject old clients that do not support encryption.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
