[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15905411#comment-15905411
 ] 

Yongjun Zhang commented on HADOOP-14104:
----------------------------------------

Thanks [~andrew.wang].

{quote}
The FileEncryptionInfo has a cipher type field. If the client doesn't support 
the cipher, then it can't read/write the file, and will throw an exception
{quote}
Does the exception get thrown when we do {{final CryptoCodec codec = 
getCryptoCodec(conf, feInfo);}}, or later when we write the file? I would 
prefer the former.
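For illustration, a fail-fast check at codec-resolution time might look 
roughly like the sketch below. This is a plain-Java stand-in, not the actual 
{{CryptoCodec}}/{{FileEncryptionInfo}} API; the class name, method signature, 
and suite strings are all hypothetical:

```java
import java.util.Set;

// Hypothetical stand-in for resolving a codec from a cipher-suite name.
// Not the real Hadoop CryptoCodec API.
public class CodecCheck {
    // Suites this client supports (illustrative value only).
    private static final Set<String> SUPPORTED = Set.of("AES/CTR/NoPadding");

    // Fail fast: throw at codec-resolution time rather than on the first
    // read/write, so the caller sees the problem immediately.
    public static String getCryptoCodec(String cipherSuite) {
        if (!SUPPORTED.contains(cipherSuite)) {
            throw new UnsupportedOperationException(
                "No codec available for cipher suite: " + cipherSuite);
        }
        return cipherSuite; // stand-in for returning a real codec instance
    }
}
```

The point of the sketch is only where the exception surfaces: at the 
{{getCryptoCodec}} call, not buried inside a later write.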

BTW [~rushabh.shah], 

In rev2, {{DistributedFileSystem#addDelegationTokens}} puts the keyProvider 
entry into the secretMap only when {{dfs.isHDFSEncryptionEnabled()}} is true. 
{{dfs.isHDFSEncryptionEnabled()}} calls {{getServerDefaults}} to find out 
whether encryption is enabled when there is no keyProvider entry in the 
secretMap. This means that if encryption is not enabled, we never put an 
entry into the secretMap, and every task of a MapReduce job will call 
getServerDefaults, which is exactly what we are trying to avoid.

I suggest adding an empty entry to the secretMap after we call 
getServerDefaults for the first time and find that encryption is disabled. 
Later, if we find an entry in the map and it is empty, we know encryption is 
disabled; if it is non-empty, encryption is enabled. This is what I tried to 
do in rev3 to avoid the extra getServerDefaults call when encryption is 
disabled. For your reference.
 
Thanks.
> Client should always ask namenode for kms provider path.
> --------------------------------------------------------
>
>                 Key: HADOOP-14104
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14104
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: kms
>            Reporter: Rushabh S Shah
>            Assignee: Rushabh S Shah
>         Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch, 
> HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch
>
>
> According to current implementation of kms provider in client conf, there can 
> only be one kms.
> In multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get kms token for local cluster.
> Not sure whether the target version is correct or not.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
