[
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891394#comment-15891394
]
Yongjun Zhang commented on HADOOP-14104:
----------------------------------------
Hi [~shahrs87],
Thanks much for working on this issue!
I have a few nits and a question about the patch:
1. Since this is a public API, we should introduce a new constructor with the
additional parameter keyProviderUri to stay backward compatible, instead of
modifying the existing one.
{code}
@InterfaceAudience.Public
@InterfaceStability.Evolving
public class FsServerDefaults implements Writable {
......
public FsServerDefaults(long blockSize, int bytesPerChecksum,
int writePacketSize, short replication, int fileBufferSize,
boolean encryptDataTransfer, long trashInterval,
DataChecksum.Type checksumType,
String keyProviderUri) {
{code}
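For example, the backward-compatible shape could look like the following. This is a simplified stand-in class, not the real FsServerDefaults fields; the point is just that the existing constructor stays and delegates to the new overload with an empty provider URI:

```java
// Sketch of the backward-compatible overload idea; field names and types
// are illustrative, not the actual FsServerDefaults source.
class FsServerDefaultsSketch {
  private final long blockSize;
  private final String keyProviderUri;

  // Existing public constructor is kept; callers compiled against the old
  // API keep working. It delegates with an empty keyProviderUri.
  FsServerDefaultsSketch(long blockSize) {
    this(blockSize, "");
  }

  // New constructor carrying the additional keyProviderUri parameter.
  FsServerDefaultsSketch(long blockSize, String keyProviderUri) {
    this.blockSize = blockSize;
    this.keyProviderUri = keyProviderUri;
  }

  long getBlockSize() {
    return blockSize;
  }

  String getKeyProviderUri() {
    return keyProviderUri;
  }
}
```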
2. Suggest adding a KEY_PROVIDER_URI_DEFAULT constant to replace the "" here (in
both FtpConfigKeys.java and LocalConfigKeys.java):
{code}
protected static FsServerDefaults getServerDefaults() throws IOException {
return new FsServerDefaults(
BLOCK_SIZE_DEFAULT,
......
CHECKSUM_TYPE_DEFAULT,
"");
{code}
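That is, a named default alongside the other *_DEFAULT constants, so the intent of the empty value is documented rather than left as a bare "" literal. A minimal sketch (the BLOCK_SIZE_DEFAULT value here is a hypothetical stand-in, not the real config key):

```java
// Sketch: name the empty default like the existing *_DEFAULT constants.
class ConfigKeysSketch {
  static final String KEY_PROVIDER_URI_DEFAULT = "";
  // Hypothetical stand-in for one of the real server-default fields.
  static final long BLOCK_SIZE_DEFAULT = 64 * 1024 * 1024;

  static String serverDefaultKeyProviderUri() {
    return KEY_PROVIDER_URI_DEFAULT; // passed where "" is passed today
  }
}
```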
3. The following method swallows IOException and returns false. Suggest
removing the {{@throws IOException}} tag and adding a comment in the catch
block explaining why the exception can be ignored and false returned here.
Also add a space after the word {{catch}}.
{code}
/**
* Probe for encryption enabled on this filesystem.
* See {@link DFSUtilClient#isHDFSEncryptionEnabled(Configuration)}
* @return true if encryption is enabled
* @throws IOException
*/
public boolean isHDFSEncryptionEnabled() {
try {
return DFSUtilClient.isHDFSEncryptionEnabled(getKeyProviderUri());
} catch(IOException ioe) {
return false;
}
}
{code}
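Roughly, the cleaned-up method could look like this. The sketch uses a hypothetical stand-in for {{DFSUtilClient.isHDFSEncryptionEnabled}} so it is self-contained; the actual check lives in that class:

```java
import java.io.IOException;

// Sketch of the suggested cleanup: no @throws tag on the non-throwing
// method, a comment explaining the swallowed exception, space after catch.
class EncryptionProbeSketch {
  // Hypothetical stand-in for DFSUtilClient.isHDFSEncryptionEnabled.
  static boolean isHDFSEncryptionEnabled(String keyProviderUri)
      throws IOException {
    if (keyProviderUri == null) {
      throw new IOException("cannot resolve key provider URI");
    }
    return !keyProviderUri.isEmpty();
  }

  /**
   * Probe for encryption enabled on this filesystem.
   * @return true if encryption is enabled
   */
  static boolean isEncryptionEnabled(String keyProviderUri) {
    try {
      return isHDFSEncryptionEnabled(keyProviderUri);
    } catch (IOException ioe) {
      // The key provider URI could not be resolved, so there is no usable
      // key provider; report encryption as disabled instead of propagating.
      return false;
    }
  }
}
```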
4. Line 84 of KeyProviderCache.java (not introduced by your change):
{code}
LOG.error("Could not create KeyProvider for DFSClient !!", e.getCause());
{code}
Suggest replacing e.getCause() with e, so we can see the full stack trace.
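To illustrate the difference (a self-contained sketch, not the actual KeyProviderCache code): passing {{e}} to the logger prints the outer stack frames plus the whole "Caused by" chain, while passing {{e.getCause()}} prints only the inner exception's stack and hides where in the caller the failure surfaced.

```java
// Sketch: the wrapping exception carries both its own stack frames and the
// cause chain; getCause() alone drops the outer frames.
class CauseChainSketch {
  static Exception buildWrapped() {
    Exception cause = new IllegalStateException("provider init failed");
    return new RuntimeException(
        "Could not create KeyProvider for DFSClient !!", cause);
  }

  public static void main(String[] args) {
    Exception e = buildWrapped();
    // LOG.error(msg, e)            -> outer stack plus "Caused by:" chain
    // LOG.error(msg, e.getCause()) -> only the inner exception's stack
    System.out.println(e.getMessage());
    System.out.println(e.getCause().getMessage());
  }
}
```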
5. Currently {{getServerDefaults()}} contacts the NN every hour to find out
whether the keyprovider has been updated. If the keyprovider changes within
that hour, client code may hit an exception. I wonder if we have a mechanism to
handle the exception, update the keyprovider, and try again?
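One possible shape for such a mechanism, purely as a sketch of the idea and not existing Hadoop code: on a provider-related failure, refresh the cached value from the NN and retry the operation once.

```java
import java.io.IOException;
import java.util.function.Supplier;

// Sketch (hypothetical, not existing Hadoop code): cache a value fetched
// from the NN, and on an IOException assume it may be stale, re-fetch it,
// and retry the operation once.
class RefreshOnFailureSketch<T> {
  interface Op<T, R> {
    R apply(T value) throws IOException;
  }

  private T cached;
  private final Supplier<T> fetchFromNamenode;

  RefreshOnFailureSketch(Supplier<T> fetchFromNamenode) {
    this.fetchFromNamenode = fetchFromNamenode;
    this.cached = fetchFromNamenode.get();
  }

  <R> R callWithRefresh(Op<T, R> op) throws IOException {
    try {
      return op.apply(cached);
    } catch (IOException maybeStale) {
      // The cached value may be out of date; refresh once and retry.
      cached = fetchFromNamenode.get();
      return op.apply(cached);
    }
  }
}
```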
Thanks.
> Client should always ask namenode for kms provider path.
> --------------------------------------------------------
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
> Issue Type: Improvement
> Components: kms
> Reporter: Rushabh S Shah
> Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch
>
>
> According to the current implementation of the kms provider in client conf,
> there can be only one kms.
> In multi-cluster environment, if a client is reading encrypted data from
> multiple clusters it will only get kms token for local cluster.
> Not sure whether the target version is correct or not.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]