[
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16534397#comment-16534397
]
Xiao Chen commented on HADOOP-14104:
------------------------------------
Hi [~aajayaraj],
As the discussion above indicates, the way to make this work again after taking
this change is to configure the two clusters with different HDFS nameservices.
You can do this either by changing one cluster's nameservice altogether, or by
hand-crafting a specific configuration for the distcp job and passing it in
with --config (see the sketch below).
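
For illustration, a minimal sketch of the second option, assuming the remote
cluster is exposed under a distinct nameservice ID ("remotens") in a
hand-crafted config directory; the directory path, nameservice IDs and hosts
below are hypothetical:

    # Copy the client configuration and edit it so the two clusters use
    # distinct nameservice IDs (hypothetical paths and IDs).
    cp -r /etc/hadoop/conf /tmp/distcp-conf

    # In /tmp/distcp-conf/hdfs-site.xml, declare both nameservices, e.g.
    #   dfs.nameservices = localns,remotens
    # and add the matching dfs.ha.namenodes.remotens /
    # dfs.namenode.rpc-address.remotens.* entries for the remote cluster.

    # Run distcp against the hand-crafted configuration.
    hadoop --config /tmp/distcp-conf distcp \
        hdfs://localns/path/src hdfs://remotens/path/dst

The point of the distinct IDs is that the distcp job can then address the two
clusters, and their delegation tokens, separately rather than treating them as
the same service.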
> Client should always ask namenode for kms provider path.
> --------------------------------------------------------
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
> Issue Type: Improvement
> Components: kms
> Reporter: Rushabh S Shah
> Assignee: Rushabh S Shah
> Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14104-branch-2.8.patch,
> HADOOP-14104-branch-2.patch, HADOOP-14104-trunk-v1.patch,
> HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch,
> HADOOP-14104-trunk-v4.patch, HADOOP-14104-trunk-v5.patch,
> HADOOP-14104-trunk.patch
>
>
> In the current implementation, the KMS provider path comes from the client
> configuration, so there can be only one KMS.
> In a multi-cluster environment, a client reading encrypted data from multiple
> clusters will therefore only get a KMS token for the local cluster.
> Not sure whether the target version is correct or not.
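
(For context, a sketch of the client-side setting the description refers to;
the property name is the Hadoop 2.x key, and the KMS host and port are
hypothetical:)

    <!-- hdfs-site.xml on the client: only a single provider URI can be set
         here, hence one KMS per client before this change. -->
    <property>
      <name>dfs.encryption.key.provider.uri</name>
      <value>kms://https@kms.example.com:9600/kms</value>
    </property>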