[
https://issues.apache.org/jira/browse/HDFS-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376733#comment-14376733
]
Haohui Mai commented on HDFS-7037:
----------------------------------
[~atm], sorry for the delay; I've been busy with 2.7 blockers.
bq. Note that in the latest patch allowing connections to fall back to an
insecure cluster is configurable, and disabled by default.
Yes, you can disable it through configuration, but since this is a global
configuration that affects every HFTP connection, misconfiguration is still a
concern from a practical point of view (which I raised in HDFS-6776). I think
[~cnauroth] articulated the issue well in
https://issues.apache.org/jira/browse/HADOOP-11321?focusedCommentId=14225238&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14225238:
{quote}
...
This is pretty standard Hadoop code review feedback. As a result, Hadoop now
has 762 configuration properties. That's from a grep -c of core-default.xml,
hdfs-default.xml, yarn-default.xml and mapred-default.xml, so the count doesn't
include undocumented properties.
...
{quote}
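As a minimal sketch of how a count like the quoted 762 can be reproduced: each documented property in a {{*-default.xml}} file has exactly one {{<name>}} element, so {{grep -c '<name>'}} over the four default files and summing the per-file counts gives the total. The real Hadoop files are not reproduced here, so this demonstrates the approach on a tiny sample file:

```shell
# Each documented property carries one <name> element, so counting
# lines that match '<name>' counts properties. Sample file stands in
# for core-default.xml / hdfs-default.xml / yarn-default.xml /
# mapred-default.xml from the quoted comment.
cat > sample-default.xml <<'EOF'
<configuration>
  <property><name>fs.defaultFS</name><value>file:///</value></property>
  <property><name>io.file.buffer.size</name><value>4096</value></property>
</configuration>
EOF
grep -c '<name>' sample-default.xml   # prints 2
```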
Also, the fallback behavior is problematic from a security point of view. Chris
has also proposed HADOOP-11701 to limit the impact of potential
misconfiguration. It is not an ideal solution, but it is a practical one given
the constraints on backward compatibility. Maybe we can do something similar in
this jira.
To summarize:
* -1 on putting fallback logic in FileSystem in general, due to potential
security vulnerabilities.
* Given that HFTP is deprecated and used only in limited cases, I'm willing to
change it to -0 if there are solutions like HADOOP-11701 to limit the impact of
such a configuration.
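For context on what a "global fallback switch" looks like in practice: Hadoop's RPC client already has such a knob, {{ipc.client.fallback-to-simple-auth-allowed}}. The fragment below is illustrative only; whether this jira's patch reuses that key or defines its own HFTP-specific property is not shown here:

```xml
<!-- Illustrative core-site.xml fragment. A single global flag like this
     governs fallback for every client connection in the process, which is
     why one misconfigured value can weaken security cluster-wide. -->
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>false</value> <!-- default: never fall back to SIMPLE auth -->
</property>
```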
> Using distcp to copy data from insecure to secure cluster via hftp doesn't
> work (branch-2 only)
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-7037
> URL: https://issues.apache.org/jira/browse/HDFS-7037
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: security, tools
> Affects Versions: 2.6.0
> Reporter: Yongjun Zhang
> Assignee: Yongjun Zhang
> Attachments: HDFS-7037.001.patch
>
>
> This is a branch-2 only issue since hftp is only supported there.
> Issuing "distcp hftp://<insecureCluster> hdfs://<secureCluster>" failed with
> the following exception:
> {code}
> 14/09/13 22:07:40 INFO tools.DelegationTokenFetcher: Error when dealing remote token:
> java.io.IOException: Error when dealing remote token: Internal Server Error
>         at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:375)
>         at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>         at org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:472)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:501)
>         at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>         at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
>         at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
>         at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>         at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:81)
>         at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342)
>         at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
>         at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.tools.DistCp.main(DistCp.java:390)
> 14/09/13 22:07:40 WARN security.UserGroupInformation: PriviledgedActionException as:[email protected] (auth:KERBEROS) cause:java.io.IOException: Unable to obtain remote token
> 14/09/13 22:07:40 ERROR tools.DistCp: Exception encountered
> java.io.IOException: Unable to obtain remote token
>         at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:249)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>         at org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:472)
>         at org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:501)
>         at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>         at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
>         at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
>         at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>         at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:81)
>         at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342)
>         at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
>         at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.tools.DistCp.main(DistCp.java:390)
> Caused by: java.io.IOException: Error when dealing remote token: Internal Server Error
>         at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:375)
>         at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
>         ... 22 more
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)