[
https://issues.apache.org/jira/browse/HDFS-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14804611#comment-14804611
]
Aaron T. Myers commented on HDFS-7037:
--------------------------------------
bq. Please correct me if I misunderstood. (1) The current behavior of RPC /
WebHDFS is less than ideal and is vulnerable to attack. (2) You argue that
the proposed change makes HFTP vulnerable via the fallback, but it is no worse
than what we have in RPC / WebHDFS today.
Correct.
bq. As an analogy, it seems to me that the argument is that it's okay to have a
broken window given that we have many broken windows already?
I don't think that's a reasonable analogy. The point you were making is that
this change introduces a possible security vulnerability. I'm saying that this
is demonstrably not a security vulnerability, since we consciously chose to add
this capability to other interfaces. HADOOP-11701 will make things configurably
more secure for all interfaces, but that's a separate discussion.
bq. My question is: is there a need to create yet another workaround, given
that we know it is prone to security vulnerabilities?
Like I said above, this should not be considered a security vulnerability. If
it were, then we should never have added this capability to WebHDFS/RPC in the
first place, and we should be reverting it from WebHDFS/RPC right now.
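For concreteness, this is roughly what that existing capability looks like from
the secure side today. A sketch only, using the standard
{{ipc.client.fallback-to-simple-auth-allowed}} key; hostnames, ports, and paths
are placeholders:
{code}
# Run from the secure cluster. The -D generic option lets the client fall
# back to simple auth when the insecure source cluster cannot use Kerberos.
hadoop distcp \
  -D ipc.client.fallback-to-simple-auth-allowed=true \
  webhdfs://insecure-nn.example.com:50070/src \
  hdfs://secure-nn.example.com:8020/dst
{code}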
bq. I'd like to understand your use cases better. Can you please elaborate on
why you'll need another workaround in HFTP, given that you have already put the
workaround in WebHDFS?
Simple: because some users rely on HFTP rather than WebHDFS, specifically for
distcp from older clusters.
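To make that use case concrete, the hypothetical invocation below is what those
users would run once HFTP gets the same treatment. This assumes the HFTP path
would honor the same fallback key as WebHDFS/RPC, which is exactly the behavior
under discussion here; hosts, ports, and paths are again placeholders:
{code}
# Pull from an older, insecure cluster that only exposes HFTP.
hadoop distcp \
  -D ipc.client.fallback-to-simple-auth-allowed=true \
  hftp://old-insecure-nn.example.com:50070/src \
  hdfs://secure-nn.example.com:8020/dst
{code}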
> Using distcp to copy data from insecure to secure cluster via hftp doesn't
> work (branch-2 only)
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-7037
> URL: https://issues.apache.org/jira/browse/HDFS-7037
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: security, tools
> Affects Versions: 2.6.0
> Reporter: Yongjun Zhang
> Assignee: Yongjun Zhang
> Labels: BB2015-05-TBR
> Attachments: HDFS-7037.001.patch
>
>
> This is a branch-2 only issue since hftp is only supported there.
> Issuing "distcp hftp://<insecureCluster> hdfs://<secureCluster>" gave the
> following failure exception:
> {code}
> 14/09/13 22:07:40 INFO tools.DelegationTokenFetcher: Error when dealing remote token:
> java.io.IOException: Error when dealing remote token: Internal Server Error
>     at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:375)
>     at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>     at org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:472)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:501)
>     at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>     at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
>     at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
>     at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>     at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:81)
>     at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342)
>     at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
>     at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>     at org.apache.hadoop.tools.DistCp.main(DistCp.java:390)
> 14/09/13 22:07:40 WARN security.UserGroupInformation: PriviledgedActionException as:[email protected] (auth:KERBEROS) cause:java.io.IOException: Unable to obtain remote token
> 14/09/13 22:07:40 ERROR tools.DistCp: Exception encountered
> java.io.IOException: Unable to obtain remote token
>     at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:249)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:252)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:247)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:247)
>     at org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:140)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:337)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:324)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:457)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:472)
>     at org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:501)
>     at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>     at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
>     at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
>     at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>     at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:81)
>     at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342)
>     at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
>     at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>     at org.apache.hadoop.tools.DistCp.main(DistCp.java:390)
> Caused by: java.io.IOException: Error when dealing remote token: Internal Server Error
>     at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:375)
>     at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
>     ... 22 more
> {code}
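> For what it's worth, the failing token fetch can be reproduced in isolation
> with the {{hdfs fetchdt}} tool, which drives the same DelegationTokenFetcher
> code path shown in the trace above (a sketch; host and port are placeholders):
> {code}
> # From the secure cluster, request a delegation token from the insecure
> # NameNode over HTTP, as the HFTP client does before listing files.
> hdfs fetchdt --webservice http://insecure-nn.example.com:50070 /tmp/remote.token
> {code}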