[
https://issues.apache.org/jira/browse/HDFS-2450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13132866#comment-13132866
]
Suresh Srinivas commented on HDFS-2450:
---------------------------------------
# FileSystem.java
#* Please cache the canonical URIs for the default URI and the FileSystem URI,
instead of recomputing them on every call to checkPath().
#* Please add more detail to the getCanonicalUri() method's documentation about
what is expected of the URI's hostname (an FQDN).
#* The code duplication in the following snippet can be avoided by factoring it
into a protected static method (see the sketch after this list, which also
covers the cached canonical URI).
{noformat}
+ if (thisScheme.equalsIgnoreCase(defaultUri.getScheme())) {
+ if (thisAuthority.equalsIgnoreCase(defaultUri.getAuthority()))
+ return;
+ defaultUri = NetUtils.getCanonicalUri(defaultUri, getDefaultPort());
+ thisAuthority = getCanonicalUri().getAuthority();
+ if (thisAuthority.equalsIgnoreCase(defaultUri.getAuthority()))
+ return;
{noformat}
# DistributedFileSystem.java - The existing checkPath() and makeQualified()
methods allowed only the default port (as indicated in their javadoc). I am not
sure why that was done. Why does this patch remove that restriction?
# In the tests, how does host become host.a.b?
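A minimal sketch of points 1 and 3 above (the class, field, and method names
here are illustrative, not the actual patch): the canonical URI is computed
once and cached, and the duplicated authority comparison is factored into a
protected static helper. It reuses the NetUtils.getCanonicalUri(URI, int) call
that already appears in the quoted hunk.
{noformat}
import java.net.URI;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.net.NetUtils;

// Sketch only: names and structure are illustrative, not the actual patch.
public abstract class CanonicalCheckSketch extends FileSystem {
  private URI canonicalUri;  // cached canonical (FQDN + default port) form of getUri()

  /** Compute the canonical URI once instead of on every checkPath() call. */
  protected URI canonicalUri() {
    if (canonicalUri == null) {
      canonicalUri = NetUtils.getCanonicalUri(getUri(), getDefaultPort());
    }
    return canonicalUri;
  }

  /**
   * True if the two URIs name the same authority, falling back to the
   * canonicalized (FQDN, default port filled in) forms when the raw
   * authorities differ. One way to replace the duplicated comparison
   * quoted in the snippet above.
   */
  protected static boolean sameAuthority(URI a, URI b, int defaultPort) {
    String rawA = a.getAuthority();
    String rawB = b.getAuthority();
    if (rawA != null && rawA.equalsIgnoreCase(rawB)) {
      return true;
    }
    String canonA = NetUtils.getCanonicalUri(a, defaultPort).getAuthority();
    String canonB = NetUtils.getCanonicalUri(b, defaultPort).getAuthority();
    return canonA != null && canonA.equalsIgnoreCase(canonB);
  }
}
{noformat}
With a helper along these lines, checkPath() could reduce to a scheme check
plus two sameAuthority() calls, one against the path URI and one against the
default URI.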
> Only complete hostname is supported to access data via hdfs://
> --------------------------------------------------------------
>
> Key: HDFS-2450
> URL: https://issues.apache.org/jira/browse/HDFS-2450
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 0.20.205.0
> Reporter: Rajit Saha
> Assignee: Daryn Sharp
> Attachments: HDFS-2450-1.patch, HDFS-2450.patch
>
>
> If my complete hostname is host1.abc.xyz.com, only the complete hostname can
> be used to access data via hdfs://
> I am running the following on a 0.20.205 client to get data from a 0.20.205 NN (host1):
> $hadoop dfs -copyFromLocal /etc/passwd hdfs://host1/tmp
> copyFromLocal: Wrong FS: hdfs://host1/tmp, expected: hdfs://host1.abc.xyz.com
> Usage: java FsShell [-copyFromLocal <localsrc> ... <dst>]
> $hadoop dfs -copyFromLocal /etc/passwd hdfs://host1.abc/tmp/
> copyFromLocal: Wrong FS: hdfs://host1.abc/tmp/, expected:
> hdfs://host1.abc.xyz.com
> Usage: java FsShell [-copyFromLocal <localsrc> ... <dst>]
> $hadoop dfs -copyFromLocal /etc/passwd hdfs://host1.abc.xyz/tmp/
> copyFromLocal: Wrong FS: hdfs://host1.abc.xyz/tmp/, expected:
> hdfs://host1.abc.xyz.com
> Usage: java FsShell [-copyFromLocal <localsrc> ... <dst>]
> Only the following is supported:
> $hadoop dfs -copyFromLocal /etc/passwd hdfs://host1.abc.xyz.com/tmp/
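For reference, a minimal sketch of the canonicalization behind the fix,
assuming the NetUtils.getCanonicalUri(URI, int) helper used in the patch; the
class name and the 8020 port here are illustrative.
{noformat}
import java.net.URI;
import org.apache.hadoop.net.NetUtils;

public class CanonicalizeShortHost {
  public static void main(String[] args) throws Exception {
    // A short-form URI such as the ones rejected above.
    URI shortForm = new URI("hdfs://host1/tmp");
    // 8020 is the usual NameNode RPC port; FileSystem uses getDefaultPort().
    URI canonical = NetUtils.getCanonicalUri(shortForm, 8020);
    // If DNS resolves host1 to host1.abc.xyz.com, this prints
    // hdfs://host1.abc.xyz.com:8020/tmp, which matches the expected authority.
    System.out.println(canonical);
  }
}
{noformat}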