[
https://issues.apache.org/jira/browse/HDFS-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771275#comment-13771275
]
Hudson commented on HDFS-5122:
------------------------------
SUCCESS: Integrated in Hadoop-trunk-Commit #4438 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/4438/])
Move HDFS-5122 from Release 2.1.1-beta to Release 2.3.0 in CHANGES.txt (jing9:
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1524581)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> Support failover and retry in WebHdfsFileSystem for NN HA
> ---------------------------------------------------------
>
> Key: HDFS-5122
> URL: https://issues.apache.org/jira/browse/HDFS-5122
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: ha, webhdfs
> Affects Versions: 2.1.0-beta
> Reporter: Arpit Gupta
> Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5122.001.patch, HDFS-5122.002.patch,
> HDFS-5122.003.patch, HDFS-5122.004.patch, HDFS-5122.patch
>
>
> Bug reported by [~arpitgupta]:
> If dfs.nameservices is set to arpit,
> {code}
> hdfs dfs -ls webhdfs://arpit/tmp
> {code}
> does not work; the exact active NameNode hostname must be provided instead. On
> an HA cluster, a DFS client should not need to know which NameNode is
> currently active.
> To fix this, we try to:
> 1) let WebHdfsFileSystem support logical NN service names
> 2) add failover-and-retry functionality in WebHdfsFileSystem for NN HA
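To make the failover-and-retry idea in point 2) concrete, here is a minimal, self-contained Java sketch of that kind of loop. Everything here is a hypothetical illustration, not the actual patch: the class, the method `callWithFailover`, and the use of `Supplier` stand in for WebHdfsFileSystem's real mechanism of resolving the logical nameservice to NameNode HTTP addresses and applying Hadoop's retry policies.

```java
import java.util.List;
import java.util.function.Supplier;

/**
 * Hedged sketch of a failover-and-retry loop of the kind HDFS-5122 adds
 * to WebHdfsFileSystem. All names are hypothetical illustrations.
 */
public class FailoverRetrySketch {

    /** Try each candidate NameNode in turn, retrying the whole list. */
    static <T> T callWithFailover(List<Supplier<T>> namenodes, int retries) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= retries; attempt++) {
            for (Supplier<T> nn : namenodes) {
                try {
                    return nn.get();           // success on the active NN
                } catch (RuntimeException e) { // e.g. the standby rejected the call
                    last = e;                  // fail over to the next NN
                }
            }
        }
        throw last;                            // all attempts exhausted
    }

    public static void main(String[] args) {
        // Simulate one standby NN that rejects the call and one active NN.
        Supplier<String> standby = () -> { throw new RuntimeException("standby"); };
        Supplier<String> active  = () -> "contents of /tmp";
        System.out.println(callWithFailover(List.of(standby, active), 1));
    }
}
```

With this shape, a client addressing the logical nameservice never needs to know which host is active: the standby's rejection is caught and the next candidate is tried transparently.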
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira