[
https://issues.apache.org/jira/browse/HDFS-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chuan Liu resolved HDFS-5227.
-----------------------------
Resolution: Not A Problem
I noticed we have the following code in the {{Namenode.initializeGenericKeys()}}
method. This method is executed before the {{Namenode.initialize()}} method in
the namenode constructor. Our problem should already be handled by the code
below. I think we ran into failures previously because "dfs.namenode.rpc-address"
was not properly configured.
{code:java}
// If the RPC address is set use it to (re-)configure the default FS
if (conf.get(DFS_NAMENODE_RPC_ADDRESS_KEY) != null) {
  URI defaultUri = URI.create(HdfsConstants.HDFS_URI_SCHEME + "://"
      + conf.get(DFS_NAMENODE_RPC_ADDRESS_KEY));
  conf.set(FS_DEFAULT_NAME_KEY, defaultUri.toString());
  LOG.debug("Setting " + FS_DEFAULT_NAME_KEY + " to " + defaultUri.toString());
}
{code}
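To illustrate, here is a minimal, self-contained sketch of the effect of that
block on a {{Configuration}} object. It re-applies the same logic with the
literal key strings behind {{DFS_NAMENODE_RPC_ADDRESS_KEY}} and
{{FS_DEFAULT_NAME_KEY}}, and the host/port values are hypothetical:
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

public class DefaultFsRewriteSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // fs.defaultFS initially points at some other file system, while
    // dfs.namenode.rpc-address names the namenode actually running in this
    // cluster (both values are made up for this sketch).
    conf.set("fs.defaultFS", "hdfs://other-cluster:8020");
    conf.set("dfs.namenode.rpc-address", "nn-host.example.com:8020");

    // Same re-configuration as in initializeGenericKeys(): if the RPC address
    // is set, rewrite the default FS to point at it.
    String rpcAddress = conf.get("dfs.namenode.rpc-address");
    if (rpcAddress != null) {
      URI defaultUri = URI.create("hdfs://" + rpcAddress);
      conf.set("fs.defaultFS", defaultUri.toString());
    }

    // Prints hdfs://nn-host.example.com:8020
    System.out.println(conf.get("fs.defaultFS"));
  }
}
{code}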
As for the unit test failure above: the federated namenode addresses in
MiniDFSCluster will be wrong in TestDeleteBlockPool if {{getAddress()}} is
modified according to the patch.
Resolving this one as "Not A Problem" because the behavior is already as
desired, i.e. {{getAddress()}} will return the "dfs.namenode.rpc-address" value
if the namenode is initialized and "dfs.namenode.rpc-address" is configured.
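A rough sketch of that end-to-end behavior, assuming (as described in the issue
below) that {{NameNode.getAddress(Configuration)}} derives its result from the
default file system URI, and again using hypothetical host names:
{code:java}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.namenode.NameNode;

public class GetAddressSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // dfs.namenode.rpc-address is configured (hypothetical host/port); during
    // namenode startup initializeGenericKeys() rewrites fs.defaultFS from it,
    // which is emulated here by setting both keys to matching values.
    conf.set("dfs.namenode.rpc-address", "nn-host.example.com:8020");
    conf.set("fs.defaultFS", "hdfs://nn-host.example.com:8020");

    // getAddress() is derived from the default FS URI, which now carries the
    // RPC address, so the result matches the configured dfs.namenode.rpc-address.
    InetSocketAddress addr = NameNode.getAddress(conf);
    System.out.println(addr);  // expected to reflect nn-host.example.com:8020
  }
}
{code}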
> Namenode.getAddress() should return namenode rpc-address
> --------------------------------------------------------
>
> Key: HDFS-5227
> URL: https://issues.apache.org/jira/browse/HDFS-5227
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.0.0, 2.1.1-beta
> Reporter: Chuan Liu
> Assignee: Chuan Liu
> Attachments: HDFS-5227-trunk.patch
>
>
> Currently, {{Namenode.getAddress(Configuration conf)}} will return the default
> file system address as its result. The correct behavior would be to return the
> config value of "dfs.namenode.rpc-address" if it is present in the
> configuration. Otherwise the namenode will fail to start if the default file
> system is configured to a file system other than the one running in the
> cluster. We have a similar issue in the 1.0 code base; that JIRA is HDFS-4320.
> The previous JIRA is closed and we cannot reopen it, so this new one tracks
> the issue in trunk.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira