[
https://issues.apache.org/jira/browse/HDFS-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14002565#comment-14002565
]
Jing Zhao commented on HDFS-6420:
---------------------------------
Thanks for the comments, [~aw].
I think that for decommission/recommission to work properly, users should make
sure the include/exclude files are updated correctly on both NameNodes. For
decommission, if the include file has not been updated on one of the NameNodes,
the corresponding DataNode will be marked as disallowed on that NN. This is just
like the current situation where a user sends the refreshNodes request to only
one of the NNs (either the one specified with the -fs option, or the first NN
listed in the configuration property "dfs.ha.namenodes.$nameserviceId") but has
not updated the hosts files correctly (perhaps because the user does not know
which NN the command will be sent to).
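For illustration, here is a rough sketch (not the posted patch; the nameservice
id "ns1" is a placeholder, and the NN ids and RPC addresses are taken from the
configuration) of what issuing refreshNodes against every NameNode of a
nameservice could look like, resolving each NN from the
"dfs.ha.namenodes.$nameserviceId" and "dfs.namenode.rpc-address.*" keys:
{code}
// Rough sketch only -- not the HDFS-6420 patch. The nameservice id "ns1" is a
// placeholder; the NN ids and their RPC addresses come from the configuration.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RefreshAllNameNodes {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String nsId = "ns1";  // placeholder nameservice id
    // "dfs.ha.namenodes.<nameserviceId>" lists the NN ids, e.g. "nn1,nn2".
    for (String nnId : conf.getTrimmedStrings("dfs.ha.namenodes." + nsId)) {
      // Each NN publishes its RPC address under "dfs.namenode.rpc-address.<nsId>.<nnId>".
      String rpcAddr = conf.get("dfs.namenode.rpc-address." + nsId + "." + nnId);
      // Connect to this NameNode directly (non-logical URI) and refresh its host lists.
      DistributedFileSystem dfs =
          (DistributedFileSystem) FileSystem.get(URI.create("hdfs://" + rpcAddr), conf);
      dfs.refreshNodes();
      System.out.println("refreshNodes sent to " + nnId + " (" + rpcAddr + ")");
    }
  }
}
{code}
The real DFSAdmin code may go through the existing proxy utilities instead of
creating a FileSystem per NN, but the effect on each NameNode's host lists would
be the same.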
Have you seen any scenarios where sending refreshNodes to both NameNodes by
default would cause problems, [~aw]?
> DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup
> ------------------------------------------------------------------
>
> Key: HDFS-6420
> URL: https://issues.apache.org/jira/browse/HDFS-6420
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Jing Zhao
> Assignee: Jing Zhao
> Attachments: HDFS-6420.000.patch
>
>
> Currently in an HA setup (with a logical URI), the DFSAdmin#refreshNodes command
> is by default sent to the first NameNode specified in the configuration.
> Users can use the "-fs" option to specify which NN to connect to, but in that
> case they usually need to send two separate commands. We should let
> refreshNodes be sent to both NameNodes by default.
--
This message was sent by Atlassian JIRA
(v6.2#6252)