Decommissioned node appears both in the "Live Datanodes" list with "In Service"
status and in the "Dead Datanodes" list of the DFS namenode web UI.
-----------------------------------------------------------------------------------------------------------------------------------

                 Key: HADOOP-3499
                 URL: https://issues.apache.org/jira/browse/HADOOP-3499
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.17.0
         Environment: linux-2.6.9
            Reporter: lixiangna


Try to decommission a node by following these steps (a sketch of the resulting
configuration appears after the list):
(1) Write the hostname of the node to be decommissioned into a file (the
exclude file).
(2) Specify the absolute path of the exclude file as the value of the
configuration parameter dfs.hosts.exclude.
(3) Run "bin/hadoop dfsadmin -refreshNodes".
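For reference, a minimal sketch of the setup described above; the hostname and
file path are hypothetical placeholders, not values from the actual cluster:

    # contents of /path/to/excludes (hypothetical path), one hostname per line
    datanode1.example.com

    <!-- in conf/hadoop-site.xml -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/path/to/excludes</value>
    </property>

    # tell the namenode to re-read the include/exclude lists
    bin/hadoop dfsadmin -refreshNodes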

Surprisingly, the node is then found both in the "Live Datanodes" list with
"In Service" status and in the "Dead Datanodes" list of the DFS namenode web
UI. When new data is copied to HDFS, the node's "Used" size increases just
like that of the other, un-decommissioned nodes.

Obviously the node is still in service. Neither restarting HDFS nor waiting a
long time (two days) completes the decommission.

Even stranger: if nodes are configured as include nodes by similar steps
(listing them in a file referenced by the dfs.hosts configuration parameter),
then these include nodes and the exclude node all appear only in the "Dead
Datanodes" list.
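For comparison, a sketch of the include configuration just described, again
with hypothetical hostnames and paths:

    <!-- in conf/hadoop-site.xml -->
    <property>
      <name>dfs.hosts</name>
      <value>/path/to/includes</value>
    </property>

    # /path/to/includes lists the hosts allowed to connect, one per line
    datanode1.example.com
    datanode2.example.com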

I ran this test many times on both 0.17.0 and 0.15.1 with the same result, so
I think there may be a bug.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
