James,
After issuing the command to decommission a node, you should at least see the
following log messages in the namenode logs:

Setting the excludes file to some_file_contains_decommissioning_hostname
Refreshing hosts (include/exclude) list
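
A quick way to confirm is to grep the namenode log for those lines. The path
below just assumes the default Hadoop log layout, so adjust it to wherever your
namenode writes its logs:

  grep -i "refreshing hosts" $HADOOP_LOG_DIR/hadoop-*-namenode-*.log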

If you do not see these log messages, you may want to check:

1)      Whether you have set

<property>
  <name>dfs.hosts.exclude</name>
  <value>some_file_contains_decommissioning_hostname</value>
</property>

in hdfs-site.xml

2)      Whether the exclude file containing the decommissioning hostnames actually exists at that path (see the sketch below).
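
For reference, the usual sequence on the namenode side looks roughly like the
following sketch; the hostname and file path here are only placeholders, so
substitute whatever your dfs.hosts.exclude actually points to:

  # one hostname per line in the file referenced by dfs.hosts.exclude
  echo "dn-to-retire.example.com" >> /path/to/exclude_file

  # tell the namenode to re-read its include/exclude lists
  hadoop dfsadmin -refreshNodes

  # the datanode's admin state should move from "Decommission in progress"
  # to "Decommissioned" in the report output
  hadoop dfsadmin -report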
Regards,
Tanping
From: James Litton [mailto:james.lit...@chacha.com]
Sent: Friday, February 11, 2011 1:10 PM
To: hdfs-user@hadoop.apache.org
Subject: Decommissioning Nodes

While decommissioning nodes I am seeing the following in my namenode logs:
2011-02-11 21:05:16,290 WARN 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough 
replicas, still in need of 5

I haven't seen any progress in decommissioning nodes for several days. I have 12
total nodes, 6 of which are being decommissioned, and a replication factor of 3.
How long should I expect this to take? Is there a way to force it to move forward?

Thank you.
