Hi,

I tried decommissioning a node in my Hadoop cluster. I am running Apache Hadoop
1.0.2 on a four-node cluster, and I also have HBase installed. I have already
shut down the region server on this node.

For decommissioning, I did the following steps:


  *   Added the following XML to hdfs-site.xml (the exclude file itself is sketched after these steps):

<property>
  <name>dfs.hosts.exclude</name>
  <value>/full/path/of/host/exclude/file</value>
</property>


  *   Ran "<HADOOP_HOME>/bin/hadoop dfsadmin -refreshNodes"



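For reference, the exclude file itself is just plain text listing the datanodes to remove, one hostname per line. The path below is the placeholder from the property above, and "datanode4.example.com" is only a made-up name standing in for the node I am decommissioning:

$ cat /full/path/of/host/exclude/file
datanode4.example.com

After -refreshNodes, the NameNode re-reads this file and begins re-replicating that node's blocks onto the remaining datanodes.
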
But node decommissioning has been running for the last 6 hours, and I don't know
when it will finish. I need this node back for other activities.



From the HDFS health status JSP:

Cluster Summary
338 files and directories, 200 blocks = 538 total. Heap Size is 16.62 MB /
888.94 MB (1%)

Configured Capacity               : 1.35 TB
DFS Used                          : 759.57 MB
Non DFS Used                      : 179.36 GB
DFS Remaining                     : 1.17 TB
DFS Used%                         : 0.05 %
DFS Remaining%                    : 86.92 %
Live Nodes                        : 4
Dead Nodes                        : 0
Decommissioning Nodes             : 1
Number of Under-Replicated Blocks : 129




I assume the 129 under-replicated blocks are the ones still waiting to be copied off the decommissioning node. Please share any ideas you may have. Thanks a lot.



Regards,

Anand.C

