Once the nodes are listed as dead, if their host names are still in your
conf/exclude file, remove those entries and then run hadoop dfsadmin
-refreshNodes.
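
For example, assuming the exclude file is conf/exclude (whatever path
dfs.hosts.exclude points at in your config) and the decommissioned host
is dn1.example.com (a made-up name here), the steps look roughly like:

  # drop the decommissioned host from the exclude file
  grep -v 'dn1.example.com' conf/exclude > conf/exclude.tmp
  mv conf/exclude.tmp conf/exclude

  # tell the namenode to re-read its include/exclude lists
  hadoop dfsadmin -refreshNodes

After the refresh, the node should no longer be reported as dead in
dfshealth.jsp.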


This works for us on our cluster.



-paul


On Tue, Jan 27, 2009 at 5:08 PM, Bill Au <bill.w...@gmail.com> wrote:

> I was able to decommission a datanode successfully without having to
> stop my cluster.  But I noticed that after a node has been
> decommissioned, it shows up as a dead node in the web-based interface
> to the namenode (i.e. dfshealth.jsp).  My cluster is relatively small,
> and losing a datanode will have a performance impact.  So I need to
> monitor the health of my cluster and take steps to revive any dead
> datanode in a timely fashion.  Is there any way to altogether "get rid
> of" decommissioned datanodes in the namenode's web interface?  Or is
> there a better way to monitor the health of the cluster?
>
> Bill
>
