On Tuesday 27 March 2007 09:27:58 Andrzej Bialecki wrote:
> Espen Amble Kolstad wrote:
> > Hi,
> >
> > I'm trying to decommission a node with hadoop-0.12.2.
> > I use the property dfs.hosts.exclude, since the command hadoop
> > dfsadmin -decommission seems to be gone.
> > I then start the cluster with an empty exclude-file, add the name of the
> > node to decommission and run hadoop dfsadmin -refreshNodes.
> > The log then says:
> > 2007-03-27 08:42:59,168 INFO  fs.FSNamesystem - Start Decommissioning
> > node 81.93.168.215:50010
> >
> > But nothing happens.
> > I've left it in this state over night, but still nothing.
> >
> > Am I missing something?
>
> What does dfsadmin -report say about this node? It takes time to
> ensure that all blocks are replicated from this node to other nodes.

Hi,

dfsadmin -report:

Name: 81.93.168.215:50010
State          : Decommission in progress
Total raw bytes: 1438871724032 (1.30 TB)
Used raw bytes: 270070137404 (0.24 TB)
% used: 18.76%
Last contact: Tue Mar 27 09:42:26 CEST 2007

In the web interface (dfshealth.jsp) there is no visible change in the % used or the
number of blocks on any of the nodes.
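
For reference, this is roughly how the decommission is set up here (a minimal
sketch; the file path below is just an example, not the actual one from our
config):

    <!-- hadoop-site.xml -->
    <property>
      <name>dfs.hosts.exclude</name>
      <!-- example path, adjust to your conf directory -->
      <value>/path/to/conf/dfs.exclude</value>
    </property>

    # dfs.exclude -- one node name per line (the IP is the node from the
    # report above)
    81.93.168.215

    # then, on the namenode:
    hadoop dfsadmin -refreshNodes
    hadoop dfsadmin -report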

- Espen
