For anyone who runs into this in the future, I seem to have figured out what
caused it.  When I added the hostname to the exclude file, my node was
duplicated on the web report page, with one entry in IP form and the other in
hostname form when I moused over.  When I added the IP (not the hostname) to
the exclude file, decommissioning worked as expected.  I would imagine it
should work either way, but apparently not.  In the slaves file, though, I'm
still using hostnames and it's working fine.
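
For reference, the moving parts here are roughly the following (the property
name and command exist in 0.18.x, but the file path below is just a
placeholder for wherever your exclude file actually lives):

```
<!-- hadoop-site.xml: tell the NameNode where the exclude file is -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/path/to/hadoop/conf/exclude</value>
</property>
```

Then list the datanode's IP in that file (one entry per line) and run
"bin/hadoop dfsadmin -refreshNodes" on the master to kick off the
decommission.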

Oh well, I'm happy it works now.

On Wed, Feb 25, 2009 at 6:44 PM, Brian Bockelman <[email protected]> wrote:

> Hey Roger,
>
> This sounds vaguely familiar to me.  Do you have multiple hostnames or
> multiple IPs in that node?
>
> On one of our dual-homed hosts, I think the sysadmin had to do something
> different to decommission it - something like listing the IP in the exclude
> file?  I can't remember.
>
> Brian
>
>
> On Feb 25, 2009, at 8:18 PM, Roger Donahue wrote:
>
>> Hello,
>>
>> I'm having trouble decommissioning nodes in Hadoop. This worked previously
>> for me in this version of hadoop (0.18.2) and I cannot pinpoint what is
>> wrong.  I am adding the host I wish to decommission to the exclude file
>> specified in site.xml.  I then run refreshNodes on the master.  Rather
>> than the "Decommission in progress" that I'm used to seeing, it instead
>> lists the node in both Live Datanodes and Dead Datanodes.
>>
>> When I go to the DFS health webpage, the only difference I see is when I
>> mouse over the node in Live Nodes: it appears as IP:PORT in the tooltip
>> popup.  When I mouse over the version that appears as dead, it lists as
>> HOSTNAME:PORT.  These are both the same node, just listed twice, and the
>> port is the same in both tooltips.
>>
>> Does anyone know what I am doing wrong?  I checked slaves, and that node
>> is only listed once.  It's also listed in exclude only once.  At no point
>> do I have IP addresses listed.
>>
>> Thanks for any suggestions.
>>
>
>
