Hi Keith

Your NameNode is still not up. What do the NN logs say?
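
If you are not sure where to look, the tail of the NN log usually shows the failure. A rough check, assuming a default install with logs under $HADOOP_HOME/logs (adjust the path for your setup):

  tail -n 100 $HADOOP_HOME/logs/hadoop-*-namenode-*.log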

Regards
Bejoy KS

Sent from handheld, please excuse typos.

-----Original Message-----
From: anil gupta <anilgupt...@gmail.com>
Date: Fri, 27 Jul 2012 11:30:57 
To: <common-user@hadoop.apache.org>
Reply-To: common-user@hadoop.apache.org
Subject: Re: Retrying connect to server: localhost/127.0.0.1:9000.

Hi Keith,

Does a ping to localhost return a reply? Try telnetting to localhost on port 9000.
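
For example (port 9000 taken from your fs.default.name; adjust if yours differs):

  ping -c 3 localhost
  telnet localhost 9000
  # if telnet is not installed, nc does the same check:
  nc -zv localhost 9000

If the connection is refused, nothing is listening on that port, which points back at the NameNode not having started.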

Thanks,
Anil

On Fri, Jul 27, 2012 at 11:22 AM, Keith Wiley <kwi...@keithwiley.com> wrote:

> I'm plagued with this error:
> Retrying connect to server: localhost/127.0.0.1:9000.
>
> I'm trying to set up Hadoop on a new machine, just a basic
> pseudo-distributed setup.  I've done this quite a few times on other
> machines, but this time I'm kinda stuck.  I formatted the namenode without
> obvious errors and ran start-all.sh with no errors to stdout.  However, the
> logs are full of that error above, and if I attempt to access HDFS (e.g.
> "hadoop fs -ls /") I get that error again.  Obviously, my core-site.xml
> sets fs.default.name to "hdfs://localhost:9000".
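>
> Concretely, the relevant property in core-site.xml is just:
>
>     <property>
>       <name>fs.default.name</name>
>       <value>hdfs://localhost:9000</value>
>     </property>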
>
> I assume something is wrong with /etc/hosts, but I'm not sure how to fix
> it.  If "hostname" returns X and "hostname -f" returns Y, then what are the
> corresponding entries in /etc/hosts?
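>
> My guess is that they should look something like the lines below (with X
> and Y being whatever "hostname" and "hostname -f" return on this box),
> but I'm not sure that's right:
>
>     127.0.0.1      localhost
>     # replace 192.168.1.10 with this machine's actual IP
>     192.168.1.10   Y   X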
>
> Thanks for any help.
>
>
> ________________________________________________________________________________
> Keith Wiley     kwi...@keithwiley.com     keithwiley.com
> music.keithwiley.com
>
> "I used to be with it, but then they changed what it was.  Now, what I'm
> with
> isn't it, and what's it seems weird and scary to me."
>                                            --  Abe (Grandpa) Simpson
>
> ________________________________________________________________________________
>
>


-- 
Thanks & Regards,
Anil Gupta
