I had a problem where it listened only on 8020, even though I told it to
use 9000.

On Fri, Feb 13, 2009 at 7:50 AM, Norbert Burger <norbert.bur...@gmail.com> wrote:

> On Fri, Feb 13, 2009 at 8:37 AM, Steve Loughran <ste...@apache.org> wrote:
>
> > Michael Lynch wrote:
> >
> >> Hi,
> >>
> >> As far as I can tell I've followed the setup instructions for a hadoop
> >> cluster to the letter,
> >> but I find that the datanodes can't connect to the namenode on port 9000
> >> because it is only
> >> listening for connections from localhost.
> >>
> >> In my case, the namenode is called centos1, and the datanode is called
> >> centos2. They are
> >> centos 5.1 servers with an unmodified sun java 6 runtime.
> >>
> >
> > fs.default.name takes a URL to the filesystem, such as
> > hdfs://centos1:9000/
> >
> > If the machine is only binding to localhost, that may mean DNS fun. Try a
> > fully qualified name instead.
>
>
> (fs.default.name is defined in conf/hadoop-site.xml, overriding entries
> from
> conf/hadoop-default.xml).
>
> Also, check your /etc/hosts file on both machines.  Could be that you have
> an incorrect setup where both localhost and the namenode hostname (centos1)
> are aliased to 127.0.0.1.
>
> Norbert
>
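For reference, the override Norbert describes would look something like this in conf/hadoop-site.xml (the hostname centos1 and port 9000 are taken from this thread; substitute a fully qualified name if plain hostnames resolve to loopback on your network):

```xml
<!-- conf/hadoop-site.xml: entries here override conf/hadoop-default.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- URL to the default filesystem; prefer a fully qualified hostname -->
    <value>hdfs://centos1:9000/</value>
  </property>
</configuration>
```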

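A quick way to test for the /etc/hosts pitfall Norbert describes is to check what the machine's own hostname resolves to. This is a generic sketch, not part of Hadoop; run it on the namenode host:

```python
import socket

# If the machine's own hostname resolves to a loopback address,
# a daemon that binds to the resolved address will only accept
# connections from localhost, and remote datanodes cannot connect.
hostname = socket.gethostname()
resolved = socket.gethostbyname(hostname)
print(hostname, "resolves to", resolved)
if resolved.startswith("127."):
    print("warning: hostname is aliased to loopback")
```

A typical correct /etc/hosts maps 127.0.0.1 to localhost only, with the namenode's hostname (centos1 here) on its real LAN address on a separate line.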