Re: Namenode not listening for remote connections to port 9000

2009-02-15 Thread Michael Lynch

Hmmm - I checked all the /etc/hosts files, and they're all fine. Then I switched
conf/hadoop-site.xml to specify IP addresses instead of hostnames, and oddly
enough it starts working...
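
For reference, the change amounts to something like this in conf/hadoop-site.xml
(the IP address below is made up, not my real one):

    <property>
      <name>fs.default.name</name>
      <value>hdfs://192.168.0.10:9000/</value>
    </property>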

Now the funny thing is this: it's fine ssh-ing to the correct machines to start
up the datanodes, but when the datanode thread tries to make the connection back
to the namenode (from within a Java app, I assume) it doesn't resolve the names
correctly.

Just looking at this makes me want to think that the namenode does its ssh work
in some non-Java way, actually checking the hosts file, while the datanode does
its thing in Java, which doesn't seem to consult the hosts file at all. Could
this be some Java funniness where it's not checking the hosts file?
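
One quick way to check what the JVM itself resolves is a tiny test program like
this (just a sketch; run it on the datanode and compare the output with what
/etc/hosts says):

    import java.net.InetAddress;

    public class ResolveCheck {
        public static void main(String[] args) throws Exception {
            // Resolve the namenode's name the same way the datanode's JVM
            // would when connecting back, and print the address it gets.
            InetAddress addr = InetAddress.getByName("centos1");
            System.out.println("centos1 -> " + addr.getHostAddress());
        }
    }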

Now there's just one sucky thing about my setup: if I change my file with the
list of datanodes (the dfs.hosts property) to also use IP addresses instead of
hostnames, then it fails. So parts of my config specify hostnames, and other
parts specify IP addresses.
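
For illustration, the two halves of that mixed config look roughly like this
(the file path is an invented example):

    <property>
      <name>dfs.hosts</name>
      <value>/home/hadoop/conf/datanode-hosts</value>
    </property>

where datanode-hosts lists one datanode hostname per line, e.g.:

    centos2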

Oh well - for development purposes this is good enough, 'cause out in the real
world I won't be using the hosts file to string it all together.

Thanks for the responses.


Mark Kerzner wrote:

I had a problem where it listened only on 8020, even though I told it to use
9000.

On Fri, Feb 13, 2009 at 7:50 AM, Norbert Burger norbert.bur...@gmail.com wrote:


On Fri, Feb 13, 2009 at 8:37 AM, Steve Loughran ste...@apache.org wrote:


Michael Lynch wrote:


Hi,

As far as I can tell I've followed the setup instructions for a Hadoop cluster
to the letter, but I find that the datanodes can't connect to the namenode on
port 9000 because it is only listening for connections from localhost.

In my case, the namenode is called centos1, and the datanode is called centos2.
They are CentOS 5.1 servers with an unmodified Sun Java 6 runtime.


fs.default.name takes a URL to the filesystem, such as hdfs://centos1:9000/

If the machine is only binding to localhost, that may mean DNS fun. Try a
fully qualified name instead.


(fs.default.name is defined in conf/hadoop-site.xml, overriding entries from
conf/hadoop-default.xml).

Also, check your /etc/hosts file on both machines. Could be that you have an
incorrect setup where both localhost and the namenode hostname (centos1) are
aliased to 127.0.0.1.

Norbert
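
To illustrate Norbert's point: the broken setup he describes would have an
/etc/hosts on the namenode looking something like this (the LAN address below
is made up):

    127.0.0.1   localhost centos1

With that entry, centos1 resolves to 127.0.0.1 and the namenode binds port 9000
to the loopback interface only. Giving the hostname its real address fixes the
binding:

    127.0.0.1     localhost
    192.168.0.10  centos1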




