Hi,

I read through various mailing list archives and played around a bit with my
configuration. It seems others have had similar problems (remote access to
the namenode) in the past.

I'm now one step further. On both the Hadoop server and the client (which
doesn't run any Hadoop daemons), I have replaced the server's hostname with
its actual IP address in the configuration files: core-site.xml, masters,
slaves, and mapred-site.xml.
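For reference, the relevant part of my core-site.xml now looks roughly like
this (the IP and port are the ones from the telnet test and the client log
below; I'm using the usual fs.default.name property):

```xml
<!-- core-site.xml (sketch) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://129.69.216.55:9000</value>
  </property>
</configuration>
```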

Now I can access the namenode and the file system from the client via the
web interface. "telnet hadoopserver 9000" also works.

But running "bin/hadoop fs -ls /" on the client still gives me:

10/08/12 14:08:11 INFO ipc.Client: Retrying connect to server: 
/129.69.216.55:9000. Already tried 0 time(s).
10/08/12 14:08:12 INFO ipc.Client: Retrying connect to server: 
/129.69.216.55:9000. Already tried 1 time(s).
10/08/12 14:08:13 INFO ipc.Client: Retrying connect to server: 
/129.69.216.55:9000. Already tried 2 time(s). 
...

This error doesn't generate any log messages. Is it possible to get a
more verbose output for debugging?
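I assume something along these lines in conf/log4j.properties on the client
would raise the verbosity (I'm guessing at the stock log4j setup here, so
please correct me if this is the wrong place):

```properties
# conf/log4j.properties on the client -- a guess at raising verbosity
hadoop.root.logger=DEBUG,console
# or, more narrowly, just the IPC client that prints the retry messages:
log4j.logger.org.apache.hadoop.ipc.Client=DEBUG
```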

Any idea what could be wrong?

Thanks a lot!
Björn
