On 11/15/06, Anurup Pavuluri <[EMAIL PROTECTED]> wrote:

Hi,

I am installing Hadoop for the first time. I am running the namenode on
the master (node0). On the worker I set the fs.default.name property to
node0:40010. After I format the namenode and run start-all.sh, I run
bin/hadoop dfs -ls from the worker. It then says "Retrying connect to
server: 128.2.99.10:40010. Already tried 2 time(s)". The same thing
happens on the master as well if I set the fs.default.name property to
node0:40010 instead of local.
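
For reference, the fs.default.name property is set in
conf/hadoop-site.xml; a minimal sketch with the values from this setup
(assuming the usual 0.x config layout) would be:

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>node0:40010</value>
      </property>
    </configuration>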

When I run stop-all.sh, it says that the namenode is not running.

What has gone wrong here, and how can I fix it? Any help would be appreciated.

Thanks in advance,
Anurup



Try looking at the namenode log, under hadoop/log/...

Very likely you have not created the dfs directory before formatting;
this is a bug in 0.8.0. See my old post.
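
Roughly, the workaround looks like this (the dfs.name.dir path below is
a placeholder; substitute whatever your hadoop-site.xml points it at,
and the log file name pattern may differ on your install):

    # check the namenode log for the real error first
    less hadoop/logs/hadoop-*-namenode-*.log

    # create the directory dfs.name.dir points to before formatting
    mkdir -p /path/to/dfs/name    # placeholder; use your dfs.name.dir value
    bin/hadoop namenode -format
    bin/start-all.sh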
