Hi Everyone:

I am quite new to Hadoop. I am attempting to set up Hadoop on two machines
connected over a LAN. Both of them pass the single-node test. However, the
two-node cluster setup fails in both of the cases below:

1) one machine as a dedicated namenode and the other as a dedicated datanode
2) one machine as both namenode and datanode, and the other as a datanode only

I launch start-dfs.sh on the namenode. Since passwordless ssh is already set
up, I can observe the datanode daemon starting on every datanode. However,
the web UI at http://(URI of namenode):50070 shows 0 live nodes for case (1)
and only 1 live node for case (2), which matches the output of the
command-line hadoop dfsadmin -report.
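Concretely, these are the commands I run (paths are relative to the Hadoop
install directory; the bin/ prefix assumes a standard tarball layout):

```shell
# On the namenode: start HDFS daemons on this node and on every
# host listed in conf/slaves (via ssh).
bin/start-dfs.sh

# Then ask the namenode how many datanodes it considers live.
bin/hadoop dfsadmin -report
```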

In short, the namenode never sees the remote datanode as alive, let alone
runs a normal cross-node MapReduce job.

Could anyone give some hints or instructions at this point? I would really
appreciate it!

Thanks.

Best Regards,

Jingwei Lu
