I run "start-all.sh" on the namenode machine, and it shows this output:
 
starting namenode, logging to
/usr/local/search_engine/asarc/nutch-2007-09-17_04-01-13/bin/../logs/hadoop-root-namenode-search12.nipa.co.th.out
- STATE* Network topology has 0 racks and 0 datanodes
- STATE* UnderReplicatedBlocks has 0 blocks
- Checking Resource aliases
- Version Jetty/5.1.4
- Started HttpContext[/static,/static]
- Started HttpContext[/logs,/logs]
[EMAIL PROTECTED]'s password:
search21.nipa.co.th: starting datanode, logging to
/usr/local/search_engine/asarc/nutch-2007-09-17_04-01-13/bin/../logs/hadoop-root-datanode-search21.nipa.co.th.out
cat:
/usr/local/search_engine/asarc/nutch-2007-09-17_04-01-13/bin/../conf/masters:
No such file or directory
starting jobtracker, logging to
/usr/local/search_engine/asarc/nutch-2007-09-17_04-01-13/bin/../logs/hadoop-root-jobtracker-search12.nipa.co.th.out
- IPC Server listener on 9001: starting
- IPC Server handler 1 on 9001: starting
- IPC Server handler 0 on 9001: starting
- IPC Server handler 2 on 9001: starting
- IPC Server handler 3 on 9001: starting
- IPC Server handler 4 on 9001: starting
- IPC Server handler 5 on 9001: starting
- IPC Server handler 6 on 9001: starting
- IPC Server handler 7 on 9001: starting
- IPC Server handler 8 on 9001: starting
[EMAIL PROTECTED]'s password:
search21.nipa.co.th: starting tasktracker, logging to
/usr/local/search_engine/asarc/nutch-2007-09-17_04-01-13/bin/../logs/hadoop-root-tasktracker-search21.nipa.co.th.out
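
About the "cat: .../conf/masters: No such file or directory" line above: as far as I understand, in this Hadoop version conf/masters is just a plain-text list of hostnames (one per line) that the start scripts read with cat, so that file seems to be missing on search12. A rough sketch of how it could be created (the hostname here is only an example taken from the logs, and I am not sure this is related to the connection problem):

    # on the namenode machine (search12), from the Nutch/Hadoop install dir
    echo "search12.nipa.co.th" > conf/masters
    # conf/slaves should already list the datanode hosts, e.g.
    #   search21.nipa.co.th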

After that, I run "./bin/hadoop dfs -mkdir inputs" on the datanode machine,
but it cannot connect to the namenode and keeps printing:

- Retrying connect to server: search12.nipa.co.th/203.146.127.155:9000.
Already tried 1 time(s).
- Retrying connect to server: search12.nipa.co.th/203.146.127.155:9000.
Already tried 2 time(s).

Why does this happen, and how can I solve it?
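
In case it helps: one thing I could try is checking whether the namenode RPC port is reachable at all (a rough sketch, assuming telnet and netstat are available on these machines):

    # on the datanode (search21): can we reach the namenode RPC port?
    telnet search12.nipa.co.th 9000

    # on the namenode (search12): is anything listening on 9000,
    # and on which interface?
    netstat -tln | grep 9000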