Hi, I am using Nutch 0.9 over Hadoop 0.16. When I start the crawl job I
get an error:

Injector: starting
Injector: crawlDb: crawled/crawldb
Injector: urlDir: urls/urls
Injector: Converting injected urls to crawl db entries.
Exception in thread "main" java.io.IOException: Could not get block
locations. Aborting...
        at
org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:1824)
        at
org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
        at
org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)


Can anyone help me figure out what exactly is going wrong?

Regards,
Ninad Raut.