Hi,

I also followed the Nutch-on-Hadoop tutorial:
http://wiki.apache.org/nutch/Nutch0%2e9-Hadoop0%2e10-Tutorial

I'm in the same scenario as you: I am able to run with a single node, but
with a master and slaves I'm stuck at the crawl stage.
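
For context, the input was set up as in the tutorial, roughly like this (a
sketch; the seed URL and file names are only examples):

mkdir urls
echo 'http://lucene.apache.org/' > urls/seed.txt   # one seed URL per line
bin/hadoop dfs -put urls urls                      # copy the seed list into DFS before crawling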

This is the output; after this there is no error and no further progress:

bin/nutch crawl urls -dir crawled -depth 3
crawl started in: crawled
rootUrlDir = urls
threads = 10
depth = 3
Injector: starting
Injector: crawlDb: crawled/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.

Can anyone please help me with this?
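
In case it helps with diagnosis, the basic liveness checks I know of look
roughly like this (a sketch; the exact output varies by Hadoop release):

jps                          # on each node: the master should show NameNode and
                             # JobTracker, each slave DataNode and TaskTracker
bin/hadoop dfsadmin -report  # run on the master: all datanodes should be listed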

Regards,
Santhosh.Ch


On 8/30/07, Yiping Han <[EMAIL PROTECTED]> wrote:
>
> Hi all,
>
> We are using Hadoop Streaming so we can reuse our existing code. We have
> successfully run the code on a single node. Now comes the problem: how can
> the mapper and reducer modules be distributed onto the nodes easily? Can
> anyone share your experience with me? Thanks!
>
> -- Yiping Han
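
Regarding the streaming question quoted above: the standard way to ship the
mapper and reducer programs to the task nodes is the streaming -file option
(a minimal sketch; the jar path varies by release, and my_mapper.py /
my_reducer.py are placeholder script names):

# -file ships each listed local file to every task node, so the
# mapper/reducer programs do not need to be pre-installed on the slaves
bin/hadoop jar contrib/hadoop-streaming.jar \
    -input  input-dir \
    -output output-dir \
    -mapper my_mapper.py \
    -reducer my_reducer.py \
    -file my_mapper.py \
    -file my_reducer.py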


-- 
Do it Right And Forget It.
Santhosh Kumar.Ch
