Hi everyone,

I am hoping someone can help me with this. I am indexing ~2 million URLs on 12 machines,
and I found that the results do not scale the way I expected. For example:

with mapred.reduce.tasks set to 12, the job took about 20 minutes in total
(11 minutes in the reduce phase);
with mapred.reduce.tasks set to 24, the job took about 28 minutes in total
(20 minutes in the reduce phase);
with mapred.reduce.tasks set to 6, the job took about 24 minutes in total
(16 minutes in the reduce phase).

Is Hadoop/Nutch scalable at all, or are there other parameters I can tune?

I already have:
mapred.map.tasks set to 100,
mapred.job.tracker set to something other than "local",
mapred.tasktracker.tasks.maximum set to 2,
and everything else at its default (a rough sketch of my config is below).
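For reference, this is roughly what my overrides look like. I am assuming the usual
hadoop-site.xml override file here, and the jobtracker address ("master:9001") is just a
placeholder for my real (non-local) host:port; the property names and values are the ones
listed above:

<!-- hadoop-site.xml (sketch of my overrides; everything else left at defaults) -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>      <!-- placeholder host:port, i.e. not "local" -->
  </property>
  <property>
    <name>mapred.map.tasks</name>
    <value>100</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>12</value>               <!-- the value I varied: 6, 12, 24 -->
  </property>
  <property>
    <name>mapred.tasktracker.tasks.maximum</name>
    <value>2</value>                <!-- max concurrent tasks per tasktracker -->
  </property>
</configuration>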

I would appreciate any advice on this.
Thank you.
