Hey everyone,
 
I just added a third crawler to my Hadoop cluster, and now when I run a Nutch 
crawl, the server that was formerly the only slave freezes at about 80-90% 
of the reduce phase. I have 3 reduce tasks running concurrently on each of the 
crawlers, and the other two complete just fine. All three have identical storage 
setups: two 15K 300 GB drives in RAID 0. This only started happening after I 
added the other slave. I also noticed that during all of the other map tasks, 
the tasks assigned to this crawler get killed. Could this be a communication 
issue between the slave and the master?
 
 
Thanks,
Chris