I figured out the cause of this. It was a rookie mistake: I forgot to
add the new crawler to the hosts file of the other crawler. Sorry to
waste everyone's time.
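
For anyone who hits the same symptom: each slave needs an /etc/hosts
entry for every other node so they can resolve one another during the
job. A minimal sketch, with hypothetical hostnames and IPs (crawler3
standing in for the newly added node):

    # /etc/hosts on each existing slave -- names/addresses are examples
    192.168.1.10   master
    192.168.1.11   crawler1
    192.168.1.12   crawler2
    192.168.1.13   crawler3   # newly added node; this entry was missing

Without that entry, the existing slave can't resolve the new node, so
any task that needs to talk to it just hangs.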

-----Original Message-----
From: Chris Woolum [mailto:[email protected]] 
Sent: Thursday, December 23, 2010 10:22 PM
To: [email protected]
Subject: Poor Performance on Reduce

Hey everyone,
 
I just added a third crawler to my Hadoop cluster, and now when I run a
Nutch crawl, the server that was formerly the only slave freezes at
about 80-90% in the reduce phase. I have three reduce tasks running
concurrently on each of the crawlers, and the other two complete just
fine. All three have identical storage: two 15K 300GB drives in RAID 0.
This only started happening after I added the other slave. I also
noticed that during all of the other map tasks, the tasks assigned to
this crawler get killed. Could this be a communication issue between
the slave and master?
 
 
Thanks,
Chris
