From what you have said below, your machines are probably swapping too much, as you don't have enough RAM. By default, the Hadoop daemons each allocate 1 GB of RAM.
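One way to reduce the memory pressure is to lower the Hadoop daemon heap size. In the Hadoop versions of that era this is controlled by HADOOP_HEAPSIZE in conf/hadoop-env.sh; the value of 512 below is an illustrative assumption, not a recommendation from the original message, and the exact file location depends on your Hadoop distribution:

```shell
# conf/hadoop-env.sh (sketch -- path and variable apply to older Hadoop releases)
# Lower the per-daemon JVM heap from the 1000 MB default to 512 MB
# so two daemons plus the OS fit in 1 GB of physical RAM without swapping.
export HADOOP_HEAPSIZE=512
```

After changing this, restart the daemons (e.g. via bin/stop-all.sh and bin/start-all.sh) so the new heap size takes effect.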

Dennis Kubes

[EMAIL PROTECTED] wrote:
Hi,
I'm running Nutch (nightly build from last week) on 2 machines: 1 as master,
and both as slaves.
But crawling takes too much time. My machines have 1 GB of RAM each; is
that the problem?
Normal crawling (i.e., without a Hadoop cluster) works fine.
Please help.
Thanks, Kishore
