Eric Osgood wrote:
> This is the error I keep getting whenever I try to fetch more than 400K files at a time on a 4-node Hadoop cluster running Nutch 1.0.

> org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /user/hadoop/crawl/segments/20091013161641/crawl_fetch/part-00015/index for DFSClient_attempt_200910131302_0011_r_000015_2 on client 192.168.1.201 because current leaseholder is trying to recreate file.

Please see this issue:

https://issues.apache.org/jira/browse/NUTCH-692

Apply the patch that is attached there, rebuild Nutch, and tell me if this fixes your problem.

(The patch will be committed to trunk anyway, since others have confirmed that it fixes this issue.)
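
If you haven't applied a JIRA patch before, here is a rough sketch of the steps, assuming a Nutch 1.0 source checkout and that the attachment from NUTCH-692 is saved as NUTCH-692.patch (the actual filename and patch level may differ):

  cd nutch-1.0                  # root of your Nutch source tree
  patch -p0 < NUTCH-692.patch   # apply the attachment downloaded from the JIRA issue
  ant clean                     # drop stale build artifacts
  ant job                       # rebuild; the new .job file should appear under build/

Then copy the rebuilt .job file to wherever you launch your crawls from and rerun the fetch.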


> Can anybody shed some light on this issue? I was under the impression that 400K was small potatoes for a Nutch/Hadoop combo?

It is. This problem is rare; I've crawled cumulatively ~500 million pages in various configurations and never ran into it myself. It requires several things to go wrong at once (see the comments on the issue).


--
Best regards,
Andrzej Bialecki     <><
 ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com
