Charlie Williams wrote:
> I had this same problem: we had gathered about 90% of a 1.5M page
> fetch, only to have the system crash at the reduce phase. We now do
> cycles of about 50k pages at a time to minimize loss.
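
(For anyone wanting to do the same, a cycle like that can be
scripted roughly as below - an untested sketch using the standard
bin/nutch commands; the crawl/ paths are just illustrative:)

  #!/bin/sh
  # Run repeated small fetch cycles instead of one huge fetch,
  # so a crash loses at most one cycle's worth of pages.
  CRAWLDB=crawl/crawldb
  SEGMENTS=crawl/segments
  for i in 1 2 3 4 5; do
    # generate a fetchlist capped at 50k pages
    bin/nutch generate $CRAWLDB $SEGMENTS -topN 50000
    # fetch the newest segment, then fold the results back in
    SEGMENT=`ls -d $SEGMENTS/* | tail -1`
    bin/nutch fetch $SEGMENT
    bin/nutch updatedb $CRAWLDB $SEGMENT
  done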


I may have a possible workaround for this issue. It would help to
know the failure modes in your cases - i.e., what the symptoms were
and what led to the ultimate job failure: was it a real crash of the
machine or the tasktracker, was the task resubmitted too many times,
did it appear to be stuck for a long time, or did you simply kill
the job - and whether you used the regex URLFilter.
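
(I ask about the regex URLFilter because some of its patterns can
backtrack for a very long time on pathological URLs, which makes a
fetcher task look hung. For illustration, the rules in
conf/regex-urlfilter.txt look roughly like this - a sketch from
memory, check your own copy:)

  # skip image and archive suffixes we don't want to fetch
  -\.(gif|jpg|png|ico|css|zip|gz)$
  # skip URLs with repeated path segments (crawler traps); this is
  # the kind of rule that can backtrack badly on odd URLs
  -.*(/[^/]+)/[^/]+\1/[^/]+\1/
  # accept anything else
  +.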

BTW, creating a "fetcher.done" file doesn't work in 0.8+ - the
output itself is simply discarded when the job ends without closing
its file descriptors. If you were running on a local FS you may be
able to recover some of the data from temporary files - but if you
ran with DFS then the files physically don't exist anywhere, and all
the data has been discarded ...
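
(If you do want to try salvaging a local-FS run, the half-written
task output normally lives under mapred's local temp dirs. A
hypothetical starting point - the path below assumes the default
hadoop.tmp.dir layout, so check your hadoop-site.xml first:)

  # default hadoop.tmp.dir is /tmp/hadoop-${USER}; mapred.local.dir
  # defaults to a subdirectory of it - adjust to your config
  find /tmp/hadoop-$USER/mapred/local -name 'part-*' -print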

-- 
Best regards,
Andrzej Bialecki     <><
 ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com


