Hi Andrzej,

The job stopped because there was no space left on the disk:

FATAL fetcher.Fetcher - org.apache.hadoop.fs.FSError: java.io.IOException: No space left on device
FATAL fetcher.Fetcher -         at org.apache.hadoop.fs.LocalFileSystem$LocalFSFileOutputStream.write(LocalFileSystem.java:150)
FATAL fetcher.Fetcher -         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:112)

We use a local FS. Temporary data is stored in /tmp/hadoop/mapred/
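
The obvious fix is to move the temporary data to a larger partition. A minimal sketch of the hadoop-site.xml change, assuming a bigger disk mounted at /data (the paths are illustrative, not from our actual setup):

  <configuration>
    <!-- keep temporary map/reduce data off the small /tmp partition -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/data/hadoop/tmp</value>
    </property>
    <property>
      <name>mapred.local.dir</name>
      <value>/data/hadoop/mapred/local</value>
    </property>
  </configuration>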

Mathijs

Andrzej Bialecki wrote:
Charlie Williams wrote:
I had this same problem: we had gathered about 90% of a 1.5M-page fetch only to have the system crash at the reduce phase. We now do cycles of about 50k pages at a time to minimize loss.
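
For reference, a cycle like that maps roughly onto the following Nutch commands (a sketch; the crawl/ paths and the 50k figure are illustrative, and exact options can differ between Nutch versions):

  # generate a segment containing at most 50k of the top-scoring URLs
  bin/nutch generate crawl/crawldb crawl/segments -topN 50000
  # fetch only that segment, then fold the results back into the crawldb
  s=`ls -d crawl/segments/* | tail -1`
  bin/nutch fetch $s
  bin/nutch updatedb crawl/crawldb $s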


I may have a workaround for this issue. It would help to know the failure modes in your cases, i.e. what the symptoms were and what led to the ultimate job failure: was it a real crash of the machine or the tasktracker, was the task resubmitted too many times, did it appear to be stuck for a long time, or did you just kill the job? It would also help to know whether you used the regex URLFilter.

BTW, creating a "fetcher.done" file doesn't work in 0.8+ - the output is simply discarded because the job ends without closing its file descriptors. If you were running on a local FS you may be able to recover some of the data from temporary files, but if you ran with DFS then the files physically don't exist anywhere and all the data has been discarded ...
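
If you are on a local FS and want to check whether anything survived, a rough sketch (the directory is whatever mapred.local.dir points to, e.g. the /tmp/hadoop/mapred/ path mentioned above):

  # look for non-trivial leftover task output files
  find /tmp/hadoop/mapred -type f -size +1M -print
  # and see how much space they take up
  du -sh /tmp/hadoop/mapred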

