This is a recurring question: you are most likely out of disk space in /tmp. 
Point hadoop.tmp.dir at a location with plenty of room for large transient 
files, or run the job on a real Hadoop cluster instead of in local mode.
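A minimal sketch of the override, assuming the directory /data/hadoop-tmp is a hypothetical path on a partition with ample free space (the property goes in core-site.xml, or in conf/nutch-site.xml for a local Nutch run; the default is /tmp/hadoop-${user.name}):

```xml
<!-- core-site.xml (or conf/nutch-site.xml) -->
<property>
  <name>hadoop.tmp.dir</name>
  <!-- /data/hadoop-tmp is a placeholder; use any path with plenty of space -->
  <value>/data/hadoop-tmp</value>
</property>
```

Make sure the directory exists and is writable by the user running the job before restarting the fetch.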

> Hello,
> 
> I am getting some exception while fetching:
> 
> 2011-07-10 23:25:21,427 WARN  mapred.LocalJobRunner - job_local_0001
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
> taskTracker/jobcache/job_local_0001/attempt_local_0001_m_000000_0/output/spill0.out
> in any of the configured local directories
>         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
>         at org.apache.hadoop.mapred.MapOutputFile.getSpillFile(MapOutputFile.java:94)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1443)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1154)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:359)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
>         at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
> 2011-07-10 23:25:22,279 FATAL fetcher.Fetcher - Fetcher: java.io.IOException: Job failed!
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
>         at org.apache.nutch.fetcher.Fetcher.fetch(Fetcher.java:1107)
>         at org.apache.nutch.fetcher.Fetcher.run(Fetcher.java:1145)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.nutch.fetcher.Fetcher.main(Fetcher.java:1116)
> 
> What should I do? What happens if I restart the fetch job?
> 
> Best Regards,
> C.B.