The getLocalPathForWrite function that throws this exception assumes there is free space on the disks that mapred.local.dir points to. Your stack trace shows the failure in MapOutputBuffer.mergeParts, which writes the merged map output (file.out) into those local directories, so the most likely cause is that they filled up once enough map tasks started spilling. Can you verify with `df` that those disks have space available? You might also try moving mapred.local.dir off of /tmp if it's currently configured to use /tmp; I believe some systems put quotas on /tmp.
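For example, you could run `df -h` on each tasktracker and check the partition(s) that hold your mapred.local.dir directories. If they sit on a small or quota-limited partition, you can point mapred.local.dir at one or more larger disks in hadoop-site.xml (the site config file in 0.19; the directory paths below are just placeholders, use whatever disks you actually have mounted):

    <property>
      <name>mapred.local.dir</name>
      <value>/data/1/mapred/local,/data/2/mapred/local</value>
    </property>

A comma-separated list lets each tasktracker spread intermediate map output across several disks, so no single partition has to hold all of it. You'll need to restart the tasktrackers for them to pick up the new directories.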
Hope this helps,
Alex

On Tue, Apr 7, 2009 at 7:22 PM, Jim Twensky <[email protected]> wrote:
> Hi,
>
> I'm using Hadoop 0.19.1 and I have a very small test cluster with 9 nodes,
> 8 of them being task trackers. I'm getting the following error and my jobs
> keep failing when map processes start hitting 30%:
>
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any
> valid local directory for
> taskTracker/jobcache/job_200904072051_0001/attempt_200904072051_0001_m_000000_1/output/file.out
>         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:335)
>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
>         at org.apache.hadoop.mapred.MapOutputFile.getOutputFileForWrite(MapOutputFile.java:61)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1209)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:867)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>         at org.apache.hadoop.mapred.Child.main(Child.java:158)
>
> I googled many blogs and web pages, but I could neither understand why this
> happens nor find a solution to it. What does that error message mean and
> how can I avoid it? Any suggestions?
>
> Thanks in advance,
> -jim
