Checked the "mapred.tmp.local" directory on the node which is running the
reducer attempt and seems that there is available space around 1G(though
it's less).
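
For reference, free space alone is not the only thing that matters; the local
directories also need to be writable by the task's user. A minimal sketch of
that kind of check (the paths are placeholders, not taken from this thread,
and stand in for the directories configured as the node's local map-output
storage, e.g. mapred.local.dir):

    import java.io.File;

    // Rough check of free space and writability on the reduce node's local
    // directories. The example paths are placeholders.
    public class LocalDirCheck {
        public static void main(String[] args) {
            String[] localDirs = { "/data1/mapred/local", "/data2/mapred/local" };
            for (String path : localDirs) {
                File dir = new File(path);
                double freeGb = dir.getUsableSpace() / (1024.0 * 1024 * 1024);
                System.out.printf("%s exists=%b writable=%b free=%.1fGB%n",
                        path, dir.exists(), dir.canWrite(), freeGb);
            }
        }
    }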

On Tue, Aug 28, 2012 at 3:55 PM, Joshi, Rekha <[email protected]> wrote:

>  Hi Abhay,
>
>  Typically the error line - "Caused by:
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any
> valid local directory for output/map_128.out" - suggests that you either do
> not have permissions on the output folder or the disk is full.
>
>  Also, 5 is not a big number of copier threads (in fact, it is the default
> for parallelcopies), so I would not recommend reducing it, though a lower
> value might work. The only long-term fix is for your system to undergo node
> maintenance.
>
>  Thanks
> Rekha
>
>   From: Abhay Ratnaparkhi <[email protected]>
> Reply-To: <[email protected]>
> Date: Tue, 28 Aug 2012 14:52:27 +0530
> To: <[email protected]>
> Subject: error in shuffle in InMemoryMerger
>
>  Hello,
>
> I am getting the following error when the reduce task is running. The
> "mapreduce.reduce.shuffle.parallelcopies" property is set to 5.
>
> org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in InMemoryMerger - Thread to merge in-memory shuffled map-outputs
>     at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:124)
>     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:362)
>     at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
>     at java.security.AccessController.doPrivileged(AccessController.java:284)
>     at javax.security.auth.Subject.doAs(Subject.java:573)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:773)
>     at org.apache.hadoop.mapred.Child.main(Child.java:211)
> Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/map_128.out
>     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:351)
>     at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:132)
>     at org.apache.hadoop.mapred.MapOutputFile.getInputFileForWrite(MapOutputF
>
> org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in InMemoryMerger - Thread to merge in-memory shuffled map-outputs
>     at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:124)
>     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:362)
>     at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
>     at java.security.AccessController.doPrivileged(AccessController.java:284)
>     at javax.security.auth.Subject.doAs(Subject.java:573)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:773)
>     at org.apache.hadoop.mapred.Child.main(Child.java:211)
> Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/map_119.out
>     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:351)
>     at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:132)
>     at org.apache.hadoop.mapred.MapOutputFile.getInputFileForWrite(MapOutputF
>
> Regards,
> Abhay
>
>
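
A side note on the parallelcopies suggestion above: if one did want to try a
lower value for a particular job, a minimal sketch using the old mapred API
that the stack trace shows would look roughly like this (the class name and
the value 3 are illustrative, not from this thread):

    import org.apache.hadoop.mapred.JobConf;

    // Illustrative only: lowers the number of parallel shuffle copier
    // threads for a job. Fewer concurrent fetches will not help if the
    // node's local disks are full or unwritable.
    public class ShuffleCopierTuning {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            conf.setInt("mapreduce.reduce.shuffle.parallelcopies", 3);
            // conf would then be submitted as usual, e.g. via JobClient.runJob(conf).
        }
    }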
