[ https://issues.apache.org/jira/browse/HADOOP-4915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12657906#action_12657906 ]
Zheng Shao commented on HADOOP-4915:
------------------------------------
Error log:
2008-12-18 11:42:46,319 INFO org.apache.hadoop.mapred.ReduceTask:
task_200812161126_7976_r_000272_1 done copying
task_200812161126_7976_m_000462_0 output from hadoop5355.snc1.facebook.com..
2008-12-18 11:42:46,593 WARN org.apache.hadoop.mapred.ReduceTask:
task_200812161126_7976_r_000272_1 Intermediate Merge of the inmemory files
threw an exception: java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:633)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:95)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:108)
at org.apache.hadoop.io.compress.GzipCodec.createDecompressor(GzipCodec.java:188)
at org.apache.hadoop.io.SequenceFile$Reader.getPooledOrNewDecompressor(SequenceFile.java:1458)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1564)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1442)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1363)
at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawKey(SequenceFile.java:2989)
at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.merge(SequenceFile.java:2804)
at org.apache.hadoop.io.SequenceFile$Sorter.merge(SequenceFile.java:2556)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:1632)
2008-12-18 11:42:47,272 INFO org.apache.hadoop.mapred.ReduceTask:
task_200812161126_7976_r_000272_1: Got 0 new map-outputs & 0 obsolete
map-outputs from tasktracker and 0 map-outputs from previous failures
2008-12-18 11:42:47,272 INFO org.apache.hadoop.mapred.ReduceTask:
task_200812161126_7976_r_000272_1 Got 384 known map output location(s);
scheduling...
2008-12-18 11:42:47,273 INFO org.apache.hadoop.mapred.ReduceTask:
task_200812161126_7976_r_000272_1 Scheduled 4 of 384 known outputs (0 slow
hosts and 380 dup hosts)
2008-12-18 11:42:47,273 INFO org.apache.hadoop.mapred.ReduceTask:
task_200812161126_7976_r_000272_1 Copying task_200812161126_7976_m_002498_0
output from hadoop5373.snc1.facebook.com..
2008-12-18 11:42:47,274 INFO org.apache.hadoop.mapred.ReduceTask:
task_200812161126_7976_r_000272_1 Copying task_200812161126_7976_m_000879_0
output from hadoop5158.snc1.facebook.com..
2008-12-18 11:42:47,274 INFO org.apache.hadoop.mapred.ReduceTask:
task_200812161126_7976_r_000272_1 Copying task_200812161126_7976_m_005415_0
output from hadoop5354.snc1.facebook.com..
2008-12-18 11:42:47,274 INFO org.apache.hadoop.mapred.ReduceTask:
task_200812161126_7976_r_000272_1 Copying task_200812161126_7976_m_006464_0
output from hadoop5323.snc1.facebook.com..
2008-12-18 11:42:47,276 WARN org.apache.hadoop.mapred.TaskTracker: Error
running child
java.io.IOException: task_200812161126_7976_r_000272_1The reduce copier failed
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:329)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2139)
2008-12-18 11:42:47,277 ERROR org.apache.hadoop.mapred.ReduceTask: Map output
copy failure: java.lang.NullPointerException
at org.apache.hadoop.fs.InMemoryFileSystem$RawInMemoryFileSystem.reserveSpace(InMemoryFileSystem.java:324)
at org.apache.hadoop.fs.InMemoryFileSystem.reserveSpaceWithCheckSum(InMemoryFileSystem.java:436)
at org.apache.hadoop.mapred.MapOutputLocation.getFile(MapOutputLocation.java:153)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:818)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:767)
2008-12-18 11:42:47,279 ERROR org.apache.hadoop.mapred.ReduceTask: Map output
copy failure: java.lang.NullPointerException
at org.apache.hadoop.fs.InMemoryFileSystem$RawInMemoryFileSystem.reserveSpace(InMemoryFileSystem.java:324)
at org.apache.hadoop.fs.InMemoryFileSystem.reserveSpaceWithCheckSum(InMemoryFileSystem.java:436)
at org.apache.hadoop.mapred.MapOutputLocation.getFile(MapOutputLocation.java:153)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:818)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:767)
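For context, the OutOfMemoryError in the merge thread is raised by ByteBuffer.allocateDirect inside the ZlibDecompressor constructor: each in-memory segment opened during the merge goes through SequenceFile$Reader.getPooledOrNewDecompressor, and every newly created ZlibDecompressor reserves direct (off-heap) buffer memory rather than heap. Below is a minimal standalone sketch (plain java.nio, not Hadoop code) of that failure mode; the class name and the 64 KB buffer size are illustrative assumptions, and the point at which it fails depends on -XX:MaxDirectMemorySize.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferExhaustion {
  public static void main(String[] args) {
    // Hold references so the buffers cannot be reclaimed, mimicking many
    // live decompressors each owning direct buffers at the same time.
    List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
    try {
      while (true) {
        // 64 KB per buffer is an assumed size for illustration only.
        buffers.add(ByteBuffer.allocateDirect(64 * 1024));
      }
    } catch (OutOfMemoryError e) {
      // Same error as in the log above: "Direct buffer memory",
      // thrown from java.nio.Bits.reserveMemory.
      System.err.println("Allocated " + buffers.size()
          + " direct buffers before: " + e);
    }
  }
}

The Java heap can be nearly empty when this happens, since direct buffers are accounted against a separate limit, which matches a reducer failing here even though its heap settings look adequate.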
> Out of Memory error in reduce shuffling phase when compression is turned on
> ---------------------------------------------------------------------------
>
> Key: HADOOP-4915
> URL: https://issues.apache.org/jira/browse/HADOOP-4915
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.17.2
> Reporter: Zheng Shao
>
> mapred.compress.map.output is set to true, and the job has 6860 mappers and
> 300 reducers.
> Several reducers failed because of an out-of-memory error in the shuffling phase.
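The issue description above says mapred.compress.map.output is set to true. As a sketch only (this is not the actual job driver; the class name is made up, and the 0.17-era JobConf API is assumed), the configuration that triggers this code path would look roughly like the following; GzipCodec matches the decompressor seen in the stack trace.

import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapred.JobConf;

public class CompressedShuffleJob {
  public static void main(String[] args) {
    JobConf conf = new JobConf(CompressedShuffleJob.class);

    // Equivalent to mapred.compress.map.output=true: map outputs are
    // compressed before they are fetched by the reducers in the shuffle.
    conf.setCompressMapOutput(true);

    // Codec choice shown for illustration; the trace above shows GzipCodec,
    // whose ZlibDecompressor allocates direct buffers on the reduce side.
    conf.setMapOutputCompressorClass(GzipCodec.class);

    // ... mapper, reducer, and input/output path setup elided ...
  }
}

With 6860 map outputs per reducer being fetched and merged under this setting, the reduce side ends up creating many decompressors, which is where the direct-buffer pressure in the log comes from.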