[ https://issues.apache.org/jira/browse/HADOOP-2887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12571792#action_12571792 ]

Runping Qi commented on HADOOP-2887:
------------------------------------


The problem persists even when the heap size is set to 1200m:

2008-02-23 21:56:55,425 ERROR org.apache.hadoop.mapred.ReduceTask: Map output copy failure: java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:59)
        at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:190)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:353)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:260)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:139)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:116)
        at org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:196)
        at org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:403)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:745)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:666)

2008-02-23 21:56:55,425 INFO org.apache.hadoop.mapred.ReduceTask: task_200802232037_0001_r_000212_1 Copying task_200802232037_0001_m_002145_0 output from gs202434.inktomisearch.com.
2008-02-23 21:56:56,677 ERROR org.apache.hadoop.mapred.ReduceTask: Map output copy failure: java.lang.OutOfMemoryError: Java heap space
        at java.io.BufferedInputStream.<init>(BufferedInputStream.java:178)
        at org.apache.hadoop.fs.BufferedFSInputStream.<init>(BufferedFSInputStream.java:44)
        at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:144)
        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:244)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:138)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:116)
        at org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:196)
        at org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:394)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:745)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:666)

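For anyone trying to reproduce this, here is a minimal sketch of the job
configuration the report describes (map output compression enabled with the
block compression type, plus an enlarged task heap). The property keys and
the class name are assumptions based on the 0.15-era configuration names,
not a verified repro recipe:

    import org.apache.hadoop.mapred.JobConf;

    public class MapOutputCompressionRepro {  // hypothetical class name
        public static void configure(JobConf conf) {
            // Compress map outputs; the failure only shows up when true.
            conf.set("mapred.compress.map.output", "true");
            // Block compression type, as in the report (assumed key name).
            conf.set("mapred.map.output.compression.type", "BLOCK");
            // Raise the child JVM heap; OOMs were still seen at 1200m.
            conf.set("mapred.child.java.opts", "-Xmx1200m");
        }
    }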

> Reducers throw oom exceptions during fetching map outputs
> ---------------------------------------------------------
>
>                 Key: HADOOP-2887
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2887
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.15.3
>            Reporter: Runping Qi
>
> I have a job that ran fine when the flag for compressing the map output 
> data was set to false.
> However, if the flag is set to true and the compression type is set to 
> block, then the reducers all die with out-of-memory exceptions.
> The heap size was set to 512M.
> The problem persists even when the heap size is set to 1000M.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
