Hi Zheng,

We were never able to run map-reduce at scale with intermediate (map-output)
compression turned on, up through hadoop-0.17.

Good news: it works in hadoop-0.18

-Christian


On 12/18/08 2:53 PM, "Zheng Shao (JIRA)" <[email protected]> wrote:

> 
>      [ 
> https://issues.apache.org/jira/browse/HADOOP-4915?page=com.atlassian.jira.plug
> in.system.issuetabpanels:all-tabpanel ]
> 
> Zheng Shao resolved HADOOP-4915.
> --------------------------------
> 
>        Resolution: Duplicate
>     Fix Version/s: 0.18.0
> 
> Duplicate of https://issues.apache.org/jira/browse/HADOOP-2095
> 
>> Out of Memory error in reduce shuffling phase when compression is turned on
>> ---------------------------------------------------------------------------
>> 
>>                 Key: HADOOP-4915
>>                 URL: https://issues.apache.org/jira/browse/HADOOP-4915
>>             Project: Hadoop Core
>>          Issue Type: Bug
>>          Components: mapred
>>    Affects Versions: 0.17.2
>>            Reporter: Zheng Shao
>>             Fix For: 0.18.0
>> 
>> 
>> mapred.compress.map.output is set to true, and the job has 6860 mappers and
>> 300 reducers.
>> Several reducers failed with out-of-memory errors in the shuffle phase.
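
For reference, the setting named in the report (mapred.compress.map.output) is a job/site configuration property; a sketch of enabling it in mapred-site.xml (or per-job config) might look like the following. The codec property is an assumption here, as the exact property names vary across Hadoop versions:

```xml
<!-- Sketch: enable compression of intermediate map outputs.
     Property names reflect pre-0.20 Hadoop; verify against your version. -->
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<!-- Hypothetical/version-dependent: choose the compression codec. -->
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
</property>
```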
