[ https://issues.apache.org/jira/browse/HADOOP-2751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12564252#action_12564252 ]

Arun C Murthy commented on HADOOP-2751:
---------------------------------------

Point taken...

However, it's probably 4 * 200M + 2 * 1000M = 2.8G, since HADOOP_HEAPSIZE
defaults to 1000M... is it lowered for the Hadoop daemons running on Amazon?
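The arithmetic above can be spelled out. This is a back-of-the-envelope sketch, not a measurement; the counts are assumptions taken from the thread (4 concurrent child task JVMs at the 200M default, plus 2 daemons such as the DataNode and TaskTracker at the 1000M HADOOP_HEAPSIZE default):

```python
# Rough per-node memory estimate, using the defaults cited in the comment.
# Assumption: 4 child task JVMs and 2 Hadoop daemons per node.
child_tasks = 4
child_heap_mb = 200    # default child JVM heap (-Xmx200m)
daemons = 2
daemon_heap_mb = 1000  # HADOOP_HEAPSIZE default

total_mb = child_tasks * child_heap_mb + daemons * daemon_heap_mb
print(total_mb)  # 2800, i.e. ~2.8G
```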

Anyway, HADOOP-1867 should be the right approach (auto-magically configuring
buffers based on the given heap size), so I'd rather fix that and close this
as "Won't Fix".

> Increase map/reduce child tasks' heapsize from current default of 200M to 512M
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-2751
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2751
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.16.0
>            Reporter: Arun C Murthy
>            Assignee: Arun C Murthy
>             Fix For: 0.17.0
>
>
> We should investigate why we get OOMs with 200M; I suspect io.sort.mb hogs 
> a lot of the default 200M heap. However, HADOOP-1867 should be the right 
> way to solve it. 
> For now, I propose we bump up the child-VM default heapsize to 512M; too 
> many people are getting burnt by 200M.
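For reference, the proposed bump would amount to a one-line change to the child JVM options. A sketch of the relevant property (as it appears in the Hadoop defaults of this era, where the stock value is -Xmx200m), overridden in a site configuration file:

```xml
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```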

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
