[ https://issues.apache.org/jira/browse/AMBARI-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14569414#comment-14569414 ]

Srimanth Gunturi commented on AMBARI-11627:
-------------------------------------------

{{yarn.scheduler.minimum-allocation-mb}} is calculated at 682MB, so 
{{mapreduce.reduce.memory.mb}} is correctly set to twice that amount, 1364MB.

However, even though {{mapreduce.map.memory.mb}} also correctly starts off at 
682MB, there is code that [sets it to 1500MB if Pig is 
installed|https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.2/services/stack_advisor.py#L636],
 pushing map memory above reduce memory. Here [is the 
commit|https://github.com/apache/ambari/commit/c3690fecf23eb787901a70f077add6b1df54fd5b].
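The interaction described above can be sketched as follows. This is an illustrative Python sketch, not the actual stack_advisor.py code; the function name and the service-check shape are assumptions, only the numbers (682MB minimum allocation, 2x reduce factor, 1500MB Pig override) come from the comment.

```python
def recommend_mapreduce_memory(min_alloc_mb, installed_services):
    """Sketch of the container sizing logic: map starts at the YARN
    minimum allocation, reduce at twice that amount."""
    map_mb = min_alloc_mb
    reduce_mb = 2 * min_alloc_mb
    if "PIG" in installed_services:
        # The HDP 2.2 advisor bumps map memory to 1500MB when Pig is
        # installed, which can push it above reduce memory -- the bug
        # reported in this issue.
        map_mb = max(map_mb, 1500)
    return map_mb, reduce_mb
```

With a 682MB minimum allocation and Pig installed, this yields map = 1500MB and reduce = 1364MB, i.e. map memory larger than reduce memory.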

> Default Map memory should be less than default reduce memory
> ------------------------------------------------------------
>
>                 Key: AMBARI-11627
>                 URL: https://issues.apache.org/jira/browse/AMBARI-11627
>             Project: Ambari
>          Issue Type: Bug
>          Components: contrib
>    Affects Versions: 2.1.0
>            Reporter: Srimanth Gunturi
>            Assignee: Srimanth Gunturi
>             Fix For: 2.1.0
>
>
> It's typically the reverse: reducers have more memory than mappers.
> Map: 1.465 GB, Reduce: 1.332 GB
> Standard cluster on GCE but with 5 nodes instead of 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
