[ 
https://issues.apache.org/jira/browse/HIVE-14916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15563990#comment-15563990
 ] 

Siddharth Seth commented on HIVE-14916:
---------------------------------------

Thanks for taking this up [~dapengsun].
I think one more change is required.
{code}
conf.setInt(YarnConfiguration.YARN_MINICLUSTER_NM_PMEM_MB, 4096);
conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 1024);
conf.setInt(YarnConfiguration.RM_SCHEDULER_MAXIMUM_ALLOCATION_MB, 4096);
{code}
HIVE-14877 set these values to the same as the YARN defaults (which is 
what is used today). After changing the container size to 512MB, we should be 
able to halve all of these to actually reduce memory. Otherwise, this can end 
up launching eight 512MB containers instead of four 1024MB containers.
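
Assuming 512MB containers do work, the MiniCluster settings above could be 
halved accordingly. This is an untested sketch of what that would look like, 
not a verified configuration:
{code}
// Halved MiniYARNCluster memory settings (sketch, assuming 512MB containers):
// four 512MB containers still fit in a single 2048MB NodeManager.
conf.setInt(YarnConfiguration.YARN_MINICLUSTER_NM_PMEM_MB, 2048);
conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 512);
conf.setInt(YarnConfiguration.RM_SCHEDULER_MAXIMUM_ALLOCATION_MB, 2048);
{code}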

Interestingly enough, the change in this patch, along with the change to the 
YARN MiniCluster configuration, is what I had tried on internal runs while 
working on HIVE-14877 - and I ran into Spark QTest failures. I had tried 256MB 
for sure, and I think 512MB as well. If it works though, great.

> Reduce the memory requirements for Spark tests
> ----------------------------------------------
>
>                 Key: HIVE-14916
>                 URL: https://issues.apache.org/jira/browse/HIVE-14916
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Ferdinand Xu
>            Assignee: Dapeng Sun
>         Attachments: HIVE-14916.001.patch, HIVE-14916.002.patch
>
>
> As HIVE-14887, we need to reduce the memory requirements for Spark tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
