[
https://issues.apache.org/jira/browse/HIVE-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16122933#comment-16122933
]
Rui Li commented on HIVE-17292:
-------------------------------
{{spark_vectorized_dynamic_partition_pruning}} doesn't work yet; that is
tracked by HIVE-17122.
Looking at the code, we already set the minimum allocation memory here:
https://github.com/apache/hive/blob/master/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java#L494
Do you know why it doesn't work?
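For reference, the setting is along these lines. This is a simplified sketch
rather than the actual Hadoop23Shims code (see the link above), and the class
and method names are made up, but
{{YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB}} is the stock key for
the RM scheduler's minimum allocation:
{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class MiniClusterMinAllocSketch {
  // Hypothetical helper: lower the RM scheduler's minimum container
  // allocation so a 512MB request is granted as-is instead of being
  // rounded up to the 1024MB default.
  public static YarnConfiguration withSmallContainers() {
    YarnConfiguration conf = new YarnConfiguration();
    conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 512);
    return conf;
  }
}
{code}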
> Change TestMiniSparkOnYarnCliDriver test configuration to use the configured
> cores
> ----------------------------------------------------------------------------------
>
> Key: HIVE-17292
> URL: https://issues.apache.org/jira/browse/HIVE-17292
> Project: Hive
> Issue Type: Sub-task
> Components: Spark, Test
> Affects Versions: 3.0.0
> Reporter: Peter Vary
> Assignee: Peter Vary
> Attachments: HIVE-17292.1.patch
>
>
> Currently the {{hive-site.xml}} for the {{TestMiniSparkOnYarnCliDriver}} test
> defines 2 cores and 2 executors, but only 1 executor is used, because the
> MiniCluster does not allow the creation of the 3rd container.
> The FairScheduler allocates memory in 1GB increments, but the containers
> only request 512MB. We should change the FairScheduler configuration to
> allocate only the requested 512MB.
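A sketch of what the proposed change amounts to, assuming the standard
FairScheduler property {{yarn.scheduler.increment-allocation-mb}}; the wrapper
class below is hypothetical, not the patch itself:
{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class FairSchedulerIncrementSketch {
  // Hypothetical helper showing the proposed knobs. The FairScheduler
  // rounds each memory request up to a multiple of its allocation
  // increment (1024MB by default), so a 512MB ask currently consumes a
  // full 1GB of scheduler capacity.
  public static YarnConfiguration forHalfGigContainers() {
    YarnConfiguration conf = new YarnConfiguration();
    // FairScheduler-specific increment; 512 lets 512MB requests through
    // without rounding up.
    conf.setInt("yarn.scheduler.increment-allocation-mb", 512);
    // Keep the scheduler minimum at or below the container size as well.
    conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 512);
    return conf;
  }
}
{code}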
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)