[
https://issues.apache.org/jira/browse/HIVE-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Peter Vary updated HIVE-17292:
------------------------------
Attachment: HIVE-17292.6.patch
The patch contains the following changes:
- Changing Hadoop23Shims.java, so the MiniSparkShim will be able to provide the
requested 2 executors.
- Changing QTestUtil.setSparkSession, so we will wait until every executor is
available, not only the first.
- Changing SparkSessionImpl.getMemoryAndCores, so we use the client-provided
parallelism in the case of a local spark.master too.
- Regenerating golden files (numReducers and the number of files changed in the
explain plans)
The change contains 2 golden file changes
(spark_dynamic_partition_pruning_mapjoin_only.q.out,
spark_dynamic_partition_pruning.q.out) which contain other necessary
changes for a green run, so this patch should be regenerated after their
corresponding JIRAs are resolved (HIVE-17347, HIVE-17346).
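The "wait until every executor is available" change above can be sketched as a simple polling loop. This is an illustrative helper, not Hive's actual QTestUtil code; the `waitForExecutors` name and parameters are assumptions, and in the real test the count would come from the Spark session rather than a supplier.

```java
import java.util.function.IntSupplier;

// Illustrative sketch of waiting for all requested executors to register,
// rather than proceeding as soon as the first one is up.
public class ExecutorWait {

    // Poll the reported executor count until it reaches 'expected' or the
    // timeout elapses. Returns true if the expected count was reached.
    static boolean waitForExecutors(IntSupplier currentCount, int expected,
                                    long timeoutMs, long pollMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (currentCount.getAsInt() >= expected) {
                return true;
            }
            Thread.sleep(pollMs);
        }
        return currentCount.getAsInt() >= expected;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated cluster: the second executor registers after three polls.
        int[] calls = {0};
        IntSupplier count = () -> (++calls[0] >= 3) ? 2 : 1;
        System.out.println(waitForExecutors(count, 2, 1000, 10)); // prints "true"
    }
}
```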
> Change TestMiniSparkOnYarnCliDriver test configuration to use the configured
> cores
> ----------------------------------------------------------------------------------
>
> Key: HIVE-17292
> URL: https://issues.apache.org/jira/browse/HIVE-17292
> Project: Hive
> Issue Type: Sub-task
> Components: Spark, Test
> Affects Versions: 3.0.0
> Reporter: Peter Vary
> Assignee: Peter Vary
> Attachments: HIVE-17292.1.patch, HIVE-17292.2.patch,
> HIVE-17292.3.patch, HIVE-17292.5.patch, HIVE-17292.6.patch
>
>
> Currently the {{hive-site.xml}} for the {{TestMiniSparkOnYarnCliDriver}} test
> defines 2 cores and 2 executors, but only 1 is used, because the MiniCluster
> does not allow the creation of the third container.
> The FairScheduler uses 1GB increments for memory, but the containers would
> like to use only 512MB. We should change the FairScheduler configuration to
> use only the requested 512MB.
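The scheduler change described in the issue could look roughly like the following yarn-site.xml fragment. This is a hedged sketch: `yarn.scheduler.increment-allocation-mb` is the FairScheduler's allocation-rounding property and `yarn.scheduler.minimum-allocation-mb` the floor, but the exact values and placement for the MiniCluster test setup are assumptions, not the patch's actual configuration.

```xml
<!-- Sketch: let containers be allocated in 512MB steps instead of 1GB,
     so a 512MB request is not rounded up and all requested executors fit. -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>
<property>
  <name>yarn.scheduler.increment-allocation-mb</name>
  <value>512</value>
</property>
```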
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)