[ https://issues.apache.org/jira/browse/HIVE-15543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800443#comment-15800443 ]

Hive QA commented on HIVE-15543:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12845692/HIVE-15543.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10900 tests 
executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) 
(batchId=233)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[case_sensitivity] 
(batchId=61)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input_testxpath] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_coalesce] 
(batchId=75)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=134)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a]
 (batchId=135)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_varchar_simple]
 (batchId=151)
org.apache.hadoop.hive.cli.TestSparkCliDriver.org.apache.hadoop.hive.cli.TestSparkCliDriver
 (batchId=96)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2791/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2791/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2791/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12845692 - PreCommit-HIVE-Build

> Don't try to get memory/cores to decide parallelism when Spark dynamic 
> allocation is enabled
> --------------------------------------------------------------------------------------------
>
>                 Key: HIVE-15543
>                 URL: https://issues.apache.org/jira/browse/HIVE-15543
>             Project: Hive
>          Issue Type: Improvement
>          Components: Spark
>    Affects Versions: 2.2.0
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>         Attachments: HIVE-15543.patch
>
>
> Presently, Hive tries to get the memory and core counts from the Spark 
> application and uses them to determine RS parallelism. However, this doesn't 
> make sense when Spark dynamic allocation is enabled, because the current 
> numbers don't represent the available computing resources, especially when 
> the SparkContext is initially launched.
> Thus, it makes sense not to do that when dynamic allocation is enabled.
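The guard described in the issue could be sketched as follows. This is a hypothetical illustration, not Hive's actual code: the class, method names, and default-parallelism fallback are all assumptions; only the `spark.dynamicAllocation.enabled` property is a real Spark setting.

```java
import java.util.Map;

/**
 * Hypothetical sketch of the behavior HIVE-15543 proposes: skip
 * memory/core-based parallelism estimation when Spark dynamic allocation
 * is enabled, since the current executor numbers may not reflect what the
 * cluster will eventually provide.
 */
public class ParallelismSketch {

    // Assumed fallback when cluster numbers can't be trusted.
    static final int DEFAULT_PARALLELISM = 1;

    /**
     * Decide reduce-side (RS) parallelism. With dynamic allocation on,
     * return the configured default instead of deriving a number from
     * currently available memory/cores.
     */
    static int estimateParallelism(Map<String, String> sparkConf,
                                   int availableCores,
                                   long bytesPerReducer,
                                   long totalInputBytes) {
        boolean dynamicAllocation = Boolean.parseBoolean(
            sparkConf.getOrDefault("spark.dynamicAllocation.enabled", "false"));
        if (dynamicAllocation) {
            // The live core count is unreliable (executors come and go),
            // especially right after SparkContext launch, so don't use it.
            return DEFAULT_PARALLELISM;
        }
        // Otherwise, cap data-driven parallelism by the available cores.
        int byData = (int) Math.max(1, totalInputBytes / bytesPerReducer);
        return Math.min(byData, availableCores);
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        // Dynamic allocation on: fall back to the default.
        System.out.println(estimateParallelism(
            Map.of("spark.dynamicAllocation.enabled", "true"),
            8, 256 * mb, 10240 * mb));
        // Dynamic allocation off: 10 GB / 256 MB = 40, capped at 8 cores.
        System.out.println(estimateParallelism(
            Map.of(), 8, 256 * mb, 10240 * mb));
    }
}
```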



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
