[
https://issues.apache.org/jira/browse/HIVE-15796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15851400#comment-15851400
]
Hive QA commented on HIVE-15796:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12850767/HIVE-15796.wip.1.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 52 failed/errored test(s), 11027 tests executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) (batchId=235)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_join_with_different_encryption_keys] (batchId=159)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[constprog_partitioner] (batchId=162)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[infer_bucket_sort_reducers_power_two] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[leftsemijoin_mr] (batchId=160)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=223)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join0] (batchId=132)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join23] (batchId=103)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_12] (batchId=109)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=131)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket_map_join_tez2] (batchId=100)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cbo_gby] (batchId=114)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cbo_limit] (batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cbo_semijoin] (batchId=111)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cbo_subq_not_in] (batchId=116)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cross_product_check_1] (batchId=115)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cross_product_check_2] (batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[dynamic_rdd_cache] (batchId=118)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby1_map_skew] (batchId=122)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby2_map_skew] (batchId=131)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby6_map_skew] (batchId=112)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby7_map_skew] (batchId=113)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby8_map_skew] (batchId=116)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby_sort_skew_1_23] (batchId=99)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join0] (batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join23] (batchId=113)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_alt_syntax] (batchId=129)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_cond_pushdown_1] (batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_cond_pushdown_3] (batchId=106)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_cond_pushdown_unqual1] (batchId=114)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_cond_pushdown_unqual3] (batchId=115)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[leftsemijoin_mr] (batchId=102)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[limit_pushdown] (batchId=124)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[mapjoin_mapjoin] (batchId=116)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[multi_insert_gby3] (batchId=127)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parallel_join0] (batchId=126)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_25] (batchId=99)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_in] (batchId=122)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multiinsert] (batchId=131)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[tez_join_tests] (batchId=130)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[tez_joins_explain] (batchId=111)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union11] (batchId=124)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union14] (batchId=99)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union15] (batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union17] (batchId=125)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union19] (batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union20] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union3] (batchId=121)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union5] (batchId=105)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union7] (batchId=122)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[windowing] (batchId=117)
org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testWaitQueuePreemption (batchId=282)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3350/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3350/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3350/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 52 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12850767 - PreCommit-HIVE-Build
> HoS: poor reducer parallelism when operator stats are not accurate
> ------------------------------------------------------------------
>
> Key: HIVE-15796
> URL: https://issues.apache.org/jira/browse/HIVE-15796
> Project: Hive
> Issue Type: Improvement
> Components: Statistics
> Affects Versions: 2.2.0
> Reporter: Chao Sun
> Assignee: Chao Sun
> Attachments: HIVE-15796.wip.1.patch, HIVE-15796.wip.patch
>
>
> In HoS we currently use operator stats to determine reducer parallelism.
> However, operator stats are often inaccurate, especially when column stats
> are not available. This can produce extremely poor reducer parallelism and
> cause a HoS query to run forever.
> This JIRA offers an alternative way to compute reducer parallelism, similar
> to how MR does it. Here's the approach we are suggesting (a rough sketch
> follows below):
> 1. when computing the parallelism for a MapWork, use the stats associated
> with the TableScan operator;
> 2. when computing the parallelism for a ReduceWork, use the *maximum*
> parallelism of all its parents.
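> For illustration only, here is a minimal, self-contained sketch of the heuristic
> described above. It does not use the actual Hive classes (MapWork, ReduceWork,
> the Spark parallelism optimizer, etc.); the method names, the bytes-per-reducer
> divisor, and the reducer cap are assumptions modeled loosely on how MR sizes
> reducers (hive.exec.reducers.bytes.per.reducer, hive.exec.reducers.max).
> {code:java}
> // Hypothetical sketch of the proposed heuristic; not the real Hive implementation.
> import java.util.List;
>
> public class ReducerParallelismSketch {
>
>     // MapWork: derive parallelism from the TableScan's raw data size rather than
>     // from downstream operator stats, which may be inaccurate without column stats.
>     static int mapWorkParallelism(long tableScanDataSizeBytes,
>                                   long bytesPerReducer,   // analogous to hive.exec.reducers.bytes.per.reducer
>                                   int maxReducers) {      // analogous to hive.exec.reducers.max
>         long estimate = (tableScanDataSizeBytes + bytesPerReducer - 1) / bytesPerReducer;
>         return (int) Math.max(1, Math.min(estimate, maxReducers));
>     }
>
>     // ReduceWork: take the maximum parallelism over all parent works instead of
>     // re-deriving it from (possibly bad) operator statistics.
>     static int reduceWorkParallelism(List<Integer> parentParallelisms) {
>         return parentParallelisms.stream().max(Integer::compare).orElse(1);
>     }
>
>     public static void main(String[] args) {
>         // e.g. a 10 GiB table scan with 256 MiB per reducer -> 40-way map parallelism
>         int mappers = mapWorkParallelism(10L * 1024 * 1024 * 1024, 256L * 1024 * 1024, 1009);
>         // a ReduceWork fed by that MapWork and a small 7-way parent -> 40-way reduce parallelism
>         int reducers = reduceWorkParallelism(List.of(mappers, 7));
>         System.out.println("map parallelism=" + mappers + ", reduce parallelism=" + reducers);
>     }
> }
> {code}
> The point of the max-over-parents rule is that a ReduceWork never ends up narrower
> than the widest stream feeding it, even when operator stats underestimate its input.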
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)