[ https://issues.apache.org/jira/browse/HIVE-16948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131822#comment-16131822 ]
Hive QA commented on HIVE-16948:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12882506/HIVE-16948.7.patch
{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10977 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_vectorized_dynamic_partition_pruning] (batchId=169)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=235)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=235)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionRegistrationWithCustomSchema (batchId=180)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSpecRegistrationWithCustomSchema (batchId=180)
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation (batchId=180)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6451/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6451/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6451/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12882506 - PreCommit-HIVE-Build
> Invalid explain when running dynamic partition pruning query in Hive On Spark
> -----------------------------------------------------------------------------
>
> Key: HIVE-16948
> URL: https://issues.apache.org/jira/browse/HIVE-16948
> Project: Hive
> Issue Type: Bug
> Reporter: liyunzhang_intel
> Assignee: liyunzhang_intel
> Attachments: HIVE-16948_1.patch, HIVE-16948.2.patch, HIVE-16948.5.patch, HIVE-16948.6.patch, HIVE-16948.7.patch, HIVE-16948.patch
>
>
> In the union subquery case ([line 107|https://github.com/apache/hive/blob/master/ql/src/test/queries/clientpositive/spark_dynamic_partition_pruning.q#L107]) of spark_dynamic_partition_pruning.q:
> {code}
> set hive.optimize.ppd=true;
> set hive.ppd.remove.duplicatefilters=true;
> set hive.spark.dynamic.partition.pruning=true;
> set hive.optimize.metadataonly=false;
> set hive.optimize.index.filter=true;
> set hive.strict.checks.cartesian.product=false;
> explain select ds from
>   (select distinct(ds) as ds from srcpart union all select distinct(ds) as ds from srcpart) s
>   where s.ds in (select max(srcpart.ds) from srcpart union all select min(srcpart.ds) from srcpart);
> {code}
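> To regenerate this plan locally, the q-file can be run through its CLI-driver q-test (a sketch assuming the usual itests layout; the driver name below is an assumption, so check testconfiguration.properties for the driver the file is actually registered with):
> {noformat}
> cd itests/qtest-spark
> mvn test -Dtest=TestMiniSparkOnYarnCliDriver -Dqfile=spark_dynamic_partition_pruning.q
> {noformat}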
> The explain output is:
> {code}
> STAGE DEPENDENCIES:
>   Stage-2 is a root stage
>   Stage-1 depends on stages: Stage-2
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-2
>     Spark
>       Edges:
>         Reducer 11 <- Map 10 (GROUP, 1)
>         Reducer 13 <- Map 12 (GROUP, 1)
>       DagName: root_20170622231525_20a777e5-e659-4138-b605-65f8395e18e2:2
>       Vertices:
>         Map 10
>             Map Operator Tree:
>                 TableScan
>                   alias: srcpart
>                   Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
>                   Select Operator
>                     expressions: ds (type: string)
>                     outputColumnNames: ds
>                     Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
>                     Group By Operator
>                       aggregations: max(ds)
>                       mode: hash
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                       Reduce Output Operator
>                         sort order:
>                         Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                         value expressions: _col0 (type: string)
>         Map 12
>             Map Operator Tree:
>                 TableScan
>                   alias: srcpart
>                   Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
>                   Select Operator
>                     expressions: ds (type: string)
>                     outputColumnNames: ds
>                     Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
>                     Group By Operator
>                       aggregations: min(ds)
>                       mode: hash
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                       Reduce Output Operator
>                         sort order:
>                         Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                         value expressions: _col0 (type: string)
>         Reducer 11
>             Reduce Operator Tree:
>               Group By Operator
>                 aggregations: max(VALUE._col0)
>                 mode: mergepartial
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                 Filter Operator
>                   predicate: _col0 is not null (type: boolean)
>                   Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                   Group By Operator
>                     keys: _col0 (type: string)
>                     mode: hash
>                     outputColumnNames: _col0
>                     Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                     Select Operator
>                       expressions: _col0 (type: string)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                       Group By Operator
>                         keys: _col0 (type: string)
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                         Spark Partition Pruning Sink Operator
>                           partition key expr: ds
>                           Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                           target column name: ds
>                           target work: Map 1
>                     Select Operator
>                       expressions: _col0 (type: string)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                       Group By Operator
>                         keys: _col0 (type: string)
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                         Spark Partition Pruning Sink Operator
>                           partition key expr: ds
>                           Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                           target column name: ds
>                           target work: Map 4
>         Reducer 13
>             Reduce Operator Tree:
>               Group By Operator
>                 aggregations: min(VALUE._col0)
>                 mode: mergepartial
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                 Filter Operator
>                   predicate: _col0 is not null (type: boolean)
>                   Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                   Group By Operator
>                     keys: _col0 (type: string)
>                     mode: hash
>                     outputColumnNames: _col0
>                     Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                     Select Operator
>                       expressions: _col0 (type: string)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                       Group By Operator
>                         keys: _col0 (type: string)
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                         Spark Partition Pruning Sink Operator
>                           partition key expr: ds
>                           Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                           target column name: ds
>                           target work: Map 1
>                     Select Operator
>                       expressions: _col0 (type: string)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                       Group By Operator
>                         keys: _col0 (type: string)
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                         Spark Partition Pruning Sink Operator
>                           partition key expr: ds
>                           Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                           target column name: ds
>                           target work: Map 4
>   Stage: Stage-1
>     Spark
>       Edges:
>         Reducer 2 <- Map 1 (GROUP, 2)
>         Reducer 3 <- Reducer 2 (PARTITION-LEVEL SORT, 2), Reducer 2 (PARTITION-LEVEL SORT, 2), Reducer 7 (PARTITION-LEVEL SORT, 2), Reducer 9 (PARTITION-LEVEL SORT, 2)
>         Reducer 7 <- Map 6 (GROUP, 1)
>         Reducer 9 <- Map 8 (GROUP, 1)
>       DagName: root_20170622231525_20a777e5-e659-4138-b605-65f8395e18e2:1
>       Vertices:
>         Map 1
>             Map Operator Tree:
>                 TableScan
>                   alias: srcpart
>                   filterExpr: ds is not null (type: boolean)
>                   Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
>                   Group By Operator
>                     keys: ds (type: string)
>                     mode: hash
>                     outputColumnNames: _col0
>                     Statistics: Num rows: 1 Data size: 23248 Basic stats: COMPLETE Column stats: NONE
>                     Reduce Output Operator
>                       key expressions: _col0 (type: string)
>                       sort order: +
>                       Map-reduce partition columns: _col0 (type: string)
>                       Statistics: Num rows: 1 Data size: 23248 Basic stats: COMPLETE Column stats: NONE
>         Map 6
>             Map Operator Tree:
>                 TableScan
>                   alias: srcpart
>                   Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
>                   Select Operator
>                     expressions: ds (type: string)
>                     outputColumnNames: ds
>                     Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
>                     Group By Operator
>                       aggregations: max(ds)
>                       mode: hash
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                       Reduce Output Operator
>                         sort order:
>                         Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                         value expressions: _col0 (type: string)
>         Map 8
>             Map Operator Tree:
>                 TableScan
>                   alias: srcpart
>                   Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
>                   Select Operator
>                     expressions: ds (type: string)
>                     outputColumnNames: ds
>                     Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
>                     Group By Operator
>                       aggregations: min(ds)
>                       mode: hash
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                       Reduce Output Operator
>                         sort order:
>                         Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                         value expressions: _col0 (type: string)
>         Reducer 2
>             Reduce Operator Tree:
>               Group By Operator
>                 keys: KEY._col0 (type: string)
>                 mode: mergepartial
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 1 Data size: 23248 Basic stats: COMPLETE Column stats: NONE
>                 Reduce Output Operator
>                   key expressions: _col0 (type: string)
>                   sort order: +
>                   Map-reduce partition columns: _col0 (type: string)
>                   Statistics: Num rows: 2 Data size: 46496 Basic stats: COMPLETE Column stats: NONE
>         Reducer 3
>             Reduce Operator Tree:
>               Join Operator
>                 condition map:
>                      Left Semi Join 0 to 1
>                 keys:
>                   0 _col0 (type: string)
>                   1 _col0 (type: string)
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 2 Data size: 51145 Basic stats: COMPLETE Column stats: NONE
>                 File Output Operator
>                   compressed: false
>                   Statistics: Num rows: 2 Data size: 51145 Basic stats: COMPLETE Column stats: NONE
>                   table:
>                       input format: org.apache.hadoop.mapred.SequenceFileInputFormat
>                       output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
>                       serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>         Reducer 7
>             Reduce Operator Tree:
>               Group By Operator
>                 aggregations: max(VALUE._col0)
>                 mode: mergepartial
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                 Filter Operator
>                   predicate: _col0 is not null (type: boolean)
>                   Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                   Group By Operator
>                     keys: _col0 (type: string)
>                     mode: hash
>                     outputColumnNames: _col0
>                     Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                     Reduce Output Operator
>                       key expressions: _col0 (type: string)
>                       sort order: +
>                       Map-reduce partition columns: _col0 (type: string)
>                       Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>         Reducer 9
>             Reduce Operator Tree:
>               Group By Operator
>                 aggregations: min(VALUE._col0)
>                 mode: mergepartial
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                 Filter Operator
>                   predicate: _col0 is not null (type: boolean)
>                   Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
>                   Group By Operator
>                     keys: _col0 (type: string)
>                     mode: hash
>                     outputColumnNames: _col0
>                     Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>                     Reduce Output Operator
>                       key expressions: _col0 (type: string)
>                       sort order: +
>                       Map-reduce partition columns: _col0 (type: string)
>                       Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
>   Stage: Stage-0
>     Fetch Operator
>       limit: -1
>       Processor Tree:
>         ListSink
> {code}
> The target work of Reducer 11 and Reducer 13 is Map 4, but no Map 4 vertex exists anywhere in the explain output.
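> One quick way to surface the inconsistency (a hypothetical standalone Python check, not part of Hive) is to cross-reference every "target work:" entry emitted by a Spark Partition Pruning Sink Operator against the vertex names that are actually declared in the plan:
> {code}
> import re
> import sys
>
> def check_explain(text):
>     """Report 'target work' references that name a vertex missing from the plan text."""
>     # Vertex names as they appear on their own heading lines under Vertices:
>     declared = set(re.findall(r'^\s*((?:Map|Reducer) \d+)\s*$', text, re.MULTILINE))
>     # ...plus any vertex mentioned in the Edges: definitions.
>     declared |= set(re.findall(r'((?:Map|Reducer) \d+)(?= <-| \()', text))
>     targets = re.findall(r'target work: ((?:Map|Reducer) \d+)', text)
>     return sorted({t for t in targets if t not in declared})
>
> if __name__ == '__main__':
>     for vertex in check_explain(sys.stdin.read()):
>         print('dangling target work: %s (no such vertex in the plan)' % vertex)
> {code}
> Run against the explain output above, this flags Map 4 as dangling, while the Map 1 references resolve to an existing vertex.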