[jira] [Updated] (HIVE-17087) Remove HoS DPP tree during map-join conversion
[ https://issues.apache.org/jira/browse/HIVE-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sahil Takiar updated HIVE-17087:
    Attachment: HIVE-17087.1.patch

> Remove HoS DPP tree during map-join conversion
> --
>
>          Key: HIVE-17087
>          URL: https://issues.apache.org/jira/browse/HIVE-17087
>      Project: Hive
>   Issue Type: Sub-task
>   Components: Spark
>     Reporter: Sahil Takiar
>     Assignee: Sahil Takiar
>  Attachments: HIVE-17087.1.patch
>
> Ran the following query in the {{TestSparkCliDriver}}:
> {code:sql}
> set hive.spark.dynamic.partition.pruning=true;
> set hive.auto.convert.join=true;
> create table partitioned_table1 (col int) partitioned by (part_col int);
> create table partitioned_table2 (col int) partitioned by (part_col int);
> create table regular_table (col int);
> insert into table regular_table values (1);
> alter table partitioned_table1 add partition (part_col = 1);
> insert into table partitioned_table1 partition (part_col = 1) values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
> alter table partitioned_table2 add partition (part_col = 1);
> insert into table partitioned_table2 partition (part_col = 1) values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
> explain select * from partitioned_table1 where partitioned_table1.part_col in (select regular_table.col from regular_table join partitioned_table2 on regular_table.col = partitioned_table2.part_col);
> {code}
> and got the following explain plan:
> {code}
> STAGE DEPENDENCIES:
>   Stage-2 is a root stage
>   Stage-4 depends on stages: Stage-2
>   Stage-5 depends on stages: Stage-4
>   Stage-3 depends on stages: Stage-5
>   Stage-1 depends on stages: Stage-3
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-2
>     Spark
>       A masked pattern was here
>       Vertices:
>         Map 4
>             Map Operator Tree:
>                 TableScan
>                   alias: partitioned_table1
>                   Statistics: Num rows: 10 Data size: 11 Basic stats: COMPLETE Column stats: NONE
>                   Select Operator
>                     expressions: col (type: int), part_col (type: int)
>                     outputColumnNames: _col0, _col1
>                     Statistics: Num rows: 10 Data size: 11 Basic stats: COMPLETE Column stats: NONE
>                     Select Operator
>                       expressions: _col1 (type: int)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 10 Data size: 11 Basic stats: COMPLETE Column stats: NONE
>                       Group By Operator
>                         keys: _col0 (type: int)
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 10 Data size: 11 Basic stats: COMPLETE Column stats: NONE
>                         Spark Partition Pruning Sink Operator
>                           partition key expr: part_col
>                           Statistics: Num rows: 10 Data size: 11 Basic stats: COMPLETE Column stats: NONE
>                           target column name: part_col
>                           target work: Map 3
>   Stage: Stage-4
>     Spark
>       A masked pattern was here
>       Vertices:
>         Map 2
>             Map Operator Tree:
>                 TableScan
>                   alias: regular_table
>                   Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: col is not null (type: boolean)
>                     Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                     Select Operator
>                       expressions: col (type: int)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                       Spark HashTable Sink Operator
>                         keys:
>                           0 _col0 (type: int)
>                           1 _col0 (type: int)
>                     Select Operator
>                       expressions: _col0 (type: int)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                       Group By Operator
>                         keys: _col0 (type: int)
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                         Spark Partition Pruning Sink Operator
>                           partition key expr:
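A useful cross-check (a suggested experiment, not something stated in the report) is to re-run the same EXPLAIN with map-join conversion turned off while leaving dynamic partition pruning on; comparing that plan against the one above isolates what the map-join conversion contributes to the Spark Partition Pruning Sink stages:

{code:sql}
-- Hypothetical cross-check: identical query and DPP setting as in the
-- report, but with automatic map-join conversion disabled. Any
-- difference in the Spark Partition Pruning Sink stages between the
-- two plans is then attributable to the map-join conversion path.
set hive.spark.dynamic.partition.pruning=true;
set hive.auto.convert.join=false;

explain select * from partitioned_table1
where partitioned_table1.part_col in
  (select regular_table.col from regular_table
   join partitioned_table2
     on regular_table.col = partitioned_table2.part_col);
{code}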
[jira] [Updated] (HIVE-17087) Remove HoS DPP tree during map-join conversion
[ https://issues.apache.org/jira/browse/HIVE-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sahil Takiar updated HIVE-17087:
    Status: Patch Available  (was: Open)
[jira] [Updated] (HIVE-17087) Remove HoS DPP tree during map-join conversion
[ https://issues.apache.org/jira/browse/HIVE-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sahil Takiar updated HIVE-17087:
    Summary: Remove HoS DPP tree during map-join conversion  (was: HoS Query with multiple Partition Pruning Sinks + subquery has incorrect explain)