[ https://issues.apache.org/jira/browse/SPARK-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189481#comment-16189481 ]
Saktheesh Balaraj edited comment on SPARK-14172 at 10/4/17 9:21 AM:
--------------------------------------------------------------------
A similar problem is observed when joining two Hive tables on their partition columns in Spark.

*Example*
Table A has 1000 partitions (a date partition with an hour sub-partition) and Table B has 2 partitions. When the two tables are joined on the partition columns, Spark performs a full table scan of Table A, i.e. it reads all 1000 partitions instead of pruning Table A down to the 2 partitions that match Table B.
{noformat}
sqlContext.sql("select * from tableA a, tableB b where a.trans_date=b.trans_date and a.trans_hour=b.trans_hour")
{noformat}
(Here trans_date is the partition column and trans_hour is the sub-partition column on both tables.)

*Workaround*
Select the 2 partitions from Table B first, then use those values to look up Table A.
Step 1:
{noformat}
select trans_date, trans_hour from tableB
{noformat}
Step 2:
{noformat}
select * from tableA where trans_date=<substitute from step 1> and trans_hour=<substitute from step 1>
{noformat}

> Hive table partition predicate not passed down correctly
> --------------------------------------------------------
>
>                 Key: SPARK-14172
>                 URL: https://issues.apache.org/jira/browse/SPARK-14172
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.1
>            Reporter: Yingji Zhang
>            Priority: Critical
>
> When the Hive SQL contains nondeterministic fields, Spark will not push
> down the partition predicate to the HiveTableScan. For example:
> {code}
> -- consider the following query, which uses a random function to sample rows
> SELECT *
> FROM table_a
> WHERE partition_col = 'some_value'
> AND rand() < 0.01;
> {code}
> The Spark plan does not push the partition predicate down to the HiveTableScan,
> which ends up scanning all partition data from the table.
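The two-step workaround in the comment above can be sketched in plain Python. This is a toy model with made-up partition keys and row counts, not Spark code; it only illustrates why collecting Table B's partition keys first and then probing Table A avoids reading all of Table A's partitions.

```python
# Toy model: each key is a (trans_date, trans_hour) partition, each value a
# row count. The layouts below are invented for illustration.
table_a_partitions = {("2017-10-%02d" % d, h): 100
                      for d in range(1, 11) for h in range(4)}   # 40 partitions
table_b_partitions = {("2017-10-04", 1): 10, ("2017-10-04", 2): 10}

# Step 1: read the (small) set of partition keys from Table B.
b_keys = set(table_b_partitions)

# Step 2: scan only the partitions of Table A whose keys match Table B,
# instead of scanning all 40 partitions and joining afterwards.
scanned = {k: v for k, v in table_a_partitions.items() if k in b_keys}

print(len(table_a_partitions), len(scanned))  # 40 2
```

Only 2 of Table A's 40 partitions are touched, which is the pruning the join in the comment fails to achieve.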
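The pushdown failure described in the issue can likewise be modeled outside Spark. The sketch below uses invented in-memory partitions to contrast the two plans: evaluating the partition predicate together with the nondeterministic rand() forces every partition to be read, while pushing the deterministic predicate down first reads only the matching one.

```python
import random

random.seed(0)  # deterministic sampling for the illustration

# Hypothetical table: 10 partitions of 100 rows each, tagged with their key.
partitions = {"p%02d" % i: [("p%02d" % i, row) for row in range(100)]
              for i in range(10)}

def scan(part_names):
    """Return all rows from the named partitions plus the partition count read."""
    rows = []
    for name in part_names:
        rows.extend(partitions[name])
    return rows, len(part_names)

# Plan without pushdown: scan every partition, then apply both predicates.
rows, scanned_all = scan(list(partitions))
no_pushdown = [r for r in rows if r[0] == "p03" and random.random() < 0.01]

# Plan with pushdown: apply the partition predicate first, then rand().
rows, scanned_pruned = scan(["p03"])
pushdown = [r for r in rows if random.random() < 0.01]

print(scanned_all, scanned_pruned)  # 10 1
```

Both plans return rows only from partition p03, but the un-pushed plan reads 10 partitions where the pushed-down plan reads 1, which is the extra I/O the bug report complains about.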