[ https://issues.apache.org/jira/browse/SPARK-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347556#comment-15347556 ]
Pedro Osorio commented on SPARK-14172:
--------------------------------------
[~lucasmf] I have been able to reproduce the issue by running the following
code in the Spark shell:
{code}
sqlContext.sql("drop table test_partition_predicate")
sqlContext.sql("create table test_partition_predicate (col1 string) partitioned
by (partition_col string)")
sqlContext.sql("explain extended select * from test_partition_predicate where
partition_col = '1' and rand() < 0.9").collect().foreach(println)
{code}
I have verified using community.cloud.databricks.com that in Spark 1.4 the
physical plan applies the partition_col predicate in the HiveTableScan, but in
versions 1.6 and 2.0 the predicate only shows up in the Filter section of the
plan, which means the whole table is scanned.
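For reference, the same check can be done through the DataFrame API instead of
an EXPLAIN statement. This is just a sketch, assuming the
test_partition_predicate table from the snippet above already exists:
{code}
// Build the same query with the DataFrame API and print the plans.
// On 1.4 the partition predicate should appear under HiveTableScan;
// on 1.6/2.0 it only shows up in the Filter node.
val df = sqlContext.table("test_partition_predicate")
  .where("partition_col = '1' and rand() < 0.9")
df.explain(true)   // prints parsed/analyzed/optimized/physical plans
{code}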
As [~jiangxb1987] pointed out, the issue is in {{collectProjectsAndFilters}}
and was, I believe, introduced by this patch:
https://github.com/apache/spark/pull/8486
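A simplified way to see why the whole filter gets held back (this is not the
actual Catalyst code, just an illustration of the determinism check that
plan-collecting patterns such as {{collectProjectsAndFilters}} rely on):
{code}
import org.apache.spark.sql.functions._

// rand() is nondeterministic, so the conjoined predicate as a whole is
// flagged nondeterministic; the partition_col part is never split out
// for partition pruning on the affected versions.
val conjoined = (col("partition_col") === "1") && (rand() < 0.9)
println(conjoined.expr.deterministic)                        // false
println((col("partition_col") === "1").expr.deterministic)   // true
{code}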
> Hive table partition predicate not passed down correctly
> --------------------------------------------------------
>
> Key: SPARK-14172
> URL: https://issues.apache.org/jira/browse/SPARK-14172
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.6.1
> Reporter: Yingji Zhang
> Priority: Critical
>
> When the Hive SQL contains nondeterministic expressions, the Spark plan will not
> push the partition predicate down to the HiveTableScan. For example:
> {code}
> -- consider following query which uses a random function to sample rows
> SELECT *
> FROM table_a
> WHERE partition_col = 'some_value'
> AND rand() < 0.01;
> {code}
> The Spark plan does not push the partition predicate down to the HiveTableScan,
> which ends up scanning data from all partitions of the table.