cloud-fan commented on code in PR #41088:
URL: https://github.com/apache/spark/pull/41088#discussion_r1281849044
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala:
##########
@@ -189,7 +189,13 @@ object FileSourceStrategy extends Strategy with PredicateHelper with Logging {
       // Partition keys are not available in the statistics of the files.
       // `dataColumns` might have partition columns, we need to filter them out.
       val dataColumnsWithoutPartitionCols = dataColumns.filterNot(partitionSet.contains)
-      val dataFilters = normalizedFiltersWithoutSubqueries.flatMap { f =>
+      // Scalar subqueries can be pushed down as data filters at runtime, since we always
+      // execute subqueries first.
+      // There is no point in pushing down bloom filters, so skip them.
+      val normalizedFiltersWithScalarSubqueries = normalizedFilters
+        .filterNot(e => e.containsPattern(PLAN_EXPRESSION) && !e.containsPattern(SCALAR_SUBQUERY))
Review Comment:
I think we need to clearly distinguish the 3 different kinds of predicates:
1. simple predicates that can be used to prune partitions, or anything else that
needs to be evaluated during planning
2. foldable subquery expressions that can be turned into literals before
further use (is data source filter pushdown the only use?)
3. subquery expressions that can be used to do additional pruning after
planning but before execution
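To make the distinction concrete, here is a minimal, hedged sketch in plain Scala. It models the three predicate kinds with a toy ADT; the type names (`Simple`, `FoldableSubquery`, `RuntimeSubquery`) and the classifier object are hypothetical illustrations, not Spark's actual Catalyst `Expression` API:

```scala
// Toy model of the three predicate kinds discussed above.
// These types are illustrative only; Spark's real representation is
// Catalyst Expression trees checked via tree patterns.
sealed trait Pred
case class Simple(col: String) extends Pred            // kind 1: plain predicate
case class FoldableSubquery(result: Int) extends Pred  // kind 2: foldable to a literal
case class RuntimeSubquery(plan: String) extends Pred  // kind 3: evaluated post-planning

object PredicateKinds {
  // Kind 1: usable during planning, e.g. for partition pruning.
  def usableAtPlanning(p: Pred): Boolean = p match {
    case _: Simple => true
    case _         => false
  }

  // Kind 2: can be folded into a literal before further use
  // (e.g. before data source filter pushdown).
  def foldToLiteral(p: Pred): Option[Int] = p match {
    case FoldableSubquery(r) => Some(r)
    case _                   => None
  }

  // Kind 3: only usable for additional pruning after planning
  // but before execution, once the subquery has run.
  def prunableAtRuntime(p: Pred): Boolean = p match {
    case _: RuntimeSubquery => true
    case _                  => false
  }
}
```

A classifier like this makes the intent of the `filterNot` in the diff easier to state: keep kind-1 predicates and the subquery expressions that will have results available before the scan, and drop the rest.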
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]