ulysses-you commented on code in PR #41088:
URL: https://github.com/apache/spark/pull/41088#discussion_r1189335057
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala:
##########
@@ -543,7 +561,7 @@ case class FileSourceScanExec(
dataSchema = relation.dataSchema,
partitionSchema = relation.partitionSchema,
requiredSchema = requiredSchema,
- filters = pushedDownFilters,
+ filters = dynamicallyPushedDownFilters,
Review Comment:
If we first filter out the scalar subqueries and then add them back, the
ordering of the pushed filters changes, e.g.,
`c1 > (select min(x) from t) and c2 > 1` -> `c2 > 1 and c1 > (select min(x)
from t)`
I'm not sure whether this affects performance on the Parquet side, so I
simply re-translate the whole set of data filters.
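To make the ordering concern concrete, here is a minimal sketch (plain strings stand in for Catalyst `Expression` nodes, and the filter values are hypothetical) of how partitioning out the subquery filters and appending them back reorders what the data source sees:

```scala
object FilterOrderDemo {
  def main(args: Array[String]): Unit = {
    // Data filters in the user-written order.
    val dataFilters = Seq("c1 > (select min(x) from t)", "c2 > 1")

    // Split out the scalar-subquery filters (crude string check,
    // purely for illustration).
    val (subqueryFilters, staticFilters) =
      dataFilters.partition(_.contains("select"))

    // Appending the subquery filters back after the static ones
    // changes the order pushed down to the source.
    val reordered = staticFilters ++ subqueryFilters
    println(reordered.mkString(" and "))
    // -> c2 > 1 and c1 > (select min(x) from t)
  }
}
```

Re-translating the whole data filter in one pass, as done here, sidesteps this by preserving the original order.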
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]