Yaohua628 commented on a change in pull request #34575:
URL: https://github.com/apache/spark/pull/34575#discussion_r753878814
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
##########
@@ -355,7 +377,14 @@ case class FileSourceScanExec(
@transient
private lazy val pushedDownFilters = {
val supportNestedPredicatePushdown =
DataSourceUtils.supportNestedPredicatePushdown(relation)
- dataFilters.flatMap(DataSourceStrategy.translateFilter(_, supportNestedPredicatePushdown))
+ dataFilters
+ .filterNot(
+ _.references.exists {
+ case MetadataAttribute(_) => true
Review comment:
Definitely a great point! That is something we want to do in follow-up
PRs. We can probably send only the files we want to scan (based on the
filter), as you suggested.
For now, we want to ensure correctness; see the test `filter`.
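The diff above excludes any data filter that references a metadata attribute from the set of filters pushed down to the file source, since metadata columns do not exist in the physical data files. A minimal, self-contained sketch of that filtering idea, using hypothetical stand-in classes (`Attribute`, `Predicate`, `pushableFilters` are illustrative names, not Spark's real `Expression`/`MetadataAttribute` machinery):

```scala
// Hypothetical, simplified model of the change in the diff: before
// translating data filters into pushed-down source filters, drop every
// predicate that references a metadata column.
case class Attribute(name: String, isMetadata: Boolean)
case class Predicate(references: Set[Attribute])

// Keep only predicates whose references are all non-metadata columns,
// mirroring the `filterNot(_.references.exists { ... })` in the diff.
def pushableFilters(dataFilters: Seq[Predicate]): Seq[Predicate] =
  dataFilters.filterNot(_.references.exists(_.isMetadata))

val value    = Attribute("value", isMetadata = false)
val filePath = Attribute("_metadata.file_path", isMetadata = true)

val kept = pushableFilters(Seq(Predicate(Set(value)), Predicate(Set(filePath))))
// Only the predicate on the real data column survives.
assert(kept == Seq(Predicate(Set(value))))
```

The filter on the metadata column is still evaluated, just not by the file source scan itself, which is why correctness (the `filter` test mentioned above) is the immediate concern and smarter file pruning is deferred to a follow-up.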
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]