guykhazma commented on a change in pull request #27157: [SPARK-30475][SQL] File source V2: Push data filters for file listing
URL: https://github.com/apache/spark/pull/27157#discussion_r365145483
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PruneFileSourcePartitions.scala
##########
@@ -92,11 +97,13 @@ private[sql] object PruneFileSourcePartitions extends Rule[LogicalPlan] {
     case op @ PhysicalOperation(projects, filters,
         v2Relation @ DataSourceV2ScanRelation(_, scan: FileScan, output))
         if filters.nonEmpty && scan.readDataSchema.nonEmpty =>
-      val partitionKeyFilters = getPartitionKeyFilters(scan.sparkSession,
-        v2Relation, scan.readPartitionSchema, filters, output)
-      if (partitionKeyFilters.nonEmpty) {
+      val (partitionKeyFilters, dataFilters) =
+        getPartitionKeyFiltersAndDataFilters(scan.sparkSession, v2Relation,
+          scan.readPartitionSchema, filters, output)
+      // The dataFilters are pushed down only once
+      if (partitionKeyFilters.nonEmpty || (dataFilters.nonEmpty && scan.dataFilters.isEmpty)) {
Review comment:
The reason for the condition
```
(dataFilters.nonEmpty && scan.dataFilters.isEmpty)
```
is that, unlike the `partitionFilters` (which are pushed down and do not need to be re-evaluated, so `partitionKeyFilters.nonEmpty` becomes `false` on the next iteration), the `dataFilters` remain non-empty on every iteration. The `scan.dataFilters.isEmpty` check is therefore needed to ensure the rule rewrites the scan only once, so we don't get a stack overflow from the rule re-applying forever.
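To make the fixed-point argument concrete, here is a minimal, self-contained sketch. `SimpleScan` and `pushDataFilters` are hypothetical stand-ins, not Spark's real `FileScan` or rule API; the point is only the shape of the guard.
```scala
// Minimal sketch (hypothetical types, not Spark's actual API) of why the
// `scan.dataFilters.isEmpty` guard is needed: the optimizer re-applies a
// rule until the plan stops changing, and data filters stay in the plan
// after being pushed down.
object PushOnceDemo extends App {
  case class SimpleScan(dataFilters: Seq[String])

  def pushDataFilters(filters: Seq[String], scan: SimpleScan): SimpleScan = {
    // Guard: push only once. `filters` stays non-empty on every pass, so
    // checking it alone would rewrite the scan forever.
    if (filters.nonEmpty && scan.dataFilters.isEmpty) {
      scan.copy(dataFilters = filters) // plan changed -> rule fires again
    } else {
      scan // plan unchanged -> the optimizer reaches a fixed point
    }
  }

  // Pass 1 pushes the filters; pass 2 hits the guard and leaves the plan
  // untouched, so the rewrite terminates instead of recursing.
  val once  = pushDataFilters(Seq("a > 1"), SimpleScan(Nil))
  val twice = pushDataFilters(Seq("a > 1"), once)
  assert(once == twice)
  println(s"fixed point reached: $once")
}
```
The partition-filter side does not need such a guard because pushed partition filters are pruned out of the plan, which makes `partitionKeyFilters.nonEmpty` false on the next pass by itself.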