GithubZhitao commented on a change in pull request #33650:
URL: https://github.com/apache/spark/pull/33650#discussion_r730662794



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/PushDownUtils.scala
##########
@@ -40,37 +40,43 @@ object PushDownUtils extends PredicateHelper {
   def pushFilters(
       scanBuilder: ScanBuilder,
       filters: Seq[Expression]): (Seq[sources.Filter], Seq[Expression]) = {
+    // A map from translated data source leaf node filters to original catalyst filter
+    // expressions. For a `And`/`Or` predicate, it is possible that the predicate is partially
+    // pushed down. This map can be used to construct a catalyst filter expression from the
+    // input filter, or a superset(partial push down filter) of the input filter.

Review comment:
       How can a DS V2 class (one that implements ScanBuilder) get the post-scan filters?
   I tried the SupportsPushDownFilters and SupportsPushDownV2Filters traits, but neither of them lets the post-scan filters be pushed down. Is this by design?
   If implementations could see all the filters, we could do more optimization specific to our own business; it would then be up to us to decide how to use these conditions. Wouldn't that be better?
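
   To make the question concrete, here is a minimal sketch (not this PR's code) of how a DS V2 source sees the pushed vs. post-scan split through `SupportsPushDownFilters`. The `MyScanBuilder`/`MyScan` names and the equality-only rule are hypothetical; the point is that the array returned from `pushFilters` is what Spark keeps as post-scan filters, while `pushedFilters` reports what the source accepted.

   ```scala
   import org.apache.spark.sql.connector.read.{Scan, ScanBuilder, SupportsPushDownFilters}
   import org.apache.spark.sql.sources.{EqualTo, Filter}
   import org.apache.spark.sql.types.{LongType, StructType}

   // Hypothetical Scan over a single long column; only here so the sketch compiles.
   class MyScan(pushed: Array[Filter]) extends Scan {
     override def readSchema(): StructType = new StructType().add("id", LongType)
   }

   class MyScanBuilder extends ScanBuilder with SupportsPushDownFilters {
     private var accepted: Array[Filter] = Array.empty

     // Spark hands the translated source filters to the ScanBuilder here.
     // Whatever this method returns is treated as the filters Spark itself
     // must still evaluate after the scan (the post-scan filters).
     override def pushFilters(filters: Array[Filter]): Array[Filter] = {
       // Keep only what this hypothetical source can evaluate: simple equality.
       val (supported, unsupported) = filters.partition {
         case _: EqualTo => true
         case _          => false
       }
       accepted = supported
       unsupported
     }

     // Reported back to Spark (and shown in the plan) as the pushed filters.
     override def pushedFilters(): Array[Filter] = accepted

     override def build(): Scan = new MyScan(accepted)
   }
   ```

   As I understand it, Spark may also re-evaluate a pushed filter itself when the pushdown is only a superset of the original predicate, which appears to be what the map added in this hunk is tracking.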





