huaxingao commented on pull request #33822:
URL: https://github.com/apache/spark/pull/33822#issuecomment-905595201


   I took a look at the v2 path. It seems to me that the filter pushdown logic in v2 is different from v1's.
   
   V2:
   `val (pushed, unSupported) = filters.partition(JDBCRDD.compileFilter(_, dialect).isDefined)`
   https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/jdbc/JDBCScanBuilder.scala#L51
   We push down only the supported filters; the unsupported ones are returned as the post-scan filters.
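   For illustration, here is a minimal, self-contained sketch of that v2 split. The `Filter` ADT is simplified and `compileFilter` is a hypothetical stand-in for `JDBCRDD.compileFilter(_, dialect)`; none of the names below are Spark's:
   ```scala
   // Hypothetical stand-in for JDBCRDD.compileFilter: returns Some(sql) only
   // when the filter can be compiled for the dialect, i.e. when it is supported.
   sealed trait Filter
   case class EqualTo(attr: String, value: Any) extends Filter
   case class Unsupported(name: String) extends Filter

   def compileFilter(f: Filter): Option[String] = f match {
     case EqualTo(attr, v) => Some(s"$attr = $v") // supported: compiles to SQL
     case _                => None                // unsupported: no SQL form
   }

   val filters: Seq[Filter] = Seq(EqualTo("id", 1), Unsupported("udf_pred"))

   // Supported filters are pushed down; the rest become post-scan filters.
   val (pushed, unSupported) = filters.partition(compileFilter(_).isDefined)
   // pushed      == Seq(EqualTo("id", 1))
   // unSupported == Seq(Unsupported("udf_pred"))
   ```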
   
   V1:
   `val (unhandledPredicates, pushedFilters, handledFilters) = selectFilters(relation.relation, candidatePredicates)`
   https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala#L388
   We push down `pushedFilters`, not `handledFilters`:
   https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala#L436
   It seems to me that we should push down `handledFilters`, not the translated filters (`pushedFilters`).
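   To make the three-way split concrete, here is a hedged sketch with simplified stand-ins: `translate` plays the role of `DataSourceStrategy.translateFilter` and `unhandled` the role of `BaseRelation.unhandledFilters`; the types below are hypothetical, not Spark's:
   ```scala
   case class Predicate(name: String) // stands in for a catalyst Expression
   case class SFilter(name: String)   // stands in for a sources.Filter

   // Only some predicates can be translated into source filters.
   def translate(p: Predicate): Option[SFilter] =
     if (p.name != "udf") Some(SFilter(p.name)) else None

   // The relation reports which translated filters it cannot fully evaluate.
   def unhandled(fs: Seq[SFilter]): Set[SFilter] =
     fs.filter(_.name == "partial").toSet

   val candidatePredicates = Seq(Predicate("eq"), Predicate("partial"), Predicate("udf"))

   val translated: Map[Predicate, SFilter] =
     candidatePredicates.flatMap(p => translate(p).map(p -> _)).toMap

   val pushedFilters: Seq[SFilter] = translated.values.toSeq    // every translatable filter
   val handledFilters: Set[SFilter] =
     pushedFilters.toSet -- unhandled(pushedFilters)            // subset the source fully handles
   val unhandledPredicates: Seq[Predicate] = candidatePredicates.filterNot { p =>
     translated.get(p).exists(handledFilters.contains)          // re-checked by Spark post-scan
   }

   // v1 hands the whole `pushedFilters` set (including "partial") to the scan,
   // while `handledFilters` only controls which predicates Spark re-evaluates.
   ```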