stczwd edited a comment on pull request #35669:
URL: https://github.com/apache/spark/pull/35669#issuecomment-1058842991


   >  Seems to me that data filters and partition filters are separated 
differently in V1 and V2 file sources and your optimization only work for V2 
file source?
   
   Yes, this only works with the v2 file source. It won't work with the v1 file 
source even if `spark.sql.parquet.filterPushdown.partition` is set to `true`. I 
did find that filter pushdown behaves differently between the v1 and v2 file 
sources, which is why I only considered the v2 file source here.
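
   For anyone trying to reproduce this, a minimal configuration sketch to make Spark take the v2 file source path is below. `spark.sql.sources.useV1SourceList` is an existing Spark SQL conf (setting it empty stops Parquet and other built-in formats from falling back to v1); the `filterPushdown.partition` flag is the one discussed in this PR, so whether it exists depends on this change being merged:

   ```properties
   # Force built-in file formats (parquet, orc, ...) onto the v2 DataSource path;
   # the default value includes them in the v1 fallback list.
   spark.sql.sources.useV1SourceList=

   # The flag from this PR; per the comment above it has no effect on v1 sources.
   spark.sql.parquet.filterPushdown.partition=true
   ```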


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

