cloud-fan commented on pull request #33584:
URL: https://github.com/apache/spark/pull/33584#issuecomment-893613577


   > Shall we simply add duplicated code to resolve the partition filters when 
pushing down Aggregation in V2 first? We can look back later and see whether we 
need to do a refactoring.
   
   What stops us from doing a refactor now to make the code clean? This is 
targeting Spark 3.3, so we have plenty of time to make a proper implementation 
for parquet aggregate pushdown. It's super weird that file source v2 does 
operator pushdown with 2 rules; can we unify them? We should try our best not to 
special-case file source v2, to prove that external v2 sources can implement 
the same features as file source v2.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

