beyond1920 commented on PR #8102:
URL: https://github.com/apache/hudi/pull/8102#issuecomment-1486571880

   > The original partition push down is not powerful enough, it can only 
filter out simple partition expressions, what is the corner case of the new 
push down for which we need a fallback then?
   The original partition push down is executed in the Flink framework, so it 
can actually handle more complex partition expressions, such as 
`MyUdf(part2) < 3` or `trim(part1) = 'A'`.
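
   To illustrate the distinction, here is a minimal Python sketch (not Hudi or Flink code; `my_udf`, `simple_pruner`, and `engine_filter` are hypothetical names): a connector-side pruner that only understands `column = literal` cannot evaluate predicates involving functions, while the engine, which can invoke the UDF, can.

   ```python
   def my_udf(value):
       # Hypothetical stand-in for the user-defined function MyUdf.
       return len(value)

   partitions = [
       {"part1": " A ", "part2": "ab"},
       {"part1": "B",   "part2": "abcdef"},
   ]

   def simple_pruner(parts, column, literal):
       """Connector-side pruning: only handles `column = literal`."""
       return [p for p in parts if p[column] == literal]

   def engine_filter(parts, predicate):
       """Engine-side filtering: can evaluate arbitrary expressions/UDFs."""
       return [p for p in parts if predicate(p)]

   # `trim(part1) = 'A'` is beyond the simple pruner (it cannot apply trim),
   # so it matches nothing here, while the engine evaluates it directly:
   kept_simple = simple_pruner(partitions, "part1", "A")
   kept_trim = engine_filter(partitions, lambda p: p["part1"].strip() == "A")
   # `MyUdf(part2) < 3` likewise requires calling the UDF:
   kept_udf = engine_filter(partitions, lambda p: my_udf(p["part2"]) < 3)
   ```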
   
   > Does the 1st way work too?
   No. By default, the Flink optimizer performs partition push down before 
filter push down.
   To change that ordering I would have to define a custom `FlinkBatchProgram` 
or `FlinkStreamProgram`. That is a property of the job's `TableEnvironment`, 
i.e. it belongs to the user's job code, so I cannot wrap this behavior inside 
the Hoodie connector.
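
   The constraint can be sketched abstractly (assumed names, not Flink's actual API): the optimizer owns a fixed list of rule phases built by the job environment, and a connector only implements the callback each phase invokes, so it has no way to reorder the phases from inside the connector.

   ```python
   class Optimizer:
       """Stand-in for the planner: the phase order is fixed by the job
       environment (cf. FlinkBatchProgram / FlinkStreamProgram), not by
       any individual table source."""

       def __init__(self):
           # Default order: partition push down runs before filter push down.
           self.phases = ["partition_push_down", "filter_push_down"]

       def optimize(self, connector):
           # The connector is only handed each phase in turn.
           return [connector.apply(phase) for phase in self.phases]

   class HoodieConnectorSketch:
       """Hypothetical connector: it can react to a phase, but it cannot
       change which phases run or in what order."""

       def apply(self, phase):
           return f"handled:{phase}"

   order = Optimizer().optimize(HoodieConnectorSketch())
   ```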

