[ https://issues.apache.org/jira/browse/FLINK-23663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17403725#comment-17403725 ]

Ingo Bürk commented on FLINK-23663:
-----------------------------------

After discussion with [~twalthr] we think the best way to go would be to add a 
new physical rule. This rule looks for non-PK filters that have been pushed 
into a source and, if a ChangelogNormalize follows, pushes them back out of 
the source, either into the Calc in front of the ChangelogNormalize (if one 
exists) or into a newly created Calc.
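
To make the scenario concrete, here is a small sketch against the public Table 
API (the table name, schema, and connector options are illustrative, and 
whether a filter actually ends up in the source depends on the connector 
implementing filter push-down). With the proposed rule, only the primary-key 
predicate would remain a candidate for push-down, while the non-PK predicate 
would stay in a Calc above the ChangelogNormalize:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ChangelogNormalizeFilterExample {
    public static void main(String[] args) {
        // Requires flink-table and the upsert-kafka connector on the classpath.
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Upsert source with a primary key; reading it requires a
        // ChangelogNormalize node in the streaming plan.
        tEnv.executeSql(
                "CREATE TABLE t ("
                        + "  k STRING,"
                        + "  a INT,"
                        + "  PRIMARY KEY (k) NOT ENFORCED"
                        + ") WITH ("
                        + "  'connector' = 'upsert-kafka',"
                        + "  'topic' = 't',"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',"
                        + "  'key.format' = 'raw',"
                        + "  'value.format' = 'json'"
                        + ")");

        // `k = 'key1'` only references the primary key and is safe to push
        // into the source; `a < 10` must be applied after ChangelogNormalize.
        System.out.println(tEnv.explainSql(
                "SELECT * FROM t WHERE k = 'key1' AND a < 10"));
    }
}
{code}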

Of course, if someone has suggestions or an opinion, that'd be welcome as well.

> Reduce state size in ChangelogNormalize through filter push down
> ----------------------------------------------------------------
>
>                 Key: FLINK-23663
>                 URL: https://issues.apache.org/jira/browse/FLINK-23663
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table SQL / Planner
>            Reporter: Timo Walther
>            Assignee: Ingo Bürk
>            Priority: Major
>
> {{ChangelogNormalize}} is an expensive stateful operation as it stores data 
> for each key. 
> Filters are generally not pushed through a {{ChangelogNormalize}} node, which 
> means that users have no way to at least limit the key space. Pushing 
> filters like {{a < 10}} into a source like {{upsert-kafka}} that is emitting 
> {{+I[key1, a=9]}} and {{-D[key1, a=10]}} is problematic, as the deletion would 
> be filtered out and lead to wrong results (see the sketch below). But limiting 
> filter push-down to the key space should be safe.
> Furthermore, the current implementation also seems to be wrong, as it pushes 
> any kind of filter through {{ChangelogNormalize}}, but only if the source 
> implements filter push-down.
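
To illustrate the failure mode from the description, here is a minimal, 
self-contained simulation (plain Java, not Flink code) of the 
last-value-per-key materialization that ChangelogNormalize effectively 
maintains; the record kinds and values mirror the example above:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class FilterPushDownDemo {

    /** Materializes an upsert changelog into a last-value-per-key table. */
    static Map<String, Integer> materialize(
            Object[][] changelog, boolean filterBeforeNormalize) {
        Map<String, Integer> state = new HashMap<>();
        for (Object[] record : changelog) {
            String kind = (String) record[0]; // "+I" insert, "-D" delete
            String key = (String) record[1];
            int a = (Integer) record[2];
            // A pushed-down `a < 10` drops records before normalization,
            // including deletions that do not satisfy the predicate.
            if (filterBeforeNormalize && !(a < 10)) {
                continue;
            }
            if ("-D".equals(kind)) {
                state.remove(key);
            } else {
                state.put(key, a);
            }
        }
        return state;
    }

    public static void main(String[] args) {
        // The changelog from the description: an insert for key1 with a=9,
        // followed by a deletion of key1 carrying a=10.
        Object[][] changelog = {
            {"+I", "key1", 9},
            {"-D", "key1", 10},
        };
        // Filter applied after normalization: key1 is correctly removed.
        System.out.println("no push-down: " + materialize(changelog, false));
        // Filter pushed into the source: the -D record is dropped, so key1
        // incorrectly survives with a=9.
        System.out.println("push-down:    " + materialize(changelog, true));
    }
}
{code}

A key-only filter such as {{k <> 'key1'}} does not have this problem: it drops 
all records of a key consistently, so the materialized state stays in sync 
with the changelog for the remaining keys.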



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
