[
https://issues.apache.org/jira/browse/FLINK-23663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Timo Walther closed FLINK-23663.
--------------------------------
    Fix Version/s: 1.15.0
                   1.14.0
       Resolution: Fixed
Fixed in master:
commit c1bf5da8696b0369a3475d9cda355d149c652cff
[table-planner] Push primary key filters through ChangelogNormalize
commit bc69c1c77ec94a57b3603c5eefcbddb8ce83ea64
[table-planner] Introduce TableFactoryHarness
Fixed in 1.14:
commit 5c96e5b985ecce6b39bb3913470f11a8e3a2ad83
[table-planner] Push primary key filters through ChangelogNormalize
commit d49eb8c22be4084fea318d1701abbbc23db72c4d
[table-planner] Introduce TableFactoryHarness
> Reduce state size in ChangelogNormalize through filter push down
> ----------------------------------------------------------------
>
> Key: FLINK-23663
> URL: https://issues.apache.org/jira/browse/FLINK-23663
> Project: Flink
> Issue Type: Improvement
> Components: Table SQL / Planner
> Reporter: Timo Walther
> Assignee: Ingo Bürk
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.14.0, 1.15.0
>
>
> {{ChangelogNormalize}} is an expensive stateful operation as it stores data
> for each key.
> Filters are generally not pushed through a {{ChangelogNormalize}} node,
> which means that users cannot even limit the key space. Pushing a filter
> like {{a < 10}} into a source such as {{upsert-kafka}} that emits
> {{+I[key1, a=9]}} and {{-D[key1, a=10]}} is problematic, because the
> deletion would be filtered out and lead to incorrect results. However,
> limiting the filter push-down to the key space should be safe.
> Furthermore, the current implementation also appears to be incorrect: it
> pushes any kind of filter through {{ChangelogNormalize}}, albeit only when
> the source implements filter push-down.
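For illustration, a minimal sketch of the safe versus unsafe case described above (table and column names are hypothetical, and the connector options are abbreviated):

{code:sql}
-- Hypothetical upsert source; 'k' is the primary key, 'a' a regular column.
CREATE TABLE orders (
  k STRING,
  a INT,
  PRIMARY KEY (k) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka'
  -- remaining connector options omitted
);

-- Safe to push below ChangelogNormalize: the predicate only restricts the
-- key space, so insertions and deletions for a given key are filtered
-- consistently and the changelog stays complete per key.
SELECT * FROM orders WHERE k = 'key1';

-- Not safe to push into the source: a key's +I row may satisfy the filter
-- (a=9) while its later -D row does not (a=10), so the deletion would be
-- dropped and the materialized result would be wrong.
SELECT * FROM orders WHERE a < 10;
{code}

This is only meant to illustrate the issue description; the actual planner rule operates on the primary-key columns of the source table, not on user-written predicates per se.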
--
This message was sent by Atlassian Jira
(v8.3.4#803005)