[ https://issues.apache.org/jira/browse/FLINK-29692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17718710#comment-17718710 ]

Charles Tan commented on FLINK-29692:
-------------------------------------

[~jark] your point makes sense. However, it can be cumbersome to devise 
workarounds for every use case that would benefit from an early-fire feature – 
imagine migrating 100 ksqlDB workloads to Flink. While many use cases can be 
covered by Flink's existing features, it would still be valuable for Flink to 
support early-fire windows with the window TVFs.

Out of curiosity, what are the difficult parts of supporting early fire on 
window TVFs? From what I understand, Flink already supports window triggers in 
the DataStream API, and the old windowing functions already supported this 
configuration. I am not very familiar with the implementation details of 
Flink's SQL operators, but if there were a list of tasks relevant to this 
feature, perhaps I could lend a hand with a few of them.
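For context, here is a sketch of the gap being discussed. The legacy GROUP BY 
group-window syntax honors the experimental early-fire planner options, while 
the newer window TVF syntax does not; the table and column names below are 
hypothetical, and the options are experimental, so this is an illustration 
rather than a supported recipe:

```sql
-- Experimental options honored by the legacy group-window aggregation:
SET 'table.exec.emit.early-fire.enabled' = 'true';
SET 'table.exec.emit.early-fire.delay' = '5 s';

-- Legacy group-window syntax: with the options above, partial results are
-- emitted every 5 s before the 1-minute event-time window closes.
SELECT
  TUMBLE_START(event_time, INTERVAL '1' MINUTE) AS window_start,
  COUNT(*) AS cnt
FROM orders
GROUP BY TUMBLE(event_time, INTERVAL '1' MINUTE);

-- Equivalent window TVF syntax, where early fire is not yet supported
-- (the subject of this issue):
SELECT window_start, COUNT(*) AS cnt
FROM TABLE(
  TUMBLE(TABLE orders, DESCRIPTOR(event_time), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end;
```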

> Support early/late fires for Windowing TVFs
> -------------------------------------------
>
>                 Key: FLINK-29692
>                 URL: https://issues.apache.org/jira/browse/FLINK-29692
>             Project: Flink
>          Issue Type: New Feature
>          Components: Table SQL / Planner
>    Affects Versions: 1.15.3
>            Reporter: Canope Nerda
>            Priority: Major
>
> I have cases where I need to 1) output data as soon as possible and 2) handle 
> late-arriving data to achieve eventual correctness. In the logic, I need to 
> perform window deduplication, which is based on Windowing TVFs, and according 
> to the source code, early/late fires are not yet supported in Windowing TVFs.
> Goal 1) contradicts goal 2). Without early/late fires, we have to compromise: 
> either live with fresh but incorrect data, or tolerate excess latency for 
> correctness.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)