Re: Always trigger calculation of a tumble window in Flink SQL

2018-11-09 Thread Fabian Hueske
Hi, SQL does not support any custom triggers or timers. In general, computations are performed when they are complete with respect to the watermarks (this applies to GROUP BY windows, OVER windows, windowed and time-versioned joins, etc.). Best, Fabian On Fri., 9 Nov 2018 at 05:08,

Re: Always trigger calculation of a tumble window in Flink SQL

2018-11-08 Thread yinhua.dai
I was able to write a single operator as you suggested, thank you. Then I saw ContinuousProcessingTimeTrigger in the Flink source code; it looks like what I am looking for. Is there a way to customize the SQL "TUMBLE" window to use this trigger instead of
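ContinuousProcessingTimeTrigger (a real Flink class) fires a window repeatedly at a fixed interval while the window is open, rather than only once at the end. As a rough, non-Flink sketch of that firing schedule, assuming interval-spaced timers starting after the window start plus a final fire at the window end:

```java
import java.util.ArrayList;
import java.util.List;

// Models the firing schedule a continuous processing-time trigger would
// produce for one tumbling window: a fire every `interval` ms while the
// window is open, plus a final fire at the window end. Illustrative only,
// not the actual Flink trigger implementation.
public class ContinuousFireSchedule {
    static List<Long> fireTimes(long windowStart, long windowEnd, long interval) {
        List<Long> fires = new ArrayList<>();
        long next = windowStart + interval;   // first timer after the start
        while (next < windowEnd) {
            fires.add(next);                  // periodic early fire (partial result)
            next += interval;
        }
        fires.add(windowEnd);                 // final fire when the window closes
        return fires;
    }

    public static void main(String[] args) {
        // A 1-minute tumble window fired every 20 seconds (timestamps in ms).
        System.out.println(fireTimes(0L, 60_000L, 20_000L));  // [20000, 40000, 60000]
    }
}
```

Note that, as Fabian points out elsewhere in the thread, there is no hook to attach such a trigger to a SQL TUMBLE window; this would only apply on the DataStream API level.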

Re: Always trigger calculation of a tumble window in Flink SQL

2018-11-08 Thread yinhua.dai
Hi Piotr, Thank you for your explanation. I basically understand your meaning: as far as I understand, we can write a custom window assigner and a custom trigger, and we can register a timer when the window processes elements. But how can we register a timer when no elements are received during

Re: Always trigger calculation of a tumble window in Flink SQL

2018-11-08 Thread Piotr Nowojski
Re-adding user mailing list to CC Hi, > I basically understand your meaning, as far as my understanding, we can write > a custom window assigner and custom trigger, and we can register the timer > when the window process elements. No, I was actually suggesting writing your own operator to

Re: Always trigger calculation of a tumble window in Flink SQL

2018-11-07 Thread Piotr Nowojski
Hi, You would have to register timers (probably based on event time). Your operator would be a vastly simplified window operator, where for a given window you keep the record emitted from your SQL query, something like: MapState emittedRecords; // map window start -> emitted record When you process elements,
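The operator sketched above can be modeled in plain Java (no Flink API; the class and method names below are illustrative stand-ins, with a HashMap in place of Flink's MapState and explicit onTimer calls in place of registered timers):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java model of the simplified window operator described above:
// remember the last record the SQL query emitted per window start, and on
// each timer fire emit something even if that window saw no data.
public class ReEmittingOperator {
    // stand-in for Flink's MapState: window start -> emitted record
    private final Map<Long, String> emittedRecords = new HashMap<>();
    private final List<String> output = new ArrayList<>();
    private final long windowSize;

    public ReEmittingOperator(long windowSize) { this.windowSize = windowSize; }

    // called for each result row arriving from the SQL aggregation
    public void processElement(long timestamp, String record) {
        long windowStart = timestamp - (timestamp % windowSize);
        emittedRecords.put(windowStart, record);
    }

    // called by a registered timer at the end of each window; if the SQL
    // query produced nothing, carry the previous window's record forward
    public void onTimer(long windowStart) {
        String record = emittedRecords.get(windowStart);
        if (record == null) {
            record = emittedRecords.get(windowStart - windowSize);
            if (record == null) return;          // nothing to re-emit yet
            emittedRecords.put(windowStart, record);
        }
        output.add(record);
    }

    public List<String> getOutput() { return output; }

    public static void main(String[] args) {
        ReEmittingOperator op = new ReEmittingOperator(60_000L);
        op.processElement(10_000L, "count=3");   // record in window [0, 60s)
        op.onTimer(0L);                          // fires with the fresh result
        op.onTimer(60_000L);                     // empty window: re-emits it
        System.out.println(op.getOutput());      // [count=3, count=3]
    }
}
```

In a real Flink job this logic would live in a ProcessFunction downstream of the SQL result stream, with the timers registered via the timer service rather than called directly.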

Re: Always trigger calculation of a tumble window in Flink SQL

2018-11-06 Thread yinhua.dai
Hi Piotr, Can you elaborate more on the solution with the custom operator? I don't think there will be any records from the SQL query if no input data is coming in within the time window, even if we convert the result to a DataStream. -- Sent from:

Re: Always trigger calculation of a tumble window in Flink SQL

2018-11-06 Thread Piotr Nowojski
Hi, You might come up with some magical self-join that could do the trick: join/window-join the aggregation result with itself and then aggregate it again. I don't know if that's possible (probably you would need to write a custom aggregate function), and it would be inefficient. It will be

Always trigger calculation of a tumble window in Flink SQL

2018-11-05 Thread yinhua.dai
We have a requirement to always trigger a calculation on a timer basis, e.g. every 1 minute. *If there are records coming into Flink during the time window, then calculate them in the normal way, i.e. aggregate each record and getResult() at the end of the time window.* *If there are no
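The requirement above can be sketched as a toy aggregator whose fire() runs on every 1-minute tick: if records arrived in the interval, emit their aggregate (a sum here as a placeholder), otherwise fall back to the last emitted result. All names are illustrative, not Flink API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the requirement: fire() is invoked unconditionally on each
// timer tick, emitting either the fresh aggregate or the previous result.
public class AlwaysFireAggregator {
    private long sum = 0;
    private boolean sawRecord = false;
    private long lastResult = 0;
    private final List<Long> emitted = new ArrayList<>();

    public void add(long value) {        // per incoming record
        sum += value;
        sawRecord = true;
    }

    public void fire() {                 // timer tick: always emits something
        long result = sawRecord ? sum : lastResult;
        emitted.add(result);
        lastResult = result;
        sum = 0;
        sawRecord = false;
    }

    public List<Long> getEmitted() { return emitted; }

    public static void main(String[] args) {
        AlwaysFireAggregator agg = new AlwaysFireAggregator();
        agg.add(2); agg.add(3);
        agg.fire();                      // interval with data -> emits 5
        agg.fire();                      // empty interval -> re-emits 5
        System.out.println(agg.getEmitted());   // [5, 5]
    }
}
```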