[ 
https://issues.apache.org/jira/browse/FLINK-5990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937869#comment-15937869
 ] 

ASF GitHub Bot commented on FLINK-5990:
---------------------------------------

Github user fhueske commented on the issue:

    https://github.com/apache/flink/pull/3585
  
    I think you can achieve exactly the same result with both approaches, but a single timer has the benefit of reduced overhead. Suppose you have records `(id, ts)`
    
    ```
    (1, 1), (2, 2), (3, 5), WM 4, (4, 7), (5, 10), (6, 6), WM 8
    ```
    
    Using timestamp timers would result in
    ```
    processElement((1,1))
    processElement((2,2))
    processElement((3,5))
    onTimer(1)
    onTimer(2)
    processElement((4,7))
    processElement((5,10))
    processElement((6,6))
    onTimer(5)
    onTimer(6)
    onTimer(7)
    ```
    
    Where `onTimer(1)` and `onTimer(2)` (or `onTimer(5)`, `onTimer(6)`, and `onTimer(7)`) could share access to the `MapState`.
    
    Using a single watermark timer we would have
    ```
    processElement((1,1))
    processElement((2,2))
    processElement((3,5))
    onTimer(_) // emit all records with ts < current watermark (= 4): (1,1) and (2,2)
    processElement((4,7))
    processElement((5,10))
    processElement((6,6))
    onTimer(_) // emit all records with ts < current watermark (= 8): (3,5), (6,6), (4,7)
    ```
    
    Since we can also check against the current watermark in this case, we can avoid emitting records too early. It should also be possible to integrate an allowed lateness parameter into this approach in the future.
    The benefit of processing multiple rows (for different timestamps) in a single `onTimer()` call is that we can iterate the list of `Long` keys just once.
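    A minimal, self-contained sketch of the single-watermark-timer bookkeeping described above, in plain Java with a `TreeMap` standing in for Flink's keyed `MapState`; the class and method names (`WatermarkBuffer`, `onTimer`) are illustrative, not actual Flink API:

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.SortedMap;
    import java.util.TreeMap;

    // Buffers (id, ts) records per timestamp and emits them in timestamp
    // order once a watermark passes them -- a stand-in for the MapState
    // plus single-timer logic sketched in the comment above.
    class WatermarkBuffer {
        private final TreeMap<Long, List<Integer>> buffer = new TreeMap<>();

        void processElement(int id, long ts) {
            buffer.computeIfAbsent(ts, k -> new ArrayList<>()).add(id);
        }

        // Fired once per watermark: emit all records with ts < watermark,
        // iterating the sorted timestamp keys just once.
        List<String> onTimer(long watermark) {
            List<String> emitted = new ArrayList<>();
            SortedMap<Long, List<Integer>> ready = buffer.headMap(watermark); // ts < watermark
            for (var e : ready.entrySet())
                for (int id : e.getValue())
                    emitted.add("(" + id + "," + e.getKey() + ")");
            ready.clear(); // drop emitted records from the buffered state
            return emitted;
        }
    }

    public class Main {
        public static void main(String[] args) {
            WatermarkBuffer b = new WatermarkBuffer();
            b.processElement(1, 1); b.processElement(2, 2); b.processElement(3, 5);
            System.out.println(b.onTimer(4)); // [(1,1), (2,2)]
            b.processElement(4, 7); b.processElement(5, 10); b.processElement(6, 6);
            System.out.println(b.onTimer(8)); // [(3,5), (6,6), (4,7)]
        }
    }
    ```

    Replaying the example stream `(1,1), (2,2), (3,5), WM 4, (4,7), (5,10), (6,6), WM 8` through this sketch reproduces the two emission batches shown in the trace above.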


> Add [partitioned] event time OVER ROWS BETWEEN x PRECEDING aggregation to SQL
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-5990
>                 URL: https://issues.apache.org/jira/browse/FLINK-5990
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API & SQL
>            Reporter: sunjincheng
>            Assignee: sunjincheng
>
> The goal of this issue is to add support for OVER ROWS aggregations on event 
> time streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT 
>   a, 
>   SUM(b) OVER (PARTITION BY c ORDER BY rowTime() ROWS BETWEEN 2 PRECEDING AND 
> CURRENT ROW) AS sumB,
>   MIN(b) OVER (PARTITION BY c ORDER BY rowTime() ROWS BETWEEN 2 PRECEDING AND 
> CURRENT ROW) AS minB
> FROM myStream
> {code}
> The following restrictions should initially apply:
> - All OVER clauses in the same SELECT clause must be exactly the same.
> - The PARTITION BY clause is required
> - The ORDER BY clause may only have rowTime() as parameter. rowTime() is a 
> parameterless scalar function that just indicates event time mode.
> - UNBOUNDED PRECEDING is not supported (see FLINK-5803)
> - FOLLOWING is not supported.
> The restrictions will be resolved in follow up issues. If we find that some 
> of the restrictions are trivial to address, we can add the functionality in 
> this issue as well.
> This issue includes:
> - Design of the DataStream operator to compute OVER ROW aggregates
> - Translation from Calcite's RelNode representation (LogicalProject with 
> RexOver expression).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
