[ 
https://issues.apache.org/jira/browse/FLINK-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15940830#comment-15940830
 ] 

ASF GitHub Bot commented on FLINK-5654:
---------------------------------------

Github user rtudoran commented on the issue:

    https://github.com/apache/flink/pull/3590
  
    @fhueske @sunjincheng121 @hongyuhong @stefanobortoli 
    
    I have run a test to compare the 3 approaches:
    - window based #3550
    - ProcessFunction based, with events managed in ValueState[Queue] - this PR
    - ProcessFunction based, with events managed in MapState[Long,JList] #3607
    
    The simple benchmark that I run generates events 1 ms apart (a 5-field tuple like the one we used in the tests). There are 2 scenarios, and in both I run a simple count over the window contents.
    
    Scenario 1)
    
    2 second window (~2000 events in a window) - 100K events in total generated
    Window based solution: 113839 ms
    Process based (with Queue): 111792 ms
    Process based (with MapState): 110533 ms
    
    Scenario 2)
    
    10 second window (~10000 events in a window) - 200K events in total generated
    Window based solution: 218399 ms
    Process based (with Queue): 217343 ms
    Process based (with MapState): 217657 ms
    
    I would say that the approaches are similar in performance (with a small advantage for the ProcessFunction-based ones). Regarding the two approaches for handling data in the process functions, I would say that the price of serializing/deserializing the whole list of events is roughly matched by serializing/deserializing the timestamp keys plus independently deserializing the events that need to be removed. Given that the performance is similar, I personally believe the Queue-based approach is preferable because we actually gain something (namely, the order of the events), which will be helpful in extending the implementation for full SQL.
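The queue-based retention discussed above can be illustrated with a minimal, self-contained sketch (plain Java, no Flink dependencies; the class and method names are hypothetical and not taken from the PR). Because events arrive in processing-time order, a single ordered queue lets us evict expired elements from the head and stop at the first still-live event:

```java
import java.util.ArrayDeque;

// Hypothetical sketch of queue-based retention: events arrive in
// processing-time order, so an ordered queue is enough to maintain
// the set of events inside the bounded range.
public class QueueRetention {
    private final long windowMillis;
    private final ArrayDeque<long[]> queue = new ArrayDeque<>(); // {timestamp, value}

    public QueueRetention(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Add an event, evict expired ones, and return the number of events
    // still inside the range (the "simple count" used in the benchmark).
    public int addAndCount(long timestamp, long value) {
        queue.addLast(new long[]{timestamp, value});
        // The queue is ordered by arrival time, so eviction can stop
        // at the first event that is still inside the window.
        while (!queue.isEmpty() && queue.peekFirst()[0] <= timestamp - windowMillis) {
            queue.pollFirst();
        }
        return queue.size();
    }
}
```

The MapState[Long,JList] variant would instead key events by timestamp and delete expired keys individually; the sketch above shows the property the comment argues for, i.e. the queue preserves event order for free.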



> Add processing time OVER RANGE BETWEEN x PRECEDING aggregation to SQL
> ---------------------------------------------------------------------
>
>                 Key: FLINK-5654
>                 URL: https://issues.apache.org/jira/browse/FLINK-5654
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API & SQL
>            Reporter: Fabian Hueske
>            Assignee: radu
>
> The goal of this issue is to add support for OVER RANGE aggregations on 
> processing time streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT 
>   a, 
>   SUM(b) OVER (PARTITION BY c ORDER BY procTime() RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW) AS sumB,
>   MIN(b) OVER (PARTITION BY c ORDER BY procTime() RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW) AS minB
> FROM myStream
> {code}
> The following restrictions should initially apply:
> - All OVER clauses in the same SELECT clause must be exactly the same.
> - The PARTITION BY clause is optional (no partitioning results in single 
> threaded execution).
> - The ORDER BY clause may only have procTime() as parameter. procTime() is a 
> parameterless scalar function that just indicates processing time mode.
> - UNBOUNDED PRECEDING is not supported (see FLINK-5657)
> - FOLLOWING is not supported.
> The restrictions will be resolved in follow up issues. If we find that some 
> of the restrictions are trivial to address, we can add the functionality in 
> this issue as well.
> This issue includes:
> - Design of the DataStream operator to compute OVER RANGE aggregates
> - Translation from Calcite's RelNode representation (LogicalProject with 
> RexOver expression).
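The semantics of the example query can be sketched per partition in plain Java (a hypothetical illustration of what the operator computes, not the Flink implementation itself): for each incoming row, aggregate over all rows whose processing timestamp lies within the preceding range, including the current row.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical per-partition illustration of
// "RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW":
// every row within [ts - range, ts] contributes to the aggregate.
public class OverRangeAgg {
    private final long rangeMillis;
    private final Deque<long[]> rows = new ArrayDeque<>(); // {timestamp, b}

    public OverRangeAgg(long rangeMillis) {
        this.rangeMillis = rangeMillis;
    }

    // Adds the new row, evicts rows outside the range, and returns
    // {SUM(b), MIN(b)} over the retained rows (sumB and minB above).
    public long[] process(long timestamp, long b) {
        rows.addLast(new long[]{timestamp, b});
        // RANGE x PRECEDING keeps rows with key >= current key - x,
        // so only strictly older rows are evicted.
        while (!rows.isEmpty() && rows.peekFirst()[0] < timestamp - rangeMillis) {
            rows.pollFirst();
        }
        long sum = 0;
        long min = Long.MAX_VALUE;
        for (long[] row : rows) {
            sum += row[1];
            min = Math.min(min, row[1]);
        }
        return new long[]{sum, min};
    }
}
```

In the actual operator the retained rows would live in keyed state (per PARTITION BY c) rather than a field, and the aggregates would be maintained incrementally rather than recomputed per row.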



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
