[ https://issues.apache.org/jira/browse/FLINK-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938544#comment-15938544 ]

ASF GitHub Bot commented on FLINK-5654:
---------------------------------------

Github user rtudoran commented on a diff in the pull request:

    https://github.com/apache/flink/pull/3590#discussion_r107695656
  
    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamOverAggregate.scala ---
    @@ -119,6 +150,57 @@ class DataStreamOverAggregate(
     
       }
     
    +  def createTimeBoundedProcessingTimeOverWindow(inputDS: DataStream[Row]): DataStream[Row] = {
    +
    +    val overWindow: Group = logicWindow.groups.get(0)
    +    val partitionKeys: Array[Int] = overWindow.keys.toArray
    +    val namedAggregates: Seq[CalcitePair[AggregateCall, String]] = generateNamedAggregates
    +
    +    val index = overWindow.lowerBound.getOffset.asInstanceOf[RexInputRef].getIndex
    +    val count = input.getRowType().getFieldCount()
    +    val lowerboundIndex = index - count
    +
    +    val time_boundary = logicWindow.constants.get(lowerboundIndex).getValue2 match {
    +      case _: java.math.BigDecimal => logicWindow.constants.get(lowerboundIndex)
    +        .getValue2.asInstanceOf[java.math.BigDecimal].longValue()
    +      case _ => throw new TableException("OVER Window boundaries must be numeric")
    +    }
    +
    +    // get the output types
    +    val rowTypeInfo = FlinkTypeFactory.toInternalRowTypeInfo(getRowType).asInstanceOf[RowTypeInfo]
    +
    +    val result: DataStream[Row] =
    +        // partitioned aggregation
    +        if (partitionKeys.nonEmpty) {
    +
    +          val processFunction = AggregateUtil.CreateTimeBoundedProcessingOverProcessFunction(
    +            namedAggregates,
    +            inputType,
    +            time_boundary)
    +
    +          inputDS
    +          .keyBy(partitionKeys: _*)
    +          .process(processFunction)
    +          .returns(rowTypeInfo)
    +          .name(aggOpName)
    +          .asInstanceOf[DataStream[Row]]
    +        } else { // non-partitioned aggregation
    +          val processFunction = AggregateUtil.CreateTimeBoundedProcessingOverProcessFunction(
    --- End diff --
    
    @fhueske @sunjincheng121 
    FYI, I also tested the serialization. I have 3 cases:
    1) Serializing/deserializing 1M Long values independently.
    On a large-memory server:
    Serialization of 1M Longs: 1189
    DeSerialization of 1M Longs: 3174
    On a small laptop:
    Serialization of 1M Longs: 3038
    DeSerialization of 1M Longs: 9635
    
    2) Serializing/deserializing 1M Long values all kept in a Queue.
    On a large-memory server:
    Serialization of blob: 263
    DeSerialization of blob: 161
    On a small laptop:
    Serialization of blob: 1498
    DeSerialization of blob: 435
    
    3) Serializing/deserializing 1M Tuple10<Long, Long, ...> values all kept in one queue.
    On the server:
    Serialization of blob with large Tuples: 7309
    DeSerialization of blob with large Tuples: 3569
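    
    For reference, a minimal sketch of the kind of harness behind these measurements (the names, the use of plain Java serialization, and the millisecond timing are assumptions, not the exact test code):
    
    ```scala
    import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}
    import java.util.{ArrayDeque => JQueue}
    
    object SerializationBench {
    
      // wall-clock timing in milliseconds, printed in the same style as the numbers above
      def time(label: String)(body: => Unit): Unit = {
        val start = System.currentTimeMillis()
        body
        println(s"$label: ${System.currentTimeMillis() - start}")
      }
    
      def main(args: Array[String]): Unit = {
        val n = 1000000
    
        // Case 1: serialize 1M Long values independently
        time("Serialization of 1M Longs") {
          val out = new ObjectOutputStream(new ByteArrayOutputStream())
          var i = 0
          while (i < n) { out.writeObject(java.lang.Long.valueOf(i)); i += 1 }
          out.close()
        }
    
        // Case 2: keep all values in one queue and serialize it as a single blob
        val queue = new JQueue[java.lang.Long](n)
        var j = 0
        while (j < n) { queue.add(java.lang.Long.valueOf(j)); j += 1 }
    
        val buffer = new ByteArrayOutputStream()
        time("Serialization of blob") {
          val out = new ObjectOutputStream(buffer)
          out.writeObject(queue)
          out.close()
        }
        time("DeSerialization of blob") {
          val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
          in.readObject()
          in.close()
        }
      }
    }
    ```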
    
    
    What we can conclude:
    1) Even though we serialize only Longs, doing it over this many values takes a large amount of time.
    2) Keeping these Longs in one object (e.g., one queue, which still preserves the order) offers much higher performance.
    3) If we add to the MapState example the individual serialization/deserialization of the actual objects, the time becomes comparable to or greater than serializing and deserializing the whole object structure in one piece.
    4) To avoid deserializing everything, we can also keep the data in a MapState, which we access based on the order from the queue. What do you think?
    ...I believe this offers the highest performance, but it pays the price of keeping the Long values duplicated.
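    
    A rough sketch of that last variant (names are mine; it assumes the keyed MapState/ValueState facilities of the ProcessFunction API, and it leaves out timestamp collisions and the actual aggregate bookkeeping for brevity):
    
    ```scala
    import java.util.{ArrayDeque => JQueue}
    
    import org.apache.flink.api.common.state.{MapState, MapStateDescriptor, ValueState, ValueStateDescriptor}
    import org.apache.flink.configuration.Configuration
    import org.apache.flink.streaming.api.functions.ProcessFunction
    import org.apache.flink.types.Row
    import org.apache.flink.util.Collector
    
    class MapStateOverWindowSketch(timeBoundary: Long) extends ProcessFunction[Row, Row] {
    
      // rows indexed by arrival time; only the entries we touch get deserialized
      private var rowState: MapState[java.lang.Long, Row] = _
      // arrival order of the keys (the duplicated Longs mentioned above)
      private var order: ValueState[JQueue[java.lang.Long]] = _
    
      override def open(parameters: Configuration): Unit = {
        rowState = getRuntimeContext.getMapState(
          new MapStateDescriptor[java.lang.Long, Row]("rows", classOf[java.lang.Long], classOf[Row]))
        order = getRuntimeContext.getState(
          new ValueStateDescriptor[JQueue[java.lang.Long]]("order", classOf[JQueue[java.lang.Long]]))
      }
    
      override def processElement(
          input: Row,
          ctx: ProcessFunction[Row, Row]#Context,
          out: Collector[Row]): Unit = {
    
        val now = ctx.timerService().currentProcessingTime()
        val queue = Option(order.value()).getOrElse(new JQueue[java.lang.Long]())
    
        // evict expired rows: the queue gives the order, the MapState the data,
        // and only the evicted entries are actually read
        while (!queue.isEmpty && queue.peek() <= now - timeBoundary) {
          val expiredKey = queue.poll()
          // ... retract rowState.get(expiredKey) from the aggregates here
          rowState.remove(expiredKey)
        }
    
        rowState.put(now, input)
        queue.add(now)
        order.update(queue)
    
        // ... accumulate input into the aggregates and emit the updated row
      }
    }
    ```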
    



> Add processing time OVER RANGE BETWEEN x PRECEDING aggregation to SQL
> ---------------------------------------------------------------------
>
>                 Key: FLINK-5654
>                 URL: https://issues.apache.org/jira/browse/FLINK-5654
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API & SQL
>            Reporter: Fabian Hueske
>            Assignee: radu
>
> The goal of this issue is to add support for OVER RANGE aggregations on 
> processing time streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT 
>   a, 
>   SUM(b) OVER (PARTITION BY c ORDER BY procTime() RANGE BETWEEN INTERVAL '1' 
> HOUR PRECEDING AND CURRENT ROW) AS sumB,
>   MIN(b) OVER (PARTITION BY c ORDER BY procTime() RANGE BETWEEN INTERVAL '1' 
> HOUR PRECEDING AND CURRENT ROW) AS minB
> FROM myStream
> {code}
> The following restrictions should initially apply:
> - All OVER clauses in the same SELECT clause must be exactly the same.
> - The PARTITION BY clause is optional (no partitioning results in single 
> threaded execution).
> - The ORDER BY clause may only have procTime() as a parameter. procTime() is a 
> parameterless scalar function that just indicates processing time mode.
> - UNBOUNDED PRECEDING is not supported (see FLINK-5657)
> - FOLLOWING is not supported.
> The restrictions will be resolved in follow up issues. If we find that some 
> of the restrictions are trivial to address, we can add the functionality in 
> this issue as well.
> This issue includes:
> - Design of the DataStream operator to compute OVER RANGE aggregates
> - Translation from Calcite's RelNode representation (LogicalProject with 
> RexOver expression).
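>
> For illustration, a query like the one above would be submitted roughly as follows. This is a sketch only: the surrounding setup, the name OverRangeExample, and the conversion via toAppendStream are assumptions against the Table API of this development line, not code from this issue.
> {code}
> import org.apache.flink.streaming.api.scala._
> import org.apache.flink.table.api.TableEnvironment
> import org.apache.flink.table.api.scala._
> import org.apache.flink.types.Row
> 
> object OverRangeExample {
>   def main(args: Array[String]): Unit = {
>     val env = StreamExecutionEnvironment.getExecutionEnvironment
>     val tEnv = TableEnvironment.getTableEnvironment(env)
> 
>     // hypothetical input stream with fields a, b, c
>     val input = env.fromElements((1, 10L, "x"), (2, 20L, "x"), (3, 30L, "y"))
>     tEnv.registerDataStream("myStream", input, 'a, 'b, 'c)
> 
>     // single OVER RANGE aggregate, for brevity
>     val result = tEnv.sql(
>       "SELECT a, SUM(b) OVER (PARTITION BY c ORDER BY procTime() " +
>         "RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW) AS sumB " +
>         "FROM myStream")
> 
>     result.toAppendStream[Row].print()
>     env.execute()
>   }
> }
> {code}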



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
