[
https://issues.apache.org/jira/browse/FLINK-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937932#comment-15937932
]
ASF GitHub Bot commented on FLINK-5654:
---------------------------------------
Github user rtudoran commented on the issue:
https://github.com/apache/flink/pull/3590
@fhueske @sunjincheng121
I have done another commit (I actually made it last evening, but the machine's network was down so I could not push it). What I did was:
1) Addressed the formatting issues (I hope I did not miss (too) many).
2) Added some new tests, including for processing time. I did not find the
reference to what @fhueske mentioned, but I created a custom source that emits
events 1 second apart. This gives us a good framework for events arriving 1
second apart in processing time, so we can run any test and validate. I think
this is quite relevant.
3) I did update the processing functions. However, I am still using a
`ValueState[Queue[JTuple2[Long, Row]]]` for buffering the events. As I
mentioned previously, I strongly believe this is the better approach: having
the events sorted by their order is an advantage that we should not lose,
otherwise we will pay a high price later (sorting, limit, top, distinct, ...).
We pay some price for serialization, but since we serialize one full object it
should be relatively cheap compared to serializing independent objects (as in
a hash map, IMHO).
4) I kept the partitioned/non-partitioned cases separate, per Fabian's
initial argument (I think it is worth paying the price of an extra class), so
as not to add extra operators that need to be maintained, given dedicated
resources, monitored for liveness, and redeployed in case of failures.
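To illustrate the reasoning in point 3, here is a hypothetical, simplified sketch (not the actual PR code, and outside of Flink): a plain in-memory queue stands in for the `ValueState[Queue[JTuple2[Long, Row]]]`, and a `long` payload stands in for `Row`. Because events are appended in processing-time order, the queue stays sorted by timestamp, so eviction for the RANGE window is a dequeue from the head and no explicit sort is needed. The exact window-boundary semantics (inclusive vs. exclusive) here are an assumption for illustration only.

```java
import java.util.ArrayDeque;

public class RangeBuffer {
    private static class Event {
        final long timestamp;
        final long payload;
        Event(long timestamp, long payload) {
            this.timestamp = timestamp;
            this.payload = payload;
        }
    }

    private final ArrayDeque<Event> buffer = new ArrayDeque<>();
    private final long rangeMillis;

    public RangeBuffer(long rangeMillis) {
        this.rangeMillis = rangeMillis;
    }

    // Events arrive in processing-time order, so appending at the tail
    // keeps the queue sorted by timestamp.
    public void add(long timestamp, long payload) {
        buffer.addLast(new Event(timestamp, payload));
    }

    // Evict events older than the RANGE interval (boundary handling is a
    // simplifying assumption), then aggregate the remaining window contents.
    public long sumWithin(long now) {
        while (!buffer.isEmpty() && buffer.peekFirst().timestamp <= now - rangeMillis) {
            buffer.removeFirst();
        }
        long sum = 0;
        for (Event e : buffer) {
            sum += e.payload;
        }
        return sum;
    }
}
```

The point being argued is that serializing this one queue object per state access, while keeping it ordered for free, avoids the later cost of re-sorting that an unordered structure (e.g. a map of independently serialized entries) would incur.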
> Add processing time OVER RANGE BETWEEN x PRECEDING aggregation to SQL
> ---------------------------------------------------------------------
>
> Key: FLINK-5654
> URL: https://issues.apache.org/jira/browse/FLINK-5654
> Project: Flink
> Issue Type: Sub-task
> Components: Table API & SQL
> Reporter: Fabian Hueske
> Assignee: radu
>
> The goal of this issue is to add support for OVER RANGE aggregations on
> processing time streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT
>   a,
>   SUM(b) OVER (PARTITION BY c ORDER BY procTime()
>     RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW) AS sumB,
>   MIN(b) OVER (PARTITION BY c ORDER BY procTime()
>     RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW) AS minB
> FROM myStream
> {code}
> The following restrictions should initially apply:
> - All OVER clauses in the same SELECT clause must be exactly the same.
> - The PARTITION BY clause is optional (no partitioning results in single
> threaded execution).
> - The ORDER BY clause may only have procTime() as parameter. procTime() is a
> parameterless scalar function that just indicates processing time mode.
> - UNBOUNDED PRECEDING is not supported (see FLINK-5657)
> - FOLLOWING is not supported.
> The restrictions will be resolved in follow up issues. If we find that some
> of the restrictions are trivial to address, we can add the functionality in
> this issue as well.
> This issue includes:
> - Design of the DataStream operator to compute OVER RANGE aggregates
> - Translation from Calcite's RelNode representation (LogicalProject with
> RexOver expression).
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)