[ 
https://issues.apache.org/jira/browse/FLINK-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15947513#comment-15947513
 ] 

ASF GitHub Bot commented on FLINK-5654:
---------------------------------------

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/3641#discussion_r108720517
  
    --- Diff: flink-libraries/flink-table/pom.xml ---
    @@ -140,6 +140,20 @@ under the License.
                        <version>${project.version}</version>
                        <scope>test</scope>
                </dependency>
    +           <dependency>
    +                   <groupId>org.apache.flink</groupId>
    +                   <artifactId>flink-streaming-java_2.10</artifactId>
    +                   <version>${project.version}</version>
    +                   <type>test-jar</type>
    +                   <scope>test</scope>
    +           </dependency>
    +           <dependency>
    +                   <groupId>org.apache.flink</groupId>
    +                   <artifactId>flink-runtime_2.10</artifactId>
    +                   <version>${project.version}</version>
    +                   <scope>test</scope>
    +                   <type>test-jar</type>
    +           </dependency>
    --- End diff --
    
    Yes, the dependency is required for the test harness class, which allows a
    test to control processing time. This is needed to properly test
    processing-time operators whose semantics depend on time (not only sorting
    by time, which is implicitly true for processing time). The processing-time
    OVER RANGE window groups data by processing time. If we want to test this
    without manual timing (which is not good unit-test practice), we need the
    test harness. More operators that require this dependency will be added in
    the future.
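
    As a rough illustration (not code from this PR), this is how the harness
    from the flink-streaming-java test-jar lets a unit test drive processing
    time explicitly. The operator instance below is only a placeholder for the
    operator under test; the relevant calls are setProcessingTime(),
    processElement(), and getOutput().

    import java.util.concurrent.ConcurrentLinkedQueue;

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
    import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
    import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;

    public class ProcTimeHarnessSketch {

        // 'overRangeOperator' only stands in for the operator under test;
        // it is not a class from this PR.
        public static void drive(
                OneInputStreamOperator<Tuple2<String, Long>, Tuple2<String, Long>> overRangeOperator)
                throws Exception {

            OneInputStreamOperatorTestHarness<Tuple2<String, Long>, Tuple2<String, Long>> harness =
                    new OneInputStreamOperatorTestHarness<>(overRangeOperator);
            harness.open();

            // Processing time is fully under the test's control: no sleeps, no wall clock.
            harness.setProcessingTime(1000L);
            harness.processElement(new StreamRecord<>(Tuple2.of("c1", 1L)));

            // Jump past the 1 hour RANGE boundary before the next element arrives.
            harness.setProcessingTime(1000L + 60 * 60 * 1000L + 1);
            harness.processElement(new StreamRecord<>(Tuple2.of("c1", 2L)));

            // The emitted records can now be asserted deterministically.
            ConcurrentLinkedQueue<Object> output = harness.getOutput();

            harness.close();
        }
    }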


> Add processing time OVER RANGE BETWEEN x PRECEDING aggregation to SQL
> ---------------------------------------------------------------------
>
>                 Key: FLINK-5654
>                 URL: https://issues.apache.org/jira/browse/FLINK-5654
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API & SQL
>            Reporter: Fabian Hueske
>            Assignee: radu
>
> The goal of this issue is to add support for OVER RANGE aggregations on 
> processing time streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT 
>   a, 
>   SUM(b) OVER (PARTITION BY c ORDER BY procTime() RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW) AS sumB,
>   MIN(b) OVER (PARTITION BY c ORDER BY procTime() RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW) AS minB
> FROM myStream
> {code}
> The following restrictions should initially apply:
> - All OVER clauses in the same SELECT clause must be exactly the same.
> - The PARTITION BY clause is optional (no partitioning results in 
> single-threaded execution).
> - The ORDER BY clause may only have procTime() as parameter. procTime() is a 
> parameterless scalar function that just indicates processing time mode.
> - UNBOUNDED PRECEDING is not supported (see FLINK-5657)
> - FOLLOWING is not supported.
> The restrictions will be resolved in follow up issues. If we find that some 
> of the restrictions are trivial to address, we can add the functionality in 
> this issue as well.
> This issue includes:
> - Design of the DataStream operator to compute OVER ROW aggregates
> - Translation from Calcite's RelNode representation (LogicalProject with 
> RexOver expression).
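
Below is a conceptual sketch of the first item, i.e. how a per-key,
processing-time bounded OVER RANGE aggregate can be computed in a
ProcessFunction. It is not the implementation from this issue: it covers only
SUM, keeps the retained rows in a simple list state, and uses illustrative
class and field names.

{code}
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

import java.util.ArrayList;
import java.util.List;

// Input rows are (a, b, c); the function is applied on a stream keyed by c and
// emits (a, SUM(b)) over the last hour of processing time for that key.
public class ProcTimeRangeSumSketch
        extends ProcessFunction<Tuple3<Integer, Long, String>, Tuple2<Integer, Long>> {

    private static final long RANGE_MS = 60 * 60 * 1000L; // INTERVAL '1' HOUR

    private transient ListState<Tuple2<Long, Long>> rowsInRange; // (procTime, b)

    @Override
    public void open(Configuration parameters) {
        rowsInRange = getRuntimeContext().getListState(new ListStateDescriptor<>(
                "rows-in-range",
                TypeInformation.of(new TypeHint<Tuple2<Long, Long>>() {})));
    }

    @Override
    public void processElement(
            Tuple3<Integer, Long, String> row,
            Context ctx,
            Collector<Tuple2<Integer, Long>> out) throws Exception {

        long now = ctx.timerService().currentProcessingTime();
        long lowerBound = now - RANGE_MS;

        // Retain only the rows whose processing time is still within the RANGE
        // boundary and add the current row.
        List<Tuple2<Long, Long>> retained = new ArrayList<>();
        Iterable<Tuple2<Long, Long>> current = rowsInRange.get();
        if (current != null) {
            for (Tuple2<Long, Long> entry : current) {
                if (entry.f0 > lowerBound) {
                    retained.add(entry);
                }
            }
        }
        retained.add(Tuple2.of(now, row.f1));

        // Write the retained rows back and compute the aggregate for the
        // current row.
        rowsInRange.clear();
        long sumB = 0L;
        for (Tuple2<Long, Long> entry : retained) {
            rowsInRange.add(entry);
            sumB += entry.f1;
        }

        out.collect(Tuple2.of(row.f0, sumB));
    }
}
{code}

The actual operator would additionally have to clean up state for keys that
stop receiving input; that part is omitted in this sketch.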



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
