[ https://issues.apache.org/jira/browse/FLINK-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897384#comment-15897384 ]
ASF GitHub Bot commented on FLINK-5654:
---------------------------------------
Github user huawei-flink commented on the issue:
https://github.com/apache/flink/pull/3459
Regarding the build failure - I see that it fails only for one particular
case. I looked into the error and it is not related to my modifications, as
you can see below. In fact, I did not touch the Cassandra connector, which is
the one failing, nor did I, as far as I can tell, cause anything to conflict
with it.
From my point of view this could be pulled in.
[INFO] flink-libraries .................................... SUCCESS [  0.271 s]
[INFO] flink-table ........................................ SUCCESS [02:44 min]
[INFO] flink-jdbc ......................................... SUCCESS [  0.898 s]
[INFO] flink-hbase ........................................ SUCCESS [ 48.336 s]
[INFO] flink-hcatalog ..................................... SUCCESS [  8.864 s]
[INFO] flink-metrics-jmx .................................. SUCCESS [  0.487 s]
[INFO] flink-connector-kafka-base ......................... SUCCESS [  4.050 s]
[INFO] flink-connector-kafka-0.8 .......................... SUCCESS [  3.325 s]
[INFO] flink-connector-kafka-0.9 .......................... SUCCESS [  3.302 s]
[INFO] flink-connector-kafka-0.10 ......................... SUCCESS [  1.495 s]
[INFO] flink-connector-elasticsearch-base ................. SUCCESS [  5.535 s]
[INFO] flink-connector-elasticsearch ...................... SUCCESS [01:07 min]
[INFO] flink-connector-elasticsearch2 ..................... SUCCESS [ 14.613 s]
[INFO] flink-connector-rabbitmq ........................... SUCCESS [  0.493 s]
[INFO] flink-connector-twitter ............................ SUCCESS [  2.241 s]
[INFO] flink-connector-nifi ............................... SUCCESS [  0.816 s]
[INFO] flink-connector-cassandra .......................... FAILURE [02:15 min]
[INFO] flink-connector-filesystem ......................... SKIPPED
[INFO] flink-connector-kinesis ............................ SKIPPED
[INFO] flink-connector-elasticsearch5 ..................... SKIPPED
[INFO] flink-examples-streaming ........................... SKIPPED
[INFO] flink-gelly ........................................ SKIPPED
[INFO] flink-gelly-scala .................................. SKIPPED
[INFO] flink-gelly-examples ............................... SKIPPED
[INFO] flink-python ....................................... SKIPPED
[INFO] flink-ml ........................................... SKIPPED
[INFO] flink-cep .......................................... SKIPPED
[INFO] flink-cep-scala .................................... SKIPPED
[INFO] flink-scala-shell .................................. SKIPPED
[INFO] flink-quickstart ................................... SKIPPED
[INFO] flink-quickstart-java .............................. SKIPPED
[INFO] flink-quickstart-scala ............................. SKIPPED
[INFO] flink-storm ........................................ SKIPPED
[INFO] flink-storm-examples ............................... SKIPPED
[INFO] flink-streaming-contrib ............................ SKIPPED
[INFO] flink-tweet-inputformat ............................ SKIPPED
[INFO] flink-connector-wikiedits .......................... SKIPPED
[INFO] flink-mesos ........................................ SKIPPED
[INFO] flink-yarn ......................................... SKIPPED
[INFO] flink-metrics-dropwizard ........................... SKIPPED
[INFO] flink-metrics-ganglia .............................. SKIPPED
[INFO] flink-metrics-graphite ............................. SKIPPED
[INFO] flink-metrics-statsd ............................... SKIPPED
[INFO] flink-dist ......................................... SKIPPED
[INFO] flink-fs-tests ..................................... SKIPPED
[INFO] flink-yarn-tests ................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25:31 min
[INFO] Finished at: 2017-03-06T12:07:47+00:00
[INFO] Final Memory: 161M/493M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (integration-tests) on project flink-connector-cassandra_2.10: There are test failures.
[ERROR]
[ERROR] Please refer to /home/travis/build/apache/flink/flink-connectors/flink-connector-cassandra/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :flink-connector-cassandra_2.10
Trying to KILL watchdog (1345).
./tools/travis_mvn_watchdog.sh: line 210:  1345 Terminated              watchdog
MVN exited with EXIT CODE: 1.
java.io.FileNotFoundException: build-target/lib/flink-dist-*.jar (No such file or directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:220)
at java.util.zip.ZipFile.<init>(ZipFile.java:150)
at java.util.zip.ZipFile.<init>(ZipFile.java:121)
at sun.tools.jar.Main.list(Main.java:1060)
at sun.tools.jar.Main.run(Main.java:291)
at sun.tools.jar.Main.main(Main.java:1233)
find: `./flink-yarn-tests/target/flink-yarn-tests*': No such file or directory
> Add processing time OVER RANGE BETWEEN x PRECEDING aggregation to SQL
> ---------------------------------------------------------------------
>
> Key: FLINK-5654
> URL: https://issues.apache.org/jira/browse/FLINK-5654
> Project: Flink
> Issue Type: Sub-task
> Components: Table API & SQL
> Reporter: Fabian Hueske
> Assignee: radu
>
> The goal of this issue is to add support for OVER RANGE aggregations on
> processing time streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT
>   a,
>   SUM(b) OVER (PARTITION BY c ORDER BY procTime() RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW) AS sumB,
>   MIN(b) OVER (PARTITION BY c ORDER BY procTime() RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW) AS minB
> FROM myStream
> {code}
> The following restrictions should initially apply:
> - All OVER clauses in the same SELECT clause must be exactly the same.
> - The PARTITION BY clause is optional (no partitioning results in single
> threaded execution).
> - The ORDER BY clause may only have procTime() as parameter. procTime() is a
> parameterless scalar function that just indicates processing time mode.
> - UNBOUNDED PRECEDING is not supported (see FLINK-5657)
> - FOLLOWING is not supported.
> The restrictions will be resolved in follow up issues. If we find that some
> of the restrictions are trivial to address, we can add the functionality in
> this issue as well.
> This issue includes:
> - Design of the DataStream operator to compute OVER RANGE aggregates
> - Translation from Calcite's RelNode representation (LogicalProject with
> RexOver expression).
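To make the quoted "DataStream operator" item a bit more concrete: below is a minimal, illustrative sketch of how a processing-time RANGE-bounded aggregate could be computed with a keyed ProcessFunction. This is not the operator proposed in the PR; the class name (BoundedRangeSum), the (key, value) tuple layout, and the one-hour bound are assumptions for illustration only.
{code}
// Illustrative sketch only (not the FLINK-5654 implementation): a keyed
// ProcessFunction that keeps the values seen in the last `precedingMillis`
// of processing time and emits their SUM for every input record, mimicking
//   SUM(b) OVER (PARTITION BY key ORDER BY procTime()
//                RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW)
import org.apache.flink.api.common.state.{MapState, MapStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.util.Collector

import scala.collection.JavaConverters._

class BoundedRangeSum(precedingMillis: Long)
    extends ProcessFunction[(String, Long), (String, Long)] {

  // Input values bucketed by the processing time at which they arrived.
  // Keyed state: the function must run on a stream keyed by the PARTITION BY key.
  private var buffer: MapState[java.lang.Long, java.util.ArrayList[java.lang.Long]] = _

  override def open(parameters: Configuration): Unit = {
    buffer = getRuntimeContext.getMapState(
      new MapStateDescriptor(
        "range-buffer",
        classOf[java.lang.Long],
        classOf[java.util.ArrayList[java.lang.Long]]))
  }

  override def processElement(
      value: (String, Long),
      ctx: ProcessFunction[(String, Long), (String, Long)]#Context,
      out: Collector[(String, Long)]): Unit = {

    val now = ctx.timerService().currentProcessingTime()

    // Add the new value to the bucket of the current processing time.
    val bucket = Option(buffer.get(now)).getOrElse(new java.util.ArrayList[java.lang.Long]())
    bucket.add(value._2)
    buffer.put(now, bucket)

    // Drop buckets that fell out of the RANGE bound and sum the remaining ones.
    val lowerBound = now - precedingMillis
    val expired = new java.util.ArrayList[java.lang.Long]()
    var sum = 0L
    for (entry <- buffer.entries().asScala) {
      if (entry.getKey < lowerBound) expired.add(entry.getKey)
      else entry.getValue.asScala.foreach(v => sum += v)
    }
    expired.asScala.foreach(buffer.remove(_))

    out.collect((value._1, sum))
  }
}
{code}
On a stream keyed by the partition key this could be applied roughly as stream.keyBy(_._1).process(new BoundedRangeSum(3600 * 1000L)). The actual operator additionally has to cover multiple aggregates per row, state cleanup timers, and the Calcite translation mentioned above, which this sketch ignores.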
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)