[jira] [Assigned] (FLINK-11968) Fix runtime SingleElementIterator.iterator and remove table.SingleElementIterator
[ https://issues.apache.org/jira/browse/FLINK-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Chermenin reassigned FLINK-11968:
-------------------------------------------
Assignee: Alexander Chermenin

> Fix runtime SingleElementIterator.iterator and remove table.SingleElementIterator
> ---------------------------------------------------------------------------------
>
> Key: FLINK-11968
> URL: https://issues.apache.org/jira/browse/FLINK-11968
> Project: Flink
> Issue Type: Bug
> Components: Runtime / Operators
> Reporter: Jingsong Lee
> Assignee: Alexander Chermenin
> Priority: Major
>
> {code:java}
> @Override
> public Iterator iterator() {
>     return this;
> }
> {code}
> In iterator() we need to set available to true, otherwise we can only iterate once.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
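The bug described in the issue can be sketched as a minimal, self-contained single-element iterator (a simplified stand-in, not Flink's actual runtime class): the fix is that `iterator()` re-arms `available`, so the single element can be traversed on every call rather than only once.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Minimal sketch of a reusable single-element iterator.
// Not Flink's actual code; it only illustrates the reported fix.
public class SingleElementIterator<E> implements Iterator<E>, Iterable<E> {

    private E current;
    private boolean available;

    /** Sets the element this iterator will return. */
    public void set(E element) {
        this.current = element;
        this.available = true;
    }

    @Override
    public boolean hasNext() {
        return available;
    }

    @Override
    public E next() {
        if (!available) {
            throw new NoSuchElementException();
        }
        available = false;
        return current;
    }

    @Override
    public Iterator<E> iterator() {
        // The fix: reset availability so each for-each loop sees the element again.
        available = true;
        return this;
    }
}
```

Without the reset in `iterator()`, a second for-each loop over the same instance would see an exhausted iterator and yield nothing.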
[GitHub] flink issue #3105: [FLINK-4641] [cep] Support branching CEP patterns
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/3105

@Aitozi I just didn't have enough time to finish my work on this task, and the Pattern API changed in the meantime. The code now needs to be rewritten to support all of those updates.

---
[GitHub] flink pull request #3026: [FLINK-2980] [table] Support for GROUPING SETS cla...
Github user chermenin closed the pull request at: https://github.com/apache/flink/pull/3026 ---
[GitHub] flink issue #3026: [FLINK-2980] [table] Support for GROUPING SETS clause in ...
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/3026

@aljoscha I think I'll close this PR. The code will still be here anyway if it's needed.

---
[GitHub] flink pull request #3105: [FLINK-4641] [cep] Support branching CEP patterns
Github user chermenin closed the pull request at: https://github.com/apache/flink/pull/3105 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (FLINK-4641) Support branching CEP patterns
[ https://issues.apache.org/jira/browse/FLINK-4641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16050551#comment-16050551 ]

Alexander Chermenin commented on FLINK-4641:
--------------------------------------------
Hi [~dian.fu]! Yes of course, you're welcome! Unfortunately I don't have enough time to do it.

> Support branching CEP patterns
> ------------------------------
>
> Key: FLINK-4641
> URL: https://issues.apache.org/jira/browse/FLINK-4641
> Project: Flink
> Issue Type: Improvement
> Components: CEP
> Reporter: Till Rohrmann
> Assignee: Alexander Chermenin
>
> We should add support for branching CEP patterns to the Pattern API.
> {code}
>      |--> B --|
>      |        |
> A -- |        | --> D
>      |        |
>      |--> C --|
> {code}
> This feature will require changes to the {{Pattern}} class and the {{NFACompiler}}.
[GitHub] flink pull request #2937: [FLINK-4303] [cep] Examples for CEP library.
Github user chermenin closed the pull request at: https://github.com/apache/flink/pull/2937

---
[GitHub] flink issue #2937: [FLINK-4303] [cep] Examples for CEP library.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2937

Thanks for your comment, but I'm afraid I don't have enough time to work on this task now. I'll just close this PR for now. I may come back to the task in the future if it's needed.

---
[jira] [Assigned] (FLINK-4565) Support for SQL IN operator
[ https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Chermenin reassigned FLINK-4565:
------------------------------------------
Assignee: (was: Alexander Chermenin)

> Support for SQL IN operator
> ---------------------------
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
> Issue Type: Improvement
> Components: Table API & SQL
> Reporter: Timo Walther
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But it should also be available in the Table API and tested.
[jira] [Assigned] (FLINK-4303) Add CEP examples
[ https://issues.apache.org/jira/browse/FLINK-4303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Chermenin reassigned FLINK-4303:
------------------------------------------
Assignee: Alexander Chermenin

> Add CEP examples
> ----------------
>
> Key: FLINK-4303
> URL: https://issues.apache.org/jira/browse/FLINK-4303
> Project: Flink
> Issue Type: Improvement
> Components: CEP
> Affects Versions: 1.1.0
> Reporter: Timo Walther
> Assignee: Alexander Chermenin
>
> Neither CEP Java nor CEP Scala contain a runnable example. The example on the website is also not runnable without adding some additional code.
[GitHub] flink pull request #3021: [FLINK-3617] Prevent NPE in CaseClassSerializer an...
Github user chermenin closed the pull request at: https://github.com/apache/flink/pull/3021

---
[GitHub] flink pull request #2396: [FLINK-4395][cep] Eager processing of late arrival...
Github user chermenin commented on a diff in the pull request: https://github.com/apache/flink/pull/2396#discussion_r97272232

--- Diff: flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/operator/ProcessingType.java ---
@@ -0,0 +1,29 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cep.operator;
+
+/**
+ * Defines what time should be considered when even is being processed.
+ */
+public enum ProcessingType {
+	// Consider time of event occurance when processing an event
+	EVENT_TIME,
+	processingType, // Consider local system time when processing an event
--- End diff --

I think this value is unnecessary.

---
[GitHub] flink pull request #2361: [FLINK-3318][cep] Add support for quantifiers to C...
Github user chermenin commented on a diff in the pull request: https://github.com/apache/flink/pull/2361#discussion_r97264252

--- Diff: flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/State.java ---
@@ -43,7 +43,14 @@
 	public State(final String name, final StateType stateType) {
 		this.name = name;
 		this.stateType = stateType;
-		stateTransitions = new ArrayList>();
+		stateTransitions = new ArrayList<>();
+	}
+
+	public State(String name, StateType stateType, Collection> stateTransitions) {
--- End diff --

It seems that this constructor is never used. What is it for?

---
[GitHub] flink issue #2396: [FLINK-4395][cep] Eager processing of late arrivals in CE...
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2396

@mushketyk Could you rebase this PR, please?

---
[GitHub] flink pull request #3105: [FLINK-4641] Support branching CEP patterns
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/3105

[FLINK-4641] Support branching CEP patterns

Support for branched CEP patterns was added in this PR. After merging it, we will be able to use code like the following to define more complex patterns:

```
Pattern pattern = EventPattern.event("start")
    .next(Pattern.or(
        EventPattern.event("middle_1").subtype(F.class),
        EventPattern.event("middle_2").where(new MyFilterFunction())
    ))
    .followedBy(EventPattern.event("end"));
```

This PR will close https://issues.apache.org/jira/browse/FLINK-4641.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-4641

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/3105.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3105

commit 026ada648d1277fd57f2fb2361a36bf0c8f5e57b (Aleksandr Chermenin, 2017-01-12T09:54:44Z): [FLINK-4641] Base Java implementation.
commit f82fc8386493e84e824110a26d5e059333efaec0 (Aleksandr Chermenin, 2017-01-12T10:07:53Z): [FLINK-4641] Fixed branching pattern.
commit ad074e2e2c1faf8571b8b8e7ce3144c0fbc5e31d (Aleksandr Chermenin, 2017-01-12T10:21:15Z): [FLINK-4641] Fixed Scala API.
commit 38e14a89b001bd443133746216d422ac46176c3f (Aleksandr Chermenin, 2017-01-12T10:56:22Z): [FLINK-4641] Fixed tests for Scala API.
commit 9ba130df964ece5b8756e8b46b6ec22dcde69877 (Aleksandr Chermenin, 2017-01-12T12:15:01Z): [FLINK-4641] Fixed CEP Java 8 lambda test.
commit 8d490aae497e85003a402ca6c1fd687e30c3b55f (Aleksandr Chermenin, 2017-01-12T12:24:52Z): [FLINK-4641] Improved code documentation.

---
[jira] [Assigned] (FLINK-4565) Support for SQL IN operator
[ https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Chermenin reassigned FLINK-4565:
------------------------------------------
Assignee: Alexander Chermenin (was: Nikolay Vasilishin)

> Support for SQL IN operator
> ---------------------------
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
> Issue Type: Improvement
> Components: Table API & SQL
> Reporter: Timo Walther
> Assignee: Alexander Chermenin
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But it should also be available in the Table API and tested.
[GitHub] flink issue #2361: [FLINK-3318][cep] Add support for quantifiers to CEP's pa...
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2361

@mushketyk It seems this PR needs to be rebased.

---
[GitHub] flink issue #3021: [FLINK-3617] Prevent NPE in CaseClassSerializer and infor...
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/3021

@StephanEwen What do you think about merging this PR?

---
[jira] [Assigned] (FLINK-3617) NPE from CaseClassSerializer when dealing with null Option field
[ https://issues.apache.org/jira/browse/FLINK-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Chermenin reassigned FLINK-3617:
------------------------------------------
Assignee: Alexander Chermenin

> NPE from CaseClassSerializer when dealing with null Option field
> ----------------------------------------------------------------
>
> Key: FLINK-3617
> URL: https://issues.apache.org/jira/browse/FLINK-3617
> Project: Flink
> Issue Type: Bug
> Components: Core
> Affects Versions: 1.0.0
> Reporter: Jamie Grier
> Assignee: Alexander Chermenin
>
> This error occurs when serializing a Scala case class with a field of Option[] type where the value is not Some or None, but null.
> If this is not supported we should have a good error message.
>
> java.lang.RuntimeException: ConsumerThread threw an exception: null
>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09.run(FlinkKafkaConsumer09.java:336)
>     at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:78)
>     at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:224)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException
>     at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:81)
>     at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:39)
>     at org.apache.flink.streaming.api.operators.StreamSource$NonTimestampContext.collect(StreamSource.java:158)
>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09$ConsumerThread.run(FlinkKafkaConsumer09.java:473)
> Caused by: java.lang.NullPointerException
>     at org.apache.flink.api.scala.typeutils.CaseClassSerializer.serialize(CaseClassSerializer.scala:100)
>     at org.apache.flink.api.scala.typeutils.CaseClassSerializer.serialize(CaseClassSerializer.scala:30)
>     at org.apache.flink.api.scala.typeutils.CaseClassSerializer.serialize(CaseClassSerializer.scala:100)
>     at org.apache.flink.api.scala.typeutils.CaseClassSerializer.serialize(CaseClassSerializer.scala:30)
>     at org.apache.flink.streaming.runtime.streamrecord.StreamRecordSerializer.serialize(StreamRecordSerializer.java:107)
>     at org.apache.flink.streaming.runtime.streamrecord.StreamRecordSerializer.serialize(StreamRecordSerializer.java:42)
>     at org.apache.flink.runtime.plugable.SerializationDelegate.write(SerializationDelegate.java:56)
>     at org.apache.flink.runtime.io.network.api.serialization.SpanningRecordSerializer.addRecord(SpanningRecordSerializer.java:79)
>     at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:84)
>     at org.apache.flink.streaming.runtime.io.StreamRecordWriter.emit(StreamRecordWriter.java:86)
>     at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:78)
>     ... 3 more
[GitHub] flink issue #2976: [FLINK-5303] [table] Support for SQL GROUPING SETS clause...
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2976

Big thanks for the review, @twalthr! I fixed the code for most of the comments. I will fix the tests this weekend.

---
[GitHub] flink issue #2976: [FLINK-5303] [table] Support for SQL GROUPING SETS clause...
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2976

I cleaned up my code and will write tests for the plan this weekend.

---
[jira] [Assigned] (FLINK-4641) Support branching CEP patterns
[ https://issues.apache.org/jira/browse/FLINK-4641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Chermenin reassigned FLINK-4641:
------------------------------------------
Assignee: Alexander Chermenin

> Support branching CEP patterns
> ------------------------------
>
> Key: FLINK-4641
> URL: https://issues.apache.org/jira/browse/FLINK-4641
> Project: Flink
> Issue Type: Improvement
> Components: CEP
> Reporter: Till Rohrmann
> Assignee: Alexander Chermenin
>
> We should add support for branching CEP patterns to the Pattern API.
> {code}
>      |--> B --|
>      |        |
> A -- |        | --> D
>      |        |
>      |--> C --|
> {code}
> This feature will require changes to the {{Pattern}} class and the {{NFACompiler}}.
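The branching semantics in the diagram above can be illustrated with a toy, self-contained matcher over a sequence of event names (this is purely illustrative and unrelated to Flink's actual Pattern/NFACompiler implementation): after A, either branch B or branch C may match before the final D.

```java
import java.util.List;

// Toy model of the branching pattern A -> (B | C) -> D.
// Illustrative only; not Flink's NFA-based CEP matching.
public class BranchingSketch {

    /** Returns true iff the event sequence is exactly A, then B or C, then D. */
    public static boolean matches(List<String> events) {
        return events.size() == 3
                && events.get(0).equals("A")
                && (events.get(1).equals("B") || events.get(1).equals("C"))
                && events.get(2).equals("D");
    }
}
```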
[jira] [Commented] (FLINK-3615) Add support for non-native SQL types
[ https://issues.apache.org/jira/browse/FLINK-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15782410#comment-15782410 ]

Alexander Chermenin commented on FLINK-3615:
--------------------------------------------
Hi all. Is this issue still relevant, or has it been solved by FLINK-3916?

> Add support for non-native SQL types
> ------------------------------------
>
> Key: FLINK-3615
> URL: https://issues.apache.org/jira/browse/FLINK-3615
> Project: Flink
> Issue Type: Improvement
> Components: Table API & SQL
> Reporter: Vasia Kalavri
>
> The TypeConverter of the Table API currently only supports basic types. We should maybe re-design the way {{sqlTypeToTypeInfo}} works. It is used in the {{CodeGenerator}} for visiting literals, in {{DataSetAggregate}} to create the {{RowTypeInfo}} and in {{determineReturnType}}. We could maybe provide a custom implementation per operator to determine the return type, based on the input fields.
[GitHub] flink issue #3021: [FLINK-3617] Prevent NPE in CaseClassSerializer and infor...
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/3021

I've got the results, here they are:

| Benchmark | Mode | Cnt | Score | Error | Units |
|---|---|---|---|---|---|
| Just operation | avgt | 200 | 74.672 | 0.944 | ns/op |
| if-else (_not_ null value) | avgt | 200 | 915.718 | 5.534 | ns/op |
| try-catch (_not_ null value) | avgt | 200 | 74.183 | 0.405 | ns/op |
| if-else (null value) | avgt | 200 | 901.775 | 5.368 | ns/op |
| try-catch (null value) | avgt | 200 | 8649.317 | 40.462 | ns/op |

So the `if-else` check always adds more than 800 nanoseconds per operation to the run time, while the `try-catch` block doesn't slow down normal performance at all. We really need this behavior, because a null value isn't a normal value for us right now. I will change the code in the near future.

---
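The two strategies benchmarked in the comment above can be sketched in a few lines of plain Java (method and exception names here are mine, not Flink's): the if-else variant pays a branch on every record, while the try-catch variant leaves the hot path untouched and only pays when a null actually appears.

```java
// Hypothetical sketch of the two null-handling strategies compared in the
// benchmark. Names and types are made up for illustration.
public class NullCheckStrategies {

    // Strategy 1: explicit per-record null check (costs a branch every call).
    public static String serializeWithIfElse(Object value) {
        if (value == null) {
            throw new IllegalArgumentException("null field is not supported");
        }
        return value.toString();
    }

    // Strategy 2: no check on the hot path; translate the NPE into a
    // descriptive error only in the rare null case.
    public static String serializeWithTryCatch(Object value) {
        try {
            return value.toString();
        } catch (NullPointerException e) {
            throw new IllegalArgumentException("null field is not supported", e);
        }
    }
}
```

Both variants produce the same observable behavior; they differ only in where the cost falls, which is exactly what the benchmark numbers above measure.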
[GitHub] flink issue #3021: [FLINK-3617] Prevent NPE in CaseClassSerializer and infor...
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/3021

Yep, I'll do a benchmark this week to test both approaches and compare them with each other. I will write the results here. Big thanks for your idea!

---
[GitHub] flink pull request #3026: [FLINK-2980] [table] Support for GROUPING SETS cla...
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/3026

[FLINK-2980] [table] Support for GROUPING SETS clause in Table API.

Support for the GROUPING SETS / ROLLUP / CUBE operators in the Table API was added in this PR. Also added some tests to check execution of SQL queries with them, and improved the documentation.

This PR will close the following issue: https://issues.apache.org/jira/browse/FLINK-2980. It must be reviewed and merged only after PR #2976.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-2980

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/3026.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3026

commit d88bd67c82249a2533d58b1ff231adb1a441e6a3 (Alexander Chermenin, 2016-12-10T20:09:25Z): [FLINK-2980] Base implementation of grouping sets.
commit 70c12a240b85333ab3a801e2f4f6ee6d1e961427 (Aleksandr Chermenin, 2016-12-12T08:58:17Z): [FLINK-2980] Implemented grouped expressions.
commit de15fdddb35fac5002ae76ec9cec6aa14c9ffef0 (Aleksandr Chermenin, 2016-12-12T10:18:42Z): [FLINK-2980] Added grouping functions.
commit 5468db9fd39c26e752554445af5cd2a0e1da5aae (Aleksandr Chermenin, 2016-12-12T11:39:09Z): [FLINK-2980] Improved expressions parser.
commit e6aa37255695e0d23d7df43934a88ff53737d303 (Aleksandr Chermenin, 2016-12-12T12:05:25Z): [FLINK-2980] Added support for grouping functions.
commit a3d30f39b91ee85d76da4e034b20e6f63ef50fd7 (Aleksandr Chermenin, 2016-12-12T12:54:19Z): [FLINK-2980] Small fixes.
commit 8e8d7e16e12c3f50c4a93a1961e975860fb0e4a5 (Aleksandr Chermenin, 2016-12-12T13:32:57Z): [FLINK-2980] Windowed table with grouping sets.
commit b13382b3328e1b244dae624c45c5c01580d80098 (Aleksandr Chermenin, 2016-12-15T10:13:45Z): [FLINK-2980] Added tests.
commit 6f5371882bf354b73587d4a7f9e526c911794eba (Aleksandr Chermenin, 2016-12-15T10:27:44Z): [FLINK-2980] Small fixes.
commit 36ce2bf704a8798f398719b634c30003765c7bbf (Aleksandr Chermenin, 2016-12-16T08:12:07Z): [FLINK-2980] Improved documentation.
commit 8d2824f37ebb7215fb901fc10a868717f424eb56 (Aleksandr Chermenin, 2016-12-16T09:42:30Z): [FLINK-2980] Small docs improvements.

---
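For readers unfamiliar with GROUPING SETS: it computes the union of several independent GROUP BY aggregations over the same input (ROLLUP and CUBE just enumerate particular families of such sets). A plain-Java illustration of that semantics, with made-up row and field names and no Table API involved:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustration of GROUPING SETS semantics, not the Table API itself:
// each grouping set is an ordinary GROUP BY with SUM(amount), and the
// full result is the union of those per-set aggregations.
public class GroupingSetsSketch {

    public static final class Row {
        final String country;
        final String city;
        final int amount;

        public Row(String country, String city, int amount) {
            this.country = country;
            this.city = city;
            this.amount = amount;
        }
    }

    /** One grouping set: GROUP BY key, SUM(amount). */
    public static Map<String, Integer> sumBy(List<Row> rows, Function<Row, String> key) {
        return rows.stream()
                .collect(Collectors.groupingBy(key, Collectors.summingInt(r -> r.amount)));
    }
}
```

GROUPING SETS ((country), (city)) would then be the union of `sumBy(rows, r -> r.country)` and `sumBy(rows, r -> r.city)`.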
[GitHub] flink pull request #3021: [FLINK-3617] Prevent NPE in CaseClassSerializer an...
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/3021

[FLINK-3617] Prevent NPE in CaseClassSerializer and informative error message.

To avoid changing the current serialization implementation, for now just a NullFieldException has been added.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-3617

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/3021.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3021

commit 9f4dee80bfb5b5df41f211f3da8231d59310fa29 (Aleksandr Chermenin, 2016-12-16T11:42:50Z): [FLINK-3617] Added null value check.

---
[GitHub] flink pull request #2723: [FLINK-3617] Fixed NPE from CaseClassSerializer
Github user chermenin closed the pull request at: https://github.com/apache/flink/pull/2723

---
[GitHub] flink issue #2723: [FLINK-3617] Fixed NPE from CaseClassSerializer
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2723

Well, sure, I understand you. Let's discuss all aspects of improving the serialization model (including `null` handling, of course) on the mailing list and implement everything after that. I will close this PR now :)

---
[GitHub] flink issue #2723: [FLINK-3617] Fixed NPE from CaseClassSerializer
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2723

Okay, we can add null encoding support to the serializers of the containing types; it will be done this weekend. But even in that case we must keep null encoding support when serializing the concrete type.

---
[GitHub] flink issue #2723: [FLINK-3617] Fixed NPE from CaseClassSerializer
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2723

No, I'm afraid I have to disagree with you... IMHO, `null` is a value of the concrete type, and it must be processed by the individual serializer for that type.

---
[GitHub] flink issue #2723: [FLINK-3617] Fixed NPE from CaseClassSerializer
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2723

I think that if some type can have `null` as a value, then there must be a way to serialize that value (whether it's an Option or any other type). And by the way, the `CaseClassSerializer` class has been fixed :)

---
[jira] [Commented] (FLINK-5319) ClassCastException when reusing an inherited method reference as KeySelector for different classes
[ https://issues.apache.org/jira/browse/FLINK-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15748508#comment-15748508 ] Alexander Chermenin commented on FLINK-5319:

Well, we can change the signature of the {{map}} methods (and others) from
{code}public <R> MapOperator<T, R> map(MapFunction<T, R> mapper){code}
to
{code}public <R> MapOperator<T, R> map(MapFunction<? super T, R> mapper){code}
to make code like the following possible:
{code}
DataSet<Integer> intDataSet = env.fromElements(1, 2, 3);
DataSet<Long> longDataSet = env.fromElements(1L, 2L, 3L);

MapFunction<Number, Double> function = Number::doubleValue;

List<Double> intToDoubles = intDataSet.map(function).collect();
List<Double> longToDoubles = longDataSet.map(function).collect();
{code}
What do you think about it?

> ClassCastException when reusing an inherited method reference as KeySelector
> for different classes
> --
>
> Key: FLINK-5319
> URL: https://issues.apache.org/jira/browse/FLINK-5319
> Project: Flink
> Issue Type: Bug
> Components: Core
> Affects Versions: 1.2.0
> Reporter: Alexander Chermenin
> Assignee: Timo Walther
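The contravariance proposal above can be sketched outside Flink with plain Java generics. This is a hedged illustration, not Flink code: `Box` and its `map` method are hypothetical stand-ins for `DataSet`/`MapOperator`, showing why `? super T` lets one `Number`-typed function be reused across element types.

```java
import java.util.function.Function;

// Minimal sketch of the proposed contravariant signature: accepting
// Function<? super T, R> lets a single Number-based mapper be applied
// to containers of both Integer and Long. With Function<T, R> instead,
// the two map(...) calls below would not compile.
class Box<T> {
    private final T value;

    Box(T value) { this.value = value; }

    <R> R map(Function<? super T, R> mapper) { return mapper.apply(value); }

    public static void main(String[] args) {
        Function<Number, Double> toDouble = Number::doubleValue;
        Box<Integer> ints = new Box<>(1);
        Box<Long> longs = new Box<>(2L);
        // Both calls compile because Number is a supertype of Integer and Long.
        System.out.println(ints.map(toDouble));   // 1.0
        System.out.println(longs.map(toDouble));  // 2.0
    }
}
```

Note that the function here is never serialized, so the `ClassCastException` from the issue cannot occur; the sketch only demonstrates the typing side of the proposal.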
[jira] [Commented] (FLINK-5319) ClassCastException when reusing an inherited method reference as KeySelector for different classes
[ https://issues.apache.org/jira/browse/FLINK-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15748212#comment-15748212 ] Alexander Chermenin commented on FLINK-5319:

Here is a link to the bug: https://bugs.openjdk.java.net/browse/JDK-8154236
[GitHub] flink pull request #2849: [FLINK-4631] Fixed stream task that was interrupte...
Github user chermenin closed the pull request at: https://github.com/apache/flink/pull/2849 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (FLINK-5319) ClassCastException when reusing an inherited method reference as KeySelector for different classes
[ https://issues.apache.org/jira/browse/FLINK-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15747820#comment-15747820 ] Alexander Chermenin commented on FLINK-5319:

It seems to be a Java bug. I used the following piece of code and got the same result:
{code}
package org.sample.flink.examples.mappers;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Task<IN, OUT> {

    interface Mapper<IN, OUT> extends Serializable {
        OUT map(IN value);
    }

    private byte[] bytes;
    private IN input;

    private Task(Mapper<IN, OUT> mapper, IN input) {
        this.bytes = serializeObject(mapper);
        this.input = input;
    }

    public static void main(String[] args) {
        Task<Long, Double> longTask = new Task<>(Number::doubleValue, 1L);
        Task<Integer, Double> intTask = new Task<>(Number::doubleValue, 1);
        System.out.println(longTask.exec());
        System.out.println(intTask.exec());
    }

    private static Object deserializeObject(byte[] bytes) {
        try (ObjectInputStream oois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return oois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
            return null;
        }
    }

    private static byte[] serializeObject(Object o) {
        try (ByteArrayOutputStream baos = new ByteArrayOutputStream();
             ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(o);
            oos.flush();
            return baos.toByteArray();
        } catch (IOException e) {
            e.printStackTrace();
            return new byte[0];
        }
    }

    @SuppressWarnings("unchecked")
    private OUT exec() {
        Mapper<IN, OUT> mapper = (Mapper<IN, OUT>) deserializeObject(bytes);
        return mapper.map(input);
    }
}
{code}
Exception:
{code}
Exception in thread "main" java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
    at org.sample.flink.examples.mappers.Task.exec(Task.java:55)
    at org.sample.flink.examples.mappers.Task.main(Task.java:28)
{code}
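The issue description notes that anonymous classes and lambda expressions do not trigger the bug, only shared method references do. The sketch below is a hedged illustration of that workaround, not Flink code: `Mapper` and `roundTrip` are hypothetical helpers that mimic the serialize/deserialize cycle a framework applies to user functions.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Workaround {
    interface Mapper<IN, OUT> extends Serializable { OUT map(IN value); }

    // Serialize and deserialize an object, mimicking what a framework
    // does when shipping a user function to a worker.
    @SuppressWarnings("unchecked")
    static <T> T roundTrip(T obj) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(baos.toByteArray()))) {
            return (T) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Two distinct lambdas instead of one shared Number::doubleValue:
        // each call site gets its own synthetic class, so the deserialized
        // instances keep their expected input types.
        Mapper<Long, Double> longMapper = v -> v.doubleValue();
        Mapper<Integer, Double> intMapper = v -> v.doubleValue();
        System.out.println(roundTrip(longMapper).map(1L)); // 1.0
        System.out.println(roundTrip(intMapper).map(1));   // 1.0
    }
}
```

Both calls succeed after the round trip, whereas the `Task` repro above fails with a `ClassCastException` when the same method reference is reused for both element types.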
[jira] [Commented] (FLINK-5319) ClassCastException when reusing an inherited method reference as KeySelector for different classes
[ https://issues.apache.org/jira/browse/FLINK-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742901#comment-15742901 ] Alexander Chermenin commented on FLINK-5319:

A simpler test with the same result:
{code}
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

DataSet<Long> longDataSet = env.fromCollection(Arrays.asList(1L, 2L, 3L, 4L, 5L));
DataSet<Integer> intDataSet = env.fromCollection(Arrays.asList(1, 2, 3, 4, 5));

longDataSet.map(Number::doubleValue).print();
intDataSet.map(Number::doubleValue).print();
{code}
[jira] [Commented] (FLINK-5319) ClassCastException when reusing an inherited method reference as KeySelector for different classes
[ https://issues.apache.org/jira/browse/FLINK-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742827#comment-15742827 ] Alexander Chermenin commented on FLINK-5319:

There are no problems if the code is used with only B or only C.
[jira] [Updated] (FLINK-5319) ClassCastException when reusing an inherited method as KeySelector for different classes
[ https://issues.apache.org/jira/browse/FLINK-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Chermenin updated FLINK-5319:
---
Description:
Code sample:
{code}
static abstract class A {
    int id;
    A(int id) { this.id = id; }
    int getId() { return id; }
}

static class B extends A { B(int id) { super(id % 3); } }
static class C extends A { C(int id) { super(id % 2); } }

private static B b(int id) { return new B(id); }
private static C c(int id) { return new C(id); }

/** Main method. */
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment environment =
            StreamExecutionEnvironment.getExecutionEnvironment();

    B[] bs = IntStream.range(0, 10).mapToObj(Test::b).toArray(B[]::new);
    C[] cs = IntStream.range(0, 10).mapToObj(Test::c).toArray(C[]::new);

    DataStreamSource<B> bStream = environment.fromElements(bs);
    DataStreamSource<C> cStream = environment.fromElements(cs);

    bStream.keyBy((KeySelector<B, Integer>) A::getId).print();
    cStream.keyBy((KeySelector<C, Integer>) A::getId).print();

    environment.execute();
}
{code}
This code throws the following exception:
{code}
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply$mcV$sp(JobManager.scala:901)
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:844)
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:844)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: Could not extract key from org.sample.flink.examples.Test$C@5e1a8111
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:75)
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:39)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:746)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:724)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:84)
    at org.apache.flink.streaming.api.functions.source.FromElementsFunction.run(FromElementsFunction.java:127)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:75)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:56)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:269)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Could not extract key from org.sample.flink.examples.Test$C@5e1a8111
    at org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.selectChannels(KeyGroupStreamPartitioner.java:61)
    at org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.selectChannels(KeyGroupStreamPartitioner.java:32)
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:83)
    at org.apache.flink.streaming.runtime.io.StreamRecordWriter.emit(StreamRecordWriter.java:86)
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:72)
    ... 11 more
Caused by: java.lang.ClassCastException: org.sample.flink.examples.Test$C cannot be cast to org.sample.flink.examples.Test$B
    at org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.selectChannels(KeyGroupStreamPartitioner.java:59)
    ... 15 more
{code}
This problem occurs when we use a method reference as KeySelector, and there are no problems when we use an anonymous class or a lambda expression.
[jira] [Updated] (FLINK-5319) ClassCastException when reusing an inherited method reference as KeySelector for different classes
[ https://issues.apache.org/jira/browse/FLINK-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Chermenin updated FLINK-5319:
---
Summary: ClassCastException when reusing an inherited method reference as KeySelector for different classes (was: ClassCastException when reusing an inherited method as KeySelector for different classes)
[jira] [Created] (FLINK-5319) ClassCastException when reusing an inherited method as KeySelector for different classes
Alexander Chermenin created FLINK-5319:
--
Summary: ClassCastException when reusing an inherited method as KeySelector for different classes
Key: FLINK-5319
URL: https://issues.apache.org/jira/browse/FLINK-5319
Project: Flink
Issue Type: Bug
Components: Core
Affects Versions: 1.2.0
Reporter: Alexander Chermenin
[GitHub] flink issue #2965: [FLINK-5303] [table] Support for SQL GROUPING SETS clause...
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2965 PR closed to rename the branch. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] flink pull request #2965: [FLINK-5303] [table] Support for SQL GROUPING SETS...
Github user chermenin closed the pull request at: https://github.com/apache/flink/pull/2965 ---
[GitHub] flink pull request #2976: [FLINK-5303] [table] Support for SQL GROUPING SETS...
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2976

[FLINK-5303] [table] Support for SQL GROUPING SETS clause.

This PR adds support for the GROUPING SETS / ROLLUP / CUBE operators, together with tests that check the execution of SQL queries using them. It will close the following issue: https://issues.apache.org/jira/browse/FLINK-5303.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-5303

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2976.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2976

commit 51832104b5bb9aac06b6b86c98944a2d512e358c
Author: Aleksandr Chermenin
Date: 2016-12-07T07:57:04Z
[FLINK-5303] Added GROUPING SETS implementation.

commit 9594a197148b77ffd4873d6fb77efafe01915c6e
Author: Aleksandr Chermenin
Date: 2016-12-07T14:23:35Z
[FLINK-5303] Fixed grouping sets implementation.

commit a1aa9b2315974e63fee4f948b0e99580c49413ab
Author: Aleksandr Chermenin
Date: 2016-12-07T14:35:46Z
[FLINK-5303] Small fixes.

commit c1170e2ce6111a77d31d29fbbad6b2c660d9e980
Author: Aleksandr Chermenin
Date: 2016-12-08T07:46:09Z
[FLINK-5303] Some improvements.

commit 400c78d4b78fd092da0756177fa7e6dcfe7544b8
Author: Aleksandr Chermenin
Date: 2016-12-08T09:32:53Z
[FLINK-5303] Added tests.

commit 8f30cbadca6a610ae9b4894065c4af38ec7ab12d
Author: Aleksandr Chermenin
Date: 2016-12-08T09:34:35Z
[FLINK-5303] Test small fix.

commit eaa745bb907695bc70a0b61bc4e322ad617cd1b1
Author: Aleksandr Chermenin
Date: 2016-12-08T11:34:19Z
[FLINK-5303] Grouping sets tests and fixes.

commit 543b2be72ec30f6fce2a25371cc0b8b95a49f832
Author: Aleksandr Chermenin
Date: 2016-12-08T11:44:41Z
[FLINK-5303] Some cleanup.
commit 3976cea7ce3ad98381e6467d6cf1e02f1d19b103
Author: Aleksandr Chermenin
Date: 2016-12-08T13:14:14Z
[FLINK-5303] Have supplemented documentation.

commit 92955c58fc464be34f3e3af0a83d38a6261edca3
Author: Aleksandr Chermenin
Date: 2016-12-08T14:56:00Z
[FLINK-5303] Improved documentation.

---
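As background on what these clauses expand to (a standalone sketch, not code from the PR): `ROLLUP(c1, ..., cn)` is shorthand for the n+1 prefix grouping sets, and `CUBE(c1, ..., cn)` for all 2^n subsets of the column list. The expansion can be written out directly:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class GroupingSetsDemo {

    // ROLLUP(c1..cn) expands to the n+1 prefix grouping sets:
    // (c1..cn), (c1..cn-1), ..., (c1), ()
    static List<List<String>> rollup(List<String> cols) {
        List<List<String>> sets = new ArrayList<>();
        for (int i = cols.size(); i >= 0; i--) {
            sets.add(new ArrayList<>(cols.subList(0, i)));
        }
        return sets;
    }

    // CUBE(c1..cn) expands to all 2^n subsets of the column list
    static List<List<String>> cube(List<String> cols) {
        List<List<String>> sets = new ArrayList<>();
        for (int mask = (1 << cols.size()) - 1; mask >= 0; mask--) {
            List<String> subset = new ArrayList<>();
            for (int i = 0; i < cols.size(); i++) {
                if ((mask & (1 << i)) != 0) {
                    subset.add(cols.get(i));
                }
            }
            sets.add(subset);
        }
        return sets;
    }

    public static void main(String[] args) {
        System.out.println(rollup(Arrays.asList("a", "b"))); // [[a, b], [a], []]
        System.out.println(cube(Arrays.asList("a", "b")));   // [[a, b], [b], [a], []]
    }
}
```

The empty set `()` is the grand-total group, which is why a rollup/cube query always contains one fully aggregated row.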
[jira] [Created] (FLINK-5303) Add CUBE/ROLLUP/GROUPING SETS operator in SQL
Alexander Chermenin created FLINK-5303:
--
Summary: Add CUBE/ROLLUP/GROUPING SETS operator in SQL
Key: FLINK-5303
URL: https://issues.apache.org/jira/browse/FLINK-5303
Project: Flink
Issue Type: New Feature
Components: Documentation, Table API & SQL
Reporter: Alexander Chermenin
Assignee: Alexander Chermenin

Add support for the CUBE, ROLLUP, and GROUPING SETS operators in SQL.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[GitHub] flink pull request #2965: [FLINK-2980] [table] Support for SQL GROUPING SETS...
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2965

[FLINK-2980] [table] Support for SQL GROUPING SETS clause.

This PR adds support for the GROUPING SETS / ROLLUP / CUBE operators, together with tests that check the execution of SQL queries using them. It will close the following issue: https://issues.apache.org/jira/browse/FLINK-2980.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-2980

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2965.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2965

commit 9739ae74dd44bf8ded43eaa330e8f4488bcdf2a5
Author: Aleksandr Chermenin
Date: 2016-12-07T07:57:04Z
[FLINK-2980] Added GROUPING SETS implementation.

commit 355633914251af6202a54ff4a583daf3b10228f7
Author: Aleksandr Chermenin
Date: 2016-12-07T14:23:35Z
[FLINK-2980] Fixed grouping sets implementation.

commit 4d1a1f010b8b1ad2cc030e1243ccd7faebea267e
Author: Aleksandr Chermenin
Date: 2016-12-07T14:35:46Z
[FLINK-2980] Small fixes.

commit 31120f4c8d787ebfc6613fb57177a1cd35c9f105
Author: Aleksandr Chermenin
Date: 2016-12-08T07:46:09Z
[FLINK-2980] Some improvements.

commit 2d89df3aa9cdca19d4f99023fc8f565b31ef84d5
Author: Aleksandr Chermenin
Date: 2016-12-08T09:32:53Z
[FLINK-2980] Added tests.

commit ba9ca9a3317a38b31cc1a0efd213519127da1273
Author: Aleksandr Chermenin
Date: 2016-12-08T09:34:35Z
[FLINK-2980] Test small fix.

commit 38ff4a5496d75cf10cd5f37a041e798dcbfa4ebd
Author: Aleksandr Chermenin
Date: 2016-12-08T11:34:19Z
[FLINK-2980] Grouping sets tests and fixes.

commit 179b5a0fb8915b46f577c94e161e8ef8d2c356e8
Author: Aleksandr Chermenin
Date: 2016-12-08T11:44:41Z
[FLINK-2980] Some cleanup.

---
[jira] [Assigned] (FLINK-2980) Add CUBE/ROLLUP/GROUPING SETS operator in Table API.
[ https://issues.apache.org/jira/browse/FLINK-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Chermenin reassigned FLINK-2980:
--
Assignee: Alexander Chermenin

> Add CUBE/ROLLUP/GROUPING SETS operator in Table API.
>
> Key: FLINK-2980
> URL: https://issues.apache.org/jira/browse/FLINK-2980
> Project: Flink
> Issue Type: New Feature
> Components: Documentation, Table API & SQL
> Reporter: Chengxiang Li
> Assignee: Alexander Chermenin
> Attachments: Cube-Rollup-GroupSet design doc in Flink.pdf
>
> Computing aggregates over a cube/rollup/grouping sets of several dimensions
> is a common operation in data warehousing. It would be nice to have them in
> Table API.
[GitHub] flink pull request #2937: [FLINK-4303] Examples for CEP library.
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2937

[FLINK-4303] Examples for CEP library.

Added example programs written in Java and Scala. They are based on @tillrohrmann's monitoring example. This PR will close the following issue: https://issues.apache.org/jira/browse/FLINK-4303.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-4303

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2937.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2937

commit a051a55d8b9022c1fa1c71194fc17f43224c9b21
Author: Aleksandr Chermenin
Date: 2016-12-05T09:44:39Z
[FLINK-4303] Added examples for CEP library.

---
[jira] [Commented] (FLINK-4303) Add CEP examples
[ https://issues.apache.org/jira/browse/FLINK-4303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15714936#comment-15714936 ]

Alexander Chermenin commented on FLINK-4303:

Hi all. What about using this example by [~till.rohrmann]: https://github.com/tillrohrmann/cep-monitoring ? It could be adapted a bit, and I can also rewrite it in Scala as an example.

> Add CEP examples
>
> Key: FLINK-4303
> URL: https://issues.apache.org/jira/browse/FLINK-4303
> Project: Flink
> Issue Type: Improvement
> Components: CEP
> Affects Versions: 1.1.0
> Reporter: Timo Walther
>
> Neither CEP Java nor CEP Scala contain a runnable example. The example on the
> website is also not runnable without adding some additional code.
[GitHub] flink issue #2856: Removed excessive tests.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2856 Done. ---
[GitHub] flink issue #2856: Removed excessive tests.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2856 I removed a method from the ITCase only. There is a `testJavaArraysAsList` method in the `KryoCollectionsSerializerTest` class that covers it. And yes, before #2623 was merged, those tests did not pass. ---
[GitHub] flink pull request #2856: Removed excessive tests.
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2856

Removed excessive tests.

This is connected with PR #2623. Redundant test methods were removed from `GroupReduceITCase`; there are appropriate targeted unit tests in the `KryoCollectionsSerializerTest` class.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink test-fix

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2856.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2856

commit ebed44f24313007fcba5a4d030f8e428afb75c72
Author: Aleksandr Chermenin
Date: 2016-11-23T12:15:55Z
Removed excessive tests.

---
[GitHub] flink issue #2623: [FLINK-2608] Updated Twitter Chill version.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2623 Yep, fixed. ---
[GitHub] flink issue #2623: [FLINK-2608] Updated Twitter Chill version.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2623 @StephanEwen What about merging this PR? ---
[GitHub] flink issue #2723: [FLINK-3617] Simple fix for OptionSerializer.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2723 @StephanEwen @fhueske Can we discuss this PR? ---
[GitHub] flink pull request #2849: [FLINK-4631] Fixed stream task that was interrupte...
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2849

[FLINK-4631] Fixed stream task that was interrupted before it was initialized.

I think the best solution for this issue is to check that initialization has finished before calling the `cleanup` method. Some tests may be needed to cover this piece of code.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-4631

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2849.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2849

commit ea478999b4cecab9603ba1f58fe445ffd0fd8010
Author: Aleksandr Chermenin
Date: 2016-11-22T14:58:51Z
[FLINK-4631] Restored OneInputStreamTask class.

commit c153f1b9e75e44def781099b1fd168c6960b01cc
Author: Aleksandr Chermenin
Date: 2016-11-22T15:03:40Z
[FLINK-4631] Fixed stream task.

---
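The shape of such a fix, guarding `cleanup()` so it only touches state that initialization actually created, can be sketched in isolation (hypothetical names, not the PR's actual code):

```java
public class StreamTaskSketch {

    // Set in init(); still null if the task is cancelled before initialization,
    // which is exactly the situation that produced the NPE in FLINK-4631.
    private Runnable inputProcessor;

    void init() {
        inputProcessor = () -> { };
    }

    String cleanup() {
        // The guard: a task cancelled before init() has nothing to clean up,
        // so inputProcessor must not be dereferenced unconditionally.
        if (inputProcessor == null) {
            return "nothing to clean up";
        }
        inputProcessor.run();
        inputProcessor = null;
        return "cleaned up";
    }

    public static void main(String[] args) {
        StreamTaskSketch cancelledEarly = new StreamTaskSketch();
        System.out.println(cancelledEarly.cleanup()); // nothing to clean up

        StreamTaskSketch ranNormally = new StreamTaskSketch();
        ranNormally.init();
        System.out.println(ranNormally.cleanup()); // cleaned up
    }
}
```

The same pattern applies to any resource acquired in an init path and released in a cancellation path that may race ahead of it.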
[GitHub] flink pull request #2807: [FLINK-4631] Prevent some possible NPEs.
Github user chermenin closed the pull request at: https://github.com/apache/flink/pull/2807 ---
[GitHub] flink pull request #2807: [FLINK-4631] Prevent some possible NPEs.
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2807

[FLINK-4631] Prevent some possible NPEs.

Added additional conditions in several places to guard against possible NPEs. This PR should completely solve [FLINK-4631](https://issues.apache.org/jira/browse/FLINK-4631).

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-4631

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2807.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2807

commit b8e8a4507de1c4a101863116b92edefc12fb14f1
Author: Aleksandr Chermenin
Date: 2016-10-28T09:36:01Z
[FLINK-4631] Avoided NPE in OneInputStreamTask.

commit 9a8ec134900eda502539448e8ecde42dc019fe7a
Author: Aleksandr Chermenin
Date: 2016-11-14T09:49:48Z
[FLINK-4631] Fixed sink functions.

commit 78176dae7168006a8430f27d9df2abc8e5a9f364
Author: Aleksandr Chermenin
Date: 2016-11-14T14:06:21Z
[FLINK-4631] Fixed sources and stream tasks.

commit dd70279a1a0b31f5c49c96efebed7399a77811f5
Author: Aleksandr Chermenin
Date: 2016-11-15T09:27:09Z
[FLINK-4631] Some streaming fixes.

---
[GitHub] flink issue #2782: [hotfix] Prevent possible NPE in FlumeSink.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2782 I'm closing this PR. The changes will be included in another PR that completely solves [FLINK-4631](https://issues.apache.org/jira/browse/FLINK-4631) (reopened issue). ---
[GitHub] flink pull request #2782: [hotfix] Prevent possible NPE in FlumeSink.
Github user chermenin closed the pull request at: https://github.com/apache/flink/pull/2782 ---
[jira] [Commented] (FLINK-4631) NullPointerException during stream task cleanup
[ https://issues.apache.org/jira/browse/FLINK-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656726#comment-15656726 ]

Alexander Chermenin commented on FLINK-4631:

I think this ticket should be reopened, because only one of many cases was solved. If possible, I can check the other elements of a streaming job in the same PR.

> NullPointerException during stream task cleanup
> ---
>
> Key: FLINK-4631
> URL: https://issues.apache.org/jira/browse/FLINK-4631
> Project: Flink
> Issue Type: Bug
> Affects Versions: 1.1.2
> Environment: Ubuntu server 12.04.5 64 bit
> java version "1.8.0_40"
> Java(TM) SE Runtime Environment (build 1.8.0_40-b26)
> Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode)
> Reporter: Avihai Berkovitz
> Fix For: 1.2.0
>
> If a streaming job failed during startup (in my case, due to lack of network
> buffers), all the tasks are being cancelled before they started. This causes
> many instances of the following exception:
> {noformat}
> 2016-09-18 14:17:12,177 ERROR org.apache.flink.streaming.runtime.tasks.StreamTask - Error during cleanup of stream task
> java.lang.NullPointerException
> at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.cleanup(OneInputStreamTask.java:73)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:323)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
[GitHub] flink issue #2782: [hotfix] Prevent possible NPE in FlumeSink.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2782 1) I researched this question a little. It is based on issue [FLINK-4631](https://issues.apache.org/jira/browse/FLINK-4631) and affects PR #2709. So it is a wider issue, and all elements of a streaming job must be checked. I will add a ticket soon. 2) In `FlumeSink`, in the other cases, `client` must already be initialized. ---
[GitHub] flink pull request #2782: [hotfix] Prevent possible NPE in FlumeSink.
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2782

[hotfix] Prevent possible NPE in FlumeSink.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flume-sink

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2782.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2782

commit add835e3b6535ee63cb99d0f5f6a2177530e64c6
Author: Aleksandr Chermenin
Date: 2016-11-10T14:59:08Z
[hotfix] Prevent possible NPE in FlumeSink.

---
[GitHub] flink issue #2723: [FLINK-3617] Simple fix for OptionSerializer.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2723 @StephanEwen I restored the test that checks the `toString()` method. This check was added to the `deepEquals()` method and is performed only if `toString()` is overridden. In the other cases this check is useless, isn't it? ---
[GitHub] flink issue #2723: [FLINK-3617] Simple fix for OptionSerializer.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2723 I see. I will change this code in the near future. ---
[GitHub] flink issue #2723: [FLINK-3617] Simple fix for OptionSerializer.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2723 @StephanEwen I think the `deepEquals` method should be used for this, not `toString`. ---
[GitHub] flink pull request #2723: [FLINK-3617] Simple fix for OptionSerializer.
Github user chermenin commented on a diff in the pull request: https://github.com/apache/flink/pull/2723#discussion_r85812829

--- Diff: flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/OptionSerializer.scala ---
@@ -65,7 +65,7 @@ class OptionSerializer[A](val elemSerializer: TypeSerializer[A])
     case Some(a) =>
       target.writeBoolean(true)
       elemSerializer.serialize(a, target)
-    case None =>
+    case _ =>
--- End diff --

I understand. Is this case possible: serialization with the new serializer and deserialization with the old one? There will be an exception in that case, won't there?

---
[GitHub] flink pull request #2723: [FLINK-3617] Simple fix for OptionSerializer.
Github user chermenin commented on a diff in the pull request: https://github.com/apache/flink/pull/2723#discussion_r85782914

--- Diff: flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/OptionSerializer.scala ---
@@ -65,7 +65,7 @@ class OptionSerializer[A](val elemSerializer: TypeSerializer[A])
     case Some(a) =>
       target.writeBoolean(true)
       elemSerializer.serialize(a, target)
-    case None =>
+    case _ =>
--- End diff --

If we use `2` for the `null` value, we will get `true` from the `readBoolean` method too.

---
[GitHub] flink pull request #2723: [FLINK-3617] Simple fix for OptionSerializer.
Github user chermenin commented on a diff in the pull request: https://github.com/apache/flink/pull/2723#discussion_r85776021

--- Diff: flink-tests/src/test/scala/org/apache/flink/api/scala/runtime/CaseClassComparatorTest.scala ---
@@ -34,7 +34,7 @@ import org.mockito.Mockito
 class CaseClassComparatorTest {
-  case class CaseTestClass(a: Int, b: Int, c: Int, d: String)
+  case class CaseTestClass(a: Int, b: Int, c: Int, d: String, e: Option[Int])
--- End diff --

Ok, I will do it.

---
[GitHub] flink pull request #2723: [FLINK-3617] Simple fix for OptionSerializer.
Github user chermenin commented on a diff in the pull request: https://github.com/apache/flink/pull/2723#discussion_r85775790

--- Diff: flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/OptionSerializer.scala ---
@@ -65,7 +65,7 @@ class OptionSerializer[A](val elemSerializer: TypeSerializer[A])
     case Some(a) =>
       target.writeBoolean(true)
       elemSerializer.serialize(a, target)
-    case None =>
+    case _ =>
--- End diff --

Yes, but in that case we lose compatibility with previous implementations of the serialization. Is that possible?

---
[GitHub] flink pull request #2723: [FLINK-3617] Simple fix for OptionSerializer.
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2723

[FLINK-3617] Simple fix for OptionSerializer.

This fix prevents a possible NullPointerException during serialization; after deserialization, the Option variable will be initialized with the None value instead of null (as it was before serialization). This decision was made for compatibility between versions. Any thoughts about it? This PR solves [FLINK-3617](https://issues.apache.org/jira/browse/FLINK-3617).

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-3617

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2723.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2723

commit b742af340dbba4c927471d7f09e8d0b486dc6637
Author: Aleksandr Chermenin
Date: 2016-10-28T13:52:26Z
[FLINK-3617] Simple fix for OptionSerializer. This fix prevent possible NullPointerException at serialization process, but after deserialization Option variable will be initialized with None value and not null (as was before serialization).

---
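The trade-off the PR describes, writing a `null` with the same marker as an absent value so it round-trips to None instead of crashing the serializer, can be illustrated with a standalone Java sketch (hypothetical code using `Optional` in place of Scala's `Option`, not the PR's implementation):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Optional;

public class OptionSerializerSketch {

    // Present -> marker true + payload; absent OR null -> marker false.
    // Writing null under the "absent" marker keeps the wire format identical to
    // the old serializer, at the cost of null deserializing as empty.
    static byte[] serialize(Optional<Integer> value) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            if (value != null && value.isPresent()) {
                out.writeBoolean(true);
                out.writeInt(value.get());
            } else {
                out.writeBoolean(false);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for in-memory streams
        }
        return bytes.toByteArray();
    }

    static Optional<Integer> deserialize(byte[] data) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            return in.readBoolean() ? Optional.of(in.readInt()) : Optional.empty();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(deserialize(serialize(Optional.of(7))));  // Optional[7]
        System.out.println(deserialize(serialize(Optional.empty()))); // Optional.empty
        System.out.println(deserialize(serialize(null)));             // Optional.empty, no NPE
    }
}
```

The alternative discussed in the review, a third marker value for `null`, would preserve `null` exactly but change the wire format, which is the compatibility concern raised in the diff comments.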
[GitHub] flink pull request #2709: [FLINK-4631] Avoided NPE in OneInputStreamTask.
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2709

[FLINK-4631] Avoided NPE in OneInputStreamTask.

Added an additional condition to guard against a possible NPE. This PR solves [FLINK-4631](https://issues.apache.org/jira/browse/FLINK-4631).

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-4631

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2709.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2709

commit b8e8a4507de1c4a101863116b92edefc12fb14f1
Author: Aleksandr Chermenin
Date: 2016-10-28T09:36:01Z
[FLINK-4631] Avoided NPE in OneInputStreamTask.

---
[GitHub] flink pull request #2623: [FLINK-2608] Updated Twitter Chill version.
Github user chermenin commented on a diff in the pull request: https://github.com/apache/flink/pull/2623#discussion_r85369265

--- Diff: flink-tests/src/test/java/org/apache/flink/test/javaApiOperators/util/CollectionDataSets.java ---
@@ -710,7 +712,103 @@ public String toString() {
 pwc2.bigInt = BigInteger.valueOf(Long.MAX_VALUE).multiply(BigInteger.TEN);
 pwc2.scalaBigInt = BigInt.int2bigInt(31104000);
 pwc2.bigDecimalKeepItNull = null;
+
+ GregorianCalendar gcl2 = new GregorianCalendar(1976, 4, 3);
+ pwc2.sqlDate = new java.sql.Date(gcl2.getTimeInMillis()); // 1976
+
+ data.add(pwc1);
+ data.add(pwc2);
+
+ return env.fromCollection(data);
+ }
+
+ public static DataSet getPojoWithArraysAsListCollection(ExecutionEnvironment env) {
--- End diff --

I think it can be left. Just to be on the safe side.

---
[GitHub] flink issue #2623: [FLINK-2608] Updated Twitter Chill version.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2623 I excluded Kryo as a dependency from Chill and added a simple test to check the Java collections. The tests were added to `flink-tests` because it depends on `flink-runtime` (where Chill is used). ---
[GitHub] flink issue #2623: [FLINK-2608] Updated Twitter Chill version.
Github user chermenin commented on the issue: https://github.com/apache/flink/pull/2623 @rmetzger If I'm not mistaken, the Kryo 2.24 and 3.0 versions are compatible with each other at the level of standard IO serialization. If possible, we could either wait a little for the Chill 0.7.5 version (with the needed fixes and Kryo 2.x) or migrate Flink to the Kryo 3.x version. ---
[GitHub] flink pull request #2623: [FLINK-2608] Updated Twitter Chill version.
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2623

[FLINK-2608] Updated Twitter Chill version.

Fixed JIRA issue [[FLINK-2608] Arrays.asList(..) does not work with CollectionInputFormat](https://issues.apache.org/jira/browse/FLINK-2608), which was caused by an issue in the Twitter Chill project (https://github.com/twitter/chill/issues/255). Updated the version of Twitter Chill in `pom.xml`.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink flink-2608

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2623.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2623

commit 24b9743a3d2c264b64a27772f3a01303dbce12d6
Author: Aleksandr Chermenin
Date: 2016-10-11T14:40:36Z

    [FLINK-2608] Updated Twitter Chill version.

---
[jira] [Commented] (FLINK-2608) Arrays.asList(..) does not work with CollectionInputFormat
[ https://issues.apache.org/jira/browse/FLINK-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15437141#comment-15437141 ] Alexander Chermenin commented on FLINK-2608:

This is a bug in ArraysAsListSerializer from Twitter Chill (https://github.com/twitter/chill/issues/255). It will be fixed after the next release is built and the dependency version is updated to it.

> Arrays.asList(..) does not work with CollectionInputFormat
> --
>
>                 Key: FLINK-2608
>                 URL: https://issues.apache.org/jira/browse/FLINK-2608
>             Project: Flink
>          Issue Type: Bug
>          Components: Type Serialization System
>    Affects Versions: 0.9, 0.10.0
>            Reporter: Maximilian Michels
>            Priority: Minor
>             Fix For: 1.0.0
>
> When using Arrays.asList(..) as input for a CollectionInputFormat, the
> serialization/deserialization fails when deploying the task.
> See the following program:
> {code:java}
> public class WordCountExample {
>     public static void main(String[] args) throws Exception {
>         final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>
>         DataSet<String> text = env.fromElements(
>                 "Who's there?",
>                 "I think I hear them. Stand, ho! Who's there?");
>
>         // DOES NOT WORK
>         List<Integer> elements = Arrays.asList(0, 0, 0);
>
>         // The following works:
>         // List<Integer> elements = new ArrayList<>(new int[] {0,0,0});
>
>         DataSet<TestClass> set = env.fromElements(new TestClass(elements));
>
>         DataSet<Tuple2<String, Integer>> wordCounts = text
>                 .flatMap(new LineSplitter())
>                 .withBroadcastSet(set, "set")
>                 .groupBy(0)
>                 .sum(1);
>
>         wordCounts.print();
>     }
>
>     public static class LineSplitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
>         @Override
>         public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
>             for (String word : line.split(" ")) {
>                 out.collect(new Tuple2<String, Integer>(word, 1));
>             }
>         }
>     }
>
>     public static class TestClass implements Serializable {
>         private static final long serialVersionUID = -2932037991574118651L;
>         List<Integer> integerList;
>
>         public TestClass(List<Integer> integerList) {
>             this.integerList = integerList;
>         }
>     }
> }
> {code}
> {noformat}
> Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Cannot initialize task 'DataSource (at main(Test.java:32) (org.apache.flink.api.java.io.CollectionInputFormat))': Deserializing the InputFormat ([mytests.Test$TestClass@4d6025c5]) failed: unread block data
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$4.apply(JobManager.scala:523)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$4.apply(JobManager.scala:507)
>     at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>     at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:507)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$receiveWithLogMessages$1.applyOrElse(JobManager.scala:190)
>     at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>     at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>     at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>     at org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:43)
>     at org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:29)
>     at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>     at org.apache.flink.runtime.ActorLogMessages$$anon$1.applyOrElse(ActorLogMessages.scala:29)
>     at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>     at org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:92)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>     at akka.actor.ActorCell.invoke(ActorCell.scala
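A note on the report above: the failure occurs inside Chill's Kryo `ArraysAsListSerializer`, not in standard Java serialization. The list returned by `Arrays.asList(..)` (a `java.util.Arrays$ArrayList`) is itself `Serializable` and round-trips cleanly through `ObjectOutputStream`/`ObjectInputStream`, which a minimal, self-contained sketch can show (the class and method names here are illustrative, not from the Flink code base):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Arrays;
import java.util.List;

public class AsListRoundTrip {

    // Serialize an object to bytes and read it back with plain Java IO.
    static Object roundTrip(Object obj) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(obj);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Integer> original = Arrays.asList(0, 0, 0);
        Object copy = roundTrip(original);
        // java.util.Arrays$ArrayList survives standard serialization intact.
        System.out.println(copy.equals(original)); // prints "true"
    }
}
```

This is consistent with the fix belonging in the Chill dependency (or its serializer registration) rather than in `CollectionInputFormat` itself.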
[GitHub] flink pull request #2328: [hotfix] Fix TypeExtractor.
Github user chermenin closed the pull request at: https://github.com/apache/flink/pull/2328

---
[GitHub] flink pull request #2328: [hotfix] Fix TypeExtractor.
GitHub user chermenin opened a pull request: https://github.com/apache/flink/pull/2328

[hotfix] Fix TypeExtractor.

When a function is a method reference to an instance method of an arbitrary object of a particular type, it has no parameters, and getting the input type throws an exception. For example, this code

```
environment.fromElements(1, 2, 3, 4, 5).map(Object::toString).print();
```

throws the exception:

```
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -1
    at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:350)
    at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:304)
    at org.apache.flink.api.java.typeutils.TypeExtractor.getMapReturnTypes(TypeExtractor.java:119)
    at org.apache.flink.api.java.DataSet.map(DataSet.java:215)
    at ru.chermenin.flink.task.TestTask.main(TestTask.java:14)
```

but this is executed normally:

```
environment.fromElements(1, 2, 3, 4, 5).map(i -> i.toString()).print();
```

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chermenin/flink master

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2328.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2328

commit e5ca7a91e954043d8c217810b2004825ce68ee5e
Author: Alex Chermenin
Date: 2016-08-03T07:34:21Z

    [hotfix] Fix TypeExtractor.

    When a function is a method reference to an instance method of an arbitrary object of a particular type, it has no parameters, and getting the input type throws an exception.

---
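The root cause of the `-1` index can be seen outside Flink with `java.lang.invoke.SerializedLambda`: a serializable lambda desugars to a synthetic method that takes the input as a parameter, while a method reference like `Object::toString` points straight at the zero-argument instance method, leaving no parameter for `TypeExtractor` to inspect. A hedged, self-contained sketch (the interface and helper names are made up for illustration, not Flink APIs):

```java
import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;
import java.util.function.Function;

public class MethodRefParams {

    // A serializable Function, so the JVM generates writeReplace() for instances.
    interface SerFunction<T, R> extends Function<T, R>, Serializable {}

    // Pull the SerializedLambda metadata out of a serializable lambda instance.
    static SerializedLambda info(Serializable lambda) throws Exception {
        Method wr = lambda.getClass().getDeclaredMethod("writeReplace");
        wr.setAccessible(true);
        return (SerializedLambda) wr.invoke(lambda);
    }

    public static void main(String[] args) throws Exception {
        SerFunction<Integer, String> methodRef = Object::toString;
        SerFunction<Integer, String> lambda = i -> i.toString();

        // The method reference targets Object.toString() directly:
        // a zero-parameter signature, so parameterTypes[index] underflows.
        System.out.println(info(methodRef).getImplMethodSignature());
        // The lambda's synthetic method takes the Integer argument explicitly.
        System.out.println(info(lambda).getImplMethodSignature());
    }
}
```

The fix in this PR therefore has to treat the zero-parameter method-reference case separately instead of indexing into the implementation method's parameter list.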