[jira] [Updated] (FLINK-6637) Move registerFunction to TableEnvironment

2017-05-23 Thread Shaoxuan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaoxuan Wang updated FLINK-6637:
-
Component/s: Table API & SQL

> Move registerFunction to TableEnvironment
> -
>
> Key: FLINK-6637
> URL: https://issues.apache.org/jira/browse/FLINK-6637
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Shaoxuan Wang
>Assignee: Shaoxuan Wang
>
> We are trying to unify stream and batch processing. This unification should 
> cover the Table API queries as well as function registration (as part of the 
> DDL). Currently, the registerFunction methods for UDTFs and UDAGGs are 
> defined separately in BatchTableEnvironment and StreamTableEnvironment. We 
> should move registerFunction to TableEnvironment.
> The reason we did not put registerFunction for UDTFs and UDAGGs into 
> TableEnvironment is that we need different registerFunction methods for Java 
> and Scala code, as Java needs a special way to generate and pass the 
> implicit value of the typeInfo:
> {code:scala}
> implicit val typeInfo: TypeInformation[T] = TypeExtractor
>   .createTypeInfo(tf, classOf[TableFunction[_]], tf.getClass, 0)
>   .asInstanceOf[TypeInformation[T]]
> {code}
> It seems that we would need to duplicate the TableEnvironment class, one for 
> Java and one for Scala.
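>
> For illustration, a rough sketch of how a single Java-friendly entry point 
> in the base class could avoid the implicit (a hypothetical sketch, not 
> existing Flink API; registerTableFunctionInternal is an invented name):
> {code:java}
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.api.java.typeutils.TypeExtractor;
> import org.apache.flink.table.functions.TableFunction;
> 
> public abstract class TableEnvironment {
> 
>     @SuppressWarnings("unchecked")
>     public <T> void registerFunction(String name, TableFunction<T> tf) {
>         // Derive the result type reflectively, as the Java API has to do
>         // anyway, instead of requiring a Scala implicit at the call site.
>         TypeInformation<T> typeInfo = (TypeInformation<T>) TypeExtractor
>             .createTypeInfo(tf, TableFunction.class, tf.getClass(), 0);
>         registerTableFunctionInternal(name, tf, typeInfo);
>     }
> 
>     protected abstract <T> void registerTableFunctionInternal(
>         String name, TableFunction<T> tf, TypeInformation<T> typeInfo);
> }
> {code}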



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6613) OOM during reading big messages from Kafka

2017-05-23 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022285#comment-16022285
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-6613:


+1 to what [~rmetzger] mentioned. The Kafka consumer (0.9+) will always hold at 
most two {{ConsumerRecords}} batches: one in the {{Handover}} waiting to be 
processed, and another waiting to be added to the {{Handover}}.

Regarding what you expect:
"2) Read next batch of messages only when previous batch processed" --> this is 
already what happens, as the {{Handover}} buffer has a size of exactly 1. Also, 
I don't think it would solve the root cause of your OOM.
"1) KafkaConsumerThread read messages with total size ~1G." --> as Robert 
mentioned, you should be able to configure the Kafka client for that directly, 
which will likely solve your problem.
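
To illustrate the bound, here is a minimal sketch of the one-element hand-over 
idea (not the actual 
{{org.apache.flink.streaming.connectors.kafka.internal.Handover}}, which also 
handles wake-ups and error reporting): at most one batch occupies the slot, so 
the consumer thread blocks until the previous batch has been taken.

{code:java}
public final class OneSlotHandover<T> {

    private T slot;

    /** Called by the Kafka consumer thread; blocks while the slot is full. */
    public synchronized void produce(T batch) throws InterruptedException {
        while (slot != null) {
            wait();
        }
        slot = batch;
        notifyAll();
    }

    /** Called by the processing thread; blocks while the slot is empty. */
    public synchronized T pollNext() throws InterruptedException {
        while (slot == null) {
            wait();
        }
        T batch = slot;
        slot = null;
        notifyAll();
        return batch;
    }
}
{code}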

> OOM during reading big messages from Kafka
> --
>
> Key: FLINK-6613
> URL: https://issues.apache.org/jira/browse/FLINK-6613
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.2.0
>Reporter: Andrey
>
> Steps to reproduce:
> 1) Set up a Task Manager with a 2G heap size
> 2) Set up a job that reads messages from Kafka 0.10 (i.e. FlinkKafkaConsumer010)
> 3) Send 3300 messages of 635Kb each, so the total size is ~2G
> 4) OOM in the task manager.
> According to heap dump:
> 1) KafkaConsumerThread read messages with total size ~1G.
> 2) Pass them to the next operator using 
> org.apache.flink.streaming.connectors.kafka.internal.Handover
> 3) Then began to read another batch of messages. 
> 4) Task manager was able to read next batch of ~500Mb messages until OOM.
> Expected:
> 1) Either have constraint like "number of messages in-flight" OR
> 2) Read next batch of messages only when previous batch processed OR
> 3) Any other option which will solve OOM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (FLINK-6613) OOM during reading big messages from Kafka

2017-05-23 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022285#comment-16022285
 ] 

Tzu-Li (Gordon) Tai edited comment on FLINK-6613 at 5/24/17 4:13 AM:
-

+1 to what [~rmetzger] mentioned. The Kafka consumer (0.9+) will always hold at 
most two {{ConsumerRecords}} batches: one in the {{Handover}} waiting to be 
processed, and another in the {{KafkaConsumerThread}} waiting to be added to 
the {{Handover}}. There shouldn't be any excessive objects created.

Regarding what you expect:
"2) Read next batch of messages only when previous batch processed" --> this is 
already what happens, as the {{Handover}} buffer has a size of exactly 1. Also, 
I don't think it would solve the root cause of your OOM, even if we removed 
buffering completely.
"1) KafkaConsumerThread read messages with total size ~1G." --> as Robert 
mentioned, you should be able to configure the Kafka client for that directly, 
which will likely solve your problem.


was (Author: tzulitai):
+1 to what [~rmetzger] mentioned. The Kafka consumer (0.9+) will always hold at 
most two {{ConsumerRecords}} batches: one in the {{Handover}} waiting to be 
processed, and another in the {{KafkaConsumerThread}} waiting to be added to 
the {{Handover}}.

Regarding what you expect:
"2) Read next batch of messages only when previous batch processed" --> this is 
already what happens, as the {{Handover}} buffer has a size of exactly 1. Also, 
I don't think it would solve the root cause of your OOM, even if we removed 
buffering completely.
"1) KafkaConsumerThread read messages with total size ~1G." --> as Robert 
mentioned, you should be able to configure the Kafka client for that directly, 
which will likely solve your problem.

> OOM during reading big messages from Kafka
> --
>
> Key: FLINK-6613
> URL: https://issues.apache.org/jira/browse/FLINK-6613
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.2.0
>Reporter: Andrey
>
> Steps to reproduce:
> 1) Set up a Task Manager with a 2G heap size
> 2) Set up a job that reads messages from Kafka 0.10 (i.e. FlinkKafkaConsumer010)
> 3) Send 3300 messages of 635Kb each, so the total size is ~2G
> 4) OOM in the task manager.
> According to heap dump:
> 1) KafkaConsumerThread read messages with total size ~1G.
> 2) Pass them to the next operator using 
> org.apache.flink.streaming.connectors.kafka.internal.Handover
> 3) Then began to read another batch of messages. 
> 4) Task manager was able to read next batch of ~500Mb messages until OOM.
> Expected:
> 1) Either have constraint like "number of messages in-flight" OR
> 2) Read next batch of messages only when previous batch processed OR
> 3) Any other option which will solve OOM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6494) Migrate ResourceManager configuration options

2017-05-23 Thread Fang Yong (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022231#comment-16022231
 ] 

Fang Yong commented on FLINK-6494:
--

If so, I think another two subtasks should be created to migrate the 
configuration options of YARN and Mesos. I found that YarnConfigOptions already 
exists, and MesosConfigOptions should be created as well. What do you think?
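
For illustration, a minimal sketch of what the new class could look like, 
following the existing {{ConfigOption}} pattern (the option below is only an 
example, not an actual key list):

{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class MesosConfigOptions {

    /** Illustrative option; the real class would mirror the current Mesos config keys. */
    public static final ConfigOption<String> MASTER_URL =
        ConfigOptions.key("mesos.master")
            .noDefaultValue();
}
{code}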

> Migrate ResourceManager configuration options
> -
>
> Key: FLINK-6494
> URL: https://issues.apache.org/jira/browse/FLINK-6494
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, ResourceManager
>Reporter: Chesnay Schepler
>Assignee: Fang Yong
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6477) The first time to click Taskmanager cannot get the actual data

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022183#comment-16022183
 ] 

ASF GitHub Bot commented on FLINK-6477:
---

Github user chenzio commented on the issue:

https://github.com/apache/flink/pull/3971
  
@zentol This is just a possible exception; it will not happen in the normal 
scenario. BTW, would it be better to reduce the update interval to 1s?
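
For reference, the interval is configured in {{flink-conf.yaml}} and is 
interpreted in milliseconds (default: 3000), so a 1s refresh would be:

{noformat}
jobmanager.web.refresh-interval: 1000
{noformat}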


> The first time to click Taskmanager cannot get the actual data
> --
>
> Key: FLINK-6477
> URL: https://issues.apache.org/jira/browse/FLINK-6477
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: zhihao chen
>Assignee: zhihao chen
> Attachments: errDisplay.jpg
>
>
> The first click on the TaskManager page of the Flink web UI shows less than 
> the actual data. When the parameter “jobmanager.web.refresh-interval” is set 
> to a larger value, e.g. 180, the page does not show the correct data until 
> the refresh interval has elapsed, unless it is refreshed manually.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3971: [FLINK-6477][web] Waiting the metrics retrieved from the ...

2017-05-23 Thread chenzio
Github user chenzio commented on the issue:

https://github.com/apache/flink/pull/3971
  
@zentol This is just a possible exception; it will not happen in the normal 
scenario. BTW, would it be better to reduce the update interval to 1s?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (FLINK-5486) Lack of synchronization in BucketingSink#handleRestoredBucketState()

2017-05-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822346#comment-15822346
 ] 

Ted Yu edited comment on FLINK-5486 at 5/24/17 12:14 AM:
-

Lock on State.bucketStates should be held in the following method:

{code}
private void handleRestoredBucketState(State<T> restoredState) {
  Preconditions.checkNotNull(restoredState);

  for (BucketState<T> bucketState : restoredState.bucketStates.values()) {
{code}
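
A sketch of how the lock could be applied (a hand-written illustration, not an 
actual patch; generic parameters abbreviated):

{code}
private void handleRestoredBucketState(State<T> restoredState) {
  Preconditions.checkNotNull(restoredState);

  synchronized (restoredState.bucketStates) {
    for (BucketState<T> bucketState : restoredState.bucketStates.values()) {
      // ... existing restore logic, now guarded by the same lock
      // used by the other methods that iterate over bucketStates ...
    }
  }
}
{code}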


was (Author: yuzhih...@gmail.com):
Lock on State.bucketStates should be held in the following method:
{code}
private void handleRestoredBucketState(State<T> restoredState) {
  Preconditions.checkNotNull(restoredState);

  for (BucketState<T> bucketState : restoredState.bucketStates.values()) {
{code}

> Lack of synchronization in BucketingSink#handleRestoredBucketState()
> 
>
> Key: FLINK-5486
> URL: https://issues.apache.org/jira/browse/FLINK-5486
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Reporter: Ted Yu
>
> Here is related code:
> {code}
> handlePendingFilesForPreviousCheckpoints(bucketState.pendingFilesPerCheckpoint);
> synchronized (bucketState.pendingFilesPerCheckpoint) {
>   bucketState.pendingFilesPerCheckpoint.clear();
> }
> {code}
> The handlePendingFilesForPreviousCheckpoints() call should be enclosed inside 
> the synchronization block. Otherwise during the processing of 
> handlePendingFilesForPreviousCheckpoints(), some entries of the map may be 
> cleared.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-4534) Lack of synchronization in BucketingSink#restoreState()

2017-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-4534:
--
Description: 
Iteration over state.bucketStates is protected by synchronization in other 
methods, except for the following in restoreState():
{code}
for (BucketState<T> bucketState : state.bucketStates.values()) {
{code}
and following in close():
{code}
for (Map.Entry<String, BucketState<T>> entry : state.bucketStates.entrySet()) {
  closeCurrentPartFile(entry.getValue());
{code}
w.r.t. bucketState.pendingFilesPerCheckpoint , there is similar issue starting 
line 752:
{code}
  Set<Long> pastCheckpointIds = bucketState.pendingFilesPerCheckpoint.keySet();
  LOG.debug("Moving pending files to final location on restore.");
  for (Long pastCheckpointId : pastCheckpointIds) {
{code}
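
A sketch of the guarded iteration for close() (a hand-written fragment for 
illustration, not an actual patch):

{code}
synchronized (state.bucketStates) {
  for (Map.Entry<String, BucketState<T>> entry : state.bucketStates.entrySet()) {
    closeCurrentPartFile(entry.getValue());
  }
}
{code}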

  was:
Iteration over state.bucketStates is protected by synchronization in other 
methods, except for the following in restoreState():
{code}
for (BucketState<T> bucketState : state.bucketStates.values()) {
{code}
and following in close():

{code}
for (Map.Entry<String, BucketState<T>> entry : state.bucketStates.entrySet()) {
  closeCurrentPartFile(entry.getValue());
{code}
w.r.t. bucketState.pendingFilesPerCheckpoint , there is similar issue starting 
line 752:
{code}
  Set<Long> pastCheckpointIds = bucketState.pendingFilesPerCheckpoint.keySet();
  LOG.debug("Moving pending files to final location on restore.");
  for (Long pastCheckpointId : pastCheckpointIds) {
{code}


> Lack of synchronization in BucketingSink#restoreState()
> ---
>
> Key: FLINK-4534
> URL: https://issues.apache.org/jira/browse/FLINK-4534
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Reporter: Ted Yu
>
> Iteration over state.bucketStates is protected by synchronization in other 
> methods, except for the following in restoreState():
> {code}
> for (BucketState<T> bucketState : state.bucketStates.values()) {
> {code}
> and following in close():
> {code}
> for (Map.Entry<String, BucketState<T>> entry : state.bucketStates.entrySet()) {
>   closeCurrentPartFile(entry.getValue());
> {code}
> w.r.t. bucketState.pendingFilesPerCheckpoint , there is similar issue 
> starting line 752:
> {code}
>   Set<Long> pastCheckpointIds = bucketState.pendingFilesPerCheckpoint.keySet();
>   LOG.debug("Moving pending files to final location on restore.");
>   for (Long pastCheckpointId : pastCheckpointIds) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6693) Support DATE_FORMAT function in the SQL API

2017-05-23 Thread Zhenqiu Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022107#comment-16022107
 ] 

Zhenqiu Huang commented on FLINK-6693:
--

I am interested in this task. If you haven't started working on it yet, I'd be 
glad to pick it up.

> Support DATE_FORMAT function in the SQL API
> ---
>
> Key: FLINK-6693
> URL: https://issues.apache.org/jira/browse/FLINK-6693
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> It would be quite handy to have the {{DATE_FORMAT}} function in Flink to 
> support various date/time-related operations.
> The specification of the {{DATE_FORMAT}} function can be found at 
> https://prestodb.io/docs/current/functions/datetime.html.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6693) Support DATE_FORMAT function in the SQL API

2017-05-23 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6693:
-

 Summary: Support DATE_FORMAT function in the SQL API
 Key: FLINK-6693
 URL: https://issues.apache.org/jira/browse/FLINK-6693
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Haohui Mai
Assignee: Haohui Mai


It would be quite handy to have the {{DATE_FORMAT}} function in Flink to 
support various date/time-related operations.

The specification of the {{DATE_FORMAT}} function can be found at 
https://prestodb.io/docs/current/functions/datetime.html.
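
For illustration, once this is implemented a query could look like the 
following (table and column names are hypothetical; the format specifiers 
follow the Presto spec):

{code:java}
// Assumes a table environment "tableEnv" with a registered table "Orders"
// that has a timestamp column "order_time".
Table result = tableEnv.sql(
    "SELECT DATE_FORMAT(order_time, '%Y-%m-%d') FROM Orders");
{code}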



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (FLINK-6672) Support CAST(timestamp AS BIGINT)

2017-05-23 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai reassigned FLINK-6672:
-

Assignee: Haohui Mai

> Support CAST(timestamp AS BIGINT)
> -
>
> Key: FLINK-6672
> URL: https://issues.apache.org/jira/browse/FLINK-6672
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Timo Walther
>Assignee: Haohui Mai
>
> It is not possible in SQL to cast a TIMESTAMP to BIGINT, a TIME to INT, or a 
> DATE to INT. The Table API and the code generation support this, but the SQL 
> validation seems to prohibit it.
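>
> For illustration, the kind of cast that currently fails validation 
> (hypothetical table and column names, assuming a table environment 
> {{tableEnv}}):
> {code:java}
> Table t = tableEnv.sql("SELECT CAST(ts AS BIGINT) FROM MyTable");
> {code}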



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6692) The flink-dist jar contains unshaded netty jar

2017-05-23 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated FLINK-6692:
--
Summary: The flink-dist jar contains unshaded netty jar  (was: The 
flink-dist jar contains unshaded nettyjar)

> The flink-dist jar contains unshaded netty jar
> --
>
> Key: FLINK-6692
> URL: https://issues.apache.org/jira/browse/FLINK-6692
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 1.3.0
>
>
> The {{flink-dist}} jar contains unshaded netty 3 and netty 4 classes:
> {noformat}
> io/netty/handler/codec/http/router/
> io/netty/handler/codec/http/router/BadClientSilencer.class
> io/netty/handler/codec/http/router/MethodRouted.class
> io/netty/handler/codec/http/router/Handler.class
> io/netty/handler/codec/http/router/Router.class
> io/netty/handler/codec/http/router/DualMethodRouter.class
> io/netty/handler/codec/http/router/Routed.class
> io/netty/handler/codec/http/router/AbstractHandler.class
> io/netty/handler/codec/http/router/KeepAliveWrite.class
> io/netty/handler/codec/http/router/DualAbstractHandler.class
> io/netty/handler/codec/http/router/MethodRouter.class
> {noformat}
> {noformat}
> org/jboss/netty/util/internal/jzlib/InfBlocks.class
> org/jboss/netty/util/internal/jzlib/InfCodes.class
> org/jboss/netty/util/internal/jzlib/InfTree.class
> org/jboss/netty/util/internal/jzlib/Inflate$1.class
> org/jboss/netty/util/internal/jzlib/Inflate.class
> org/jboss/netty/util/internal/jzlib/JZlib$WrapperType.class
> org/jboss/netty/util/internal/jzlib/JZlib.class
> org/jboss/netty/util/internal/jzlib/StaticTree.class
> org/jboss/netty/util/internal/jzlib/Tree.class
> org/jboss/netty/util/internal/jzlib/ZStream$1.class
> org/jboss/netty/util/internal/jzlib/ZStream.class
> {noformat}
> Is this expected behavior?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6692) The flink-dist jar contains unshaded nettyjar

2017-05-23 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6692:
-

 Summary: The flink-dist jar contains unshaded nettyjar
 Key: FLINK-6692
 URL: https://issues.apache.org/jira/browse/FLINK-6692
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 1.3.0


The {{flink-dist}} jar contains unshaded netty 3 and netty 4 classes:

{noformat}
io/netty/handler/codec/http/router/
io/netty/handler/codec/http/router/BadClientSilencer.class
io/netty/handler/codec/http/router/MethodRouted.class
io/netty/handler/codec/http/router/Handler.class
io/netty/handler/codec/http/router/Router.class
io/netty/handler/codec/http/router/DualMethodRouter.class
io/netty/handler/codec/http/router/Routed.class
io/netty/handler/codec/http/router/AbstractHandler.class
io/netty/handler/codec/http/router/KeepAliveWrite.class
io/netty/handler/codec/http/router/DualAbstractHandler.class
io/netty/handler/codec/http/router/MethodRouter.class
{noformat}

{noformat}
org/jboss/netty/util/internal/jzlib/InfBlocks.class
org/jboss/netty/util/internal/jzlib/InfCodes.class
org/jboss/netty/util/internal/jzlib/InfTree.class
org/jboss/netty/util/internal/jzlib/Inflate$1.class
org/jboss/netty/util/internal/jzlib/Inflate.class
org/jboss/netty/util/internal/jzlib/JZlib$WrapperType.class
org/jboss/netty/util/internal/jzlib/JZlib.class
org/jboss/netty/util/internal/jzlib/StaticTree.class
org/jboss/netty/util/internal/jzlib/Tree.class
org/jboss/netty/util/internal/jzlib/ZStream$1.class
org/jboss/netty/util/internal/jzlib/ZStream.class
{noformat}

Is this expected behavior?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6692) The flink-dist jar contains unshaded nettyjar

2017-05-23 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated FLINK-6692:
--
Component/s: Build System

> The flink-dist jar contains unshaded nettyjar
> -
>
> Key: FLINK-6692
> URL: https://issues.apache.org/jira/browse/FLINK-6692
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 1.3.0
>
>
> The {{flink-dist}} jar contains unshaded netty 3 and netty 4 classes:
> {noformat}
> io/netty/handler/codec/http/router/
> io/netty/handler/codec/http/router/BadClientSilencer.class
> io/netty/handler/codec/http/router/MethodRouted.class
> io/netty/handler/codec/http/router/Handler.class
> io/netty/handler/codec/http/router/Router.class
> io/netty/handler/codec/http/router/DualMethodRouter.class
> io/netty/handler/codec/http/router/Routed.class
> io/netty/handler/codec/http/router/AbstractHandler.class
> io/netty/handler/codec/http/router/KeepAliveWrite.class
> io/netty/handler/codec/http/router/DualAbstractHandler.class
> io/netty/handler/codec/http/router/MethodRouter.class
> {noformat}
> {noformat}
> org/jboss/netty/util/internal/jzlib/InfBlocks.class
> org/jboss/netty/util/internal/jzlib/InfCodes.class
> org/jboss/netty/util/internal/jzlib/InfTree.class
> org/jboss/netty/util/internal/jzlib/Inflate$1.class
> org/jboss/netty/util/internal/jzlib/Inflate.class
> org/jboss/netty/util/internal/jzlib/JZlib$WrapperType.class
> org/jboss/netty/util/internal/jzlib/JZlib.class
> org/jboss/netty/util/internal/jzlib/StaticTree.class
> org/jboss/netty/util/internal/jzlib/Tree.class
> org/jboss/netty/util/internal/jzlib/ZStream$1.class
> org/jboss/netty/util/internal/jzlib/ZStream.class
> {noformat}
> Is this expected behavior?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6613) OOM during reading big messages from Kafka

2017-05-23 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021747#comment-16021747
 ] 

Robert Metzger commented on FLINK-6613:
---

There are some options in the Kafka consumer (that you can just pass using the 
configuration properties of Flink's consumer) that influence the fetch behavior.
For example {{max.poll.records}} limits the number of records per poll, 
{{fetch.max.bytes}} the bytes per poll. I would try to play around with these 
settings to see if it solves the problem.

Flink itself won't buffer any records directly on the heap. The only thing that 
buffers in Flink is the network stack, but that is strictly bounded by a 
configuration parameter.
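
A minimal sketch of passing such limits through the consumer properties (the 
broker address, group id, topic, and limit values are placeholders that need 
tuning; {{max.partition.fetch.bytes}} is the per-partition counterpart of 
{{fetch.max.bytes}} and is also available on older 0.10.x clients):

{code:java}
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "my-group");
// Limit how many records a single poll() may return.
props.setProperty("max.poll.records", "500");
// Limit the bytes fetched per partition and request.
props.setProperty("max.partition.fetch.bytes", "1048576");

FlinkKafkaConsumer010<String> consumer =
    new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props);
{code}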

> OOM during reading big messages from Kafka
> --
>
> Key: FLINK-6613
> URL: https://issues.apache.org/jira/browse/FLINK-6613
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.2.0
>Reporter: Andrey
>
> Steps to reproduce:
> 1) Set up a Task Manager with a 2G heap size
> 2) Set up a job that reads messages from Kafka 0.10 (i.e. FlinkKafkaConsumer010)
> 3) Send 3300 messages of 635Kb each, so the total size is ~2G
> 4) OOM in the task manager.
> According to heap dump:
> 1) KafkaConsumerThread read messages with total size ~1G.
> 2) Pass them to the next operator using 
> org.apache.flink.streaming.connectors.kafka.internal.Handover
> 3) Then began to read another batch of messages. 
> 4) Task manager was able to read next batch of ~500Mb messages until OOM.
> Expected:
> 1) Either have constraint like "number of messages in-flight" OR
> 2) Read next batch of messages only when previous batch processed OR
> 3) Any other option which will solve OOM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021708#comment-16021708
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3973
  
+1


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3973: [FLINK-6687] [web] Activate strict checkstyle for flink-r...

2017-05-23 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3973
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021701#comment-16021701
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3973
  
@greghogan I've added a separate import block for scala and addressed all 
comments.


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3973: [FLINK-6687] [web] Activate strict checkstyle for flink-r...

2017-05-23 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3973
  
@greghogan I've added a separate import block for scala and addressed all 
comments.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6690) Rescaling broken

2017-05-23 Thread Stefan Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021694#comment-16021694
 ] 

Stefan Richter commented on FLINK-6690:
---

I have found the cause: a bug that was introduced as part of the serializer 
backwards-compatibility story. Reverting ~4 lines fixes the problem. I will 
publish the fix tomorrow.

> Rescaling broken
> 
>
> Key: FLINK-6690
> URL: https://issues.apache.org/jira/browse/FLINK-6690
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Stefan Richter
>Priority: Blocker
>
> Rescaling appears to be broken for both 1.3 and 1.4. When I tried it out, I 
> got the following exception:
> {code}
> java.lang.IllegalStateException: Could not initialize keyed state backend.
>at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:321)
>at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:217)
>at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:675)
>at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:662)
>at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:251)
>at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>at java.util.ArrayList.get(ArrayList.java:411)
>at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullRestoreOperation.restoreKVStateData(RocksDBKeyedStateBackend.java:1183)
>at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullRestoreOperation.restoreKeyGroupsInStateHandle(RocksDBKeyedStateBackend.java:1089)
>at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullRestoreOperation.doRestore(RocksDBKeyedStateBackend.java:1070)
>at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.restore(RocksDBKeyedStateBackend.java:957)
>at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.createKeyedStateBackend(StreamTask.java:771)
>at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:311)
>... 6 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6691) Add checkstyle import block rule for scala imports

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6691:
---

 Summary: Add checkstyle import block rule for scala imports
 Key: FLINK-6691
 URL: https://issues.apache.org/jira/browse/FLINK-6691
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


Similar to the java and javax imports, we should give scala imports a separate 
import block.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6690) Rescaling broken

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6690:
---

 Summary: Rescaling broken
 Key: FLINK-6690
 URL: https://issues.apache.org/jira/browse/FLINK-6690
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.3.0, 1.4.0
Reporter: Chesnay Schepler
Assignee: Stefan Richter
Priority: Blocker


Rescaling appears to be broken for both 1.3 and 1.4. When I tried it out, I got 
the following exception:

{code}
java.lang.IllegalStateException: Could not initialize keyed state backend.  
 at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:321)
   at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:217)
   at 
org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:675)
   at 
org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:662)
   at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:251) 
  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
  at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
  at java.util.ArrayList.rangeCheck(ArrayList.java:635)
  at java.util.ArrayList.get(ArrayList.java:411)
  at 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullRestoreOperation.restoreKVStateData(RocksDBKeyedStateBackend.java:1183)
   at 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullRestoreOperation.restoreKeyGroupsInStateHandle(RocksDBKeyedStateBackend.java:1089)
   at 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullRestoreOperation.doRestore(RocksDBKeyedStateBackend.java:1070)
   at 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.restore(RocksDBKeyedStateBackend.java:957)
   at 
org.apache.flink.streaming.runtime.tasks.StreamTask.createKeyedStateBackend(StreamTask.java:771)
   at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:311)
   ... 6 more
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Issue Comment Deleted] (FLINK-4719) KryoSerializer random exception

2017-05-23 Thread Joshua Griffith (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua Griffith updated FLINK-4719:
---
Comment: was deleted

(was: I'm also experiencing this issue when using the distinct transform. I 
thought it was an issue with the generic serializers, so I wrote and registered 
custom Kryo serializers for each type, but I'm still getting this error when 
spilling to disk using Flink 1.3.0-RC0. Does anyone know of a workaround?)

> KryoSerializer random exception
> ---
>
> Key: FLINK-4719
> URL: https://issues.apache.org/jira/browse/FLINK-4719
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.1.1
>Reporter: Flavio Pompermaier
>  Labels: kryo, serialization
>
> There's a random exception that somehow involves the KryoSerializer when 
> using POJOs in Flink jobs reading large volumes of data.
> It is usually thrown in several places, e.g. (the exceptions reported here 
> may refer to previous versions of Flink...):
> {code}
> java.lang.Exception: The data preparation for task 'CHAIN GroupReduce 
> (GroupReduce at createResult(IndexMappingExecutor.java:42)) -> Map (Map at 
> main(Jsonizer.java:128))' , caused an error: Error obtaining the sorted 
> input: Thread 'SortMerger spilling thread' terminated due to an exception: 
> Unable to find class: java.ttil.HashSet
> at 
> org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:456)
> at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:345)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Error obtaining the sorted input: 
> Thread 'SortMerger spilling thread' terminated due to an exception: Unable to 
> find class: java.ttil.HashSet
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:619)
> at 
> org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1079)
> at 
> org.apache.flink.runtime.operators.GroupReduceDriver.prepare(GroupReduceDriver.java:94)
> at 
> org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:450)
> ... 3 more
> Caused by: java.io.IOException: Thread 'SortMerger spilling thread' 
> terminated due to an exception: Unable to find class: java.ttil.HashSet
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:800)
> Caused by: com.esotericsoftware.kryo.KryoException: Unable to find class: 
> java.ttil.HashSet
> at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
> at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
> at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
> at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:752)
> at 
> com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:143)
> at 
> com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:21)
> at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
> at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:228)
> at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:242)
> at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:252)
> at 
> org.apache.flink.api.java.typeutils.runtime.PojoSerializer.copy(PojoSerializer.java:556)
> at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.copy(TupleSerializerBase.java:75)
> at 
> org.apache.flink.runtime.operators.sort.NormalizedKeySorter.writeToOutput(NormalizedKeySorter.java:499)
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$SpillingThread.go(UnilateralSortMerger.java:1344)
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:796)
> Caused by: java.lang.ClassNotFoundException: java.ttil.HashSet
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
> {code}
> {code}
> Caused by: java.io.IOException: Serializer consumed more bytes than the 
> record had. This indicates broken serialization. 

[jira] [Commented] (FLINK-4719) KryoSerializer random exception

2017-05-23 Thread Joshua Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021656#comment-16021656
 ] 

Joshua Griffith commented on FLINK-4719:


I'm also experiencing this issue when using the distinct transform. I thought 
it was an issue with the generic serializers, so I wrote and registered custom 
Kryo serializers for each type, but I'm still getting this error when spilling 
to disk using Flink 1.3.0-RC0. Does anyone know of a workaround?
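
For reference, a minimal sketch of such a registration ({{MyType}} and its 
serializer are placeholders, not types from this report):

{code:java}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import org.apache.flink.api.java.ExecutionEnvironment;

// Placeholder POJO.
public class MyType {
    public String value;
    public MyType(String value) { this.value = value; }
}

public class MyTypeSerializer extends Serializer<MyType> {
    @Override
    public void write(Kryo kryo, Output output, MyType object) {
        output.writeString(object.value);
    }

    @Override
    public MyType read(Kryo kryo, Input input, Class<MyType> type) {
        return new MyType(input.readString());
    }
}

// Registration (e.g. in the job's main method):
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
env.getConfig().registerTypeWithKryoSerializer(MyType.class, MyTypeSerializer.class);
{code}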

> KryoSerializer random exception
> ---
>
> Key: FLINK-4719
> URL: https://issues.apache.org/jira/browse/FLINK-4719
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.1.1
>Reporter: Flavio Pompermaier
>  Labels: kryo, serialization
>
> There's a random exception that somehow involves the KryoSerializer when 
> using POJOs in Flink jobs reading large volumes of data.
> It is usually thrown in several places, e.g. (the exceptions reported here 
> may refer to previous versions of Flink...):
> {code}
> java.lang.Exception: The data preparation for task 'CHAIN GroupReduce 
> (GroupReduce at createResult(IndexMappingExecutor.java:42)) -> Map (Map at 
> main(Jsonizer.java:128))' , caused an error: Error obtaining the sorted 
> input: Thread 'SortMerger spilling thread' terminated due to an exception: 
> Unable to find class: java.ttil.HashSet
> at 
> org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:456)
> at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:345)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Error obtaining the sorted input: 
> Thread 'SortMerger spilling thread' terminated due to an exception: Unable to 
> find class: java.ttil.HashSet
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:619)
> at 
> org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1079)
> at 
> org.apache.flink.runtime.operators.GroupReduceDriver.prepare(GroupReduceDriver.java:94)
> at 
> org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:450)
> ... 3 more
> Caused by: java.io.IOException: Thread 'SortMerger spilling thread' 
> terminated due to an exception: Unable to find class: java.ttil.HashSet
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:800)
> Caused by: com.esotericsoftware.kryo.KryoException: Unable to find class: 
> java.ttil.HashSet
> at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
> at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
> at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
> at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:752)
> at 
> com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:143)
> at 
> com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:21)
> at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
> at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:228)
> at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:242)
> at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:252)
> at 
> org.apache.flink.api.java.typeutils.runtime.PojoSerializer.copy(PojoSerializer.java:556)
> at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.copy(TupleSerializerBase.java:75)
> at 
> org.apache.flink.runtime.operators.sort.NormalizedKeySorter.writeToOutput(NormalizedKeySorter.java:499)
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$SpillingThread.go(UnilateralSortMerger.java:1344)
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:796)
> Caused by: java.lang.ClassNotFoundException: java.ttil.HashSet
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
> {code}
> {code}
> Caused by: java.io.IOException: Serializer consumed more bytes than the 
> record had. This indicates broken 

[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021653#comment-16021653
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118075403
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/RuntimeMonitorHandler.java
 ---
@@ -29,15 +34,8 @@
 import io.netty.handler.codec.http.HttpVersion;
 import io.netty.handler.codec.http.router.KeepAliveWrite;
 import io.netty.handler.codec.http.router.Routed;
-
-import org.apache.flink.configuration.ConfigConstants;
-import org.apache.flink.runtime.instance.ActorGateway;
-import org.apache.flink.runtime.webmonitor.handlers.RequestHandler;
-import org.apache.flink.util.ExceptionUtils;
-
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
--- End diff --

Agreed.


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021650#comment-16021650
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118075328
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/RequestHandler.java
 ---
@@ -36,13 +37,15 @@
 * respond with a full http response, including content-type, 
content-length, etc.
 *
 * Exceptions may be throws and will be handled.
-* 
+*
+*
--- End diff --

remnants of a regex replacement gone wrong :/


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3973: [FLINK-6687] [web] Activate strict checkstyle for ...

2017-05-23 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118075328
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/RequestHandler.java
 ---
@@ -36,13 +37,15 @@
 * respond with a full http response, including content-type, 
content-length, etc.
 *
 * Exceptions may be throws and will be handled.
-* 
+*
+*
--- End diff --

remnants of a regex replacement gone wrong :/


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3973: [FLINK-6687] [web] Activate strict checkstyle for ...

2017-05-23 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118075403
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/RuntimeMonitorHandler.java
 ---
@@ -29,15 +34,8 @@
 import io.netty.handler.codec.http.HttpVersion;
 import io.netty.handler.codec.http.router.KeepAliveWrite;
 import io.netty.handler.codec.http.router.Routed;
-
-import org.apache.flink.configuration.ConfigConstants;
-import org.apache.flink.runtime.instance.ActorGateway;
-import org.apache.flink.runtime.webmonitor.handlers.RequestHandler;
-import org.apache.flink.util.ExceptionUtils;
-
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
--- End diff --

Agreed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021621#comment-16021621
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118071671
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/metrics/TaskManagerMetricsHandlerTest.java
 ---
@@ -33,6 +34,9 @@
 import static org.junit.Assert.assertNull;
 import static org.powermock.api.mockito.PowerMockito.mock;
 
+/**
+ * Tests for the TaskManagersMetricsHandler.
--- End diff --

`Manager`


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021619#comment-16021619
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118060440
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/metrics/MetricStoreTest.java
 ---
@@ -15,17 +15,22 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
 package org.apache.flink.runtime.webmonitor.metrics;
 
 import org.apache.flink.runtime.metrics.dump.MetricDump;
 import org.apache.flink.runtime.metrics.dump.QueryScopeInfo;
 import org.apache.flink.util.TestLogger;
+
 import org.junit.Test;
 
 import java.io.IOException;
 
 import static org.junit.Assert.assertEquals;
 
+/**
+ * Tests for teh MetricStore.
--- End diff --

spelling


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021618#comment-16021618
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118071367
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/handlers/checkpoints/CheckpointStatsSubtaskDetailsHandlerTest.java
 ---
@@ -57,6 +57,9 @@
 import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
+/**
+ * Tests for the CheckpointStatsSubtaskDetailsHandlerTest.
--- End diff --

Self-reference


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021623#comment-16021623
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118057623
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/RequestHandler.java
 ---
@@ -36,13 +37,15 @@
 * respond with a full http response, including content-type, 
content-length, etc.
 *
 * Exceptions may be throws and will be handled.
-* 
+*
+*
--- End diff --

Indentation lines 41 and 48.


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021622#comment-16021622
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118058165
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/TaskManagerLogHandler.java
 ---
@@ -100,26 +101,27 @@
private static final String TASKMANAGER_LOG_REST_PATH = 
"/taskmanagers/:taskmanagerid/log";
private static final String TASKMANAGER_OUT_REST_PATH = 
"/taskmanagers/:taskmanagerid/stdout";
 
-   /** Keep track of last transmitted log, to clean up old ones */
+   /** Keep track of last transmitted log, to clean up old ones. */
private final HashMap lastSubmittedLog = new 
HashMap<>();
private final HashMap lastSubmittedStdout = new 
HashMap<>();
 
-   /** Keep track of request status, prevents multiple log requests for a 
single TM running concurrently */
+   /** Keep track of request status, prevents multiple log requests for a 
single TM running concurrently. */
private final ConcurrentHashMap lastRequestPending = 
new ConcurrentHashMap<>();
private final Configuration config;
 
-   /** Future of the blob cache */
+   /** Future of the blob cache. */
private Future cache;
 
-   /** Indicates which log file should be displayed; true indicates .log, 
false indicates .out */
-   private boolean serveLogFile;
+   /** Indicates which log file should be displayed;. */
--- End diff --

Unnecessary semicolon.


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021620#comment-16021620
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118056115
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/RuntimeMonitorHandler.java
 ---
@@ -29,15 +34,8 @@
 import io.netty.handler.codec.http.HttpVersion;
 import io.netty.handler.codec.http.router.KeepAliveWrite;
 import io.netty.handler.codec.http.router.Routed;
-
-import org.apache.flink.configuration.ConfigConstants;
-import org.apache.flink.runtime.instance.ActorGateway;
-import org.apache.flink.runtime.webmonitor.handlers.RequestHandler;
-import org.apache.flink.util.ExceptionUtils;
-
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
--- End diff --

We should consider giving `scala` its own block as with `java` and `javax`.
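A sketch of how such a grouping could be expressed with checkstyle's ImportOrder module; the group list below is an illustration, not Flink's actual configuration:

{code:xml}
<module name="ImportOrder">
  <!-- Hypothetical grouping: flink first, then third-party ("*"), then
       javax, java, and scala, each in its own block. -->
  <property name="groups" value="org.apache.flink,*,javax,java,scala"/>
  <!-- Require a blank line between groups, as the diff above enforces. -->
  <property name="separated" value="true"/>
  <property name="ordered" value="true"/>
  <property name="sortStaticImportsAlphabetically" value="true"/>
</module>
{code}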


> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3973: [FLINK-6687] [web] Activate strict checkstyle for ...

2017-05-23 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118071367
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/handlers/checkpoints/CheckpointStatsSubtaskDetailsHandlerTest.java
 ---
@@ -57,6 +57,9 @@
 import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
+/**
+ * Tests for the CheckpointStatsSubtaskDetailsHandlerTest.
--- End diff --

Self-reference


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3973: [FLINK-6687] [web] Activate strict checkstyle for ...

2017-05-23 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118060440
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/metrics/MetricStoreTest.java
 ---
@@ -15,17 +15,22 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
 package org.apache.flink.runtime.webmonitor.metrics;
 
 import org.apache.flink.runtime.metrics.dump.MetricDump;
 import org.apache.flink.runtime.metrics.dump.QueryScopeInfo;
 import org.apache.flink.util.TestLogger;
+
 import org.junit.Test;
 
 import java.io.IOException;
 
 import static org.junit.Assert.assertEquals;
 
+/**
+ * Tests for teh MetricStore.
--- End diff --

spelling


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3973: [FLINK-6687] [web] Activate strict checkstyle for ...

2017-05-23 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118071671
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/metrics/TaskManagerMetricsHandlerTest.java
 ---
@@ -33,6 +34,9 @@
 import static org.junit.Assert.assertNull;
 import static org.powermock.api.mockito.PowerMockito.mock;
 
+/**
+ * Tests for the TaskManagersMetricsHandler.
--- End diff --

`Manager`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3973: [FLINK-6687] [web] Activate strict checkstyle for ...

2017-05-23 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118058165
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/TaskManagerLogHandler.java
 ---
@@ -100,26 +101,27 @@
private static final String TASKMANAGER_LOG_REST_PATH = 
"/taskmanagers/:taskmanagerid/log";
private static final String TASKMANAGER_OUT_REST_PATH = 
"/taskmanagers/:taskmanagerid/stdout";
 
-   /** Keep track of last transmitted log, to clean up old ones */
+   /** Keep track of last transmitted log, to clean up old ones. */
private final HashMap lastSubmittedLog = new 
HashMap<>();
private final HashMap lastSubmittedStdout = new 
HashMap<>();
 
-   /** Keep track of request status, prevents multiple log requests for a 
single TM running concurrently */
+   /** Keep track of request status, prevents multiple log requests for a 
single TM running concurrently. */
private final ConcurrentHashMap lastRequestPending = 
new ConcurrentHashMap<>();
private final Configuration config;
 
-   /** Future of the blob cache */
+   /** Future of the blob cache. */
private Future cache;
 
-   /** Indicates which log file should be displayed; true indicates .log, 
false indicates .out */
-   private boolean serveLogFile;
+   /** Indicates which log file should be displayed;. */
--- End diff --

Unnecessary semicolon.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3973: [FLINK-6687] [web] Activate strict checkstyle for ...

2017-05-23 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118056115
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/RuntimeMonitorHandler.java
 ---
@@ -29,15 +34,8 @@
 import io.netty.handler.codec.http.HttpVersion;
 import io.netty.handler.codec.http.router.KeepAliveWrite;
 import io.netty.handler.codec.http.router.Routed;
-
-import org.apache.flink.configuration.ConfigConstants;
-import org.apache.flink.runtime.instance.ActorGateway;
-import org.apache.flink.runtime.webmonitor.handlers.RequestHandler;
-import org.apache.flink.util.ExceptionUtils;
-
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
--- End diff --

We should consider giving `scala` its own block as with `java` and `javax`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3973: [FLINK-6687] [web] Activate strict checkstyle for ...

2017-05-23 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3973#discussion_r118057623
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/RequestHandler.java
 ---
@@ -36,13 +37,15 @@
 * respond with a full http response, including content-type, 
content-length, etc.
 *
 * Exceptions may be throws and will be handled.
-* 
+*
+*
--- End diff --

Indentation lines 41 and 48.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-6662) ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job

2017-05-23 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-6662.
--
   Resolution: Fixed
Fix Version/s: 1.4.0
   1.3.0

1.4.0: ac6e5c9a03177ad18899e27c8877efb0c9211842
1.3.0: d552b34470748de803a999c2c4c1557c49b30045

> ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job
> --
>
> Key: FLINK-6662
> URL: https://issues.apache.org/jira/browse/FLINK-6662
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, Mesos, State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Jared Stehler
>Assignee: Till Rohrmann
> Fix For: 1.3.0, 1.4.0
>
>
> Running flink mesos on 1.3-release branch, I'm seeing the following error on 
> appmaster startup:
> {noformat}
> 2017-05-22 15:32:45.946 [flink-akka.actor.default-dispatcher-17] WARN  
> o.a.flink.mesos.runtime.clusterframework.MesosJobManager  - Failed to recover 
> job 088027410f1a628e7dfc59dc23df3ded.
> java.lang.Exception: Failed to retrieve the submitted job graph from state 
> handle.
> at 
> org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:186)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(JobManager.scala:536)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply$mcV$sp(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.runtime.jobgraph.tasks.JobSnapshottingSettings
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1826)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
> at 
> 

[jira] [Commented] (FLINK-6662) ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021599#comment-16021599
 ] 

ASF GitHub Bot commented on FLINK-6662:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3972


> ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job
> --
>
> Key: FLINK-6662
> URL: https://issues.apache.org/jira/browse/FLINK-6662
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, Mesos, State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Jared Stehler
>Assignee: Till Rohrmann
>
> Running flink mesos on 1.3-release branch, I'm seeing the following error on 
> appmaster startup:
> {noformat}
> 2017-05-22 15:32:45.946 [flink-akka.actor.default-dispatcher-17] WARN  
> o.a.flink.mesos.runtime.clusterframework.MesosJobManager  - Failed to recover 
> job 088027410f1a628e7dfc59dc23df3ded.
> java.lang.Exception: Failed to retrieve the submitted job graph from state 
> handle.
> at 
> org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:186)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(JobManager.scala:536)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply$mcV$sp(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.runtime.jobgraph.tasks.JobSnapshottingSettings
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1826)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:305)
> at 
> 

[GitHub] flink pull request #3972: [FLINK-6662] [errMsg] Improve error message if rec...

2017-05-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3972


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6662) ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021589#comment-16021589
 ] 

ASF GitHub Bot commented on FLINK-6662:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/3972#discussion_r118068470
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java
 ---
@@ -376,8 +377,14 @@ private static CompletedCheckpoint 
retrieveCompletedCheckpoint(Tuple2

> ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job
> --
>
> Key: FLINK-6662
> URL: https://issues.apache.org/jira/browse/FLINK-6662
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, Mesos, State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Jared Stehler
>Assignee: Till Rohrmann
>
> Running flink mesos on 1.3-release branch, I'm seeing the following error on 
> appmaster startup:
> {noformat}
> 2017-05-22 15:32:45.946 [flink-akka.actor.default-dispatcher-17] WARN  
> o.a.flink.mesos.runtime.clusterframework.MesosJobManager  - Failed to recover 
> job 088027410f1a628e7dfc59dc23df3ded.
> java.lang.Exception: Failed to retrieve the submitted job graph from state 
> handle.
> at 
> org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:186)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(JobManager.scala:536)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply$mcV$sp(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.runtime.jobgraph.tasks.JobSnapshottingSettings
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1826)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
> at 

[jira] [Commented] (FLINK-6662) ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021591#comment-16021591
 ] 

ASF GitHub Bot commented on FLINK-6662:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3972
  
Thanks for the review @zentol.


> ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job
> --
>
> Key: FLINK-6662
> URL: https://issues.apache.org/jira/browse/FLINK-6662
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, Mesos, State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Jared Stehler
>Assignee: Till Rohrmann
>
> Running flink mesos on 1.3-release branch, I'm seeing the following error on 
> appmaster startup:
> {noformat}
> 2017-05-22 15:32:45.946 [flink-akka.actor.default-dispatcher-17] WARN  
> o.a.flink.mesos.runtime.clusterframework.MesosJobManager  - Failed to recover 
> job 088027410f1a628e7dfc59dc23df3ded.
> java.lang.Exception: Failed to retrieve the submitted job graph from state 
> handle.
> at 
> org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:186)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(JobManager.scala:536)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply$mcV$sp(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.runtime.jobgraph.tasks.JobSnapshottingSettings
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1826)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:305)
> 

[GitHub] flink issue #3972: [FLINK-6662] [errMsg] Improve error message if recovery f...

2017-05-23 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3972
  
Thanks for the review @zentol.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3972: [FLINK-6662] [errMsg] Improve error message if rec...

2017-05-23 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/3972#discussion_r118068470
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java
 ---
@@ -376,8 +377,14 @@ private static CompletedCheckpoint 
retrieveCompletedCheckpoint(Tuple2
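
The diff hunk above and the body of this review comment were truncated in the
archive. As the PR title says, the change improves the error message when
recovering from a state handle fails. A minimal sketch of that pattern, with
assumed names and message wording rather than the actual patch:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.runtime.checkpoint.CompletedCheckpoint;
    import org.apache.flink.runtime.state.RetrievableStateHandle;
    import org.apache.flink.util.FlinkException;

    class CheckpointRetrievalSketch {

        // Sketch only: wrap low-level deserialization failures in a
        // FlinkException that names the state handle's path, instead of
        // surfacing a bare ClassNotFoundException to the recovery code.
        static CompletedCheckpoint retrieveCompletedCheckpoint(
                Tuple2<RetrievableStateHandle<CompletedCheckpoint>, String> stateHandlePath)
                throws FlinkException {
            try {
                return stateHandlePath.f0.retrieveState();
            } catch (Exception e) {
                throw new FlinkException("Could not retrieve checkpoint from state handle under " +
                    stateHandlePath.f1 + ". This indicates that the state handle is broken or was " +
                    "written by an incompatible Flink version.", e);
            }
        }
    }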

[jira] [Commented] (FLINK-6658) Use scala Collections in scala CEP API

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021514#comment-16021514
 ] 

ASF GitHub Bot commented on FLINK-6658:
---

Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/3963#discussion_r118055738
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/PatternStream.scala
 ---
@@ -296,25 +294,94 @@ class PatternStream[T](jPatternStream: 
JPatternStream[T]) {
 * timeout events wrapped in a [[Either]] type.
 */
   def flatSelect[L: TypeInformation, R: TypeInformation](
-  patternFlatTimeoutFunction: (mutable.Map[String, JList[T]], Long, 
Collector[L]) => Unit) (
-  patternFlatSelectFunction: (mutable.Map[String, JList[T]], 
Collector[R]) => Unit)
+  patternFlatTimeoutFunction: (Map[String, Iterable[T]], Long, 
Collector[L]) => Unit) (
+  patternFlatSelectFunction: (Map[String, Iterable[T]], Collector[R]) 
=> Unit)
 : DataStream[Either[L, R]] = {
 
 val cleanSelectFun = cleanClosure(patternFlatSelectFunction)
 val cleanTimeoutFun = cleanClosure(patternFlatTimeoutFunction)
 
 val patternFlatSelectFun = new PatternFlatSelectFunction[T, R] {
-  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit = {
-cleanSelectFun(pattern.asScala, out)
+  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit =
+cleanSelectFun(mapToScala(pattern), out)
+}
+
+val patternFlatTimeoutFun = new PatternFlatTimeoutFunction[T, L] {
+  override def timeout(
+pattern: JMap[String, JList[T]],
+timeoutTimestamp: Long, out: Collector[L])
+  : Unit = {
+cleanTimeoutFun(mapToScala(pattern), timeoutTimestamp, out)
   }
 }
 
--- End diff --

Yes, I am also referring to those methods. :)
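
For context: the diff converts the Java-typed match result, a JMap[String, JList[T]], into an immutable Scala Map[String, Iterable[T]] before invoking the user function. A minimal sketch of what such a mapToScala helper could look like; the actual helper in the PR may differ:

{code}
import java.util.{List => JList, Map => JMap}

import scala.collection.JavaConverters._

object PatternConversions {
  // Sketch: copy the Java map into an immutable Scala map, exposing each
  // java.util.List of matched events as a Scala Iterable.
  def mapToScala[T](pattern: JMap[String, JList[T]]): Map[String, Iterable[T]] =
    pattern.asScala.map { case (key, events) => (key, events.asScala.toIterable) }.toMap
}
{code}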


> Use scala Collections in scala CEP API
> --
>
> Key: FLINK-6658
> URL: https://issues.apache.org/jira/browse/FLINK-6658
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3963: [FLINK-6658][cep] Use scala Collections in scala C...

2017-05-23 Thread dawidwys
Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/3963#discussion_r118055738
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/PatternStream.scala
 ---
@@ -296,25 +294,94 @@ class PatternStream[T](jPatternStream: 
JPatternStream[T]) {
 * timeout events wrapped in a [[Either]] type.
 */
   def flatSelect[L: TypeInformation, R: TypeInformation](
-  patternFlatTimeoutFunction: (mutable.Map[String, JList[T]], Long, 
Collector[L]) => Unit) (
-  patternFlatSelectFunction: (mutable.Map[String, JList[T]], 
Collector[R]) => Unit)
+  patternFlatTimeoutFunction: (Map[String, Iterable[T]], Long, 
Collector[L]) => Unit) (
+  patternFlatSelectFunction: (Map[String, Iterable[T]], Collector[R]) 
=> Unit)
 : DataStream[Either[L, R]] = {
 
 val cleanSelectFun = cleanClosure(patternFlatSelectFunction)
 val cleanTimeoutFun = cleanClosure(patternFlatTimeoutFunction)
 
 val patternFlatSelectFun = new PatternFlatSelectFunction[T, R] {
-  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit = {
-cleanSelectFun(pattern.asScala, out)
+  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit =
+cleanSelectFun(mapToScala(pattern), out)
+}
+
+val patternFlatTimeoutFun = new PatternFlatTimeoutFunction[T, L] {
+  override def timeout(
+pattern: JMap[String, JList[T]],
+timeoutTimestamp: Long, out: Collector[L])
+  : Unit = {
+cleanTimeoutFun(mapToScala(pattern), timeoutTimestamp, out)
   }
 }
 
--- End diff --

Yes, I am also referring to those methods. :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6658) Use scala Collections in scala CEP API

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021497#comment-16021497
 ] 

ASF GitHub Bot commented on FLINK-6658:
---

Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/3963#discussion_r118051044
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/PatternStream.scala
 ---
@@ -296,25 +294,94 @@ class PatternStream[T](jPatternStream: 
JPatternStream[T]) {
 * timeout events wrapped in a [[Either]] type.
 */
   def flatSelect[L: TypeInformation, R: TypeInformation](
-  patternFlatTimeoutFunction: (mutable.Map[String, JList[T]], Long, 
Collector[L]) => Unit) (
-  patternFlatSelectFunction: (mutable.Map[String, JList[T]], 
Collector[R]) => Unit)
+  patternFlatTimeoutFunction: (Map[String, Iterable[T]], Long, 
Collector[L]) => Unit) (
+  patternFlatSelectFunction: (Map[String, Iterable[T]], Collector[R]) 
=> Unit)
 : DataStream[Either[L, R]] = {
 
 val cleanSelectFun = cleanClosure(patternFlatSelectFunction)
 val cleanTimeoutFun = cleanClosure(patternFlatTimeoutFunction)
 
 val patternFlatSelectFun = new PatternFlatSelectFunction[T, R] {
-  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit = {
-cleanSelectFun(pattern.asScala, out)
+  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit =
+cleanSelectFun(mapToScala(pattern), out)
+}
+
+val patternFlatTimeoutFun = new PatternFlatTimeoutFunction[T, L] {
+  override def timeout(
+pattern: JMap[String, JList[T]],
+timeoutTimestamp: Long, out: Collector[L])
+  : Unit = {
+cleanTimeoutFun(mapToScala(pattern), timeoutTimestamp, out)
   }
 }
 
--- End diff --

I was referring to the 2 methods that follow :). The ones that, for example, 
use `foreach`.


> Use scala Collections in scala CEP API
> --
>
> Key: FLINK-6658
> URL: https://issues.apache.org/jira/browse/FLINK-6658
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3963: [FLINK-6658][cep] Use scala Collections in scala C...

2017-05-23 Thread kl0u
Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/3963#discussion_r118051044
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/PatternStream.scala
 ---
@@ -296,25 +294,94 @@ class PatternStream[T](jPatternStream: 
JPatternStream[T]) {
 * timeout events wrapped in a [[Either]] type.
 */
   def flatSelect[L: TypeInformation, R: TypeInformation](
-  patternFlatTimeoutFunction: (mutable.Map[String, JList[T]], Long, 
Collector[L]) => Unit) (
-  patternFlatSelectFunction: (mutable.Map[String, JList[T]], 
Collector[R]) => Unit)
+  patternFlatTimeoutFunction: (Map[String, Iterable[T]], Long, 
Collector[L]) => Unit) (
+  patternFlatSelectFunction: (Map[String, Iterable[T]], Collector[R]) 
=> Unit)
 : DataStream[Either[L, R]] = {
 
 val cleanSelectFun = cleanClosure(patternFlatSelectFunction)
 val cleanTimeoutFun = cleanClosure(patternFlatTimeoutFunction)
 
 val patternFlatSelectFun = new PatternFlatSelectFunction[T, R] {
-  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit = {
-cleanSelectFun(pattern.asScala, out)
+  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit =
+cleanSelectFun(mapToScala(pattern), out)
+}
+
+val patternFlatTimeoutFun = new PatternFlatTimeoutFunction[T, L] {
+  override def timeout(
+pattern: JMap[String, JList[T]],
+timeoutTimestamp: Long, out: Collector[L])
+  : Unit = {
+cleanTimeoutFun(mapToScala(pattern), timeoutTimestamp, out)
   }
 }
 
--- End diff --

I was referring to the 2 methods that follow :). The ones that, for example, 
use `foreach`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6658) Use scala Collections in scala CEP API

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021495#comment-16021495
 ] 

ASF GitHub Bot commented on FLINK-6658:
---

Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/3963#discussion_r118050667
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/PatternStream.scala
 ---
@@ -296,25 +294,94 @@ class PatternStream[T](jPatternStream: 
JPatternStream[T]) {
 * timeout events wrapped in a [[Either]] type.
 */
   def flatSelect[L: TypeInformation, R: TypeInformation](
-  patternFlatTimeoutFunction: (mutable.Map[String, JList[T]], Long, 
Collector[L]) => Unit) (
-  patternFlatSelectFunction: (mutable.Map[String, JList[T]], 
Collector[R]) => Unit)
+  patternFlatTimeoutFunction: (Map[String, Iterable[T]], Long, 
Collector[L]) => Unit) (
+  patternFlatSelectFunction: (Map[String, Iterable[T]], Collector[R]) 
=> Unit)
 : DataStream[Either[L, R]] = {
 
 val cleanSelectFun = cleanClosure(patternFlatSelectFunction)
 val cleanTimeoutFun = cleanClosure(patternFlatTimeoutFunction)
 
 val patternFlatSelectFun = new PatternFlatSelectFunction[T, R] {
-  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit = {
-cleanSelectFun(pattern.asScala, out)
+  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit =
+cleanSelectFun(mapToScala(pattern), out)
+}
+
+val patternFlatTimeoutFun = new PatternFlatTimeoutFunction[T, L] {
+  override def timeout(
+pattern: JMap[String, JList[T]],
+timeoutTimestamp: Long, out: Collector[L])
+  : Unit = {
+cleanTimeoutFun(mapToScala(pattern), timeoutTimestamp, out)
   }
 }
 
--- End diff --

This is the scala way of applying `flatFunction`. We also provide both of 
those alternatives for e.g. `DataStream#flatMap`.


> Use scala Collections in scala CEP API
> --
>
> Key: FLINK-6658
> URL: https://issues.apache.org/jira/browse/FLINK-6658
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3963: [FLINK-6658][cep] Use scala Collections in scala C...

2017-05-23 Thread dawidwys
Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/3963#discussion_r118050667
  
--- Diff: 
flink-libraries/flink-cep-scala/src/main/scala/org/apache/flink/cep/scala/PatternStream.scala
 ---
@@ -296,25 +294,94 @@ class PatternStream[T](jPatternStream: 
JPatternStream[T]) {
 * timeout events wrapped in a [[Either]] type.
 */
   def flatSelect[L: TypeInformation, R: TypeInformation](
-  patternFlatTimeoutFunction: (mutable.Map[String, JList[T]], Long, 
Collector[L]) => Unit) (
-  patternFlatSelectFunction: (mutable.Map[String, JList[T]], 
Collector[R]) => Unit)
+  patternFlatTimeoutFunction: (Map[String, Iterable[T]], Long, 
Collector[L]) => Unit) (
+  patternFlatSelectFunction: (Map[String, Iterable[T]], Collector[R]) 
=> Unit)
 : DataStream[Either[L, R]] = {
 
 val cleanSelectFun = cleanClosure(patternFlatSelectFunction)
 val cleanTimeoutFun = cleanClosure(patternFlatTimeoutFunction)
 
 val patternFlatSelectFun = new PatternFlatSelectFunction[T, R] {
-  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit = {
-cleanSelectFun(pattern.asScala, out)
+  override def flatSelect(pattern: JMap[String, JList[T]], out: 
Collector[R]): Unit =
+cleanSelectFun(mapToScala(pattern), out)
+}
+
+val patternFlatTimeoutFun = new PatternFlatTimeoutFunction[T, L] {
+  override def timeout(
+pattern: JMap[String, JList[T]],
+timeoutTimestamp: Long, out: Collector[L])
+  : Unit = {
+cleanTimeoutFun(mapToScala(pattern), timeoutTimestamp, out)
   }
 }
 
--- End diff --

This is the scala way of applying `flatFunction`. We also provide both of 
those alternatives for e.g. `DataStream#flatMap`.
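
For comparison, the two equivalent styles on `DataStream#flatMap` in the Scala API (a minimal, self-contained sketch):

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.util.Collector

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val lines: DataStream[String] = env.fromElements("to be", "or not to be")

    // Style 1: return a collection of results.
    val words = lines.flatMap(line => line.split(" "))

    // Style 2: emit results through a Collector, as in the CEP diff above.
    val words2 = lines.flatMap((line: String, out: Collector[String]) =>
      line.split(" ").foreach(out.collect))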


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-6689) Remote StreamExecutionEnvironment fails to submit jobs against LocalFlinkMiniCluster

2017-05-23 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-6689:
--

 Summary: Remote StreamExecutionEnvironment fails to submit jobs 
against LocalFlinkMiniCluster
 Key: FLINK-6689
 URL: https://issues.apache.org/jira/browse/FLINK-6689
 Project: Flink
  Issue Type: Bug
  Components: Job-Submission
Affects Versions: 1.3.0
Reporter: Nico Kruber
 Fix For: 1.3.0


The following Flink program fails to execute with the current 1.3 branch (1.2 
works):

{code:java}
final String jobManagerAddress = "localhost";
final int jobManagerPort = ConfigConstants.DEFAULT_JOB_MANAGER_IPC_PORT;

final Configuration config = new Configuration();
config.setString(ConfigConstants.JOB_MANAGER_IPC_ADDRESS_KEY, 
jobManagerAddress);
config.setInteger(ConfigConstants.JOB_MANAGER_IPC_PORT_KEY, 
jobManagerPort);
config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true);

final LocalFlinkMiniCluster cluster = new LocalFlinkMiniCluster(config, false);
cluster.start(true);

final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.createRemoteEnvironment(jobManagerAddress, 
jobManagerPort);

env.fromElements(1L).addSink(new DiscardingSink<Long>());

// fails due to leader session id being wrong:
env.execute("test");
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6038) Add deep links to Apache Bahir Flink streaming connector documentations

2017-05-23 Thread David Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021471#comment-16021471
 ] 

David Anderson commented on FLINK-6038:
---

I'm thinking of linking to the GitHub READMEs rather than the project's pages 
-- so for example, 
https://github.com/apache/bahir-flink/tree/master/flink-connector-redis rather 
than http://bahir.apache.org/docs/flink/current/flink-streaming-redis/. The 
content is identical, but I find the GitHub versions more readable. More 
importantly, Bahir has yet to make a release, so to use these connectors you 
have to go to GitHub and clone the repo; it therefore seems more helpful to 
link directly there.

[~tzulitai] Is this a bad idea?

> Add deep links to Apache Bahir Flink streaming connector documentations
> ---
>
> Key: FLINK-6038
> URL: https://issues.apache.org/jira/browse/FLINK-6038
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Streaming Connectors
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: David Anderson
>
> Recently, the Bahir documentation for Flink streaming connectors was added 
> to Bahir's website: BAHIR-90.
> We should add deep links to the individual Bahir connector docs under 
> {{/dev/connectors/overview}}, instead of just shallow links to the source 
> {{README.md}}s in the community ecosystem page.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6688) Activate strict checkstyle in flink-test-utils

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6688:
---

 Summary: Activate strict checkstyle in flink-test-utils
 Key: FLINK-6688
 URL: https://issues.apache.org/jira/browse/FLINK-6688
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021433#comment-16021433
 ] 

ASF GitHub Bot commented on FLINK-6687:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/3973

[FLINK-6687] [web] Activate strict checkstyle for flink-runtime-web

Lots of missing javadocs, trailing spaces in javadocs, trailing tabs, and 
missing dots.

Very little code is affected.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6687_checkstyle_web

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3973.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3973


commit b680a445d17e2f9c55ea6bed7319e93ca1982e8e
Author: zentol 
Date:   2017-05-23T16:36:51Z

[FLINK-6687] [web] Activate strict checkstyle for flink-runtime-web




> Activate strict checkstyle for flink-runtime-web
> 
>
> Key: FLINK-6687
> URL: https://issues.apache.org/jira/browse/FLINK-6687
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3973: [FLINK-6687] [web] Activate strict checkstyle for ...

2017-05-23 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/3973

[FLINK-6687] [web] Activate strict checkstyle for flink-runtime-web

Lots of missing javadocs, trailing spaces in javadocs, trailing tabs, and 
missing dots.

Very little code is affected.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6687_checkstyle_web

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3973.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3973


commit b680a445d17e2f9c55ea6bed7319e93ca1982e8e
Author: zentol 
Date:   2017-05-23T16:36:51Z

[FLINK-6687] [web] Activate strict checkstyle for flink-runtime-web




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-6687) Activate strict checkstyle for flink-runtime-web

2017-05-23 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6687:
---

 Summary: Activate strict checkstyle for flink-runtime-web
 Key: FLINK-6687
 URL: https://issues.apache.org/jira/browse/FLINK-6687
 Project: Flink
  Issue Type: Improvement
  Components: Webfrontend
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6650) Fix Non-windowed group-aggregate error when using append-table mode.

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021414#comment-16021414
 ] 

ASF GitHub Bot commented on FLINK-6650:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/3958
  
Hi @twalthr Thanks for reviewing the PR. I have updated it according to your 
comments.

Best,
SunJincheng


> Fix Non-windowed group-aggregate error when using append-table mode.
> 
>
> Key: FLINK-6650
> URL: https://issues.apache.org/jira/browse/FLINK-6650
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> When I tested the non-windowed group-aggregate with {{stream.toTable(tEnv, 'a, 'b, 
> 'c).select('a.sum, weightAvgFun('a, 'b)).toAppendStream[Row].addSink(new 
> StreamITCase.StringSink)}}, I got the following error:
> {code}
> org.apache.flink.table.api.TableException: Table is not an append-only table. 
> Output needs to handle update and delete changes.
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:631)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:607)
>   at 
> org.apache.flink.table.api.scala.StreamTableEnvironment.toAppendStream(StreamTableEnvironment.scala:219)
>   at 
> org.apache.flink.table.api.scala.StreamTableEnvironment.toAppendStream(StreamTableEnvironment.scala:195)
>   at 
> org.apache.flink.table.api.scala.TableConversions.toAppendStream(TableConversions.scala:121)
> {code}
> The reason is {{DataStreamGroupAggregate#producesUpdates}} as follows:
> {code}
> override def producesUpdates = true
> {code}
> I think, from the user's point of view, what users want is (for example):
> Data:
> {code}
> val data = List(
>   (1L, 1, "Hello"),
>   (2L, 2, "Hello"),
>   (3L, 3, "Hello"),
>   (4L, 4, "Hello"),
>   (5L, 5, "Hello"),
>   (6L, 6, "Hello"),
>   (7L, 7, "Hello World"),
>   (8L, 8, "Hello World"),
>   (20L, 20, "Hello World"))
> {code}
> * Case1:
> TableAPI
> {code}
>  stream.toTable(tEnv, 'a, 'b, 'c).select('a.sum)
>  .toAppendStream[Row].addSink(new StreamITCase.StringSink)
> {code}
> Result
> {code}
> // StringSink processed data:
> 1
> 3
> 6
> 10
> 15
> 21
> 28
> 36
> 56
> // Last output data:
> 1
> 3
> 6
> 10
> 15
> 21
> 28
> 36
> 56
> {code}
> * Case 2:
> TableAPI
> {code}
> stream.toTable(tEnv, 'a, 'b, 'c).select('a.sum).toRetractStream[Row]
> .addSink(new StreamITCase.RetractingSink)
> {code}
> Result:
> {code}
> // RetractingSink processed data:
> (true,1)
> (false,1)
> (true,3)
> (false,3)
> (true,6)
> (false,6)
> (true,10)
> (false,10)
> (true,15)
> (false,15)
> (true,21)
> (false,21)
> (true,28)
> (false,28)
> (true,36)
> (false,36)
> (true,56)
> // Last output data:
> 56
> {code}
> In fact, for #Case 1 we can use unbounded OVER windows, as follows:
> TableAPI
> {code}
> stream.toTable(tEnv, 'a, 'b, 'c, 'proctime.proctime)
> .window(Over orderBy 'proctime preceding UNBOUNDED_ROW as 'w)
> .select('a.sum over 'w)
> .toAppendStream[Row].addSink(new StreamITCase.StringSink)
> {code}
> Result
> {code}
> Same as #Case1
> {code}
> But after [FLINK-6649 | https://issues.apache.org/jira/browse/FLINK-6649], 
> OVER windows cannot express #Case1 with early firing.
> So I still think that a non-windowed group-aggregate is not always an 
> update table; the user can decide which mode to use.
> Is there any drawback to this improvement? Any feedback is welcome.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3958: [FLINK-6650][table] Improve the error message for toAppen...

2017-05-23 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/3958
  
Hi @twalthr Thanks for reviewing the PR. I have updated it according to your 
comments.

Best,
SunJincheng


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6650) Fix Non-windowed group-aggregate error when using append-table mode.

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021408#comment-16021408
 ] 

ASF GitHub Bot commented on FLINK-6650:
---

Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3958#discussion_r118038355
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/StreamTableEnvironment.scala
 ---
@@ -629,8 +629,7 @@ abstract class StreamTableEnvironment(
 // if no change flags are requested, verify table is an insert-only (append-only) table.
 if (!withChangeFlag && !isAppendOnly(logicalPlan)) {
   throw new TableException(
-"Table is not an append-only table. " +
-  "Output needs to handle update and delete changes.")
+"Table is not an append-only table. Try calling the [table.toRetractStream] method.")
--- End diff --

Yes, that makes sense to the user. I will update it.
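
For reference, a minimal sketch (reusing the issue's own example; the imports 
are an assumption) of what the improved message steers the user toward -- 
asking for change flags via {{toRetractStream}} instead of {{toAppendStream}}:

{code}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.scala._
import org.apache.flink.types.Row

// A non-windowed group-aggregate produces updates, so request change
// flags: each element is a (flag, row) pair, true = add, false = retract.
stream.toTable(tEnv, 'a, 'b, 'c)
  .select('a.sum)
  .toRetractStream[Row]
  .addSink(new StreamITCase.RetractingSink)
{code}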


> Fix Non-windowed group-aggregate error when using append-table mode.
> 
>
> Key: FLINK-6650
> URL: https://issues.apache.org/jira/browse/FLINK-6650
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> When I tested the non-windowed group-aggregate with {{stream.toTable(tEnv, 'a, 'b, 
> 'c).select('a.sum, weightAvgFun('a, 'b)).toAppendStream[Row].addSink(new 
> StreamITCase.StringSink)}}, I got the following error:
> {code}
> org.apache.flink.table.api.TableException: Table is not an append-only table. 
> Output needs to handle update and delete changes.
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:631)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:607)
>   at 
> org.apache.flink.table.api.scala.StreamTableEnvironment.toAppendStream(StreamTableEnvironment.scala:219)
>   at 
> org.apache.flink.table.api.scala.StreamTableEnvironment.toAppendStream(StreamTableEnvironment.scala:195)
>   at 
> org.apache.flink.table.api.scala.TableConversions.toAppendStream(TableConversions.scala:121)
> {code}
> The reason is {{DataStreamGroupAggregate#producesUpdates}} as follows:
> {code}
> override def producesUpdates = true
> {code}
> I think, from the user's point of view, what users want is (for example):
> Data:
> {code}
> val data = List(
>   (1L, 1, "Hello"),
>   (2L, 2, "Hello"),
>   (3L, 3, "Hello"),
>   (4L, 4, "Hello"),
>   (5L, 5, "Hello"),
>   (6L, 6, "Hello"),
>   (7L, 7, "Hello World"),
>   (8L, 8, "Hello World"),
>   (20L, 20, "Hello World"))
> {code}
> * Case1:
> TableAPI
> {code}
>  stream.toTable(tEnv, 'a, 'b, 'c).select('a.sum)
>  .toAppendStream[Row].addSink(new StreamITCase.StringSink)
> {code}
> Result
> {code}
> // StringSink processed data:
> 1
> 3
> 6
> 10
> 15
> 21
> 28
> 36
> 56
> // Last output data:
> 1
> 3
> 6
> 10
> 15
> 21
> 28
> 36
> 56
> {code}
> * Case 2:
> TableAPI
> {code}
> stream.toTable(tEnv, 'a, 'b, 'c).select('a.sum).toRetractStream[Row]
> .addSink(new StreamITCase.RetractingSink)
> {code}
> Result:
> {code}
> // RetractingSink processed data:
> (true,1)
> (false,1)
> (true,3)
> (false,3)
> (true,6)
> (false,6)
> (true,10)
> (false,10)
> (true,15)
> (false,15)
> (true,21)
> (false,21)
> (true,28)
> (false,28)
> (true,36)
> (false,36)
> (true,56)
> // Last output data:
> 56
> {code}
> In fact, for #Case 1 we can use unbounded OVER windows, as follows:
> TableAPI
> {code}
> stream.toTable(tEnv, 'a, 'b, 'c, 'proctime.proctime)
> .window(Over orderBy 'proctime preceding UNBOUNDED_ROW as 'w)
> .select('a.sum over 'w)
> .toAppendStream[Row].addSink(new StreamITCase.StringSink)
> {code}
> Result
> {code}
> Same as #Case1
> {code}
> But after [FLINK-6649 | https://issues.apache.org/jira/browse/FLINK-6649], 
> OVER windows cannot express #Case1 with early firing.
> So I still think that a non-windowed group-aggregate is not always an 
> update table; the user can decide which mode to use.
> Is there any drawback to this improvement? Any feedback is welcome.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3958: [FLINK-6650][table] Improve the error message for ...

2017-05-23 Thread sunjincheng121
Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3958#discussion_r118038355
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/StreamTableEnvironment.scala
 ---
@@ -629,8 +629,7 @@ abstract class StreamTableEnvironment(
 // if no change flags are requested, verify table is an insert-only (append-only) table.
 if (!withChangeFlag && !isAppendOnly(logicalPlan)) {
   throw new TableException(
-"Table is not an append-only table. " +
-  "Output needs to handle update and delete changes.")
+"Table is not an append-only table. Try calling the [table.toRetractStream] method.")
--- End diff --

Yes, that makes sense to the user. I will update it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-6686) Improve UDXF(UDF,UDTF,UDAF) test case

2017-05-23 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6686:
--

 Summary: Improve UDXF(UDF,UDTF,UDAF) test case
 Key: FLINK-6686
 URL: https://issues.apache.org/jira/browse/FLINK-6686
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.3.0
Reporter: sunjincheng
Assignee: sunjincheng


1. Add checks that UDF, UDTF, and UDAF work properly in group-windows and 
over-windows.
2. Add checks that all built-in aggregates on Batch and Stream work properly.
3. Let types such as Timestamp, BigDecimal, or POJO flow through UDF, UDTF, 
and UDAF (input and output types); see the sketch below.
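
For illustration, a minimal sketch of such a test (assuming the 1.3 Scala Table 
API; the function, table, and column names are made up for this example) that 
lets a BigDecimal flow through a scalar UDF inside a group-window:

{code}
import java.math.BigDecimal

import org.apache.flink.table.api.scala._
import org.apache.flink.table.functions.ScalarFunction

// A trivial UDF whose input and output types are both DECIMAL.
class DecimalPassThrough extends ScalarFunction {
  def eval(d: BigDecimal): BigDecimal = d
}

val decimalUdf = new DecimalPassThrough

// Apply the UDF per tumbling window and aggregate its output; this
// exercises both the input and the output side of the type extraction.
val result = table
  .window(Tumble over 10.minutes on 'rowtime as 'w)
  .groupBy('w)
  .select(decimalUdf('amount).sum)
{code}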




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6617) Improve JAVA and SCALA logical plans consistent test

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021398#comment-16021398
 ] 

ASF GitHub Bot commented on FLINK-6617:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/3943
  
Hi @twalthr  Thanks a lot for your review. I completely agree with your 
suggestions, and I pretty much want to refactor the test structure. I have 
updated the PR according to your comments.

Thanks,
SunJincheng




> Improve JAVA and SCALA logical plans consistent test
> 
>
> Key: FLINK-6617
> URL: https://issues.apache.org/jira/browse/FLINK-6617
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently, we need some `StringExpression` tests for all Java and Scala APIs, 
> such as `GroupAggregations`, `GroupWindowAggregaton` (Session, Tumble), 
> `Calc`, etc.
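
As a rough illustration (not the PR's actual test utilities; the use of 
{{tEnv.explain}} here is an assumption), such a test could compare the plans 
produced by the Scala-expression and string-expression variants of a query:

{code}
import org.apache.flink.api.scala._
import org.apache.flink.table.api.scala._

val table = env.fromElements((1, "a")).toTable(tEnv, 'a, 'b)

// The same projection, expressed once with Scala expressions and
// once with string expressions (as the Java API would write it).
val scalaPlan  = tEnv.explain(table.select('a.sum))
val stringPlan = tEnv.explain(table.select("a.sum"))

// Both variants should translate to the identical logical plan.
assert(scalaPlan == stringPlan)
{code}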



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3943: [FLINK-6617][table] Improve JAVA and SCALA logical plans ...

2017-05-23 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/3943
  
Hi @twalthr  Thanks a lot for your review. I completely agree with your 
suggestions, and I pretty much want to refactor the test structure. I have 
updated the PR according to your comments.

Thanks,
SunJincheng




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6515) KafkaConsumer checkpointing fails because of ClassLoader issues

2017-05-23 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021344#comment-16021344
 ] 

Yassine Marzougui commented on FLINK-6515:
--

Yes, I used flink.version=1.4-SNAPSHOT in the user code.

In addition, it looks like the fix 
(https://github.com/apache/flink/commit/6f8022e35e0a49d5dfffa0ab7fd1c964b1c1bf0d) 
didn't modify the kafka-connector code, and the exception I encountered is 
actually from the new code:
{code}
Caused by: org.apache.flink.util.FlinkRuntimeException: Could not copy element via serialization
{code}


> KafkaConsumer checkpointing fails because of ClassLoader issues
> ---
>
> Key: FLINK-6515
> URL: https://issues.apache.org/jira/browse/FLINK-6515
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0
>Reporter: Aljoscha Krettek
>Assignee: Stephan Ewen
>Priority: Blocker
> Fix For: 1.3.0, 1.4.0
>
>
> A job with Kafka and checkpointing enabled fails with:
> {code}
> java.lang.Exception: Error while triggering checkpoint 1 for Source: Custom 
> Source -> Map -> Sink: Unnamed (1/1)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1195)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Could not perform checkpoint 1 for operator 
> Source: Custom Source -> Map -> Sink: Unnamed (1/1).
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:526)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:112)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1184)
>   ... 5 more
> Caused by: java.lang.Exception: Could not complete snapshot 1 for operator 
> Source: Custom Source -> Map -> Sink: Unnamed (1/1).
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:407)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1157)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1089)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:653)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:589)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:520)
>   ... 7 more
> Caused by: java.lang.RuntimeException: Could not copy instance of 
> (KafkaTopicPartition{topic='test-input', partition=0},-1).
>   at 
> org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:54)
>   at 
> org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:32)
>   at 
> org.apache.flink.runtime.state.ArrayListSerializer.copy(ArrayListSerializer.java:71)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.(DefaultOperatorStateBackend.java:368)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.deepCopy(DefaultOperatorStateBackend.java:380)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.snapshot(DefaultOperatorStateBackend.java:191)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:392)
>   ... 12 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
>   at 

[jira] [Commented] (FLINK-6432) Activate strict checkstyle for flink-python

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021341#comment-16021341
 ] 

ASF GitHub Bot commented on FLINK-6432:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3969
  
+1 to merge after Travis gives the green light.


> Activate strict checkstyle for flink-python
> ---
>
> Key: FLINK-6432
> URL: https://issues.apache.org/jira/browse/FLINK-6432
> Project: Flink
>  Issue Type: Improvement
>  Components: Python API
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6662) ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021337#comment-16021337
 ] 

ASF GitHub Bot commented on FLINK-6662:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3972#discussion_r118026253
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java
 ---
@@ -376,8 +377,14 @@ private static CompletedCheckpoint 
retrieveCompletedCheckpoint(Tuple2

> ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job
> --
>
> Key: FLINK-6662
> URL: https://issues.apache.org/jira/browse/FLINK-6662
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, Mesos, State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Jared Stehler
>Assignee: Till Rohrmann
>
> Running flink mesos on 1.3-release branch, I'm seeing the following error on 
> appmaster startup:
> {noformat}
> 2017-05-22 15:32:45.946 [flink-akka.actor.default-dispatcher-17] WARN  
> o.a.flink.mesos.runtime.clusterframework.MesosJobManager  - Failed to recover 
> job 088027410f1a628e7dfc59dc23df3ded.
> java.lang.Exception: Failed to retrieve the submitted job graph from state 
> handle.
> at 
> org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:186)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(JobManager.scala:536)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply$mcV$sp(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.runtime.jobgraph.tasks.JobSnapshottingSettings
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1826)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
> at 

[GitHub] flink issue #3969: [FLINK-6432] [py] Activate strict checkstyle for flink-py...

2017-05-23 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3969
  
+1 to merge after Travis gives the green light.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3972: [FLINK-6662] [errMsg] Improve error message if rec...

2017-05-23 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3972#discussion_r118026253
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java
 ---
@@ -376,8 +377,14 @@ private static CompletedCheckpoint 
retrieveCompletedCheckpoint(Tuple2

[jira] [Commented] (FLINK-6675) Activate strict checkstyle for flink-annotations

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021327#comment-16021327
 ] 

ASF GitHub Bot commented on FLINK-6675:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3970
  
+1 to merge after Travis gives the green light.


> Activate strict checkstyle for flink-annotations
> 
>
> Key: FLINK-6675
> URL: https://issues.apache.org/jira/browse/FLINK-6675
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3970: [FLINK-6675] Activate strict checkstyle for flink-annotat...

2017-05-23 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3970
  
+1 to merge after Travis gives the green light.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3968: [FLINK-6431] [metrics] Activate strict checkstyle in flin...

2017-05-23 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3968
  
+1 to merge after Travis gives the green light.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6431) Activate strict checkstyle for flink-metrics

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021326#comment-16021326
 ] 

ASF GitHub Bot commented on FLINK-6431:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3968
  
+1 to merge after Travis gives the green light.


> Activate strict checkstyle for flink-metrics
> 
>
> Key: FLINK-6431
> URL: https://issues.apache.org/jira/browse/FLINK-6431
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6654) missing maven dependency on "flink-shaded-hadoop2-uber" in flink-dist

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021323#comment-16021323
 ] 

ASF GitHub Bot commented on FLINK-6654:
---

Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/3960
  
you probably need to clean up your whole source tree with `mvn clean` and try 
from there - I'm not quite sure my initial command will clean up everything 
that is not in the dependencies of the flink-dist project


> missing maven dependency on "flink-shaded-hadoop2-uber" in flink-dist
> -
>
> Key: FLINK-6654
> URL: https://issues.apache.org/jira/browse/FLINK-6654
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
> Fix For: 1.3.0
>
>
> Since applying FLINK-6514, flink-dist includes 
> {{flink-shaded-hadoop2-uber-*.jar}} without declaring this dependency in its 
> {{pom.xml}}. This may lead to concurrency issues during builds, and it also 
> breaks building only the flink-dist module (with dependencies), as in
> {code}
> mvn clean install -pl flink-dist -am
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3960: [FLINK-6654][build] let 'flink-dist' properly depend on '...

2017-05-23 Thread NicoK
Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/3960
  
you probably need to clean up your whole source tree with `mvn clean` and try 
from there - I'm not quite sure my initial command will clean up everything 
that is not in the dependencies of the flink-dist project


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6431) Activate strict checkstyle for flink-metrics

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021318#comment-16021318
 ] 

ASF GitHub Bot commented on FLINK-6431:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3968
  
I've included the sources, changed the path, and added back the removed 
comment. Moving on to the other 2 checkstyle PRs...


> Activate strict checkstyle for flink-metrics
> 
>
> Key: FLINK-6431
> URL: https://issues.apache.org/jira/browse/FLINK-6431
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3968: [FLINK-6431] [metrics] Activate strict checkstyle in flin...

2017-05-23 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3968
  
I've included the sources, changed the path, and added back the removed 
comment. Moving on to the other 2 checkstyle PRs...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6431) Activate strict checkstyle for flink-metrics

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021291#comment-16021291
 ] 

ASF GitHub Bot commented on FLINK-6431:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3968#discussion_r118016840
  
--- Diff: flink-metrics/pom.xml ---
@@ -60,4 +60,42 @@ under the License.
 
 
 
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-checkstyle-plugin</artifactId>
+				<version>2.17</version>
+				<dependencies>
+					<dependency>
+						<groupId>com.puppycrawl.tools</groupId>
+						<artifactId>checkstyle</artifactId>
+						<version>6.19</version>
+					</dependency>
+				</dependencies>
+				<configuration>
+					<configLocation>../tools/maven/strict-checkstyle.xml</configLocation>
--- End diff --

huh, I'm surprised that the configuration in streaming-java actually works.


> Activate strict checkstyle for flink-metrics
> 
>
> Key: FLINK-6431
> URL: https://issues.apache.org/jira/browse/FLINK-6431
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3968: [FLINK-6431] [metrics] Activate strict checkstyle ...

2017-05-23 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3968#discussion_r118016840
  
--- Diff: flink-metrics/pom.xml ---
@@ -60,4 +60,42 @@ under the License.
 
 
 
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-checkstyle-plugin</artifactId>
+				<version>2.17</version>
+				<dependencies>
+					<dependency>
+						<groupId>com.puppycrawl.tools</groupId>
+						<artifactId>checkstyle</artifactId>
+						<version>6.19</version>
+					</dependency>
+				</dependencies>
+				<configuration>
+					<configLocation>../tools/maven/strict-checkstyle.xml</configLocation>
--- End diff --

huh, I'm surprised that the configuration in streaming-java actually works.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6654) missing maven dependency on "flink-shaded-hadoop2-uber" in flink-dist

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021287#comment-16021287
 ] 

ASF GitHub Bot commented on FLINK-6654:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3960
  
Not sure about the initial issue. For me the command worked (without your 
changes, on the 1.3 branch):

``` 
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10:12 min
[INFO] Finished at: 2017-05-23T16:50:31+02:00
[INFO] Final Memory: 131M/441M
[INFO] ------------------------------------------------------------------------
mvn clean install -pl flink-dist -am -DskipTests
```


> missing maven dependency on "flink-shaded-hadoop2-uber" in flink-dist
> -
>
> Key: FLINK-6654
> URL: https://issues.apache.org/jira/browse/FLINK-6654
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
> Fix For: 1.3.0
>
>
> Since applying FLINK-6514, flink-dist includes 
> {{flink-shaded-hadoop2-uber-*.jar}} without declaring this dependency in its 
> {{pom.xml}}. This may lead to concurrency issues during builds, and it also 
> breaks building only the flink-dist module (with dependencies), as in
> {code}
> mvn clean install -pl flink-dist -am
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3960: [FLINK-6654][build] let 'flink-dist' properly depend on '...

2017-05-23 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3960
  
Not sure about the initial issue. For me the command worked (without your 
changes, on the 1.3 branch):

``` 
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10:12 min
[INFO] Finished at: 2017-05-23T16:50:31+02:00
[INFO] Final Memory: 131M/441M
[INFO] ------------------------------------------------------------------------
mvn clean install -pl flink-dist -am -DskipTests
```


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6515) KafkaConsumer checkpointing fails because of ClassLoader issues

2017-05-23 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021275#comment-16021275
 ] 

Robert Metzger commented on FLINK-6515:
---

Are you sure that you've re-built your user code using the correct Flink version?
The Kafka connector code is usually located in the user jar, so you need to 
update both Flink and your user code.

> KafkaConsumer checkpointing fails because of ClassLoader issues
> ---
>
> Key: FLINK-6515
> URL: https://issues.apache.org/jira/browse/FLINK-6515
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0
>Reporter: Aljoscha Krettek
>Assignee: Stephan Ewen
>Priority: Blocker
> Fix For: 1.3.0, 1.4.0
>
>
> A job with Kafka and checkpointing enabled fails with:
> {code}
> java.lang.Exception: Error while triggering checkpoint 1 for Source: Custom 
> Source -> Map -> Sink: Unnamed (1/1)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1195)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Could not perform checkpoint 1 for operator 
> Source: Custom Source -> Map -> Sink: Unnamed (1/1).
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:526)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:112)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1184)
>   ... 5 more
> Caused by: java.lang.Exception: Could not complete snapshot 1 for operator 
> Source: Custom Source -> Map -> Sink: Unnamed (1/1).
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:407)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1157)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1089)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:653)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:589)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:520)
>   ... 7 more
> Caused by: java.lang.RuntimeException: Could not copy instance of 
> (KafkaTopicPartition{topic='test-input', partition=0},-1).
>   at 
> org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:54)
>   at 
> org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:32)
>   at 
> org.apache.flink.runtime.state.ArrayListSerializer.copy(ArrayListSerializer.java:71)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.(DefaultOperatorStateBackend.java:368)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.deepCopy(DefaultOperatorStateBackend.java:380)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.snapshot(DefaultOperatorStateBackend.java:191)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:392)
>   ... 12 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> 

[jira] [Resolved] (FLINK-6660) expand the streaming connectors overview page

2017-05-23 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai resolved FLINK-6660.

   Resolution: Fixed
Fix Version/s: 1.4.0
   1.3.0

Thanks for the contribution, David!
Resolved for master via 557540a51cf8a1d6fef1e2e80ad0db4c148b3302.
Resolved for release-1.3 via ce685dbdae011b6220934836339b0a0130929ba4.

> expand the streaming connectors overview page 
> --
>
> Key: FLINK-6660
> URL: https://issues.apache.org/jira/browse/FLINK-6660
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Streaming Connectors
>Affects Versions: 1.3.0, 1.4.0
>Reporter: David Anderson
>Assignee: David Anderson
> Fix For: 1.3.0, 1.4.0
>
>
> The overview page for streaming connectors is too lean -- it should provide 
> more context and also guide the reader toward related topics.
> Note that FLINK-6038 will add links to the Bahir connectors.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (FLINK-6492) Unclosed DataOutputViewStream in GenericArraySerializerConfigSnapshot#write()

2017-05-23 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai resolved FLINK-6492.

   Resolution: Fixed
Fix Version/s: 1.4.0
   1.3.1

Fixed for master with ded464b8bd60d8f19221c0f1589346684c11c78d.
Fixed for release-1.3 with 0ae98d3863fb49f67ea4afdf66790b74c1d64d3d.

> Unclosed DataOutputViewStream in GenericArraySerializerConfigSnapshot#write()
> -
>
> Key: FLINK-6492
> URL: https://issues.apache.org/jira/browse/FLINK-6492
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ted Yu
>Priority: Minor
> Fix For: 1.3.1, 1.4.0
>
>
> {code}
>InstantiationUtil.serializeObject(new DataOutputViewStream(out), 
> componentClass);
> {code}
> The DataOutputViewStream instance should be closed upon return.
> TupleSerializerConfigSnapshot has a similar issue.
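
A minimal sketch of the fix pattern (the try/finally shape is illustrative; the 
actual commit may structure it differently):

{code}
val stream = new DataOutputViewStream(out)
try {
  InstantiationUtil.serializeObject(stream, componentClass)
} finally {
  // Close the view stream even if serialization throws.
  stream.close()
}
{code}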



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6492) Unclosed DataOutputViewStream in GenericArraySerializerConfigSnapshot#write()

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021252#comment-16021252
 ] 

ASF GitHub Bot commented on FLINK-6492:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3898


> Unclosed DataOutputViewStream in GenericArraySerializerConfigSnapshot#write()
> -
>
> Key: FLINK-6492
> URL: https://issues.apache.org/jira/browse/FLINK-6492
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
>InstantiationUtil.serializeObject(new DataOutputViewStream(out), 
> componentClass);
> {code}
> The DataOutputViewStream instance should be closed upon return.
> TupleSerializerConfigSnapshot has a similar issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6660) expand the streaming connectors overview page

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021253#comment-16021253
 ] 

ASF GitHub Bot commented on FLINK-6660:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3964


> expand the streaming connectors overview page 
> --
>
> Key: FLINK-6660
> URL: https://issues.apache.org/jira/browse/FLINK-6660
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Streaming Connectors
>Affects Versions: 1.3.0, 1.4.0
>Reporter: David Anderson
>Assignee: David Anderson
>
> The overview page for streaming connectors is too lean -- it should provide 
> more context and also guide the reader toward related topics.
> Note that FLINK-6038 will add links to the Bahir connectors.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3898: [FLINK-6492] Fix unclosed DataOutputViewStream usa...

2017-05-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3898


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3964: [FLINK-6660][docs] expand the connectors overview ...

2017-05-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3964


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6537) Umbrella issue for fixes to incremental snapshots

2017-05-23 Thread Stefan Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021248#comment-16021248
 ] 

Stefan Richter commented on FLINK-6537:
---

So this is not strictly related to incremental checkpoints, but comes from a 
different problem. I know how to fix this easily for the release, and found that 
some things about this code could be nicer in general (we can refactor in 1.4). 
Tracking in FLINK-6685 and FLINK-6684.

> Umbrella issue for fixes to incremental snapshots
> -
>
> Key: FLINK-6537
> URL: https://issues.apache.org/jira/browse/FLINK-6537
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
> Fix For: 1.3.0
>
>
> This issue tracks ongoing fixes in the incremental checkpointing feature for 
> the 1.3 release.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6662) ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021206#comment-16021206
 ] 

ASF GitHub Bot commented on FLINK-6662:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/3972

[FLINK-6662] [errMsg] Improve error message if recovery from 
RetrievableStateHandles fails

When recovering state from a ZooKeeperStateHandleStore it can happen that 
the deserialization
fails, because one tries to recover state from an old Flink version which 
is not compatible.
In this case we should output a better error message such that the user can 
easily spot the
problem.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink improveErrorMessages

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3972.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3972


commit 31d099c4768f1ee8dfbecfd8eddc6f05842425e6
Author: Till Rohrmann 
Date:   2017-05-23T13:42:38Z

[FLINK-6662] [errMsg] Improve error message if recovery from 
RetrievableStateHandles fails

When recovering state from a ZooKeeperStateHandleStore it can happen that 
the deserialization
fails, because one tries to recover state from an old Flink version which 
is not compatible.
In this case we should output a better error message such that the user can 
easily spot the
problem.
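
A rough sketch of the pattern the PR describes (the method name, message text, 
and exception wrapping below are illustrative assumptions, not the PR's actual 
code):

{code}
import java.io.IOException

import org.apache.flink.util.InstantiationUtil

def deserializeStateHandle[T](bytes: Array[Byte], classLoader: ClassLoader): T =
  try {
    InstantiationUtil.deserializeObject[T](bytes, classLoader)
  } catch {
    case e: ClassNotFoundException =>
      // Hint that the state was probably written by an incompatible
      // (older) Flink version rather than being corrupt.
      throw new IOException(
        "Could not deserialize the state handle. This usually indicates " +
        "that the state was written by an incompatible Flink version.", e)
  }
{code}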




> ClassNotFoundException: o.a.f.r.j.t.JobSnapshottingSettings recovering job
> --
>
> Key: FLINK-6662
> URL: https://issues.apache.org/jira/browse/FLINK-6662
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, Mesos, State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Jared Stehler
>Assignee: Till Rohrmann
>
> Running flink mesos on 1.3-release branch, I'm seeing the following error on 
> appmaster startup:
> {noformat}
> 2017-05-22 15:32:45.946 [flink-akka.actor.default-dispatcher-17] WARN  
> o.a.flink.mesos.runtime.clusterframework.MesosJobManager  - Failed to recover 
> job 088027410f1a628e7dfc59dc23df3ded.
> java.lang.Exception: Failed to retrieve the submitted job graph from state 
> handle.
> at 
> org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore.recoverJobGraph(ZooKeeperSubmittedJobGraphStore.java:186)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(JobManager.scala:536)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1$$anonfun$apply$mcV$sp$1.apply(JobManager.scala:533)
> at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply$mcV$sp(JobManager.scala:533)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$1.apply(JobManager.scala:529)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.runtime.jobgraph.tasks.JobSnapshottingSettings
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> 

[GitHub] flink pull request #3972: [FLINK-6662] [errMsg] Improve error message if rec...

2017-05-23 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/3972

[FLINK-6662] [errMsg] Improve error message if recovery from 
RetrievableStateHandles fails

When recovering state from a ZooKeeperStateHandleStore it can happen that 
the deserialization
fails, because one tries to recover state from an old Flink version which 
is not compatible.
In this case we should output a better error message such that the user can 
easily spot the
problem.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink improveErrorMessages

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3972.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3972


commit 31d099c4768f1ee8dfbecfd8eddc6f05842425e6
Author: Till Rohrmann 
Date:   2017-05-23T13:42:38Z

[FLINK-6662] [errMsg] Improve error message if recovery from 
RetrievableStateHandles fails

When recovering state from a ZooKeeperStateHandleStore it can happen that 
the deserialization
fails, because one tries to recover state from an old Flink version which 
is not compatible.
In this case we should output a better error message such that the user can 
easily spot the
problem.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6660) expand the streaming connectors overview page

2017-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021198#comment-16021198
 ] 

ASF GitHub Bot commented on FLINK-6660:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/3964
  
"3rd party connectors" doesn't seem suitable. There are also some self 
hosted connectors, AFAIK.
Perhaps simply "Connectors in Apache Bahir"? Either way, I'll proceed to 
merge this. The naming could perhaps be refined when the Bahir connectors are 
added to the docs with https://issues.apache.org/jira/browse/FLINK-6038.


> expand the streaming connectors overview page 
> --
>
> Key: FLINK-6660
> URL: https://issues.apache.org/jira/browse/FLINK-6660
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Streaming Connectors
>Affects Versions: 1.3.0, 1.4.0
>Reporter: David Anderson
>Assignee: David Anderson
>
> The overview page for streaming connectors is too lean -- it should provide 
> more context and also guide the reader toward related topics.
> Note that FLINK-6038 will add links to the Bahir connectors.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3964: [FLINK-6660][docs] expand the connectors overview page

2017-05-23 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/3964
  
"3rd party connectors" doesn't seem suitable. There are also some self 
hosted connectors, AFAIK.
Perhaps simply "Connectors in Apache Bahir"? Either way, I'll proceed to 
merge this. The naming could perhaps be refined when the Bahir connectors are 
added to the docs with https://issues.apache.org/jira/browse/FLINK-6038.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

