[jira] [Commented] (FLINK-3857) Add reconnect attempt to Elasticsearch host

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321935#comment-15321935
 ] 

ASF GitHub Bot commented on FLINK-3857:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/1962#discussion_r66385681
  
--- Diff: 
flink-streaming-connectors/flink-connector-elasticsearch2/src/main/java/org/apache/flink/streaming/connectors/elasticsearch2/ElasticsearchSink.java
 ---
@@ -153,9 +165,90 @@ public ElasticsearchSink(Map 
userConfig, List
 */
@Override
public void open(Configuration configuration) {
+   connect();
+
+   params = ParameterTool.fromMap(userConfig);
+
+   if (params.has(CONFIG_KEY_CONNECTION_RETRIES)) {
+   this.connectionRetries = 
params.getInt(CONFIG_KEY_CONNECTION_RETRIES);
+   }
+
+   buildBulkProcessorIndexer(client);
+   }
+
+   @Override
+   public void invoke(T element) {
+   elasticsearchSinkFunction.process(element, getRuntimeContext(), 
requestIndexer);
+
+   if (hasFailure.get()) {
--- End diff --

Won't there be other causes of failure besides connection errors? Attempting 
to reconnect for every kind of failure doesn't seem right.
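One way to implement the distinction the reviewer is asking for might look like the sketch below: walk the cause chain and treat only connection-level exceptions as retriable, rethrowing everything else. This is a hypothetical helper, not code from the PR.

```java
import java.net.ConnectException;
import java.net.NoRouteToHostException;

public class FailureClassifier {

    /** Returns true only for failures where a reconnect attempt makes sense. */
    static boolean isConnectionFailure(Throwable t) {
        // Walk the cause chain: the connection error is often wrapped
        // by the client in a more generic exception.
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (cur instanceof ConnectException || cur instanceof NoRouteToHostException) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Wrapped connection error: retriable.
        System.out.println(isConnectionFailure(
                new RuntimeException(new ConnectException("refused")))); // → true
        // A malformed document, for example, should fail the sink instead.
        System.out.println(isConnectionFailure(
                new IllegalArgumentException("bad document")));          // → false
    }
}
```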


> Add reconnect attempt to Elasticsearch host
> ---
>
> Key: FLINK-3857
> URL: https://issues.apache.org/jira/browse/FLINK-3857
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Affects Versions: 1.1.0, 1.0.2
>Reporter: Fabian Hueske
>Assignee: Subhobrata Dey
>
> Currently, the connection to the Elasticsearch host is opened in 
> {{ElasticsearchSink.open()}}. In case the connection is lost (maybe due to a 
> changed DNS entry), the sink fails.
> I propose to catch the Exception for lost connections in the {{invoke()}} 
> method and try to re-open the connection for a configurable number of times 
> with a certain delay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #1962: [FLINK-3857][Streaming Connectors]Add reconnect at...

2016-06-08 Thread tzulitai
Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/1962#discussion_r66385681
  
--- Diff: 
flink-streaming-connectors/flink-connector-elasticsearch2/src/main/java/org/apache/flink/streaming/connectors/elasticsearch2/ElasticsearchSink.java
 ---
@@ -153,9 +165,90 @@ public ElasticsearchSink(Map 
userConfig, List
 */
@Override
public void open(Configuration configuration) {
+   connect();
+
+   params = ParameterTool.fromMap(userConfig);
+
+   if (params.has(CONFIG_KEY_CONNECTION_RETRIES)) {
+   this.connectionRetries = 
params.getInt(CONFIG_KEY_CONNECTION_RETRIES);
+   }
+
+   buildBulkProcessorIndexer(client);
+   }
+
+   @Override
+   public void invoke(T element) {
+   elasticsearchSinkFunction.process(element, getRuntimeContext(), 
requestIndexer);
+
+   if (hasFailure.get()) {
--- End diff --

Won't there be other causes of failure besides connection errors? Attempting 
to reconnect for every kind of failure doesn't seem right.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3857) Add reconnect attempt to Elasticsearch host

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321931#comment-15321931
 ] 

ASF GitHub Bot commented on FLINK-3857:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/1962#discussion_r66385603
  
--- Diff: 
flink-streaming-connectors/flink-connector-elasticsearch2/src/main/java/org/apache/flink/streaming/connectors/elasticsearch2/ElasticsearchSink.java
 ---
@@ -153,9 +165,90 @@ public ElasticsearchSink(Map 
userConfig, List
 */
@Override
public void open(Configuration configuration) {
+   connect();
+
+   params = ParameterTool.fromMap(userConfig);
+
+   if (params.has(CONFIG_KEY_CONNECTION_RETRIES)) {
+   this.connectionRetries = 
params.getInt(CONFIG_KEY_CONNECTION_RETRIES);
+   }
--- End diff --

Shouldn't a default value be set if none is specified by the user? Otherwise 
there will be a NullPointerException in invoke().
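The fix the reviewer suggests could look like this plain-Java sketch (illustrative names; in the actual connector, Flink's `ParameterTool.getInt(key, defaultValue)` overload serves the same purpose):

```java
import java.util.Map;

public class RetryConfig {

    static final String CONFIG_KEY_CONNECTION_RETRIES = "connection.retries";
    // Assumed default; the PR would have to pick its own value.
    static final int DEFAULT_CONNECTION_RETRIES = 3;

    /** Falls back to a default so invoke() never sees an unset value. */
    static int connectionRetries(Map<String, String> userConfig) {
        String value = userConfig.get(CONFIG_KEY_CONNECTION_RETRIES);
        return value == null ? DEFAULT_CONNECTION_RETRIES : Integer.parseInt(value);
    }
}
```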


> Add reconnect attempt to Elasticsearch host
> ---
>
> Key: FLINK-3857
> URL: https://issues.apache.org/jira/browse/FLINK-3857
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Affects Versions: 1.1.0, 1.0.2
>Reporter: Fabian Hueske
>Assignee: Subhobrata Dey
>
> Currently, the connection to the Elasticsearch host is opened in 
> {{ElasticsearchSink.open()}}. In case the connection is lost (maybe due to a 
> changed DNS entry), the sink fails.
> I propose to catch the Exception for lost connections in the {{invoke()}} 
> method and try to re-open the connection for a configurable number of times 
> with a certain delay.





[GitHub] flink pull request #1962: [FLINK-3857][Streaming Connectors]Add reconnect at...

2016-06-08 Thread tzulitai
Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/1962#discussion_r66385603
  
--- Diff: 
flink-streaming-connectors/flink-connector-elasticsearch2/src/main/java/org/apache/flink/streaming/connectors/elasticsearch2/ElasticsearchSink.java
 ---
@@ -153,9 +165,90 @@ public ElasticsearchSink(Map 
userConfig, List
 */
@Override
public void open(Configuration configuration) {
+   connect();
+
+   params = ParameterTool.fromMap(userConfig);
+
+   if (params.has(CONFIG_KEY_CONNECTION_RETRIES)) {
+   this.connectionRetries = 
params.getInt(CONFIG_KEY_CONNECTION_RETRIES);
+   }
--- End diff --

Shouldn't a default value be set if none is specified by the user? Otherwise 
there will be a NullPointerException in invoke().


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3857) Add reconnect attempt to Elasticsearch host

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321852#comment-15321852
 ] 

ASF GitHub Bot commented on FLINK-3857:
---

Github user sbcd90 commented on the issue:

https://github.com/apache/flink/pull/1962
  
Hello @StephanEwen ,

I have removed the timer and am now doing the retry logic directly. The backoff 
is 3s. Please have a look.
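The retry logic with a fixed backoff described here can be sketched as a generic helper (this is not the PR's actual code; names and structure are illustrative):

```java
import java.util.function.Supplier;

public class Reconnect {

    /**
     * Runs the action up to maxAttempts times, sleeping a fixed backoff
     * between attempts (3s in the PR); rethrows the last failure if all
     * attempts are exhausted.
     */
    static <T> T retry(int maxAttempts, long backoffMillis, Supplier<T> action) {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(backoffMillis);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        break; // give up if interrupted while backing off
                    }
                }
            }
        }
        throw last;
    }
}
```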


> Add reconnect attempt to Elasticsearch host
> ---
>
> Key: FLINK-3857
> URL: https://issues.apache.org/jira/browse/FLINK-3857
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Affects Versions: 1.1.0, 1.0.2
>Reporter: Fabian Hueske
>Assignee: Subhobrata Dey
>
> Currently, the connection to the Elasticsearch host is opened in 
> {{ElasticsearchSink.open()}}. In case the connection is lost (maybe due to a 
> changed DNS entry), the sink fails.
> I propose to catch the Exception for lost connections in the {{invoke()}} 
> method and try to re-open the connection for a configurable number of times 
> with a certain delay.





[jira] [Commented] (FLINK-3725) Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)

2016-06-08 Thread Ankit Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321682#comment-15321682
 ] 

Ankit Chaudhary commented on FLINK-3725:


I did have this issue with my setup, but after including the Guava jar in the 
Flink classpath, the issue was gone. It looks like the JobManager requires Guava 
for this class: org.apache.flink.shaded.com.google.common.collect.Iterators

This might still be a valid bug, since it looks like the plan is to remove the 
Guava dependency from Flink (as mentioned in FLINK-3821 and several other 
related tickets).

> Exception in thread "main" scala.MatchError: ... (of class scala.Tuple4)
> 
>
> Key: FLINK-3725
> URL: https://issues.apache.org/jira/browse/FLINK-3725
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.0.1
> Environment: \# java -version
> openjdk version "1.8.0_77"
> OpenJDK Runtime Environment (build 1.8.0_77-b03)
> OpenJDK 64-Bit Server VM (build 25.77-b03, mixed mode)
>Reporter: Maxim Dobryakov
>
> When I start standalone cluster with `bin/jobmanager.sh start cluster` 
> command all works fine but then I am using the same command for HA cluster 
> the JobManager raise error and stop:
> *log/flink--jobmanager-0-example-app-1.example.local.out*
> {code}
> Exception in thread "main" scala.MatchError: ({blob.server.port=6130, 
> state.backend.fs.checkpointdir=s3://s3.example.com/example_staging_flink/checkpoints,
>  blob.storage.directory=/flink/data/blob_storage, jobmanager.heap.mb=1024, 
> fs.s3.impl=org.apache.hadoop.fs.s3.S3FileSystem, 
> restart-strategy.fixed-delay.attempts=2, recovery.mode=zookeeper, 
> jobmanager.web.port=8081, taskmanager.memory.preallocate=false, 
> jobmanager.rpc.port=0, flink.base.dir.path=/flink/conf/.., 
> recovery.zookeeper.storageDir=s3://s3.example.com/example_staging_flink/recovery,
>  taskmanager.tmp.dirs=/flink/data/task_manager, 
> restart-strategy.fixed-delay.delay=60s, taskmanager.data.port=6121, 
> recovery.zookeeper.path.root=/example_staging/flink, parallelism.default=4, 
> taskmanager.numberOfTaskSlots=4, 
> recovery.zookeeper.quorum=zookeeper-1.example.local:2181,zookeeper-2.example.local:2181,zookeeper-3.example.local:2181,
>  fs.hdfs.hadoopconf=/flink/conf, state.backend=filesystem, 
> restart-strategy=none, recovery.jobmanager.port=6123, 
> taskmanager.heap.mb=2048},CLUSTER,null,org.apache.flink.shaded.com.google.common.collect.Iterators$5@3bf7ca37)
>  (of class scala.Tuple4)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$.main(JobManager.scala:1605)
> at org.apache.flink.runtime.jobmanager.JobManager.main(JobManager.scala)
> {code}
> *log/flink--jobmanager-0-example-app-1.example.local.log*
> {code}
> 2016-04-11 10:58:31,680 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of 
> successful kerberos logins and latency (milliseconds)])
> 2016-04-11 10:58:31,696 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of failed 
> kerberos logins and latency (milliseconds)])
> 2016-04-11 10:58:31,697 DEBUG 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[GetGroups])
> 2016-04-11 10:58:31,699 DEBUG 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl - UgiMetrics, 
> User and group related metrics
> 2016-04-11 10:58:31,951 DEBUG org.apache.hadoop.util.Shell
>   - Failed to detect a valid hadoop home directory
> java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
> at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:303)
> at org.apache.hadoop.util.Shell.(Shell.java:328)
> at org.apache.hadoop.util.StringUtils.(StringUtils.java:80)
> at 
> org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
> at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:272)
> at 
> 

[jira] [Created] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka 0.10.0.0

2016-06-08 Thread Elias Levy (JIRA)
Elias Levy created FLINK-4035:
-

 Summary: Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
 Key: FLINK-4035
 URL: https://issues.apache.org/jira/browse/FLINK-4035
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.0.3
Reporter: Elias Levy
Priority: Minor


Kafka 0.10.0.0 introduced protocol changes related to the producer. Published 
messages now include timestamps, and compressed messages now include relative 
offsets. As it stands, brokers must decompress producer-compressed messages, 
assign offsets to them, and recompress them, which is wasteful and makes it less 
likely that compression will be used at all.






[jira] [Commented] (FLINK-1979) Implement Loss Functions

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321354#comment-15321354
 ] 

ASF GitHub Bot commented on FLINK-1979:
---

Github user skavulya commented on the issue:

https://github.com/apache/flink/pull/1985
  
@chiwanpark The PR is ready. Let me know if I need to do anything else. 


> Implement Loss Functions
> 
>
> Key: FLINK-1979
> URL: https://issues.apache.org/jira/browse/FLINK-1979
> Project: Flink
>  Issue Type: Improvement
>  Components: Machine Learning Library
>Reporter: Johannes Günther
>Assignee: Johannes Günther
>Priority: Minor
>  Labels: ML
>
> For convex optimization problems, optimizer methods like SGD rely on a 
> pluggable implementation of a loss function and its first derivative.
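The issue above asks for pluggable loss functions. The general shape such an abstraction takes can be sketched in Java as follows, with logistic loss as an example. The names are illustrative and are not the FlinkML API:

```java
// A pluggable loss: SGD needs the loss value and its first derivative
// with respect to the prediction.
public interface LossFunction {

    double loss(double prediction, double label);

    double derivative(double prediction, double label);

    // Logistic loss: L(z, y) = ln(1 + e^(-y*z)), dL/dz = -y / (1 + e^(y*z))
    LossFunction LOGISTIC = new LossFunction() {
        public double loss(double z, double y) {
            return Math.log(1 + Math.exp(-y * z));
        }
        public double derivative(double z, double y) {
            return -y / (1 + Math.exp(y * z));
        }
    };
}
```

An optimizer then only ever calls `loss` and `derivative`, so swapping in hinge or squared loss requires no changes to the SGD code itself.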





[GitHub] flink issue #1985: [FLINK-1979] Add logistic loss, hinge loss and regulariza...

2016-06-08 Thread skavulya
Github user skavulya commented on the issue:

https://github.com/apache/flink/pull/1985
  
@chiwanpark The PR is ready. Let me know if I need to do anything else. 




[jira] [Commented] (FLINK-3530) Kafka09ITCase.testBigRecordJob fails on Travis

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321245#comment-15321245
 ] 

ASF GitHub Bot commented on FLINK-3530:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2080
  
These are the relevant logs of the task:
```
20:11:35,397 INFO  org.apache.flink.runtime.taskmanager.Task
 - Attempting to cancel task Source: Custom Source -> Map -> Map (5/8)
20:11:35,398 INFO  org.apache.flink.runtime.taskmanager.Task
 - Source: Custom Source -> Map -> Map (5/8) switched to CANCELING
20:11:35,398 INFO  org.apache.flink.runtime.taskmanager.Task
 - Triggering cancellation of task code Source: Custom Source -> Map -> Map 
(5/8) (217f8fe570f1c82eb4ec8191e1a73291).
20:12:05,400 WARN  org.apache.flink.runtime.taskmanager.Task
 - Task 'Source: Custom Source -> Map -> Map (5/8)' did not react to 
cancelling signal, but is stuck in method:
 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:235)
org.apache.flink.runtime.taskmanager.Task.run(Task.java:587)
java.lang.Thread.run(Thread.java:745)

20:12:05,510 INFO  org.apache.flink.runtime.taskmanager.Task
 - Source: Custom Source -> Map -> Map (5/8) switched to CANCELED
```

And this is the code where it's waiting: 
https://github.com/apache/flink/blob/e7586c3b2d995be164100919d7c04db003a71a90/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java#L235

I don't know exactly why the line numbers don't match (I would expect the 
code to block at the synchronized block). I've also checked the lines against 
the exact commit that triggered the error.

I was not able to reproduce this issue locally. I suspect that somebody is 
not releasing the lock...
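The symptom can be demonstrated in isolation: a thread waiting to enter a synchronized block is in state BLOCKED and does not react to `Thread.interrupt()`, which is why a task stuck on a lock cannot respond to the cancelling signal. The class below is purely illustrative and unrelated to Flink's actual classes.

```java
public class StuckOnLock {

    static final Object LOCK = new Object();

    /** Interrupts a thread that is blocked on LOCK and reports its state. */
    static Thread.State interruptWhileBlocked() throws InterruptedException {
        synchronized (LOCK) { // we hold the lock for the whole method
            Thread t = new Thread(() -> {
                synchronized (LOCK) {
                    // never reached while the caller holds the lock
                }
            });
            t.start();
            Thread.sleep(200); // let t reach the synchronized entry
            t.interrupt();     // no effect: BLOCKED threads ignore interrupts
            Thread.sleep(200);
            return t.getState(); // still BLOCKED despite the interrupt
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(interruptWhileBlocked()); // → BLOCKED
    }
}
```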


> Kafka09ITCase.testBigRecordJob fails on Travis
> --
>
> Key: FLINK-3530
> URL: https://issues.apache.org/jira/browse/FLINK-3530
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.0.0
>Reporter: Till Rohrmann
>Assignee: Robert Metzger
>  Labels: test-stability
>
> The test case {{Kafka09ITCase.testBigRecordJob}} failed on Travis.
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/112049279/log.txt





[GitHub] flink issue #2080: [FLINK-3530] Fix Kafka08 instability: Avoid restarts from...

2016-06-08 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2080
  
These are the relevant logs of the task:
```
20:11:35,397 INFO  org.apache.flink.runtime.taskmanager.Task
 - Attempting to cancel task Source: Custom Source -> Map -> Map (5/8)
20:11:35,398 INFO  org.apache.flink.runtime.taskmanager.Task
 - Source: Custom Source -> Map -> Map (5/8) switched to CANCELING
20:11:35,398 INFO  org.apache.flink.runtime.taskmanager.Task
 - Triggering cancellation of task code Source: Custom Source -> Map -> Map 
(5/8) (217f8fe570f1c82eb4ec8191e1a73291).
20:12:05,400 WARN  org.apache.flink.runtime.taskmanager.Task
 - Task 'Source: Custom Source -> Map -> Map (5/8)' did not react to 
cancelling signal, but is stuck in method:
 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:235)
org.apache.flink.runtime.taskmanager.Task.run(Task.java:587)
java.lang.Thread.run(Thread.java:745)

20:12:05,510 INFO  org.apache.flink.runtime.taskmanager.Task
 - Source: Custom Source -> Map -> Map (5/8) switched to CANCELED
```

And this is the code where it's waiting: 
https://github.com/apache/flink/blob/e7586c3b2d995be164100919d7c04db003a71a90/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java#L235

I don't know exactly why the line numbers don't match (I would expect the 
code to block at the synchronized block). I've also checked the lines against 
the exact commit that triggered the error.

I was not able to reproduce this issue locally. I suspect that somebody is 
not releasing the lock...




[jira] [Resolved] (FLINK-3958) Access to MetricRegistry doesn't have proper synchronization in some classes

2016-06-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved FLINK-3958.
---
Resolution: Not A Problem

> Access to MetricRegistry doesn't have proper synchronization in some classes
> 
>
> Key: FLINK-3958
> URL: https://issues.apache.org/jira/browse/FLINK-3958
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>
> In GraphiteReporter#getReporter():
> {code}  com.codahale.metrics.graphite.GraphiteReporter.Builder builder =
>   com.codahale.metrics.graphite.GraphiteReporter.forRegistry(registry);
> {code}
> Access to registry should be protected by lock on 
> ScheduledDropwizardReporter.this
> Similar issue exists in GangliaReporter#getReporter()





[jira] [Commented] (FLINK-3958) Access to MetricRegistry doesn't have proper synchronization in some classes

2016-06-08 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321232#comment-15321232
 ] 

Stephan Ewen commented on FLINK-3958:
-

I agree with Chesnay.
Since the reporter is not active until it is properly opened, I think this issue 
is invalid.

> Access to MetricRegistry doesn't have proper synchronization in some classes
> 
>
> Key: FLINK-3958
> URL: https://issues.apache.org/jira/browse/FLINK-3958
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>
> In GraphiteReporter#getReporter():
> {code}  com.codahale.metrics.graphite.GraphiteReporter.Builder builder =
>   com.codahale.metrics.graphite.GraphiteReporter.forRegistry(registry);
> {code}
> Access to registry should be protected by lock on 
> ScheduledDropwizardReporter.this
> Similar issue exists in GangliaReporter#getReporter()





[jira] [Commented] (FLINK-3977) Subclasses of InternalWindowFunction must support OutputTypeConfigurable

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321226#comment-15321226
 ] 

ASF GitHub Bot commented on FLINK-3977:
---

GitHub user rvdwenden opened a pull request:

https://github.com/apache/flink/pull/2086

[FLINK-3977] initialize FoldApplyWindowFunction properly





You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rvdwenden/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2086.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2086


commit b4b42f1584cbc909c85fbd6ab47aa4a7166f9043
Author: rvdwenden 
Date:   2016-06-08T18:53:51Z

[FLINK-3977] initialize FoldApplyWindowFunction properly




> Subclasses of InternalWindowFunction must support OutputTypeConfigurable
> 
>
> Key: FLINK-3977
> URL: https://issues.apache.org/jira/browse/FLINK-3977
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Priority: Critical
>
> Right now, if they wrap functions and a wrapped function implements 
> {{OutputTypeConfigurable}}, {{setOutputType}} is never called. This manifests 
> itself, for example, in a FoldFunction on a window with an evictor not working.





[GitHub] flink pull request #2086: [FLINK-3977] initialize FoldApplyWindowFunction pr...

2016-06-08 Thread rvdwenden
GitHub user rvdwenden opened a pull request:

https://github.com/apache/flink/pull/2086

[FLINK-3977] initialize FoldApplyWindowFunction properly





You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rvdwenden/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2086.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2086


commit b4b42f1584cbc909c85fbd6ab47aa4a7166f9043
Author: rvdwenden 
Date:   2016-06-08T18:53:51Z

[FLINK-3977] initialize FoldApplyWindowFunction properly






[jira] [Created] (FLINK-4034) Dependency convergence on com.101tec:zkclient and com.esotericsoftware.kryo:kryo

2016-06-08 Thread Vladislav Pernin (JIRA)
Vladislav Pernin created FLINK-4034:
---

 Summary: Dependency convergence on com.101tec:zkclient and 
com.esotericsoftware.kryo:kryo
 Key: FLINK-4034
 URL: https://issues.apache.org/jira/browse/FLINK-4034
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Vladislav Pernin


If dependency convergence is enabled and enforced in Maven, projects using 
Flink do not compile.

Example :

{code}
Dependency convergence error for com.esotericsoftware.kryo:kryo:2.24.0 paths to 
dependency are:
+-groupidXXX:artifactidXXX:versionXXX
  +-org.apache.flink:flink-java:1.0.3
+-org.apache.flink:flink-core:1.0.3
  +-com.esotericsoftware.kryo:kryo:2.24.0
and
+-groupidXXX:artifactidXXX:versionXXX
  +-org.apache.flink:flink-streaming-java_2.11:1.0.3
+-org.apache.flink:flink-runtime_2.11:1.0.3
  +-com.twitter:chill_2.11:0.7.4
+-com.twitter:chill-java:0.7.4
  +-com.esotericsoftware.kryo:kryo:2.21
and
+-groupidXXX:artifactidXXX:versionXXX
  +-org.apache.flink:flink-streaming-java_2.11:1.0.3
+-org.apache.flink:flink-runtime_2.11:1.0.3
  +-com.twitter:chill_2.11:0.7.4
+-com.esotericsoftware.kryo:kryo:2.21
{code}  

{code}
Dependency convergence error for com.101tec:zkclient:0.7 paths to dependency 
are:
+-groupidXXX:artifactidXXX:versionXXX
  +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
+-org.apache.flink:flink-connector-kafka-base_2.11:1.0.3
  +-com.101tec:zkclient:0.7
and
+-groupidXXX:artifactidXXX:versionXXX
  +-org.apache.flink:flink-connector-kafka-0.8_2.11:1.0.3
+-org.apache.kafka:kafka_2.11:0.8.2.2
  +-com.101tec:zkclient:0.3
{code}

I cannot submit a pull request without knowing which specific versions you 
rely on.
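One common local workaround (a change in the consuming project, not a fix for Flink itself) is to pin the diverging artifacts in `dependencyManagement`. The versions below are taken from the convergence report above and are only a guess at what the connectors expect:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin the versions that diverge in the report above -->
    <dependency>
      <groupId>com.esotericsoftware.kryo</groupId>
      <artifactId>kryo</artifactId>
      <version>2.24.0</version>
    </dependency>
    <dependency>
      <groupId>com.101tec</groupId>
      <artifactId>zkclient</artifactId>
      <version>0.7</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```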





[jira] [Commented] (FLINK-3937) Make flink cli list, savepoint, cancel and stop work on Flink-on-YARN clusters

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321130#comment-15321130
 ] 

ASF GitHub Bot commented on FLINK-3937:
---

GitHub user mxm opened a pull request:

https://github.com/apache/flink/pull/2085

[FLINK-3937] programmatic resuming of clusters

These changes are based on #1978 and #2034. More specifically, they port 
resuming of running Yarn clusters from #2034 to the refactoring of #1978. The 
abstraction in place enables us to plug in other cluster frameworks in the 
future. 

- integrates with and extends the refactoring of FLINK-3667
- enables to resume from Yarn properties or Yarn application id
- introduces additional StandaloneClusterDescriptor
- introduces DefaultCLI to get rid of standalone mode switches in 
CliFrontend
- various fixes and improvements

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mxm/flink FLINK-3937

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2085.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2085


commit b144e64758a95bcac33bd0ac91ab7eefaf4040e9
Author: Maximilian Michels 
Date:   2016-04-22T17:52:54Z

[FLINK-3667] refactor client communication classes

- ClusterDescriptor: base interface for cluster deployment descriptors
- ClusterDescriptor: YarnClusterDescriptor

- ClusterClient: base class for ClusterClients, handles lifecycle of cluster
- ClusterClient: shares configuration with the implementations
- ClusterClient: StandaloneClusterClient, YarnClusterClient
- ClusterClient: remove run methods and enable detached mode via flag

- CliFrontend: remove all Yarn specific logic
- CliFrontend: remove all cluster setup logic

- CustomCommandLine: interface for other cluster implementations
- Customcommandline: enables creation of new cluster or resuming from 
existing

- Yarn: move Yarn classes and functionality to the yarn module (yarn
  properties, yarn interfaces)
- Yarn: improve reliability of cluster startup
- Yarn Tests: only disable parallel execution of ITCases

commit 73524c89854f04ac41f0c288d9ebf8ef5efe628b
Author: Sebastian Klemke 
Date:   2016-05-25T12:28:59Z

[FLINK-3937] implement -yid option to Flink CLI

- enables to use list, savepoint, cancel and stop subcommands
- adapt FlinkYarnSessionCli to also accept YARN application Id to attach to
- update documentation

commit 1db8c97c39c2bf071db018c1ca505409c847a30b
Author: Maximilian Michels 
Date:   2016-06-01T10:45:52Z

[FLINK-3863] Yarn Cluster shutdown may fail if leader changed recently

commit 1a154fb12474a8630cce7e764d72398513055887
Author: Maximilian Michels 
Date:   2016-06-02T14:28:51Z

[FLINK-3937] programmatic resuming of clusters

- integrates with and extends the refactoring of FLINK-3667
- enables to resume from Yarn properties or Yarn application id
- introduces additional StandaloneClusterDescriptor
- introduces DefaultCLI to get rid of standalone mode switches in 
CliFrontend
- various fixes and improvements




> Make flink cli list, savepoint, cancel and stop work on Flink-on-YARN clusters
> --
>
> Key: FLINK-3937
> URL: https://issues.apache.org/jira/browse/FLINK-3937
> Project: Flink
>  Issue Type: Improvement
>Reporter: Sebastian Klemke
>Assignee: Maximilian Michels
>Priority: Trivial
> Attachments: improve_flink_cli_yarn_integration.patch
>
>
> Currently, flink cli can't figure out JobManager RPC location for 
> Flink-on-YARN clusters. Therefore, list, savepoint, cancel and stop 
> subcommands are hard to invoke if you only know the YARN application ID. As 
> an improvement, I suggest adding a -yid  option to the 
> mentioned subcommands that can be used together with -m yarn-cluster. Flink 
> cli would then retrieve JobManager RPC location from YARN ResourceManager.





[GitHub] flink pull request #2085: [FLINK-3937] programmatic resuming of clusters

2016-06-08 Thread mxm
GitHub user mxm opened a pull request:

https://github.com/apache/flink/pull/2085

[FLINK-3937] programmatic resuming of clusters

These changes are based on #1978 and #2034. More specifically, they port 
resuming of running Yarn clusters from #2034 to the refactoring of #1978. The 
abstraction in place enables us to plug in other cluster frameworks in the 
future. 

- integrates with and extends the refactoring of FLINK-3667
- enables to resume from Yarn properties or Yarn application id
- introduces additional StandaloneClusterDescriptor
- introduces DefaultCLI to get rid of standalone mode switches in 
CliFrontend
- various fixes and improvements

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mxm/flink FLINK-3937

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2085.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2085


commit b144e64758a95bcac33bd0ac91ab7eefaf4040e9
Author: Maximilian Michels 
Date:   2016-04-22T17:52:54Z

[FLINK-3667] refactor client communication classes

- ClusterDescriptor: base interface for cluster deployment descriptors
- ClusterDescriptor: YarnClusterDescriptor

- ClusterClient: base class for ClusterClients, handles lifecycle of cluster
- ClusterClient: shares configuration with the implementations
- ClusterClient: StandaloneClusterClient, YarnClusterClient
- ClusterClient: remove run methods and enable detached mode via flag

- CliFrontend: remove all Yarn specific logic
- CliFrontend: remove all cluster setup logic

- CustomCommandLine: interface for other cluster implementations
- Customcommandline: enables creation of new cluster or resuming from 
existing

- Yarn: move Yarn classes and functionality to the yarn module (yarn
  properties, yarn interfaces)
- Yarn: improve reliability of cluster startup
- Yarn Tests: only disable parallel execution of ITCases

commit 73524c89854f04ac41f0c288d9ebf8ef5efe628b
Author: Sebastian Klemke 
Date:   2016-05-25T12:28:59Z

[FLINK-3937] implement -yid option to Flink CLI

- enables use of the list, savepoint, cancel and stop subcommands
- adapt FlinkYarnSessionCli to also accept YARN application Id to attach to
- update documentation

commit 1db8c97c39c2bf071db018c1ca505409c847a30b
Author: Maximilian Michels 
Date:   2016-06-01T10:45:52Z

[FLINK-3863] Yarn Cluster shutdown may fail if leader changed recently

commit 1a154fb12474a8630cce7e764d72398513055887
Author: Maximilian Michels 
Date:   2016-06-02T14:28:51Z

[FLINK-3937] programmatic resuming of clusters

- integrates with and extends the refactoring of FLINK-3667
- enables resuming from Yarn properties or a Yarn application id
- introduces additional StandaloneClusterDescriptor
- introduces DefaultCLI to get rid of standalone mode switches in 
CliFrontend
- various fixes and improvements




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3958) Access to MetricRegistry doesn't have proper synchronization in some classes

2016-06-08 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321053#comment-15321053
 ] 

Chesnay Schepler commented on FLINK-3958:
-

Why? No other method will be called on the reporter until getReporter() has 
finished.

> Access to MetricRegistry doesn't have proper synchronization in some classes
> 
>
> Key: FLINK-3958
> URL: https://issues.apache.org/jira/browse/FLINK-3958
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>
> In GraphiteReporter#getReporter():
> {code}  com.codahale.metrics.graphite.GraphiteReporter.Builder builder =
>   com.codahale.metrics.graphite.GraphiteReporter.forRegistry(registry);
> {code}
> Access to registry should be protected by lock on 
> ScheduledDropwizardReporter.this
> Similar issue exists in GangliaReporter#getReporter()
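The locking pattern the report asks for can be sketched in plain Java (the class and method names below are hypothetical, not Flink's actual reporter classes): any read that hands the shared registry to a builder must hold the same monitor that guards writes to it.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the suggested fix: getReporter()-style reads snapshot
// the shared registry under the same lock that register() uses to mutate it.
class ScheduledReporterSketch {
    private final List<String> registry = new ArrayList<>();

    void register(String metricName) {
        synchronized (this) {
            registry.add(metricName);
        }
    }

    List<String> getReporterView() {
        // Copy under the lock so a concurrent register() cannot interleave
        // with a builder consuming the registry.
        synchronized (this) {
            return new ArrayList<>(registry);
        }
    }
}
```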



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4033) Missing Scala example snippets for the Kinesis Connector documentation

2016-06-08 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-4033:
--

 Summary: Missing Scala example snippets for the Kinesis Connector 
documentation
 Key: FLINK-4033
 URL: https://issues.apache.org/jira/browse/FLINK-4033
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Kinesis Connector, Streaming Connectors
Reporter: Tzu-Li (Gordon) Tai
Priority: Minor
 Fix For: 1.1.0


The documentation for the Kinesis connector is missing Scala version of the 
example snippets.





[GitHub] flink pull request #2082: [FLINK-4031] include sources in Maven snapshot dep...

2016-06-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2082




[jira] [Closed] (FLINK-4031) Nightly Jenkins job doesn't deploy sources

2016-06-08 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels closed FLINK-4031.
-
Resolution: Fixed

Fixed via fce64e193e32c9f639755f5b57222e6d7e89f150

> Nightly Jenkins job doesn't deploy sources
> --
>
> Key: FLINK-4031
> URL: https://issues.apache.org/jira/browse/FLINK-4031
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: 1.1.0
>
>
> We need to adjust the {{deploy_to_maven.sh}} script to enable deployment of 
> the sources.





[jira] [Commented] (FLINK-4031) Nightly Jenkins job doesn't deploy sources

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320878#comment-15320878
 ] 

ASF GitHub Bot commented on FLINK-4031:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2082


> Nightly Jenkins job doesn't deploy sources
> --
>
> Key: FLINK-4031
> URL: https://issues.apache.org/jira/browse/FLINK-4031
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: 1.1.0
>
>
> We need to adjust the {{deploy_to_maven.sh}} script to enable deployment of 
> the sources.





[jira] [Created] (FLINK-4032) Replace all usage of Guava Preconditions

2016-06-08 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-4032:
---

 Summary: Replace all usage of Guava Preconditions
 Key: FLINK-4032
 URL: https://issues.apache.org/jira/browse/FLINK-4032
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.1.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Trivial
 Fix For: 1.1.0








[jira] [Commented] (FLINK-4032) Replace all usage of Guava Preconditions

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320716#comment-15320716
 ] 

ASF GitHub Bot commented on FLINK-4032:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/2084

[FLINK-4032] Replace all usages Guava preconditions

This PR replaces every usage of the Guava Preconditions in Flink with our 
own Preconditions class.

In addition, 
- the guava dependency was completely removed from the RabbitMQ connector
- a checkstyle rule was added to prevent further use of Guava preconditions
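The mechanical nature of the replacement can be illustrated with a minimal standalone sketch of the Preconditions pattern (Flink's real class is org.apache.flink.util.Preconditions; this sketch only mirrors the call shape and is not that implementation):

```java
// Standalone sketch of the Preconditions pattern; call sites look the same
// whether the class comes from Guava or from Flink's own util package.
final class PreconditionsSketch {
    private PreconditionsSketch() {}

    static <T> T checkNotNull(T reference, String message) {
        if (reference == null) {
            throw new NullPointerException(message);
        }
        return reference;
    }

    static void checkArgument(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }
}
```

Because the method names and signatures match, the migration is typically just an import swap, which is what makes a checkstyle ban on the Guava import enforceable.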

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink guava

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2084.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2084


commit 7bc5099e0fa7731b5fd8ab7d4a32e29c60468bc6
Author: zentol 
Date:   2016-06-08T14:01:19Z

Remove guava dependency from flink-...-rabbitmq

commit 3f30f1f6bdd98b27ff815fbc56d8c9704e1091a6
Author: zentol 
Date:   2016-06-08T14:01:49Z

Replace Preconditions usage in flink-..-elasticsearch2

commit 8e90a16ef75ed4adcd00261b326aa10f04f5ddac
Author: zentol 
Date:   2016-06-08T14:25:26Z

Replace Preconditions usage in flink-table

commit cf61152d703abec9e2a47ecc102d2b148c172add
Author: zentol 
Date:   2016-06-08T14:25:34Z

Replace Preconditions usage in flink-optimizer

commit b6e90150d3dda1b2cd2822e3031a604798f6dcaf
Author: zentol 
Date:   2016-06-08T14:25:45Z

Replace Preconditions usage in flink-runtime-web

commit 910bf63778ba0a0b1b2ec183c66c524c3dd53ffc
Author: zentol 
Date:   2016-06-08T14:25:54Z

Replace Preconditions usage in flink-scala

commit 9012b1dae48ef9aed50e6e2d9f8bd9c59c8abd80
Author: zentol 
Date:   2016-06-08T14:25:58Z

Replace Preconditions usage in flink-yarn

commit 393da586a8ebcbca60d60b958aa21040ae5197d8
Author: zentol 
Date:   2016-06-08T14:26:02Z

Replace Preconditions usage in flink-tests

commit 90723bf1defcd7baf72285b81c4c5732c8a25624
Author: zentol 
Date:   2016-06-08T14:26:13Z

Replace Preconditions usage in flink-streaming-java

commit 5b27a648d363c1130fbff829c00a298b728d0ae7
Author: zentol 
Date:   2016-06-08T14:26:25Z

Replace Preconditions usage in flink-runtime

commit c5ac8b21591e08ce74865d04baeffc344ad7867c
Author: zentol 
Date:   2016-06-08T14:32:20Z

checkstyle rule




> Replace all usage of Guava Preconditions
> 
>
> Key: FLINK-4032
> URL: https://issues.apache.org/jira/browse/FLINK-4032
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.1.0
>
>






[GitHub] flink pull request #2084: [FLINK-4032] Replace all usages Guava precondition...

2016-06-08 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/2084

[FLINK-4032] Replace all usages Guava preconditions

This PR replaces every usage of the Guava Preconditions in Flink with our 
own Preconditions class.

In addition, 
- the guava dependency was completely removed from the RabbitMQ connector
- a checkstyle rule was added to prevent further use of Guava preconditions

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink guava

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2084.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2084


commit 7bc5099e0fa7731b5fd8ab7d4a32e29c60468bc6
Author: zentol 
Date:   2016-06-08T14:01:19Z

Remove guava dependency from flink-...-rabbitmq

commit 3f30f1f6bdd98b27ff815fbc56d8c9704e1091a6
Author: zentol 
Date:   2016-06-08T14:01:49Z

Replace Preconditions usage in flink-..-elasticsearch2

commit 8e90a16ef75ed4adcd00261b326aa10f04f5ddac
Author: zentol 
Date:   2016-06-08T14:25:26Z

Replace Preconditions usage in flink-table

commit cf61152d703abec9e2a47ecc102d2b148c172add
Author: zentol 
Date:   2016-06-08T14:25:34Z

Replace Preconditions usage in flink-optimizer

commit b6e90150d3dda1b2cd2822e3031a604798f6dcaf
Author: zentol 
Date:   2016-06-08T14:25:45Z

Replace Preconditions usage in flink-runtime-web

commit 910bf63778ba0a0b1b2ec183c66c524c3dd53ffc
Author: zentol 
Date:   2016-06-08T14:25:54Z

Replace Preconditions usage in flink-scala

commit 9012b1dae48ef9aed50e6e2d9f8bd9c59c8abd80
Author: zentol 
Date:   2016-06-08T14:25:58Z

Replace Preconditions usage in flink-yarn

commit 393da586a8ebcbca60d60b958aa21040ae5197d8
Author: zentol 
Date:   2016-06-08T14:26:02Z

Replace Preconditions usage in flink-tests

commit 90723bf1defcd7baf72285b81c4c5732c8a25624
Author: zentol 
Date:   2016-06-08T14:26:13Z

Replace Preconditions usage in flink-streaming-java

commit 5b27a648d363c1130fbff829c00a298b728d0ae7
Author: zentol 
Date:   2016-06-08T14:26:25Z

Replace Preconditions usage in flink-runtime

commit c5ac8b21591e08ce74865d04baeffc344ad7867c
Author: zentol 
Date:   2016-06-08T14:32:20Z

checkstyle rule






[GitHub] flink issue #2082: [FLINK-4031] include sources in Maven snapshot deployment

2016-06-08 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2082
  
+1 to merge




[jira] [Commented] (FLINK-4031) Nightly Jenkins job doesn't deploy sources

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320674#comment-15320674
 ] 

ASF GitHub Bot commented on FLINK-4031:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2082
  
+1 to merge


> Nightly Jenkins job doesn't deploy sources
> --
>
> Key: FLINK-4031
> URL: https://issues.apache.org/jira/browse/FLINK-4031
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: 1.1.0
>
>
> We need to adjust the {{deploy_to_maven.sh}} script to enable deployment of 
> the sources.





[GitHub] flink pull request #2083: [FLINK-3713] [clients, runtime] Use user code clas...

2016-06-08 Thread uce
GitHub user uce opened a pull request:

https://github.com/apache/flink/pull/2083

[FLINK-3713] [clients, runtime] Use user code class loader when disposing 
savepoints

Disposing savepoints via the JobManager fails for state handles or 
descriptors that contain user classes (for example custom folding state or 
RocksDB handles).

With this change, the user has to provide the job ID of a running job when 
disposing a savepoint in order to use the user code class loader of that job or 
provide the job JARs.

This version breaks the API as the CLI now requires either a JobID or a 
JAR. I think this is reasonable, because the current approach only works for a 
subset of the available state variants.

We can port this back for 1.0.4 and make the JobID or JAR arguments 
optional. What do you think?

I've tested this with a job running on RocksDB state both while the job was 
running and after it terminated. This was not working with the current 1.0.3 
version.

Ideally, we will get rid of the whole disposal business when we make 
savepoints properly self-contained. I'm going to open a JIRA issue with a 
proposal to do so soon.
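The underlying mechanism (resolving classes against a job's user-code class loader instead of the system one during deserialization) can be sketched in plain Java; the class name below is illustrative, not Flink's API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;

// Sketch: an ObjectInputStream that prefers a supplied class loader
// (e.g. a job's user-code loader) when resolving serialized classes.
class UserCodeObjectInputStream extends ObjectInputStream {
    private final ClassLoader userLoader;

    UserCodeObjectInputStream(InputStream in, ClassLoader userLoader) throws IOException {
        super(in);
        this.userLoader = userLoader;
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        try {
            // User classes (custom state descriptors, handles) live here.
            return Class.forName(desc.getName(), false, userLoader);
        } catch (ClassNotFoundException e) {
            // Fall back to the default resolution for everything else.
            return super.resolveClass(desc);
        }
    }
}
```

With the system class loader in that position, deserializing a savepoint that references user classes fails with exactly the ClassNotFoundException quoted in FLINK-3713.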

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/uce/flink 3713-dispose_savepoint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2083.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2083


commit 1cdfe9b4df3584b3c3c48168cd3f17100dbebf4c
Author: Ufuk Celebi 
Date:   2016-06-08T08:59:24Z

[FLINK-3713] [clients, runtime] Use user code class loader when disposing 
savepoint

Disposing savepoints via the JobManager fails for state handles or 
descriptors,
which contain user classes (for example custom folding state or RocksDB 
handles).

With this change, the user has to provide the job ID of a running job when 
disposing
a savepoint in order to use the user code class loader of that job or 
provide the
job JARs.

This version breaks the API as the CLI now requires either a JobID or a 
JAR. I think
this is reasonable, because the current approach only works for a subset of 
the
available state variants.






[jira] [Commented] (FLINK-3713) DisposeSavepoint message uses system classloader to discard state

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320670#comment-15320670
 ] 

ASF GitHub Bot commented on FLINK-3713:
---

GitHub user uce opened a pull request:

https://github.com/apache/flink/pull/2083

[FLINK-3713] [clients, runtime] Use user code class loader when disposing 
savepoints

Disposing savepoints via the JobManager fails for state handles or 
descriptors that contain user classes (for example custom folding state or 
RocksDB handles).

With this change, the user has to provide the job ID of a running job when 
disposing a savepoint in order to use the user code class loader of that job or 
provide the job JARs.

This version breaks the API as the CLI now requires either a JobID or a 
JAR. I think this is reasonable, because the current approach only works for a 
subset of the available state variants.

We can port this back for 1.0.4 and make the JobID or JAR arguments 
optional. What do you think?

I've tested this with a job running on RocksDB state both while the job was 
running and after it terminated. This was not working with the current 1.0.3 
version.

Ideally, we will get rid of the whole disposal business when we make 
savepoints properly self-contained. I'm going to open a JIRA issue with a 
proposal to do so soon.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/uce/flink 3713-dispose_savepoint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2083.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2083


commit 1cdfe9b4df3584b3c3c48168cd3f17100dbebf4c
Author: Ufuk Celebi 
Date:   2016-06-08T08:59:24Z

[FLINK-3713] [clients, runtime] Use user code class loader when disposing 
savepoint

Disposing savepoints via the JobManager fails for state handles or 
descriptors,
which contain user classes (for example custom folding state or RocksDB 
handles).

With this change, the user has to provide the job ID of a running job when 
disposing
a savepoint in order to use the user code class loader of that job or 
provide the
job JARs.

This version breaks the API as the CLI now requires either a JobID or a 
JAR. I think
this is reasonable, because the current approach only works for a subset of 
the
available state variants.




> DisposeSavepoint message uses system classloader to discard state
> -
>
> Key: FLINK-3713
> URL: https://issues.apache.org/jira/browse/FLINK-3713
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Reporter: Robert Metzger
>Assignee: Ufuk Celebi
>
> The {{DisposeSavepoint}} message in the JobManager is using the system 
> classloader to discard the state:
> {code}
> val savepoint = savepointStore.getState(savepointPath)
> log.debug(s"$savepoint")
> // Discard the associated checkpoint
> savepoint.discard(getClass.getClassLoader)
> // Dispose the savepoint
> savepointStore.disposeState(savepointPath)
> {code}
> Which leads to issues when the state contains user classes:
> {code}
> 2016-04-07 03:02:12,225 INFO  org.apache.flink.yarn.YarnJobManager
>   - Disposing savepoint at 
> 'hdfs:/bigdata/flink/savepoints/savepoint-7610540d089f'.
> 2016-04-07 03:02:12,233 WARN  
> org.apache.flink.runtime.checkpoint.StateForTask  - Failed to 
> discard checkpoint state: StateForTask eed5cc5a12dc2e0672848ba81bd8fa6d-0 : 
> SerializedValue
> java.lang.ClassNotFoundException: 
> .MetricsProcessor$CombinedKeysFoldFunction
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:270)
>   at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1612)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
>  

[jira] [Commented] (FLINK-4031) Nightly Jenkins job doesn't deploy sources

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320546#comment-15320546
 ] 

ASF GitHub Bot commented on FLINK-4031:
---

GitHub user mxm opened a pull request:

https://github.com/apache/flink/pull/2082

[FLINK-4031] include sources in Maven snapshot deployment

As per user request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mxm/flink FLINK-4031

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2082.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2082


commit 8a993ffe3165549f3efd61da40db0672e7e99d14
Author: Maximilian Michels 
Date:   2016-06-09T07:29:26Z

[FLINK-4031] include sources in Maven snapshot deployment




> Nightly Jenkins job doesn't deploy sources
> --
>
> Key: FLINK-4031
> URL: https://issues.apache.org/jira/browse/FLINK-4031
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: 1.1.0
>
>
> We need to adjust the {{deploy_to_maven.sh}} script to enable deployment of 
> the sources.





[GitHub] flink pull request #2082: [FLINK-4031] include sources in Maven snapshot dep...

2016-06-08 Thread mxm
GitHub user mxm opened a pull request:

https://github.com/apache/flink/pull/2082

[FLINK-4031] include sources in Maven snapshot deployment

As per user request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mxm/flink FLINK-4031

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2082.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2082


commit 8a993ffe3165549f3efd61da40db0672e7e99d14
Author: Maximilian Michels 
Date:   2016-06-09T07:29:26Z

[FLINK-4031] include sources in Maven snapshot deployment






[jira] [Created] (FLINK-4031) Nightly Jenkins job doesn't deploy sources

2016-06-08 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4031:
-

 Summary: Nightly Jenkins job doesn't deploy sources
 Key: FLINK-4031
 URL: https://issues.apache.org/jira/browse/FLINK-4031
 Project: Flink
  Issue Type: Bug
  Components: Build System
Reporter: Maximilian Michels
Assignee: Maximilian Michels
Priority: Minor
 Fix For: 1.1.0


We need to adjust the {{deploy_to_maven.sh}} script to enable deployment of the 
sources.





[jira] [Closed] (FLINK-2832) Failing test: RandomSamplerTest.testReservoirSamplerWithReplacement

2016-06-08 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2832.
---

> Failing test: RandomSamplerTest.testReservoirSamplerWithReplacement
> ---
>
> Key: FLINK-2832
> URL: https://issues.apache.org/jira/browse/FLINK-2832
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.10.0
>Reporter: Vasia Kalavri
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.133 sec 
> <<< FAILURE! - in org.apache.flink.api.java.sampling.RandomSamplerTest
> testReservoirSamplerWithReplacement(org.apache.flink.api.java.sampling.RandomSamplerTest)
>   Time elapsed: 2.534 sec  <<< FAILURE!
> java.lang.AssertionError: KS test result with p value(0.11), d 
> value(0.103090)
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.verifyKSTest(RandomSamplerTest.java:342)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.verifyRandomSamplerWithSampleSize(RandomSamplerTest.java:330)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.verifyReservoirSamplerWithReplacement(RandomSamplerTest.java:289)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.testReservoirSamplerWithReplacement(RandomSamplerTest.java:192)
> Results :
> Failed tests: 
>   
> RandomSamplerTest.testReservoirSamplerWithReplacement:192->verifyReservoirSamplerWithReplacement:289->verifyRandomSamplerWithSampleSize:330->verifyKSTest:342
>  KS test result with p value(0.11), d value(0.103090)
> Full log [here|https://travis-ci.org/apache/flink/jobs/84120131].





[jira] [Resolved] (FLINK-2832) Failing test: RandomSamplerTest.testReservoirSamplerWithReplacement

2016-06-08 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2832.
-
   Resolution: Fixed
Fix Version/s: (was: 1.0.0)
   1.1.0

Fixed via 297d75c2e043026ccc3744d587c9ebbbd81e7d4b

  - 
https://git-wip-us.apache.org/repos/asf?p=flink.git;a=commitdiff;h=297d75c2e043026ccc3744d587c9ebbbd81e7d4b
  - 
https://github.com/apache/flink/commit/297d75c2e043026ccc3744d587c9ebbbd81e7d4b

> Failing test: RandomSamplerTest.testReservoirSamplerWithReplacement
> ---
>
> Key: FLINK-2832
> URL: https://issues.apache.org/jira/browse/FLINK-2832
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.10.0
>Reporter: Vasia Kalavri
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.133 sec 
> <<< FAILURE! - in org.apache.flink.api.java.sampling.RandomSamplerTest
> testReservoirSamplerWithReplacement(org.apache.flink.api.java.sampling.RandomSamplerTest)
>   Time elapsed: 2.534 sec  <<< FAILURE!
> java.lang.AssertionError: KS test result with p value(0.11), d 
> value(0.103090)
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.verifyKSTest(RandomSamplerTest.java:342)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.verifyRandomSamplerWithSampleSize(RandomSamplerTest.java:330)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.verifyReservoirSamplerWithReplacement(RandomSamplerTest.java:289)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.testReservoirSamplerWithReplacement(RandomSamplerTest.java:192)
> Results :
> Failed tests: 
>   
> RandomSamplerTest.testReservoirSamplerWithReplacement:192->verifyReservoirSamplerWithReplacement:289->verifyRandomSamplerWithSampleSize:330->verifyKSTest:342
>  KS test result with p value(0.11), d value(0.103090)
> Full log [here|https://travis-ci.org/apache/flink/jobs/84120131].





[jira] [Closed] (FLINK-3922) Infinite recursion on TypeExtractor

2016-06-08 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-3922.
---

> Infinite recursion on TypeExtractor
> ---
>
> Key: FLINK-3922
> URL: https://issues.apache.org/jira/browse/FLINK-3922
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 1.0.2
>Reporter: Flavio Pompermaier
>Assignee: Timo Walther
>Priority: Critical
> Fix For: 1.1.0
>
>
> This program causes a StackOverflowError (infinite recursion) in the TypeExtractor:
> {code:title=TypeSerializerStackOverflowOnRecursivePojo.java|borderStyle=solid}
> public class TypeSerializerStackOverflowOnRecursivePojo {
>   public static class RecursivePojo<K, V> implements Serializable {
>   private static final long serialVersionUID = 1L;
>   
>   private RecursivePojo parent;
>   public RecursivePojo(){}
>   public RecursivePojo(K k, V v) {
>   }
>   public RecursivePojo getParent() {
>   return parent;
>   }
>   public void setParent(RecursivePojo parent) {
>   this.parent = parent;
>   }
>   
>   }
>   public static class TypedTuple extends Tuple3<String, String, RecursivePojo<String, Map<String, String>>> {
>   private static final long serialVersionUID = 1L;
>   }
>   
>   public static void main(String[] args) throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
>   env.fromCollection(Arrays.asList(new RecursivePojo<String, Map<String, String>>("test", new HashMap<>())))
>   .map(t-> {TypedTuple ret = new TypedTuple();ret.setFields("1", 
> "1", t);return ret;}).returns(TypedTuple.class)
>   .print();
>   }
>   
> }
> {code}
> The thrown Exception is the following:
> {code:title=Exception thrown}
> Exception in thread "main" java.lang.StackOverflowError
>   at 
> sun.reflect.generics.parser.SignatureParser.parsePackageNameAndSimpleClassTypeSignature(SignatureParser.java:328)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseClassTypeSignature(SignatureParser.java:310)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseFieldTypeSignature(SignatureParser.java:289)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseFieldTypeSignature(SignatureParser.java:283)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseTypeSignature(SignatureParser.java:485)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseReturnType(SignatureParser.java:627)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseMethodTypeSignature(SignatureParser.java:577)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseMethodSig(SignatureParser.java:171)
>   at 
> sun.reflect.generics.repository.ConstructorRepository.parse(ConstructorRepository.java:55)
>   at 
> sun.reflect.generics.repository.ConstructorRepository.parse(ConstructorRepository.java:43)
>   at 
> sun.reflect.generics.repository.AbstractRepository.<init>(AbstractRepository.java:74)
>   at 
> sun.reflect.generics.repository.GenericDeclRepository.<init>(GenericDeclRepository.java:49)
>   at 
> sun.reflect.generics.repository.ConstructorRepository.<init>(ConstructorRepository.java:51)
>   at 
> sun.reflect.generics.repository.MethodRepository.<init>(MethodRepository.java:46)
>   at 
> sun.reflect.generics.repository.MethodRepository.make(MethodRepository.java:59)
>   at java.lang.reflect.Method.getGenericInfo(Method.java:102)
>   at java.lang.reflect.Method.getGenericReturnType(Method.java:255)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.isValidPojoField(TypeExtractor.java:1610)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.analyzePojo(TypeExtractor.java:1671)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeExtractor.java:1559)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:732)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.analyzePojo(TypeExtractor.java:1678)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeExtractor.java:1559)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:732)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.analyzePojo(TypeExtractor.java:1678)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeExtractor.java:1559)
>   at 
> 

[jira] [Resolved] (FLINK-3922) Infinite recursion on TypeExtractor

2016-06-08 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-3922.
-
   Resolution: Fixed
Fix Version/s: 1.1.0

Fixed via e1b55f033d18b22e8a3f07920fa7c9e5623d6922

  - 
https://git-wip-us.apache.org/repos/asf?p=flink.git;a=commitdiff;h=e1b55f033d18b22e8a3f07920fa7c9e5623d6922
  - 
https://github.com/apache/flink/commit/e1b55f033d18b22e8a3f07920fa7c9e5623d6922

> Infinite recursion on TypeExtractor
> ---
>
> Key: FLINK-3922
> URL: https://issues.apache.org/jira/browse/FLINK-3922
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 1.0.2
>Reporter: Flavio Pompermaier
>Assignee: Timo Walther
>Priority: Critical
> Fix For: 1.1.0
>
>
> This program cause a StackOverflow (infinite recursion) in the TypeExtractor:
> {code:title=TypeSerializerStackOverflowOnRecursivePojo.java|borderStyle=solid}
> public class TypeSerializerStackOverflowOnRecursivePojo {
>   public static class RecursivePojo<K, V> implements Serializable {
>       private static final long serialVersionUID = 1L;
>
>       private RecursivePojo<K, V> parent;
>       public RecursivePojo() {}
>       public RecursivePojo(K k, V v) {
>       }
>       public RecursivePojo<K, V> getParent() {
>           return parent;
>       }
>       public void setParent(RecursivePojo<K, V> parent) {
>           this.parent = parent;
>       }
>   }
>
>   public static class TypedTuple extends Tuple3<String, String, RecursivePojo<String, Map<String, String>>> {
>       private static final long serialVersionUID = 1L;
>   }
>
>   public static void main(String[] args) throws Exception {
>       ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>       env.fromCollection(Arrays.asList(new RecursivePojo<String, Map<String, String>>("test", new HashMap<String, String>())))
>           .map(t -> { TypedTuple ret = new TypedTuple(); ret.setFields("1", "1", t); return ret; }).returns(TypedTuple.class)
>           .print();
>   }
> }
> {code}
> The thrown Exception is the following:
> {code:title=Exception thrown}
> Exception in thread "main" java.lang.StackOverflowError
>   at 
> sun.reflect.generics.parser.SignatureParser.parsePackageNameAndSimpleClassTypeSignature(SignatureParser.java:328)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseClassTypeSignature(SignatureParser.java:310)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseFieldTypeSignature(SignatureParser.java:289)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseFieldTypeSignature(SignatureParser.java:283)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseTypeSignature(SignatureParser.java:485)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseReturnType(SignatureParser.java:627)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseMethodTypeSignature(SignatureParser.java:577)
>   at 
> sun.reflect.generics.parser.SignatureParser.parseMethodSig(SignatureParser.java:171)
>   at 
> sun.reflect.generics.repository.ConstructorRepository.parse(ConstructorRepository.java:55)
>   at 
> sun.reflect.generics.repository.ConstructorRepository.parse(ConstructorRepository.java:43)
>   at 
> sun.reflect.generics.repository.AbstractRepository.<init>(AbstractRepository.java:74)
>   at 
> sun.reflect.generics.repository.GenericDeclRepository.<init>(GenericDeclRepository.java:49)
>   at 
> sun.reflect.generics.repository.ConstructorRepository.<init>(ConstructorRepository.java:51)
>   at 
> sun.reflect.generics.repository.MethodRepository.<init>(MethodRepository.java:46)
>   at 
> sun.reflect.generics.repository.MethodRepository.make(MethodRepository.java:59)
>   at java.lang.reflect.Method.getGenericInfo(Method.java:102)
>   at java.lang.reflect.Method.getGenericReturnType(Method.java:255)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.isValidPojoField(TypeExtractor.java:1610)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.analyzePojo(TypeExtractor.java:1671)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeExtractor.java:1559)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:732)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.analyzePojo(TypeExtractor.java:1678)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeExtractor.java:1559)
>   at 
> 

[jira] [Closed] (FLINK-3405) Extend NiFiSource with interface StoppableFunction

2016-06-08 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-3405.
---

> Extend NiFiSource with interface StoppableFunction
> --
>
> Key: FLINK-3405
> URL: https://issues.apache.org/jira/browse/FLINK-3405
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Reporter: Matthias J. Sax
>Assignee: Suneel Marthi
> Fix For: 1.1.0
>
>
> The NiFi source is not stoppable right now. To make it stoppable, it must 
> implement {{StoppableFunction}}. The implementation of {{stop()}} must ensure 
> that the source stops receiving new messages from NiFi and issues a final 
> checkpoint. Afterwards, {{run()}} must return.
> When implementing this, keep in mind that the gathered checkpoint might 
> later be used as a savepoint.
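The stop contract described above can be sketched with a simplified stand-in (plain Java, no Flink dependency; the real `SourceFunction`/`StoppableFunction` interfaces take a `SourceContext` and checkpoint lock, so the method shapes here are illustrative only):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Simplified stand-in for a stoppable source: stop() asks the run() loop
// to finish gracefully, after which run() returns. The real Flink
// interfaces differ; this only illustrates the control flow.
public class StoppableSourceSketch {
    // volatile: stop() is called from a different thread than run()
    private volatile boolean running = true;

    public List<String> run(Queue<String> input) {
        List<String> emitted = new ArrayList<>();
        String msg;
        while (running && (msg = input.poll()) != null) {
            emitted.add(msg); // a real source would emit via SourceContext.collect()
        }
        // Before returning, a real implementation must let a final checkpoint
        // complete, since that checkpoint may later serve as a savepoint.
        return emitted;
    }

    public void stop() {
        running = false; // stop consuming new messages
    }

    public static void main(String[] args) {
        StoppableSourceSketch source = new StoppableSourceSketch();
        source.stop(); // stop before running: nothing is emitted
        List<String> out = source.run(new ArrayDeque<>(List.of("a", "b", "c")));
        System.out.println(out.size()); // prints 0
    }
}
```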



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4000) Exception: Could not restore checkpointed state to operators and functions; during Job Restart (Job restart is triggered due to one of the task manager failure)

2016-06-08 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-4000.
---

> Exception: Could not restore checkpointed state to operators and functions;  
> during Job Restart (Job restart is triggered due to one of the task manager 
> failure)
> -
>
> Key: FLINK-4000
> URL: https://issues.apache.org/jira/browse/FLINK-4000
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.0.3
> Environment: //Fault Tolerance Configuration of the Job
> env.enableCheckpointing(5000); 
> env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
> env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
> env.setRestartStrategy(RestartStrategies.fixedDelayRestart( 3,1));
>Reporter: Aride Chettali
> Fix For: 1.1.0
>
>
> java.lang.Exception: Could not restore checkpointed state to operators and 
> functions
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreState(StreamTask.java:457)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:209)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Failed to restore state to function: null
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.restoreState(AbstractUdfStreamOperator.java:168)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreState(StreamTask.java:449)
>   ... 3 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase.restoreState(MessageAcknowledgingSourceBase.java:184)
>   at 
> org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase.restoreState(MessageAcknowledgingSourceBase.java:80)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.restoreState(AbstractUdfStreamOperator.java:165)
>   ... 4 more





[jira] [Resolved] (FLINK-4000) Exception: Could not restore checkpointed state to operators and functions; during Job Restart (Job restart is triggered due to one of the task manager failure)

2016-06-08 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-4000.
-
   Resolution: Fixed
Fix Version/s: 1.1.0

Fixed via ae679bb2aa1e0e239770605e049709fbc6b9962c

  - 
https://git-wip-us.apache.org/repos/asf?p=flink.git;a=commitdiff;h=ea64921f8b73c38af0362ab4a116ed0cb011ae1c
  - 
https://github.com/apache/flink/commit/ae679bb2aa1e0e239770605e049709fbc6b9962c

Thank you for the contribution!

> Exception: Could not restore checkpointed state to operators and functions;  
> during Job Restart (Job restart is triggered due to one of the task manager 
> failure)
> -
>
> Key: FLINK-4000
> URL: https://issues.apache.org/jira/browse/FLINK-4000
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.0.3
> Environment: //Fault Tolerance Configuration of the Job
> env.enableCheckpointing(5000); 
> env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
> env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
> env.setRestartStrategy(RestartStrategies.fixedDelayRestart( 3,1));
>Reporter: Aride Chettali
> Fix For: 1.1.0
>
>
> java.lang.Exception: Could not restore checkpointed state to operators and 
> functions
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreState(StreamTask.java:457)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:209)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Failed to restore state to function: null
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.restoreState(AbstractUdfStreamOperator.java:168)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreState(StreamTask.java:449)
>   ... 3 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase.restoreState(MessageAcknowledgingSourceBase.java:184)
>   at 
> org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase.restoreState(MessageAcknowledgingSourceBase.java:80)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.restoreState(AbstractUdfStreamOperator.java:165)
>   ... 4 more





[jira] [Commented] (FLINK-4000) Exception: Could not restore checkpointed state to operators and functions; during Job Restart (Job restart is triggered due to one of the task manager failure)

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320527#comment-15320527
 ] 

ASF GitHub Bot commented on FLINK-4000:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2062


> Exception: Could not restore checkpointed state to operators and functions;  
> during Job Restart (Job restart is triggered due to one of the task manager 
> failure)
> -
>
> Key: FLINK-4000
> URL: https://issues.apache.org/jira/browse/FLINK-4000
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.0.3
> Environment: //Fault Tolerance Configuration of the Job
> env.enableCheckpointing(5000); 
> env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
> env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
> env.setRestartStrategy(RestartStrategies.fixedDelayRestart( 3,1));
>Reporter: Aride Chettali
>
> java.lang.Exception: Could not restore checkpointed state to operators and 
> functions
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreState(StreamTask.java:457)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:209)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Failed to restore state to function: null
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.restoreState(AbstractUdfStreamOperator.java:168)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreState(StreamTask.java:449)
>   ... 3 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase.restoreState(MessageAcknowledgingSourceBase.java:184)
>   at 
> org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase.restoreState(MessageAcknowledgingSourceBase.java:80)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.restoreState(AbstractUdfStreamOperator.java:165)
>   ... 4 more





[jira] [Commented] (FLINK-2832) Failing test: RandomSamplerTest.testReservoirSamplerWithReplacement

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320529#comment-15320529
 ] 

ASF GitHub Bot commented on FLINK-2832:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2076


> Failing test: RandomSamplerTest.testReservoirSamplerWithReplacement
> ---
>
> Key: FLINK-2832
> URL: https://issues.apache.org/jira/browse/FLINK-2832
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.10.0
>Reporter: Vasia Kalavri
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.0.0
>
>
> Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.133 sec 
> <<< FAILURE! - in org.apache.flink.api.java.sampling.RandomSamplerTest
> testReservoirSamplerWithReplacement(org.apache.flink.api.java.sampling.RandomSamplerTest)
>   Time elapsed: 2.534 sec  <<< FAILURE!
> java.lang.AssertionError: KS test result with p value(0.11), d 
> value(0.103090)
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.verifyKSTest(RandomSamplerTest.java:342)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.verifyRandomSamplerWithSampleSize(RandomSamplerTest.java:330)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.verifyReservoirSamplerWithReplacement(RandomSamplerTest.java:289)
>   at 
> org.apache.flink.api.java.sampling.RandomSamplerTest.testReservoirSamplerWithReplacement(RandomSamplerTest.java:192)
> Results :
> Failed tests: 
>   
> RandomSamplerTest.testReservoirSamplerWithReplacement:192->verifyReservoirSamplerWithReplacement:289->verifyRandomSamplerWithSampleSize:330->verifyKSTest:342
>  KS test result with p value(0.11), d value(0.103090)
> Full log [here|https://travis-ci.org/apache/flink/jobs/84120131].





[GitHub] flink pull request #2062: [FLINK-4000] Fix for checkpoint state restore at M...

2016-06-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2062


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3405) Extend NiFiSource with interface StoppableFunction

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320528#comment-15320528
 ] 

ASF GitHub Bot commented on FLINK-3405:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2047


> Extend NiFiSource with interface StoppableFunction
> --
>
> Key: FLINK-3405
> URL: https://issues.apache.org/jira/browse/FLINK-3405
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Reporter: Matthias J. Sax
>Assignee: Suneel Marthi
> Fix For: 1.1.0, 1.0.4
>
>
> The NiFi source is not stoppable right now. To make it stoppable, it must 
> implement {{StoppableFunction}}. The implementation of {{stop()}} must ensure 
> that the source stops receiving new messages from NiFi and issues a final 
> checkpoint. Afterwards, {{run()}} must return.
> When implementing this, keep in mind that the gathered checkpoint might 
> later be used as a savepoint.





[GitHub] flink pull request #2047: FLINK-3405: Extend NiFiSource with interface Stopp...

2016-06-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2047




[jira] [Commented] (FLINK-3869) WindowedStream.apply with FoldFunction is too restrictive

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320516#comment-15320516
 ] 

ASF GitHub Bot commented on FLINK-3869:
---

Github user aljoscha closed the pull request at:

https://github.com/apache/flink/pull/1973


> WindowedStream.apply with FoldFunction is too restrictive
> -
>
> Key: FLINK-3869
> URL: https://issues.apache.org/jira/browse/FLINK-3869
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> Right now we have this signature:
> {code}
> public <R> SingleOutputStreamOperator<R> apply(R initialValue, 
> FoldFunction<T, R> foldFunction, WindowFunction<R, R, K, W> function) {
> {code}
> but we should have this signature to allow users to return a type other than 
> the fold accumulator type from their window function:
> {code}
> public <ACC, R> SingleOutputStreamOperator<R> apply(ACC initialValue, 
> FoldFunction<T, ACC> foldFunction, WindowFunction<ACC, R, K, W> function) {
> {code}
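The point of the proposed change: with a single type parameter, the window function must return the same type it folds into; decoupling the accumulator type (`ACC`) from the result type (`R`) lifts that restriction. A self-contained sketch with stand-in interfaces (not the real Flink signatures):

```java
import java.util.List;

// Stand-ins for FoldFunction / WindowFunction with the fold accumulator
// type (ACC) decoupled from the window result type (R), as proposed above.
interface Fold<T, ACC> { ACC fold(ACC acc, T value); }
interface Window<ACC, R> { R apply(ACC acc); }

public class FoldTypesSketch {
    // With separate ACC and R, the window function may return a different
    // type than the accumulator; under the old single-R signature, this
    // method could only return the accumulator type itself.
    static <T, ACC, R> R foldWindow(List<T> window, ACC initial,
                                    Fold<T, ACC> fold, Window<ACC, R> fn) {
        ACC acc = initial;
        for (T t : window) {
            acc = fold.fold(acc, t);
        }
        return fn.apply(acc);
    }

    public static void main(String[] args) {
        // Accumulate Integers into an Integer sum (ACC), then format it
        // as a String (R) -- impossible if ACC and R were forced to match.
        String result = foldWindow(List.of(1, 2, 3), 0,
                (acc, v) -> acc + v,
                acc -> "sum=" + acc);
        System.out.println(result); // prints sum=6
    }
}
```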





[jira] [Commented] (FLINK-3869) WindowedStream.apply with FoldFunction is too restrictive

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320515#comment-15320515
 ] 

ASF GitHub Bot commented on FLINK-3869:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/1973
  
Kk, I added it to our "breaking changes for Flink 2.0" umbrella ticket. 
Closing this.


> WindowedStream.apply with FoldFunction is too restrictive
> -
>
> Key: FLINK-3869
> URL: https://issues.apache.org/jira/browse/FLINK-3869
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> Right now we have this signature:
> {code}
> public <R> SingleOutputStreamOperator<R> apply(R initialValue, 
> FoldFunction<T, R> foldFunction, WindowFunction<R, R, K, W> function) {
> {code}
> but we should have this signature to allow users to return a type other than 
> the fold accumulator type from their window function:
> {code}
> public <ACC, R> SingleOutputStreamOperator<R> apply(ACC initialValue, 
> FoldFunction<T, ACC> foldFunction, WindowFunction<ACC, R, K, W> function) {
> {code}





[GitHub] flink pull request #1973: [FLINK-3869] Relax window fold generic parameters

2016-06-08 Thread aljoscha
Github user aljoscha closed the pull request at:

https://github.com/apache/flink/pull/1973




[GitHub] flink issue #1973: [FLINK-3869] Relax window fold generic parameters

2016-06-08 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/1973
  
Kk, I added it to our "breaking changes for Flink 2.0" umbrella ticket. 
Closing this.




[jira] [Commented] (FLINK-3869) WindowedStream.apply with FoldFunction is too restrictive

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320511#comment-15320511
 ] 

ASF GitHub Bot commented on FLINK-3869:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/1973
  
I would say so


> WindowedStream.apply with FoldFunction is too restrictive
> -
>
> Key: FLINK-3869
> URL: https://issues.apache.org/jira/browse/FLINK-3869
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> Right now we have this signature:
> {code}
> public <R> SingleOutputStreamOperator<R> apply(R initialValue, 
> FoldFunction<T, R> foldFunction, WindowFunction<R, R, K, W> function) {
> {code}
> but we should have this signature to allow users to return a type other than 
> the fold accumulator type from their window function:
> {code}
> public <ACC, R> SingleOutputStreamOperator<R> apply(ACC initialValue, 
> FoldFunction<T, ACC> foldFunction, WindowFunction<ACC, R, K, W> function) {
> {code}





[GitHub] flink issue #1973: [FLINK-3869] Relax window fold generic parameters

2016-06-08 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/1973
  
I would say so




[jira] [Created] (FLINK-4030) ScalaShellITCase

2016-06-08 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4030:
-

 Summary: ScalaShellITCase
 Key: FLINK-4030
 URL: https://issues.apache.org/jira/browse/FLINK-4030
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.1.0
Reporter: Maximilian Michels
Assignee: Maximilian Michels
Priority: Minor
 Fix For: 1.1.0


The {{ScalaShellITCase}} fails regularly on Travis:

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (integration-tests) 
on project flink-scala-shell_2.10: ExecutionException The forked VM terminated 
without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/travis/build/apache/flink/flink-scala-shell/target && 
/usr/lib/jvm/java-8-oracle/jre/bin/java -Xms256m -Xmx800m -Dmvn.forkNumber=1 
-XX:-UseGCOverheadLimit -jar 
/home/travis/build/apache/flink/flink-scala-shell/target/surefire/surefirebooter5669599672364114854.jar
 
/home/travis/build/apache/flink/flink-scala-shell/target/surefire/surefire854521958557782961tmp
 
/home/travis/build/apache/flink/flink-scala-shell/target/surefire/surefire_7186137661441589930637tmp
{noformat}





[jira] [Commented] (FLINK-4020) Remove shard list querying from Kinesis consumer constructor

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320388#comment-15320388
 ] 

ASF GitHub Bot commented on FLINK-4020:
---

GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/2081

[FLINK-4020][streaming-connectors] Move shard list querying to open() for 
Kinesis consumer

Remove shard list querying from the constructor, and let all subtasks 
independently discover which shards they should consume from in open(). This 
change is a prerequisite for 
[FLINK-3231](https://issues.apache.org/jira/browse/FLINK-3231).

Explanation for some changes that might seem irrelevant:
1. Changed the naming of some variables / methods: since shard assignment to 
subtasks now behaves (and will continue to behave after FLINK-3231) more like 
"discovering shards to consume" than "being assigned shards", I've renamed the 
"assignedShards"-related identifiers to "discoveredShards".
2. I've removed some tests because the corresponding parts of the code will 
change considerably with the upcoming work on 
[FLINK-3231](https://issues.apache.org/jira/browse/FLINK-3231). The tests will 
be added back with FLINK-3231.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink FLINK-4020

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2081.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2081


commit 1db426be73f572aec2041cb1a9da6ad49425f392
Author: Gordon Tai 
Date:   2016-06-08T10:46:02Z

[FLINK-4020] Move shard list querying to open() for Kinesis consumer




> Remove shard list querying from Kinesis consumer constructor
> 
>
> Key: FLINK-4020
> URL: https://issues.apache.org/jira/browse/FLINK-4020
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>
> Currently FlinkKinesisConsumer is querying for the whole list of shards in 
> the constructor, forcing the client to be able to access Kinesis as well. 
> This is also a drawback for handling Kinesis-side resharding, since we'd want 
> all shard listing / shard-to-task assigning / shard end (result of 
> resharding) handling logic to be capable of being independently done within 
> task life cycle methods, with defined and definite results.
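The pattern described above, where each parallel subtask discovers its own shards inside open() instead of receiving a pre-computed list from the constructor, can be sketched as follows (plain Java; the modulo assignment and all names are illustrative, not the actual connector code):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of per-subtask shard discovery in open(): every subtask lists
// the shards itself and keeps those that map to its own index, so the
// constructor (which runs on the client) never needs Kinesis access.
public class ShardDiscoverySketch {
    private final int subtaskIndex;
    private final int parallelism;
    private List<String> discoveredShards; // populated in open(), not the constructor

    public ShardDiscoverySketch(int subtaskIndex, int parallelism) {
        this.subtaskIndex = subtaskIndex;
        this.parallelism = parallelism;
    }

    public void open(List<String> allShards) {
        discoveredShards = new ArrayList<>();
        for (int i = 0; i < allShards.size(); i++) {
            // deterministic and disjoint across subtasks: each shard index
            // maps to exactly one subtask
            if (i % parallelism == subtaskIndex) {
                discoveredShards.add(allShards.get(i));
            }
        }
    }

    public List<String> getDiscoveredShards() {
        return discoveredShards;
    }

    public static void main(String[] args) {
        List<String> shards = List.of("shard-0", "shard-1", "shard-2", "shard-3");
        ShardDiscoverySketch subtask1 = new ShardDiscoverySketch(1, 2);
        subtask1.open(shards);
        System.out.println(subtask1.getDiscoveredShards()); // prints [shard-1, shard-3]
    }
}
```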





[GitHub] flink pull request #2081: [FLINK-4020][streaming-connectors] Move shard list...

2016-06-08 Thread tzulitai
GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/2081

[FLINK-4020][streaming-connectors] Move shard list querying to open() for 
Kinesis consumer

Remove shard list querying from the constructor, and let all subtasks 
independently discover which shards they should consume from in open(). This 
change is a prerequisite for 
[FLINK-3231](https://issues.apache.org/jira/browse/FLINK-3231).

Explanation for some changes that might seem irrelevant:
1. Changed the naming of some variables / methods: since shard assignment to 
subtasks now behaves (and will continue to behave after FLINK-3231) more like 
"discovering shards to consume" than "being assigned shards", I've renamed the 
"assignedShards"-related identifiers to "discoveredShards".
2. I've removed some tests because the corresponding parts of the code will 
change considerably with the upcoming work on 
[FLINK-3231](https://issues.apache.org/jira/browse/FLINK-3231). The tests will 
be added back with FLINK-3231.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink FLINK-4020

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2081.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2081


commit 1db426be73f572aec2041cb1a9da6ad49425f392
Author: Gordon Tai 
Date:   2016-06-08T10:46:02Z

[FLINK-4020] Move shard list querying to open() for Kinesis consumer






[jira] [Created] (FLINK-4029) Multi-field "sum" function just like "keyBy"

2016-06-08 Thread Rami (JIRA)
Rami created FLINK-4029:
---

 Summary: Multi-field "sum" function just like "keyBy"
 Key: FLINK-4029
 URL: https://issues.apache.org/jira/browse/FLINK-4029
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API
Reporter: Rami
Priority: Minor


I can use keyBy as follows:
stream.keyBy("pojo.field1", "pojo.field2", ...)
It would make sense that I could use sum, for example, to do its job for more 
than one field:
stream.sum("pojo.field1", "pojo.field2", ...)
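Until such a variadic sum exists, the same effect can be approximated with a reduce-style combiner that sums each field; a minimal sketch with a hypothetical POJO (plain Java, no Flink dependency; `Pojo` and its field names are illustrative):

```java
import java.util.List;

// Sketch: summing two fields at once via a reduce-style combiner,
// approximating the proposed stream.sum("field1", "field2").
public class MultiFieldSumSketch {
    static final class Pojo {
        final long field1;
        final long field2;
        Pojo(long f1, long f2) { field1 = f1; field2 = f2; }
    }

    // Equivalent of reduce((a, b) -> new Pojo(a.field1 + b.field1,
    //                                         a.field2 + b.field2))
    static Pojo sumFields(List<Pojo> elements) {
        long f1 = 0, f2 = 0;
        for (Pojo p : elements) {
            f1 += p.field1;
            f2 += p.field2;
        }
        return new Pojo(f1, f2);
    }

    public static void main(String[] args) {
        Pojo total = sumFields(List.of(new Pojo(1, 10), new Pojo(2, 20)));
        System.out.println(total.field1 + " " + total.field2); // prints 3 30
    }
}
```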





[GitHub] flink pull request #:

2016-06-08 Thread uce
Github user uce commented on the pull request:


https://github.com/apache/flink/commit/0cf04108f70375d41ebb7c39629db3a081bd2876#commitcomment-17784955
  
Just noticed that `SubtaskState` cleanup only catches and logs exceptions 
(`TaskForState` before, too). Does anyone recall what the reasoning for this 
is? Is it OK to remove the catch or rethrow the Exception? When discarding a 
savepoint without the proper class loader for example, this will only show up 
in the logs, but the savepoint disposal will be marked as a success.
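The concern raised above, that catching and logging during cleanup makes a failed disposal look successful to the caller, can be illustrated in miniature (plain Java; `disposeCatching` and `discardState` are illustrative names, not the actual `SubtaskState` code):

```java
// Illustrates the catch-and-log concern: the dispose method swallows the
// failure, so the caller cannot tell that cleanup did not actually happen.
public class SwallowedCleanupSketch {
    static boolean discarded = false;

    static void discardState() {
        // e.g. a deserialization failure when the user class loader is missing
        throw new RuntimeException("missing user class loader");
    }

    // Catch-and-log variant: the failure only shows up in the logs.
    static boolean disposeCatching() {
        try {
            discardState();
            discarded = true;
        } catch (RuntimeException e) {
            System.err.println("Discarding failed: " + e.getMessage()); // logged, then dropped
        }
        return true; // reported as success even though nothing was discarded
    }

    public static void main(String[] args) {
        boolean reportedSuccess = disposeCatching();
        System.out.println(reportedSuccess + " " + discarded); // prints true false
    }
}
```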




[jira] [Commented] (FLINK-4011) Unable to access completed job in web frontend

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320297#comment-15320297
 ] 

ASF GitHub Bot commented on FLINK-4011:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2065


> Unable to access completed job in web frontend
> --
>
> Key: FLINK-4011
> URL: https://issues.apache.org/jira/browse/FLINK-4011
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.1.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
> Fix For: 1.1.0
>
>
> In the current master, I'm not able to access a finished job's detail page.
> The JobManager logs shows the following exception:
> {code}
> 2016-06-02 15:23:08,581 WARN  org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler - Error while handling request
> java.lang.RuntimeException: Couldn't deserialize ExecutionConfig.
> at org.apache.flink.runtime.webmonitor.handlers.JobConfigHandler.handleRequest(JobConfigHandler.java:52)
> at org.apache.flink.runtime.webmonitor.handlers.AbstractExecutionGraphRequestHandler.handleRequest(AbstractExecutionGraphRequestHandler.java:61)
> at org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler.respondAsLeader(RuntimeMonitorHandler.java:88)
> at org.apache.flink.runtime.webmonitor.RuntimeMonitorHandlerBase.channelRead0(RuntimeMonitorHandlerBase.java:84)
> at org.apache.flink.runtime.webmonitor.RuntimeMonitorHandlerBase.channelRead0(RuntimeMonitorHandlerBase.java:44)
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at io.netty.handler.codec.http.router.Handler.routed(Handler.java:62)
> at io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:57)
> at io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:20)
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:105)
> at org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:65)
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
> at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:55)
> at org.apache.flink.runtime.webmonitor.handlers.JobConfigHandler.handleRequest(JobConfigHandler.java:50)
> ... 31 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Resolved] (FLINK-4011) Unable to access completed job in web frontend

2016-06-08 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger resolved FLINK-4011.
---
   Resolution: Fixed
Fix Version/s: 1.1.0

Resolved in http://git-wip-us.apache.org/repos/asf/flink/commit/65ee28c3

> Unable to access completed job in web frontend
> --
>
> Key: FLINK-4011
> URL: https://issues.apache.org/jira/browse/FLINK-4011
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.1.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
> Fix For: 1.1.0
>
>
> In the current master, I'm not able to access a finished job's detail page.
> The JobManager log shows the same {{RuntimeException: Couldn't deserialize ExecutionConfig}} stack trace quoted in full in the comment above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[GitHub] flink pull request #2065: [FLINK-4011] Keep UserCodeClassLoader in archived ...

2016-06-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2065




[jira] [Commented] (FLINK-4025) Add possibility for the RMQ Streaming Source to customize the queue

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320293#comment-15320293
 ] 

ASF GitHub Bot commented on FLINK-4025:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2073


> Add possibility for the RMQ Streaming Source to customize the queue
> --
>
> Key: FLINK-4025
> URL: https://issues.apache.org/jira/browse/FLINK-4025
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Affects Versions: 1.0.2
>Reporter: Dominik Bruhn
> Fix For: 1.1.0
>
>
> This patch adds the possibility for the user of the RabbitMQ
> Streaming Connector to customize the queue which is used. There
> are use cases in which you want to set custom parameters for the
> queue (e.g. the TTL of the messages if Flink reboots) or the
> possibility to bind the queue to an exchange afterwards.
> The commit doesn't change the actual behaviour but makes it
> possible for users to override the newly created `setupQueue`
> method and customize their implementation. This was not possible
> before.
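The override pattern described above can be sketched in plain Java. This is a minimal, self-contained illustration of the extension point, not the connector's real code: only the `setupQueue` name comes from the patch, while `FakeChannel`, `BaseRmqSource`, and `TtlRmqSource` are hypothetical stand-ins for the RabbitMQ client and the Flink source.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the RabbitMQ channel: just records declared queues.
class FakeChannel {
    final Map<String, Map<String, Object>> declaredQueues = new HashMap<>();

    void queueDeclare(String name, Map<String, Object> args) {
        declaredQueues.put(name, args);
    }
}

class BaseRmqSource {
    protected final String queueName;
    protected final FakeChannel channel = new FakeChannel();

    BaseRmqSource(String queueName) {
        this.queueName = queueName;
    }

    // Default behaviour: declare the queue with no extra arguments.
    protected void setupQueue() {
        channel.queueDeclare(queueName, new HashMap<>());
    }

    void open() {
        setupQueue(); // subclasses hook in here without changing open()
    }
}

// A user subclass that declares the queue with a custom message TTL.
class TtlRmqSource extends BaseRmqSource {
    TtlRmqSource(String queueName) {
        super(queueName);
    }

    @Override
    protected void setupQueue() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-message-ttl", 60_000); // keep messages for 60s across restarts
        channel.queueDeclare(queueName, args);
    }
}
```

The design choice is the classic protected-hook pattern: `open()` stays fixed, while the queue declaration becomes a single overridable seam.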



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2073: [FLINK-4025] Add possibility for the RMQ Streaming ...

2016-06-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2073




[jira] [Commented] (FLINK-3867) Provide virtualized Flink architecture for testing purposes

2016-06-08 Thread Julius Neuffer (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320279#comment-15320279
 ] 

Julius Neuffer commented on FLINK-3867:
---

The corresponding pull request can be found at 
https://github.com/apache/flink/pull/2075

> Provide virtualized Flink architecture for testing purposes
> ---
>
> Key: FLINK-3867
> URL: https://issues.apache.org/jira/browse/FLINK-3867
> Project: Flink
>  Issue Type: Test
>  Components: flink-contrib
>Reporter: Andreas Kempa-Liehr
>
> For developers interested in Apache Flink it would be very helpful to deploy 
> an Apache Flink cluster on a set of virtualized machines, in order to get 
> used to the configuration of the system and the development of basic 
> algorithms.
> This kind of setup could also be used for testing purposes.
> An example implementation based on Ansible and Vagrant has been published
> under https://github.com/kempa-liehr/flinkVM/tree/master/flink-vm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-3869) WindowedStream.apply with FoldFunction is too restrictive

2016-06-08 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-3869:

Issue Type: Sub-task  (was: Improvement)
Parent: FLINK-3957

> WindowedStream.apply with FoldFunction is too restrictive
> -
>
> Key: FLINK-3869
> URL: https://issues.apache.org/jira/browse/FLINK-3869
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> Right now we have this signature:
> {code}
> public <R> SingleOutputStreamOperator<R> apply(R initialValue, FoldFunction<T, R> foldFunction, WindowFunction<R, R, K, W> function) {
> {code}
> but we should have this signature to allow users to return a type other than
> the fold accumulator type from their window function:
> {code}
> public <ACC, R> SingleOutputStreamOperator<R> apply(ACC initialValue, FoldFunction<T, ACC> foldFunction, WindowFunction<ACC, R, K, W> function) {
> {code}
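The benefit of splitting the accumulator type from the result type can be shown with a small self-contained sketch. `applyFold` below is a hypothetical stand-in for `WindowedStream.apply`, using plain `java.util.function` interfaces instead of Flink's `FoldFunction`/`WindowFunction`; the point it demonstrates is the same: with separate `ACC` and `R` parameters, the window function may emit a type other than the accumulator.

```java
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;

class FoldSketch {
    // With <ACC, R>, the fold accumulates into ACC while the final window
    // function maps the accumulator to an independent result type R.
    static <T, ACC, R> R applyFold(List<T> window, ACC initial,
                                   BiFunction<ACC, T, ACC> foldFunction,
                                   Function<ACC, R> windowFunction) {
        ACC acc = initial;
        for (T element : window) {
            acc = foldFunction.apply(acc, element);
        }
        // With a single <R> parameter this last step would be forced to
        // return the accumulator type itself.
        return windowFunction.apply(acc);
    }
}
```

For example, a window of `Integer` elements can be folded into a `Long` sum and emitted as a `String`, which the restrictive single-parameter signature cannot express.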



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-3986) Rename readFileStream from the StreamExecutionEnvironment

2016-06-08 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-3986:

Summary: Rename readFileStream from the StreamExecutionEnvironment  (was: 
Rename the readFileStream from the StreamExecutionEnvironment)

> Rename readFileStream from the StreamExecutionEnvironment
> -
>
> Key: FLINK-3986
> URL: https://issues.apache.org/jira/browse/FLINK-3986
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Kostas Kloudas
> Fix For: 2.0.0
>
>
> The readFileStream(String filePath, long intervalMillis, WatchType watchType) 
> method has to be renamed to readFile to match the naming conventions of the 
> rest of the methods, or even removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #1973: [FLINK-3869] Relax window fold generic parameters

2016-06-08 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/1973
  
So we shouldn't fix it now, but keep track of it for Flink 2.0?




[jira] [Commented] (FLINK-3869) WindowedStream.apply with FoldFunction is too restrictive

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320145#comment-15320145
 ] 

ASF GitHub Bot commented on FLINK-3869:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/1973
  
So we shouldn't fix it now, but keep track of it for Flink 2.0?


> WindowedStream.apply with FoldFunction is too restrictive
> -
>
> Key: FLINK-3869
> URL: https://issues.apache.org/jira/browse/FLINK-3869
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>
> Right now we have this signature:
> {code}
> public <R> SingleOutputStreamOperator<R> apply(R initialValue, FoldFunction<T, R> foldFunction, WindowFunction<R, R, K, W> function) {
> {code}
> but we should have this signature to allow users to return a type other than
> the fold accumulator type from their window function:
> {code}
> public <ACC, R> SingleOutputStreamOperator<R> apply(ACC initialValue, FoldFunction<T, ACC> foldFunction, WindowFunction<ACC, R, K, W> function) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4016) FoldApplyWindowFunction is not properly initialized

2016-06-08 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320140#comment-15320140
 ] 

Aljoscha Krettek commented on FLINK-4016:
-

[~rvdwenden] If you have a fix, please open a new PR against FLINK-3977.

> FoldApplyWindowFunction is not properly initialized
> ---
>
> Key: FLINK-4016
> URL: https://issues.apache.org/jira/browse/FLINK-4016
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.0.3
>Reporter: RWenden
>Priority: Blocker
>  Labels: easyfix
> Fix For: 1.1.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> FoldApplyWindowFunction's output type is not set.
> We're using constructions like (excerpt):
>   .keyBy(0)
>   .countWindow(10, 5)
>   .fold(...)
> Running this stream throws a runtime exception in FoldApplyWindowFunction:
> "No initial value was serialized for the fold window function. Probably the 
> setOutputType method was not called."
> This can be easily fixed in WindowedStream.java (around line 449):
> {code}
> FoldApplyWindowFunction foldApplyWindowFunction = new FoldApplyWindowFunction<>(initialValue, foldFunction, function);
> foldApplyWindowFunction.setOutputType(resultType, input.getExecutionConfig());
> operator = new EvictingWindowOperator<>(windowAssigner,
>         windowAssigner.getWindowSerializer(getExecutionEnvironment().getConfig()),
>         keySel,
>         input.getKeyType().createSerializer(getExecutionEnvironment().getConfig()),
>         stateDesc,
>         new InternalIterableWindowFunction<>(foldApplyWindowFunction),
>         trigger,
>         evictor);
> {code}
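The initialization contract behind the quoted error message can be sketched with a tiny self-contained class. This is not Flink's actual `FoldApplyWindowFunction`; `FoldWrapperSketch` and its members are illustrative stand-ins showing the same two-phase pattern: the wrapper cannot serialize its initial value until an output type is supplied, so a forgotten `setOutputType()` call leaves it in a state that fails at runtime.

```java
class FoldWrapperSketch<T> {
    private final T initialValue;
    private byte[] serializedInitialValue; // populated by setOutputType()

    FoldWrapperSketch(T initialValue) {
        this.initialValue = initialValue;
    }

    // Stand-in for setOutputType(TypeInformation, ExecutionConfig): real
    // code would build a serializer from the type information; here we
    // just mark the wrapper as initialized.
    void setOutputType() {
        serializedInitialValue = String.valueOf(initialValue).getBytes();
    }

    // Fails fast, mirroring the exception reported in the issue, when the
    // two-phase initialization was skipped.
    T getInitialValueOrFail() {
        if (serializedInitialValue == null) {
            throw new IllegalStateException(
                "No initial value was serialized for the fold window function. "
                + "Probably the setOutputType method was not called.");
        }
        return initialValue;
    }
}
```

The proposed fix in the excerpt above amounts to making the construction site responsible for completing this second phase before the function is handed to the operator.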



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)