[jira] [Commented] (FLINK-7367) Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, MaxConnections, RequestTimeout, etc)

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116112#comment-16116112
 ] 

ASF GitHub Bot commented on FLINK-7367:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4473
  
One other thing:
We also need to add validation for these new configurations.
That validation should be placed in `KinesisConfigUtil.validateProducerConfigs`.
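
For illustration, a hedged sketch of the kind of check `KinesisConfigUtil.validateProducerConfigs` could perform for the new long-valued keys; the helper name and error message are illustrative assumptions, not the actual Flink code.

{code:java}
import java.util.Properties;

final class ProducerConfigValidationSketch {

    // Validates that an optional config value, if present, parses as a long.
    // The helper name is hypothetical; only the target class is named in the review.
    static void validateOptionalLongProperty(Properties config, String key) {
        if (config.containsKey(key)) {
            try {
                Long.parseLong(config.getProperty(key));
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException(
                    "Invalid value given for " + key + "; must be a valid long value.", e);
            }
        }
    }
}
{code}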


> Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, 
> MaxConnections, RequestTimeout, etc)
> ---
>
> Key: FLINK-7367
> URL: https://issues.apache.org/jira/browse/FLINK-7367
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.3.3
>
>
> Right now, FlinkKinesisProducer only exposes two configs for the underlying 
> KinesisProducer:
> - AGGREGATION_MAX_COUNT
> - COLLECTION_MAX_COUNT
> According to the [AWS 
> doc|http://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-config.html] 
> and [their sample on 
> github|https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/default_config.properties],
>  developers can set more configs to make full use of KinesisProducer and to 
> make it fault-tolerant (e.g. by increasing timeouts).
> I selected a few more configs that we need when using Flink with Kinesis:
> - MAX_CONNECTIONS
> - RATE_LIMIT
> - RECORD_MAX_BUFFERED_TIME
> - RECORD_TIME_TO_LIVE
> - REQUEST_TIMEOUT
> We need to parameterize FlinkKinesisProducer to pass in the above params in 
> order to cater to our needs.
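
For illustration, a hedged sketch of how a user could supply these configs once FlinkKinesisProducer is parameterized. The AGGREGATION_MAX_COUNT and COLLECTION_MAX_COUNT keys appear in the PR diff quoted later in this thread; the remaining constants are the ones proposed by this ticket, so only their names (not their string values) are used, and all values are illustrative.

{code:java}
import java.util.Properties;

import org.apache.flink.streaming.connectors.kinesis.config.ProducerConfigConstants;

public class KinesisProducerConfigExample {

    public static Properties producerConfig() {
        Properties config = new Properties();
        // Constants already present (keys shown verbatim in the quoted diff):
        config.setProperty(ProducerConfigConstants.AGGREGATION_MAX_COUNT, "100");
        config.setProperty(ProducerConfigConstants.COLLECTION_MAX_COUNT, "500");
        // Constants proposed by FLINK-7367; values here are assumptions:
        config.setProperty(ProducerConfigConstants.MAX_CONNECTIONS, "24");
        config.setProperty(ProducerConfigConstants.RATE_LIMIT, "100");
        config.setProperty(ProducerConfigConstants.RECORD_MAX_BUFFERED_TIME, "100");
        config.setProperty(ProducerConfigConstants.RECORD_TIME_TO_LIVE, "30000");
        config.setProperty(ProducerConfigConstants.REQUEST_TIMEOUT, "6000");
        return config;
    }
}
{code}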



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4473: [FLINK-7367][kinesis connector] Parameterize more configs...

2017-08-06 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4473
  
One other thing:
We also need to add validation for these new configurations.
That validation should be placed in `KinesisConfigUtil.validateProducerConfigs`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7367) Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, MaxConnections, RequestTimeout, etc)

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116109#comment-16116109
 ] 

ASF GitHub Bot commented on FLINK-7367:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4473#discussion_r131578056
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/config/ProducerConfigConstants.java
 ---
@@ -24,10 +24,30 @@
  */
 public class ProducerConfigConstants extends AWSConfigConstants {
 
-   /** Maximum number of items to pack into an PutRecords request. **/
+   /** Maximum number of KPL user records to store in a single Kinesis 
Streams record (an aggregated record). */
+   public static final String AGGREGATION_MAX_COUNT = 
"aws.producer.aggregationMaxCount";
+
+   /** Maximum number of Kinesis Streams records to pack into an 
PutRecords request. */
public static final String COLLECTION_MAX_COUNT = 
"aws.producer.collectionMaxCount";
 
-   /** Maximum number of items to pack into an aggregated record. **/
-   public static final String AGGREGATION_MAX_COUNT = 
"aws.producer.aggregationMaxCount";
+   /** Maximum number of connections to open to the backend. HTTP requests 
are
+* sent in parallel over multiple connections */
--- End diff --

Style consistency: missing period at the end.


> Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, 
> MaxConnections, RequestTimeout, etc)
> ---
>
> Key: FLINK-7367
> URL: https://issues.apache.org/jira/browse/FLINK-7367
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.3.3
>
>
> Right now, FlinkKinesisProducer only exposes two configs for the underlying 
> KinesisProducer:
> - AGGREGATION_MAX_COUNT
> - COLLECTION_MAX_COUNT
> According to the [AWS 
> doc|http://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-config.html] 
> and [their sample on 
> github|https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/default_config.properties],
>  developers can set more configs to make full use of KinesisProducer and to 
> make it fault-tolerant (e.g. by increasing timeouts).
> I selected a few more configs that we need when using Flink with Kinesis:
> - MAX_CONNECTIONS
> - RATE_LIMIT
> - RECORD_MAX_BUFFERED_TIME
> - RECORD_TIME_TO_LIVE
> - REQUEST_TIMEOUT
> We need to parameterize FlinkKinesisProducer to pass in the above params in 
> order to cater to our needs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7367) Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, MaxConnections, RequestTimeout, etc)

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116108#comment-16116108
 ] 

ASF GitHub Bot commented on FLINK-7367:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4473#discussion_r131577895
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
 ---
@@ -169,14 +169,27 @@ public void open(Configuration parameters) throws 
Exception {
 

producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));

producerConfig.setCredentialsProvider(AWSUtil.getCredentialsProvider(configProps));
-   if 
(configProps.containsKey(ProducerConfigConstants.COLLECTION_MAX_COUNT)) {
-   
producerConfig.setCollectionMaxCount(PropertiesUtil.getLong(configProps,
-   
ProducerConfigConstants.COLLECTION_MAX_COUNT, 
producerConfig.getCollectionMaxCount(), LOG));
-   }
-   if 
(configProps.containsKey(ProducerConfigConstants.AGGREGATION_MAX_COUNT)) {
-   
producerConfig.setAggregationMaxCount(PropertiesUtil.getLong(configProps,
-   
ProducerConfigConstants.AGGREGATION_MAX_COUNT, 
producerConfig.getAggregationMaxCount(), LOG));
-   }
+
+   
producerConfig.setAggregationMaxCount(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.AGGREGATION_MAX_COUNT, 
producerConfig.getAggregationMaxCount(), LOG));
+
+   
producerConfig.setCollectionMaxCount(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.COLLECTION_MAX_COUNT, 
producerConfig.getCollectionMaxCount(), LOG));
+
+   
producerConfig.setMaxConnections(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.MAX_CONNECTIONS, 
producerConfig.getMaxConnections(), LOG));
+
+   producerConfig.setRateLimit(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.RATE_LIMIT, 
producerConfig.getRateLimit(), LOG));
--- End diff --

Starting from this line, the indentation is not consistent.


> Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, 
> MaxConnections, RequestTimeout, etc)
> ---
>
> Key: FLINK-7367
> URL: https://issues.apache.org/jira/browse/FLINK-7367
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.3.3
>
>
> Right now, FlinkKinesisProducer only exposes two configs for the underlying 
> KinesisProducer:
> - AGGREGATION_MAX_COUNT
> - COLLECTION_MAX_COUNT
> According to the [AWS 
> doc|http://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-config.html] 
> and [their sample on 
> github|https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/default_config.properties],
>  developers can set more configs to make full use of KinesisProducer and to 
> make it fault-tolerant (e.g. by increasing timeouts).
> I selected a few more configs that we need when using Flink with Kinesis:
> - MAX_CONNECTIONS
> - RATE_LIMIT
> - RECORD_MAX_BUFFERED_TIME
> - RECORD_TIME_TO_LIVE
> - REQUEST_TIMEOUT
> We need to parameterize FlinkKinesisProducer to pass in the above params in 
> order to cater to our needs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4473: [FLINK-7367][kinesis connector] Parameterize more ...

2017-08-06 Thread tzulitai
Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4473#discussion_r131578056
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/config/ProducerConfigConstants.java
 ---
@@ -24,10 +24,30 @@
  */
 public class ProducerConfigConstants extends AWSConfigConstants {
 
-   /** Maximum number of items to pack into an PutRecords request. **/
+   /** Maximum number of KPL user records to store in a single Kinesis 
Streams record (an aggregated record). */
+   public static final String AGGREGATION_MAX_COUNT = 
"aws.producer.aggregationMaxCount";
+
+   /** Maximum number of Kinesis Streams records to pack into an 
PutRecords request. */
public static final String COLLECTION_MAX_COUNT = 
"aws.producer.collectionMaxCount";
 
-   /** Maximum number of items to pack into an aggregated record. **/
-   public static final String AGGREGATION_MAX_COUNT = 
"aws.producer.aggregationMaxCount";
+   /** Maximum number of connections to open to the backend. HTTP requests 
are
+* sent in parallel over multiple connections */
--- End diff --

Style consistency: missing period at the end.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4473: [FLINK-7367][kinesis connector] Parameterize more ...

2017-08-06 Thread tzulitai
Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4473#discussion_r131577895
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
 ---
@@ -169,14 +169,27 @@ public void open(Configuration parameters) throws 
Exception {
 

producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));

producerConfig.setCredentialsProvider(AWSUtil.getCredentialsProvider(configProps));
-   if 
(configProps.containsKey(ProducerConfigConstants.COLLECTION_MAX_COUNT)) {
-   
producerConfig.setCollectionMaxCount(PropertiesUtil.getLong(configProps,
-   
ProducerConfigConstants.COLLECTION_MAX_COUNT, 
producerConfig.getCollectionMaxCount(), LOG));
-   }
-   if 
(configProps.containsKey(ProducerConfigConstants.AGGREGATION_MAX_COUNT)) {
-   
producerConfig.setAggregationMaxCount(PropertiesUtil.getLong(configProps,
-   
ProducerConfigConstants.AGGREGATION_MAX_COUNT, 
producerConfig.getAggregationMaxCount(), LOG));
-   }
+
+   
producerConfig.setAggregationMaxCount(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.AGGREGATION_MAX_COUNT, 
producerConfig.getAggregationMaxCount(), LOG));
+
+   
producerConfig.setCollectionMaxCount(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.COLLECTION_MAX_COUNT, 
producerConfig.getCollectionMaxCount(), LOG));
+
+   
producerConfig.setMaxConnections(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.MAX_CONNECTIONS, 
producerConfig.getMaxConnections(), LOG));
+
+   producerConfig.setRateLimit(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.RATE_LIMIT, 
producerConfig.getRateLimit(), LOG));
--- End diff --

Starting from this line, the indentation is not consistent.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4480: [FLINK-6995] [docs] Enable is_latest attribute to false

2017-08-06 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/4480
  
If we release Flink 1.4, then we will also mark true -> false in the Flink 1.3 
release.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6995) Add a warning to outdated documentation

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116106#comment-16116106
 ] 

ASF GitHub Bot commented on FLINK-6995:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/4480
  
If we release Flink 1.4, then we will also mark true -> false in the Flink 1.3 
release.


> Add a warning to outdated documentation
> ---
>
> Key: FLINK-6995
> URL: https://issues.apache.org/jira/browse/FLINK-6995
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Timo Walther
>Assignee: mingleizhang
>
> When I search for "flink yarn" on Google, the first result is an outdated 0.8 
> release documentation page. We should add a warning to outdated documentation 
> pages.
> There are other problems as well:
> The main page only links to 1.3 and 1.4, but the flink-docs-master 
> documentation links to 1.3, 1.2, 1.1, and 1.0. Each of those pages 
> only links to older releases, so if a user arrives on a 1.2 page they won't 
> see 1.3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4480: [FLINK-6995] [docs] Enable is_latest attribute to false

2017-08-06 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4480
  
LGTM, I'll merge this to `release-1.2`.
It seems like we need to add this true -> false toggle somewhere in our 
release procedure.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6995) Add a warning to outdated documentation

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116099#comment-16116099
 ] 

ASF GitHub Bot commented on FLINK-6995:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4480
  
LGTM, I'll merge this to `release-1.2`.
It seems like we need to add this true -> false toggle somewhere in our 
release procedure.


> Add a warning to outdated documentation
> ---
>
> Key: FLINK-6995
> URL: https://issues.apache.org/jira/browse/FLINK-6995
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Timo Walther
>Assignee: mingleizhang
>
> When I search for "flink yarn" on Google, the first result is an outdated 0.8 
> release documentation page. We should add a warning to outdated documentation 
> pages.
> There are other problems as well:
> The main page only links to 1.3 and 1.4, but the flink-docs-master 
> documentation links to 1.3, 1.2, 1.1, and 1.0. Each of those pages 
> only links to older releases, so if a user arrives on a 1.2 page they won't 
> see 1.3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4469: [hotfix][docs] Updated required Java version for standalo...

2017-08-06 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4469
  
+1, @dawidwys feel free to merge :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7210) Add TwoPhaseCommitSinkFunction (implementing exactly-once semantic in generic way)

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116064#comment-16116064
 ] 

ASF GitHub Bot commented on FLINK-7210:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4368
  
Alright, this looks good to merge now!
Merging ..


> Add TwoPhaseCommitSinkFunction (implementing exactly-once semantic in generic 
> way)
> --
>
> Key: FLINK-7210
> URL: https://issues.apache.org/jira/browse/FLINK-7210
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> To implement an exactly-once sink there is a recurring pattern for doing it: 
> the two-phase commit algorithm. It is used both in `BucketingSink` and in the 
> `Pravega` sink, and it will be used in the `Kafka 0.11` connector. It would be 
> good to extract this common logic into one class, both to improve existing 
> implementations (for example, `Pravega`'s sink doesn't abort interrupted 
> transactions) and to make it easier for users to implement their own 
> custom exactly-once sinks.
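
For context, a hedged sketch of the two-phase commit lifecycle this ticket proposes to extract; the method names mirror the common pattern described above, not the final Flink API.

{code:java}
// Hedged sketch: the generic two-phase commit lifecycle, one transaction per
// checkpoint interval. TXN is whatever handle the external system provides.
abstract class TwoPhaseCommitSketch<IN, TXN> {

    abstract TXN beginTransaction();          // open a fresh transaction

    abstract void invoke(TXN txn, IN value);  // write records into the open transaction

    abstract void preCommit(TXN txn);         // flush when the checkpoint barrier arrives

    abstract void commit(TXN txn);            // commit once the checkpoint completes

    abstract void abort(TXN txn);             // roll back interrupted transactions
}
{code}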



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4368: [FLINK-7210] Introduce TwoPhaseCommitSinkFunction

2017-08-06 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4368
  
Alright, this looks good to merge now!
Merging ..


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7368) MetricStore makes cpu spin at 100%

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116034#comment-16116034
 ] 

ASF GitHub Bot commented on FLINK-7368:
---

Github user nicochen commented on the issue:

https://github.com/apache/flink/pull/4472
  
@zentol Thanks for replying. Indeed, the problem is caused by MetricFetcher 
not synchronizing on the `MetricStore` object in MetricFetcher#addMetrics(). 
But in my opinion, synchronizing on the whole `MetricStore` is less efficient: 
`MetricStore` wraps more than one metric store, and they serve different 
components (e.g. JobManager, TaskManagers) individually. If we synchronize on 
the whole `MetricStore`, a call to addMetrics() for the JobManager's metrics 
may wait for addMetrics() on another TaskManager's metrics to finish, as they 
both acquire the same lock, which is unnecessary.
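
To make the argument concrete, a hedged sketch of the finer-grained alternative: one thread-safe store per component, so JobManager and TaskManager updates never contend. Class and field names are illustrative, not the actual MetricStore layout.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class MetricStoreSketch {

    // Independent thread-safe stores per component.
    private final Map<String, String> jobManagerMetrics = new ConcurrentHashMap<>();
    private final Map<String, Map<String, String>> taskManagerMetrics = new ConcurrentHashMap<>();

    void addJobManagerMetric(String name, String value) {
        jobManagerMetrics.put(name, value);
    }

    void addTaskManagerMetric(String taskManagerId, String name, String value) {
        // Updates for different TaskManagers touch different sub-maps and
        // never block each other.
        taskManagerMetrics
            .computeIfAbsent(taskManagerId, id -> new ConcurrentHashMap<>())
            .put(name, value);
    }
}
{code}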



> MetricStore makes cpu spin at 100%
> --
>
> Key: FLINK-7368
> URL: https://issues.apache.org/jira/browse/FLINK-7368
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Reporter: Nico Chen
> Attachments: jm-jstack.log, MyHashMapInfiniteLoopTest.java, 
> MyHashMap.java
>
>
> Flink's `MetricStore` is not thread-safe: multiple threads may access the Java 
> HashMap inside `MetricStore` and can trigger the HashMap's infinite loop. 
> Recently I hit a case where the Flink JobManager consumed 100% CPU. Part of 
> the stacktrace is shown below; the full jstack is in the attachment.
> {code:java}
> "ForkJoinPool-1-worker-19" daemon prio=10 tid=0x7fbdacac9800 nid=0x64c1 
> runnable [0x7fbd7d1c2000]
>java.lang.Thread.State: RUNNABLE
> at java.util.HashMap.put(HashMap.java:494)
> at 
> org.apache.flink.runtime.webmonitor.metrics.MetricStore.addMetric(MetricStore.java:176)
> at 
> org.apache.flink.runtime.webmonitor.metrics.MetricStore.add(MetricStore.java:121)
> at 
> org.apache.flink.runtime.webmonitor.metrics.MetricFetcher.addMetrics(MetricFetcher.java:198)
> at 
> org.apache.flink.runtime.webmonitor.metrics.MetricFetcher.access$500(MetricFetcher.java:58)
> at 
> org.apache.flink.runtime.webmonitor.metrics.MetricFetcher$4.onSuccess(MetricFetcher.java:188)
> at akka.dispatch.OnSuccess.internal(Future.scala:212)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:175)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:172)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> at 
> scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:28)
> at 
> scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117)
> at 
> scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
> at 
> java.util.concurrent.ForkJoinTask$AdaptedRunnable.exec(ForkJoinTask.java:1265)
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:334)
> at 
> java.util.concurrent.ForkJoinWorkerThread.execTask(ForkJoinWorkerThread.java:604)
> at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:784)
> at java.util.concurrent.ForkJoinPool.work(ForkJoinPool.java:646)
> at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:398)
> {code}
> There are 24 threads showing the same stacktrace as above, indicating they are 
> spinning at HashMap.put(HashMap.java:494) (I am using Java 1.7.0_6). Many 
> posts indicate that multiple threads accessing a HashMap cause this problem, and 
> I reproduced the case as well. The test code is attached. I only modified 
> HashMap.transfer() by adding concurrency barriers for different threads in 
> order to simulate the timing of the creation of cycles in the HashMap's Entries. 
> My program's stacktrace shows it hangs at the same line, 
> HashMap.put(HashMap.java:494), as the stacktrace I posted above.
> Even though `MetricFetcher` has a 10-second minimum interval between metrics 
> queries, it still cannot guarantee that query responses do not access 
> `MetricStore`'s HashMap concurrently. Thus I think it's a bug to fix.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4472: FLINK-7368: MetricStore makes cpu spin at 100%

2017-08-06 Thread nicochen
Github user nicochen commented on the issue:

https://github.com/apache/flink/pull/4472
  
@zentol Thanks for replying. Indeed, the problem is caused by MetricFetcher 
not synchronizing on the `MetricStore` object in MetricFetcher#addMetrics(). 
But in my opinion, synchronizing on the whole `MetricStore` is less efficient: 
`MetricStore` wraps more than one metric store, and they serve different 
components (e.g. JobManager, TaskManagers) individually. If we synchronize on 
the whole `MetricStore`, a call to addMetrics() for the JobManager's metrics 
may wait for addMetrics() on another TaskManager's metrics to finish, as they 
both acquire the same lock, which is unnecessary.



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7377) Implement FixedBufferPool for floating buffers in SingleInputGate

2017-08-06 Thread zhijiang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16116017#comment-16116017
 ] 

zhijiang commented on FLINK-7377:
-

Re-filed as [FLINK-7378|https://issues.apache.org/jira/browse/FLINK-7378].

> Implement FixedBufferPool for floating buffers in SingleInputGate
> -
>
> Key: FLINK-7377
> URL: https://issues.apache.org/jira/browse/FLINK-7377
> Project: Flink
>  Issue Type: Task
>  Components: Core
>Affects Versions: 1.4.0
>Reporter: zhijiang
>Assignee: zhijiang
>
> Currently the number of network buffers in {{LocalBufferPool}} for 
> {{SingleInputGate}} is limited by {{a * <number of channels> + b}}, where a 
> is the number of exclusive buffers for each channel and b is the number of 
> floating buffers shared by all channels.
> Considering the credit-based flow control feature, we want to implement a new 
> fixed-size buffer pool type to manage the floating buffers for 
> {{SingleInputGate}}. 
> Compared with {{LocalBufferPool}}, this is a non-rebalancing buffer pool 
> which will not participate in redistributing the remaining available buffers 
> in {{NetworkBufferPool}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7377) Implement FixedBufferPool for floating buffers in SingleInputGate

2017-08-06 Thread zhijiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang closed FLINK-7377.
---
Resolution: Duplicate

> Implement FixedBufferPool for floating buffers in SingleInputGate
> -
>
> Key: FLINK-7377
> URL: https://issues.apache.org/jira/browse/FLINK-7377
> Project: Flink
>  Issue Type: Task
>  Components: Core
>Affects Versions: 1.4.0
>Reporter: zhijiang
>Assignee: zhijiang
>
> Currently the number of network buffers in {{LocalBufferPool}} for 
> {{SingleInputGate}} is limited by {{a * <number of channels> + b}}, where a 
> is the number of exclusive buffers for each channel and b is the number of 
> floating buffers shared by all channels.
> Considering the credit-based flow control feature, we want to implement a new 
> fixed-size buffer pool type to manage the floating buffers for 
> {{SingleInputGate}}. 
> Compared with {{LocalBufferPool}}, this is a non-rebalancing buffer pool 
> which will not participate in redistributing the remaining available buffers 
> in {{NetworkBufferPool}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7378) Implement the FixedBufferPool for floating buffers of SingleInputGate

2017-08-06 Thread zhijiang (JIRA)
zhijiang created FLINK-7378:
---

 Summary: Implement the FixedBufferPool for floating buffers of 
SingleInputGate
 Key: FLINK-7378
 URL: https://issues.apache.org/jira/browse/FLINK-7378
 Project: Flink
  Issue Type: Sub-task
  Components: Core
Reporter: zhijiang
Assignee: zhijiang
 Fix For: 1.4.0


Currently the number of network buffers in {{LocalBufferPool}} for 
{{SingleInputGate}} is limited by {{a * <number of channels> + b}}, where a is 
the number of exclusive buffers for each channel and b is the number of 
floating buffers shared by all channels.

Considering the credit-based flow control feature, we want to implement a new 
fixed-size buffer pool type to manage the floating buffers for 
{{SingleInputGate}}.

Compared with {{LocalBufferPool}}, this is a non-rebalancing buffer pool which 
will not participate in redistributing the remaining available buffers in 
{{NetworkBufferPool}}.
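
A small worked example of the bound above, under assumed values (a = 2 exclusive buffers per channel, 4 channels, b = 8 floating buffers):

{code:java}
public class BufferBoundExample {

    public static void main(String[] args) {
        int a = 2;        // exclusive buffers per channel (assumed)
        int channels = 4; // number of input channels (assumed)
        int b = 8;        // floating buffers shared by all channels (assumed)

        int maxBuffers = a * channels + b;
        System.out.println(maxBuffers); // 2 * 4 + 8 = 16
    }
}
{code}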



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7377) Implement FixedBufferPool for floating buffers in SingleInputGate

2017-08-06 Thread zhijiang (JIRA)
zhijiang created FLINK-7377:
---

 Summary: Implement FixedBufferPool for floating buffers in 
SingleInputGate
 Key: FLINK-7377
 URL: https://issues.apache.org/jira/browse/FLINK-7377
 Project: Flink
  Issue Type: Task
  Components: Core
Affects Versions: 1.4.0
Reporter: zhijiang
Assignee: zhijiang


Currently the number of network buffers in {{LocalBufferPool}} for 
{{SingleInputGate}} is limited by {{a * <number of channels> + b}}, where a is 
the number of exclusive buffers for each channel and b is the number of 
floating buffers shared by all channels.

Considering the credit-based flow control feature, we want to implement a new 
fixed-size buffer pool type to manage the floating buffers for 
{{SingleInputGate}}.

Compared with {{LocalBufferPool}}, this is a non-rebalancing buffer pool which 
will not participate in redistributing the remaining available buffers in 
{{NetworkBufferPool}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4484: Client clean code

2017-08-06 Thread yew1eb
GitHub user yew1eb opened a pull request:

https://github.com/apache/flink/pull/4484

Client clean code

## What is the purpose of the change

Keep the code clean.


## Brief change log

- Move the `args` field to `CommandLineOptions` (the parent class)
- Clean up the test classes in the `flink-clients` module


## Verifying this change

This change is already covered by existing tests.


## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yew1eb/flink client_clean_code

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4484.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4484


commit 12c04abee88a2251a4bf69115e90100374546489
Author: zhouhai02 
Date:   2017-08-06T17:31:05Z

Move 'args' field to CommandLineOptions

commit 8d8dd4d9c44b027ae1c7431e2d0b74fcd4e6d163
Author: zhouhai02 
Date:   2017-08-06T19:01:02Z

cleanup the test classes in flink-clients

commit 5b7e52cb9af8200fe166d19f25deb8a89295201f
Author: zhouhai02 
Date:   2017-08-06T19:20:45Z

fix checkstyle




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5325) Introduce interface for CloseableRegistry to separate user from system-facing functionality

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115913#comment-16115913
 ] 

ASF GitHub Bot commented on FLINK-5325:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2992
  
What's the status of this PR @StefanRRichter ?


> Introduce interface for CloseableRegistry to separate user from system-facing 
> functionality
> ---
>
> Key: FLINK-5325
> URL: https://issues.apache.org/jira/browse/FLINK-5325
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>
> Currently, the API of {{CloseableRegistry}} exposes the {{close}} method to 
> all client code. We should separate the API into a user-facing interface 
> (allowing only for un/registration of {{Closeable}}) and a system-facing part 
> that also exposes the {{close}} method. This prevents users from accidentally 
> calling {{close}}, thus closing resources that other callers registered.
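
A hedged sketch of the proposed split; the interface names beyond {{CloseableRegistry}} are illustrative assumptions.

{code:java}
import java.io.Closeable;
import java.io.IOException;

// User-facing view: registration only, no close().
interface CloseableRegistryClientView {
    void registerCloseable(Closeable closeable);
    boolean unregisterCloseable(Closeable closeable);
}

// System-facing view: additionally exposes close(), so only the owning
// component can release all registered resources.
interface CloseableRegistryOwnerView extends CloseableRegistryClientView, Closeable {
    @Override
    void close() throws IOException;
}
{code}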



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #2992: [FLINK-5325] Splitting user/system-facing API of Closeabl...

2017-08-06 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2992
  
What's the status of this PR @StefanRRichter ?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3442: [FLINK-5778] [savepoints] Add savepoint serializer with r...

2017-08-06 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3442
  
What's the status of this PR @uce @tillrohrmann @StefanRRichter 
@StephanEwen ?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5778) Split FileStateHandle into fileName and basePath

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115911#comment-16115911
 ] 

ASF GitHub Bot commented on FLINK-5778:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3442
  
What's the status of this PR @uce @tillrohrmann @StefanRRichter 
@StephanEwen ?


> Split FileStateHandle into fileName and basePath
> 
>
> Key: FLINK-5778
> URL: https://issues.apache.org/jira/browse/FLINK-5778
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>
> Store the statePath as a basePath and a fileName, and allow overwriting the 
> basePath. We cannot overwrite the base path as long as the state handle is 
> still in flight and not persisted. Otherwise we risk a resource leak.
> We need this in order to be able to relocate savepoints.
> {code}
> interface RelativeBaseLocationStreamStateHandle {
>void clearBaseLocation();
>void setBaseLocation(String baseLocation);
> }
> {code}
> FileStateHandle should implement this, and the SavepointSerializer should 
> forward the calls when a savepoint is stored or loaded: clear before storing 
> and set after loading.
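
A hedged sketch of how {{FileStateHandle}} might implement the interface above; the field names are assumptions based on this description.

{code:java}
interface RelativeBaseLocationStreamStateHandle {
    void clearBaseLocation();
    void setBaseLocation(String baseLocation);
}

class FileStateHandleSketch implements RelativeBaseLocationStreamStateHandle {

    private String baseLocation;   // overwritable, enables savepoint relocation
    private final String fileName; // stable part of the state path

    FileStateHandleSketch(String baseLocation, String fileName) {
        this.baseLocation = baseLocation;
        this.fileName = fileName;
    }

    @Override
    public void clearBaseLocation() {
        this.baseLocation = null; // called by the serializer before storing
    }

    @Override
    public void setBaseLocation(String baseLocation) {
        this.baseLocation = baseLocation; // called by the serializer after loading
    }

    String statePath() {
        return baseLocation + "/" + fileName;
    }
}
{code}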



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7376) Cleanup options class and test classes in flink-clients

2017-08-06 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7376:
---

 Summary: Cleanup options class and test classes in flink-clients 
 Key: FLINK-7376
 URL: https://issues.apache.org/jira/browse/FLINK-7376
 Project: Flink
  Issue Type: Improvement
  Components: Client
Reporter: Hai Zhou
Priority: Critical






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6494) Migrate ResourceManager configuration options

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115907#comment-16115907
 ] 

ASF GitHub Bot commented on FLINK-6494:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4075
  
I'll try to get this in this week.


> Migrate ResourceManager configuration options
> -
>
> Key: FLINK-6494
> URL: https://issues.apache.org/jira/browse/FLINK-6494
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, ResourceManager
>Reporter: Chesnay Schepler
>Assignee: Fang Yong
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4075: [FLINK-6494] Migrate ResourceManager/Yarn/Mesos configura...

2017-08-06 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4075
  
I'll try to get this in this week.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4469: [hotfix][docs] Updated required Java version for standalo...

2017-08-06 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4469
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-7375) Remove ActorGateway from JobClient

2017-08-06 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7375:


 Summary: Remove ActorGateway from JobClient
 Key: FLINK-7375
 URL: https://issues.apache.org/jira/browse/FLINK-7375
 Project: Flink
  Issue Type: Improvement
  Components: Client
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
Priority: Minor


Remove the {{ActorGateway}} dependency from {{JobClient}}. This will ease the 
transition to the Flip-6 code base because we can reuse the {{JobClient}} code.

I propose to replace the {{ActorGateway}} with a more strictly typed 
{{JobManagerGateway}} which will be extended by the Flip-6 
{{JobMasterGateway}}. This will allow us to decouple the {{JarRunHandler}} from 
the {{ActorGateway}} as well.
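
A hedged sketch of the typed gateway hierarchy proposed above; the single method shown is an illustrative assumption, not the final interface.

{code:java}
import java.net.InetSocketAddress;
import java.util.concurrent.CompletableFuture;

// Strictly typed replacement for the untyped ActorGateway.
interface JobManagerGateway {
    CompletableFuture<InetSocketAddress> requestBlobServerAddress();
}

// The Flip-6 JobMaster extends the common gateway, so JobClient code written
// against JobManagerGateway can be reused unchanged.
interface JobMasterGateway extends JobManagerGateway {
}
{code}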



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7240) Externalized RocksDB can fail with stackoverflow

2017-08-06 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115821#comment-16115821
 ] 

Till Rohrmann commented on FLINK-7240:
--

My guess would be that the job was never started successfully, and thus we see 
the {{StackOverflowError}} because we recursively call `requestCheckpoint` in 
the non-successful checkpoint case.
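
A minimal sketch of the retry pattern the stack trace implies (TestingCluster.scala:394 calling itself): retrying recursively instead of in a loop grows the stack on every failed attempt. The trigger method is a hypothetical stand-in.

{code:java}
class CheckpointRetrySketch {

    // Hypothetical stand-in for the real trigger call; returns false while
    // the job is not yet running successfully.
    boolean tryTriggerCheckpoint() {
        return false;
    }

    // Recursive retry: each failed attempt adds a stack frame, so a job that
    // never starts successfully ends in a StackOverflowError.
    boolean requestCheckpoint() {
        if (!tryTriggerCheckpoint()) {
            return requestCheckpoint(); // no backoff, no depth limit
        }
        return true;
    }
}
{code}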

> Externalized RocksDB can fail with stackoverflow
> 
>
> Key: FLINK-7240
> URL: https://issues.apache.org/jira/browse/FLINK-7240
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.3.1
> Environment: https://travis-ci.org/zentol/flink/jobs/255760513
>Reporter: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
>
> {code}
> testExternalizedFullRocksDBCheckpointsStandalone(org.apache.flink.test.checkpointing.ExternalizedCheckpointITCase)
>   Time elapsed: 146.894 sec  <<< ERROR!
> java.lang.StackOverflowError: null
>   at java.util.Hashtable.get(Hashtable.java:363)
>   at java.util.Properties.getProperty(Properties.java:969)
>   at java.lang.System.getProperty(System.java:720)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:84)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.PrintWriter.<init>(PrintWriter.java:116)
>   at java.io.PrintWriter.<init>(PrintWriter.java:100)
>   at 
> org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
>   at 
> org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
>   at 
> org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:381)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:392)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7372) Remove ActorGateway from JobGraph

2017-08-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115819#comment-16115819
 ] 

ASF GitHub Bot commented on FLINK-7372:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/4483

[FLINK-7372] [JobGraph] Remove ActorGateway from JobGraph

## What is the purpose of the change

The JobGraph has an unnecessary dependency on the ActorGateway via its 
JobGraph#uploadUserJars method. In order to get rid of this dependency for 
future Flip-6 changes, this commit retrieves the BlobServer's address 
beforehand and directly passes it to this method as an `InetSocketAddress` 
instance.
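
A hedged before/after sketch of this decoupling; parameter lists are trimmed and the exact signatures are assumptions based on the PR text.

{code:java}
import java.net.InetSocketAddress;

class JobGraphSketch {

    // Before: coupled to the RPC abstraction.
    // public void uploadUserJars(ActorGateway jobManagerGateway) { ... }

    // After: the caller first resolves the BlobServer address (e.g. via
    // JobClient#retrieveBlobServerAddress) and passes it in directly.
    public void uploadUserJars(InetSocketAddress blobServerAddress) {
        // upload the user jars to the BlobServer at the given address
    }
}
{code}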

## Brief change log

- Introduce the `JobClient#retrieveBlobServerAddress` method, which retrieves 
the `BlobServer` address from the given `ActorGateway`
- First retrieve the `BlobServer` address using the aforementioned method 
wherever the `ActorGateway` was previously passed to the `JobGraph`

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink removeActorGateways

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4483.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4483


commit 52bf22d5b9b5893e3425c963e5516c1cef5e2938
Author: Till Rohrmann 
Date:   2017-08-04T22:28:15Z

[FLINK-7372] [JobGraph] Remove ActorGateway from JobGraph

The JobGraph has an unnecessary dependency on the ActorGateway via its 
JobGraph#uploadUserJars method. In
order to get rid of this dependency for future Flip-6 changes, this commit 
retrieves the BlobServer's
address beforehand and directly passes it to this method.




> Remove ActorGateway from JobGraph
> -
>
> Key: FLINK-7372
> URL: https://issues.apache.org/jira/browse/FLINK-7372
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> As a preliminary step for easier Flip-6 integration we should try to decouple 
> as many components from the underlying RPC abstraction as possible. One of 
> these components is the {{JobGraph}} which has a dependency on 
> {{ActorGateway}} via its {{JobGraph#uploadUserJars}} method.
> I propose to get rid of the {{ActorGateway}} parameter and instead pass 
> the BlobServer's address as an {{InetSocketAddress}} instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4483: [FLINK-7372] [JobGraph] Remove ActorGateway from J...

2017-08-06 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/4483

[FLINK-7372] [JobGraph] Remove ActorGateway from JobGraph

## What is the purpose of the change

The JobGraph has an unnecessary dependency on the ActorGateway via its 
JobGraph#uploadUserJars method. In order to get rid of this dependency for 
future Flip-6 changes, this commit retrieves the BlobServer's address 
beforehand and directly passes it to this method as an `InetSocketAddress` 
instance.

## Brief change log

- Introduce the `JobClient#retrieveBlobServerAddress` method, which retrieves 
the `BlobServer` address from the given `ActorGateway`
- First retrieve the `BlobServer` address using the aforementioned method 
wherever the `ActorGateway` was previously passed to the `JobGraph`

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink removeActorGateways

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4483.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4483


commit 52bf22d5b9b5893e3425c963e5516c1cef5e2938
Author: Till Rohrmann 
Date:   2017-08-04T22:28:15Z

[FLINK-7372] [JobGraph] Remove ActorGateway from JobGraph

The JobGraph has an unnecessary dependency on the ActorGateway via its 
JobGraph#uploadUserJars method. In
order to get rid of this dependency for future Flip-6 changes, this commit 
retrieves the BlobServer's
address beforehand and directly passes it to this method.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7240) Externalized RocksDB can fail with stackoverflow

2017-08-06 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115817#comment-16115817
 ] 

Till Rohrmann commented on FLINK-7240:
--

The problem was introduced by a hotfix which tries to solve a problem with 
{{TestingJobManagerMessages.WaitForAllVerticesToBeRunning}}. Since the commit 
message does not specify the problem, I cannot tell what the original problem 
was. Maybe [~stefanrichte...@gmail.com] can tell us more.

> Externalized RocksDB can fail with stackoverflow
> 
>
> Key: FLINK-7240
> URL: https://issues.apache.org/jira/browse/FLINK-7240
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.3.1
> Environment: https://travis-ci.org/zentol/flink/jobs/255760513
>Reporter: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
>
> {code}
> testExternalizedFullRocksDBCheckpointsStandalone(org.apache.flink.test.checkpointing.ExternalizedCheckpointITCase)
>   Time elapsed: 146.894 sec  <<< ERROR!
> java.lang.StackOverflowError: null
>   at java.util.Hashtable.get(Hashtable.java:363)
>   at java.util.Properties.getProperty(Properties.java:969)
>   at java.lang.System.getProperty(System.java:720)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:84)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.PrintWriter.<init>(PrintWriter.java:116)
>   at java.io.PrintWriter.<init>(PrintWriter.java:100)
>   at 
> org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
>   at 
> org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
>   at 
> org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:381)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:392)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7240) Externalized RocksDB can fail with stackoverflow

2017-08-06 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-7240:
-
Labels: test-stability  (was: )

> Externalized RocksDB can fail with stackoverflow
> 
>
> Key: FLINK-7240
> URL: https://issues.apache.org/jira/browse/FLINK-7240
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.3.1
> Environment: https://travis-ci.org/zentol/flink/jobs/255760513
>Reporter: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
>
> {code}
> testExternalizedFullRocksDBCheckpointsStandalone(org.apache.flink.test.checkpointing.ExternalizedCheckpointITCase)
>   Time elapsed: 146.894 sec  <<< ERROR!
> java.lang.StackOverflowError: null
>   at java.util.Hashtable.get(Hashtable.java:363)
>   at java.util.Properties.getProperty(Properties.java:969)
>   at java.lang.System.getProperty(System.java:720)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:84)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.PrintWriter.<init>(PrintWriter.java:116)
>   at java.io.PrintWriter.<init>(PrintWriter.java:100)
>   at 
> org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
>   at 
> org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
>   at 
> org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:381)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:392)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7240) Externalized RocksDB can fail with stackoverflow

2017-08-06 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115815#comment-16115815
 ] 

Till Rohrmann commented on FLINK-7240:
--

Another instance: https://travis-ci.org/tillrohrmann/flink/jobs/261186890

> Externalized RocksDB can fail with stackoverflow
> 
>
> Key: FLINK-7240
> URL: https://issues.apache.org/jira/browse/FLINK-7240
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.3.1
> Environment: https://travis-ci.org/zentol/flink/jobs/255760513
>Reporter: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
>
> {code}
> testExternalizedFullRocksDBCheckpointsStandalone(org.apache.flink.test.checkpointing.ExternalizedCheckpointITCase)
>   Time elapsed: 146.894 sec  <<< ERROR!
> java.lang.StackOverflowError: null
>   at java.util.Hashtable.get(Hashtable.java:363)
>   at java.util.Properties.getProperty(Properties.java:969)
>   at java.lang.System.getProperty(System.java:720)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:84)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.PrintWriter.<init>(PrintWriter.java:116)
>   at java.io.PrintWriter.<init>(PrintWriter.java:100)
>   at 
> org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
>   at 
> org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
>   at 
> org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:381)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:392)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
> ...
> {code}
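
The repeated requestCheckpoint frames at TestingCluster.scala:394 suggest the retry is implemented as unbounded self-recursion: every unsuccessful checkpoint request immediately calls requestCheckpoint again, so a JobManager that keeps declining grows the stack until it overflows, and the error happens to surface inside log4j only because the last frame that fit on the stack was a logging call. Below is a minimal sketch of the failure mode next to a stack-safe loop-based alternative, assuming a hypothetical CheckpointTrigger callback (names and signatures are illustrative, not the actual TestingCluster code):

{code:java}
// Hypothetical illustration only -- not the actual TestingCluster code.
public class CheckpointRetryExample {

    interface CheckpointTrigger {
        String trigger() throws Exception;
    }

    // Retry via self-recursion (the suspected pattern): each failed attempt
    // adds a new stack frame, so enough consecutive failures end in a
    // StackOverflowError like the one in the trace above.
    static String requestCheckpointRecursive(CheckpointTrigger trigger) {
        try {
            return trigger.trigger();
        } catch (Exception e) {
            // no bound, no backoff: recurse immediately
            return requestCheckpointRecursive(trigger);
        }
    }

    // Stack-safe variant: retry in a bounded loop with a small backoff;
    // stack depth stays constant no matter how often the request is declined.
    static String requestCheckpointLooped(CheckpointTrigger trigger) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < 100; attempt++) {
            try {
                return trigger.trigger();
            } catch (Exception e) {
                last = e;
                Thread.sleep(50L); // back off before the next attempt
            }
        }
        throw last;
    }
}
{code}

Converting the recursion into a bounded loop keeps repeated declines from consuming stack space and turns an eventual StackOverflowError into an ordinary, reportable failure.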



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7374) Type Erasure in GraphUtils$MapTo

2017-08-06 Thread Matthias Kricke (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Kricke updated FLINK-7374:
---
Description: 
I got the following exception when executing ConnectedComponents or 
GSAConnectedComponents algorithm:


{code:java}
org.apache.flink.api.common.functions.InvalidTypesException: Type of 
TypeVariable 'O' in 'class org.apache.flink.graph.utils.GraphUtils$MapTo' could 
not be determined. This is most likely a type erasure problem. The type 
extraction currently supports types with generic variables only in cases where 
all variables in the return type can be deduced from the input type(s).

at 
org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:915)
at 
org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:836)
at 
org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfo(TypeExtractor.java:802)
at org.apache.flink.graph.Graph.mapEdges(Graph.java:544)
at 
org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:76)
at 
org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:51)
at org.apache.flink.graph.Graph.run(Graph.java:1792)
{code}


I copied the code used to test the ConnectedComponents algorithm from 
flink/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/ConnectedComponentsWithRandomisedEdgesITCase.java
 to try it in a separate code path, because my own code, which converts a Gradoop 
graph into a Gelly graph and executes the algorithm, leads to the aforementioned 
exception.
However, even the test code gave me the same exception.

Any ideas?



  was:
I got the following exception when executing ConnectedComponents or 
GSAConnectedComponents algorithm:

_{{org.apache.flink.api.common.functions.InvalidTypesException: Type of 
TypeVariable 'O' in 'class org.apache.flink.graph.utils.GraphUtils$MapTo' could 
not be determined. This is most likely a type erasure problem. The type 
extraction currently supports types with generic variables only in cases where 
all variables in the return type can be deduced from the input type(s).

at 
org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:915)
at 
org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:836)
at 
org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfo(TypeExtractor.java:802)
at org.apache.flink.graph.Graph.mapEdges(Graph.java:544)
at 
org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:76)
at 
org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:51)
at org.apache.flink.graph.Graph.run(Graph.java:1792)}}_

I copied the code used to test the ConnectedComponents algorithm from 
flink/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/ConnectedComponentsWithRandomisedEdgesITCase.java
 to try it in a separate code path, because my own code, which converts a Gradoop 
graph into a Gelly graph and executes the algorithm, leads to the aforementioned 
exception.
However, even the test code gave me the same exception.

Any ideas?




> Type Erasure in GraphUtils$MapTo
> 
>
> Key: FLINK-7374
> URL: https://issues.apache.org/jira/browse/FLINK-7374
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.3.1
> Environment: Ubuntu 17.04
> java-8-oracle
>Reporter: Matthias Kricke
>Priority: Minor
>
> I got the following exception when executing ConnectedComponents or 
> GSAConnectedComponents algorithm:
> {code:java}
> org.apache.flink.api.common.functions.InvalidTypesException: Type of 
> TypeVariable 'O' in 'class org.apache.flink.graph.utils.GraphUtils$MapTo' 
> could not be determined. This is most likely a type erasure problem. The type 
> extraction currently supports types with generic variables only in cases 
> where all variables in the return type can be deduced from the input type(s).
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:915)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:836)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfo(TypeExtractor.java:802)
>   at org.apache.flink.graph.Graph.mapEdges(Graph.java:544)
>   at 
> org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:76)
>   at 
> org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:51)
>   at org.apache.flink.graph.Graph.run(Graph.java:1792)
> {code}
> I copied code that is used to test 

[jira] [Updated] (FLINK-7374) Type Erasure in GraphUtils$MapTo

2017-08-06 Thread Matthias Kricke (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Kricke updated FLINK-7374:
---
Description: 
I got the following exception when executing ConnectedComponents or 
GSAConnectedComponents algorithm:

_{{org.apache.flink.api.common.functions.InvalidTypesException: Type of 
TypeVariable 'O' in 'class org.apache.flink.graph.utils.GraphUtils$MapTo' could 
not be determined. This is most likely a type erasure problem. The type 
extraction currently supports types with generic variables only in cases where 
all variables in the return type can be deduced from the input type(s).

at 
org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:915)
at 
org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:836)
at 
org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfo(TypeExtractor.java:802)
at org.apache.flink.graph.Graph.mapEdges(Graph.java:544)
at 
org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:76)
at 
org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:51)
at org.apache.flink.graph.Graph.run(Graph.java:1792)}}_

I copied the code used to test the ConnectedComponents algorithm from 
flink/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/ConnectedComponentsWithRandomisedEdgesITCase.java
 to try it in a separate code path, because my own code, which converts a Gradoop 
graph into a Gelly graph and executes the algorithm, leads to the aforementioned 
exception.
However, even the test code gave me the same exception.

Any ideas?



  was:
I got the following exception when executing ConnectedComponents or 
GSAConnectedComponents algorithm:

org.apache.flink.api.common.functions.InvalidTypesException: Type of 
TypeVariable 'O' in 'class org.apache.flink.graph.utils.GraphUtils$MapTo' could 
not be determined. This is most likely a type erasure problem. The type 
extraction currently supports types with generic variables only in cases where 
all variables in the return type can be deduced from the input type(s).

at 
org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:915)
at 
org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:836)
at 
org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfo(TypeExtractor.java:802)
at org.apache.flink.graph.Graph.mapEdges(Graph.java:544)
at 
org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:76)
at 
org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:51)
at org.apache.flink.graph.Graph.run(Graph.java:1792)

I copied the code used to test the ConnectedComponents algorithm from 
flink/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/ConnectedComponentsWithRandomisedEdgesITCase.java
 to try it in a separate code path, because my own code, which converts a Gradoop 
graph into a Gelly graph and executes the algorithm, leads to the aforementioned 
exception.
However, even the test code gave me the same exception.

Any ideas?




> Type Erasure in GraphUtils$MapTo
> 
>
> Key: FLINK-7374
> URL: https://issues.apache.org/jira/browse/FLINK-7374
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.3.1
> Environment: Ubuntu 17.04
> java-8-oracle
>Reporter: Matthias Kricke
>Priority: Minor
>
> I got the following exception when executing ConnectedComponents or 
> GSAConnectedComponents algorithm:
> _{{org.apache.flink.api.common.functions.InvalidTypesException: Type of 
> TypeVariable 'O' in 'class org.apache.flink.graph.utils.GraphUtils$MapTo' 
> could not be determined. This is most likely a type erasure problem. The type 
> extraction currently supports types with generic variables only in cases 
> where all variables in the return type can be deduced from the input type(s).
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:915)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:836)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfo(TypeExtractor.java:802)
>   at org.apache.flink.graph.Graph.mapEdges(Graph.java:544)
>   at 
> org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:76)
>   at 
> org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:51)
>   at org.apache.flink.graph.Graph.run(Graph.java:1792)}}_
> I copied code that is used to test the ConnectedComponents algorithm from

[jira] [Created] (FLINK-7374) Type Erasure in GraphUtils$MapTo

2017-08-06 Thread Matthias Kricke (JIRA)
Matthias Kricke created FLINK-7374:
--

 Summary: Type Erasure in GraphUtils$MapTo
 Key: FLINK-7374
 URL: https://issues.apache.org/jira/browse/FLINK-7374
 Project: Flink
  Issue Type: Bug
  Components: Gelly
Affects Versions: 1.3.1
 Environment: Ubuntu 17.04
java-8-oracle


Reporter: Matthias Kricke
Priority: Minor


I got the following exception when executing ConnectedComponents or 
GSAConnectedComponents algorithm:

org.apache.flink.api.common.functions.InvalidTypesException: Type of 
TypeVariable 'O' in 'class org.apache.flink.graph.utils.GraphUtils$MapTo' could 
not be determined. This is most likely a type erasure problem. The type 
extraction currently supports types with generic variables only in cases where 
all variables in the return type can be deduced from the input type(s).

at 
org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:915)
at 
org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:836)
at 
org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfo(TypeExtractor.java:802)
at org.apache.flink.graph.Graph.mapEdges(Graph.java:544)
at 
org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:76)
at 
org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:51)
at org.apache.flink.graph.Graph.run(Graph.java:1792)

I copied the code used to test the ConnectedComponents algorithm from 
flink/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/ConnectedComponentsWithRandomisedEdgesITCase.java
 to try it in a separate code path, because my own code, which converts a Gradoop 
graph into a Gelly graph and executes the algorithm, leads to the aforementioned 
exception.
However, even the test code gave me the same exception.

Any ideas?
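
On the root cause: GraphUtils$MapTo maps every input element to a constant of generic type O, and O never occurs in the function's input type, so after Java's type erasure the TypeExtractor has nothing left to deduce it from. In user code the standard remedy is to declare the return type explicitly via a TypeHint. Below is a minimal, self-contained sketch of both the failure mode and the fix; the MapTo class here is a simplified stand-in, not Gelly's actual implementation:

{code:java}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class TypeErasureExample {

    // Simplified stand-in for GraphUtils$MapTo: the output type O does not
    // occur in the input type, so it cannot be recovered after erasure.
    public static class MapTo<T, O> implements MapFunction<T, O> {
        private final O value;

        public MapTo(O value) {
            this.value = value;
        }

        @Override
        public O map(T ignored) {
            return value;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Long> input = env.fromElements(1L, 2L, 3L);

        // Without the returns(...) hint, using this DataSet would fail with
        // InvalidTypesException: "Type of TypeVariable 'O' ... could not be determined".
        DataSet<Double> mapped = input
                .map(new MapTo<Long, Double>(1.0))
                .returns(new TypeHint<Double>() {}); // explicit return type defeats erasure

        mapped.print();
    }
}
{code}

For the call path reported here, though, the hint has to come from inside Gelly itself: ConnectedComponents invokes Graph.mapEdges internally, so there is no place for a caller to attach the return type from the outside.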





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)