[jira] [Commented] (FLINK-9124) Allow customization of KinesisProxy.getRecords read timeout and retry

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434948#comment-16434948
 ] 

ASF GitHub Bot commented on FLINK-9124:
---

Github user tweise commented on a diff in the pull request:

https://github.com/apache/flink/pull/5803#discussion_r180967646
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/proxy/KinesisProxy.java
 ---
@@ -176,6 +179,16 @@ protected KinesisProxy(Properties configProps) {
 
}
 
+   /**
+    * Create the Kinesis client, using the provided configuration properties and the default {@link ClientConfiguration}.
+    * Derived classes can override this method to customize the client configuration.
+    * @param configProps the consumer configuration properties
+    * @return the Kinesis client used by this proxy
+    */
+   protected AmazonKinesis createKinesisClient(Properties configProps) {
--- End diff --

Although it is theoretically possible to override the method and not look 
at `configProps`, it is rather unlikely that this would happen unintentionally. A 
user who ends up working at this level will probably need to control how 
the client configuration is initialized and how the client is constructed, to make 
the connector work. My vote is strongly in favor of not locking things down 
unless they are extremely well understood and there is a specific reason.
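
A minimal sketch of the extension pattern under discussion: a protected factory method that a derived proxy overrides to tune the client configuration. All class names, the property key, and the default timeout here are stand-ins, not the actual Flink/AWS SDK types:

```java
import java.util.Properties;

// Stand-ins for the AWS SDK's ClientConfiguration and AmazonKinesis types.
class KinesisClientConfig {
    int socketTimeoutMs = 50_000; // hypothetical default
}

class KinesisClient {
    final KinesisClientConfig config;
    KinesisClient(KinesisClientConfig config) { this.config = config; }
}

// Mirrors the KinesisProxy shape: a protected factory method as extension point.
class Proxy {
    protected KinesisClient createKinesisClient(Properties configProps) {
        return new KinesisClient(new KinesisClientConfig());
    }
}

// A derived proxy customizes the client configuration before construction.
class TunedProxy extends Proxy {
    @Override
    protected KinesisClient createKinesisClient(Properties configProps) {
        KinesisClientConfig cfg = new KinesisClientConfig();
        cfg.socketTimeoutMs = Integer.parseInt(
            configProps.getProperty("kinesis.client.socket-timeout-ms", "50000"));
        return new KinesisClient(cfg);
    }
}

public class FactoryOverrideSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("kinesis.client.socket-timeout-ms", "10000");
        System.out.println(new TunedProxy().createKinesisClient(props)
            .config.socketTimeoutMs); // prints 10000
    }
}
```

Because the factory is `protected` rather than `private`, this kind of customization needs no fork of the connector.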

The connectors in general are fluid by nature and warrant a more flexible 
approach that empowers users to customize what they need without wholesale 
forking. By now we have run into several cases where the behavior of the 
Kinesis connector had to be amended but private constructors or methods got 
in the way. Who would not prefer to spend time improving connector 
functionality rather than opening JIRAs and PRs for access-modifier changes?

In our internal custom code we currently have an override that can 
generically set any simple property on the client config from the config 
properties. The approach comes with its own pros and cons and I think it should 
be discussed separately. If there is interest in having it in the Flink 
codebase as default behavior, I'm happy to take it up as a separate PR. I would 
still want to have the ability to override it though.
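
The "generically set any simple property" idea mentioned above could be sketched with reflection roughly like this (the config class and its fields are illustrative; the real AWS `ClientConfiguration` exposes setters rather than public fields):

```java
import java.lang.reflect.Field;
import java.util.Properties;

// Illustrative config object with simple mutable fields.
class AwsClientConfig {
    public int socketTimeout = 50000;
    public int maxConnections = 50;
}

public class ReflectiveConfigSketch {
    /**
     * Copies any Properties entry whose key matches a public field name onto
     * the config object, converting the string value to the field's type.
     * Only int and String fields are handled in this sketch.
     */
    static void applyProperties(Object config, Properties props) throws Exception {
        for (String key : props.stringPropertyNames()) {
            Field field;
            try {
                field = config.getClass().getField(key);
            } catch (NoSuchFieldException e) {
                continue; // not a client-config property; ignore it
            }
            String value = props.getProperty(key);
            if (field.getType() == int.class) {
                field.setInt(config, Integer.parseInt(value));
            } else if (field.getType() == String.class) {
                field.set(config, value);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("socketTimeout", "10000");
        AwsClientConfig cfg = new AwsClientConfig();
        applyProperties(cfg, props);
        System.out.println(cfg.socketTimeout + " " + cfg.maxConnections); // prints 10000 50
    }
}
```

As the comment notes, this approach has its own trade-offs (string-typed config, silent ignoring of unknown keys) and would deserve separate discussion before becoming default behavior.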


> Allow customization of KinesisProxy.getRecords read timeout and retry
> -
>
> Key: FLINK-9124
> URL: https://issues.apache.org/jira/browse/FLINK-9124
> Project: Flink
>  Issue Type: Task
>  Components: Kinesis Connector
>Affects Versions: 1.4.2
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Minor
>
> It should be possible to change the socket read timeout and all other 
> configuration parameters of the underlying AWS ClientConfiguration and also 
> have the option to retry after a socket timeout exception.
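
The retry-after-timeout behavior requested here could be sketched independently of the actual connector code as a small retry wrapper (the helper and its names are hypothetical):

```java
import java.net.SocketTimeoutException;

public class RetryOnTimeoutSketch {
    interface RecordsCall<T> {
        T call() throws SocketTimeoutException;
    }

    /**
     * Retries the call up to maxRetries additional times when it fails with
     * a SocketTimeoutException, rethrowing once the budget is exhausted.
     */
    static <T> T withRetries(RecordsCall<T> call, int maxRetries)
            throws SocketTimeoutException {
        SocketTimeoutException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.call();
            } catch (SocketTimeoutException e) {
                last = e; // remember the failure and retry
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] failures = {2}; // simulate two timeouts, then success
        String result = withRetries(() -> {
            if (failures[0]-- > 0) {
                throw new SocketTimeoutException("read timed out");
            }
            return "records";
        }, 3);
        System.out.println(result); // prints records
    }
}
```

A production version would typically add backoff between attempts; this sketch only shows the retry-on-timeout shape.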



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Assigned] (FLINK-9159) Sanity check default timeout values

2018-04-11 Thread vinoyang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-9159:
---

Assignee: vinoyang

> Sanity check default timeout values
> ---
>
> Key: FLINK-9159
> URL: https://issues.apache.org/jira/browse/FLINK-9159
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: vinoyang
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Check that the default timeout values for resource release are sanely chosen.





[jira] [Commented] (FLINK-9158) Set default FixedRestartDelayStrategy delay to 0s

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434886#comment-16434886
 ] 

ASF GitHub Bot commented on FLINK-9158:
---

Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5839
  
CC: @tillrohrmann 


> Set default FixedRestartDelayStrategy delay to 0s
> -
>
> Key: FLINK-9158
> URL: https://issues.apache.org/jira/browse/FLINK-9158
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Sihua Zhou
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Set default FixedRestartDelayStrategy delay to 0s.






[jira] [Commented] (FLINK-9158) Set default FixedRestartDelayStrategy delay to 0s

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434884#comment-16434884
 ] 

ASF GitHub Bot commented on FLINK-9158:
---

GitHub user sihuazhou opened a pull request:

https://github.com/apache/flink/pull/5839

[FLINK-9158][Distributed Coordination] Set default 
FixedRestartDelayStrategy delay to 0s.

## What is the purpose of the change

Set default FixedRestartDelayStrategy delay to 0s.

## Brief change log

- Set default FixedRestartDelayStrategy delay to 0s.


## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

no

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sihuazhou/flink FLINK-9158

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5839.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5839


commit 7e6cc7290d76959032deb1c85111856ca88b1bde
Author: sihuazhou 
Date:   2018-04-12T03:10:48Z

Set default FixedRestartDelayStrategy delay to 0s.




> Set default FixedRestartDelayStrategy delay to 0s
> -
>
> Key: FLINK-9158
> URL: https://issues.apache.org/jira/browse/FLINK-9158
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Sihua Zhou
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Set default FixedRestartDelayStrategy delay to 0s.





[jira] [Assigned] (FLINK-9158) Set default FixedRestartDelayStrategy delay to 0s

2018-04-11 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou reassigned FLINK-9158:
-

Assignee: Sihua Zhou

> Set default FixedRestartDelayStrategy delay to 0s
> -
>
> Key: FLINK-9158
> URL: https://issues.apache.org/jira/browse/FLINK-9158
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Sihua Zhou
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Set default FixedRestartDelayStrategy delay to 0s.






[jira] [Commented] (FLINK-9158) Set default FixedRestartDelayStrategy delay to 0s

2018-04-11 Thread Sihua Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434861#comment-16434861
 ] 

Sihua Zhou commented on FLINK-9158:
---

Hi [~till.rohrmann], have you already started working on this? If not, I'd like to take it.

> Set default FixedRestartDelayStrategy delay to 0s
> -
>
> Key: FLINK-9158
> URL: https://issues.apache.org/jira/browse/FLINK-9158
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Set default FixedRestartDelayStrategy delay to 0s.





[jira] [Created] (FLINK-9160) Make subclasses of RuntimeContext internal that should be internal

2018-04-11 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-9160:
---

 Summary: Make subclasses of RuntimeContext internal that should be 
internal
 Key: FLINK-9160
 URL: https://issues.apache.org/jira/browse/FLINK-9160
 Project: Flink
  Issue Type: Improvement
  Components: DataSet API, DataStream API
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek
 Fix For: 1.5.0








[jira] [Created] (FLINK-9159) Sanity check default timeout values

2018-04-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9159:


 Summary: Sanity check default timeout values
 Key: FLINK-9159
 URL: https://issues.apache.org/jira/browse/FLINK-9159
 Project: Flink
  Issue Type: Bug
  Components: Distributed Coordination
Affects Versions: 1.5.0
Reporter: Till Rohrmann
 Fix For: 1.5.0


Check that the default timeout values for resource release are sanely chosen.





[jira] [Created] (FLINK-9158) Set default FixedRestartDelayStrategy delay to 0s

2018-04-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9158:


 Summary: Set default FixedRestartDelayStrategy delay to 0s
 Key: FLINK-9158
 URL: https://issues.apache.org/jira/browse/FLINK-9158
 Project: Flink
  Issue Type: Bug
  Components: Distributed Coordination
Affects Versions: 1.5.0
Reporter: Till Rohrmann
 Fix For: 1.5.0


Set default FixedRestartDelayStrategy delay to 0s.





[jira] [Closed] (FLINK-8624) flink-mesos: The flink rest-api sometimes becomes unresponsive

2018-04-11 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-8624.

   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.5.0)

Could not reproduce the problem.

> flink-mesos: The flink rest-api sometimes becomes unresponsive
> --
>
> Key: FLINK-8624
> URL: https://issues.apache.org/jira/browse/FLINK-8624
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, REST
>Affects Versions: 1.3.2
>Reporter: Bhumika Bayani
>Priority: Blocker
>
> Sometimes the flink-mesos scheduler fails/gets killed, and Marathon brings it up 
> again on some other node. We have sometimes observed that the rest-api of the 
> newly created Flink instance becomes unresponsive.
> Even if we execute API calls manually with curl, such as 
> http://:/overview or http://:/config
> we do not receive any response. 
> We submit and execute all our Flink jobs using the rest-api only, so if the rest api 
> becomes unresponsive, we cannot run any Flink jobs and 
> no stream processing happens. 
> We tried enabling Flink debug logs, but we did not observe anything specific 
> that indicates why the rest api is failing/unresponsive.
> We see the exceptions below in the logs, but they are not specific to the case when 
> the flink-api is hung; we see them in a healthy flink-scheduler too: 
>  
> {code:java}
> Timestamp=2018-02-08 05:43:49,175 LogLevel=INFO
>         ThreadId=[Checkpoint Timer] Class=o.a.f.r.c.CheckpointCoordinator 
> Msg=Triggering checkpoint 10181 @ 1518068629174
> Timestamp=2018-02-08 05:43:49,183 LogLevel=DEBUG
>         ThreadId=[nioEventLoopGroup-5-3] Class=o.a.f.r.w.WebRuntimeMonitor 
> Msg=Unhandled exception: {}
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/jobmanager#753807801]] after [1 ms]
>         at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:334) 
> ~[flink-dist_2.11-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>         at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117) 
> ~[flink-dist_2.11-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>         at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>  ~[flink-dist_2.11-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>         at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) 
> ~[flink-dist_2.11-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>         at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599) 
> ~[flink-dist_2.11-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>         at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:474)
>  ~[flink-dist_2.11-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>         at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:425)
>  ~[flink-dist_2.11-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>         at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:429) 
> ~[flink-dist_2.11-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>         at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:381) 
> ~[flink-dist_2.11-1.4-SNAPSHOT.jar:1.4-SNAPSHOT]
>         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> {code}
>  
> While the rest api is unresponsive, we have observed that the Flink web UI 
> also does not load/show any information. 
> Restarting the flink-scheduler sometimes solves this issue. 
>  





[jira] [Updated] (FLINK-8715) RocksDB does not propagate reconfiguration of serializer to the states

2018-04-11 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-8715:
-
Fix Version/s: (was: 1.5.1)
   (was: 1.6.0)
   1.5.0

> RocksDB does not propagate reconfiguration of serializer to the states
> --
>
> Key: FLINK-8715
> URL: https://issues.apache.org/jira/browse/FLINK-8715
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.2
>Reporter: Arvid Heise
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Any changes to the serializer done in #ensureCompatibility are lost during 
> state creation.
> In particular, 
> [https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBValueState.java#L68]
>  always uses a fresh copy of the StateDescriptor.
> An easy fix is to pass the reconfigured serializer as an additional parameter 
> in 
> [https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java#L1681]
>  , which can be retrieved through the side-output of getColumnFamily
> {code:java}
> kvStateInformation.get(stateDesc.getName()).f1.getStateSerializer()
> {code}
> I encountered it in 1.3.2 but the code in the master seems unchanged (hence 
> the pointer into master). I encountered it in ValueState, but I suspect the 
> same issue can be observed for all kinds of RocksDB states.
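
A language-level sketch of the failure mode described above: reconfiguring one serializer instance is invisible to code that works from a fresh copy of the descriptor. All types are simplified stand-ins for the Flink classes:

```java
// Simplified stand-ins for TypeSerializer and StateDescriptor.
class Serializer {
    boolean reconfigured = false;
}

class StateDescriptor {
    final Serializer serializer = new Serializer();
    // Mimics state creation working from a fresh descriptor copy.
    StateDescriptor freshCopy() { return new StateDescriptor(); }
}

public class SerializerPropagationSketch {
    public static void main(String[] args) {
        StateDescriptor desc = new StateDescriptor();
        // The restore path reconfigures the serializer (think ensureCompatibility).
        desc.serializer.reconfigured = true;

        // Bug pattern: state creation uses a fresh copy, losing the change.
        Serializer used = desc.freshCopy().serializer;
        System.out.println(used.reconfigured); // prints false

        // Suggested fix: pass the already-reconfigured serializer explicitly.
        Serializer fixed = desc.serializer;
        System.out.println(fixed.reconfigured); // prints true
    }
}
```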





[jira] [Updated] (FLINK-8715) RocksDB does not propagate reconfiguration of serializer to the states

2018-04-11 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-8715:
-
Fix Version/s: (was: 1.5.0)
   1.5.1
   1.6.0

> RocksDB does not propagate reconfiguration of serializer to the states
> --
>
> Key: FLINK-8715
> URL: https://issues.apache.org/jira/browse/FLINK-8715
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.2
>Reporter: Arvid Heise
>Priority: Blocker
> Fix For: 1.6.0, 1.5.1
>
>
> Any changes to the serializer done in #ensureCompatibility are lost during 
> state creation.
> In particular, 
> [https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBValueState.java#L68]
>  always uses a fresh copy of the StateDescriptor.
> An easy fix is to pass the reconfigured serializer as an additional parameter 
> in 
> [https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java#L1681]
>  , which can be retrieved through the side-output of getColumnFamily
> {code:java}
> kvStateInformation.get(stateDesc.getName()).f1.getStateSerializer()
> {code}
> I encountered it in 1.3.2 but the code in the master seems unchanged (hence 
> the pointer into master). I encountered it in ValueState, but I suspect the 
> same issue can be observed for all kinds of RocksDB states.





[jira] [Updated] (FLINK-8981) End-to-end test: Kerberos security

2018-04-11 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-8981:
-
Fix Version/s: 1.5.1

> End-to-end test: Kerberos security
> --
>
> Key: FLINK-8981
> URL: https://issues.apache.org/jira/browse/FLINK-8981
> Project: Flink
>  Issue Type: Sub-task
>  Components: Security, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Shuyi Chen
>Priority: Blocker
> Fix For: 1.6.0, 1.5.1
>
>
> We should add an end-to-end test which verifies Flink's integration with 
> Kerberos security. In order to do this, we should start a Kerberos secured 
> Hadoop, ZooKeeper and Kafka cluster. Then we should start a Flink cluster 
> with HA enabled and run a job which reads from and writes to Kafka. We could 
> use a simple pipe job for that purpose which has some state for checkpointing 
> to HDFS.
> See [security docs| 
> https://ci.apache.org/projects/flink/flink-docs-master/ops/security-kerberos.html]
>  for more information about Flink's Kerberos integration.





[jira] [Updated] (FLINK-8978) End-to-end test: Job upgrade

2018-04-11 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-8978:
-
Fix Version/s: (was: 1.5.0)
   1.5.1
   1.6.0

> End-to-end test: Job upgrade
> 
>
> Key: FLINK-8978
> URL: https://issues.apache.org/jira/browse/FLINK-8978
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.6.0, 1.5.1
>
>
> Job upgrades usually happen during the lifetime of a real-world Flink job. 
> Therefore, we should add an end-to-end test which covers exactly this 
> scenario. I suggest doing the following:
> # run the general purpose testing job FLINK-8971
> # take a savepoint
> # Modify the job by introducing a new operator and changing the order of 
> others
> # Resume the modified job from the savepoint
> # Verify that everything went correctly





[jira] [Updated] (FLINK-8981) End-to-end test: Kerberos security

2018-04-11 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-8981:
-
Fix Version/s: (was: 1.5.0)
   1.6.0

> End-to-end test: Kerberos security
> --
>
> Key: FLINK-8981
> URL: https://issues.apache.org/jira/browse/FLINK-8981
> Project: Flink
>  Issue Type: Sub-task
>  Components: Security, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Shuyi Chen
>Priority: Blocker
> Fix For: 1.6.0
>
>
> We should add an end-to-end test which verifies Flink's integration with 
> Kerberos security. In order to do this, we should start a Kerberos secured 
> Hadoop, ZooKeeper and Kafka cluster. Then we should start a Flink cluster 
> with HA enabled and run a job which reads from and writes to Kafka. We could 
> use a simple pipe job for that purpose which has some state for checkpointing 
> to HDFS.
> See [security docs| 
> https://ci.apache.org/projects/flink/flink-docs-master/ops/security-kerberos.html]
>  for more information about Flink's Kerberos integration.





[jira] [Updated] (FLINK-8554) Upgrade AWS SDK

2018-04-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-8554:
--
Description: 
AWS SDK 1.11.271 fixes a lot of bugs.

One of which would exhibit the following:

{code}
Caused by: java.lang.NullPointerException
at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
{code}

  was:
AWS SDK 1.11.271 fixes a lot of bugs.

One of which would exhibit the following:
{code}
Caused by: java.lang.NullPointerException
at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
{code}


> Upgrade AWS SDK
> ---
>
> Key: FLINK-8554
> URL: https://issues.apache.org/jira/browse/FLINK-8554
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> AWS SDK 1.11.271 fixes a lot of bugs.
> One of which would exhibit the following:
> {code}
> Caused by: java.lang.NullPointerException
>   at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
>   at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
>   at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {code}





[jira] [Commented] (FLINK-7775) Remove unreferenced method PermanentBlobCache#getNumberOfCachedJobs

2018-04-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434788#comment-16434788
 ] 

Ted Yu commented on FLINK-7775:
---

lgtm

> Remove unreferenced method PermanentBlobCache#getNumberOfCachedJobs
> ---
>
> Key: FLINK-7775
> URL: https://issues.apache.org/jira/browse/FLINK-7775
> Project: Flink
>  Issue Type: Task
>  Components: Local Runtime
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> {code}
>   public int getNumberOfCachedJobs() {
> return jobRefCounters.size();
>   }
> {code}
> The method of PermanentBlobCache is not used.
> We should remove it.





[jira] [Commented] (FLINK-9008) End-to-end test: Quickstarts

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434772#comment-16434772
 ] 

ASF GitHub Bot commented on FLINK-9008:
---

Github user zhangminglei commented on a diff in the pull request:

https://github.com/apache/flink/pull/5823#discussion_r180938483
  
--- Diff: flink-end-to-end-tests/test-scripts/test_quickstarts.sh ---
@@ -0,0 +1,117 @@
+#!/usr/bin/env bash

+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

+
+
+# End to end test for quick starts test.
+
+CURRENT_DIR=$(cd "$( dirname "$0"  )" && pwd )
+
+cd $CURRENT_DIR
+
+mvn archetype:generate \
+-DarchetypeGroupId=org.apache.flink\
+-DarchetypeArtifactId=flink-quickstart-java\
+-DarchetypeVersion=1.4.2   \
+-DgroupId=org.apache.flink.quickstart  \
+-DartifactId=flink-java-project\
+-Dversion=0.1  \
+-Dpackage=org.apache.flink.quickstart  \
+-DinteractiveMode=false
+
+cd flink-java-project
+
+cp $CURRENT_DIR/test-class/ElasticsearchStreamingJob.java 
$CURRENT_DIR/flink-java-project/src/main/java/org/apache/flink/quickstart/
+
+sed -i -e '80i\
--- End diff --

Yes, you're right! I will fix it.


> End-to-end test: Quickstarts
> 
>
> Key: FLINK-9008
> URL: https://issues.apache.org/jira/browse/FLINK-9008
> Project: Flink
>  Issue Type: Sub-task
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> We could add an end-to-end test which verifies Flink's quickstarts. It should 
> do the following:
> # create a new Flink project using the quickstarts archetype 
> # add a new Flink dependency to the {{pom.xml}} (e.g. Flink connector or 
> library) 
> # run {{mvn clean package -Pbuild-jar}}
> # verify that no core dependencies are contained in the jar file
> # Run the program






[jira] [Commented] (FLINK-8556) Add proxy feature to Kinesis Connector to access its endpoint

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434619#comment-16434619
 ] 

ASF GitHub Bot commented on FLINK-8556:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5411
  
Hi @pduveau, sorry for the delay here.
I've previously been busy with recent releases, and am currently traveling 
for Flink Forward.

This PR is still on my backlog, will try to get back to it as soon as 
possible.


> Add proxy feature to Kinesis Connector to access its endpoint
> 
>
> Key: FLINK-8556
> URL: https://issues.apache.org/jira/browse/FLINK-8556
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Ph.Duveau
>Priority: Major
>  Labels: features
>
> The connector cannot be configured to use a proxy to access the Kinesis 
> endpoint. This feature is required on EC2 instances that can access the 
> internet only through a proxy. VPC Kinesis endpoints are currently available 
> in only a few AWS regions.





[GitHub] flink issue #5411: [FLINK-8556] [Kinesis Connector] Add proxy feature to the...

2018-04-11 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5411
  
Hi @pduveau, sorry for the delay here.
I've previously been busy with recent releases, and am currently traveling 
for Flink Forward.

This PR is still on my backlog, will try to get back to it as soon as 
possible.


---


[jira] [Commented] (FLINK-9124) Allow customization of KinesisProxy.getRecords read timeout and retry

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434617#comment-16434617
 ] 

ASF GitHub Bot commented on FLINK-9124:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/5803#discussion_r180909552
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/proxy/KinesisProxy.java
 ---
@@ -176,6 +179,16 @@ protected KinesisProxy(Properties configProps) {
 
}
 
+   /**
+* Create the Kinesis client, using the provided configuration 
properties and default {@link ClientConfiguration}.
+* Derived classes can override this method to customize the client 
configuration.
+* @param configProps
+* @return
+*/
+   protected AmazonKinesis createKinesisClient(Properties configProps) {
--- End diff --

My main concern with allowing overrides of this method is that override 
implementations can completely ignore the `configProps` settings and create a 
Kinesis client entirely unrelated to the original configuration. IMO, this is 
not nice design-wise.

As a different approach, would it be possible to traverse keys in the 
`configProps` and set the `ClientConfiguration` appropriately, such that we 
won't need to be aware of all updated / new keys in the AWS Kinesis SDK? 
Ideally, Flink should not need to maintain its own set of config keys and just 
rely on AWS's keys (for example, Flink actually should not need to define its 
own config keys for AWS credentials).
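
The generic traversal suggested above could look roughly like the following. This is a minimal sketch, not the actual PR code: `ClientConfigStub`, the `aws.clientconfig.` prefix, and the int-only setters are assumptions standing in for the AWS SDK's `ClientConfiguration` and Flink's real key names.

```java
import java.lang.reflect.Method;
import java.util.Properties;

public class ClientConfigOverride {

    /** Hypothetical stub standing in for the AWS SDK's ClientConfiguration. */
    public static class ClientConfigStub {
        private int socketTimeout;
        private int maxErrorRetry;
        public void setSocketTimeout(int t) { this.socketTimeout = t; }
        public int getSocketTimeout() { return socketTimeout; }
        public void setMaxErrorRetry(int r) { this.maxErrorRetry = r; }
        public int getMaxErrorRetry() { return maxErrorRetry; }
    }

    /**
     * Traverse all keys under the given prefix and invoke the matching setter
     * on the target bean, e.g. "aws.clientconfig.socketTimeout" maps to
     * setSocketTimeout(...). New SDK settings then work without Flink
     * maintaining its own key list.
     */
    public static void applyProperties(Object target, Properties props, String prefix) {
        for (String key : props.stringPropertyNames()) {
            if (!key.startsWith(prefix)) {
                continue;
            }
            String field = key.substring(prefix.length());
            String setter = "set" + Character.toUpperCase(field.charAt(0)) + field.substring(1);
            try {
                Method m = target.getClass().getMethod(setter, int.class);
                m.invoke(target, Integer.parseInt(props.getProperty(key)));
            } catch (ReflectiveOperationException e) {
                throw new IllegalArgumentException("Unknown client config key: " + key, e);
            }
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("aws.clientconfig.socketTimeout", "60000");
        props.setProperty("aws.clientconfig.maxErrorRetry", "5");
        ClientConfigStub config = new ClientConfigStub();
        applyProperties(config, props, "aws.clientconfig.");
        System.out.println(config.getSocketTimeout() + " " + config.getMaxErrorRetry());
    }
}
```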


> Allow customization of KinesisProxy.getRecords read timeout and retry
> -
>
> Key: FLINK-9124
> URL: https://issues.apache.org/jira/browse/FLINK-9124
> Project: Flink
>  Issue Type: Task
>  Components: Kinesis Connector
>Affects Versions: 1.4.2
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Minor
>
> It should be possible to change the socket read timeout and all other 
> configuration parameters of the underlying AWS ClientConfiguration and also 
> have the option to retry after a socket timeout exception.





[GitHub] flink pull request #5803: [FLINK-9124] [kinesis] Allow customization of Kine...

2018-04-11 Thread tzulitai
Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/5803#discussion_r180909552
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/proxy/KinesisProxy.java
 ---
@@ -176,6 +179,16 @@ protected KinesisProxy(Properties configProps) {
 
}
 
+   /**
+* Create the Kinesis client, using the provided configuration 
properties and default {@link ClientConfiguration}.
+* Derived classes can override this method to customize the client 
configuration.
+* @param configProps
+* @return
+*/
+   protected AmazonKinesis createKinesisClient(Properties configProps) {
--- End diff --

My main concern with allowing overrides of this method is that override 
implementations can completely ignore the `configProps` settings and create a 
Kinesis client entirely unrelated to the original configuration. IMO, this is 
not nice design-wise.

As a different approach, would it be possible to traverse keys in the 
`configProps` and set the `ClientConfiguration` appropriately, such that we 
won't need to be aware of all updated / new keys in the AWS Kinesis SDK? 
Ideally, Flink should not need to maintain its own set of config keys and just 
rely on AWS's keys (for example, Flink actually should not need to define its 
own config keys for AWS credentials).
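
For contrast, the override pattern the PR enables (the `protected createKinesisClient` hook in the diff above) can be sketched as follows. All classes here are hypothetical stand-ins for `KinesisProxy`, `AmazonKinesis`, and `ClientConfiguration`; the property key is invented for illustration.

```java
import java.util.Properties;

public class CreateClientOverride {

    /** Hypothetical stub for the AWS SDK ClientConfiguration. */
    public static class ClientConfig {
        private int socketTimeout = 50_000;
        public void setSocketTimeout(int t) { this.socketTimeout = t; }
        public int getSocketTimeout() { return socketTimeout; }
    }

    /** Hypothetical stub for the AmazonKinesis client handle. */
    public static class KinesisClient {
        private final ClientConfig config;
        public KinesisClient(ClientConfig config) { this.config = config; }
        public ClientConfig getConfig() { return config; }
    }

    /** Stand-in for KinesisProxy exposing the protected factory hook. */
    public static class KinesisProxy {
        protected KinesisClient createKinesisClient(Properties configProps) {
            return new KinesisClient(new ClientConfig());
        }
        public KinesisClient client(Properties configProps) {
            return createKinesisClient(configProps);
        }
    }

    /** Derived proxy that honors configProps rather than ignoring them. */
    public static class TimeoutAwareProxy extends KinesisProxy {
        @Override
        protected KinesisClient createKinesisClient(Properties configProps) {
            ClientConfig config = new ClientConfig();
            // "flink.kinesis.socket.timeout" is a made-up key for this sketch.
            config.setSocketTimeout(Integer.parseInt(
                configProps.getProperty("flink.kinesis.socket.timeout", "50000")));
            return new KinesisClient(config);
        }
    }
}
```

Whether such a subclass respects `configProps` is up to its author, which is exactly the design trade-off debated in this thread.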


---


[jira] [Updated] (FLINK-9157) Support for commonly used external catalog

2018-04-11 Thread Rong Rong (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-9157:
-
Summary: Support for commonly used external catalog  (was: Create support 
for commonly used external catalog)

> Support for commonly used external catalog
> --
>
> Key: FLINK-9157
> URL: https://issues.apache.org/jira/browse/FLINK-9157
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Rong Rong
>Priority: Major
>
> It would be great to have the SQL Client support some external catalogs 
> out-of-the-box, such as Apache HCatalog, for SQL users to configure and 
> utilize easily.





[jira] [Created] (FLINK-9157) Create support for commonly used external catalog

2018-04-11 Thread Rong Rong (JIRA)
Rong Rong created FLINK-9157:


 Summary: Create support for commonly used external catalog
 Key: FLINK-9157
 URL: https://issues.apache.org/jira/browse/FLINK-9157
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Reporter: Rong Rong


It would be great to have the SQL Client support some external catalogs 
out-of-the-box, such as Apache HCatalog, for SQL users to configure and 
utilize easily.





[jira] [Commented] (FLINK-9008) End-to-end test: Quickstarts

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434117#comment-16434117
 ] 

ASF GitHub Bot commented on FLINK-9008:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5823#discussion_r180810121
  
--- Diff: flink-end-to-end-tests/test-scripts/test_quickstarts.sh ---
@@ -0,0 +1,117 @@
+#!/usr/bin/env bash

+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

+
+
+# End-to-end test for the quickstarts.
+
+CURRENT_DIR=$(cd "$( dirname "$0"  )" && pwd )
+
+cd $CURRENT_DIR
+
+mvn archetype:generate \
+-DarchetypeGroupId=org.apache.flink\
+-DarchetypeArtifactId=flink-quickstart-java\
+-DarchetypeVersion=1.4.2   \
+-DgroupId=org.apache.flink.quickstart  \
+-DartifactId=flink-java-project\
+-Dversion=0.1  \
+-Dpackage=org.apache.flink.quickstart  \
+-DinteractiveMode=false
+
+cd flink-java-project
+
+cp $CURRENT_DIR/test-class/ElasticsearchStreamingJob.java $CURRENT_DIR/flink-java-project/src/main/java/org/apache/flink/quickstart/
+
+sed -i -e '80i\
--- End diff --

This is quite brittle. What you have to realize is that any change to the 
original pom may now break this test, even if it is just reorganizing the pom.

A better alternative would be to search for the `<dependencies>` tag and 
insert the dependency after that.
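
The tag-based insertion suggested here could be sketched as a small helper like the following. It is an illustration only (the real test script would more likely use `sed` or `awk`); the class and method names are invented, and the pom fragment in `main` is a toy.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PomDependencyInserter {

    /**
     * Insert the dependency lines right after the opening <dependencies> tag,
     * instead of hard-coding a line number as the sed-based script does.
     * This survives reorganizations of the pom as long as the tag exists.
     */
    public static List<String> insertAfterDependenciesTag(List<String> pomLines,
                                                          List<String> dependency) {
        List<String> out = new ArrayList<>();
        boolean inserted = false;
        for (String line : pomLines) {
            out.add(line);
            if (!inserted && line.trim().equals("<dependencies>")) {
                out.addAll(dependency);
                inserted = true;
            }
        }
        if (!inserted) {
            throw new IllegalStateException("pom.xml has no <dependencies> section");
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> pom = Arrays.asList(
            "<project>", "  <dependencies>", "  </dependencies>", "</project>");
        List<String> dep = Arrays.asList("    <dependency>...</dependency>");
        System.out.println(insertAfterDependenciesTag(pom, dep));
    }
}
```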


> End-to-end test: Quickstarts
> 
>
> Key: FLINK-9008
> URL: https://issues.apache.org/jira/browse/FLINK-9008
> Project: Flink
>  Issue Type: Sub-task
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> We could add an end-to-end test which verifies Flink's quickstarts. It should 
> do the following:
> # create a new Flink project using the quickstarts archetype 
> # add a new Flink dependency to the {{pom.xml}} (e.g. Flink connector or 
> library) 
> # run {{mvn clean package -Pbuild-jar}}
> # verify that no core dependencies are contained in the jar file
> # Run the program





[GitHub] flink pull request #5823: [FLINK-9008] [e2e] Implements quickstarts end to e...

2018-04-11 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5823#discussion_r180810121
  
--- Diff: flink-end-to-end-tests/test-scripts/test_quickstarts.sh ---
@@ -0,0 +1,117 @@
+#!/usr/bin/env bash

+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

+
+
+# End-to-end test for the quickstarts.
+
+CURRENT_DIR=$(cd "$( dirname "$0"  )" && pwd )
+
+cd $CURRENT_DIR
+
+mvn archetype:generate \
+-DarchetypeGroupId=org.apache.flink\
+-DarchetypeArtifactId=flink-quickstart-java\
+-DarchetypeVersion=1.4.2   \
+-DgroupId=org.apache.flink.quickstart  \
+-DartifactId=flink-java-project\
+-Dversion=0.1  \
+-Dpackage=org.apache.flink.quickstart  \
+-DinteractiveMode=false
+
+cd flink-java-project
+
+cp $CURRENT_DIR/test-class/ElasticsearchStreamingJob.java $CURRENT_DIR/flink-java-project/src/main/java/org/apache/flink/quickstart/
+
+sed -i -e '80i\
--- End diff --

This is quite brittle. What you have to realize is that any change to the 
original pom may now break this test, even if it is just reorganizing the pom.

A better alternative would be to search for the `<dependencies>` tag and 
insert the dependency after that.


---


[jira] [Commented] (FLINK-9008) End-to-end test: Quickstarts

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434074#comment-16434074
 ] 

ASF GitHub Bot commented on FLINK-9008:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5823
  
Fixing the CI build error...


> End-to-end test: Quickstarts
> 
>
> Key: FLINK-9008
> URL: https://issues.apache.org/jira/browse/FLINK-9008
> Project: Flink
>  Issue Type: Sub-task
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> We could add an end-to-end test which verifies Flink's quickstarts. It should 
> do the following:
> # create a new Flink project using the quickstarts archetype 
> # add a new Flink dependency to the {{pom.xml}} (e.g. Flink connector or 
> library) 
> # run {{mvn clean package -Pbuild-jar}}
> # verify that no core dependencies are contained in the jar file
> # Run the program





[GitHub] flink issue #5823: [FLINK-9008] [e2e] Implements quickstarts end to end test

2018-04-11 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5823
  
Fixing the CI build error...


---


[jira] [Commented] (FLINK-9087) Change the method signature of RecordWriter#broadcastEvent() from BufferConsumer to void

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433814#comment-16433814
 ] 

ASF GitHub Bot commented on FLINK-9087:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5802
  
merging.


> Change the method signature of RecordWriter#broadcastEvent() from 
> BufferConsumer to void
> 
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.
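
The leak the issue describes, and the signature change that fixes it, can be sketched as follows. `BufferConsumer` here is a hypothetical stub for Flink's buffer resource, and the static `lastConsumer` field exists only so the sketch can be inspected.

```java
public class BroadcastEventSketch {

    /** Hypothetical stand-in for Flink's closeable BufferConsumer. */
    public static class BufferConsumer implements AutoCloseable {
        private boolean closed = false;
        @Override public void close() { closed = true; }
        public boolean isClosed() { return closed; }
    }

    /** Last consumer created; retained only so the sketch can verify closing. */
    static BufferConsumer lastConsumer;

    /**
     * After the signature change the writer owns the consumer's lifecycle:
     * it closes the consumer internally and returns void, so callers such as
     * the streamRecordWriter loop in the issue cannot leak it.
     */
    public static void broadcastEvent(String event) {
        BufferConsumer consumer = new BufferConsumer();
        lastConsumer = consumer;
        try {
            // ... serialize the event and add the consumer to all subpartitions ...
        } finally {
            consumer.close();
        }
    }
}
```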





[GitHub] flink issue #5802: [FLINK-9087] [runtime] change the method signature of Rec...

2018-04-11 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5802
  
merging.


---


[jira] [Commented] (FLINK-9087) Change the method signature of RecordWriter#broadcastEvent() from BufferConsumer to void

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433787#comment-16433787
 ] 

ASF GitHub Bot commented on FLINK-9087:
---

Github user trionesadam commented on the issue:

https://github.com/apache/flink/pull/5802
  
@NicoK  thanks a lot


> Change the method signature of RecordWriter#broadcastEvent() from 
> BufferConsumer to void
> 
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.





[GitHub] flink issue #5802: [FLINK-9087] [runtime] change the method signature of Rec...

2018-04-11 Thread trionesadam
Github user trionesadam commented on the issue:

https://github.com/apache/flink/pull/5802
  
@NicoK  thanks a lot


---


[jira] [Commented] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433736#comment-16433736
 ] 

ASF GitHub Bot commented on FLINK-9156:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5838

[FLINK-9156][REST][CLI] Update --jobmanager option logic for REST client

## What is the purpose of the change

With this PR the `--jobmanager` option is properly respected by the 
`RestClusterClient`. The problem was that the existing code was only setting 
`JobManagerOptions.ADDRESS` and `JobManagerOptions.PORT`, but not the 
corresponding `RestOptions` that are used for all REST API calls. The 
`RestOptions` are accessed 
in `HighAvailabilityServicesUtils#createHighAvailabilityServices` to create the 
webmonitor URL that is used by the client.

## Brief change log

* set appropriate RestOptions in `CliFrontend#setJobManagerAddressInConfig`
* add test
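
The gist of the change can be sketched like this: the `-m` address must populate the REST keys in addition to the legacy RPC keys. The `Configuration` stub and the literal key strings are assumptions for illustration, not the exact Flink constants.

```java
import java.util.HashMap;
import java.util.Map;

public class JobManagerAddressConfig {

    /** Map-backed stand-in for Flink's Configuration class (hypothetical). */
    public static class Configuration {
        private final Map<String, String> values = new HashMap<>();
        public void setString(String key, String value) { values.put(key, value); }
        public String getString(String key) { return values.get(key); }
    }

    /**
     * Sketch of the fix: besides the JobManagerOptions keys, also set the
     * RestOptions keys that the RestClusterClient reads; previously the REST
     * client fell back to whatever flink-conf.yaml specified.
     */
    public static void setJobManagerAddressInConfig(Configuration config, String host, int port) {
        config.setString("jobmanager.rpc.address", host);
        config.setString("jobmanager.rpc.port", String.valueOf(port));
        // Previously missing: the REST client ignores the two keys above.
        config.setString("rest.address", host);
        config.setString("rest.port", String.valueOf(port));
    }
}
```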

## Verifying this change

* run `RestClusterClientTest#testRESTManualConfigurationOverride`

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9156

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5838.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5838


commit 661c313fcb4b5f4b4d5b31d52b4d9637a081035e
Author: zentol 
Date:   2018-04-11T10:48:51Z

[FLINK-9156][REST][CLI] Update --jobmanager option logic for REST client




> CLI does not respect -m,--jobmanager option
> ---
>
> Key: FLINK-9156
> URL: https://issues.apache.org/jira/browse/FLINK-9156
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0
> Environment: 1.5 RC1
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>
> *Description*
> The CLI does not respect the {{-m, --jobmanager}} option. For example 
> submitting a job using 
> {noformat}
> bin/flink run -m 172.31.35.68:6123 [...]
> {noformat}
> results in the client trying to connect to what is specified in 
> {{flink-conf.yaml}} ({{jobmanager.rpc.address, jobmanager.rpc.port}}).
> *Stacktrace*
> {noformat}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Could not submit 
> job 99b0a48ec5cb4086740b1ffd38efd1af.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
>   at 
> org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
>   at 
> 

[GitHub] flink pull request #5838: [FLINK-9156][REST][CLI] Update --jobmanager option...

2018-04-11 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5838

[FLINK-9156][REST][CLI] Update --jobmanager option logic for REST client

## What is the purpose of the change

With this PR the `--jobmanager` option is properly respected by the 
`RestClusterClient`. The problem was that the existing code was only setting 
`JobManagerOptions.ADDRESS` and `JobManagerOptions.PORT`, but not the 
corresponding `RestOptions` that are used for all REST API calls. The 
`RestOptions` are accessed 
in `HighAvailabilityServicesUtils#createHighAvailabilityServices` to create the 
webmonitor URL that is used by the client.

## Brief change log

* set appropriate RestOptions in `CliFrontend#setJobManagerAddressInConfig`
* add test

## Verifying this change

* run `RestClusterClientTest#testRESTManualConfigurationOverride`

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9156

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5838.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5838


commit 661c313fcb4b5f4b4d5b31d52b4d9637a081035e
Author: zentol 
Date:   2018-04-11T10:48:51Z

[FLINK-9156][REST][CLI] Update --jobmanager option logic for REST client




---


[jira] [Assigned] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-11 Thread Gary Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reassigned FLINK-9156:
---

Assignee: Chesnay Schepler  (was: Gary Yao)

> CLI does not respect -m,--jobmanager option
> ---
>
> Key: FLINK-9156
> URL: https://issues.apache.org/jira/browse/FLINK-9156
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0
> Environment: 1.5 RC1
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>
> *Description*
> The CLI does not respect the {{-m, --jobmanager}} option. For example 
> submitting a job using 
> {noformat}
> bin/flink run -m 172.31.35.68:6123 [...]
> {noformat}
> results in the client trying to connect to what is specified in 
> {{flink-conf.yaml}} ({{jobmanager.rpc.address, jobmanager.rpc.port}}).
> *Stacktrace*
> {noformat}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Could not submit 
> job 99b0a48ec5cb4086740b1ffd38efd1af.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
>   at 
> org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
>   at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>   at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkException: Could not upload job jar files.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:326)
>   at 
> java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
>   at 
> java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
>   ... 7 more
> Caused by: org.apache.flink.util.FlinkException: Could not upload job jar 
> files.
>   ... 10 more
> Caused by: java.io.IOException: Could not connect to BlobServer at address 
> /127.0.0.1:41909
>   at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:124)
>   at 
> org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:547)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:324)
>   ... 9 more
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at 

[jira] [Assigned] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-11 Thread Gary Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reassigned FLINK-9156:
---

Assignee: Gary Yao  (was: Chesnay Schepler)

> CLI does not respect -m,--jobmanager option
> ---
>
> Key: FLINK-9156
> URL: https://issues.apache.org/jira/browse/FLINK-9156
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0
> Environment: 1.5 RC1
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Blocker
> Fix For: 1.5.0
>
>
> *Description*
> The CLI does not respect the {{-m, --jobmanager}} option. For example 
> submitting a job using 
> {noformat}
> bin/flink run -m 172.31.35.68:6123 [...]
> {noformat}
> results in the client trying to connect to what is specified in 
> {{flink-conf.yaml}} ({{jobmanager.rpc.address, jobmanager.rpc.port}}).
> *Stacktrace*
> {noformat}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Could not submit 
> job 99b0a48ec5cb4086740b1ffd38efd1af.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
>   at 
> org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
>   at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>   at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkException: Could not upload job jar files.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:326)
>   at 
> java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
>   at 
> java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
>   ... 7 more
> Caused by: org.apache.flink.util.FlinkException: Could not upload job jar 
> files.
>   ... 10 more
> Caused by: java.io.IOException: Could not connect to BlobServer at address 
> /127.0.0.1:41909
>   at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:124)
>   at 
> org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:547)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:324)
>   ... 9 more
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at 

[jira] [Assigned] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-11 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-9156:
---

Assignee: Chesnay Schepler

> CLI does not respect -m,--jobmanager option
> ---
>
> Key: FLINK-9156
> URL: https://issues.apache.org/jira/browse/FLINK-9156
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0
> Environment: 1.5 RC1
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>
> *Description*
> The CLI does not respect the {{-m, --jobmanager}} option. For example 
> submitting a job using 
> {noformat}
> bin/flink run -m 172.31.35.68:6123 [...]
> {noformat}
> results in the client trying to connect to what is specified in 
> {{flink-conf.yaml}} ({{jobmanager.rpc.address, jobmanager.rpc.port}}).
> *Stacktrace*
> {noformat}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Could not submit 
> job 99b0a48ec5cb4086740b1ffd38efd1af.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
>   at 
> org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
>   at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>   at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkException: Could not upload job jar files.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:326)
>   at 
> java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
>   at 
> java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
>   ... 7 more
> Caused by: org.apache.flink.util.FlinkException: Could not upload job jar 
> files.
>   ... 10 more
> Caused by: java.io.IOException: Could not connect to BlobServer at address 
> /127.0.0.1:41909
>   at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:124)
>   at 
> org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:547)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:324)
>   ... 9 more
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at 

[jira] [Updated] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-11 Thread Gary Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-9156:

Description: 
*Description*
The CLI does not respect the {{-m, --jobmanager}} option. For example 
submitting a job using 
{noformat}
bin/flink run -m 172.31.35.68:6123 [...]
{noformat}

results in the client trying to connect to what is specified in 
{{flink-conf.yaml}} ({{jobmanager.rpc.address, jobmanager.rpc.port}}).

*Stacktrace*

{noformat}

 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: Could not submit 
job 99b0a48ec5cb4086740b1ffd38efd1af.
at 
org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
at 
org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
at 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
submit JobGraph.
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
at 
java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
at 
java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: 
org.apache.flink.util.FlinkException: Could not upload job jar files.
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:326)
at 
java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
at 
java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
... 7 more
Caused by: org.apache.flink.util.FlinkException: Could not upload job jar files.
... 10 more
Caused by: java.io.IOException: Could not connect to BlobServer at address 
/127.0.0.1:41909
at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:124)
at 
org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:547)
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:324)
... 9 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:118)
... 11 more
{noformat}

  was:
*Description*
The CLI does not respect the {{-m, --jobmanager}} option. For example 
submitting a job using 
{noformat}
bin/flink run -m 172.31.35.68:6123 [...]
{noformat}

results in the client trying to connect to what is specified in 
{{flink-conf.yaml}}.

*Stacktrace*

{noformat}

 

[jira] [Commented] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-11 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433690#comment-16433690
 ] 

mingleizhang commented on FLINK-9156:
-

Yes. The CLI option should take precedence; the client should not fall back to 
{{jobmanager.rpc.address}} in {{flink-conf.yaml}} when {{-m}} is given.
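
The precedence rule discussed here can be sketched in plain Java. The method
and class names below are illustrative, not Flink's actual client code; only
the config keys mirror {{jobmanager.rpc.address}}/{{jobmanager.rpc.port}} from
{{flink-conf.yaml}}:

```java
import java.util.Map;

/**
 * Minimal sketch of the expected precedence: a -m/--jobmanager CLI value,
 * when present, wins over flink-conf.yaml. Illustrative only.
 */
public class JobManagerAddressResolver {

    static String resolveJobManagerAddress(String cliOption, Map<String, String> config) {
        if (cliOption != null && !cliOption.isEmpty()) {
            // CLI flag takes precedence over the configuration file.
            return cliOption;
        }
        // Fall back to flink-conf.yaml only when no flag was given.
        return config.get("jobmanager.rpc.address") + ":" + config.get("jobmanager.rpc.port");
    }

    public static void main(String[] args) {
        Map<String, String> conf = Map.of(
            "jobmanager.rpc.address", "localhost",
            "jobmanager.rpc.port", "6123");
        System.out.println(resolveJobManagerAddress("172.31.35.68:6123", conf));
        System.out.println(resolveJobManagerAddress(null, conf));
    }
}
```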

> CLI does not respect -m,--jobmanager option
> ---
>
> Key: FLINK-9156
> URL: https://issues.apache.org/jira/browse/FLINK-9156
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0
> Environment: 1.5 RC1
>Reporter: Gary Yao
>Priority: Blocker
> Fix For: 1.5.0
>
>
> *Description*
> The CLI does not respect the {{-m, --jobmanager}} option. For example 
> submitting a job using 
> {noformat}
> bin/flink run -m 172.31.35.68:6123 [...]
> {noformat}
> results in the client trying to connect to what is specified in 
> {{flink-conf.yaml}}.
> *Stacktrace*
> {noformat}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Could not submit 
> job 99b0a48ec5cb4086740b1ffd38efd1af.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
>   at 
> org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
>   at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>   at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkException: Could not upload job jar files.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:326)
>   at 
> java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
>   at 
> java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
>   ... 7 more
> Caused by: org.apache.flink.util.FlinkException: Could not upload job jar 
> files.
>   ... 10 more
> Caused by: java.io.IOException: Could not connect to BlobServer at address 
> /127.0.0.1:41909
>   at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:124)
>   at 
> org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:547)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:324)
>   ... 9 more
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> 

[jira] [Commented] (FLINK-9140) simplify scalastyle configurations

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433674#comment-16433674
 ] 

ASF GitHub Bot commented on FLINK-9140:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5819
  
merging.


> simplify scalastyle configurations
> --
>
> Key: FLINK-9140
> URL: https://issues.apache.org/jira/browse/FLINK-9140
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.5.0, 1.6.0
>
>
> Simplifying {{}} to {{}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5819: [FLINK-9140] [Build System] [scalastyle] simplify scalast...

2018-04-11 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5819
  
merging.


---


[GitHub] flink issue #5827: [hotfix][docs][minor] fix typo in documentation

2018-04-11 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5827
  
merging.


---


[jira] [Created] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-11 Thread Gary Yao (JIRA)
Gary Yao created FLINK-9156:
---

 Summary: CLI does not respect -m,--jobmanager option
 Key: FLINK-9156
 URL: https://issues.apache.org/jira/browse/FLINK-9156
 Project: Flink
  Issue Type: Bug
  Components: Client
Affects Versions: 1.5.0
 Environment: 1.5 RC1
Reporter: Gary Yao
 Fix For: 1.5.0


*Description*
The CLI does not respect the {{-m}} option. For example submitting a job using 
{noformat}
bin/flink run -m 172.31.35.68:6123 [...]
{noformat}

results in the client trying to connect to what is specified in 
{{flink-conf.yaml}}.

*Stacktrace*

{noformat}

 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: Could not submit 
job 99b0a48ec5cb4086740b1ffd38efd1af.
at 
org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
at 
org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
at 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
submit JobGraph.
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
at 
java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
at 
java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: 
org.apache.flink.util.FlinkException: Could not upload job jar files.
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:326)
at 
java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
at 
java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
... 7 more
Caused by: org.apache.flink.util.FlinkException: Could not upload job jar files.
... 10 more
Caused by: java.io.IOException: Could not connect to BlobServer at address 
/127.0.0.1:41909
at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:124)
at 
org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:547)
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:324)
... 9 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:118)
... 11 more
{noformat}





[jira] [Updated] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-11 Thread Gary Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-9156:

Description: 
*Description*
The CLI does not respect the {{-m, --jobmanager}} option. For example 
submitting a job using 
{noformat}
bin/flink run -m 172.31.35.68:6123 [...]
{noformat}

results in the client trying to connect to what is specified in 
{{flink-conf.yaml}}.

*Stacktrace*

{noformat}

 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: Could not submit 
job 99b0a48ec5cb4086740b1ffd38efd1af.
at 
org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
at 
org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
at 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
submit JobGraph.
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
at 
java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
at 
java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: 
org.apache.flink.util.FlinkException: Could not upload job jar files.
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:326)
at 
java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
at 
java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
... 7 more
Caused by: org.apache.flink.util.FlinkException: Could not upload job jar files.
... 10 more
Caused by: java.io.IOException: Could not connect to BlobServer at address 
/127.0.0.1:41909
at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:124)
at 
org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:547)
at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:324)
... 9 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:118)
... 11 more
{noformat}

  was:
*Description*
The CLI does not respect the {{-m}} option. For example submitting a job using 
{noformat}
bin/flink run -m 172.31.35.68:6123 [...]
{noformat}

results in the client trying to connect to what is specified in 
{{flink-conf.yaml}}.

*Stacktrace*

{noformat}

 The program finished with the following exception:


[jira] [Commented] (FLINK-9147) PrometheusReporter jar does not include Prometheus dependencies

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433668#comment-16433668
 ] 

ASF GitHub Bot commented on FLINK-9147:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5828
  
merging.


> PrometheusReporter jar does not include Prometheus dependencies
> ---
>
> Key: FLINK-9147
> URL: https://issues.apache.org/jira/browse/FLINK-9147
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The {{PrometheusReporter}} seems to lack the shaded Prometheus dependencies.





[GitHub] flink issue #5828: [FLINK-9147] [metrics] Include shaded Prometheus dependen...

2018-04-11 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5828
  
merging.


---


[jira] [Commented] (FLINK-8426) Error in Generating Timestamp/Watermakr doc

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433667#comment-16433667
 ] 

ASF GitHub Bot commented on FLINK-8426:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5837
  
merging


> Error in Generating Timestamp/Watermakr doc
> ---
>
> Key: FLINK-8426
> URL: https://issues.apache.org/jira/browse/FLINK-8426
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.0
>Reporter: Christophe Jolif
>Assignee: Dmitrii Kniazev
>Priority: Trivial
>
> In 
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/event_timestamps_watermarks.html
> {{public class BoundedOutOfOrdernessGenerator extends 
> AssignerWithPeriodicWatermarks}}
> should be
> {{public class BoundedOutOfOrdernessGenerator implements 
> AssignerWithPeriodicWatermarks}}
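
For context, a minimal stand-in showing why the declaration must use 
{{implements}}: {{AssignerWithPeriodicWatermarks}} is an interface in Flink, 
so a user class cannot extend it. The interface below is an illustrative 
stand-in, not Flink's actual definition:

```java
/**
 * Illustrative stand-in for Flink's AssignerWithPeriodicWatermarks
 * interface; not the real API.
 */
interface PeriodicWatermarkAssigner<T> {
    long extractTimestamp(T element);
}

// Using `extends` here would not compile, because the target is an interface.
public class BoundedOutOfOrdernessGenerator implements PeriodicWatermarkAssigner<String> {
    @Override
    public long extractTimestamp(String element) {
        // Placeholder timestamp logic for the sketch.
        return element.length();
    }
}
```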





[GitHub] flink issue #5837: [FLINK-8426][docs] Error in Generating Timestamp/Watermak...

2018-04-11 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5837
  
merging


---


[jira] [Commented] (FLINK-8426) Error in Generating Timestamp/Watermakr doc

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433650#comment-16433650
 ] 

ASF GitHub Bot commented on FLINK-8426:
---

GitHub user mylog00 opened a pull request:

https://github.com/apache/flink/pull/5837

[FLINK-8426][docs] Error in Generating Timestamp/Watermakr doc

[FLINK-8426][docs] fix java examples for "Generating Timestamp/Watermakr" 
documentation.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mylog00/flink FLINK-8426

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5837.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5837


commit 8b3a47a57ce42a8ab375e5d465e2a9981bd649a1
Author: Dmitrii_Kniazev 
Date:   2018-04-11T09:46:22Z

[FLINK-8426][docs] fix java examples for "Generating Timestamp/Watermakr" 
documentation.




> Error in Generating Timestamp/Watermakr doc
> ---
>
> Key: FLINK-8426
> URL: https://issues.apache.org/jira/browse/FLINK-8426
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.0
>Reporter: Christophe Jolif
>Assignee: Dmitrii Kniazev
>Priority: Trivial
>
> In 
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/event_timestamps_watermarks.html
> {{public class BoundedOutOfOrdernessGenerator extends 
> AssignerWithPeriodicWatermarks}}
> should be
> {{public class BoundedOutOfOrdernessGenerator implements 
> AssignerWithPeriodicWatermarks}}





[GitHub] flink pull request #5837: [FLINK-8426][docs] Error in Generating Timestamp/W...

2018-04-11 Thread mylog00
GitHub user mylog00 opened a pull request:

https://github.com/apache/flink/pull/5837

[FLINK-8426][docs] Error in Generating Timestamp/Watermakr doc

[FLINK-8426][docs] fix java examples for "Generating Timestamp/Watermakr" 
documentation.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mylog00/flink FLINK-8426

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5837.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5837


commit 8b3a47a57ce42a8ab375e5d465e2a9981bd649a1
Author: Dmitrii_Kniazev 
Date:   2018-04-11T09:46:22Z

[FLINK-8426][docs] fix java examples for "Generating Timestamp/Watermakr" 
documentation.




---


[jira] [Assigned] (FLINK-8426) Error in Generating Timestamp/Watermakr doc

2018-04-11 Thread Dmitrii Kniazev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitrii Kniazev reassigned FLINK-8426:
--

Assignee: Dmitrii Kniazev

> Error in Generating Timestamp/Watermakr doc
> ---
>
> Key: FLINK-8426
> URL: https://issues.apache.org/jira/browse/FLINK-8426
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.0
>Reporter: Christophe Jolif
>Assignee: Dmitrii Kniazev
>Priority: Trivial
>
> In 
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/event_timestamps_watermarks.html
> {{public class BoundedOutOfOrdernessGenerator extends 
> AssignerWithPeriodicWatermarks}}
> should be
> {{public class BoundedOutOfOrdernessGenerator implements 
> AssignerWithPeriodicWatermarks}}





[jira] [Commented] (FLINK-9141) Calling getSideOutput() and split() on one DataStream causes NPE

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433602#comment-16433602
 ] 

ASF GitHub Bot commented on FLINK-9141:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5836

[FLINK-9141][datastream] Fail early when using both split and side-outputs

## What is the purpose of the change

With this PR we fail early if a user attempts to use split() and 
side-outputs on a single DataStream. Previously this would lead to a 
NullPointerException at runtime.

## Brief change log

* keep track of split() calls in `SingleOutputStreamOperator` by overriding 
it and setting the `wasSplitApplied` flag
* add checks to split() and getSideOutput() that throw an exception if the 
other method was already called


## Verifying this change

This change added tests and can be verified as follows:
* run SplitSideOutputTest

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9141

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5836.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5836


commit ed3ec8716c6d26eee31c4d0ff02c8bfdd70a19d4
Author: zentol 
Date:   2018-04-11T09:13:52Z

[FLINK-9141][datastream] Fail early when using both split and side-outputs




> Calling getSideOutput() and split() on one DataStream causes NPE
> 
>
> Key: FLINK-9141
> URL: https://issues.apache.org/jira/browse/FLINK-9141
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>
> Calling both {{getSideOutput()}} and {{split()}} on one DataStream causes a 
> {{NullPointerException}} to be thrown at runtime.
> As a work-around one can add a no-op map function before the split() call.
> Exception:
> {code}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.api.collector.selector.DirectedOutput.<init>(DirectedOutput.java:79)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:326)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:128)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:274)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Reproducer:
> {code}
> private static final OutputTag<String> tag = new OutputTag<String>("tag") {};
> public static void main(String[] args) throws Exception {
>   StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   DataStream<String> dataStream1 = env.fromElements("foo");
>   SingleOutputStreamOperator<String> processedStream = dataStream1
>   .process(new ProcessFunction<String, String>() {
>   @Override
>   public void processElement(String value, Context ctx, 
> Collector<String> out) {
>   }
>   });
>   processedStream.getSideOutput(tag)
>   .print();
>   processedStream
>   .split(Collections::singletonList)
>   .select("bar")
>   .print();
>   env.execute();
> }
> {code}





[GitHub] flink pull request #5836: [FLINK-9141][datastream] Fail early when using bot...

2018-04-11 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5836

[FLINK-9141][datastream] Fail early when using both split and side-outputs

## What is the purpose of the change

With this PR we fail early if a user attempts to use split() and 
side-outputs on a single DataStream. Previously this would lead to a 
NullPointerException at runtime.

## Brief change log

* keep track of split() calls in `SingleOutputStreamOperator` by overriding 
it and setting the `wasSplitApplied` flag
* add checks to split() and getSideOutput() that throw an exception if the 
other method was already called


## Verifying this change

This change added tests and can be verified as follows:
* run SplitSideOutputTest

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9141

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5836.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5836


commit ed3ec8716c6d26eee31c4d0ff02c8bfdd70a19d4
Author: zentol 
Date:   2018-04-11T09:13:52Z

[FLINK-9141][datastream] Fail early when using both split and side-outputs




---


[jira] [Assigned] (FLINK-9141) Calling getSideOutput() and split() on one DataStream causes NPE

2018-04-11 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-9141:
---

Assignee: Chesnay Schepler

> Calling getSideOutput() and split() on one DataStream causes NPE
> 
>
> Key: FLINK-9141
> URL: https://issues.apache.org/jira/browse/FLINK-9141
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>
> Calling both {{getSideOutput()}} and {{split()}} on one DataStream causes a 
> {{NullPointerException}} to be thrown at runtime.
> As a work-around one can add a no-op map function before the split() call.
> Exception:
> {code}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.api.collector.selector.DirectedOutput.<init>(DirectedOutput.java:79)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:326)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:128)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:274)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Reproducer:
> {code}
> private static final OutputTag<String> tag = new OutputTag<String>("tag") {};
> public static void main(String[] args) throws Exception {
>   StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   DataStream<String> dataStream1 = env.fromElements("foo");
>   SingleOutputStreamOperator<String> processedStream = dataStream1
>   .process(new ProcessFunction<String, String>() {
>   @Override
>   public void processElement(String value, Context ctx, 
> Collector<String> out) {
>   }
>   });
>   processedStream.getSideOutput(tag)
>   .print();
>   processedStream
>   .split(Collections::singletonList)
>   .select("bar")
>   .print();
>   env.execute();
> }
> {code}





[jira] [Assigned] (FLINK-9141) Calling getSideOutput() and split() on one DataStream causes NPE

2018-04-11 Thread vinoyang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-9141:
---

Assignee: (was: vinoyang)

> Calling getSideOutput() and split() on one DataStream causes NPE
> 
>
> Key: FLINK-9141
> URL: https://issues.apache.org/jira/browse/FLINK-9141
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Priority: Critical
>
> Calling both {{getSideOutput()}} and {{split()}} on one DataStream causes a 
> {{NullPointerException}} to be thrown at runtime.
> As a work-around one can add a no-op map function before the split() call.
> Exception:
> {code}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.api.collector.selector.DirectedOutput.<init>(DirectedOutput.java:79)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:326)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:128)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:274)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Reproducer:
> {code}
> private static final OutputTag<String> tag = new OutputTag<String>("tag") {};
> public static void main(String[] args) throws Exception {
>   StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   DataStream<String> dataStream1 = env.fromElements("foo");
>   SingleOutputStreamOperator<String> processedStream = dataStream1
>   .process(new ProcessFunction<String, String>() {
>   @Override
>   public void processElement(String value, Context ctx, 
> Collector<String> out) {
>   }
>   });
>   processedStream.getSideOutput(tag)
>   .print();
>   processedStream
>   .split(Collections::singletonList)
>   .select("bar")
>   .print();
>   env.execute();
> }
> {code}





[jira] [Updated] (FLINK-9141) Calling getSideOutput() and split() on one DataStream causes NPE

2018-04-11 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-9141:

Description: 
Calling both {{getSideOutput()}} and {{split()}} on one DataStream causes a 
{{NullPointerException}} to be thrown at runtime.

As a work-around one can add a no-op map function before the split() call.

Exception:
{code}
Caused by: java.lang.NullPointerException
at 
org.apache.flink.streaming.api.collector.selector.DirectedOutput.<init>(DirectedOutput.java:79)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:326)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:128)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:274)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:745)
{code}

Reproducer:
{code}
private static final OutputTag<String> tag = new OutputTag<String>("tag") {};

public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<String> dataStream1 = env.fromElements("foo");

SingleOutputStreamOperator<String> processedStream = dataStream1
.process(new ProcessFunction<String, String>() {
@Override
public void processElement(String value, Context ctx, 
Collector<String> out) {
}
});

processedStream.getSideOutput(tag)
.print();

processedStream
.split(Collections::singletonList)
.select("bar")
.print();

env.execute();
}
{code}

  was:
Calling both {{getSideOutput()}} and {{split()}} on one DataStream causes a 
{{NullPointerException}} to be thrown at runtime.

As a work-around one can add a no-op map function before the split() call.

Exception:
{code}
Caused by: java.lang.NullPointerException
at 
org.apache.flink.streaming.api.collector.selector.DirectedOutput.<init>(DirectedOutput.java:79)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:326)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:128)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:274)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:745)
{code}

Reproducer:
{code}
private static final OutputTag<String> tag = new OutputTag<String>("tag") {};

public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<String> dataStream1 = env.fromElements("foo");

SingleOutputStreamOperator<String> processedStream = dataStream1
.process(new ProcessFunction<String, String>() {
@Override
public void processElement(String value, Context ctx, 
Collector<String> out) {
}
});

processedStream.getSideOutput(tag)
.print();

processedStream
.map(record -> record)
.split(Collections::singletonList)
.select("bar")
.print();

env.execute();
}
{code}


> Calling getSideOutput() and split() on one DataStream causes NPE
> 
>
> Key: FLINK-9141
> URL: https://issues.apache.org/jira/browse/FLINK-9141
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: vinoyang
>Priority: Critical
>
> Calling both {{getSideOutput()}} and {{split()}} on one DataStream causes a 
> {{NullPointerException}} to be thrown at runtime.
> As a work-around one can add a no-op map function before the split() call.
> Exception:
> {code}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.api.collector.selector.DirectedOutput.<init>(DirectedOutput.java:79)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:326)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:128)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:274)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Reproducer:
> {code}
> private static final OutputTag<String> tag = new OutputTag<String>("tag") {};
> public static void main(String[] args) throws Exception {
>   StreamExecutionEnvironment env = 

[GitHub] flink issue #5823: [FLINK-9008] [e2e] Implements quickstarts end to end test

2018-04-11 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5823
  
This commit seems to have an issue and is not suitable for review right now. 
I will fix it soon.


---


[jira] [Commented] (FLINK-9008) End-to-end test: Quickstarts

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433570#comment-16433570
 ] 

ASF GitHub Bot commented on FLINK-9008:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5823
  
This commit seems to have an issue and is not suitable for review right now. 
I will fix it soon.


> End-to-end test: Quickstarts
> 
>
> Key: FLINK-9008
> URL: https://issues.apache.org/jira/browse/FLINK-9008
> Project: Flink
>  Issue Type: Sub-task
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> We could add an end-to-end test which verifies Flink's quickstarts. It should 
> do the following:
> # create a new Flink project using the quickstarts archetype 
> # add a new Flink dependency to the {{pom.xml}} (e.g. Flink connector or 
> library) 
> # run {{mvn clean package -Pbuild-jar}}
> # verify that no core dependencies are contained in the jar file
> # Run the program
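
The steps above can be sketched as a shell script. The mvn/flink invocations are left as comments since they need a full build environment, and the archetype coordinates are an assumption based on the Flink quickstart, not taken from this issue:

```shell
#!/usr/bin/env sh
# Sketch of the quickstart end-to-end check outlined above.

# 1) create a project from the quickstart archetype (assumed coordinates):
#      mvn archetype:generate -DarchetypeGroupId=org.apache.flink \
#          -DarchetypeArtifactId=flink-quickstart-java -DarchetypeVersion=1.5.0
# 2) add an extra Flink connector/library dependency to the generated pom.xml
# 3) build the fat jar:
#      mvn clean package -Pbuild-jar

# 4) verify that no Flink core classes were bundled into the jar; reads the
#    output of 'jar tf target/*.jar' on stdin and fails if core packages appear.
check_no_core_deps() {
    ! grep -q '^org/apache/flink/api/common/'
}

# 5) run the program, e.g.:
#      bin/flink run target/*.jar
```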





[jira] [Commented] (FLINK-2435) Add support for custom CSV field parsers

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433525#comment-16433525
 ] 

ASF GitHub Bot commented on FLINK-2435:
---

GitHub user DmitryKober opened a pull request:

https://github.com/apache/flink/pull/5835

[FLINK-2435] Extending CsvReader capabilities: user-defined classes can now 
be parsed from CSV fields.
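
The capability in the title can be sketched as a small per-type parser registry. {{CsvFieldParsers}} and {{Point}} are hypothetical names for illustration only; this does not reproduce Flink's actual {{FieldParser}} API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the idea: user code registers a parser for its own class and the
// CSV reader applies it to the raw text of a field.
class CsvFieldParsers {
    private final Map<Class<?>, Function<String, ?>> parsers = new HashMap<>();

    <T> void register(Class<T> type, Function<String, T> parser) {
        parsers.put(type, parser);
    }

    <T> T parse(Class<T> type, String rawField) {
        Function<String, ?> parser = parsers.get(type);
        if (parser == null) {
            throw new IllegalArgumentException("No parser registered for " + type);
        }
        return type.cast(parser.apply(rawField));
    }
}

// Example user-defined class stored in a single CSV field such as "3;4".
class Point {
    final int x;
    final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
```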

*Thank you very much for contributing to Apache Flink - we are happy that 
you want to help us improve Flink. To help the community review your 
contribution in the best possible way, please go through the checklist below, 
which will get the contribution into a shape in which it can be best reviewed.*

*Please understand that we do not do this to make contributions to Flink a 
hassle. In order to uphold a high standard of quality for code contributions, 
while at the same time managing a large number of contributions, we need 
contributors to prepare the contributions well, and give reviewers enough 
contextual information for the review. Please also understand that 
contributions that do not follow this guide will take longer to review and thus 
typically be picked up with lower priority by the community.*

## Contribution Checklist

  - Make sure that the pull request corresponds to a [JIRA 
issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are 
made for typos in JavaDoc or documentation files, which need no JIRA issue.
  
  - Name the pull request in the form "[FLINK-XXXX] [component] Title of 
the pull request", where *FLINK-XXXX* should be replaced by the actual issue 
number. Skip *component* if you are unsure about which is the best component.
  Typo fixes that have no associated JIRA issue should be named following 
this pattern: `[hotfix] [docs] Fix typo in event time introduction` or 
`[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.

  - Fill out the template below to describe the changes contributed by the 
pull request. That will give reviewers the context they need to do the review.
  
  - Make sure that the change passes the automated tests, i.e., `mvn clean 
verify` passes. You can set up Travis CI to do that following [this 
guide](http://flink.apache.org/contribute-code.html#best-practices).

  - Each pull request should address only one issue, not mix up code from 
multiple issues.
  
  - Each commit in the pull request has a meaningful commit message 
(including the JIRA id)

  - Once all items of the checklist are addressed, remove the above text 
and this checklist, leaving only the filled out template below.


**(The sections below can be removed for hotfixes of typos)**

## What is the purpose of the change

*(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*


## Brief change log

*(for example:)*
  - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
  - *Deployments RPC transmits only the blob storage reference*
  - *TaskManagers retrieve the TaskInfo from the blob cache*


## Verifying this change

*(Please pick either of the following options)*

This change is a trivial rework / code cleanup without any test coverage.

*(or)*

This change is already covered by existing tests, such as *(please describe 
tests)*.

*(or)*

This change added tests and can be verified as follows:

*(example:)*
  - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
  - *Extended integration test for recovery after master (JobManager) 
failure*
  - *Added test that validates that TaskInfo is transferred only once 
across recoveries*
  - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
  - The serializers: (yes / no / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  - The S3 file system connector: (yes / no / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / no)
  - If 

[GitHub] flink pull request #5835: [FLINK-2435] Extending CsvReader capabilities: it ...

2018-04-11 Thread DmitryKober
GitHub user DmitryKober opened a pull request:

https://github.com/apache/flink/pull/5835

[FLINK-2435] Extending CsvReader capabilities: user-defined classes can now 
be parsed from CSV fields.

*Thank you very much for contributing to Apache Flink - we are happy that 
you want to help us improve Flink. To help the community review your 
contribution in the best possible way, please go through the checklist below, 
which will get the contribution into a shape in which it can be best reviewed.*

*Please understand that we do not do this to make contributions to Flink a 
hassle. In order to uphold a high standard of quality for code contributions, 
while at the same time managing a large number of contributions, we need 
contributors to prepare the contributions well, and give reviewers enough 
contextual information for the review. Please also understand that 
contributions that do not follow this guide will take longer to review and thus 
typically be picked up with lower priority by the community.*

## Contribution Checklist

  - Make sure that the pull request corresponds to a [JIRA 
issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are 
made for typos in JavaDoc or documentation files, which need no JIRA issue.
  
  - Name the pull request in the form "[FLINK-XXXX] [component] Title of 
the pull request", where *FLINK-XXXX* should be replaced by the actual issue 
number. Skip *component* if you are unsure about which is the best component.
  Typo fixes that have no associated JIRA issue should be named following 
this pattern: `[hotfix] [docs] Fix typo in event time introduction` or 
`[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.

  - Fill out the template below to describe the changes contributed by the 
pull request. That will give reviewers the context they need to do the review.
  
  - Make sure that the change passes the automated tests, i.e., `mvn clean 
verify` passes. You can set up Travis CI to do that following [this 
guide](http://flink.apache.org/contribute-code.html#best-practices).

  - Each pull request should address only one issue, not mix up code from 
multiple issues.
  
  - Each commit in the pull request has a meaningful commit message 
(including the JIRA id)

  - Once all items of the checklist are addressed, remove the above text 
and this checklist, leaving only the filled out template below.


**(The sections below can be removed for hotfixes of typos)**

## What is the purpose of the change

*(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*


## Brief change log

*(for example:)*
  - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
  - *Deployments RPC transmits only the blob storage reference*
  - *TaskManagers retrieve the TaskInfo from the blob cache*


## Verifying this change

*(Please pick either of the following options)*

This change is a trivial rework / code cleanup without any test coverage.

*(or)*

This change is already covered by existing tests, such as *(please describe 
tests)*.

*(or)*

This change added tests and can be verified as follows:

*(example:)*
  - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
  - *Extended integration test for recovery after master (JobManager) 
failure*
  - *Added test that validates that TaskInfo is transferred only once 
across recoveries*
  - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
  - The serializers: (yes / no / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  - The S3 file system connector: (yes / no / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / no)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/DmitryKober/flink flink-2435

Alternatively you