[jira] [Commented] (FLINK-6571) InfiniteSource in SourceStreamOperatorTest should deal with InterruptedExceptions

2018-04-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429897#comment-16429897
 ] 

ASF GitHub Bot commented on FLINK-6571:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5394
  
I think neither solves the problem.

Variant 2 looks identical to what we have in master.

Variant 1 only allows interrupts after the task was canceled.
According to what @StephanEwen said, if the UDF throws an exception after 
the task was canceled, the exception will be suppressed and should not lead to a 
test failure. Since the test did fail, the exception must have been thrown 
_before_ the task was canceled. Given that variant 1 still throws an exception 
in this case, we aren't solving the stability issue.
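For reference, a minimal sketch of the "variant 1" behavior under discussion — swallowing the interrupt only once the source has been canceled — illustrates the point above: an interrupt that arrives _before_ cancel() still surfaces as an exception. The class and member names below are hypothetical stand-ins, not the actual test code.

```java
// Hypothetical sketch of a "variant 1" infinite test source.
// Names (InfiniteSource, running) are illustrative, not the real test code.
public class InfiniteSource implements Runnable {
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            try {
                Thread.sleep(20);
            } catch (InterruptedException e) {
                if (running) {
                    // Interrupted while still running: surfaces as a failure,
                    // which is exactly the window zentol describes.
                    throw new RuntimeException(e);
                }
                // Interrupted as part of cancellation: swallow and fall
                // through; the loop condition ends the run.
            }
        }
    }

    public void cancel() {
        running = false;
    }
}
```

Interrupting the thread after cancel() lets it exit quietly; an interrupt while the source is still running still throws, so this variant alone does not close the race.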


> InfiniteSource in SourceStreamOperatorTest should deal with 
> InterruptedExceptions
> -
>
> Key: FLINK-6571
> URL: https://issues.apache.org/jira/browse/FLINK-6571
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.5.0
>
>
> So this is a new one: I got a failing test 
> ({{testNoMaxWatermarkOnAsyncStop}}) due to an uncaught InterruptedException.
> {code}
> [00:28:15] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 0.828 sec <<< FAILURE! - in 
> org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest
> [00:28:15] 
> testNoMaxWatermarkOnAsyncStop(org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest)
>   Time elapsed: 0 sec  <<< ERROR!
> [00:28:15] java.lang.InterruptedException: sleep interrupted
> [00:28:15]at java.lang.Thread.sleep(Native Method)
> [00:28:15]at 
> org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest$InfiniteSource.run(StreamSourceOperatorTest.java:343)
> [00:28:15]at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
> [00:28:15]at 
> org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest.testNoMaxWatermarkOnAsyncStop(StreamSourceOperatorTest.java:176)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5394: [FLINK-6571][tests] Catch InterruptedException in StreamS...

2018-04-08 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5394
  
I think neither solves the problem.

Variant 2 looks identical to what we have in master.

Variant 1 only allows interrupts after the task was canceled.
According to what @StephanEwen said, if the UDF throws an exception after 
the task was canceled, the exception will be suppressed and should not lead to a 
test failure. Since the test did fail, the exception must have been thrown 
_before_ the task was canceled. Given that variant 1 still throws an exception 
in this case, we aren't solving the stability issue.


---


[jira] [Commented] (FLINK-6571) InfiniteSource in SourceStreamOperatorTest should deal with InterruptedExceptions

2018-04-08 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429896#comment-16429896
 ] 

Chesnay Schepler commented on FLINK-6571:
-

Occurred on travis: https://travis-ci.org/apache/flink/jobs/363107340

> InfiniteSource in SourceStreamOperatorTest should deal with 
> InterruptedExceptions
> -
>
> Key: FLINK-6571
> URL: https://issues.apache.org/jira/browse/FLINK-6571
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.5.0
>
>
> So this is a new one: I got a failing test 
> ({{testNoMaxWatermarkOnAsyncStop}}) due to an uncaught InterruptedException.
> {code}
> [00:28:15] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 0.828 sec <<< FAILURE! - in 
> org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest
> [00:28:15] 
> testNoMaxWatermarkOnAsyncStop(org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest)
>   Time elapsed: 0 sec  <<< ERROR!
> [00:28:15] java.lang.InterruptedException: sleep interrupted
> [00:28:15]at java.lang.Thread.sleep(Native Method)
> [00:28:15]at 
> org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest$InfiniteSource.run(StreamSourceOperatorTest.java:343)
> [00:28:15]at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
> [00:28:15]at 
> org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest.testNoMaxWatermarkOnAsyncStop(StreamSourceOperatorTest.java:176)
> {code}





[jira] [Updated] (FLINK-9148) when deploying flink on kubernetes, the taskmanager reports "java.net.UnknownHostException: flink-jobmanager: Temporary failure in name resolution"

2018-04-08 Thread You Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

You Chu updated FLINK-9148:
---
Description: 
refer to 
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html:

I deployed Flink 1.4 on Kubernetes 1.9. The jobmanager container is OK, but the 
taskmanager container failed with this error:

java.net.UnknownHostException: flink-jobmanager: Temporary failure in name 
resolution
 at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
 at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
 at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
 at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
 at java.net.InetAddress.getAllByName(InetAddress.java:1192)
 at java.net.InetAddress.getAllByName(InetAddress.java:1126)
 at java.net.InetAddress.getByName(InetAddress.java:1076)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:172)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:137)
 at 
org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:79)
 at 
org.apache.flink.runtime.taskmanager.TaskManager$.selectNetworkInterfaceAndRunTaskManager(TaskManager.scala:1681)
 at 
org.apache.flink.runtime.taskmanager.TaskManager$$anon$2.call(TaskManager.scala:1592)
 at 
org.apache.flink.runtime.taskmanager.TaskManager$$anon$2.call(TaskManager.scala:1590)
 at java.security.AccessController.doPrivileged(Native Method)

 

I am using the jobmanager-deployment.yaml and taskmanager-deployment.yaml from 
the documentation.

I know that the flink docker image uses the environment variable 
{{JOB_MANAGER_RPC_ADDRESS=flink-jobmanager}} to resolve the jobmanager address; 
however, the taskmanager container cannot resolve the hostname flink-jobmanager.
Can anyone help me fix it? Do I need to set up DNS to resolve it?
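In Kubernetes, the hostname flink-jobmanager is normally provided by a Service of that name in the same namespace; if that Service is missing (or the taskmanager pod runs in a different namespace), the lookup fails exactly as in the stack trace above. The failing lookup can be reproduced outside Flink with a plain InetAddress call. The helper below is an illustrative sketch, not Flink code, and whether a given hostname resolves depends entirely on your environment:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Minimal reproduction of the hostname lookup that fails during taskmanager
// startup in the report. Run it inside the taskmanager container to check
// whether the Service DNS entry is visible from there.
public class ResolveCheck {
    public static boolean canResolve(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "flink-jobmanager";
        System.out.println(host + " resolves: " + canResolve(host));
    }
}
```

If this prints `false` inside the taskmanager container, the fix is on the Kubernetes side (define the flink-jobmanager Service or correct the namespace), not in Flink.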

  was:
refer to 
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html:

I deployed Flink 1.4 on Kubernetes 1.9. The jobmanager container is OK, but the 
taskmanager container failed with this error:


java.net.UnknownHostException: flink-jobmanager: Temporary failure in name 
resolution
 at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
 at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
 at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
 at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
 at java.net.InetAddress.getAllByName(InetAddress.java:1192)
 at java.net.InetAddress.getAllByName(InetAddress.java:1126)
 at java.net.InetAddress.getByName(InetAddress.java:1076)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:172)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:137)
 at 
org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:79)
 at 
org.apache.flink.runtime.taskmanager.TaskManager$.selectNetworkInterfaceAndRunTaskManager(TaskManager.scala:1681)
 at 
org.apache.flink.runtime.taskmanager.TaskManager$$anon$2.call(TaskManager.scala:1592)
 at 
org.apache.flink.runtime.taskmanager.TaskManager$$anon$2.call(TaskManager.scala:1590)
 at java.security.AccessController.doPrivileged(Native Method)


> when deploying flink on kubernetes, the taskmanager reports 
> "java.net.UnknownHostException: flink-jobmanager: Temporary failure in name 
> resolution"
> --
>
> Key: FLINK-9148
> URL: https://issues.apache.org/jira/browse/FLINK-9148
> Project: Flink
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.4.2
> Environment: kubernetes 1.9
> docker 1.4
> see 
> :https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html
>Reporter: You Chu
>Priority: Blocker
>
> refer to 
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html:
> I deployed Flink 1.4 on Kubernetes 1.9. The jobmanager container is OK, but the 
> taskmanager container failed with this error:
> java.net.UnknownHostException: flink-jobmanager: Temporary failure in name 
> resolution
>  at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
>  at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
>  at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
>  at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
>  at java.net.InetAddress.getAllByName(InetAddress.java:1192)
>  at java.net.InetAddress.getAllByName(InetAddress.java:1126)
>  at 

[jira] [Created] (FLINK-9148) when deploying flink on kubernetes, the taskmanager reports "java.net.UnknownHostException: flink-jobmanager: Temporary failure in name resolution"

2018-04-08 Thread You Chu (JIRA)
You Chu created FLINK-9148:
--

 Summary: when deploying flink on kubernetes, the taskmanager 
reports "java.net.UnknownHostException: flink-jobmanager: Temporary failure in 
name resolution"
 Key: FLINK-9148
 URL: https://issues.apache.org/jira/browse/FLINK-9148
 Project: Flink
  Issue Type: Bug
  Components: Docker
Affects Versions: 1.4.2
 Environment: kubernetes 1.9
docker 1.4
see 
:https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html
Reporter: You Chu


refer to 
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html:

I deployed Flink 1.4 on Kubernetes 1.9. The jobmanager container is OK, but the 
taskmanager container failed with this error:


java.net.UnknownHostException: flink-jobmanager: Temporary failure in name 
resolution
 at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
 at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
 at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
 at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
 at java.net.InetAddress.getAllByName(InetAddress.java:1192)
 at java.net.InetAddress.getAllByName(InetAddress.java:1126)
 at java.net.InetAddress.getByName(InetAddress.java:1076)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:172)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:137)
 at 
org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:79)
 at 
org.apache.flink.runtime.taskmanager.TaskManager$.selectNetworkInterfaceAndRunTaskManager(TaskManager.scala:1681)
 at 
org.apache.flink.runtime.taskmanager.TaskManager$$anon$2.call(TaskManager.scala:1592)
 at 
org.apache.flink.runtime.taskmanager.TaskManager$$anon$2.call(TaskManager.scala:1590)
 at java.security.AccessController.doPrivileged(Native Method)





[jira] [Commented] (FLINK-9040) JobVertex#setMaxParallelism does not validate argument

2018-04-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429761#comment-16429761
 ] 

ASF GitHub Bot commented on FLINK-9040:
---

Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5825
  
I only fixed the javadoc for `JobVertex#setMaxParallelism()`, because if we 
validate `maxParallelism` in that function we will break some existing code, and 
I'm not sure whether that code also needs to be changed, e.g.: 

If the user didn't set the `maxParallelism` for a `Transformation`, then the 
default value is -1, and in `StreamJobGraphGenerator#createJobVertex` we use 
the following code to set the `maxParallelism` for the `jobVertex`:
```java
jobVertex.setMaxParallelism(streamNode.getMaxParallelism());
```
Also, in the constructor of `ExecutionJobVertex`, we use the following code 
to detect whether a `maxParallelism` was configured; `VALUE_NOT_SET` is `-1`:
```java
final int configuredMaxParallelism = jobVertex.getMaxParallelism();

this.maxParallelismConfigured = (VALUE_NOT_SET != configuredMaxParallelism);
```

@zentol What do you think? If you still think we should validate 
`maxParallelism` in `JobVertex#setMaxParallelism()`, please let me know.
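One way to reconcile the two concerns — validating the argument while keeping the -1 "not set" default legal — is to exempt the sentinel from the range check. The sketch below is a hypothetical illustration, not the actual Flink code; the upper bound follows the later comment in this thread that the maximum is KeyGroupRangeAssignment#UPPER_BOUND_MAX_PARALLELISM (Short.MAX_VALUE + 1), and VALUE_NOT_SET = -1 is taken from the discussion above.

```java
// Hypothetical sketch: validate maxParallelism but keep the -1 sentinel legal.
public class JobVertexSketch {
    static final int VALUE_NOT_SET = -1;                     // per the discussion
    static final int UPPER_BOUND_MAX_PARALLELISM = 1 << 15;  // Short.MAX_VALUE + 1

    private int maxParallelism = VALUE_NOT_SET;

    public void setMaxParallelism(int maxParallelism) {
        // The sentinel bypasses validation so StreamingJobGraphGenerator's
        // "-1 means not configured" convention keeps working.
        if (maxParallelism != VALUE_NOT_SET
                && (maxParallelism < 1 || maxParallelism > UPPER_BOUND_MAX_PARALLELISM)) {
            throw new IllegalArgumentException(
                "maxParallelism must be between 1 and " + UPPER_BOUND_MAX_PARALLELISM);
        }
        this.maxParallelism = maxParallelism;
    }

    public int getMaxParallelism() {
        return maxParallelism;
    }
}
```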


> JobVertex#setMaxParallelism does not validate argument
> --
>
> Key: FLINK-9040
> URL: https://issues.apache.org/jira/browse/FLINK-9040
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Sihua Zhou
>Priority: Minor
>
> {code}
> /**
> * Sets the maximum parallelism for the task.
> *
> * @param maxParallelism The maximum parallelism to be set. must be between 1 
> and Short.MAX_VALUE.
> */
> public void setMaxParallelism(int maxParallelism) {
>   this.maxParallelism = maxParallelism;
> }
> {code}





[GitHub] flink issue #5825: [FLINK-9040][local runtime] check maxParallelism in JobVe...

2018-04-08 Thread sihuazhou
Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5825
  
I only fixed the javadoc for `JobVertex#setMaxParallelism()`, because if we 
validate `maxParallelism` in that function we will break some existing code, and 
I'm not sure whether that code also needs to be changed, e.g.: 

If the user didn't set the `maxParallelism` for a `Transformation`, then the 
default value is -1, and in `StreamJobGraphGenerator#createJobVertex` we use 
the following code to set the `maxParallelism` for the `jobVertex`:
```java
jobVertex.setMaxParallelism(streamNode.getMaxParallelism());
```
Also, in the constructor of `ExecutionJobVertex`, we use the following code 
to detect whether a `maxParallelism` was configured; `VALUE_NOT_SET` is `-1`:
```java
final int configuredMaxParallelism = jobVertex.getMaxParallelism();

this.maxParallelismConfigured = (VALUE_NOT_SET != configuredMaxParallelism);
```

@zentol What do you think? If you still think we should validate 
`maxParallelism` in `JobVertex#setMaxParallelism()`, please let me know.


---


[jira] [Commented] (FLINK-9087) Return value of broadcastEvent should be closed in StreamTask#performCheckpoint

2018-04-08 Thread Triones Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429743#comment-16429743
 ] 

Triones Deng commented on FLINK-9087:
-

Thanks for your suggestions; I will follow them.

> Return value of broadcastEvent should be closed in 
> StreamTask#performCheckpoint
> ---
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.
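Assuming the BufferConsumer returned by broadcastEvent() is Closeable, as the report implies, the close can be made leak-proof with try-with-resources. The types below are minimal stand-ins so the sketch compiles on its own — the real classes live in Flink's network stack — and the method name is hypothetical:

```java
import java.io.Closeable;

// Stand-in types for the sketch; the real BufferConsumer and
// StreamRecordWriter are Flink network-stack classes.
interface BufferConsumer extends Closeable {
    @Override void close();
}

interface StreamRecordWriter {
    BufferConsumer broadcastEvent(Object event);
}

public class CheckpointBroadcastSketch {
    // Close the BufferConsumer returned by broadcastEvent() for every writer,
    // even if a later iteration throws.
    static void broadcastAndClose(Iterable<StreamRecordWriter> writers, Object message) {
        for (StreamRecordWriter writer : writers) {
            try (BufferConsumer consumer = writer.broadcastEvent(message)) {
                // Nothing else to do: try-with-resources guarantees close().
            }
        }
    }
}
```

The try-with-resources form keeps the close unconditional, which is the point of the report: the returned consumer must be released on every code path.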





[jira] [Commented] (FLINK-9147) PrometheusReporter jar does not include Prometheus dependencies

2018-04-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429700#comment-16429700
 ] 

ASF GitHub Bot commented on FLINK-9147:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5828
  
+1


> PrometheusReporter jar does not include Prometheus dependencies
> ---
>
> Key: FLINK-9147
> URL: https://issues.apache.org/jira/browse/FLINK-9147
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The {{PrometheusReporter}} seems to lack the shaded Prometheus dependencies.





[GitHub] flink issue #5828: [FLINK-9147] [metrics] Include shaded Prometheus dependen...

2018-04-08 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5828
  
+1


---


[jira] [Commented] (FLINK-9008) End-to-end test: Quickstarts

2018-04-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429698#comment-16429698
 ] 

ASF GitHub Bot commented on FLINK-9008:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5823
  
Sorry for the late reply; it was the weekend.

Thanks @zentol. I need to confirm some points with you before I get 
started on this issue again.

1. I have to create an actual quickstart project. Is this a project or 
a maven module? And where should I create it? If it is a project (which sounds 
odd, because everything in flink is a maven module), I think I should create it 
with the bash command ```curl 
https://flink.apache.org/q/quickstart.sh | bash```. As for its location, is it 
suitable to put it in the ```flink-end-to-end-tests``` folder?

2. The verification step: "verify that no core dependencies are contained in 
the jar file". I don't fully understand this. Does it mean we should check that 
the jar file contains no flink-core dependencies, since those are already 
provided by the flink cluster setup? If the jar file still contained the 
flink-core dependency it would be very large. Do I understand correctly? So we 
need a check for this.

3. "The job could fail outright, yet the test will still succeed." I also 
don't fully understand this. Does it mean I need to enable 
```StreamExecutionEnvironment#enableCheckpointing``` in the code?

If I am wrong, please help me out here. Thank you in advance. @zentol 

  


> End-to-end test: Quickstarts
> 
>
> Key: FLINK-9008
> URL: https://issues.apache.org/jira/browse/FLINK-9008
> Project: Flink
>  Issue Type: Sub-task
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> We could add an end-to-end test which verifies Flink's quickstarts. It should 
> do the following:
> # create a new Flink project using the quickstarts archetype 
> # add a new Flink dependency to the {{pom.xml}} (e.g. Flink connector or 
> library) 
> # run {{mvn clean package -Pbuild-jar}}
> # verify that no core dependencies are contained in the jar file
> # Run the program





[GitHub] flink issue #5823: [FLINK-9008] [e2e] Implements quickstarts end to end test

2018-04-08 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5823
  
Sorry for the late reply; it was the weekend.

Thanks @zentol. I need to confirm some points with you before I get 
started on this issue again.

1. I have to create an actual quickstart project. Is this a project or 
a maven module? And where should I create it? If it is a project (which sounds 
odd, because everything in flink is a maven module), I think I should create it 
with the bash command ```curl 
https://flink.apache.org/q/quickstart.sh | bash```. As for its location, is it 
suitable to put it in the ```flink-end-to-end-tests``` folder?

2. The verification step: "verify that no core dependencies are contained in 
the jar file". I don't fully understand this. Does it mean we should check that 
the jar file contains no flink-core dependencies, since those are already 
provided by the flink cluster setup? If the jar file still contained the 
flink-core dependency it would be very large. Do I understand correctly? So we 
need a check for this.

3. "The job could fail outright, yet the test will still succeed." I also 
don't fully understand this. Does it mean I need to enable 
```StreamExecutionEnvironment#enableCheckpointing``` in the code?

If I am wrong, please help me out here. Thank you in advance. @zentol 

  


---


[jira] [Commented] (FLINK-9040) JobVertex#setMaxParallelism does not validate argument

2018-04-08 Thread Sihua Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429655#comment-16429655
 ] 

Sihua Zhou commented on FLINK-9040:
---

Hi [~Zentol], I think the javadoc here is also incorrect: the maxParallelism 
must be between 1 and 
{{KeyGroupRangeAssignment#UPPER_BOUND_MAX_PARALLELISM}} (not Short.MAX_VALUE, 
because {{KeyGroupRangeAssignment#UPPER_BOUND_MAX_PARALLELISM == Short.MAX_VALUE 
+ 1}}). Am I wrong? 

> JobVertex#setMaxParallelism does not validate argument
> --
>
> Key: FLINK-9040
> URL: https://issues.apache.org/jira/browse/FLINK-9040
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Sihua Zhou
>Priority: Minor
>
> {code}
> /**
> * Sets the maximum parallelism for the task.
> *
> * @param maxParallelism The maximum parallelism to be set. must be between 1 
> and Short.MAX_VALUE.
> */
> public void setMaxParallelism(int maxParallelism) {
>   this.maxParallelism = maxParallelism;
> }
> {code}





[jira] [Commented] (FLINK-9147) PrometheusReporter jar does not include Prometheus dependencies

2018-04-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429648#comment-16429648
 ] 

ASF GitHub Bot commented on FLINK-9147:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/5828

[FLINK-9147] [metrics] Include shaded Prometheus dependencies in jar

## What is the purpose of the change

The Prometheus metrics reporter does not include the shaded Prometheus 
dependencies
in its jar. This commit changes this.

## Verifying this change

- Verified manually 

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink fixPrometheusDependencies

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5828.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5828


commit 1edc725bf6ef46ffe1ba9a6fabf343ec46ad52f2
Author: Till Rohrmann 
Date:   2018-04-08T06:40:42Z

[FLINK-9147] [metrics] Include shaded Prometheus dependencies in jar

The Prometheus metrics reporter does not include the shaded Prometheus 
dependencies
in its jar. This commit changes this.




> PrometheusReporter jar does not include Prometheus dependencies
> ---
>
> Key: FLINK-9147
> URL: https://issues.apache.org/jira/browse/FLINK-9147
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The {{PrometheusReporter}} seems to lack the shaded Prometheus dependencies.





[GitHub] flink pull request #5828: [FLINK-9147] [metrics] Include shaded Prometheus d...

2018-04-08 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/5828

[FLINK-9147] [metrics] Include shaded Prometheus dependencies in jar

## What is the purpose of the change

The Prometheus metrics reporter does not include the shaded Prometheus 
dependencies
in its jar. This commit changes this.

## Verifying this change

- Verified manually 

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink fixPrometheusDependencies

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5828.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5828


commit 1edc725bf6ef46ffe1ba9a6fabf343ec46ad52f2
Author: Till Rohrmann 
Date:   2018-04-08T06:40:42Z

[FLINK-9147] [metrics] Include shaded Prometheus dependencies in jar

The Prometheus metrics reporter does not include the shaded Prometheus 
dependencies
in its jar. This commit changes this.




---


[jira] [Created] (FLINK-9147) PrometheusReporter jar does not include Prometheus dependencies

2018-04-08 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-9147:


 Summary: PrometheusReporter jar does not include Prometheus 
dependencies
 Key: FLINK-9147
 URL: https://issues.apache.org/jira/browse/FLINK-9147
 Project: Flink
  Issue Type: Bug
  Components: Metrics
Affects Versions: 1.5.0
Reporter: Till Rohrmann
 Fix For: 1.5.0


The {{PrometheusReporter}} seems to lack the shaded Prometheus dependencies.





[jira] [Assigned] (FLINK-9147) PrometheusReporter jar does not include Prometheus dependencies

2018-04-08 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-9147:


Assignee: Till Rohrmann

> PrometheusReporter jar does not include Prometheus dependencies
> ---
>
> Key: FLINK-9147
> URL: https://issues.apache.org/jira/browse/FLINK-9147
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The {{PrometheusReporter}} seems to lack the shaded Prometheus dependencies.





[jira] [Created] (FLINK-9146) 1.4 more than local cannot start submitting

2018-04-08 Thread abel-sun (JIRA)
abel-sun created FLINK-9146:
---

 Summary: 1.4 more than local cannot start submitting 
 Key: FLINK-9146
 URL: https://issues.apache.org/jira/browse/FLINK-9146
 Project: Flink
  Issue Type: Bug
  Components: Client
Affects Versions: 1.4.2, 1.4.1
 Environment: java version "1.8.0_102"

Scala code runner version 2.11.8 -- Copyright 2002-2016, LAMP/EPFL

flink 1.4.1

 
Reporter: abel-sun
 Attachments: WechatIMG4550.jpeg, WechatIMG4551.jpeg

 
Flink 1.4 quickstart: 
[https://ci.apache.org/projects/flink/flink-docs-release-1.4/quickstart/setup_quickstart.html].

When I use "./bin/start-local.sh" to start flink following the flink-1.4 
quickstart, I check [http://localhost:8081/] and make sure everything is 
running; then I submit my job:
{code:java}
//
sunzhenya @ localhost in ~/Work/BigData/app/flink-1.4.1 [13:48:34] C:146
$
./bin/flink run -c org.tko.log.flink.WordCount 
/Users/sunzhenya/Work/BigData/workspace/tko-log/target/tko-log-1.0-SNAPSHOT.jar
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by 
org.apache.hadoop.security.authentication.util.KerberosUtil 
(file:/Users/sunzhenya/Work/BigData/app/flink-1.4.1/lib/flink-shaded-hadoop2-uber-1.4.1.jar)
 to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of 
org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
Cluster configuration: Standalone cluster with JobManager at 
localhost/127.0.0.1:6123
Using address localhost:6123 to connect to JobManager.
JobManager web interface address http://localhost:8081
Starting execution of program
Submitting job with JobID: 583cdaa12bc70a13e888608f9e23e9d9. Waiting for job 
completion.


The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The program 
execution failed: Couldn't retrieve the JobExecutionResult from the JobManager.
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:492)
at 
org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:456)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:444)
at 
org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
at 
org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:815)
at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
at org.apache.flink.api.java.DataSet.print(DataSet.java:1652)
at org.tko.log.flink.WordCount.main(WordCount.java:66)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:525)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:417)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:396)
at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:802)
at org.apache.flink.client.CliFrontend.run(CliFrontend.java:282)
at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1054)
at org.apache.flink.client.CliFrontend$1.call(CliFrontend.java:1101)
{code}
 


