[jira] [Commented] (FLINK-9188) Provide a mechanism to configure AmazonKinesisClient in FlinkKinesisConsumer

2018-04-16 Thread Thomas Weise (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440391#comment-16440391
 ] 

Thomas Weise commented on FLINK-9188:
-

[~tzulitai] this is what the proposed generic property support would look like:

[https://github.com/tweise/flink/commit/9ce8ddafaee4f1c6463f568966329b0f293ecbe0]
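The idea behind generic property support is to forward prefixed keys from the consumer's Properties onto the AWS client configuration. A minimal sketch of that mapping, using a stand-in class instead of the real com.amazonaws.ClientConfiguration (the prefix and setter names here are illustrative, not taken from the linked commit):

```java
import java.lang.reflect.Method;
import java.util.Properties;

// Stand-in for com.amazonaws.ClientConfiguration; illustration only.
class ClientConfig {
    private int socketTimeout = 50_000;
    private int maxConnections = 50;
    public void setSocketTimeout(int t) { socketTimeout = t; }
    public void setMaxConnections(int m) { maxConnections = m; }
    public int getSocketTimeout() { return socketTimeout; }
    public int getMaxConnections() { return maxConnections; }
}

public class KinesisClientProps {
    static final String PREFIX = "aws.clientconfig.";

    // Copy every "aws.clientconfig.xyz" property onto the matching setXyz(int) setter.
    static void configure(ClientConfig config, Properties props) {
        for (String key : props.stringPropertyNames()) {
            if (!key.startsWith(PREFIX)) {
                continue;
            }
            String field = key.substring(PREFIX.length());
            String setter = "set" + Character.toUpperCase(field.charAt(0)) + field.substring(1);
            try {
                Method m = config.getClass().getMethod(setter, int.class);
                m.invoke(config, Integer.parseInt(props.getProperty(key)));
            } catch (ReflectiveOperationException e) {
                throw new IllegalArgumentException("Unknown client setting: " + key, e);
            }
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("aws.clientconfig.socketTimeout", "9000");
        ClientConfig cfg = new ClientConfig();
        configure(cfg, props);
        System.out.println(cfg.getSocketTimeout()); // prints 9000
    }
}
```

The same Properties object that already carries the consumer's AWS credentials and region would carry these prefixed keys, so no new public API surface is needed.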

 

> Provide a mechanism to configure AmazonKinesisClient in FlinkKinesisConsumer
> 
>
> Key: FLINK-9188
> URL: https://issues.apache.org/jira/browse/FLINK-9188
> Project: Flink
>  Issue Type: Task
>  Components: Kinesis Connector
>Reporter: Thomas Weise
>Priority: Major
>
> It should be possible to control the ClientConfiguration to set socket 
> timeout and other properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9188) Provide a mechanism to configure AmazonKinesisClient in FlinkKinesisConsumer

2018-04-16 Thread Thomas Weise (JIRA)
Thomas Weise created FLINK-9188:
---

 Summary: Provide a mechanism to configure AmazonKinesisClient in 
FlinkKinesisConsumer
 Key: FLINK-9188
 URL: https://issues.apache.org/jira/browse/FLINK-9188
 Project: Flink
  Issue Type: Task
  Components: Kinesis Connector
Reporter: Thomas Weise


It should be possible to control the ClientConfiguration to set socket timeout 
and other properties.





[GitHub] flink pull request #5857: [FLINK-9187][METRICS] add prometheus pushgateway r...

2018-04-16 Thread lamber-ken
GitHub user lamber-ken reopened a pull request:

https://github.com/apache/flink/pull/5857

[FLINK-9187][METRICS] add prometheus pushgateway reporter

## What is the purpose of the change
This pull request enables Flink to send metrics to Prometheus via the pushgateway, which is useful when running in `yarn-cluster` mode.
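For context on the pushgateway model: instead of being scraped, a reporter periodically PUTs metrics to the gateway under a grouping key. A rough sketch of the URL and text exposition format involved (class and metric names here are illustrative, not this PR's code):

```java
public class PushgatewayFormat {
    // Pushgateway target URL: metrics are grouped by job and optional labels,
    // e.g. PUT http://host:9091/metrics/job/<job>/instance/<instance>.
    static String pushUrl(String host, int port, String job, String instance) {
        return "http://" + host + ":" + port
                + "/metrics/job/" + job + "/instance/" + instance;
    }

    // One gauge rendered in the Prometheus text exposition format the gateway accepts.
    static String gauge(String name, String help, double value) {
        return "# HELP " + name + " " + help + "\n"
                + "# TYPE " + name + " gauge\n"
                + name + " " + value + "\n";
    }

    public static void main(String[] args) {
        System.out.println("PUT " + pushUrl("localhost", 9091, "flink", "tm-1"));
        System.out.print(gauge("flink_heap_used_bytes", "JVM heap in use.", 12345.0));
    }
}
```

This push model is what makes the reporter useful on YARN, where short-lived or dynamically placed task managers are hard for a scrape-based Prometheus setup to discover.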

## Brief change log

  - Add Prometheus pushgateway reporter
  - Restructure the code of the Prometheus reporter module

## Verifying this change

This change is already covered by existing tests. [prometheus 
test](https://github.com/apache/flink/tree/master/flink-metrics/flink-metrics-prometheus/src/test/java/org/apache/flink/metrics/prometheus)

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (yes)
  - If yes, how is the feature documented? (JavaDocs)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lamber-ken/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5857.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5857


commit a3503a5d08e4d02d6cf38d656e2697d3b1197cf1
Author: lamber-ken 
Date:   2018-04-16T13:49:56Z

add prometheus pushgateway reporter




---




[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440279#comment-16440279
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user lamber-ken commented on the issue:

https://github.com/apache/flink/pull/5857
  
@zhangminglei, can you cc 
[FLINK-9187](https://issues.apache.org/jira/browse/FLINK-9187)?


> Remove REST_ prefix from rest options
> -
>
> Key: FLINK-9180
> URL: https://issues.apache.org/jira/browse/FLINK-9180
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, REST
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> Several fields in the {{RestOptions}} class have a {{REST_}} prefix. So far 
> we went with the convention that we do not have such prefixes if it already 
> contained in the class name, hence we should remove it from the field names.







[jira] [Created] (FLINK-9187) add prometheus pushgateway reporter

2018-04-16 Thread lamber-ken (JIRA)
lamber-ken created FLINK-9187:
-

 Summary: add prometheus pushgateway reporter
 Key: FLINK-9187
 URL: https://issues.apache.org/jira/browse/FLINK-9187
 Project: Flink
  Issue Type: New Feature
  Components: Metrics
Affects Versions: 1.4.2
Reporter: lamber-ken
 Fix For: 1.5.0


Make it possible for Flink to send metrics to Prometheus via the pushgateway.





[jira] [Assigned] (FLINK-9185) Potential null dereference in PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives

2018-04-16 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9185:
---

Assignee: mingleizhang

> Potential null dereference in 
> PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives
> 
>
> Key: FLINK-9185
> URL: https://issues.apache.org/jira/browse/FLINK-9185
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> {code}
> if (alternative != null
>   && alternative.hasState()
>   && alternative.size() == 1
>   && approveFun.apply(reference, alternative.iterator().next())) {
> {code}
> The return value from approveFun.apply would be unboxed.
> We should check that the return value is not null.
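The failure mode is plain Boolean unboxing: if {{approveFun.apply}} returns null, evaluating the result in a boolean context throws NullPointerException. A minimal standalone illustration (only the approveFun name is taken from the snippet above; everything else is made up):

```java
import java.util.function.BiFunction;

public class UnboxingCheck {
    // A predicate that may return null, like approveFun in the snippet above.
    static final BiFunction<String, String, Boolean> approveFun = (ref, alt) -> null;

    // Unsafe: auto-unboxing a null Boolean throws NullPointerException.
    static boolean unsafe(String ref, String alt) {
        return approveFun.apply(ref, alt);
    }

    // Safe: materialize the Boolean and check it for null first.
    static boolean safe(String ref, String alt) {
        Boolean approved = approveFun.apply(ref, alt);
        return approved != null && approved;
    }

    public static void main(String[] args) {
        System.out.println(safe("reference", "alternative")); // prints false
        try {
            unsafe("reference", "alternative");
        } catch (NullPointerException expected) {
            System.out.println("unsafe() threw NullPointerException");
        }
    }
}
```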





[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440265#comment-16440265
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user lamber-ken commented on the issue:

https://github.com/apache/flink/pull/5857
  
ok




[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440264#comment-16440264
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5857
  
Yes. Apache Hadoop's process is also different from Apache Flink's; we should follow the 
project's rules.




[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440266#comment-16440266
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user lamber-ken closed the pull request at:

https://github.com/apache/flink/pull/5857










[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440261#comment-16440261
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user lamber-ken commented on the issue:

https://github.com/apache/flink/pull/5857
  
OK, I see, thanks a lot. 
I contributed to [ClickHouse](https://github.com/yandex/ClickHouse) before; 
its submission process is different.






[GitHub] flink issue #5857: [FLINK-9180][METRICS] add prometheus pushgateway reporter

2018-04-16 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5857
  
Yes. You should create the JIRA issue first, then open a PR referencing the corresponding 
JIRA number.


---




[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440257#comment-16440257
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user lamber-ken commented on the issue:

https://github.com/apache/flink/pull/5857
  
OK, I see. I'll close the PR.
By the way, do I need to create the JIRA issue first and then open the PR?







[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440256#comment-16440256
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5857
  
Not OK. FLINK-9189 does not seem to exist, since you cannot access it. You can 
check it.






[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440254#comment-16440254
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user lamber-ken commented on the issue:

https://github.com/apache/flink/pull/5857
  
OK, I see. I'll close the PR.
Is it OK to use `FLINK-9189`?






[GitHub] flink issue #5857: [FLINK-9180][METRICS] add prometheus pushgateway reporter

2018-04-16 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5857
  
@lamber-ken You pushed your code under the incorrect JIRA number, FLINK-9180, 
which is not relevant to your issue. You can check it out here: 
https://issues.apache.org/jira/browse/FLINK-9180. 


---




[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440248#comment-16440248
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user lamber-ken commented on the issue:

https://github.com/apache/flink/pull/5857
  
Wrong place? Can you point it out? I don't know where. Thank you.






[jira] [Commented] (FLINK-9173) RestClient - Received response is abnormal

2018-04-16 Thread Bob Lau (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440232#comment-16440232
 ] 

Bob Lau commented on FLINK-9173:


[~Zentol] In the last 12 hours, on the master/release-1.5 branches, I saw that 
RestClient.java had its toString method overridden by [~Zentol] (zentol). I am now 
pasting the latest (from 12 hours ago) exceptions, as follows:
{code:java}
//exception stack
"status":{"id":"COMPLETED"},"job-execution-result":{"id":"29eaea6ee1cee2c8a959764556cc8ef3","accumulator-results":{},"net-runtime":4882,
"failure-cause":{"class":"org.apache.flink.api.common.InvalidProgramException",
"stack-trace":"
org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
compiled. This is a bug. Please file an issue.
at org.apache.flink.table.codegen.Compiler$class.compile(Compiler.scala:36)
at 
org.apache.flink.table.runtime.CRowProcessRunner.compile(CRowProcessRunner.scala:35)
at 
org.apache.flink.table.runtime.CRowProcessRunner.open(CRowProcessRunner.scala:49)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at 
org.apache.flink.streaming.api.operators.ProcessOperator.open(ProcessOperator.java:56)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:439)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:297)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:748)

Caused by: java.lang.NoSuchMethodError: 
org.codehaus.commons.compiler.Location.<init>(Ljava/lang/String;SS)V
at org.codehaus.janino.Scanner.location(Scanner.java:261)
at org.codehaus.janino.Parser.location(Parser.java:2742)
at org.codehaus.janino.Parser.parseCompilationUnit(Parser.java:155)
at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:201)
at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:192)
at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:75)
at org.apache.flink.table.codegen.Compiler$class.compile(Compiler.scala:33)
... 9 more

{code}

> RestClient - Received response is abnormal
> --
>
> Key: FLINK-9173
> URL: https://issues.apache.org/jira/browse/FLINK-9173
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST
>Affects Versions: 1.5.0
> Environment: OS:    CentOS 6.8
> JAVA:    1.8.0_161-b12
> maven-plugin:     spring-boot-maven-plugin
> Spring-boot:      1.5.10.RELEASE
>Reporter: Bob Lau
>Priority: Major
>
> The system prints the exception log as follows:
>  
> {code:java}
> //代码占位符
> 09:07:20.755 tysc_log [Flink-RestClusterClient-IO-thread-4] ERROR 
> o.a.flink.runtime.rest.RestClient - Received response was neither of the 
> expected type ([simple type, class 
> org.apache.flink.runtime.rest.messages.job.JobExecutionResultResponseBody]) 
> nor an error. 
> Response=org.apache.flink.runtime.rest.RestClient$JsonResponse@2ac43968
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException:
>  Unrecognized field "status" (class 
> org.apache.flink.runtime.rest.messages.ErrorResponseBody), not marked as 
> ignorable (one known property: "errors"])
> at [Source: N/A; line: -1, column: -1] (through reference chain: 
> org.apache.flink.runtime.rest.messages.ErrorResponseBody["status"])
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:62)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.DeserializationContext.reportUnknownProperty(DeserializationContext.java:851)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1085)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1392)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperties(BeanDeserializerBase.java:1346)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:455)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1127)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:298)
> at 
> 

[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440230#comment-16440230
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5857
  
And I will delete the incorrect link from the FLINK-9180 JIRA and let you know.






[jira] [Commented] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440229#comment-16440229
 ] 

ASF GitHub Bot commented on FLINK-9180:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5857
  
Hi, you pushed to the wrong place.






[jira] [Updated] (FLINK-8933) Avoid calling Class#newInstance

2018-04-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-8933:
--
Description: 
Class#newInstance is deprecated starting in Java 9 - 
https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
undeclared checked exceptions.

The suggested replacement is getDeclaredConstructor().newInstance(), which 
wraps checked exceptions thrown by the constructor in InvocationTargetException.

  was:
Class#newInstance is deprecated starting in Java 9 - 
https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
undeclared checked exceptions.


The suggested replacement is getDeclaredConstructor().newInstance(), which 
wraps the checked exceptions in InvocationException.


> Avoid calling Class#newInstance
> ---
>
> Key: FLINK-8933
> URL: https://issues.apache.org/jira/browse/FLINK-8933
> Project: Flink
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> Class#newInstance is deprecated starting in Java 9 - 
> https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
> undeclared checked exceptions.
> The suggested replacement is getDeclaredConstructor().newInstance(), which 
> wraps checked exceptions thrown by the constructor in InvocationTargetException.
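A side-by-side sketch of the deprecated call and its replacement (the Widget class is illustrative):

```java
public class NewInstanceDemo {
    public static class Widget {
        public Widget() {}
    }

    // Deprecated since Java 9: Class#newInstance can propagate checked
    // exceptions from the constructor without declaring them.
    @SuppressWarnings("deprecation")
    static Widget viaNewInstance() {
        try {
            return Widget.class.newInstance();
        } catch (InstantiationException | IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
    }

    // Suggested replacement: exceptions thrown by the constructor arrive
    // wrapped in InvocationTargetException (a ReflectiveOperationException).
    static Widget viaConstructor() {
        try {
            return Widget.class.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(viaNewInstance().getClass().getSimpleName()); // prints Widget
        System.out.println(viaConstructor().getClass().getSimpleName()); // prints Widget
    }
}
```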





[jira] [Updated] (FLINK-9150) Prepare for Java 10

2018-04-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9150:
--
Description: 
Java 9 is not a LTS release.

When compiling with Java 10, I see the following compilation error:

{code}
[ERROR] Failed to execute goal on project flink-shaded-hadoop2: Could not 
resolve dependencies for project 
org.apache.flink:flink-shaded-hadoop2:jar:1.6-SNAPSHOT: Could not find artifact 
jdk.tools:jdk.tools:jar:1.6 at specified path /a/jdk-10/../lib/tools.jar -> 
[Help 1]
{code}

  was:
Java 9 is not a LTS release.

When compiling with Java 10, I see the following compilation error:
{code}
[ERROR] Failed to execute goal on project flink-shaded-hadoop2: Could not 
resolve dependencies for project 
org.apache.flink:flink-shaded-hadoop2:jar:1.6-SNAPSHOT: Could not find artifact 
jdk.tools:jdk.tools:jar:1.6 at specified path /a/jdk-10/../lib/tools.jar -> 
[Help 1]
{code}


> Prepare for Java 10
> ---
>
> Key: FLINK-9150
> URL: https://issues.apache.org/jira/browse/FLINK-9150
> Project: Flink
>  Issue Type: Task
>Reporter: Ted Yu
>Priority: Major
>
> Java 9 is not a LTS release.
> When compiling with Java 10, I see the following compilation error:
> {code}
> [ERROR] Failed to execute goal on project flink-shaded-hadoop2: Could not 
> resolve dependencies for project 
> org.apache.flink:flink-shaded-hadoop2:jar:1.6-SNAPSHOT: Could not find 
> artifact jdk.tools:jdk.tools:jar:1.6 at specified path 
> /a/jdk-10/../lib/tools.jar -> [Help 1]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9138) Enhance BucketingSink to also flush data by time interval

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440162#comment-16440162
 ] 

ASF GitHub Bot commented on FLINK-9138:
---

GitHub user glaksh100 opened a pull request:

https://github.com/apache/flink/pull/5860

[FLINK-9138][filesystem-connectors] Implement time based rollover in 
BucketingSink

## What is the purpose of the change

This pull request enables a time-based rollover of the part file in the 
BucketingSink. This is particularly applicable when write throughput is low, and 
helps make data available for consumption at a fixed interval.

## Brief change log
  - Add a `batchRolloverInterval` field with a setter 
  - Track a `firstWrittenToTime` for the bucket state
  - Check for `currentProcessingTime` - `firstWrittenToTime` > 
`batchRolloverInterval` and roll over if true
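The rollover condition in the change log above can be sketched as follows (class and field names are illustrative stand-ins, not the actual patch):

```java
public class RolloverCheck {
    // Sentinel meaning "no record written to the current part file yet".
    static final long NO_WRITE_YET = Long.MIN_VALUE;

    private final long batchRolloverInterval; // rollover interval in milliseconds
    private long firstWrittenToTime = NO_WRITE_YET;

    public RolloverCheck(long batchRolloverInterval) {
        this.batchRolloverInterval = batchRolloverInterval;
    }

    // Called on every write; remembers when the part file was first written to.
    public void onWrite(long currentProcessingTime) {
        if (firstWrittenToTime == NO_WRITE_YET) {
            firstWrittenToTime = currentProcessingTime;
        }
    }

    // Roll over once the part file has been open longer than the interval.
    public boolean shouldRollOver(long currentProcessingTime) {
        return firstWrittenToTime != NO_WRITE_YET
                && currentProcessingTime - firstWrittenToTime > batchRolloverInterval;
    }

    public static void main(String[] args) {
        RolloverCheck check = new RolloverCheck(60_000L);
        check.onWrite(1_000L);
        System.out.println(check.shouldRollOver(62_001L)); // true: 61001 ms elapsed
    }
}
```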

## Verifying this change

This change added tests and can be verified as follows:

  - Added a `testRolloverInterval` test method to the `BucketingSinkTest`

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (**yes** / no)
  - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/glaksh100/flink 
FLINK-9138.bucketingSinkRolloverInterval

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5860.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5860


commit fee3ba293f4db4ad2d39b4ac0f3993711da9bda6
Author: Lakshmi Gururaja Rao 
Date:   2018-04-16T23:31:49Z

[FLINK-9138] Implement time based rollover of part file in BucketingSink




> Enhance BucketingSink to also flush data by time interval
> -
>
> Key: FLINK-9138
> URL: https://issues.apache.org/jira/browse/FLINK-9138
> Project: Flink
>  Issue Type: Improvement
>  Components: filesystem-connector
>Affects Versions: 1.4.2
>Reporter: Narayanan Arunachalam
>Priority: Major
>
> BucketingSink now supports flushing data to the file system by size limit and 
> by period of inactivity. It will be useful to also flush data by a specified 
> time period. This way, the data will be written out when write throughput is 
> low but there is no significant time period gaps between the writes. This 
> reduces ETA for the data in the file system and should help move the 
> checkpoints faster as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5860: [FLINK-9138][filesystem-connectors] Implement time...

2018-04-16 Thread glaksh100
GitHub user glaksh100 opened a pull request:

https://github.com/apache/flink/pull/5860

[FLINK-9138][filesystem-connectors] Implement time based rollover in 
BucketingSink

## What is the purpose of the change

This pull request enables a time-based rollover of the part file in the 
BucketingSink. This is particularly applicable when write throughput is low, and 
helps make data available for consumption at a fixed interval.

## Brief change log
  - Add a `batchRolloverInterval` field with a setter 
  - Track a `firstWrittenToTime` for the bucket state
  - Check for `currentProcessingTime` - `firstWrittenToTime` > 
`batchRolloverInterval` and roll over if true

## Verifying this change

This change added tests and can be verified as follows:

  - Added a `testRolloverInterval` test method to the `BucketingSinkTest`

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (**yes** / no)
  - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/glaksh100/flink 
FLINK-9138.bucketingSinkRolloverInterval

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5860.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5860


commit fee3ba293f4db4ad2d39b4ac0f3993711da9bda6
Author: Lakshmi Gururaja Rao 
Date:   2018-04-16T23:31:49Z

[FLINK-9138] Implement time based rollover of part file in BucketingSink




---


[jira] [Commented] (FLINK-7484) CaseClassSerializer.duplicate() does not perform proper deep copy

2018-04-16 Thread Josh Lemer (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439956#comment-16439956
 ] 

Josh Lemer commented on FLINK-7484:
---

Hey there folks, are we all sure that this issue has been entirely fixed? I am 
getting very similar errors when using 
`ValueState[scala.collection.mutable.PriorityQueue[(SomeKryoSerializedThing, 
Long, scala.collection.mutable.BitSet)]]` with the following stack trace. This 
ONLY happens when async snapshots are enabled using the FileSystem State 
Backend. RocksDB works fine with async snapshots:
{code:java}
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.set(ArrayList.java:448)
at 
com.esotericsoftware.kryo.util.MapReferenceResolver.setReadObject(MapReferenceResolver.java:56)
at com.esotericsoftware.kryo.Kryo.reference(Kryo.java:875)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:710)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:189)
at 
org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:101)
at 
org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:32)
at 
org.apache.flink.api.scala.typeutils.TraversableSerializer$$anonfun$copy$1.apply(TraversableSerializer.scala:69)
at 
org.apache.flink.api.scala.typeutils.TraversableSerializer$$anonfun$copy$1.apply(TraversableSerializer.scala:69)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
org.apache.flink.api.scala.typeutils.TraversableSerializer.copy(TraversableSerializer.scala:69)
at 
org.apache.flink.api.scala.typeutils.TraversableSerializer.copy(TraversableSerializer.scala:33)
at 
org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.get(CopyOnWriteStateTable.java:282)
at 
org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.get(CopyOnWriteStateTable.java:306)
at 
org.apache.flink.runtime.state.heap.HeapValueState.value(HeapValueState.java:55)
at 
net.districtm.segmentsync.processing.JoinSegmentMappingWithSegmentAssignments.enqueueSegmentAssignment(JoinSegmentMappingWithSegmentAssignments.scala:104)
at 
net.districtm.segmentsync.processing.JoinSegmentMappingWithSegmentAssignments.processElement2(JoinSegmentMappingWithSegmentAssignments.scala:218)
at 
net.districtm.segmentsync.processing.JoinSegmentMappingWithSegmentAssignments.processElement2(JoinSegmentMappingWithSegmentAssignments.scala:77)
at 
org.apache.flink.streaming.api.operators.co.KeyedCoProcessOperator.processElement2(KeyedCoProcessOperator.java:86)
at 
org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:270)
at 
org.apache.flink.streaming.runtime.tasks.TwoInputStreamTask.run(TwoInputStreamTask.java:91)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:264)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Thread.java:748)

04/16/2018 19:37:54 Job execution switched to status FAILING.
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.set(ArrayList.java:448)
at 
com.esotericsoftware.kryo.util.MapReferenceResolver.setReadObject(MapReferenceResolver.java:56)
at com.esotericsoftware.kryo.Kryo.reference(Kryo.java:875)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:710)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:189)
at 
org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:101)
at 
org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:32)
at 
org.apache.flink.api.scala.typeutils.TraversableSerializer$$anonfun$copy$1.apply(TraversableSerializer.scala:69)
at 
org.apache.flink.api.scala.typeutils.TraversableSerializer$$anonfun$copy$1.apply(TraversableSerializer.scala:69)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
org.apache.flink.api.scala.typeutils.TraversableSerializer.copy(TraversableSerializer.scala:69)
at 
org.apache.flink.api.scala.typeutils.TraversableSerializer.copy(TraversableSerializer.scala:33)
at 
org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.get(CopyOnWriteStateTable.java:282)
at 
org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.get(CopyOnWriteStateTable.java:306)
at 
org.apache.flink.runtime.state.heap.HeapValueState.value(HeapValueState.java:55)
at 

[jira] [Closed] (FLINK-9089) Upgrade Orc dependency to 1.4.3

2018-04-16 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-9089.

   Resolution: Fixed
Fix Version/s: 1.5.0

Fixed on master with afad30a54fa92b90bbaa97d2559713fdc218a53a
Fixed for 1.5.0 with 4c3d018bd75ace6361c02a1ad0b254350220542f

> Upgrade Orc dependency to 1.4.3
> ---
>
> Key: FLINK-9089
> URL: https://issues.apache.org/jira/browse/FLINK-9089
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
> Fix For: 1.5.0
>
>
> Currently flink-orc uses Orc 1.4.1 release.
> This issue upgrades to Orc 1.4.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-9183) Kafka consumer docs to warn about idle partitions

2018-04-16 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-9183.

   Resolution: Fixed
Fix Version/s: 1.4.3
   1.5.0

Fixed on master with b005ea35374f619298cebab649e0ba477aeaf860
Fixed for 1.5.0 with fc58f987144a307e7208de0bdfe439922bed4b55
Fixed for 1.4.3 with a9b497749710c708d076fc45688fff7b72416af1

Thanks for the contribution [~juho.autio.r]

> Kafka consumer docs to warn about idle partitions
> -
>
> Key: FLINK-9183
> URL: https://issues.apache.org/jira/browse/FLINK-9183
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Juho Autio
>Assignee: Juho Autio
>Priority: Major
> Fix For: 1.5.0, 1.4.3
>
>
> Looks like the bug FLINK-5479 entirely prevents 
> FlinkKafkaConsumerBase#assignTimestampsAndWatermarks from being used if there 
> are any idle partitions. It would be nice to mention in the documentation that 
> currently this requires all subscribed partitions to have a constant stream 
> of data with growing timestamps. When watermark gets stalled on an idle 
> partition it blocks everything.
>  
> Link to current documentation:
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kafka.html#kafka-consumers-and-timestamp-extractionwatermark-emission]
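The stall described above follows from how per-partition watermarks are combined: the operator watermark is the minimum across all partitions, so a single idle partition pins it. A toy illustration of that behavior (not the connector's actual code):

```java
import java.util.Arrays;

public class WatermarkMin {
    // The operator-level watermark is the minimum over all per-partition
    // watermarks, so a single idle partition (stuck at Long.MIN_VALUE)
    // keeps the overall watermark from advancing.
    public static long overallWatermark(long[] partitionWatermarks) {
        return Arrays.stream(partitionWatermarks).min().orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        long[] active = {100L, 200L};
        long[] withIdle = {100L, 200L, Long.MIN_VALUE};
        System.out.println(overallWatermark(active));   // 100
        System.out.println(overallWatermark(withIdle)); // still Long.MIN_VALUE
    }
}
```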



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-3655) Allow comma-separated or multiple directories to be specified for FileInputFormat

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439934#comment-16439934
 ] 

ASF GitHub Bot commented on FLINK-3655:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1990


> Allow comma-separated or multiple directories to be specified for 
> FileInputFormat
> -
>
> Key: FLINK-3655
> URL: https://issues.apache.org/jira/browse/FLINK-3655
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.0.0
>Reporter: Gna Phetsarath
>Assignee: Fabian Hueske
>Priority: Major
>  Labels: starter
> Fix For: 1.5.0
>
>
> Allow comma-separated or multiple directories to be specified for 
> FileInputFormat so that a DataSource will process the directories 
> sequentially.
>
> env.readFile("/data/2016/01/01/*/*,/data/2016/01/02/*/*,/data/2016/01/03/*/*")
> in Scala
>env.readFile(paths: Seq[String])
> or 
>   env.readFile(path: String, otherPaths: String*)
> Wildcard support would be a bonus.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9183) Kafka consumer docs to warn about idle partitions

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439933#comment-16439933
 ] 

ASF GitHub Bot commented on FLINK-9183:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5858


> Kafka consumer docs to warn about idle partitions
> -
>
> Key: FLINK-9183
> URL: https://issues.apache.org/jira/browse/FLINK-9183
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Juho Autio
>Assignee: Juho Autio
>Priority: Major
>
> Looks like the bug FLINK-5479 entirely prevents 
> FlinkKafkaConsumerBase#assignTimestampsAndWatermarks from being used if there 
> are any idle partitions. It would be nice to mention in the documentation that 
> currently this requires all subscribed partitions to have a constant stream 
> of data with growing timestamps. When watermark gets stalled on an idle 
> partition it blocks everything.
>  
> Link to current documentation:
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kafka.html#kafka-consumers-and-timestamp-extractionwatermark-emission]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9089) Upgrade Orc dependency to 1.4.3

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439932#comment-16439932
 ] 

ASF GitHub Bot commented on FLINK-9089:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5826


> Upgrade Orc dependency to 1.4.3
> ---
>
> Key: FLINK-9089
> URL: https://issues.apache.org/jira/browse/FLINK-9089
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> Currently flink-orc uses Orc 1.4.1 release.
> This issue upgrades to Orc 1.4.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5826: [FLINK-9089] Upgrade Orc dependency to 1.4.3

2018-04-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5826


---


[GitHub] flink pull request #1990: [FLINK-3655] Multiple File Paths for InputFileForm...

2018-04-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1990


---


[GitHub] flink pull request #5858: [FLINK-9183] [Documentation] Kafka consumer docs t...

2018-04-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5858


---


[jira] [Commented] (FLINK-8955) Port ClassLoaderITCase to flip6

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439920#comment-16439920
 ] 

ASF GitHub Bot commented on FLINK-8955:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5780
  
Travis is still failing with mysterious error messages:

```
15:58:07.061 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (integration-tests) 
on project flink-tests_2.11: ExecutionException: java.lang.RuntimeException: 
The forked VM terminated without properly saying goodbye. VM crash or 
System.exit called?
15:58:07.061 [ERROR] Command was /bin/sh -c cd 
/home/travis/build/apache/flink/flink-tests/target && 
/usr/lib/jvm/java-8-oracle/jre/bin/java -Xms256m -Xmx2048m -Dmvn.forkNumber=1 
-XX:+UseG1GC -jar 
/home/travis/build/apache/flink/flink-tests/target/surefire/surefirebooter5397929458643043612.jar
 
/home/travis/build/apache/flink/flink-tests/target/surefire/surefire4818404066082555175tmp
 
/home/travis/build/apache/flink/flink-tests/target/surefire/surefire_1027420331622776884606tmp
```


> Port ClassLoaderITCase to flip6
> ---
>
> Key: FLINK-8955
> URL: https://issues.apache.org/jira/browse/FLINK-8955
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5780: [FLINK-8955][tests] Port ClassLoaderITCase to flip6

2018-04-16 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5780
  
Travis is still failing with mysterious error messages:

```
15:58:07.061 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (integration-tests) 
on project flink-tests_2.11: ExecutionException: java.lang.RuntimeException: 
The forked VM terminated without properly saying goodbye. VM crash or 
System.exit called?
15:58:07.061 [ERROR] Command was /bin/sh -c cd 
/home/travis/build/apache/flink/flink-tests/target && 
/usr/lib/jvm/java-8-oracle/jre/bin/java -Xms256m -Xmx2048m -Dmvn.forkNumber=1 
-XX:+UseG1GC -jar 
/home/travis/build/apache/flink/flink-tests/target/surefire/surefirebooter5397929458643043612.jar
 
/home/travis/build/apache/flink/flink-tests/target/surefire/surefire4818404066082555175tmp
 
/home/travis/build/apache/flink/flink-tests/target/surefire/surefire_1027420331622776884606tmp
```


---


[jira] [Closed] (FLINK-9045) LocalEnvironment with web UI does not work with flip-6

2018-04-16 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-9045.
---
Resolution: Fixed

The logging was adapted to match previous versions; 
{{createLocalEnvironmentWithWebUI}} will start the UI on port 8081.

master: 27be32e8a44e3afcce9a17e3b95767869f56ab61
1.5: a241d2af7d640407974dfa460f4693d1f75a5ff2

> LocalEnvironment with web UI does not work with flip-6
> --
>
> Key: FLINK-9045
> URL: https://issues.apache.org/jira/browse/FLINK-9045
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The following code is supposed to start a web UI when executing in-IDE. Does 
> not work with flip-6, as far as I can see.
> {code}
> final ExecutionEnvironment env = 
> ExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-8370) Port AbstractAggregatingMetricsHandler to RestServerEndpoint

2018-04-16 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-8370.
---
Resolution: Fixed

master: c0410d801e406e77b1e6e7134224f7946906a49f
1.5: 23d454364d208d3ce8a55422edaaca365a1c9c79

> Port AbstractAggregatingMetricsHandler to RestServerEndpoint
> 
>
> Key: FLINK-8370
> URL: https://issues.apache.org/jira/browse/FLINK-8370
> Project: Flink
>  Issue Type: Sub-task
>  Components: REST
>Affects Versions: 1.5.0
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Port subclasses of 
> {{org.apache.flink.runtime.rest.handler.legacy.metrics.AbstractAggregatingMetricsHandler}}
>  to new FLIP-6 {{RestServerEndpoint}}.
> The following handlers need to be migrated:
> * {{AggregatingJobsMetricsHandler}}
> * {{AggregatingSubtasksMetricsHandler}}
> * {{AggregatingTaskManagersMetricsHandler}}
> New handlers should then be registered in {{WebMonitorEndpoint}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-9177) Link under Installing Mesos goes to a 404 page

2018-04-16 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-9177.
---
   Resolution: Fixed
Fix Version/s: 1.5.0

master: 4f73c8dcbcb8c03339882cd3259549ea5b6e38cf
1.5: 39e9e19c5d663d5e69a845af8b00d7de20380101

> Link under Installing Mesos goes to a 404 page
> --
>
> Key: FLINK-9177
> URL: https://issues.apache.org/jira/browse/FLINK-9177
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Arunan Sugunakumar
>Assignee: Arunan Sugunakumar
>Priority: Trivial
> Fix For: 1.5.0
>
>
> Under "Clusters and Deployment - Mesos" 
> [guide|https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/deployment/mesos.html#installing-mesos],
>  "Installing Mesos" points to an old link. 
> The following link 
> [http://mesos.apache.org/documentation/latest/getting-started/]
> should be changed to 
> [http://mesos.apache.org/documentation/getting-started/|http://mesos.apache.org/getting-started/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (FLINK-9177) Link under Installing Mesos goes to a 404 page

2018-04-16 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-9177:
-

> Link under Installing Mesos goes to a 404 page
> --
>
> Key: FLINK-9177
> URL: https://issues.apache.org/jira/browse/FLINK-9177
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Arunan Sugunakumar
>Assignee: Arunan Sugunakumar
>Priority: Trivial
> Fix For: 1.5.0
>
>
> Under "Clusters and Deployment - Mesos" 
> [guide|https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/deployment/mesos.html#installing-mesos],
>  "Installing Mesos" points to an old link. 
> The following link 
> [http://mesos.apache.org/documentation/latest/getting-started/]
> should be changed to 
> [http://mesos.apache.org/documentation/getting-started/|http://mesos.apache.org/getting-started/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-8961) Port JobRetrievalITCase to flip6

2018-04-16 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-8961.
---
Resolution: Fixed

master: 2266eb010b377450aa1f01ec589fe8758e9a0c6d
1.5: 2cc77f9f6e999238ae9dd7d24712e5d7a397f4cb

> Port JobRetrievalITCase to flip6
> 
>
> Key: FLINK-8961
> URL: https://issues.apache.org/jira/browse/FLINK-8961
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-16 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-9156.
---
Resolution: Fixed

master: 4f0fa0b3f992da4474e2703a54a8445cf1e29856
1.5: 47909f466b9c9ee1f4caf94e9f6862a21b628817

> CLI does not respect -m,--jobmanager option
> ---
>
> Key: FLINK-9156
> URL: https://issues.apache.org/jira/browse/FLINK-9156
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0
> Environment: 1.5 RC1
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>
> *Description*
> The CLI does not respect the {{-m, --jobmanager}} option. For example 
> submitting a job using 
> {noformat}
> bin/flink run -m 172.31.35.68:6123 [...]
> {noformat}
> results in the client trying to connect to what is specified in 
> {{flink-conf.yaml}} ({{jobmanager.rpc.address, jobmanager.rpc.port}}).
> *Stacktrace*
> {noformat}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Could not submit 
> job 99b0a48ec5cb4086740b1ffd38efd1af.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
>   at 
> org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
>   at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>   at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkException: Could not upload job jar files.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:326)
>   at 
> java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
>   at 
> java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
>   ... 7 more
> Caused by: org.apache.flink.util.FlinkException: Could not upload job jar 
> files.
>   ... 10 more
> Caused by: java.io.IOException: Could not connect to BlobServer at address 
> /127.0.0.1:41909
>   at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:124)
>   at 
> org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:547)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:324)
>   ... 9 more
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> 

[jira] [Commented] (FLINK-9173) RestClient - Received response is abnormal

2018-04-16 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439904#comment-16439904
 ] 

Chesnay Schepler commented on FLINK-9173:
-

Could you try again with the latest master/release-1.5 and provide us with the 
stacktrace of the client? I just pushed some changes that should give us a 
better error message.

> RestClient - Received response is abnormal
> --
>
> Key: FLINK-9173
> URL: https://issues.apache.org/jira/browse/FLINK-9173
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST
>Affects Versions: 1.5.0
> Environment: OS:    CentOS 6.8
> JAVA:    1.8.0_161-b12
> maven-plugin:     spring-boot-maven-plugin
> Spring-boot:      1.5.10.RELEASE
>Reporter: Bob Lau
>Priority: Major
>
> The system prints the exception log as follows:
>  
> {code:java}
> //代码占位符
> 09:07:20.755 tysc_log [Flink-RestClusterClient-IO-thread-4] ERROR 
> o.a.flink.runtime.rest.RestClient - Received response was neither of the 
> expected type ([simple type, class 
> org.apache.flink.runtime.rest.messages.job.JobExecutionResultResponseBody]) 
> nor an error. 
> Response=org.apache.flink.runtime.rest.RestClient$JsonResponse@2ac43968
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException:
>  Unrecognized field "status" (class 
> org.apache.flink.runtime.rest.messages.ErrorResponseBody), not marked as 
> ignorable (one known property: "errors"])
> at [Source: N/A; line: -1, column: -1] (through reference chain: 
> org.apache.flink.runtime.rest.messages.ErrorResponseBody["status"])
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:62)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.DeserializationContext.reportUnknownProperty(DeserializationContext.java:851)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1085)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1392)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperties(BeanDeserializerBase.java:1346)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:455)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1127)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:298)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:133)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3779)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2050)
> at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:2547)
> at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:225)
> at 
> org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:210)
> at 
> java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
> at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
> at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
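The failure in the trace above comes from strict deserialization: the client parses the response against an expected type, falls back to `ErrorResponseBody`, and that fallback only declares an `errors` field, so a payload carrying `status` is rejected. The same strict-field behaviour can be illustrated with a stdlib-only sketch (class and method names here are hypothetical, not Flink's actual code):

```java
import java.util.Map;
import java.util.Set;

public class StrictDecoder {
    // Fields the target type declares; anything else is rejected,
    // mirroring Jackson's default FAIL_ON_UNKNOWN_PROPERTIES behaviour.
    private final Set<String> knownFields;

    public StrictDecoder(Set<String> knownFields) {
        this.knownFields = knownFields;
    }

    /** Throws if the payload contains a field the target type does not declare. */
    public void validate(Map<String, Object> payload) {
        for (String field : payload.keySet()) {
            if (!knownFields.contains(field)) {
                throw new IllegalArgumentException(
                    "Unrecognized field \"" + field + "\" (known properties: " + knownFields + ")");
            }
        }
    }

    public static void main(String[] args) {
        // ErrorResponseBody only declares "errors"; a payload with "status" fails.
        StrictDecoder errorBody = new StrictDecoder(Set.of("errors"));
        errorBody.validate(Map.of("errors", "..."));         // accepted
        try {
            errorBody.validate(Map.of("status", "FAILED"));  // rejected, like the trace above
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```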
>  
>  In the development environment,such as, Eclipse Luna.
> The job of the application can be submitted to the standalone cluster, via 
> Spring boot Application main method.
> But mvn spring-boot:run will print this exception.
> Local operation system is Mac OSX , the jdk version is 1.8.0_151.
>  





[jira] [Commented] (FLINK-8961) Port JobRetrievalITCase to flip6

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439900#comment-16439900
 ] 

ASF GitHub Bot commented on FLINK-8961:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5730


> Port JobRetrievalITCase to flip6
> 
>
> Key: FLINK-8961
> URL: https://issues.apache.org/jira/browse/FLINK-8961
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>






[jira] [Commented] (FLINK-9177) Link under Installing Mesos goes to a 404 page

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439899#comment-16439899
 ] 

ASF GitHub Bot commented on FLINK-9177:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5850


> Link under Installing Mesos goes to a 404 page
> --
>
> Key: FLINK-9177
> URL: https://issues.apache.org/jira/browse/FLINK-9177
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Arunan Sugunakumar
>Assignee: Arunan Sugunakumar
>Priority: Trivial
>
> Under "Clusters and Deployment - Mesos" 
> [guide|https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/deployment/mesos.html#installing-mesos],
>  installing mesos points to an old link. 
> The following link 
> [http://mesos.apache.org/documentation/latest/getting-started/]
> should be changed to 
> [http://mesos.apache.org/documentation/getting-started/|http://mesos.apache.org/getting-started/]
>  





[jira] [Commented] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439898#comment-16439898
 ] 

ASF GitHub Bot commented on FLINK-9156:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5838


> CLI does not respect -m,--jobmanager option
> ---
>
> Key: FLINK-9156
> URL: https://issues.apache.org/jira/browse/FLINK-9156
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0
> Environment: 1.5 RC1
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>
> *Description*
> The CLI does not respect the {{-m, --jobmanager}} option. For example 
> submitting a job using 
> {noformat}
> bin/flink run -m 172.31.35.68:6123 [...]
> {noformat}
> results in the client trying to connect to what is specified in 
> {{flink-conf.yaml}} ({{jobmanager.rpc.address, jobmanager.rpc.port}}).
> *Stacktrace*
> {noformat}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Could not submit 
> job 99b0a48ec5cb4086740b1ffd38efd1af.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
>   at 
> org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
>   at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>   at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkException: Could not upload job jar files.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:326)
>   at 
> java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
>   at 
> java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
>   ... 7 more
> Caused by: org.apache.flink.util.FlinkException: Could not upload job jar 
> files.
>   ... 10 more
> Caused by: java.io.IOException: Could not connect to BlobServer at address 
> /127.0.0.1:41909
>   at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:124)
>   at 
> org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:547)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:324)
>   ... 9 more
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> 
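The bug report is that the address given via `-m host:port` was ignored in favour of `jobmanager.rpc.address`/`jobmanager.rpc.port` from `flink-conf.yaml`. A minimal sketch of the override the option is supposed to perform, splitting the CLI value into host and port (an illustrative helper, not Flink's actual implementation):

```java
import java.net.InetSocketAddress;

public class JobManagerOption {
    /**
     * Parses the value of the CLI's -m/--jobmanager option, e.g. "172.31.35.68:6123".
     * The resulting address should take precedence over jobmanager.rpc.address and
     * jobmanager.rpc.port from flink-conf.yaml.
     */
    public static InetSocketAddress parse(String value) {
        int idx = value.lastIndexOf(':');
        if (idx < 0) {
            throw new IllegalArgumentException("Expected host:port, got: " + value);
        }
        String host = value.substring(0, idx);
        int port = Integer.parseInt(value.substring(idx + 1));
        return InetSocketAddress.createUnresolved(host, port);
    }

    public static void main(String[] args) {
        InetSocketAddress addr = parse("172.31.35.68:6123");
        System.out.println(addr.getHostString() + " port " + addr.getPort());
    }
}
```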

[jira] [Commented] (FLINK-8370) Port AbstractAggregatingMetricsHandler to RestServerEndpoint

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439902#comment-16439902
 ] 

ASF GitHub Bot commented on FLINK-8370:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5805


> Port AbstractAggregatingMetricsHandler to RestServerEndpoint
> 
>
> Key: FLINK-8370
> URL: https://issues.apache.org/jira/browse/FLINK-8370
> Project: Flink
>  Issue Type: Sub-task
>  Components: REST
>Affects Versions: 1.5.0
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Port subclasses of 
> {{org.apache.flink.runtime.rest.handler.legacy.metrics.AbstractAggregatingMetricsHandler}}
>  to new FLIP-6 {{RestServerEndpoint}}.
> The following handlers need to be migrated:
> * {{AggregatingJobsMetricsHandler}}
> * {{AggregatingSubtasksMetricsHandler}}
> * {{AggregatingTaskManagersMetricsHandler}}
> New handlers should then be registered in {{WebMonitorEndpoint}}.





[jira] [Commented] (FLINK-9045) LocalEnvironment with web UI does not work with flip-6

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439901#comment-16439901
 ] 

ASF GitHub Bot commented on FLINK-9045:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5814


> LocalEnvironment with web UI does not work with flip-6
> --
>
> Key: FLINK-9045
> URL: https://issues.apache.org/jira/browse/FLINK-9045
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The following code is supposed to start a web UI when executing in-IDE. Does 
> not work with flip-6, as far as I can see.
> {code}
> final ExecutionEnvironment env = 
> ExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
> {code}
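One common reason `createLocalEnvironmentWithWebUI` yields no UI is a missing classpath dependency: the local web UI typically requires `flink-runtime-web` to be present. A sketch of the Maven dependency to add to the job's own pom (artifact name with the Scala 2.11 suffix as used on the 1.5 line; the version is a placeholder that must match the Flink version in use):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-runtime-web_2.11</artifactId>
  <version>${flink.version}</version>
</dependency>
```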





[GitHub] flink pull request #5850: [FLINK-9177] [Documentation] Update Mesos getting ...

2018-04-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5850


---


[GitHub] flink pull request #5805: [FLINK-8370][REST] Port AggregatingMetricsHandler ...

2018-04-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5805


---


[GitHub] flink pull request #5838: [FLINK-9156][REST][CLI] Update --jobmanager option...

2018-04-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5838


---


[GitHub] flink pull request #5730: [FLINK-8961][tests] Port JobRetrievalITCase to fli...

2018-04-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5730


---


[GitHub] flink pull request #5814: [FLINK-9045][REST] Add logging message for web UI ...

2018-04-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5814


---


[GitHub] flink issue #5848: Minor cleanup of Java example code for AsyncFunctions

2018-04-16 Thread tedyu
Github user tedyu commented on the issue:

https://github.com/apache/flink/pull/5848
  
lgtm


---


[GitHub] flink issue #5848: Minor cleanup of Java example code for AsyncFunctions

2018-04-16 Thread kkrugler
Github user kkrugler commented on the issue:

https://github.com/apache/flink/pull/5848
  
Hi @tedyu - not able to assign you as a reviewer, I guess I would need more 
Git fu to be able to do that.


---


[GitHub] flink issue #5848: Minor cleanup of Java example code for AsyncFunctions

2018-04-16 Thread kkrugler
Github user kkrugler commented on the issue:

https://github.com/apache/flink/pull/5848
  
BTW, how do people check .md editing results before pushing? I've tried a 
few different plugins and command line utilities, but the mix of HTML and 
markdown has meant none of them render properly.


---


[GitHub] flink pull request #5859: [FLINK-9186][build] Enable dependenvy convergence ...

2018-04-16 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5859

[FLINK-9186][build] Enable dependency convergence for flink-libraries

## What is the purpose of the change

This PR enables dependency-convergence for `flink-libraries`. There is only 
a single violation in `flink-ml` making this a very low-hanging fruit.

## Brief change log

* pin version of scala-xml in flink-ml to 1.0.5 (conflict was 1.0.5 vs 
1.0.2)
* remove disabling pom entry from flink-libraries
* remove enabling pom entries from flink-table/flink-sql-client (now 
redundant)
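Pinning a transitive version like scala-xml is conventionally done through `dependencyManagement`; a generic sketch of what such a pin looks like in a pom (the exact placement in the PR may differ):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Resolve the 1.0.5 vs 1.0.2 conflict by pinning one version. -->
    <dependency>
      <groupId>org.scala-lang.modules</groupId>
      <artifactId>scala-xml_2.11</artifactId>
      <version>1.0.5</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```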

## Verifying this change

* run `mvn validate` in flink-libraries to check convergence
* run `flink-ml` tests to cover dependency change

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (**yes**)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9186

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5859.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5859


commit 4380fba1c3befc9fe88a7f869ebd81f0e6b8b40f
Author: zentol 
Date:   2018-04-16T17:21:14Z

[FLINK-9186][build] Enable dependency convergence for flink-libraries




---


[jira] [Commented] (FLINK-9186) Enable dependency convergence for flink-libraries

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439755#comment-16439755
 ] 

ASF GitHub Bot commented on FLINK-9186:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5859

[FLINK-9186][build] Enable dependency convergence for flink-libraries

## What is the purpose of the change

This PR enables dependency-convergence for `flink-libraries`. There is only 
a single violation in `flink-ml` making this a very low-hanging fruit.

## Brief change log

* pin version of scala-xml in flink-ml to 1.0.5 (conflict was 1.0.5 vs 
1.0.2)
* remove disabling pom entry from flink-libraries
* remove enabling pom entries from flink-table/flink-sql-client (now 
redundant)

## Verifying this change

* run `mvn validate` in flink-libraries to check convergence
* run `flink-ml` tests to cover dependency change

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (**yes**)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9186

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5859.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5859


commit 4380fba1c3befc9fe88a7f869ebd81f0e6b8b40f
Author: zentol 
Date:   2018-04-16T17:21:14Z

[FLINK-9186][build] Enable dependency convergence for flink-libraries




> Enable dependency convergence for flink-libraries
> -
>
> Key: FLINK-9186
> URL: https://issues.apache.org/jira/browse/FLINK-9186
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.5.0
>
>






[jira] [Created] (FLINK-9186) Enable dependency convergence for flink-libraries

2018-04-16 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9186:
---

 Summary: Enable dependency convergence for flink-libraries
 Key: FLINK-9186
 URL: https://issues.apache.org/jira/browse/FLINK-9186
 Project: Flink
  Issue Type: Sub-task
  Components: Build System
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.5.0








[jira] [Commented] (FLINK-9183) Kafka consumer docs to warn about idle partitions

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439700#comment-16439700
 ] 

ASF GitHub Bot commented on FLINK-9183:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5858
  
I'll merge it on `master`, `release-1.5` and `release-1.4`


> Kafka consumer docs to warn about idle partitions
> -
>
> Key: FLINK-9183
> URL: https://issues.apache.org/jira/browse/FLINK-9183
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Juho Autio
>Assignee: Juho Autio
>Priority: Major
>
> Looks like the bug FLINK-5479 is entirely preventing 
> FlinkKafkaConsumerBase#assignTimestampsAndWatermarks to be used if there are 
> any idle partitions. It would be nice to mention in documentation that 
> currently this requires all subscribed partitions to have a constant stream 
> of data with growing timestamps. When watermark gets stalled on an idle 
> partition it blocks everything.
>  
> Link to current documentation:
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kafka.html#kafka-consumers-and-timestamp-extractionwatermark-emission]
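The stall described above follows from how an operator combines per-partition watermarks: the effective watermark is the minimum across all partitions, so one partition that never advances holds everything back. A tiny sketch of that minimum rule (illustrative only, not Flink's actual watermark code):

```java
import java.util.Arrays;

public class MinWatermark {
    // The operator's effective watermark is the minimum over all partition
    // watermarks; an idle partition stuck at Long.MIN_VALUE stalls it forever.
    public static long combined(long[] partitionWatermarks) {
        return Arrays.stream(partitionWatermarks).min().orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        long[] allActive = {1000L, 2000L, 1500L};
        long[] withIdle = {1000L, 2000L, Long.MIN_VALUE}; // one idle partition

        System.out.println(combined(allActive)); // 1000: slowest active partition
        System.out.println(combined(withIdle));  // Long.MIN_VALUE: watermark blocked
    }
}
```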





[GitHub] flink issue #5858: [FLINK-9183] [Documentation] Kafka consumer docs to warn ...

2018-04-16 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5858
  
I'll merge it on `master`, `release-1.5` and `release-1.4`


---


[jira] [Resolved] (FLINK-9177) Link under Installing Mesos goes to a 404 page

2018-04-16 Thread Arunan Sugunakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arunan Sugunakumar resolved FLINK-9177.
---
Resolution: Fixed

> Link under Installing Mesos goes to a 404 page
> --
>
> Key: FLINK-9177
> URL: https://issues.apache.org/jira/browse/FLINK-9177
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Arunan Sugunakumar
>Assignee: Arunan Sugunakumar
>Priority: Trivial
>
> Under "Clusters and Deployment - Mesos" 
> [guide|https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/deployment/mesos.html#installing-mesos],
>  installing mesos points to an old link. 
> The following link 
> [http://mesos.apache.org/documentation/latest/getting-started/]
> should be changed to 
> [http://mesos.apache.org/documentation/getting-started/|http://mesos.apache.org/getting-started/]
>  





[jira] [Commented] (FLINK-9183) Kafka consumer docs to warn about idle partitions

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439669#comment-16439669
 ] 

ASF GitHub Bot commented on FLINK-9183:
---

Github user juhoautio commented on the issue:

https://github.com/apache/flink/pull/5858
  
IMHO would be good to have this in 1.5 docs when it goes out. Current 
master seems to aim at 1.6.


> Kafka consumer docs to warn about idle partitions
> -
>
> Key: FLINK-9183
> URL: https://issues.apache.org/jira/browse/FLINK-9183
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Juho Autio
>Assignee: Juho Autio
>Priority: Major
>
> Looks like the bug FLINK-5479 is entirely preventing 
> FlinkKafkaConsumerBase#assignTimestampsAndWatermarks to be used if there are 
> any idle partitions. It would be nice to mention in documentation that 
> currently this requires all subscribed partitions to have a constant stream 
> of data with growing timestamps. When watermark gets stalled on an idle 
> partition it blocks everything.
>  
> Link to current documentation:
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kafka.html#kafka-consumers-and-timestamp-extractionwatermark-emission]





[jira] [Created] (FLINK-9185) Potential null dereference in PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives

2018-04-16 Thread Ted Yu (JIRA)
Ted Yu created FLINK-9185:
-

 Summary: Potential null dereference in 
PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives
 Key: FLINK-9185
 URL: https://issues.apache.org/jira/browse/FLINK-9185
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu


{code}
if (alternative != null
  && alternative.hasState()
  && alternative.size() == 1
  && approveFun.apply(reference, alternative.iterator().next())) {
{code}
The return value from approveFun.apply would be unboxed.
We should check that the return value is not null.
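The hazard is that a `Boolean`-returning function can return `null`, and using that return directly in an `if` condition unboxes it, throwing a `NullPointerException`. A self-contained sketch of the failure and the null-safe form suggested above (the `approveFun` here is a stand-in, not the actual Flink field):

```java
import java.util.function.BiFunction;

public class NullSafeApprove {
    public static void main(String[] args) {
        // A Boolean-returning function that can legally return null.
        BiFunction<String, String, Boolean> approveFun = (reference, alternative) -> null;

        // Using the return value directly in a condition unboxes null -> NPE.
        boolean threw = false;
        try {
            if (approveFun.apply("ref", "alt")) { /* never reached */ }
        } catch (NullPointerException e) {
            threw = true;
        }
        System.out.println("unboxing null threw NPE: " + threw);

        // Null-safe check: treat a null return as "not approved".
        boolean approved = Boolean.TRUE.equals(approveFun.apply("ref", "alt"));
        System.out.println("null-safe result: " + approved);
    }
}
```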





[GitHub] flink issue #5858: [FLINK-9183] [Documentation] Kafka consumer docs to warn ...

2018-04-16 Thread juhoautio
Github user juhoautio commented on the issue:

https://github.com/apache/flink/pull/5858
  
IMHO would be good to have this in 1.5 docs when it goes out. Current 
master seems to aim at 1.6.


---


[jira] [Commented] (FLINK-9183) Kafka consumer docs to warn about idle partitions

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439663#comment-16439663
 ] 

ASF GitHub Bot commented on FLINK-9183:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5858
  
merging


> Kafka consumer docs to warn about idle partitions
> -
>
> Key: FLINK-9183
> URL: https://issues.apache.org/jira/browse/FLINK-9183
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Juho Autio
>Assignee: Juho Autio
>Priority: Major
>
> Looks like the bug FLINK-5479 is entirely preventing 
> FlinkKafkaConsumerBase#assignTimestampsAndWatermarks to be used if there are 
> any idle partitions. It would be nice to mention in documentation that 
> currently this requires all subscribed partitions to have a constant stream 
> of data with growing timestamps. When watermark gets stalled on an idle 
> partition it blocks everything.
>  
> Link to current documentation:
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kafka.html#kafka-consumers-and-timestamp-extractionwatermark-emission]





[jira] [Commented] (FLINK-9089) Upgrade Orc dependency to 1.4.3

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439664#comment-16439664
 ] 

ASF GitHub Bot commented on FLINK-9089:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5826
  
merging


> Upgrade Orc dependency to 1.4.3
> ---
>
> Key: FLINK-9089
> URL: https://issues.apache.org/jira/browse/FLINK-9089
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> Currently flink-orc uses Orc 1.4.1 release.
> This issue upgrades to Orc 1.4.3





[GitHub] flink issue #5858: [FLINK-9183] [Documentation] Kafka consumer docs to warn ...

2018-04-16 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5858
  
merging


---


[GitHub] flink issue #5826: [FLINK-9089] Upgrade Orc dependency to 1.4.3

2018-04-16 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5826
  
merging


---


[jira] [Resolved] (FLINK-9145) Website build is broken

2018-04-16 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-9145.
-
   Resolution: Fixed
Fix Version/s: 1.5.0

Fixed in 1.6.0: 185b904aa82dd99a178de0688fa67afba0edf679
Fixed in 1.5.0: c6d45b9225987537493472f20e933a81b63c9cde

> Website build is broken
> ---
>
> Key: FLINK-9145
> URL: https://issues.apache.org/jira/browse/FLINK-9145
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The javadoc generation fails with a dependency-convergence error in 
> flink-json:
> {code}
> [WARNING] 
> Dependency convergence error for commons-beanutils:commons-beanutils:1.8.0 
> paths to dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-digester:commons-digester:1.8.1
> +-commons-beanutils:commons-beanutils:1.8.0
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-beanutils:commons-beanutils:1.8.3
> [WARNING] 
> Dependency convergence error for org.codehaus.janino:commons-compiler:3.0.7 
> paths to dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
>   +-org.codehaus.janino:commons-compiler:3.0.7
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-org.codehaus.janino:commons-compiler:2.7.6
> [WARNING] 
> Dependency convergence error for commons-lang:commons-lang:2.6 paths to 
> dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-lang:commons-lang:2.6
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-net.hydromatic:aggdesigner-algorithm:6.0
> +-commons-lang:commons-lang:2.4
> [WARNING] 
> Dependency convergence error for org.codehaus.janino:janino:3.0.7 paths to 
> dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-org.codehaus.janino:janino:2.7.6
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {code}
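The warnings above are produced by the Maven Enforcer Plugin's `dependencyConvergence` rule, which fails the build when two paths pull different versions of the same artifact. A generic shape of the plugin configuration that triggers such checks (not Flink's exact pom):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>dependency-convergence</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <!-- Fail the build on diverging transitive dependency versions. -->
          <dependencyConvergence/>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```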





[jira] [Commented] (FLINK-9145) Website build is broken

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439648#comment-16439648
 ] 

ASF GitHub Bot commented on FLINK-9145:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5853


> Website build is broken
> ---
>
> Key: FLINK-9145
> URL: https://issues.apache.org/jira/browse/FLINK-9145
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The javadoc generation fails with a dependency-convergence error in 
> flink-json:
> {code}
> [WARNING] 
> Dependency convergence error for commons-beanutils:commons-beanutils:1.8.0 
> paths to dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-digester:commons-digester:1.8.1
> +-commons-beanutils:commons-beanutils:1.8.0
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-beanutils:commons-beanutils:1.8.3
> [WARNING] 
> Dependency convergence error for org.codehaus.janino:commons-compiler:3.0.7 
> paths to dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
>   +-org.codehaus.janino:commons-compiler:3.0.7
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-org.codehaus.janino:commons-compiler:2.7.6
> [WARNING] 
> Dependency convergence error for commons-lang:commons-lang:2.6 paths to 
> dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-lang:commons-lang:2.6
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-net.hydromatic:aggdesigner-algorithm:6.0
> +-commons-lang:commons-lang:2.4
> [WARNING] 
> Dependency convergence error for org.codehaus.janino:janino:3.0.7 paths to 
> dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-org.codehaus.janino:janino:2.7.6
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {code}
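Such convergence failures are typically resolved by pinning each conflicting artifact to a single version, e.g. in the module's `dependencyManagement`. A hedged sketch using the coordinates from the log above (illustrative only, not the fix that was actually merged for this issue):

```xml
<!-- Illustrative pom.xml fragment: pin the artifacts the enforcer flagged
     so that the DependencyConvergence rule sees a single version of each. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
      <version>1.8.3</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.janino</groupId>
      <artifactId>commons-compiler</artifactId>
      <version>3.0.7</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.janino</groupId>
      <artifactId>janino</artifactId>
      <version>3.0.7</version>
    </dependency>
    <dependency>
      <groupId>commons-lang</groupId>
      <artifactId>commons-lang</artifactId>
      <version>2.6</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```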



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5853: [FLINK-9145] [table] Clean up flink-table dependen...

2018-04-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5853


---


[jira] [Commented] (FLINK-9145) Website build is broken

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439642#comment-16439642
 ] 

ASF GitHub Bot commented on FLINK-9145:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/5853
  
Thank you @zentol. Will implement your comment and merge this...


> Website build is broken
> ---
>
> Key: FLINK-9145
> URL: https://issues.apache.org/jira/browse/FLINK-9145
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Timo Walther
>Priority: Blocker
>
> The javadoc generation fails with a dependency-convergence error in 
> flink-json:
> {code}
> [WARNING] 
> Dependency convergence error for commons-beanutils:commons-beanutils:1.8.0 
> paths to dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-digester:commons-digester:1.8.1
> +-commons-beanutils:commons-beanutils:1.8.0
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-beanutils:commons-beanutils:1.8.3
> [WARNING] 
> Dependency convergence error for org.codehaus.janino:commons-compiler:3.0.7 
> paths to dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
>   +-org.codehaus.janino:commons-compiler:3.0.7
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-org.codehaus.janino:commons-compiler:2.7.6
> [WARNING] 
> Dependency convergence error for commons-lang:commons-lang:2.6 paths to 
> dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-commons-configuration:commons-configuration:1.7
>   +-commons-lang:commons-lang:2.6
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-net.hydromatic:aggdesigner-algorithm:6.0
> +-commons-lang:commons-lang:2.4
> [WARNING] 
> Dependency convergence error for org.codehaus.janino:janino:3.0.7 paths to 
> dependency are:
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.apache.calcite:calcite-core:1.16.0
>   +-org.codehaus.janino:janino:2.7.6
> and
> +-org.apache.flink:flink-json:1.6-SNAPSHOT
>   +-org.apache.flink:flink-table_2.11:1.6-SNAPSHOT
> +-org.codehaus.janino:janino:3.0.7
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {code}





[GitHub] flink issue #5853: [FLINK-9145] [table] Clean up flink-table dependencies

2018-04-16 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/5853
  
Thank you @zentol. Will implement your comment and merge this...


---


[jira] [Commented] (FLINK-9183) Kafka consumer docs to warn about idle partitions

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439606#comment-16439606
 ] 

ASF GitHub Bot commented on FLINK-9183:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5858
  
Thanks for the PR @juhoautio! This looks good.
+1 to merge


> Kafka consumer docs to warn about idle partitions
> -
>
> Key: FLINK-9183
> URL: https://issues.apache.org/jira/browse/FLINK-9183
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Juho Autio
>Assignee: Juho Autio
>Priority: Major
>
> Looks like the bug FLINK-5479 entirely prevents 
> FlinkKafkaConsumerBase#assignTimestampsAndWatermarks from being used if there 
> are any idle partitions. It would be nice to mention in the documentation 
> that this currently requires all subscribed partitions to have a constant 
> stream of data with growing timestamps. When the watermark gets stalled on an 
> idle partition, it blocks everything.
>  
> Link to current documentation:
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kafka.html#kafka-consumers-and-timestamp-extractionwatermark-emission]
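The stall described above comes from the per-source watermark being the minimum over all partitions, so one idle partition pins it forever. A self-contained sketch of that arithmetic (plain Java; `overallWatermark` is an illustrative helper, not Flink's actual API):

```java
import java.util.Arrays;

public class WatermarkMinDemo {

    // The source-level watermark is the minimum of the per-partition
    // watermarks; an idle partition that never advances stays at
    // Long.MIN_VALUE and therefore pins the overall watermark.
    static long overallWatermark(long[] perPartitionWatermarks) {
        return Arrays.stream(perPartitionWatermarks).min().orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        long[] allActive = {1_000L, 2_000L, 3_000L};
        long[] oneIdle = {1_000L, 2_000L, Long.MIN_VALUE}; // third partition is idle
        System.out.println(overallWatermark(allActive)); // 1000
        System.out.println(overallWatermark(oneIdle));   // Long.MIN_VALUE: downstream stalls
    }
}
```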





[GitHub] flink issue #5858: [FLINK-9183] [Documentation] Kafka consumer docs to warn ...

2018-04-16 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/5858
  
Thanks for the PR @juhoautio! This looks good.
+1 to merge


---


[jira] [Commented] (FLINK-8703) Migrate tests from LocalFlinkMiniCluster to MiniClusterResource

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439564#comment-16439564
 ] 

ASF GitHub Bot commented on FLINK-8703:
---

Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/5665#discussion_r181777925
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebFrontendITCase.java
 ---
@@ -217,12 +231,12 @@ public void getTaskManagerLogAndStdoutFiles() {
@Test
public void getConfiguration() {
try {
-			String config = TestBaseUtils.getFromHTTP("http://localhost:" + port + "/jobmanager/config");
+			String config = TestBaseUtils.getFromHTTP("http://localhost:" + CLUSTER.getWebUIPort() + "/jobmanager/config");

			Map<String, String> conf = WebMonitorUtils.fromKeyValueJsonArray(config);
			assertEquals(
-				cluster.configuration().getString("taskmanager.numberOfTaskSlots", null),
-				conf.get(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS));
+				CLUSTER_CONFIGURATION.getString(ConfigConstants.LOCAL_START_WEBSERVER, null),
--- End diff --

Yeah, that's what I meant, the previous code was a bit strange


> Migrate tests from LocalFlinkMiniCluster to MiniClusterResource
> ---
>
> Key: FLINK-8703
> URL: https://issues.apache.org/jira/browse/FLINK-8703
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Aljoscha Krettek
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>






[GitHub] flink pull request #5665: [FLINK-8703][tests] Port WebFrontendITCase to Mini...

2018-04-16 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/5665#discussion_r181777925
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebFrontendITCase.java
 ---
@@ -217,12 +231,12 @@ public void getTaskManagerLogAndStdoutFiles() {
@Test
public void getConfiguration() {
try {
-			String config = TestBaseUtils.getFromHTTP("http://localhost:" + port + "/jobmanager/config");
+			String config = TestBaseUtils.getFromHTTP("http://localhost:" + CLUSTER.getWebUIPort() + "/jobmanager/config");

			Map<String, String> conf = WebMonitorUtils.fromKeyValueJsonArray(config);
			assertEquals(
-				cluster.configuration().getString("taskmanager.numberOfTaskSlots", null),
-				conf.get(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS));
+				CLUSTER_CONFIGURATION.getString(ConfigConstants.LOCAL_START_WEBSERVER, null),
--- End diff --

Yeah, that's what I meant, the previous code was a bit strange


---


[jira] [Commented] (FLINK-8703) Migrate tests from LocalFlinkMiniCluster to MiniClusterResource

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439561#comment-16439561
 ] 

ASF GitHub Bot commented on FLINK-8703:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5665#discussion_r181776571
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebFrontendITCase.java
 ---
@@ -217,12 +231,12 @@ public void getTaskManagerLogAndStdoutFiles() {
@Test
public void getConfiguration() {
try {
-			String config = TestBaseUtils.getFromHTTP("http://localhost:" + port + "/jobmanager/config");
+			String config = TestBaseUtils.getFromHTTP("http://localhost:" + CLUSTER.getWebUIPort() + "/jobmanager/config");

			Map<String, String> conf = WebMonitorUtils.fromKeyValueJsonArray(config);
			assertEquals(
-				cluster.configuration().getString("taskmanager.numberOfTaskSlots", null),
-				conf.get(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS));
+				CLUSTER_CONFIGURATION.getString(ConfigConstants.LOCAL_START_WEBSERVER, null),
--- End diff --

I wanted an option for which the configured value is different from the 
default. The default for `numberOfTaskSlots` is one, so the test would've 
passed even if the handler returned a vanilla configuration.


> Migrate tests from LocalFlinkMiniCluster to MiniClusterResource
> ---
>
> Key: FLINK-8703
> URL: https://issues.apache.org/jira/browse/FLINK-8703
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Aljoscha Krettek
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>






[GitHub] flink pull request #5665: [FLINK-8703][tests] Port WebFrontendITCase to Mini...

2018-04-16 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5665#discussion_r181776571
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebFrontendITCase.java
 ---
@@ -217,12 +231,12 @@ public void getTaskManagerLogAndStdoutFiles() {
@Test
public void getConfiguration() {
try {
-			String config = TestBaseUtils.getFromHTTP("http://localhost:" + port + "/jobmanager/config");
+			String config = TestBaseUtils.getFromHTTP("http://localhost:" + CLUSTER.getWebUIPort() + "/jobmanager/config");

			Map<String, String> conf = WebMonitorUtils.fromKeyValueJsonArray(config);
			assertEquals(
-				cluster.configuration().getString("taskmanager.numberOfTaskSlots", null),
-				conf.get(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS));
+				CLUSTER_CONFIGURATION.getString(ConfigConstants.LOCAL_START_WEBSERVER, null),
--- End diff --

I wanted an option for which the configured value is different from the 
default. The default for `numberOfTaskSlots` is one, so the test would've 
passed even if the handler returned a vanilla configuration.
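The point about defaults can be shown with a tiny sketch (hypothetical `get` helper, not Flink's actual `Configuration` class): when a lookup falls back to the default, an empty configuration is indistinguishable from one that explicitly sets the default value, so the assertion proves nothing.

```java
import java.util.HashMap;
import java.util.Map;

public class DefaultValuePitfall {

    static final String NUM_SLOTS = "taskmanager.numberOfTaskSlots";

    // Look up a key, falling back to the option's default ("1" for the
    // number of task slots) when it is absent.
    static String get(Map<String, String> conf, String key) {
        return conf.getOrDefault(key, "1");
    }

    public static void main(String[] args) {
        Map<String, String> vanilla = new HashMap<>();  // handler returned defaults only
        Map<String, String> explicit = new HashMap<>();
        explicit.put(NUM_SLOTS, "1");                   // value was actually configured
        // Both lookups return "1", so an equality assertion passes either way;
        // asserting on an option whose configured value differs from the
        // default avoids this false positive.
        System.out.println(get(vanilla, NUM_SLOTS).equals(get(explicit, NUM_SLOTS))); // true
    }
}
```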


---


[jira] [Commented] (FLINK-8703) Migrate tests from LocalFlinkMiniCluster to MiniClusterResource

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439554#comment-16439554
 ] 

ASF GitHub Bot commented on FLINK-8703:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5665#discussion_r181775179
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebFrontendITCase.java
 ---
@@ -127,14 +137,18 @@ public void testResponseHeaders() throws Exception {
Assert.assertEquals("application/json; charset=UTF-8", 
taskManagerConnection.getContentType());
 
// check headers in case of an error
-		URL notFoundJobUrl = new URL("http://localhost:" + port + "/jobs/dontexist");
+		URL notFoundJobUrl = new URL("http://localhost:" + CLUSTER.getWebUIPort() + "/jobs/dontexist");
HttpURLConnection notFoundJobConnection = (HttpURLConnection) 
notFoundJobUrl.openConnection();
notFoundJobConnection.setConnectTimeout(10);
notFoundJobConnection.connect();
if (notFoundJobConnection.getResponseCode() >= 400) {
// we don't set the content-encoding header

Assert.assertNull(notFoundJobConnection.getContentEncoding());
-   Assert.assertEquals("text/plain; charset=UTF-8", 
notFoundJobConnection.getContentType());
+   if (Objects.equals("flip6", 
System.getProperty("codebase"))) {
--- End diff --

this must be updated to "new"


> Migrate tests from LocalFlinkMiniCluster to MiniClusterResource
> ---
>
> Key: FLINK-8703
> URL: https://issues.apache.org/jira/browse/FLINK-8703
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Aljoscha Krettek
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.0
>
>






[GitHub] flink pull request #5665: [FLINK-8703][tests] Port WebFrontendITCase to Mini...

2018-04-16 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5665#discussion_r181775179
  
--- Diff: 
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebFrontendITCase.java
 ---
@@ -127,14 +137,18 @@ public void testResponseHeaders() throws Exception {
Assert.assertEquals("application/json; charset=UTF-8", 
taskManagerConnection.getContentType());
 
// check headers in case of an error
-		URL notFoundJobUrl = new URL("http://localhost:" + port + "/jobs/dontexist");
+		URL notFoundJobUrl = new URL("http://localhost:" + CLUSTER.getWebUIPort() + "/jobs/dontexist");
HttpURLConnection notFoundJobConnection = (HttpURLConnection) 
notFoundJobUrl.openConnection();
notFoundJobConnection.setConnectTimeout(10);
notFoundJobConnection.connect();
if (notFoundJobConnection.getResponseCode() >= 400) {
// we don't set the content-encoding header

Assert.assertNull(notFoundJobConnection.getContentEncoding());
-   Assert.assertEquals("text/plain; charset=UTF-8", 
notFoundJobConnection.getContentType());
+   if (Objects.equals("flip6", 
System.getProperty("codebase"))) {
--- End diff --

this must be updated to "new"


---


[jira] [Commented] (FLINK-9183) Kafka consumer docs to warn about idle partitions

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439546#comment-16439546
 ] 

ASF GitHub Bot commented on FLINK-9183:
---

GitHub user juhoautio opened a pull request:

https://github.com/apache/flink/pull/5858

[FLINK-9183] [Documentation] Kafka consumer docs to warn about idle 
partitions

https://issues.apache.org/jira/browse/FLINK-9183

Another sub-task was added as a reminder to remove this part of the docs 
when FLINK-5479 is resolved:
https://issues.apache.org/jira/browse/FLINK-9184

## What is the purpose of the change

Warn users of FLINK-9183 until it gets fixed.

## Brief change log

Added one paragraph.

## Verifying this change

Verified by running `./build_docs.sh -p` and checking the result in browser.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): no
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
  - The serializers: no
  - The runtime per-record code paths (performance sensitive): no
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
  - The S3 file system connector: no

## Documentation

  - Does this pull request introduce a new feature? no
  - If yes, how is the feature documented? not applicable


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/juhoautio/flink 
document_idle_kafka_partitions_issue

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5858.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5858


commit d832042f680ec4d89bcb7ca0e4838bd84fa20b4b
Author: juhoautio 
Date:   2018-04-16T14:40:23Z

FLINK-9183: Kafka consumer docs to warn about idle partitions




> Kafka consumer docs to warn about idle partitions
> -
>
> Key: FLINK-9183
> URL: https://issues.apache.org/jira/browse/FLINK-9183
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Juho Autio
>Assignee: Juho Autio
>Priority: Major
>
> Looks like the bug FLINK-5479 entirely prevents 
> FlinkKafkaConsumerBase#assignTimestampsAndWatermarks from being used if there 
> are any idle partitions. It would be nice to mention in the documentation 
> that this currently requires all subscribed partitions to have a constant 
> stream of data with growing timestamps. When the watermark gets stalled on an 
> idle partition, it blocks everything.
>  
> Link to current documentation:
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kafka.html#kafka-consumers-and-timestamp-extractionwatermark-emission]





[GitHub] flink pull request #5858: [FLINK-9183] [Documentation] Kafka consumer docs t...

2018-04-16 Thread juhoautio
GitHub user juhoautio opened a pull request:

https://github.com/apache/flink/pull/5858

[FLINK-9183] [Documentation] Kafka consumer docs to warn about idle 
partitions

https://issues.apache.org/jira/browse/FLINK-9183

Another sub-task was added as a reminder to remove this part of the docs 
when FLINK-5479 is resolved:
https://issues.apache.org/jira/browse/FLINK-9184

## What is the purpose of the change

Warn users of FLINK-9183 until it gets fixed.

## Brief change log

Added one paragraph.

## Verifying this change

Verified by running `./build_docs.sh -p` and checking the result in browser.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): no
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
  - The serializers: no
  - The runtime per-record code paths (performance sensitive): no
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
  - The S3 file system connector: no

## Documentation

  - Does this pull request introduce a new feature? no
  - If yes, how is the feature documented? not applicable


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/juhoautio/flink 
document_idle_kafka_partitions_issue

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5858.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5858


commit d832042f680ec4d89bcb7ca0e4838bd84fa20b4b
Author: juhoautio 
Date:   2018-04-16T14:40:23Z

FLINK-9183: Kafka consumer docs to warn about idle partitions




---


[jira] [Commented] (FLINK-8955) Port ClassLoaderITCase to flip6

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439536#comment-16439536
 ] 

ASF GitHub Bot commented on FLINK-8955:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5780#discussion_r181759432
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/classloading/ClassLoaderITCase.java
 ---
@@ -202,15 +204,27 @@ public void 
testCheckpointedStreamingClassloaderJobWithCustomClassLoader() throw
Collections.singleton(new 
Path(STREAMING_CHECKPOINTED_PROG_JAR_FILE)),
Collections.emptyList());
 
-   // Program should terminate with a 'SuccessException':
-   // we can not access the SuccessException here when executing 
the tests with maven, because its not available in the jar.
-   expectedException.expectCause(
-   Matchers.hasProperty("cause",
-   hasProperty("class",
-   hasProperty("canonicalName", equalTo(
-   
"org.apache.flink.test.classloading.jar.CheckpointedStreamingProgram.SuccessException");
-
-   streamingCheckpointedProg.invokeInteractiveModeForExecution();
+   try {
--- End diff --

my guess is that the exception stack has changed. The SuccessException is 
no longer the direct cause of the top-level exception.

```
Expected: exception with cause hasProperty("cause", hasProperty("class", 
hasProperty("canonicalName", 
"org.apache.flink.test.classloading.jar.CheckpointedStreamingProgram.SuccessException")))
 but: cause property 'cause' property 'class' null
Stacktrace was: org.apache.flink.client.program.ProgramInvocationException: 
The main method caused an error.
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:545)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:420)
at 
org.apache.flink.test.classloading.ClassLoaderITCase.testCheckpointedStreamingClassloaderJobWithCustomClassLoader(ClassLoaderITCase.java:212)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 

[GitHub] flink pull request #5780: [FLINK-8955][tests] Port ClassLoaderITCase to flip...

2018-04-16 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5780#discussion_r181759432
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/classloading/ClassLoaderITCase.java
 ---
@@ -202,15 +204,27 @@ public void 
testCheckpointedStreamingClassloaderJobWithCustomClassLoader() throw
Collections.singleton(new 
Path(STREAMING_CHECKPOINTED_PROG_JAR_FILE)),
Collections.emptyList());
 
-   // Program should terminate with a 'SuccessException':
-   // we can not access the SuccessException here when executing 
the tests with maven, because its not available in the jar.
-   expectedException.expectCause(
-   Matchers.hasProperty("cause",
-   hasProperty("class",
-   hasProperty("canonicalName", equalTo(
-   
"org.apache.flink.test.classloading.jar.CheckpointedStreamingProgram.SuccessException");
-
-   streamingCheckpointedProg.invokeInteractiveModeForExecution();
+   try {
--- End diff --

my guess is that the exception stack has changed. The SuccessException is 
no longer the direct cause of the top-level exception.

```
Expected: exception with cause hasProperty("cause", hasProperty("class", 
hasProperty("canonicalName", 
"org.apache.flink.test.classloading.jar.CheckpointedStreamingProgram.SuccessException")))
 but: cause property 'cause' property 'class' null
Stacktrace was: org.apache.flink.client.program.ProgramInvocationException: 
The main method caused an error.
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:545)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:420)
at 
org.apache.flink.test.classloading.ClassLoaderITCase.testCheckpointedStreamingClassloaderJobWithCustomClassLoader(ClassLoaderITCase.java:212)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:108)
at 
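Since the `SuccessException` may now sit deeper in the exception chain, a matcher can walk all causes instead of inspecting only the direct one. A hedged sketch (illustrative helper, not the change that was actually merged):

```java
public class CauseChainDemo {

    // Walk the cause chain and report whether any throwable in it has the
    // given canonical class name; this tolerates extra wrapping layers,
    // unlike a matcher that inspects only the direct cause.
    static boolean hasCause(Throwable t, String canonicalName) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (canonicalName.equals(c.getClass().getCanonicalName())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Throwable wrapped = new RuntimeException(
            new IllegalStateException(new ArithmeticException("boom")));
        System.out.println(hasCause(wrapped, "java.lang.ArithmeticException")); // true
        System.out.println(hasCause(new RuntimeException("no cause"),
            "java.lang.ArithmeticException")); // false
    }
}
```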

[jira] [Commented] (FLINK-8980) End-to-end test: BucketingSink

2018-04-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439531#comment-16439531
 ] 

ASF GitHub Bot commented on FLINK-8980:
---

Github user kl0u commented on the issue:

https://github.com/apache/flink/pull/5813
  
Thanks for the work @twalthr! The test looks good, but it fails 
occasionally due to https://issues.apache.org/jira/browse/FLINK-9113. Given 
that the test is unstable, I would suggest not merging it for now, as it would 
lead to unstable builds.


> End-to-end test: BucketingSink
> --
>
> Key: FLINK-8980
> URL: https://issues.apache.org/jira/browse/FLINK-8980
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.5.0
>
>
> In order to verify the {{BucketingSink}}, we should add an end-to-end test 
> which verifies that the {{BucketingSink}} does not lose data under failures.
> An idea would be to have a CountUp job which simply counts up a counter which 
> is persisted. The emitted values will be written to disk by the 
> {{BucketingSink}}. Now we should kill randomly Flink processes (cluster 
> entrypoint and TaskExecutors) to simulate failures. Even after these 
> failures, the written files should contain the correct sequence of numbers.
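The verification step sketched in the description — checking that the files written by the `BucketingSink` still contain the correct sequence after failures — could look like this (illustrative only, not the actual end-to-end test):

```java
import java.util.List;

public class SequenceCheck {

    // With an exactly-once sink, the concatenated output (read in bucket
    // order) should be exactly 0, 1, 2, ... with no gaps or duplicates.
    static boolean isContiguousFromZero(List<Long> values) {
        for (int i = 0; i < values.size(); i++) {
            if (values.get(i) != i) {
                return false; // gap, duplicate, or reordering detected
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isContiguousFromZero(List.of(0L, 1L, 2L, 3L))); // true
        System.out.println(isContiguousFromZero(List.of(0L, 1L, 3L)));     // false: 2 is missing
    }
}
```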





[GitHub] flink issue #5813: [FLINK-8980] [e2e] Add a BucketingSink end-to-end test

2018-04-16 Thread kl0u
Github user kl0u commented on the issue:

https://github.com/apache/flink/pull/5813
  
Thanks for the work @twalthr! The test looks good, but it fails 
occasionally due to https://issues.apache.org/jira/browse/FLINK-9113. Given 
that the test is unstable, I would suggest not merging it for now, as it would 
lead to unstable builds.


---

