[jira] [Assigned] (FLINK-4616) Kafka consumer doesn't store last emitted watermarks per partition in state

2016-11-29 Thread Roman Maier (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Maier reassigned FLINK-4616:
--

Assignee: Roman Maier

> Kafka consumer doesn't store last emitted watermarks per partition in state
> ---
>
> Key: FLINK-4616
> URL: https://issues.apache.org/jira/browse/FLINK-4616
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.1.1
>Reporter: Yuri Makhno
>Assignee: Roman Maier
> Fix For: 1.2.0, 1.1.4
>
>
> The Kafka consumer stores only Kafka offsets in its state and doesn't store the 
> last emitted watermark per partition, which can lead to a wrong result when a 
> checkpoint is restored:
> Let's say our watermark is (timestamp - 10). With the following message queue, 
> results will differ between a checkpoint restore and normal processing:
> A(ts = 30)
> B(ts = 35)
> -- checkpoint goes here
> C(ts = 15) -- this one should be filtered by the next time window
> D(ts = 60)
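
A minimal sketch of the problem described above (hypothetical names, not Flink's actual consumer API): with a watermark of (timestamp - 10), C(ts = 15) is recognized as late during normal processing, but not after a restore that only brings back offsets.

```java
// Sketch (hypothetical names): a per-partition reader that emits
// watermark = max timestamp - 10. If only the Kafka offset is checkpointed,
// the watermark is lost on restore and a late element such as C(ts = 15)
// is no longer recognized as late.
public class WatermarkSketch {

    static class PartitionReader {
        long lastWatermark = Long.MIN_VALUE; // lost on restore unless checkpointed

        /** Returns true if the element's timestamp is at or below the current watermark. */
        boolean isLate(long timestamp) {
            boolean late = timestamp <= lastWatermark;
            lastWatermark = Math.max(lastWatermark, timestamp - 10);
            return late;
        }
    }

    /** Returns { C late during normal processing, C late after restore }. */
    public static boolean[] simulate() {
        PartitionReader normal = new PartitionReader();
        normal.isLate(30); // A -> watermark 20
        normal.isLate(35); // B -> watermark 25
        boolean cLateNormal = normal.isLate(15); // 15 <= 25 -> late

        // Restore from a checkpoint taken after B: the offset comes back,
        // but lastWatermark starts from scratch.
        PartitionReader restored = new PartitionReader();
        boolean cLateRestored = restored.isLate(15); // 15 > MIN_VALUE -> not late

        return new boolean[] { cLateNormal, cLateRestored };
    }
}
```

Checkpointing the last emitted watermark alongside the offset would make both runs agree.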



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5209) Fix TaskManager metrics

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707706#comment-15707706
 ] 

ASF GitHub Bot commented on FLINK-5209:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2902#discussion_r90175096
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/TaskManagersHandler.java
 ---
@@ -124,13 +124,21 @@ public String handleJsonRequest(Map<String, String> 
pathParams, Map<String, String> queryParams, ... [diff truncated]

> Fix TaskManager metrics
> ---
>
> Key: FLINK-5209
> URL: https://issues.apache.org/jira/browse/FLINK-5209
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.2.0
>
>
> Properly propagate the network and non-JVM memory metrics to the web UI.





[GitHub] flink pull request #2902: [FLINK-5209] [webfrontend] Fix TaskManager metrics

2016-11-29 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2902#discussion_r90175096
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/TaskManagersHandler.java
 ---
@@ -124,13 +124,21 @@ public String handleJsonRequest(Map<String, String> 
pathParams, Map<String, String> queryParams, ... [diff truncated]

[GitHub] flink pull request #2902: [FLINK-5209] [webfrontend] Fix TaskManager metrics

2016-11-29 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2902#discussion_r90174966
  
--- Diff: 
flink-runtime-web/web-dashboard/app/partials/taskmanager/taskmanager.metrics.jade
 ---
@@ -75,18 +75,18 @@ div(ng-if="metrics.id")
 tbody
   tr
 td Direct
-td {{ metrics.metrics.directCount }}
-td {{ metrics.metrics.directUsed }}
-td {{ metrics.metrics.directTotal }}
+td {{ metrics.metrics.directCount | humanizeBytes }}
+td {{ metrics.metrics.directUsed | humanizeBytes }}
+td {{ metrics.metrics.directMax | humanizeBytes }}
   tr
 td Mapped
-td {{ metrics.metrics.mappedCount }}
-td {{ metrics.metrics.mappedUsed }}
-td {{ metrics.metrics.mappedMax }}
+td {{ metrics.metrics.mappedCount | humanizeBytes }}
--- End diff --

wait; only mapped/direct-count are not measured in bytes.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5209) Fix TaskManager metrics

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707703#comment-15707703
 ] 

ASF GitHub Bot commented on FLINK-5209:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2902#discussion_r90174966
  
--- Diff: 
flink-runtime-web/web-dashboard/app/partials/taskmanager/taskmanager.metrics.jade
 ---
@@ -75,18 +75,18 @@ div(ng-if="metrics.id")
 tbody
   tr
 td Direct
-td {{ metrics.metrics.directCount }}
-td {{ metrics.metrics.directUsed }}
-td {{ metrics.metrics.directTotal }}
+td {{ metrics.metrics.directCount | humanizeBytes }}
+td {{ metrics.metrics.directUsed | humanizeBytes }}
+td {{ metrics.metrics.directMax | humanizeBytes }}
   tr
 td Mapped
-td {{ metrics.metrics.mappedCount }}
-td {{ metrics.metrics.mappedUsed }}
-td {{ metrics.metrics.mappedMax }}
+td {{ metrics.metrics.mappedCount | humanizeBytes }}
--- End diff --

wait; only mapped/direct-count are not measured in bytes.


> Fix TaskManager metrics
> ---
>
> Key: FLINK-5209
> URL: https://issues.apache.org/jira/browse/FLINK-5209
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.2.0
>
>
> Properly propagate the network and non-JVM memory metrics to the web UI.





[jira] [Commented] (FLINK-5209) Fix TaskManager metrics

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707691#comment-15707691
 ] 

ASF GitHub Bot commented on FLINK-5209:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2902#discussion_r90174505
  
--- Diff: 
flink-runtime-web/web-dashboard/app/partials/taskmanager/taskmanager.metrics.jade
 ---
@@ -75,18 +75,18 @@ div(ng-if="metrics.id")
 tbody
   tr
 td Direct
-td {{ metrics.metrics.directCount }}
-td {{ metrics.metrics.directUsed }}
-td {{ metrics.metrics.directTotal }}
+td {{ metrics.metrics.directCount | humanizeBytes }}
+td {{ metrics.metrics.directUsed | humanizeBytes }}
+td {{ metrics.metrics.directMax | humanizeBytes }}
   tr
 td Mapped
-td {{ metrics.metrics.mappedCount }}
-td {{ metrics.metrics.mappedUsed }}
-td {{ metrics.metrics.mappedMax }}
+td {{ metrics.metrics.mappedCount | humanizeBytes }}
--- End diff --

The unit for none of these metrics is bytes


> Fix TaskManager metrics
> ---
>
> Key: FLINK-5209
> URL: https://issues.apache.org/jira/browse/FLINK-5209
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.2.0
>
>
> Properly propagate the network and non-JVM memory metrics to the web UI.





[GitHub] flink pull request #2902: [FLINK-5209] [webfrontend] Fix TaskManager metrics

2016-11-29 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2902#discussion_r90174556
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/TaskManagersHandler.java
 ---
@@ -124,13 +124,21 @@ public String handleJsonRequest(Map<String, String> 
pathParams, Map<String, String> queryParams, ... [diff truncated]

[jira] [Commented] (FLINK-5209) Fix TaskManager metrics

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707693#comment-15707693
 ] 

ASF GitHub Bot commented on FLINK-5209:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2902#discussion_r90174556
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/TaskManagersHandler.java
 ---
@@ -124,13 +124,21 @@ public String handleJsonRequest(Map<String, String> 
pathParams, Map<String, String> queryParams, ... [diff truncated]

> Fix TaskManager metrics
> ---
>
> Key: FLINK-5209
> URL: https://issues.apache.org/jira/browse/FLINK-5209
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.2.0
>
>
> Properly propagate the network and non-JVM memory metrics to the web UI.





[GitHub] flink pull request #2902: [FLINK-5209] [webfrontend] Fix TaskManager metrics

2016-11-29 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2902#discussion_r90174505
  
--- Diff: 
flink-runtime-web/web-dashboard/app/partials/taskmanager/taskmanager.metrics.jade
 ---
@@ -75,18 +75,18 @@ div(ng-if="metrics.id")
 tbody
   tr
 td Direct
-td {{ metrics.metrics.directCount }}
-td {{ metrics.metrics.directUsed }}
-td {{ metrics.metrics.directTotal }}
+td {{ metrics.metrics.directCount | humanizeBytes }}
+td {{ metrics.metrics.directUsed | humanizeBytes }}
+td {{ metrics.metrics.directMax | humanizeBytes }}
   tr
 td Mapped
-td {{ metrics.metrics.mappedCount }}
-td {{ metrics.metrics.mappedUsed }}
-td {{ metrics.metrics.mappedMax }}
+td {{ metrics.metrics.mappedCount | humanizeBytes }}
--- End diff --

The unit for none of these metrics is bytes




[GitHub] flink pull request #2902: [FLINK-5209] [webfrontend] Fix TaskManager metrics

2016-11-29 Thread greghogan
GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/2902

[FLINK-5209] [webfrontend] Fix TaskManager metrics



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 5209_fix_taskmanager_metrics

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2902.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2902


commit 57efb3d6d7cbdf70cbc1ee6d6c1831aeb2210b36
Author: Greg Hogan 
Date:   2016-11-29T21:25:21Z

[FLINK-5209] [webfrontend] Fix TaskManager metrics






[jira] [Commented] (FLINK-5209) Fix TaskManager metrics

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706686#comment-15706686
 ] 

ASF GitHub Bot commented on FLINK-5209:
---

GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/2902

[FLINK-5209] [webfrontend] Fix TaskManager metrics



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 5209_fix_taskmanager_metrics

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2902.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2902


commit 57efb3d6d7cbdf70cbc1ee6d6c1831aeb2210b36
Author: Greg Hogan 
Date:   2016-11-29T21:25:21Z

[FLINK-5209] [webfrontend] Fix TaskManager metrics




> Fix TaskManager metrics
> ---
>
> Key: FLINK-5209
> URL: https://issues.apache.org/jira/browse/FLINK-5209
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.2.0
>
>
> Properly propagate the network and non-JVM memory metrics to the web UI.





[jira] [Commented] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706673#comment-15706673
 ] 

ASF GitHub Bot commented on FLINK-5109:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2898
  
The builds are run with five different configurations, each split into two 
sets of tests. Looking at the details, 9 of 10 passed, which is a success.


> Invalid Content-Encoding Header in REST API responses
> -
>
> Key: FLINK-5109
> URL: https://issues.apache.org/jira/browse/FLINK-5109
> Project: Flink
>  Issue Type: Bug
>  Components: Web Client, Webfrontend
>Affects Versions: 1.1.0, 1.2.0, 1.1.1, 1.1.2, 1.1.3
>Reporter: Móger Tibor László
>  Labels: http-headers, rest_api
>
> On REST API calls, the Flink runtime responds with a Content-Encoding header 
> containing the value "utf-8". According to the HTTP/1.1 
> standard this header value is invalid ( 
> https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.5 ). 
> Acceptable values are gzip, compress, and deflate; otherwise the header 
> should be omitted.
> The invalid header may cause malfunctions in projects building against Flink.
> The invalid header may be present in earlier versions as well.
> Proposed solution: Remove the lines in the project where the CONTENT_ENCODING 
> header is set to "utf-8". (I could do this in a PR.)
> Possible solution, though it may need more knowledge and skill than mine: 
> Introduce real content encoding. Doing so may need some configuration, because 
> Flink would then have to encode the responses properly (even paying attention 
> to the request's Accept-Encoding headers).
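
To make the header rule above concrete, here is a small standalone sketch (hypothetical helper class, not Flink code): "utf-8" names a character set, which belongs in Content-Type (e.g. `application/json; charset=utf-8`), whereas Content-Encoding only accepts content-coding tokens.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: validate a Content-Encoding value against the content-coding
// tokens permitted by HTTP/1.1. A charset name such as "utf-8" is not one
// of them; it belongs in the Content-Type header instead.
public class HeaderCheck {
    private static final List<String> CONTENT_CODINGS =
            Arrays.asList("gzip", "compress", "deflate", "identity");

    public static boolean isValidContentEncoding(String value) {
        return CONTENT_CODINGS.contains(value.toLowerCase());
    }

    public static void main(String[] args) {
        System.out.println(isValidContentEncoding("utf-8")); // not a content-coding
        System.out.println(isValidContentEncoding("gzip"));  // valid
    }
}
```

A fix along the proposed lines would drop the `Content-Encoding: utf-8` header entirely and carry the charset in `Content-Type` instead.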





[GitHub] flink issue #2898: [FLINK-5109] fix invalid content-encoding header of webmo...

2016-11-29 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2898
  
The builds are run with five different configurations, each split into two 
sets of tests. Looking at the details, 9 of 10 passed, which is a success.




[jira] [Commented] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706637#comment-15706637
 ] 

ASF GitHub Bot commented on FLINK-5109:
---

Github user Hapcy commented on the issue:

https://github.com/apache/flink/pull/2898
  
Thank you! I will pay attention to that next time I contribute.

The checks failed, but as far as I can see, not because of failing tests; it 
looks like some kind of crash. I don't think it was caused by my change. 
What do you think, and what should I do next?


> Invalid Content-Encoding Header in REST API responses
> -
>
> Key: FLINK-5109
> URL: https://issues.apache.org/jira/browse/FLINK-5109
> Project: Flink
>  Issue Type: Bug
>  Components: Web Client, Webfrontend
>Affects Versions: 1.1.0, 1.2.0, 1.1.1, 1.1.2, 1.1.3
>Reporter: Móger Tibor László
>  Labels: http-headers, rest_api
>
> On REST API calls, the Flink runtime responds with a Content-Encoding header 
> containing the value "utf-8". According to the HTTP/1.1 
> standard this header value is invalid ( 
> https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.5 ). 
> Acceptable values are gzip, compress, and deflate; otherwise the header 
> should be omitted.
> The invalid header may cause malfunctions in projects building against Flink.
> The invalid header may be present in earlier versions as well.
> Proposed solution: Remove the lines in the project where the CONTENT_ENCODING 
> header is set to "utf-8". (I could do this in a PR.)
> Possible solution, though it may need more knowledge and skill than mine: 
> Introduce real content encoding. Doing so may need some configuration, because 
> Flink would then have to encode the responses properly (even paying attention 
> to the request's Accept-Encoding headers).





[GitHub] flink issue #2898: [FLINK-5109] fix invalid content-encoding header of webmo...

2016-11-29 Thread Hapcy
Github user Hapcy commented on the issue:

https://github.com/apache/flink/pull/2898
  
Thank you! I will pay attention to that next time I contribute.

The checks failed, but as far as I can see, not because of failing tests; it 
looks like some kind of crash. I don't think it was caused by my change. 
What do you think, and what should I do next?




[jira] [Created] (FLINK-5209) Fix TaskManager metrics

2016-11-29 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-5209:
-

 Summary: Fix TaskManager metrics
 Key: FLINK-5209
 URL: https://issues.apache.org/jira/browse/FLINK-5209
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 1.2.0
Reporter: Greg Hogan
Assignee: Greg Hogan
 Fix For: 1.2.0


Properly propagate the network and non-JVM memory metrics to the web UI.





[jira] [Commented] (FLINK-3921) StringParser not specifying encoding to use

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706049#comment-15706049
 ] 

ASF GitHub Bot commented on FLINK-3921:
---

GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/2901

[FLINK-3921] StringParser encoding

This further modifies #2060:
- save charset as String which is serializable
- remove unused methods
- annotate new CsvReader methods as @PublicEvolving

This can be squashed into the first commit after review.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 3921_stringparser_encoding

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2901.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2901






> StringParser not specifying encoding to use
> ---
>
> Key: FLINK-3921
> URL: https://issues.apache.org/jira/browse/FLINK-3921
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.0.3
>Reporter: Tatu Saloranta
>Assignee: Rekha Joshi
>Priority: Trivial
>
> The class `flink.types.parser.StringParser` has javadocs indicating that contents 
> are expected to be ASCII, similar to `StringValueParser`. That makes sense, 
> but when constructing the actual String, no encoding is specified; on line 66, 
> for example:
>this.result = new String(bytes, startPos+1, i - startPos - 2);
> which means whatever the platform default encoding happens to be gets used. If 
> contents really are always ASCII (I would not count on that, since the parser is 
> used from the CSV reader), this is not a big deal, but it can lead to the usual 
> Latin-1-vs-UTF-8 issues.
> So I think the encoding should be explicitly specified, whatever it is to be: 
> the javadocs claim ASCII, so it could be "us-ascii", but it could well be UTF-8 
> or even ISO-8859-1.
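
The fix suggested above can be sketched as follows (hypothetical class name; only the explicit-charset idea comes from the report):

```java
import java.nio.charset.StandardCharsets;

// Sketch of the proposed fix: pass an explicit charset instead of relying on
// the platform default, so decoding is deterministic across JVMs and locales.
public class ExplicitCharset {
    public static String decode(byte[] bytes, int startPos, int len) {
        // Same shape as the StringParser line, but with the charset pinned.
        return new String(bytes, startPos, len, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        byte[] field = "\"hello\"".getBytes(StandardCharsets.US_ASCII);
        // Mirrors `new String(bytes, startPos + 1, i - startPos - 2)`:
        // strip the surrounding quote characters.
        System.out.println(decode(field, 1, field.length - 2));
    }
}
```

The same three-argument-plus-charset constructor works for "US-ASCII", UTF-8, or ISO-8859-1; the point is only that one of them is named explicitly.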





[GitHub] flink pull request #2901: [FLINK-3921] StringParser encoding

2016-11-29 Thread greghogan
GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/2901

[FLINK-3921] StringParser encoding

This further modifies #2060:
- save charset as String which is serializable
- remove unused methods
- annotate new CsvReader methods as @PublicEvolving

This can be squashed into the first commit after review.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 3921_stringparser_encoding

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2901.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2901








[jira] [Commented] (FLINK-4921) Upgrade to Mesos 1.0.1

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706000#comment-15706000
 ] 

ASF GitHub Bot commented on FLINK-4921:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2827


> Upgrade to Mesos 1.0.1
> --
>
> Key: FLINK-4921
> URL: https://issues.apache.org/jira/browse/FLINK-4921
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management, Mesos
>Reporter: Eron Wright 
>Assignee: Eron Wright 
> Fix For: 1.2.0
>
>
> Upgrade the client library to 1.0.1.





[jira] [Closed] (FLINK-4921) Upgrade to Mesos 1.0.1

2016-11-29 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels closed FLINK-4921.
-
Resolution: Fixed

Fixed via 90acbe5174c3a9aa7f12a2e08ee29ca29f6f0a77

> Upgrade to Mesos 1.0.1
> --
>
> Key: FLINK-4921
> URL: https://issues.apache.org/jira/browse/FLINK-4921
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management, Mesos
>Reporter: Eron Wright 
>Assignee: Eron Wright 
> Fix For: 1.2.0
>
>
> Upgrade the client library to 1.0.1.





[GitHub] flink pull request #2827: [FLINK-4921] Upgrade to Mesos 1.0.1

2016-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2827




[GitHub] flink issue #2898: [FLINK-5109] fix invalid content-encoding header of webmo...

2016-11-29 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2898
  
Looks good to merge after the tests finish.

One more thought: if your IDE is set to fix trailing whitespace, that's fine 
in this PR since only a few lines changed, but for larger PRs you'll want to 
limit formatting changes to code near your edits.




[jira] [Commented] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705997#comment-15705997
 ] 

ASF GitHub Bot commented on FLINK-5109:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2898
  
Looks good to merge after the tests finish.

One more thought: if your IDE is set to fix trailing whitespace, that's fine 
in this PR since only a few lines changed, but for larger PRs you'll want to 
limit formatting changes to code near your edits.


> Invalid Content-Encoding Header in REST API responses
> -
>
> Key: FLINK-5109
> URL: https://issues.apache.org/jira/browse/FLINK-5109
> Project: Flink
>  Issue Type: Bug
>  Components: Web Client, Webfrontend
>Affects Versions: 1.1.0, 1.2.0, 1.1.1, 1.1.2, 1.1.3
>Reporter: Móger Tibor László
>  Labels: http-headers, rest_api
>
> On REST API calls, the Flink runtime responds with a Content-Encoding header 
> containing the value "utf-8". According to the HTTP/1.1 
> standard this header value is invalid ( 
> https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.5 ). 
> Acceptable values are gzip, compress, and deflate; otherwise the header 
> should be omitted.
> The invalid header may cause malfunctions in projects building against Flink.
> The invalid header may be present in earlier versions as well.
> Proposed solution: Remove the lines in the project where the CONTENT_ENCODING 
> header is set to "utf-8". (I could do this in a PR.)
> Possible solution, though it may need more knowledge and skill than mine: 
> Introduce real content encoding. Doing so may need some configuration, because 
> Flink would then have to encode the responses properly (even paying attention 
> to the request's Accept-Encoding headers).





[GitHub] flink pull request #2900: Rebased: Keytab & TLS support for Flink on Mesos S...

2016-11-29 Thread mxm
GitHub user mxm opened a pull request:

https://github.com/apache/flink/pull/2900

Rebased: Keytab & TLS support for Flink on Mesos Setup

Rebased #2734 to the latest master.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mxm/flink FLINK-4826

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2900.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2900


commit 5ccd09cf30e4b4def0e333ead7509efe5515519a
Author: Vijay Srinivasaraghavan 
Date:   2016-10-13T22:45:35Z

FLINK-4826 Added keytab support to mesos container

commit 518de5e37f28b7060b534cae95655b063b4e2d36
Author: Vijay Srinivasaraghavan 
Date:   2016-10-31T16:54:03Z

FLINK-4918 Added SSL handler to artifact server






[jira] [Commented] (FLINK-5088) Add option to limit subpartition queue length

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705961#comment-15705961
 ] 

ASF GitHub Bot commented on FLINK-5088:
---

Github user uce commented on the issue:

https://github.com/apache/flink/pull/2883
  
I agree with the renaming you propose.

I would not allow `flush()` to override the capacity limit, since a 
low-latency `OutputFlusher` could then circumvent it. Should we keep the 
`tryFlush` and `flush` differentiation, or replace `flush` with `tryFlush`?

---

I've tested this on a cluster and didn't run into problems. Before merging, 
I still need to add some unit tests though. Most probably, this will not make 
it to 1.1.4. 
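
A minimal sketch of the capacity-limit idea under discussion (hypothetical names, not Flink's actual `ResultSubpartition` API): an add operation that refuses to exceed the bound rather than overriding it, so no caller can bypass the limit.

```java
import java.util.ArrayDeque;

// Sketch (hypothetical names): a capacity-bounded buffer queue whose tryAdd()
// returns false at the limit instead of growing past it. This matches the
// argument that flush() should not be allowed to override the capacity limit.
public class BoundedSubpartition {
    private final ArrayDeque<byte[]> queue = new ArrayDeque<>();
    private final int capacity;

    public BoundedSubpartition(int capacity) {
        this.capacity = capacity;
    }

    /** Refuses to exceed the capacity limit; the caller must retry later. */
    public boolean tryAdd(byte[] buffer) {
        if (queue.size() >= capacity) {
            return false; // no override path, so backpressure is preserved
        }
        queue.add(buffer);
        return true;
    }

    public int size() {
        return queue.size();
    }
}
```

With this shape, a periodic flusher can only attempt delivery of what is already queued; it has no way to force the queue past its bound.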


> Add option to limit subpartition queue length
> -
>
> Key: FLINK-5088
> URL: https://issues.apache.org/jira/browse/FLINK-5088
> Project: Flink
>  Issue Type: Improvement
>  Components: Network
>Affects Versions: 1.1.3
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>
> Currently the sub partition queues are not bounded. Queued buffers are 
> consumed by the network event loop or local consumers.





[GitHub] flink issue #2883: [FLINK-5088] [network] Add capacity limit option to parti...

2016-11-29 Thread uce
Github user uce commented on the issue:

https://github.com/apache/flink/pull/2883
  
I agree with the renaming you propose.

I would not allow `flush()` to override the capacity limit, since a 
low-latency `OutputFlusher` could then circumvent it. Should we keep the 
`tryFlush` and `flush` differentiation, or replace `flush` with `tryFlush`?

---

I've tested this on a cluster and didn't run into problems. Before merging, 
I still need to add some unit tests though. Most probably, this will not make 
it to 1.1.4. 




[jira] [Commented] (FLINK-4921) Upgrade to Mesos 1.0.1

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705953#comment-15705953
 ] 

ASF GitHub Bot commented on FLINK-4921:
---

Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/2827
  
@mxm Yes, DCOS 1.8 is the stable release and is based on Mesos 1.0.x.   


> Upgrade to Mesos 1.0.1
> --
>
> Key: FLINK-4921
> URL: https://issues.apache.org/jira/browse/FLINK-4921
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management, Mesos
>Reporter: Eron Wright 
>Assignee: Eron Wright 
> Fix For: 1.2.0
>
>
> Upgrade the client library to 1.0.1.





[GitHub] flink issue #2827: [FLINK-4921] Upgrade to Mesos 1.0.1

2016-11-29 Thread EronWright
Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/2827
  
@mxm Yes, DCOS 1.8 is the stable release and is based on Mesos 1.0.x.   




[GitHub] flink pull request #2899: Various logging improvements

2016-11-29 Thread uce
GitHub user uce opened a pull request:

https://github.com/apache/flink/pull/2899

Various logging improvements

I would like to include the following fixes for the 1.1.4 release. In 
general, I try to improve the debugging experience with this.

1. **Reduce log pollution**: Some debug logs were very noisy and actually 
dominated the logs while providing little value.
- Log heartbeats on TRACE level
- Don't log InputChannelDeploymentDescriptor
- Decrease HadoopFileSystem logging
2. **Improve log messages**: Some existing debug log messages were not that 
helpful.
- Log GlobalConfiguration loaded properties on INFO level
- Add TaskState toString
- Add more detailed log messages to HA job graph store
3. **Improve existing logger configuration templates**: The existing 
template simply configured the appenders and left everything except the root 
logger to the user. I changed it to be more fine-grained (root logger, Flink, 
common libs/connectors). The goal is that users trying to debug Flink don't 
end up with too many unrelated log messages. 
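A sketch of what such a fine-grained template could look like (log4j properties syntax; the concrete logger names and levels below are illustrative assumptions, not the actual template contents):

```
# Root logger: everything not matched by a more specific logger below
log4j.rootLogger=INFO, file

# Flink itself: raise this to DEBUG when debugging Flink...
log4j.logger.org.apache.flink=INFO

# ...without also flooding the log with messages from common libs/connectors
log4j.logger.org.apache.hadoop=WARN
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.kafka=WARN
```

With this split, switching `org.apache.flink` to DEBUG leaves the dependency loggers quiet.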

/cc @tillrohrmann 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/uce/flink logging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2899.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2899


commit 20ceaef71e8d10dbaf0cb01b8548fc3870d5ebac
Author: Ufuk Celebi 
Date:   2016-11-29T15:00:02Z

[FLINK-5201] [logging] Log loaded config properties on INFO level

commit bb837cece0f0945ab05e71eda5c3a728a64b390b
Author: Ufuk Celebi 
Date:   2016-11-29T15:04:48Z

[FLINK-5196] [logging] Don't log InputChannelDeploymentDescriptor

commit 6a21e85fc7d99ba687753b4989bb25eb19aaa3af
Author: Ufuk Celebi 
Date:   2016-11-29T16:15:27Z

[FLINK-5194] [logging] Log heartbeats on TRACE level

commit 4a91b78e52c440d5c736789cea0f8d80968339f8
Author: Ufuk Celebi 
Date:   2016-11-29T15:15:30Z

[FLINK-5198] [logging] Improve TaskState toString

commit 5a5a78a15c2c3a0b3d82446b98c0c73c26cb4a4d
Author: Ufuk Celebi 
Date:   2016-11-29T15:35:14Z

[FLINK-5199] [logging] Improve logging in ZooKeeperSubmittedJobGraphStore

commit 83ef198008e4dd84a4876b7a8cefc1f870ebe0a2
Author: Ufuk Celebi 
Date:   2016-11-29T16:08:53Z

[FLINK-5207] [logging] Decrease HadoopFileSystem logging

commit 98c63af63036178e9c8423b99360f42d18b2e388
Author: Ufuk Celebi 
Date:   2016-11-29T16:14:23Z

[FLINK-5192] [logging] Improve log config templates






[jira] [Commented] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705903#comment-15705903
 ] 

ASF GitHub Bot commented on FLINK-5109:
---

Github user Hapcy commented on the issue:

https://github.com/apache/flink/pull/2892
  
Thank you for your feedback!
I followed your instructions and submitted another PR - #2898
I'll close this PR for now.


> Invalid Content-Encoding Header in REST API responses
> -
>
> Key: FLINK-5109
> URL: https://issues.apache.org/jira/browse/FLINK-5109
> Project: Flink
>  Issue Type: Bug
>  Components: Web Client, Webfrontend
>Affects Versions: 1.1.0, 1.2.0, 1.1.1, 1.1.2, 1.1.3
>Reporter: Móger Tibor László
>  Labels: http-headers, rest_api
>
> On REST API calls the Flink runtime responds with the header 
> Content-Encoding, containing the value "utf-8". According to the HTTP/1.1 
> standard this header is invalid. ( 
> https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.5 ) 
> Possible acceptable values are: gzip, compress, deflate. Or it should be 
> omitted.
> The invalid header may cause malfunctions in projects building against Flink.
> The invalid header may be present in earlier versions as well.
> Proposed solution: Remove the lines from the project where the CONTENT_ENCODING 
> header is set to "utf-8". (I could do this in a PR.)
> A possible alternative, though it may need more knowledge and skills than mine: 
> Introduce real content encoding. Doing so may need some configuration because 
> Flink would have to encode the responses properly (even paying attention to 
> the request's Accept-Encoding headers).
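The gist of the proposed fix can be sketched as follows (a minimal illustration only; the actual Flink handlers are Netty-based and not shown here). Per HTTP/1.1, the charset belongs in the Content-Type header, while Content-Encoding is reserved for content codings such as gzip or deflate:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper showing the corrected headers for a JSON response.
public class RestResponseHeaders {

    static Map<String, String> jsonHeaders() {
        Map<String, String> headers = new LinkedHashMap<>();
        // Invalid (pre-fix):  headers.put("Content-Encoding", "utf-8");
        // "utf-8" is a charset, not a content coding, so the header is
        // dropped and the charset is declared where it belongs instead:
        headers.put("Content-Type", "application/json; charset=UTF-8");
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> h = jsonHeaders();
        System.out.println(h.containsKey("Content-Encoding")); // false
        System.out.println(h.get("Content-Type"));
    }
}
```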





[jira] [Commented] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705904#comment-15705904
 ] 

ASF GitHub Bot commented on FLINK-5109:
---

Github user Hapcy closed the pull request at:

https://github.com/apache/flink/pull/2892







[GitHub] flink pull request #2892: [FLINK-5109] fix invalid content-encoding header o...

2016-11-29 Thread Hapcy
Github user Hapcy closed the pull request at:

https://github.com/apache/flink/pull/2892




[GitHub] flink pull request #2898: [FLINK-5109] fix invalid content-encoding header o...

2016-11-29 Thread Hapcy
GitHub user Hapcy opened a pull request:

https://github.com/apache/flink/pull/2898

[FLINK-5109] fix invalid content-encoding header of webmonitor

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation --- not touched
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Hapcy/flink 
5109_invalid_content_encoding_header_in_rest_api_responses

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2898.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2898


commit aa105aa7160640548eac99c56af3afb5af377249
Author: tibor.moger 
Date:   2016-11-28T15:51:47Z

[FLINK-5109] fix invalid content-encoding header of webmonitor






[GitHub] flink issue #2892: [FLINK-5109] fix invalid content-encoding header of webmo...

2016-11-29 Thread Hapcy
Github user Hapcy commented on the issue:

https://github.com/apache/flink/pull/2892
  
Thank you for your feedback!
I followed your instructions and submitted another PR - #2898
I'll close this PR for now.




[jira] [Commented] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705899#comment-15705899
 ] 

ASF GitHub Bot commented on FLINK-5109:
---

GitHub user Hapcy opened a pull request:

https://github.com/apache/flink/pull/2898

[FLINK-5109] fix invalid content-encoding header of webmonitor

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation --- not touched
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Hapcy/flink 
5109_invalid_content_encoding_header_in_rest_api_responses

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2898.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2898


commit aa105aa7160640548eac99c56af3afb5af377249
Author: tibor.moger 
Date:   2016-11-28T15:51:47Z

[FLINK-5109] fix invalid content-encoding header of webmonitor









[jira] [Created] (FLINK-5208) Increase application attempt timeout for the YARN tests on Travis

2016-11-29 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-5208:
-

 Summary: Increase application attempt timeout for the YARN tests 
on Travis
 Key: FLINK-5208
 URL: https://issues.apache.org/jira/browse/FLINK-5208
 Project: Flink
  Issue Type: Bug
  Components: YARN Client
Reporter: Robert Metzger


[~StephanEwen] has been investigating a YARN test failure, and it seems that 
the failure boils down to the following:
{code}
12:14:44,069 INFO  org.apache.hadoop.yarn.util.AbstractLivelinessMonitor
 - Expired:appattempt_1480335230724_0003_01 Timed out after 20 secs
{code}

We should look into a way of increasing that timeout to make the YARN tests 
more robust.
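One candidate knob (an assumption on my part, not something confirmed in this thread) is YARN's application-master liveness timeout, which the test cluster configuration could raise, e.g.:

```xml
<!-- yarn-site.xml for the test cluster; the log above suggests the
     effective attempt timeout is currently 20 seconds -->
<property>
  <name>yarn.am.liveness-monitor.expiry-interval-ms</name>
  <!-- raise the AM expiry timeout to tolerate slow Travis workers -->
  <value>120000</value>
</property>
```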





[GitHub] flink issue #2896: [FLINK-5197] [jm] Ignore outdated JobStatusChanged messag...

2016-11-29 Thread uce
Github user uce commented on the issue:

https://github.com/apache/flink/pull/2896
  
Good catch! The `RemoveJob` cannot succeed since it also checks the 
`currentJobs` that were already checked for `JobStatusChanged`. So in the end, 
the only case where this actually triggers removal is when it interferes with a 
recovered job, as you say. 😨 

+1 to merge for 1.1 and #2895 for 1.2.




[jira] [Commented] (FLINK-5197) Late JobStatusChanged messages can interfere with running jobs

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705885#comment-15705885
 ] 

ASF GitHub Bot commented on FLINK-5197:
---

Github user uce commented on the issue:

https://github.com/apache/flink/pull/2896
  
Good catch! The `RemoveJob` cannot succeed since it also checks the 
`currentJobs` that were already checked for `JobStatusChanged`. So in the end, 
the only case where this actually triggers removal is when it interferes with a 
recovered job, as you say.  

+1 to merge for 1.1 and #2895 for 1.2.


> Late JobStatusChanged messages can interfere with running jobs
> --
>
> Key: FLINK-5197
> URL: https://issues.apache.org/jira/browse/FLINK-5197
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
> Fix For: 1.2.0, 1.1.4
>
>
> When the {{JobManager}} receives a {{JobStatusChanged}} message, it will look 
> up the {{ExecutionGraph}} for the given {{JobID}}. If there is no 
> {{ExecutionGraph}}, then a {{RemoveJob}} message is sent to itself. In the 
> general case, this is not problematic, because the {{RemoveJob}} message 
> won't do anything if there is no {{ExecutionGraph}}. However, since this is 
> an asynchronous call, it can happen that the corresponding job of the 
> {{JobID}} is recovered before receiving the {{RemoveJob}} message. In this 
> case, the newly recovered job would be removed.
> I propose to change the behaviour such that a {{JobStatusChanged}} for a 
> non-existing {{ExecutionGraph}} will be simply ignored.
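The proposed behaviour can be sketched as follows (a minimal illustrative model, not the actual Akka-based JobManager code; all names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the fix: a JobStatusChanged message for a job without an
// ExecutionGraph is ignored instead of triggering an asynchronous RemoveJob
// that could race with job recovery.
public class JobStatusHandler {

    // jobId -> status; stands in for the JobManager's currentJobs map
    final Map<String, String> currentJobs = new HashMap<>();

    void onJobStatusChanged(String jobId, String newStatus) {
        if (!currentJobs.containsKey(jobId)) {
            // Before the fix: a RemoveJob(jobId) self-message was sent here,
            // which could delete a job recovered in the meantime.
            return; // after the fix: the outdated message is simply ignored
        }
        currentJobs.put(jobId, newStatus);
    }

    public static void main(String[] args) {
        JobStatusHandler handler = new JobStatusHandler();
        handler.onJobStatusChanged("job-1", "FAILED"); // unknown job: ignored
        System.out.println(handler.currentJobs.isEmpty()); // true
    }
}
```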





[GitHub] flink pull request #2882: [FLINK-5169] [network] Make consumption of InputCh...

2016-11-29 Thread uce
Github user uce closed the pull request at:

https://github.com/apache/flink/pull/2882




[jira] [Commented] (FLINK-5169) Make consumption of input channels fair

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705877#comment-15705877
 ] 

ASF GitHub Bot commented on FLINK-5169:
---

Github user uce closed the pull request at:

https://github.com/apache/flink/pull/2882


> Make consumption of input channels fair
> ---
>
> Key: FLINK-5169
> URL: https://issues.apache.org/jira/browse/FLINK-5169
> Project: Flink
>  Issue Type: Improvement
>  Components: Network
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>Priority: Critical
> Fix For: 1.2.0, 1.1.4
>
>
> The input channels on the receiver side of the network stack queue incoming 
> data and notify the input gate about available data. These notifications 
> currently determine the order in which input channels are consumed, which can 
> lead to unfair consumption patterns where faster channels are favored over 
> slower ones.
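One way to picture a fair consumption scheme (a toy model only; the real input gate works on network buffers and availability notifications): channels are polled in rotation, and a channel that still has queued data goes to the back of the line instead of being polled again immediately.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative round-robin drain: faster channels cannot starve slower ones
// because a non-empty channel is re-enqueued at the tail after each poll.
public class FairInputGate {

    static List<String> drainFairly(Queue<Queue<String>> channels) {
        List<String> out = new ArrayList<>();
        while (!channels.isEmpty()) {
            Queue<String> channel = channels.poll(); // head of the rotation
            String buffer = channel.poll();
            if (buffer != null) {
                out.add(buffer);
            }
            if (!channel.isEmpty()) {
                channels.add(channel);               // back of the line
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Queue<Queue<String>> channels = new ArrayDeque<>();
        channels.add(new ArrayDeque<>(List.of("a1", "a2", "a3"))); // fast channel
        channels.add(new ArrayDeque<>(List.of("b1")));             // slow channel
        // The rotation interleaves the channels instead of draining "a" first:
        System.out.println(drainFairly(channels)); // [a1, b1, a2, a3]
    }
}
```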





[GitHub] flink pull request #2897: [FLINK-4676] [connectors] Merge batch and streamin...

2016-11-29 Thread fhueske
GitHub user fhueske opened a pull request:

https://github.com/apache/flink/pull/2897

[FLINK-4676] [connectors] Merge batch and streaming connectors into a 
common flink-connectors Maven module.

Created a new Maven module `flink-connectors` and moved all submodules of 
`flink-batch-connectors` and `flink-streaming-connectors` to the new module.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhueske/flink merged-connect

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2897.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2897


commit 0e2739e2fc639b451cf07a5278385dbd9dec1361
Author: Fabian Hueske 
Date:   2016-11-29T12:57:30Z

[FLINK-4676] [connectors] Merge batch and streaming connectors into common 
Maven module.






[jira] [Commented] (FLINK-4676) Merge flink-batch-connectors and flink-streaming-connectors modules

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705845#comment-15705845
 ] 

ASF GitHub Bot commented on FLINK-4676:
---

GitHub user fhueske opened a pull request:

https://github.com/apache/flink/pull/2897

[FLINK-4676] [connectors] Merge batch and streaming connectors into a 
common flink-connectors Maven module.

Created a new Maven module `flink-connectors` and moved all submodules of 
`flink-batch-connectors` and `flink-streaming-connectors` to the new module.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhueske/flink merged-connect

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2897.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2897


commit 0e2739e2fc639b451cf07a5278385dbd9dec1361
Author: Fabian Hueske 
Date:   2016-11-29T12:57:30Z

[FLINK-4676] [connectors] Merge batch and streaming connectors into common 
Maven module.




> Merge flink-batch-connectors and flink-streaming-connectors modules
> ---
>
> Key: FLINK-4676
> URL: https://issues.apache.org/jira/browse/FLINK-4676
> Project: Flink
>  Issue Type: Task
>  Components: Batch Connectors and Input/Output Formats, Build System, 
> Streaming Connectors
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Minor
> Fix For: 1.2.0
>
>
> We have two separate Maven modules for batch and streaming connectors 
> (flink-batch-connectors and flink-streaming-connectors) that contain modules 
> for the individual external systems and storage formats such as HBase, 
> Cassandra, Avro, Elasticsearch, etc.
> Some of these systems can be used in streaming as well as batch jobs, for 
> instance HBase, Cassandra, and Elasticsearch. 
> However, due to the separate main modules for streaming and batch connectors, 
> we currently need to decide where to put a connector. 
> For example, the flink-connector-cassandra module is located in 
> flink-streaming-connectors but includes a CassandraInputFormat and 
> CassandraOutputFormat (i.e., a batch source and sink).
> This issue is about merging flink-batch-connectors and 
> flink-streaming-connectors into a joint flink-connectors module.
> Names of moved modules should not be changed (although this leads to an 
> inconsistent naming scheme: flink-connector-cassandra vs. flink-hbase) to 
> keep the change of code structure transparent to users. 
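The resulting parent module's POM would list the moved submodules along these lines (a sketch only; the module names are taken from the description above, and the real list is longer):

```xml
<!-- flink-connectors/pom.xml (sketch) -->
<artifactId>flink-connectors</artifactId>
<packaging>pom</packaging>
<modules>
  <!-- names kept as-is, hence the mixed naming scheme noted above -->
  <module>flink-connector-cassandra</module>
  <module>flink-connector-elasticsearch</module>
  <module>flink-hbase</module>
  <module>flink-avro</module>
</modules>
```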





[jira] [Commented] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705740#comment-15705740
 ] 

ASF GitHub Bot commented on FLINK-5109:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2892
  
Hi @Hapcy, thanks for submitting this PR.

It's best to create a feature branch for each ticket rather than committing 
to master. You can fix this by creating a branch (git checkout master ; git 
checkout -b 5109_invalid_content_encoding_header_in_rest_api_responses) then 
resetting master (git checkout master ; git reset --hard 7f1c76d^ ; git rebase 
upstream/master) then rebasing against master (git checkout 
5109_invalid_content_encoding_header_in_rest_api_responses ; git rebase master 
; git push -f origin 
5109_invalid_content_encoding_header_in_rest_api_responses).

Something like that. Make a copy of everything first.







[GitHub] flink issue #2892: [FLINK-5109] fix invalid content-encoding header of webmo...

2016-11-29 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2892
  
Hi @Hapcy, thanks for submitting this PR.

It's best to create a feature branch for each ticket rather than committing 
to master. You can fix this by creating a branch (git checkout master ; git 
checkout -b 5109_invalid_content_encoding_header_in_rest_api_responses) then 
resetting master (git checkout master ; git reset --hard 7f1c76d^ ; git rebase 
upstream/master) then rebasing against master (git checkout 
5109_invalid_content_encoding_header_in_rest_api_responses ; git rebase master 
; git push -f origin 
5109_invalid_content_encoding_header_in_rest_api_responses).

Something like that. Make a copy of everything first.




[jira] [Created] (FLINK-5207) Decrease HadoopFileSystem logging

2016-11-29 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5207:
--

 Summary: Decrease HadoopFileSystem logging
 Key: FLINK-5207
 URL: https://issues.apache.org/jira/browse/FLINK-5207
 Project: Flink
  Issue Type: Improvement
Reporter: Ufuk Celebi


HadoopFileSystem logging is very noisy when retrieving the Hadoop 
configuration, which happens for every HadoopFileSystem instance, for example 
whenever a checkpoint is written to HDFS in the HA case.

(1) Two log messages relate to the user config:

{code}
final String hdfsDefaultPath =
    GlobalConfiguration.getString(ConfigConstants.HDFS_DEFAULT_CONFIG, null);
if (hdfsDefaultPath != null) {
    retConf.addResource(new org.apache.hadoop.fs.Path(hdfsDefaultPath));
} else {
    LOG.debug("Cannot find hdfs-default configuration file");
}
{code}

Since this is reflected in the logged configuration properties, it is mostly 
unnecessary to log it over and over again. Therefore I propose to go with TRACE.

(2) Two log messages relate to the default config files of Hadoop. There I 
would change the debug message to happen only when they are *not* found. If it 
is still too noisy, we could completely remove it or decrease it to TRACE at 
a later time.





[jira] [Commented] (FLINK-5159) Improve perfomance of inner joins with a single row input

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705628#comment-15705628
 ] 

ASF GitHub Bot commented on FLINK-5159:
---

Github user AlexanderShoshin commented on a diff in the pull request:

https://github.com/apache/flink/pull/2811#discussion_r90034073
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/nodes/dataset/DataSetSingleRowJoin.scala
 ---
@@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.table.plan.nodes.dataset
+
+import org.apache.calcite.plan._
+import org.apache.calcite.rel.`type`.{RelDataType, RelDataTypeField}
+import org.apache.calcite.rel.metadata.RelMetadataQuery
+import org.apache.calcite.rel.{BiRel, RelNode, RelWriter}
+import org.apache.calcite.rex.RexNode
+import org.apache.calcite.util.mapping.IntPair
+import org.apache.flink.api.common.functions.{FlatJoinFunction, FlatMapFunction}
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.java.DataSet
+import org.apache.flink.api.table.codegen.CodeGenerator
+import org.apache.flink.api.table.runtime.{MapJoinLeftRunner, MapJoinRightRunner}
+import org.apache.flink.api.table.typeutils.TypeConverter.determineReturnType
+import org.apache.flink.api.table.{BatchTableEnvironment, TableConfig, TableException}
+
+import scala.collection.JavaConversions._
+import scala.collection.JavaConverters._
+
+/**
+  * Flink RelNode that executes a Join where one of inputs is a single row.
+  */
+class DataSetSingleRowJoin(
+cluster: RelOptCluster,
+traitSet: RelTraitSet,
+leftNode: RelNode,
+rightNode: RelNode,
+leftIsSingle: Boolean,
+rowRelDataType: RelDataType,
+joinCondition: RexNode,
+joinRowType: RelDataType,
+keyPairs: List[IntPair],
+ruleDescription: String)
+  extends BiRel(cluster, traitSet, leftNode, rightNode)
+  with DataSetRel {
+
+  override def deriveRowType() = rowRelDataType
+
+  override def copy(traitSet: RelTraitSet, inputs: java.util.List[RelNode]): RelNode = {
+new DataSetSingleRowJoin(
+  cluster,
+  traitSet,
+  inputs.get(0),
+  inputs.get(1),
+  leftIsSingle,
+  getRowType,
+  joinCondition,
+  joinRowType,
+  keyPairs,
+  ruleDescription)
+  }
+
+  override def toString: String = {
+    s"$joinTypeToString(where: ($joinConditionToString), join: ($joinSelectionToString))"
+  }
+
+  override def explainTerms(pw: RelWriter): RelWriter = {
+super.explainTerms(pw)
+  .item("where", joinConditionToString)
+  .item("join", joinSelectionToString)
+  .item("joinType", joinTypeToString)
+  }
+
+  override def computeSelfCost(planner: RelOptPlanner, metadata: RelMetadataQuery): RelOptCost = {
+    val children = this.getInputs
+    children.foldLeft(planner.getCostFactory.makeZeroCost()) { (cost, child) =>
+      if (leftIsSingle && child.equals(right) ||
+          !leftIsSingle && child.equals(left)) {
+        val rowCnt = metadata.getRowCount(child)
+        val rowSize = this.estimateRowSize(child.getRowType)
+        cost.plus(planner.getCostFactory.makeCost(rowCnt, rowCnt, rowCnt * rowSize))
+      } else {
+        cost
+      }
+    }
+  }
+
+  override def translateToPlan(
+  tableEnv: BatchTableEnvironment,
+  expectedType: Option[TypeInformation[Any]]): DataSet[Any] = {
+
+if (isConditionTypesCompatible(left.getRowType.getFieldList,
--- End diff --

`DataSetJoin` has the same keyPairs checking code. And different key types 
(String and Int, for example) in a `WHERE` expression will be caught by it if we 
have 'a3 = b1'. But we will receive a `NumberFormatException` from the 
generated join 

[GitHub] flink pull request #2811: [FLINK-5159] Improve perfomance of inner joins wit...

2016-11-29 Thread AlexanderShoshin
Github user AlexanderShoshin commented on a diff in the pull request:

https://github.com/apache/flink/pull/2811#discussion_r90034073
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/nodes/dataset/DataSetSingleRowJoin.scala
 ---
@@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.table.plan.nodes.dataset
+
+import org.apache.calcite.plan._
+import org.apache.calcite.rel.`type`.{RelDataType, RelDataTypeField}
+import org.apache.calcite.rel.metadata.RelMetadataQuery
+import org.apache.calcite.rel.{BiRel, RelNode, RelWriter}
+import org.apache.calcite.rex.RexNode
+import org.apache.calcite.util.mapping.IntPair
+import org.apache.flink.api.common.functions.{FlatJoinFunction, 
FlatMapFunction}
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.java.DataSet
+import org.apache.flink.api.table.codegen.CodeGenerator
+import org.apache.flink.api.table.runtime.{MapJoinLeftRunner, 
MapJoinRightRunner}
+import 
org.apache.flink.api.table.typeutils.TypeConverter.determineReturnType
+import org.apache.flink.api.table.{BatchTableEnvironment, TableConfig, 
TableException}
+
+import scala.collection.JavaConversions._
+import scala.collection.JavaConverters._
+
+/**
+  * Flink RelNode that executes a Join where one of the inputs is a single row.
+  */
+class DataSetSingleRowJoin(
+cluster: RelOptCluster,
+traitSet: RelTraitSet,
+leftNode: RelNode,
+rightNode: RelNode,
+leftIsSingle: Boolean,
+rowRelDataType: RelDataType,
+joinCondition: RexNode,
+joinRowType: RelDataType,
+keyPairs: List[IntPair],
+ruleDescription: String)
+  extends BiRel(cluster, traitSet, leftNode, rightNode)
+  with DataSetRel {
+
+  override def deriveRowType() = rowRelDataType
+
+  override def copy(traitSet: RelTraitSet, inputs: 
java.util.List[RelNode]): RelNode = {
+new DataSetSingleRowJoin(
+  cluster,
+  traitSet,
+  inputs.get(0),
+  inputs.get(1),
+  leftIsSingle,
+  getRowType,
+  joinCondition,
+  joinRowType,
+  keyPairs,
+  ruleDescription)
+  }
+
+  override def toString: String = {
+s"$joinTypeToString(where: ($joinConditionToString), join: 
($joinSelectionToString))"
+  }
+
+  override def explainTerms(pw: RelWriter): RelWriter = {
+super.explainTerms(pw)
+  .item("where", joinConditionToString)
+  .item("join", joinSelectionToString)
+  .item("joinType", joinTypeToString)
+  }
+
+  override def computeSelfCost (planner: RelOptPlanner, metadata: 
RelMetadataQuery): RelOptCost = {
+val children = this.getInputs
+children.foldLeft(planner.getCostFactory.makeZeroCost()) { (cost, 
child) =>
+  if (leftIsSingle && child.equals(right) ||
+  !leftIsSingle && child.equals(left)) {
+val rowCnt = metadata.getRowCount(child)
+val rowSize = this.estimateRowSize(child.getRowType)
+cost.plus(planner.getCostFactory.makeCost(rowCnt, rowCnt, rowCnt * 
rowSize))
+  } else {
+cost
+  }
+}
+  }
+
+  override def translateToPlan(
+  tableEnv: BatchTableEnvironment,
+  expectedType: Option[TypeInformation[Any]]): DataSet[Any] = {
+
+if (isConditionTypesCompatible(left.getRowType.getFieldList,
--- End diff --

`DataSetJoin` has the same keyPairs checking code, and different key types 
(String and Int, for example) in a `WHERE` expression will be caught by it if 
we have 'a3 = b1'. But we will receive a `NumberFormatException` from the 
generated join function if the `WHERE` expression looks like this: `a3 < b1`.
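The distinction drawn here (equi-join key pairs are type-checked at plan time, while a non-equi comparison yields no key pair and can only fail inside the generated code) can be sketched roughly as follows. This is an illustrative stand-in with plain strings for type names, not the actual Flink/Calcite API:

```java
// Illustrative sketch of the keyPairs type check being discussed. The type
// names are plain strings standing in for Calcite's RelDataTypeField, not the
// actual Flink/Calcite API. An equi-join pair such as a3 = b1 with String vs
// Int key types fails here at plan time; a non-equi comparison like a3 < b1
// yields no key pair, so it could only fail later in the generated join function.
import java.util.List;

public class KeyTypeCheck {
    // Each key pair holds (leftFieldIndex, rightFieldIndex).
    public static boolean conditionTypesCompatible(
            List<String> leftTypes, List<String> rightTypes, List<int[]> keyPairs) {
        for (int[] pair : keyPairs) {
            if (!leftTypes.get(pair[0]).equals(rightTypes.get(pair[1]))) {
                return false;
            }
        }
        return true;
    }
}
```

With an empty keyPairs list (as for `a3 < b1`) the check trivially passes, which is exactly why the mismatch surfaces only at runtime.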


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

[jira] [Assigned] (FLINK-5206) Flakey PythonPlanBinderTest

2016-11-29 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-5206:
---

Assignee: Chesnay Schepler

> Flakey PythonPlanBinderTest
> ---
>
> Key: FLINK-5206
> URL: https://issues.apache.org/jira/browse/FLINK-5206
> Project: Flink
>  Issue Type: Bug
> Environment: in TravisCI
>Reporter: Nico Kruber
>Assignee: Chesnay Schepler
>
> {code:none}
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> Job execution failed.
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply$mcV$sp(JobManager.scala:903)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:846)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:846)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.io.IOException: Output path '/tmp/flink/result2' could not be 
> initialized. Canceling task...
>   at 
> org.apache.flink.api.common.io.FileOutputFormat.open(FileOutputFormat.java:233)
>   at 
> org.apache.flink.api.java.io.TextOutputFormat.open(TextOutputFormat.java:78)
>   at 
> org.apache.flink.runtime.operators.DataSinkTask.invoke(DataSinkTask.java:178)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:654)
>   at java.lang.Thread.run(Thread.java:745)
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 36.891 sec 
> <<< FAILURE! - in org.apache.flink.python.api.PythonPlanBinderTest
> testJobWithoutObjectReuse(org.apache.flink.python.api.PythonPlanBinderTest)  
> Time elapsed: 11.53 sec  <<< FAILURE!
> java.lang.AssertionError: Error while calling the test program: Job execution 
> failed.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:180)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> 

[GitHub] flink pull request #2542: [FLINK-4613] [ml] Extend ALS to handle implicit fe...

2016-11-29 Thread mbalassi
Github user mbalassi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2542#discussion_r90032424
  
--- Diff: 
flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/recommendation/ALS.scala
 ---
@@ -581,6 +637,16 @@ object ALS {
 val userXy = new ArrayBuffer[Array[Double]]()
 val numRatings = new ArrayBuffer[Int]()
 
+var precomputedXtX: Array[Double] = null
+
+override def open(config: Configuration): Unit = {
+  // retrieve broadcasted precomputed XtX if using implicit 
feedback
+  if (implicitPrefs) {
+precomputedXtX = 
getRuntimeContext.getBroadcastVariable[Array[Double]]("XtX")
+  .iterator().next()
+  }
+}
+
 override def coGroup(left: lang.Iterable[(Int, Int, 
Array[Array[Double]])],
--- End diff --

I agree with @gaborhermann here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4613) Extend ALS to handle implicit feedback datasets

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705605#comment-15705605
 ] 

ASF GitHub Bot commented on FLINK-4613:
---

Github user mbalassi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2542#discussion_r90032424
  
--- Diff: 
flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/recommendation/ALS.scala
 ---
@@ -581,6 +637,16 @@ object ALS {
 val userXy = new ArrayBuffer[Array[Double]]()
 val numRatings = new ArrayBuffer[Int]()
 
+var precomputedXtX: Array[Double] = null
+
+override def open(config: Configuration): Unit = {
+  // retrieve broadcasted precomputed XtX if using implicit 
feedback
+  if (implicitPrefs) {
+precomputedXtX = 
getRuntimeContext.getBroadcastVariable[Array[Double]]("XtX")
+  .iterator().next()
+  }
+}
+
 override def coGroup(left: lang.Iterable[(Int, Int, 
Array[Array[Double]])],
--- End diff --

I agree with @gaborhermann here.


> Extend ALS to handle implicit feedback datasets
> ---
>
> Key: FLINK-4613
> URL: https://issues.apache.org/jira/browse/FLINK-4613
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Gábor Hermann
>Assignee: Gábor Hermann
>
> The Alternating Least Squares implementation should be extended to handle 
> _implicit feedback_ datasets. These datasets do not contain explicit ratings 
> by users, they are rather built by collecting user behavior (e.g. user 
> listened to artist X for Y minutes), and they require a slightly different 
> optimization objective. See details by [Hu et 
> al|http://dx.doi.org/10.1109/ICDM.2008.22].
> We do not need to modify much in the original ALS algorithm. See [Spark ALS 
> implementation|https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala],
>  which could be a basis for this extension. Only the updating factor part is 
> modified, and most of the changes are in the local parts of the algorithm 
> (i.e. UDFs). In fact, the only modification that is not local is 
> precomputing a matrix product Y^T * Y and broadcasting it to all the nodes, 
> which we can do with broadcast DataSets. 
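The one non-local step in that description can be sketched as follows. This is a minimal stand-in using plain arrays, assuming the fixed factor matrix Y is given as rows of `rank` doubles; in Flink the resulting array would be broadcast via a broadcast DataSet rather than returned directly:

```java
// Minimal sketch (not Flink's actual ALS code) of precomputing the Gram
// matrix Y^T * Y once, so every node can reuse it when updating the other
// factor matrix in implicit-feedback ALS (Hu et al.).
public class XtX {
    // Returns the symmetric rank x rank matrix as a flat row-major array.
    public static double[] computeXtX(double[][] factors, int rank) {
        double[] xtx = new double[rank * rank];
        for (double[] row : factors) {
            for (int i = 0; i < rank; i++) {
                for (int j = 0; j < rank; j++) {
                    xtx[i * rank + j] += row[i] * row[j];
                }
            }
        }
        return xtx;
    }
}
```

The point of the precomputation is that the expensive sum over all factor rows is done once per iteration instead of once per user/item update.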



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2542: [FLINK-4613] [ml] Extend ALS to handle implicit feedback ...

2016-11-29 Thread mbalassi
Github user mbalassi commented on the issue:

https://github.com/apache/flink/pull/2542
  
Yes @gaborhermann , I finally got here. 😄 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4613) Extend ALS to handle implicit feedback datasets

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705574#comment-15705574
 ] 

ASF GitHub Bot commented on FLINK-4613:
---

Github user mbalassi commented on the issue:

https://github.com/apache/flink/pull/2542
  
Yes @gaborhermann , I finally got here.  


> Extend ALS to handle implicit feedback datasets
> ---
>
> Key: FLINK-4613
> URL: https://issues.apache.org/jira/browse/FLINK-4613
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Gábor Hermann
>Assignee: Gábor Hermann
>
> The Alternating Least Squares implementation should be extended to handle 
> _implicit feedback_ datasets. These datasets do not contain explicit ratings 
> by users, they are rather built by collecting user behavior (e.g. user 
> listened to artist X for Y minutes), and they require a slightly different 
> optimization objective. See details by [Hu et 
> al|http://dx.doi.org/10.1109/ICDM.2008.22].
> We do not need to modify much in the original ALS algorithm. See [Spark ALS 
> implementation|https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala],
>  which could be a basis for this extension. Only the updating factor part is 
> modified, and most of the changes are in the local parts of the algorithm 
> (i.e. UDFs). In fact, the only modification that is not local is 
> precomputing a matrix product Y^T * Y and broadcasting it to all the nodes, 
> which we can do with broadcast DataSets. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5206) Flakey PythonPlanBinderTest

2016-11-29 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-5206:
--

 Summary: Flakey PythonPlanBinderTest
 Key: FLINK-5206
 URL: https://issues.apache.org/jira/browse/FLINK-5206
 Project: Flink
  Issue Type: Bug
 Environment: in TravisCI
Reporter: Nico Kruber


{code:none}
---
 T E S T S
---
Running org.apache.flink.python.api.PythonPlanBinderTest
Job execution failed.
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply$mcV$sp(JobManager.scala:903)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:846)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:846)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.io.IOException: Output path '/tmp/flink/result2' could not be 
initialized. Canceling task...
at 
org.apache.flink.api.common.io.FileOutputFormat.open(FileOutputFormat.java:233)
at 
org.apache.flink.api.java.io.TextOutputFormat.open(TextOutputFormat.java:78)
at 
org.apache.flink.runtime.operators.DataSinkTask.invoke(DataSinkTask.java:178)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:654)
at java.lang.Thread.run(Thread.java:745)
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 36.891 sec <<< 
FAILURE! - in org.apache.flink.python.api.PythonPlanBinderTest
testJobWithoutObjectReuse(org.apache.flink.python.api.PythonPlanBinderTest)  
Time elapsed: 11.53 sec  <<< FAILURE!
java.lang.AssertionError: Error while calling the test program: Job execution 
failed.
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at 

[GitHub] flink issue #2827: [FLINK-4921] Upgrade to Mesos 1.0.1

2016-11-29 Thread mxm
Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2827
  
Makes sense then. Do distributions like DC/OS already ship Mesos 1.0.1?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4921) Upgrade to Mesos 1.0.1

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705548#comment-15705548
 ] 

ASF GitHub Bot commented on FLINK-4921:
---

Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2827
  
Makes sense then. Do distributions like DC/OS already ship Mesos 1.0.1?


> Upgrade to Mesos 1.0.1
> --
>
> Key: FLINK-4921
> URL: https://issues.apache.org/jira/browse/FLINK-4921
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management, Mesos
>Reporter: Eron Wright 
>Assignee: Eron Wright 
> Fix For: 1.2.0
>
>
> Upgrade the client library to 1.0.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2896: [FLINK-5197] [jm] Ignore outdated JobStatusChanged...

2016-11-29 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/2896

[FLINK-5197] [jm] Ignore outdated JobStatusChanged messages

Backport of #2895 for release 1.1 branch.

Outdated JobStatusChanged messages no longer trigger a RemoveJob message but 
are logged and ignored. This has the advantage that an outdated 
JobStatusChanged message cannot interfere with a recovered job which can have 
the same job id.

Review @uce.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
backportFixJobStatusChangedMessage

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2896.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2896


commit 4a2f948224fe628c721adc4fae24199b0296c80f
Author: Till Rohrmann 
Date:   2016-11-29T15:02:29Z

[FLINK-5197] [jm] Ignore outdated JobStatusChanged messages

Outdated JobStatusChanged messages no longer trigger a RemoveJob message but 
are logged and ignored. This has the advantage that an outdated 
JobStatusChanged message cannot interfere with a recovered job which can have 
the same job id.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-5205) Asynchronous remove job messages can interfere with recovered or resubmitted jobs

2016-11-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-5205:


 Summary: Asynchronous remove job messages can interfere with 
recovered or resubmitted jobs
 Key: FLINK-5205
 URL: https://issues.apache.org/jira/browse/FLINK-5205
 Project: Flink
  Issue Type: Bug
  Components: JobManager
Affects Versions: 1.1.3, 1.2.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
Priority: Minor
 Fix For: 1.2.0, 1.1.4


The {{JobManager}} executes the {{removeJob}} call at various places 
asynchronously via the {{RemoveJob}} message, even though this is not strictly 
needed; it can be done synchronously. The advantage would be that the remove 
job operation won't interfere with recovered or resubmitted jobs which have the 
same {{JobID}} as the job for which the message was initially triggered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5197) Late JobStatusChanged messages can interfere with running jobs

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705540#comment-15705540
 ] 

ASF GitHub Bot commented on FLINK-5197:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/2896

[FLINK-5197] [jm] Ignore outdated JobStatusChanged messages

Backport of #2895 for release 1.1 branch.

Outdated JobStatusChanged messages no longer trigger a RemoveJob message but 
are logged and ignored. This has the advantage that an outdated 
JobStatusChanged message cannot interfere with a recovered job which can have 
the same job id.

Review @uce.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
backportFixJobStatusChangedMessage

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2896.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2896


commit 4a2f948224fe628c721adc4fae24199b0296c80f
Author: Till Rohrmann 
Date:   2016-11-29T15:02:29Z

[FLINK-5197] [jm] Ignore outdated JobStatusChanged messages

Outdated JobStatusChanged messages no longer trigger a RemoveJob message but 
are logged and ignored. This has the advantage that an outdated 
JobStatusChanged message cannot interfere with a recovered job which can have 
the same job id.




> Late JobStatusChanged messages can interfere with running jobs
> --
>
> Key: FLINK-5197
> URL: https://issues.apache.org/jira/browse/FLINK-5197
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
> Fix For: 1.2.0, 1.1.4
>
>
> When the {{JobManager}} receives a {{JobStatusChanged}} message, it will look 
> up the {{ExecutionGraph}} for the given {{JobID}}. If there is no 
> {{ExecutionGraph}}, then a {{RemoveJob}} message is sent to itself. In the 
> general case, this is not problematic, because the {{RemoveJob}} message 
> won't do anything if there is no {{ExecutionGraph}}. However, since this is 
> an asynchronous call, it can happen that the corresponding job of the 
> {{JobID}} is recovered before receiving the {{RemoveJob}} message. In this 
> case, the newly recovered job would be removed.
> I propose to change the behaviour such that a {{JobStatusChanged}} for a 
> non-existing {{ExecutionGraph}} will be simply ignored.
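The proposed behaviour can be sketched as follows. This is a simplified stand-in, not Flink's actual JobManager code; the map plays the role of the ExecutionGraph lookup, and the return value stands in for the action taken:

```java
// Simplified stand-in (not Flink's JobManager) for the proposed fix: a
// status-change message for an unknown job id is logged and ignored instead
// of triggering an asynchronous RemoveJob, which could otherwise remove a
// job that was recovered under the same id in the meantime.
import java.util.Map;

public class JobStatusHandler {
    public static String handleJobStatusChanged(Map<String, Object> executionGraphs, String jobId) {
        if (executionGraphs.containsKey(jobId)) {
            return "update " + jobId;
        }
        // Previously: an asynchronous RemoveJob message was sent to self here.
        return "ignore " + jobId;
    }
}
```

Because nothing is sent asynchronously for the unknown id, there is no message left in flight that could race with a recovery of the same {{JobID}}.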



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5204) Flakey YARNSessionCapacitySchedulerITCase (NullPointerException)

2016-11-29 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-5204:
--

 Summary: Flakey YARNSessionCapacitySchedulerITCase 
(NullPointerException)
 Key: FLINK-5204
 URL: https://issues.apache.org/jira/browse/FLINK-5204
 Project: Flink
  Issue Type: Bug
  Components: Tests, YARN
Affects Versions: 2.0.0
 Environment: TravisCI
Reporter: Nico Kruber


{code:none}
testTaskManagerFailure(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase)
  Time elapsed: 0.61 sec  <<< ERROR!
java.lang.NullPointerException: java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:119)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getApplicationResourceUsageReport(RMAppAttemptImpl.java:814)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.createAndGetApplicationReport(RMAppImpl.java:580)
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:806)
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:670)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:234)
at 
org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:107)
at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplications(ApplicationClientProtocolPBClientImpl.java:254)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy91.getApplications(Unknown Source)
at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplications(YarnClientImpl.java:478)
at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplications(YarnClientImpl.java:455)
at 
org.apache.flink.yarn.YarnTestBase.checkClusterEmpty(YarnTestBase.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 

[jira] [Created] (FLINK-5203) YARNHighAvailabilityITCase fails to deploy YARN cluster

2016-11-29 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-5203:
--

 Summary: YARNHighAvailabilityITCase fails to deploy YARN cluster
 Key: FLINK-5203
 URL: https://issues.apache.org/jira/browse/FLINK-5203
 Project: Flink
  Issue Type: Bug
  Components: Tests, YARN
Affects Versions: 2.0.0
 Environment: in TravisCI
Reporter: Nico Kruber


{code:none}
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 49.883 sec <<< 
FAILURE! - in org.apache.flink.yarn.YARNHighAvailabilityITCase
testMultipleAMKill(org.apache.flink.yarn.YARNHighAvailabilityITCase)  Time 
elapsed: 42.191 sec  <<< ERROR!
java.lang.RuntimeException: Couldn't deploy Yarn cluster
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:840)
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploy(AbstractYarnClusterDescriptor.java:407)
at 
org.apache.flink.yarn.YARNHighAvailabilityITCase.testMultipleAMKill(YARNHighAvailabilityITCase.java:131)
{code}

stdout log:
https://api.travis-ci.org/jobs/179733979/log.txt?deansi=true

full logs:
https://transfer.sh/UNjFq/29.5.tar.gz




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5197) Late JobStatusChanged messages can interfere with running jobs

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705538#comment-15705538
 ] 

ASF GitHub Bot commented on FLINK-5197:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/2895

[FLINK-5197] [jm] Ignore outdated JobStatusChanged messages

Outdated JobStatusChanged messages no longer trigger a RemoveJob message but 
are logged and ignored. This has the advantage that an outdated 
JobStatusChanged message cannot interfere with a recovered job which can have 
the same job id.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink fixJobStatusChangedMessage

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2895.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2895


commit 490cef46380178a2296c2f743b9eb91154967463
Author: Till Rohrmann 
Date:   2016-11-29T15:02:29Z

[FLINK-5197] [jm] Ignore outdated JobStatusChanged messages

Outdated JobStatusChanged messages no longer trigger a RemoveJob message but
are logged and ignored. This has the advantage that an outdated
JobStatusChanged message cannot interfere with a recovered job that can have
the same job ID.




> Late JobStatusChanged messages can interfere with running jobs
> --
>
> Key: FLINK-5197
> URL: https://issues.apache.org/jira/browse/FLINK-5197
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
> Fix For: 1.2.0, 1.1.4
>
>
> When the {{JobManager}} receives a {{JobStatusChanged}} message, it will look 
> up the {{ExecutionGraph}} for the given {{JobID}}. If there is no 
> {{ExecutionGraph}}, then a {{RemoveJob}} message is sent to itself. In the 
> general case, this is not problematic, because the {{RemoveJob}} message 
> won't do anything if there is no {{ExecutionGraph}}. However, since this is 
> an asynchronous call, it can happen that the corresponding job of the 
> {{JobID}} is recovered before receiving the {{RemoveJob}} message. In this 
> case, the newly recovered job would be removed.
> I propose to change the behaviour such that a {{JobStatusChanged}} for a 
> non-existing {{ExecutionGraph}} will be simply ignored.





[GitHub] flink pull request #2895: [FLINK-5197] [jm] Ignore outdated JobStatusChanged...

2016-11-29 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/2895

[FLINK-5197] [jm] Ignore outdated JobStatusChanged messages

Outdated JobStatusChanged messages no longer trigger a RemoveJob message but
are logged and ignored. This has the advantage that an outdated
JobStatusChanged message cannot interfere with a recovered job that can have
the same job ID.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink fixJobStatusChangedMessage

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2895.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2895


commit 490cef46380178a2296c2f743b9eb91154967463
Author: Till Rohrmann 
Date:   2016-11-29T15:02:29Z

[FLINK-5197] [jm] Ignore outdated JobStatusChanged messages

Outdated JobStatusChanged messages no longer trigger a RemoveJob message but
are logged and ignored. This has the advantage that an outdated
JobStatusChanged message cannot interfere with a recovered job that can have
the same job ID.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2758: [FLINK-4260] Support specifying ESCAPE character i...

2016-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2758




[jira] [Resolved] (FLINK-4260) Allow SQL's LIKE ESCAPE

2016-11-29 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4260.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in bcef8a3f8db5b07e1c4f1f62731332cc1ea3aaa0.

> Allow SQL's LIKE ESCAPE
> ---
>
> Key: FLINK-4260
> URL: https://issues.apache.org/jira/browse/FLINK-4260
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Timo Walther
>Assignee: Leo Deng
>Priority: Minor
> Fix For: 1.2.0
>
>
> Currently, the SQL API does not support specifying an ESCAPE character in a
> LIKE expression. The SIMILAR TO operator should also support it.
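The semantics being added can be illustrated outside Flink with a small helper that translates a LIKE pattern plus an ESCAPE character into a Java regex. This is only a sketch of the standard SQL semantics; `likeToRegex` is a hypothetical name, not a Flink or Calcite API:

```java
import java.util.regex.Pattern;

public class LikeEscape {
    // Translate a SQL LIKE pattern into a Java regex. A character preceded by
    // the escape character is matched literally, even if it is '%' or '_'.
    public static String likeToRegex(String pattern, char escape) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < pattern.length(); i++) {
            char c = pattern.charAt(i);
            if (c == escape && i + 1 < pattern.length()) {
                // Escaped character: quote it so wildcards lose their meaning.
                sb.append(Pattern.quote(String.valueOf(pattern.charAt(++i))));
            } else if (c == '%') {
                sb.append(".*");      // '%' matches any sequence of characters
            } else if (c == '_') {
                sb.append('.');       // '_' matches exactly one character
            } else {
                sb.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // With ESCAPE '!', the pattern "100!%" matches only the literal "100%".
        System.out.println("100%".matches(likeToRegex("100!%", '!')));   // true
        System.out.println("100x".matches(likeToRegex("100!%", '!')));   // false
        // Without escaping, '%' remains a wildcard.
        System.out.println("100xyz".matches(likeToRegex("100%", '!')));  // true
    }
}
```

This mirrors what the SQL standard specifies for `expr LIKE pattern ESCAPE ch`, which the PR exposes in Flink's SQL API.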





[jira] [Commented] (FLINK-4260) Allow SQL's LIKE ESCAPE

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705520#comment-15705520
 ] 

ASF GitHub Bot commented on FLINK-4260:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2758


> Allow SQL's LIKE ESCAPE
> ---
>
> Key: FLINK-4260
> URL: https://issues.apache.org/jira/browse/FLINK-4260
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Timo Walther
>Assignee: Leo Deng
>Priority: Minor
> Fix For: 1.2.0
>
>
> Currently, the SQL API does not support specifying an ESCAPE character in a
> LIKE expression. The SIMILAR TO operator should also support it.





[jira] [Commented] (FLINK-5202) test timeout (no output for 300s) in YARNHighAvailabilityITCase

2016-11-29 Thread Nico Kruber (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705518#comment-15705518
 ] 

Nico Kruber commented on FLINK-5202:


one more:

https://api.travis-ci.org/jobs/179733976/log.txt?deansi=true
https://transfer.sh/16at6t/29.3.tar.gz

> test timeout (no output for 300s) in YARNHighAvailabilityITCase
> ---
>
> Key: FLINK-5202
> URL: https://issues.apache.org/jira/browse/FLINK-5202
> Project: Flink
>  Issue Type: Bug
>  Components: Tests, YARN
>Affects Versions: 2.0.0
> Environment: in TravisCI
>Reporter: Nico Kruber
>
> TravisCI occasionally fails because YARNHighAvailabilityITCase fails to
> produce output (or finish) within 300s. See the logs at the URLs below:
> textual log:
> https://api.travis-ci.org/jobs/179733965/log.txt?deansi=true
> full log at https://transfer.sh/xcLvY/29.1.tar.gz





[jira] [Created] (FLINK-5201) Promote loaded config properties to INFO

2016-11-29 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5201:
--

 Summary: Promote loaded config properties to INFO
 Key: FLINK-5201
 URL: https://issues.apache.org/jira/browse/FLINK-5201
 Project: Flink
  Issue Type: Improvement
Reporter: Ufuk Celebi
Assignee: Ufuk Celebi


Config properties loaded via {{GlobalConfiguration}} are logged at DEBUG level.
I think it's fair to log them at INFO level since this happens only once on
start-up and is usually very helpful.





[jira] [Created] (FLINK-5202) test timeout (no output for 300s) in YARNHighAvailabilityITCase

2016-11-29 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-5202:
--

 Summary: test timeout (no output for 300s) in 
YARNHighAvailabilityITCase
 Key: FLINK-5202
 URL: https://issues.apache.org/jira/browse/FLINK-5202
 Project: Flink
  Issue Type: Bug
  Components: Tests, YARN
Affects Versions: 2.0.0
 Environment: in TravisCI
Reporter: Nico Kruber


TravisCI occasionally fails because YARNHighAvailabilityITCase fails to produce
output (or finish) within 300s. See the logs at the URLs below:

textual log:
https://api.travis-ci.org/jobs/179733965/log.txt?deansi=true

full log at https://transfer.sh/xcLvY/29.1.tar.gz





[jira] [Updated] (FLINK-5197) Late JobStatusChanged messages can interfere with running jobs

2016-11-29 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-5197:
-
Summary: Late JobStatusChanged messages can interfere with running jobs  
(was: Late JobStatusChanges can interfere with running jobs)

> Late JobStatusChanged messages can interfere with running jobs
> --
>
> Key: FLINK-5197
> URL: https://issues.apache.org/jira/browse/FLINK-5197
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
> Fix For: 1.2.0, 1.1.4
>
>
> When the {{JobManager}} receives a {{JobStatusChanged}} message, it will look 
> up the {{ExecutionGraph}} for the given {{JobID}}. If there is no 
> {{ExecutionGraph}}, then a {{RemoveJob}} message is sent to itself. In the 
> general case, this is not problematic, because the {{RemoveJob}} message 
> won't do anything if there is no {{ExecutionGraph}}. However, since this is 
> an asynchronous call, it can happen that the corresponding job of the 
> {{JobID}} is recovered before receiving the {{RemoveJob}} message. In this 
> case, the newly recovered job would be removed.
> I propose to change the behaviour such that a {{JobStatusChanged}} for a 
> non-existing {{ExecutionGraph}} will be simply ignored.





[jira] [Closed] (FLINK-5184) Error result of compareSerialized in RowComparator class

2016-11-29 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-5184.

   Resolution: Fixed
Fix Version/s: 1.1.4
   1.2.0

Fixed for 1.2.0 with ecfb5b5f6fd6bf1555c7240d77dd9aca982f4416
Fixed for 1.1.4 with 0758d0be60647f09f0254092c05601d6698ae90f

> Error result of compareSerialized in RowComparator class
> 
>
> Key: FLINK-5184
> URL: https://issues.apache.org/jira/browse/FLINK-5184
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: godfrey he
>Assignee: godfrey he
> Fix For: 1.2.0, 1.1.4
>
>
> RowSerializer writes a null mask for all fields in a row before serializing
> the row data to DataOutputView.
> {code:title=RowSerializer.scala|borderStyle=solid}
> override def serialize(value: Row, target: DataOutputView) {
> val len = fieldSerializers.length
> if (value.productArity != len) {
>   throw new RuntimeException("Row arity of value does not match 
> serializers.")
> }
> // write a null mask
> writeNullMask(len, value, target)
> ..
> }
> {code}
> RowComparator deserializes row data from DataInputView when its
> compareSerialized method is called. However, the first argument passed to the
> readIntoNullMask method is wrong: it should be the count of all fields,
> rather than the length of serializers (which only deserializes the first n
> fields for comparison).
> {code:title=RowComparator.scala|borderStyle=solid}
> override def compareSerialized(firstSource: DataInputView, secondSource: 
> DataInputView): Int = {
> val len = serializers.length
> val keyLen = keyPositions.length
> readIntoNullMask(len, firstSource, nullMask1)
> readIntoNullMask(len, secondSource, nullMask2)
> ..
> }
> {code}
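The misalignment is easy to see in isolation. Below is a minimal sketch of the idea, assuming one mask bit per field packed into bytes; this is not Flink's exact wire format, and the helper names only mirror the ones quoted above. If the row has more fields than the comparator's serializers cover, passing the smaller count leaves mask bytes unconsumed, so every subsequent read is off:

```java
public class NullMaskSketch {
    // Write one mask byte per group of 8 fields; bit set => field is null.
    // Returns the number of mask bytes written.
    static int writeNullMask(int fieldCount, boolean[] isNull, byte[] out, int pos) {
        int start = pos;
        for (int i = 0; i < fieldCount; i += 8) {
            int b = 0;
            for (int j = 0; j < 8 && i + j < fieldCount; j++) {
                if (isNull[i + j]) b |= 1 << j;
            }
            out[pos++] = (byte) b;
        }
        return pos - start;
    }

    // Consume the mask for fieldCount fields; returns the new read position.
    static int readIntoNullMask(int fieldCount, byte[] in, int pos, boolean[] mask) {
        for (int i = 0; i < fieldCount; i += 8) {
            int b = in[pos++] & 0xFF;
            for (int j = 0; j < 8 && i + j < fieldCount; j++) {
                mask[i + j] = (b & (1 << j)) != 0;
            }
        }
        return pos;
    }

    public static void main(String[] args) {
        int totalFields = 10;                       // 10 fields -> 2 mask bytes
        byte[] buf = new byte[16];
        int pos = writeNullMask(totalFields, new boolean[totalFields], buf, 0);
        buf[pos] = 42;                              // first serialized field byte

        // Correct: pass the count of ALL fields; the payload lines up.
        int after = readIntoNullMask(totalFields, buf, 0, new boolean[totalFields]);
        System.out.println(buf[after]);             // 42

        // Buggy: passing only 8 (the key serializers) consumes 1 of 2 mask
        // bytes, so the "payload" read hits the leftover mask byte instead.
        int afterBuggy = readIntoNullMask(8, buf, 0, new boolean[8]);
        System.out.println(buf[afterBuggy]);        // 0, not 42
    }
}
```

This is exactly the fix the issue describes: the reader must be given the total field count, matching what the writer produced.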





[jira] [Closed] (FLINK-5195) Log JobManager configuration on start up

2016-11-29 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi closed FLINK-5195.
--
Resolution: Not A Problem

Already logged

> Log JobManager configuration on start up
> 
>
> Key: FLINK-5195
> URL: https://issues.apache.org/jira/browse/FLINK-5195
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ufuk Celebi
>
> Log the JobManager configuration on start-up, like the TaskManagers do.
> Furthermore, I would like to promote the log level of both to INFO since it's
> a one-time thing and often helpful.





[jira] [Closed] (FLINK-4832) Count/Sum 0 elements

2016-11-29 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-4832.

   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed with ecfb5b5f6fd6bf1555c7240d77dd9aca982f4416

Thanks for the contribution [~anmu]!

> Count/Sum 0 elements
> 
>
> Key: FLINK-4832
> URL: https://issues.apache.org/jira/browse/FLINK-4832
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Anton Mushin
> Fix For: 1.2.0
>
>
> Currently, the Table API is unable to count or sum up 0 elements. We should
> improve DataSet aggregations for this, maybe by unioning the original DataSet
> with a dummy record or by using a MapPartition function. Coming up with a
> good design for this is also part of this issue.
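The union-with-a-dummy-record idea can be sketched with plain Java streams. This only illustrates the trick, not Flink's DataSet API: an empty aggregation normally emits no result record at all, but unioning in a neutral record guarantees exactly one output:

```java
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.stream.Stream;

public class CountEmptySketch {
    // Aggregating an empty input with reduce() yields no result at all --
    // mirroring why a DataSet aggregation emits no record for 0 elements.
    static Optional<Long> plainCount(List<Integer> data) {
        return data.stream().map(x -> 1L).reduce(Long::sum);
    }

    // Union the input with a neutral dummy record (count contribution 0) so
    // the aggregation always produces a result, even on empty input.
    static long countWithDummy(List<Integer> data) {
        Stream<Long> ones = data.stream().map(x -> 1L);
        Stream<Long> dummy = Stream.of(0L);
        return Stream.concat(ones, dummy).reduce(0L, Long::sum);
    }

    public static void main(String[] args) {
        System.out.println(plainCount(Collections.emptyList()).isPresent()); // false
        System.out.println(countWithDummy(Collections.emptyList()));         // 0
        System.out.println(countWithDummy(List.of(7, 8, 9)));                // 3
    }
}
```

The dummy record must be neutral for the aggregate (0 for count/sum); for min/max a sentinel would need extra filtering, which is part of why a good design is non-trivial.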





[jira] [Closed] (FLINK-4825) Implement a RexExecutor that uses Flink's code generation

2016-11-29 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-4825.

   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed with db441decb41bf856400766023bfc7de77d6041aa

> Implement a RexExecutor that uses Flink's code generation
> -
>
> Key: FLINK-4825
> URL: https://issues.apache.org/jira/browse/FLINK-4825
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
> Fix For: 1.2.0
>
>
> The added {{ReduceExpressionRule}} leads to inconsistent behavior, because
> some parts of an expression are evaluated using Flink's code generation and
> some parts use Calcite's code generation.
> A very easy example: boolean expressions cast to string are represented as
> "TRUE/FALSE" using Calcite and "true/false" using Flink.
> I propose to implement the RexExecutor interface and forward the calls to 
> Flink's code generation. Additional improvements in order to be more standard 
> compliant could be solved in new Jira issues.
> I will disable the rule and the corresponding tests till this issue is fixed.





[jira] [Commented] (FLINK-5184) Error result of compareSerialized in RowComparator class

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705469#comment-15705469
 ] 

ASF GitHub Bot commented on FLINK-5184:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2894


> Error result of compareSerialized in RowComparator class
> 
>
> Key: FLINK-5184
> URL: https://issues.apache.org/jira/browse/FLINK-5184
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: godfrey he
>Assignee: godfrey he
>
> RowSerializer writes a null mask for all fields in a row before serializing
> the row data to DataOutputView.
> {code:title=RowSerializer.scala|borderStyle=solid}
> override def serialize(value: Row, target: DataOutputView) {
> val len = fieldSerializers.length
> if (value.productArity != len) {
>   throw new RuntimeException("Row arity of value does not match 
> serializers.")
> }
> // write a null mask
> writeNullMask(len, value, target)
> ..
> }
> {code}
> RowComparator deserializes row data from DataInputView when its
> compareSerialized method is called. However, the first argument passed to the
> readIntoNullMask method is wrong: it should be the count of all fields,
> rather than the length of serializers (which only deserializes the first n
> fields for comparison).
> {code:title=RowComparator.scala|borderStyle=solid}
> override def compareSerialized(firstSource: DataInputView, secondSource: 
> DataInputView): Int = {
> val len = serializers.length
> val keyLen = keyPositions.length
> readIntoNullMask(len, firstSource, nullMask1)
> readIntoNullMask(len, secondSource, nullMask2)
> ..
> }
> {code}





[jira] [Commented] (FLINK-4825) Implement a RexExecutor that uses Flink's code generation

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705470#comment-15705470
 ] 

ASF GitHub Bot commented on FLINK-4825:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2884


> Implement a RexExecutor that uses Flink's code generation
> -
>
> Key: FLINK-4825
> URL: https://issues.apache.org/jira/browse/FLINK-4825
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> The added {{ReduceExpressionRule}} leads to inconsistent behavior, because
> some parts of an expression are evaluated using Flink's code generation and
> some parts use Calcite's code generation.
> A very easy example: boolean expressions cast to string are represented as
> "TRUE/FALSE" using Calcite and "true/false" using Flink.
> I propose to implement the RexExecutor interface and forward the calls to 
> Flink's code generation. Additional improvements in order to be more standard 
> compliant could be solved in new Jira issues.
> I will disable the rule and the corresponding tests till this issue is fixed.





[jira] [Commented] (FLINK-5175) StreamExecutionEnvironment's set function return `this` instead of void

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705471#comment-15705471
 ] 

ASF GitHub Bot commented on FLINK-5175:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2874


> StreamExecutionEnvironment's set function return `this` instead of void
> ---
>
> Key: FLINK-5175
> URL: https://issues.apache.org/jira/browse/FLINK-5175
> Project: Flink
>  Issue Type: Sub-task
>  Components: DataStream API
>Reporter: shijinkui
> Fix For: 2.0.0
>
>
> from FLINK-5167.
> for example :
> public void setNumberOfExecutionRetries(int numberOfExecutionRetries)
> { config.setNumberOfExecutionRetries(numberOfExecutionRetries); }
> change to:
> public StreamExecutionEnvironment setNumberOfExecutionRetries(int 
> numberOfExecutionRetries)
> { config.setNumberOfExecutionRetries(numberOfExecutionRetries); return this; }
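The proposed change is the standard fluent-setter pattern. A minimal self-contained sketch, where `FluentEnv` is only a stand-in for `StreamExecutionEnvironment`, not the real class:

```java
public class FluentEnv {
    private int numberOfExecutionRetries;

    // Returning `this` instead of void allows call chaining, as proposed.
    public FluentEnv setNumberOfExecutionRetries(int numberOfExecutionRetries) {
        this.numberOfExecutionRetries = numberOfExecutionRetries;
        return this;
    }

    public int getNumberOfExecutionRetries() {
        return numberOfExecutionRetries;
    }

    public static void main(String[] args) {
        // Chained configuration in a single expression.
        int retries = new FluentEnv()
                .setNumberOfExecutionRetries(3)
                .getNumberOfExecutionRetries();
        System.out.println(retries); // 3
    }
}
```

Note that changing a setter's return type from `void` to the class type is source-compatible for callers but not binary-compatible, which is why it targets a major version.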





[GitHub] flink pull request #2884: [FLINK-4825] [table] Implement a RexExecutor that ...

2016-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2884




[GitHub] flink pull request #2874: [FLINK-5175] StreamExecutionEnvironment's set func...

2016-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2874




[GitHub] flink pull request #2894: [FLINK-5184] fix bug: Error result of compareSeria...

2016-11-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2894




[jira] [Created] (FLINK-5200) Display recovery information in Webfrontend

2016-11-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-5200:


 Summary: Display recovery information in Webfrontend
 Key: FLINK-5200
 URL: https://issues.apache.org/jira/browse/FLINK-5200
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 1.1.3, 1.2.0
Reporter: Till Rohrmann
Priority: Minor
 Fix For: 1.2.0


It would be helpful to display more recovery information in the web UI, e.g.
the restart attempt number, the restart time, and the reason for the restart.





[GitHub] flink issue #2758: [FLINK-4260] Support specifying ESCAPE character in LIKE ...

2016-11-29 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2758
  
The PR looks good. The only thing missing is documentation. I will add it 
and merge this.




[jira] [Commented] (FLINK-4260) Allow SQL's LIKE ESCAPE

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705453#comment-15705453
 ] 

ASF GitHub Bot commented on FLINK-4260:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2758
  
The PR looks good. The only thing missing is documentation. I will add it 
and merge this.


> Allow SQL's LIKE ESCAPE
> ---
>
> Key: FLINK-4260
> URL: https://issues.apache.org/jira/browse/FLINK-4260
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Timo Walther
>Assignee: Leo Deng
>Priority: Minor
>
> Currently, the SQL API does not support specifying an ESCAPE character in a
> LIKE expression. The SIMILAR TO operator should also support it.





[jira] [Created] (FLINK-5199) Improve logging of submitted job graph actions in HA case

2016-11-29 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5199:
--

 Summary: Improve logging of submitted job graph actions in HA case
 Key: FLINK-5199
 URL: https://issues.apache.org/jira/browse/FLINK-5199
 Project: Flink
  Issue Type: Improvement
Reporter: Ufuk Celebi


Include the involved paths (ZK and FS) in the log messages and make sure they
are emitted for each operation (put, get, delete).





[jira] [Created] (FLINK-5198) Overwrite TaskState toString

2016-11-29 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5198:
--

 Summary: Overwrite TaskState toString
 Key: FLINK-5198
 URL: https://issues.apache.org/jira/browse/FLINK-5198
 Project: Flink
  Issue Type: Improvement
Reporter: Ufuk Celebi
Assignee: Ufuk Celebi


TaskState is logged on DEBUG level for each completed checkpoint, but it 
doesn't have a good toString.





[jira] [Created] (FLINK-5197) Late JobStatusChanges can interfere with running jobs

2016-11-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-5197:


 Summary: Late JobStatusChanges can interfere with running jobs
 Key: FLINK-5197
 URL: https://issues.apache.org/jira/browse/FLINK-5197
 Project: Flink
  Issue Type: Bug
  Components: JobManager
Affects Versions: 1.1.3, 1.2.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
Priority: Minor
 Fix For: 1.2.0, 1.1.4


When the {{JobManager}} receives a {{JobStatusChanged}} message, it will look 
up the {{ExecutionGraph}} for the given {{JobID}}. If there is no 
{{ExecutionGraph}}, then a {{RemoveJob}} message is sent to itself. In the 
general case, this is not problematic, because the {{RemoveJob}} message won't 
do anything if there is no {{ExecutionGraph}}. However, since this is an 
asynchronous call, it can happen that the corresponding job of the {{JobID}} is 
recovered before receiving the {{RemoveJob}} message. In this case, the newly 
recovered job would be removed.

I propose to change the behaviour such that a {{JobStatusChanged}} for a 
non-existing {{ExecutionGraph}} will be simply ignored.
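The proposed guard can be sketched as follows, with hypothetical simplified types rather than the actual actor-based JobManager code: instead of self-sending an asynchronous RemoveJob for an unknown JobID, the message is logged and dropped, so a job recovered in the meantime under the same ID cannot be removed by accident.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JobStatusGuard {
    private final Map<String, String> executionGraphs = new HashMap<>();
    final List<String> log = new ArrayList<>();

    void addJob(String jobId) {
        executionGraphs.put(jobId, "graph-of-" + jobId);
    }

    // Proposed behavior: a JobStatusChanged for an unknown JobID is ignored.
    // Previously a RemoveJob message was self-sent asynchronously, which could
    // delete a job with the same JobID recovered in the meantime.
    void onJobStatusChanged(String jobId, String newStatus) {
        if (!executionGraphs.containsKey(jobId)) {
            log.add("Ignoring JobStatusChanged(" + newStatus + ") for unknown job " + jobId);
            return;
        }
        log.add("Job " + jobId + " switched to " + newStatus);
    }

    boolean hasJob(String jobId) {
        return executionGraphs.containsKey(jobId);
    }

    public static void main(String[] args) {
        JobStatusGuard jm = new JobStatusGuard();
        jm.onJobStatusChanged("job-1", "FAILED"); // late message, job unknown: ignored
        jm.addJob("job-1");                        // job recovered afterwards
        System.out.println(jm.hasJob("job-1"));    // true -- recovery not undone
    }
}
```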





[jira] [Updated] (FLINK-5196) Don't log InputChannelDescriptor

2016-11-29 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi updated FLINK-5196:
---
Description: 
Logging the InputChannelDescriptors is very noisy and usually infeasible to 
parse for larger setups with all-to-all connections.

In the log of a larger-scale Flink job, this led to an 11-fold reduction in
file size (175 MB to 15 MB).

  was:Logging the InputChannelDescriptors is very noisy and usually infeasible 
to parse for larger setups with all-to-all connections.


> Don't log InputChannelDescriptor
> 
>
> Key: FLINK-5196
> URL: https://issues.apache.org/jira/browse/FLINK-5196
> Project: Flink
>  Issue Type: Improvement
>  Components: JobManager
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>
> Logging the InputChannelDescriptors is very noisy and usually infeasible to 
> parse for larger setups with all-to-all connections.
> In the log of a larger-scale Flink job, this led to an 11-fold reduction in
> file size (175 MB to 15 MB).





[jira] [Created] (FLINK-5196) Don't log InputChannelDescriptor

2016-11-29 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5196:
--

 Summary: Don't log InputChannelDescriptor
 Key: FLINK-5196
 URL: https://issues.apache.org/jira/browse/FLINK-5196
 Project: Flink
  Issue Type: Improvement
  Components: JobManager
Reporter: Ufuk Celebi
Assignee: Ufuk Celebi


Logging the InputChannelDescriptors is very noisy and usually infeasible to 
parse for larger setups with all-to-all connections.





[jira] [Created] (FLINK-5195) Log JobManager configuration on start up

2016-11-29 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5195:
--

 Summary: Log JobManager configuration on start up
 Key: FLINK-5195
 URL: https://issues.apache.org/jira/browse/FLINK-5195
 Project: Flink
  Issue Type: Improvement
Reporter: Ufuk Celebi


Log the JobManager configuration on start-up, like the TaskManagers do.
Furthermore, I would like to promote the log level of both to INFO since it's
a one-time thing and often helpful.





[jira] [Created] (FLINK-5194) Log heartbeats on TRACE level

2016-11-29 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5194:
--

 Summary: Log heartbeats on TRACE level
 Key: FLINK-5194
 URL: https://issues.apache.org/jira/browse/FLINK-5194
 Project: Flink
  Issue Type: Improvement
Reporter: Ufuk Celebi
Assignee: Ufuk Celebi


Heartbeats are logged on DEBUG level. If users enable debug logs, this leads to
an explosion of log messages. I would like to log heartbeats on TRACE level;
this includes the InstanceManager and JobManager. The RemoteWatcher heartbeats
can be disabled via FLINK-5192.





[jira] [Created] (FLINK-5193) Recovering all jobs fails completely if a single recovery fails

2016-11-29 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-5193:


 Summary: Recovering all jobs fails completely if a single recovery 
fails
 Key: FLINK-5193
 URL: https://issues.apache.org/jira/browse/FLINK-5193
 Project: Flink
  Issue Type: Bug
  Components: JobManager
Affects Versions: 1.1.3, 1.2.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.2.0, 1.1.4


In HA case where the {{JobManager}} tries to recover all submitted job graphs, 
e.g. when regaining leadership, it can happen that none of the submitted jobs 
are recovered if a single recovery fails. Instead of failing the complete 
recovery procedure, the {{JobManager}} should still try to recover the 
remaining (non-failing) jobs and print a proper error message for the failed 
recoveries.





[jira] [Created] (FLINK-5192) Provide better log config templates

2016-11-29 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5192:
--

 Summary: Provide better log config templates
 Key: FLINK-5192
 URL: https://issues.apache.org/jira/browse/FLINK-5192
 Project: Flink
  Issue Type: Improvement
  Components: JobManager, TaskManager
Reporter: Ufuk Celebi
Assignee: Ufuk Celebi


Our current log config template is very generic and invites users to always set 
the root logger to DEBUG if they want to get more details. Since Flink depends 
on libraries like Akka and it's common to run Flink with other systems like 
Hadoop or Kafka, this also increases the log levels for those systems.

I would propose to split the default logger configuration up and provide a
separate debugging template. Furthermore, some noisy loggers might be turned
off by default, for example {{HadoopFileSystem}}.
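A split along these lines could look like the following log4j properties fragments. The file names, logger names, and level choices here are only a sketch of the proposal, not the templates that were actually shipped:

```
# log4j.properties -- default template: INFO root, noisy dependencies capped
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.logger.akka=INFO
log4j.logger.org.apache.kafka=INFO
log4j.logger.org.apache.hadoop=INFO

# log4j-debug.properties -- separate debugging template: only Flink's own
# loggers on DEBUG, so Akka/Kafka/Hadoop output does not drown Flink's
log4j.rootLogger=INFO, file
log4j.logger.org.apache.flink=DEBUG
```

The point is that a user who wants Flink debug output switches templates instead of raising the root logger, which keeps third-party log volume under control.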





[jira] [Created] (FLINK-5191) ConnectedComponentsWithDeferredUpdateITCase failed

2016-11-29 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-5191:


 Summary: ConnectedComponentsWithDeferredUpdateITCase failed
 Key: FLINK-5191
 URL: https://issues.apache.org/jira/browse/FLINK-5191
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.2.0
Reporter: Fabian Hueske


The test failed with the following error message on Travis:

{code}
Running org.apache.flink.test.iterative.ConnectedComponentsWithDeferredUpdateITCase
Job execution failed.
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
	at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:903)
	at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:846)
	at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:846)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.io.IOException: Stream Closed
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:272)
	at org.apache.flink.core.fs.local.LocalDataInputStream.read(LocalDataInputStream.java:72)
	at org.apache.flink.core.fs.FSDataInputStreamWrapper.read(FSDataInputStreamWrapper.java:59)
	at org.apache.flink.api.common.io.DelimitedInputFormat.fillBuffer(DelimitedInputFormat.java:619)
	at org.apache.flink.api.common.io.DelimitedInputFormat.readLine(DelimitedInputFormat.java:513)
	at org.apache.flink.api.common.io.DelimitedInputFormat.nextRecord(DelimitedInputFormat.java:479)
	at org.apache.flink.api.java.io.CsvInputFormat.nextRecord(CsvInputFormat.java:78)
	at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:166)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:654)
	at java.lang.Thread.run(Thread.java:745)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5008) Update quickstart documentation

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705362#comment-15705362
 ] 

ASF GitHub Bot commented on FLINK-5008:
---

Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/2764
  
I'll look into writing Flink programs with Eclipse and update the documentation if needed.


> Update quickstart documentation
> ---
>
> Key: FLINK-5008
> URL: https://issues.apache.org/jira/browse/FLINK-5008
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Minor
>
> * The IDE setup documentation of Flink is outdated in both parts: the IntelliJ 
> IDEA instructions are based on an old version, and the Eclipse/Scala IDE setup 
> does not work at all anymore.
> * The example in the "Quickstart: Setup" guide is outdated and requires "." to 
> be in the path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (FLINK-3848) Add ProjectableTableSource interface and translation rule

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705337#comment-15705337
 ] 

ASF GitHub Bot commented on FLINK-3848:
---

Github user tonycox commented on a diff in the pull request:

https://github.com/apache/flink/pull/2810#discussion_r90010696
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/rules/dataSet/BatchProjectableTableSourceScanRule.scala ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.table.plan.rules.dataSet
+
+import org.apache.calcite.plan.RelOptRule._
+import org.apache.calcite.plan.{RelOptRule, RelOptRuleCall, RelTraitSet}
+import org.apache.calcite.rel.core.TableScan
+import org.apache.calcite.rel.logical.{LogicalProject, LogicalTableScan}
+import org.apache.calcite.rex.RexInputRef
+import org.apache.flink.api.table.plan.nodes.dataset.{BatchProjectableTableSourceScan, DataSetConvention}
+import org.apache.flink.api.table.plan.schema.TableSourceTable
+import org.apache.flink.api.table.sources.{BatchTableSource, ProjectableTableSource}
+
+import scala.collection.JavaConverters._
+
+/** Rule to convert a [[LogicalTableScan]] with [[LogicalProject]]
+  * into a [[BatchProjectableTableSourceScan]].
+  */
+class BatchProjectableTableSourceScanRule
+  extends RelOptRule(
+    operand(classOf[LogicalProject], operand(classOf[TableScan], none())),
+    "BatchProjectableTableSourceScanRule") {
+
+  /** Rule must only match if TableScan targets a [[BatchTableSource]],
+    * LogicalProject targets a [[ProjectableTableSource]] and all operands are [[RexInputRef]].
+    */
+  override def matches(call: RelOptRuleCall): Boolean = {
+    val project: LogicalProject = call.rel(0).asInstanceOf[LogicalProject]
+    val scan: TableScan = call.rel(1).asInstanceOf[TableScan]
+    val dataSetTable = scan.getTable.unwrap(classOf[TableSourceTable])
+    dataSetTable match {
+      case tst: TableSourceTable =>
+        tst.tableSource match {
+          case s: BatchTableSource[_] =>
+            s match {
+              case p: ProjectableTableSource[_] =>
+                project.getProjects.asScala.forall(_.isInstanceOf[RexInputRef])
+              case _ =>
+                false
+            }
+          case _ =>
+            false
+        }
+      case _ =>
+        false
+    }
+  }
+
+  override def onMatch(call: RelOptRuleCall): Unit = {
+    val project = call.rel(0).asInstanceOf[LogicalProject]
+    val scan: TableScan = call.rel(1).asInstanceOf[TableScan]
+
+    val convInput = RelOptRule.convert(scan, DataSetConvention.INSTANCE)
+    val traitSet: RelTraitSet = project.getTraitSet.replace(DataSetConvention.INSTANCE)
+
+    val newRel = new BatchProjectableTableSourceScan(
--- End diff --

How can I adapt the RexProgram in the same rule? 
Should I use `transformTo()` for `BatchProjectableTableSourceScan` and then for a 
custom `LogicalCalc`?


> Add ProjectableTableSource interface and translation rule
> -
>
> Key: FLINK-3848
> URL: https://issues.apache.org/jira/browse/FLINK-3848
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Anton Solovev
>
> Add a {{ProjectableTableSource}} interface for {{TableSource}} implementations 
> that support projection push-down.
> The interface could look as follows
> {code}
> trait ProjectableTableSource {
>   def setProjection(fields: Array[String]): Unit
> }
> {code}
> In addition we need Calcite rules to push a projection into a TableScan that 
> refers to a {{ProjectableTableSource}}. We might need to tweak the cost model 
> as well to push the optimizer in the right direction.
> Moreover, 
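
To make the proposed push-down concrete, here is a minimal, self-contained Java sketch of the idea: a source is handed a projection and afterwards materializes only the requested columns. Everything here except the `setProjection` signature taken from the proposal above is a hypothetical stand-in for illustration (`ToyCsvTableSource`, its field list, `readRow`), not Flink API.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

// Stand-in for the proposed interface; only the method shape mirrors the issue text.
interface ProjectableTableSource {
    void setProjection(String[] fields);
}

// Toy CSV-backed source that honours the projection when emitting rows.
class ToyCsvTableSource implements ProjectableTableSource {
    private final String[] allFields = {"id", "name", "score"};
    private String[] projection = allFields; // default: read everything

    @Override
    public void setProjection(String[] fields) {
        this.projection = fields;
    }

    /** Returns only the projected columns of one raw CSV line. */
    Map<String, String> readRow(String csvLine) {
        String[] values = csvLine.split(",");
        Map<String, String> row = new LinkedHashMap<>();
        for (String field : projection) {
            int idx = Arrays.asList(allFields).indexOf(field);
            row.put(field, values[idx]);
        }
        return row;
    }
}
```

A planner rule like the one discussed in this thread would call `setProjection` during optimization, so the source never deserializes the dropped columns at all.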


[jira] [Commented] (FLINK-5008) Update quickstart documentation

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705330#comment-15705330
 ] 

ASF GitHub Bot commented on FLINK-5008:
---

Github user vasia commented on the issue:

https://github.com/apache/flink/pull/2764
  
Sorry, no input from me regarding Eclipse. I gave up on it about a year ago ;)


> Update quickstart documentation
> ---
>
> Key: FLINK-5008
> URL: https://issues.apache.org/jira/browse/FLINK-5008
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Minor
>
> * The IDE setup documentation of Flink is outdated in both parts: the IntelliJ 
> IDEA instructions are based on an old version, and the Eclipse/Scala IDE setup 
> does not work at all anymore.
> * The example in the "Quickstart: Setup" guide is outdated and requires "." to 
> be in the path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Comment Edited] (FLINK-2435) Add support for custom CSV field parsers

2016-11-29 Thread Anton Mushin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15684046#comment-15684046
 ] 

Anton Mushin edited comment on FLINK-2435 at 11/29/16 12:49 PM:


Hi everyone,
is this issue still relevant?
bq.It would be good to add support for CSV field parsers for custom data types 
which can be registered in a CSVReader. 
What are the custom data types referred to here? 



was (Author: anmu):
Hi everyone,
is this issue still relevant?
bq.It would be good to add support for CSV field parsers for custom data types 
which can be registered in a CSVReader. 
What are the custom data types referred to here? Do you mean POJO types? Support 
for POJO and Tuple types has been added in FLINK-2692 already.


> Add support for custom CSV field parsers
> 
>
> Key: FLINK-2435
> URL: https://issues.apache.org/jira/browse/FLINK-2435
> Project: Flink
>  Issue Type: New Feature
>  Components: DataSet API
>Affects Versions: 0.10.0
>Reporter: Fabian Hueske
> Fix For: 1.0.0
>
>
> The {{CSVInputFormats}} have only {{FieldParsers}} for Java's primitive types 
> (byte, short, int, long, float, double, boolean, String).
> It would be good to add support for CSV field parsers for custom data types 
> which can be registered in a {{CSVReader}}. 
> We could offer two interfaces for field parsers.
> 1. The regular low-level {{FieldParser}} which operates on a byte array and 
> offsets.
> 2. A {{StringFieldParser}} which operates on a String that has been extracted 
> by a {{StringParser}} before. This interface will be easier to implement but 
> less efficient.
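
As a rough illustration of the second (String-based) variant, the sketch below registers one parser per target type and dispatches on it. `StringFieldParser`, `FieldParserRegistry`, and all method names here are hypothetical stand-ins for the idea, not Flink's actual `FieldParser` API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical String-based parser interface: operates on an already-extracted
// field String rather than on raw bytes and offsets.
interface StringFieldParser<T> {
    T parse(String raw);
}

// Hypothetical registry a CSVReader could consult for non-primitive field types.
class FieldParserRegistry {
    private final Map<Class<?>, StringFieldParser<?>> parsers = new HashMap<>();

    <T> void register(Class<T> type, StringFieldParser<T> parser) {
        parsers.put(type, parser);
    }

    @SuppressWarnings("unchecked")
    <T> T parseField(Class<T> type, String raw) {
        StringFieldParser<T> p = (StringFieldParser<T>) parsers.get(type);
        if (p == null) {
            throw new IllegalArgumentException("no parser registered for " + type);
        }
        return p.parse(raw);
    }
}
```

A CSVReader could fall back to the existing low-level byte-array parsers for primitive types and use such a registry only for custom types, trading some efficiency for ease of implementation, as the description suggests.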



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4798) CEPITCase.testSimpleKeyedPatternCEP test failure

2016-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705211#comment-15705211
 ] 

ASF GitHub Bot commented on FLINK-4798:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2843#discussion_r90001380
  
--- Diff: flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/CEPITCase.java ---
@@ -361,17 +351,18 @@ public String select(Map pattern) {
 			}
 		);
 
-		result.writeAsText(resultPath, FileSystem.WriteMode.OVERWRITE);
+		result.addSink(resultSink);
--- End diff --

I wasn't aware that it sits in flink-contrib; never mind my previous comment then.


> CEPITCase.testSimpleKeyedPatternCEP test failure
> 
>
> Key: FLINK-4798
> URL: https://issues.apache.org/jira/browse/FLINK-4798
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Boris Osipov
>  Labels: test-stability
>
> {code}
> ---
>  T E S T S
> ---
> Running org.apache.flink.cep.CEPITCase
> Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.627 sec <<< 
> FAILURE! - in org.apache.flink.cep.CEPITCase
> testSimpleKeyedPatternCEP(org.apache.flink.cep.CEPITCase)  Time elapsed: 
> 0.312 sec  <<< FAILURE!
> java.lang.AssertionError: Different number of lines in expected and obtained 
> result. expected:<3> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:316)
>   at 
> org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:302)
>   at org.apache.flink.cep.CEPITCase.after(CEPITCase.java:61)
> {code}
> in https://api.travis-ci.org/jobs/166676733/log.txt?deansi=true



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


  1   2   >