[jira] [Updated] (FLINK-11528) Translate the "Use Cases" page into Chinese

2019-03-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11528:
---
Labels: pull-request-available  (was: )

> Translate the "Use Cases" page into Chinese
> ---
>
> Key: FLINK-11528
> URL: https://issues.apache.org/jira/browse/FLINK-11528
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Project Website
>Reporter: Jark Wu
>Assignee: Xingcan Cui
>Priority: Major
>  Labels: pull-request-available
>
> Translate flink-web/usecases.zh.md into Chinese. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11908) Port window classes into flink-api-java

2019-03-13 Thread Hequn Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-11908:

Issue Type: Sub-task  (was: Improvement)
Parent: FLINK-11068

> Port window classes into flink-api-java
> ---
>
> Key: FLINK-11908
> URL: https://issues.apache.org/jira/browse/FLINK-11908
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Table SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As discussed in FLINK-11068, it is good to open a separate issue for porting 
> the window classes before opening a PR for the {{Table}} classes. This keeps 
> the PR smaller and thus easier to review.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11918) Deprecate Window and Rename it to GroupWindow

2019-03-13 Thread sunjincheng (JIRA)
sunjincheng created FLINK-11918:
---

 Summary: Deprecate Window and Rename it to GroupWindow
 Key: FLINK-11918
 URL: https://issues.apache.org/jira/browse/FLINK-11918
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.8.0
Reporter: sunjincheng


{{OverWindow}} and {{Window}} are confusing in the API, and it has been 
mentioned many times that we want to rename the latter to GroupWindow. So, here 
is just a suggestion: how about deprecating Window in release-1.8, since we need 
to create a new RC2 for release 1.8 anyway? If we do not do that, Window will 
keep existing for almost half a year. I created this JIRA and linked it to the 
release-1.8 vote mail thread to ask for the RM's opinion. If you do not agree, 
I'll close the JIRA; otherwise, we can open a new PR to deprecate Window. 
What do you think?
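
For illustration, a minimal sketch of what the deprecation could look like, 
assuming the layout from PR #7976 where {{GroupWindow}} temporarily extends 
{{Window}} (hypothetical code, not the final API):

{code:java}
package org.apache.flink.table.api;

import org.apache.flink.annotation.PublicEvolving;

// --- Window.java: kept only as a deprecated compatibility parent ---
/**
 * @deprecated Use {@link GroupWindow} instead. Kept temporarily for API
 *             compatibility; to be removed in a later release.
 */
@Deprecated
@PublicEvolving
public abstract class Window {
    // the existing group-window definition methods stay here for now
}

// --- GroupWindow.java: the new name going forward ---
/** A group window specification (replaces the deprecated {@link Window}). */
@PublicEvolving
public abstract class GroupWindow extends Window {
}
{code}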



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11068) Convert the API classes *Table, *Window to interfaces

2019-03-13 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792326#comment-16792326
 ] 

Hequn Cheng commented on FLINK-11068:
-

[~sunjincheng121] Thanks for your suggestions. I think it also makes sense. It 
makes it possible to remove the deprecated method earlier (in release-1.9) if we 
can deprecate it in 1.8. I can open another PR if we decide to do so. 
What do you guys think?

> Convert the API classes *Table, *Window to interfaces
> -
>
> Key: FLINK-11068
> URL: https://issues.apache.org/jira/browse/FLINK-11068
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Table SQL
>Reporter: Timo Walther
>Assignee: Hequn Cheng
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This includes: Table, GroupedTable, WindowedTable, WindowGroupedTable, 
> OverWindowedTable, Window, OverWindow
> We can keep the "Table" Scala implementation in a planner module until it has 
> been converted to Java.
> We can add a method to the planner later to give us a concrete instance. This 
> is one possibility to have a smooth transition period instead of changing all 
> classes at once.
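
For illustration, a rough sketch of the transition described above (hypothetical 
shapes, not the actual FLIP-32 code): the API module exposes only interfaces, 
while the planner keeps the concrete (initially Scala) implementations and hands 
out instances.

{code:java}
// flink-table-api-java: pure Java interfaces (hypothetical excerpt)
public interface Table {
    Table select(String fields);
    GroupedTable groupBy(String fields);
}

public interface GroupedTable {
    Table select(String fields);
}

// Planner side (can stay Scala until ported to Java):
//   class TableImpl(...) extends Table { ... }
// A planner factory method would return the concrete instance:
//   interface Planner { Table createTable(/* ... */); }   // hypothetical
{code}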



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] sunjincheng121 commented on a change in pull request #7976: [FLINK-11908] Port window classes into flink-api-java

2019-03-13 Thread GitBox
sunjincheng121 commented on a change in pull request #7976: [FLINK-11908] Port 
window classes into flink-api-java
URL: https://github.com/apache/flink/pull/7976#discussion_r265415615
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/GroupWindow.java
 ##
 @@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+/**
+ * A group window specification.
+ *
+ * Group windows group rows based on time or row-count intervals and are therefore essentially a
+ * special type of groupBy. Just like groupBy, group windows allow computing aggregates
+ * on groups of elements.
+ *
+ * Infinite streaming tables can only be grouped into time or row intervals. Hence window
+ * grouping is required to apply aggregations on streaming tables.
+ *
+ * For finite batch tables, group windows provide shortcuts for time-based groupBy.
+ *
+ * Note: {@link Window} is temporarily used as the parent class of {@link GroupWindow} for the
+ * sake of compatibility. It will be removed later.
 
 Review comment:
   Here is just a suggestion: how about we open a new PR to deprecate Window 
in release-1.8, since we need to create a new RC2 for release 1.8 anyway? If we 
do not do that, Window will keep existing for almost half a year. I'll create 
the JIRA FLINK-11918, link it to the release-1.8 vote mail thread, and ask for 
the RM's opinion. If you do not agree, I'll close the JIRA; otherwise, we can 
open the new PR to deprecate Window.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-11068) Convert the API classes *Table, *Window to interfaces

2019-03-13 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792316#comment-16792316
 ] 

sunjincheng edited comment on FLINK-11068 at 3/14/19 4:16 AM:
--

Hi [~hequn8128] [~twalthr], I also agree to rename `Window` to `GroupWindow`, 
and I have a soft spot for opening separate PRs. 

Here is just a suggestion: how about we open a new PR to deprecate Window in 
release-1.8, since we need to create a new RC2 for release 1.8 anyway? If we do 
not do that, Window will keep existing for almost half a year. I'll create the 
JIRA FLINK-11918, link it to the release-1.8 vote mail thread, and ask for the 
RM's opinion. If you do not agree, I'll close the JIRA; otherwise, we can open 
the new PR to deprecate Window.


was (Author: sunjincheng121):
Hi [~hequn8128] [~twalthr], I also agree to rename `Window` to `GroupWindow`, 
and I have a soft spot for opening separate PRs. 

Here is just a suggestion: how about we open a new PR to deprecate Window in 
release-1.8, since we need to create a new RC2 for release 1.8 anyway? If we do 
not do that, Window will keep existing for almost half a year. I'll create a 
JIRA, link it to the release-1.8 vote mail thread, and ask for the RM's opinion. 
If you do not agree, I'll close the JIRA; otherwise, we can open the new PR to 
deprecate Window.


What do you think?

> Convert the API classes *Table, *Window to interfaces
> -
>
> Key: FLINK-11068
> URL: https://issues.apache.org/jira/browse/FLINK-11068
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Table SQL
>Reporter: Timo Walther
>Assignee: Hequn Cheng
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This includes: Table, GroupedTable, WindowedTable, WindowGroupedTable, 
> OverWindowedTable, Window, OverWindow
> We can keep the "Table" Scala implementation in a planner module until it has 
> been converted to Java.
> We can add a method to the planner later to give us a concrete instance. This 
> is one possibility to have a smooth transition period instead of changing all 
> classes at once.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-11068) Convert the API classes *Table, *Window to interfaces

2019-03-13 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792316#comment-16792316
 ] 

sunjincheng edited comment on FLINK-11068 at 3/14/19 4:09 AM:
--

Hi [~hequn8128] [~twalthr], I also agree to rename `Window` to `GroupWindow`, 
and I have a soft spot for opening separate PRs. 

Here is just a suggestion: how about we open a new PR to deprecate Window in 
release-1.8, since we need to create a new RC2 for release 1.8 anyway? If we do 
not do that, Window will keep existing for almost half a year. I'll create a 
JIRA, link it to the release-1.8 vote mail thread, and ask for the RM's opinion. 
If you do not agree, I'll close the JIRA; otherwise, we can open the new PR to 
deprecate Window.


What do you think?


was (Author: sunjincheng121):
Hi [~hequn8128] [~twalthr], I also agree to rename `Window` to `GroupWindow`, 
and I have a soft spot for opening separate PRs. 

Here is just a suggestion: how about we open a new PR to deprecate Window in 
release-1.8, since we need to create a new RC2 for release 1.8 anyway?

What do you think?

> Convert the API classes *Table, *Window to interfaces
> -
>
> Key: FLINK-11068
> URL: https://issues.apache.org/jira/browse/FLINK-11068
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Table SQL
>Reporter: Timo Walther
>Assignee: Hequn Cheng
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This includes: Table, GroupedTable, WindowedTable, WindowGroupedTable, 
> OverWindowedTable, Window, OverWindow
> We can keep the "Table" Scala implementation in a planner module until it has 
> been converted to Java.
> We can add a method to the planner later to give us a concrete instance. This 
> is one possibility to have a smooth transition period instead of changing all 
> classes at once.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11068) Convert the API classes *Table, *Window to interfaces

2019-03-13 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792316#comment-16792316
 ] 

sunjincheng commented on FLINK-11068:
-

Hi [~hequn8128] [~twalthr], I also agree to rename `Window` to `GroupWindow`, 
and I have a soft spot for opening separate PRs. 

Here is just a suggestion: how about we open a new PR to deprecate Window in 
release-1.8, since we need to create a new RC2 for release 1.8 anyway?

What do you think?

> Convert the API classes *Table, *Window to interfaces
> -
>
> Key: FLINK-11068
> URL: https://issues.apache.org/jira/browse/FLINK-11068
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Table SQL
>Reporter: Timo Walther
>Assignee: Hequn Cheng
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This includes: Table, GroupedTable, WindowedTable, WindowGroupedTable, 
> OverWindowedTable, Window, OverWindow
> We can keep the "Table" Scala implementation in a planner module until it has 
> been converted to Java.
> We can add a method to the planner later to give us a concrete instance. This 
> is one possibility to have a smooth transition period instead of changing all 
> classes at once.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11909) Provide default failure/timeout/backoff handling strategy for AsyncIO functions

2019-03-13 Thread Rong Rong (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-11909:
--
Summary: Provide default failure/timeout/backoff handling strategy for 
AsyncIO functions  (was: Provide default failure/timeout handling strategy for 
AsyncIO functions)

> Provide default failure/timeout/backoff handling strategy for AsyncIO 
> functions
> ---
>
> Key: FLINK-11909
> URL: https://issues.apache.org/jira/browse/FLINK-11909
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> Currently, Flink AsyncIO by default fails the entire job when an async function 
> invocation fails [1]. It would be nice to have a default AsyncIO 
> failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
> timeout method to interact with the AsyncWaitOperator. For example (quoting 
> [~suez1224] in [2]):
> * FAIL_OPERATOR (default & current behavior)
> * FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
> * EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
> The discussion also extended to introducing configuration options such as: 
> * MAX_RETRY_COUNT
> * RETRY_FAILURE_POLICY
> REF:
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling
> [2] 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Backoff-strategies-for-async-IO-functions-tt26580.html
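
To make the proposal concrete, here is a hedged sketch of how the strategies and 
settings could be modeled; none of these names exist in Flink yet:

{code:java}
// Hypothetical API sketch mirroring the options quoted above.
public enum AsyncRetryStrategy {
    FAIL_OPERATOR,       // default & current behavior: fail the job
    FIX_INTERVAL_RETRY,  // retry with a fixed interval, up to N times
    EXP_BACKOFF_RETRY    // retry with exponential backoff, up to N times
}

public final class AsyncRetryConfig {
    private final AsyncRetryStrategy strategy;   // RETRY_FAILURE_POLICY
    private final int maxRetryCount;             // MAX_RETRY_COUNT
    private final long initialDelayMillis;       // base interval / first backoff step

    public AsyncRetryConfig(AsyncRetryStrategy strategy, int maxRetryCount, long initialDelayMillis) {
        this.strategy = strategy;
        this.maxRetryCount = maxRetryCount;
        this.initialDelayMillis = initialDelayMillis;
    }

    public AsyncRetryStrategy getStrategy() { return strategy; }
    public int getMaxRetryCount() { return maxRetryCount; }
    public long getInitialDelayMillis() { return initialDelayMillis; }
}
{code}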



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11909) Provide default failure/timeout handling strategy for AsyncIO functions

2019-03-13 Thread Rong Rong (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-11909:
--
Description: 
Currently, Flink AsyncIO by default fails the entire job when an async function 
invocation fails [1]. It would be nice to have a default AsyncIO 
failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
timeout method to interact with the AsyncWaitOperator. For example (quoting 
[~suez1224]):

* FAIL_OPERATOR (default & current behavior)
* FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
* EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)

The discussion also extended to introducing configuration options such as: 
* MAX_RETRY_COUNT
* RETRY_FAILURE_POLICY


REF:
[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling



  was:
Currently, Flink AsyncIO by default fails the entire job when an async function 
invocation fails [1]. It would be nice to have a default AsyncIO 
failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
timeout method to interact with the AsyncWaitOperator. For example (quoting 
[~suez1224]):

* FAIL_OPERATOR (default & current behavior)
* FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
* EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)


[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling


> Provide default failure/timeout handling strategy for AsyncIO functions
> ---
>
> Key: FLINK-11909
> URL: https://issues.apache.org/jira/browse/FLINK-11909
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> Currently, Flink AsyncIO by default fails the entire job when an async function 
> invocation fails [1]. It would be nice to have a default AsyncIO 
> failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
> timeout method to interact with the AsyncWaitOperator. For example (quoting 
> [~suez1224]):
> * FAIL_OPERATOR (default & current behavior)
> * FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
> * EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
> The discussion also extended to introducing configuration options such as: 
> * MAX_RETRY_COUNT
> * RETRY_FAILURE_POLICY
> REF:
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11909) Provide default failure/timeout handling strategy for AsyncIO functions

2019-03-13 Thread Rong Rong (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-11909:
--
Description: 
Currently, Flink AsyncIO by default fails the entire job when an async function 
invocation fails [1]. It would be nice to have a default AsyncIO 
failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
timeout method to interact with the AsyncWaitOperator. For example (quoting 
[~suez1224] in [2]):

* FAIL_OPERATOR (default & current behavior)
* FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
* EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)

The discussion also extended to introducing configuration options such as: 
* MAX_RETRY_COUNT
* RETRY_FAILURE_POLICY


REF:
[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling
[2] 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Backoff-strategies-for-async-IO-functions-tt26580.html



  was:
Currently, Flink AsyncIO by default fails the entire job when an async function 
invocation fails [1]. It would be nice to have a default AsyncIO 
failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
timeout method to interact with the AsyncWaitOperator. For example (quoting 
[~suez1224]):

* FAIL_OPERATOR (default & current behavior)
* FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
* EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)

The discussion also extended to introducing configuration options such as: 
* MAX_RETRY_COUNT
* RETRY_FAILURE_POLICY


REF:
[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling
[2] 




> Provide default failure/timeout handling strategy for AsyncIO functions
> ---
>
> Key: FLINK-11909
> URL: https://issues.apache.org/jira/browse/FLINK-11909
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> Currently, Flink AsyncIO by default fails the entire job when an async function 
> invocation fails [1]. It would be nice to have a default AsyncIO 
> failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
> timeout method to interact with the AsyncWaitOperator. For example (quoting 
> [~suez1224] in [2]):
> * FAIL_OPERATOR (default & current behavior)
> * FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
> * EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
> The discussion also extended to introducing configuration options such as: 
> * MAX_RETRY_COUNT
> * RETRY_FAILURE_POLICY
> REF:
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling
> [2] 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Backoff-strategies-for-async-IO-functions-tt26580.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11909) Provide default failure/timeout handling strategy for AsyncIO functions

2019-03-13 Thread Rong Rong (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-11909:
--
Description: 
Currently, Flink AsyncIO by default fails the entire job when an async function 
invocation fails [1]. It would be nice to have a default AsyncIO 
failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
timeout method to interact with the AsyncWaitOperator. For example (quoting 
[~suez1224]):

* FAIL_OPERATOR (default & current behavior)
* FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
* EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)

The discussion also extended to introducing configuration options such as: 
* MAX_RETRY_COUNT
* RETRY_FAILURE_POLICY


REF:
[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling
[2] 



  was:
Currently, Flink AsyncIO by default fails the entire job when an async function 
invocation fails [1]. It would be nice to have a default AsyncIO 
failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
timeout method to interact with the AsyncWaitOperator. For example (quoting 
[~suez1224]):

* FAIL_OPERATOR (default & current behavior)
* FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
* EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)

The discussion also extended to introducing configuration options such as: 
* MAX_RETRY_COUNT
* RETRY_FAILURE_POLICY


REF:
[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling




> Provide default failure/timeout handling strategy for AsyncIO functions
> ---
>
> Key: FLINK-11909
> URL: https://issues.apache.org/jira/browse/FLINK-11909
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> Currently, Flink AsyncIO by default fails the entire job when an async function 
> invocation fails [1]. It would be nice to have a default AsyncIO 
> failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
> timeout method to interact with the AsyncWaitOperator. For example (quoting 
> [~suez1224]):
> * FAIL_OPERATOR (default & current behavior)
> * FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
> * EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
> The discussion also extended to introducing configuration options such as: 
> * MAX_RETRY_COUNT
> * RETRY_FAILURE_POLICY
> REF:
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling
> [2] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11909) Provide default failure/timeout handling strategy for AsyncIO functions

2019-03-13 Thread Rong Rong (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong reassigned FLINK-11909:
-

Assignee: Rong Rong

> Provide default failure/timeout handling strategy for AsyncIO functions
> ---
>
> Key: FLINK-11909
> URL: https://issues.apache.org/jira/browse/FLINK-11909
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> Currently, Flink AsyncIO by default fails the entire job when an async function 
> invocation fails [1]. It would be nice to have a default AsyncIO 
> failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
> timeout method to interact with the AsyncWaitOperator. For example (quoting 
> [~suez1224]):
> * FAIL_OPERATOR (default & current behavior)
> * FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
> * EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11909) Provide default failure/timeout handling strategy for AsyncIO functions

2019-03-13 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-11909:


Assignee: (was: vinoyang)

> Provide default failure/timeout handling strategy for AsyncIO functions
> ---
>
> Key: FLINK-11909
> URL: https://issues.apache.org/jira/browse/FLINK-11909
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Rong Rong
>Priority: Major
>
> Currently, Flink AsyncIO by default fails the entire job when an async function 
> invocation fails [1]. It would be nice to have a default AsyncIO 
> failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
> timeout method to interact with the AsyncWaitOperator. For example (quoting 
> [~suez1224]):
> * FAIL_OPERATOR (default & current behavior)
> * FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
> * EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] KurtYoung commented on a change in pull request #7958: [FLINK-11881][table-planner-blink] Introduce code generated typed sort to blink table

2019-03-13 Thread GitBox
KurtYoung commented on a change in pull request #7958: 
[FLINK-11881][table-planner-blink] Introduce code generated typed sort to blink 
table
URL: https://github.com/apache/flink/pull/7958#discussion_r265407720
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/generated/RecordComparator.java
 ##
 @@ -33,14 +32,6 @@
 
 
 Review comment:
   use interface instead of abstract class?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #7958: [FLINK-11881][table-planner-blink] Introduce code generated typed sort to blink table

2019-03-13 Thread GitBox
KurtYoung commented on a change in pull request #7958: 
[FLINK-11881][table-planner-blink] Introduce code generated typed sort to blink 
table
URL: https://github.com/apache/flink/pull/7958#discussion_r265407794
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/generated/NormalizedKeyComputer.java
 ##
 @@ -16,27 +16,18 @@
  * limitations under the License.
  */
 
-package org.apache.flink.table.runtime.sort;
+package org.apache.flink.table.generated;
 
-import org.apache.flink.api.common.typeutils.TypeComparator;
-import org.apache.flink.api.common.typeutils.TypeSerializer;
 import org.apache.flink.core.memory.MemorySegment;
 import org.apache.flink.table.dataformat.BaseRow;
+import org.apache.flink.table.runtime.sort.BinaryInMemorySortBuffer;
 
 /**
  * Normalized key computer for {@link BinaryInMemorySortBuffer}.
  * For performance, subclasses are usually implemented through CodeGenerator.
  */
 public abstract class NormalizedKeyComputer {
 
 Review comment:
   use interface?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11909) Provide default failure/timeout handling strategy for AsyncIO functions

2019-03-13 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792304#comment-16792304
 ] 

vinoyang commented on FLINK-11909:
--

[~walterddr] oh... sorry, I will release this issue.

> Provide default failure/timeout handling strategy for AsyncIO functions
> ---
>
> Key: FLINK-11909
> URL: https://issues.apache.org/jira/browse/FLINK-11909
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Rong Rong
>Assignee: vinoyang
>Priority: Major
>
> Currently, Flink AsyncIO by default fails the entire job when an async function 
> invocation fails [1]. It would be nice to have a default AsyncIO 
> failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
> timeout method to interact with the AsyncWaitOperator. For example (quoting 
> [~suez1224]):
> * FAIL_OPERATOR (default & current behavior)
> * FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
> * EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11909) Provide default failure/timeout handling strategy for AsyncIO functions

2019-03-13 Thread Rong Rong (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792294#comment-16792294
 ] 

Rong Rong commented on FLINK-11909:
---

[~yanghua] Hi Vino, we haven't reached an agreement on a target yet. Do you mind 
if I drive this effort? 

> Provide default failure/timeout handling strategy for AsyncIO functions
> ---
>
> Key: FLINK-11909
> URL: https://issues.apache.org/jira/browse/FLINK-11909
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Rong Rong
>Assignee: vinoyang
>Priority: Major
>
> Currently, Flink AsyncIO by default fails the entire job when an async function 
> invocation fails [1]. It would be nice to have a default AsyncIO 
> failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
> timeout method to interact with the AsyncWaitOperator. For example (quoting 
> [~suez1224]):
> * FAIL_OPERATOR (default & current behavior)
> * FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
> * EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11917) Allow state schema migration from Kryo to POJO / Avro

2019-03-13 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-11917:
---

 Summary: Allow state schema migration from Kryo to POJO / Avro
 Key: FLINK-11917
 URL: https://issues.apache.org/jira/browse/FLINK-11917
 Project: Flink
  Issue Type: Improvement
  Components: API / Type Serialization System
Reporter: Tzu-Li (Gordon) Tai


In Flink, it has been commonly advertised that users should try to avoid Kryo 
for state serialization, since it doesn't work well out of the box for schema 
evolution; Kryo wasn't designed with that in mind in the first place.

In light of this, Flink should provide a migration path for state that was 
serialized by default with the {{KryoSerializer}} to other serializers that 
now support better schema evolution capabilities, such as {{PojoSerializer}} 
and {{AvroSerializer}}.

Essentially, what this means is that in the {{KryoSerializerSnapshot}} class's 
{{resolveSchemaCompatibility}} method, we identify if the new serializer is 
either {{PojoSerializer}} or {{AvroSerializer}}; if so, we return 
{{TypeSerializerSchemaCompatibility.compatibleAfterMigration()}} as the result.

This would allow users to simply upgrade their state types to be 
Avro-generated {{SpecificRecord}} or a qualified POJO.
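
For illustration, a minimal sketch of the described check (simplified; the real 
method must also keep its existing Kryo-to-Kryo compatibility logic, and 
{{resolveKryoCompatibility}} below is a hypothetical placeholder):

{code:java}
// Inside KryoSerializerSnapshot<T> (sketch, not the actual implementation):
@Override
public TypeSerializerSchemaCompatibility<T> resolveSchemaCompatibility(TypeSerializer<T> newSerializer) {
    if (newSerializer instanceof PojoSerializer || newSerializer instanceof AvroSerializer) {
        // state written with Kryo can be read once and rewritten with the new serializer
        return TypeSerializerSchemaCompatibility.compatibleAfterMigration();
    }
    // otherwise fall back to the existing Kryo compatibility checks
    return resolveKryoCompatibility(newSerializer); // hypothetical helper
}
{code}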



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] chummyhe89 commented on issue #5394: [FLINK-6571][tests] Catch InterruptedException in StreamSourceOperato…

2019-03-13 Thread GitBox
chummyhe89 commented on issue #5394: [FLINK-6571][tests] Catch 
InterruptedException in StreamSourceOperato…
URL: https://github.com/apache/flink/pull/5394#issuecomment-472685821
 
 
   Does this resolve the problem? 
   ```java
   @Override
   public void run(SourceContext<T> ctx) throws Exception {
       boolean interrupted = false;
       try {
           while (running) {
               try {
                   Thread.sleep(20);
               } catch (InterruptedException ignored) {
                   // remember the interrupt instead of swallowing it
                   interrupted = true;
               }
           }
       } finally {
           if (interrupted) {
               // restore the thread's interrupt status before leaving run()
               Thread.currentThread().interrupt();
           }
       }
   }
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #7981: [FLINK-11905][table-runtime-blink] Fix BlockCompressionTest does not compile with Java 9

2019-03-13 Thread GitBox
flinkbot commented on issue #7981: [FLINK-11905][table-runtime-blink] Fix 
BlockCompressionTest does not compile with Java 9
URL: https://github.com/apache/flink/pull/7981#issuecomment-472684588
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung opened a new pull request #7981: [FLINK-11905][table-runtime-blink] Fix BlockCompressionTest does not compile with Java 9

2019-03-13 Thread GitBox
KurtYoung opened a new pull request #7981: [FLINK-11905][table-runtime-blink] 
Fix BlockCompressionTest does not compile with Java 9
URL: https://github.com/apache/flink/pull/7981
 
 
   
   
   ## What is the purpose of the change
   Fix BlockCompressionTest does not compile with Java 9
   
   ## Brief change log
   
 - remove the unnecessary dependency on `sun.misc.Cleaner`.
   
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
   All: no.
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11905) BlockCompressionTest does not compile with Java 9

2019-03-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11905:
---
Labels: blink pull-request-available  (was: blink)

> BlockCompressionTest does not compile with Java 9
> -
>
> Key: FLINK-11905
> URL: https://issues.apache.org/jira/browse/FLINK-11905
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Operators, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Kurt Young
>Priority: Major
>  Labels: blink, pull-request-available
>
> [https://travis-ci.org/apache/flink/builds/505693580?utm_source=slack_medium=notification]
>  
> {code:java}
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] COMPILATION ERROR : 
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[23,16]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: package sun.misc
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[24,15]
>  package sun.nio.ch is not visible
>   (package sun.nio.ch is declared in module java.base, which does not export 
> it to the unnamed module)
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[187,17]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: class 
> org.apache.flink.table.runtime.compression.BlockCompressionTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] tzulitai commented on issue #7979: [FLINK-11911] Make KafkaTopicPartition a valid POJO

2019-03-13 Thread GitBox
tzulitai commented on issue #7979: [FLINK-11911] Make KafkaTopicPartition a 
valid POJO
URL: https://github.com/apache/flink/pull/7979#issuecomment-472684022
 
 
   @flinkbot disapprove consensus
   
   The main problem with this change is that this breaks state compatibility 
with previous savepoints.
   The `KafkaTopicPartition` is currently part of the state of 
`FlinkKafkaConsumer`.
   Changing that to be a POJO now essentially means that we want to change its 
serializer from `KryoSerializer` (since the class was not identified as a POJO, 
Flink defaults to Kryo for its serialization) to a `PojoSerializer`.
   
   The essential change this requires in the serialization stack is that the 
`KryoSerializer` needs to be able to detect that the new serializer on restore 
is a `PojoSerializer`, and allows the change by first performing state schema 
migration.
   
   I suggest addressing that first before coming back to this PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-11916) Join with a Temporal Table should throw exception for left join

2019-03-13 Thread Hequn Cheng (JIRA)
Hequn Cheng created FLINK-11916:
---

 Summary: Join with a Temporal Table should throw exception for 
left join
 Key: FLINK-11916
 URL: https://issues.apache.org/jira/browse/FLINK-11916
 Project: Flink
  Issue Type: Bug
  Components: API / Table SQL
Reporter: Hequn Cheng


In {{TemporalJoinITCase.testProcessTimeInnerJoin}}, if we change the inner join 
to a left join, the test still works fine. We may need to throw an exception if 
we only support inner join.

CC [~pnowojski]

The problem can be reproduced with the following SQL:
{code:java}
@Test
def testEventTimeInnerJoin(): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  val tEnv = StreamTableEnvironment.create(env)
  env.setStateBackend(getStateBackend)
  StreamITCase.clear
  env.setParallelism(1)
  env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

  val sqlQuery =
    """
      |SELECT
      |  o.amount * r.rate AS amount
      |FROM
      |  Orders AS o left join
      |  LATERAL TABLE (Rates(o.rowtime)) AS r on true
      |WHERE r.currency = o.currency
      |""".stripMargin

  val ordersData = new mutable.MutableList[(Long, String, Timestamp)]
  ordersData.+=((2L, "Euro", new Timestamp(2L)))
  ordersData.+=((1L, "US Dollar", new Timestamp(3L)))
  ordersData.+=((50L, "Yen", new Timestamp(4L)))
  ordersData.+=((3L, "Euro", new Timestamp(5L)))

  val ratesHistoryData = new mutable.MutableList[(String, Long, Timestamp)]
  ratesHistoryData.+=(("US Dollar", 102L, new Timestamp(1L)))
  ratesHistoryData.+=(("Euro", 114L, new Timestamp(1L)))
  ratesHistoryData.+=(("Yen", 1L, new Timestamp(1L)))
  ratesHistoryData.+=(("Euro", 116L, new Timestamp(5L)))
  ratesHistoryData.+=(("Euro", 119L, new Timestamp(7L)))

  var expectedOutput = new mutable.HashSet[String]()
  expectedOutput += (2 * 114).toString
  expectedOutput += (3 * 116).toString

  val orders = env
    .fromCollection(ordersData)
    .assignTimestampsAndWatermarks(new TimestampExtractor[Long, String]())
    .toTable(tEnv, 'amount, 'currency, 'rowtime.rowtime)
  val ratesHistory = env
    .fromCollection(ratesHistoryData)
    .assignTimestampsAndWatermarks(new TimestampExtractor[String, Long]())
    .toTable(tEnv, 'currency, 'rate, 'rowtime.rowtime)

  tEnv.registerTable("Orders", orders)
  tEnv.registerTable("RatesHistory", ratesHistory)
  tEnv.registerTable("FilteredRatesHistory",
    tEnv.scan("RatesHistory").filter('rate > 110L))
  tEnv.registerFunction(
    "Rates",
    tEnv.scan("FilteredRatesHistory").createTemporalTableFunction('rowtime, 'currency))
  tEnv.registerTable("TemporalJoinResult", tEnv.sqlQuery(sqlQuery))

  // Scan from registered table to test for interplay between
  // LogicalCorrelateToTemporalTableJoinRule and TableScanRule
  val result = tEnv.scan("TemporalJoinResult").toAppendStream[Row]
  result.addSink(new StreamITCase.StringSink[Row])
  env.execute()

  assertEquals(expectedOutput, StreamITCase.testResults.toSet)
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11905) BlockCompressionTest does not compile with Java 9

2019-03-13 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792277#comment-16792277
 ] 

Kurt Young commented on FLINK-11905:


I will take a look and fix this. Thanks for reporting it, [~Zentol].

> BlockCompressionTest does not compile with Java 9
> -
>
> Key: FLINK-11905
> URL: https://issues.apache.org/jira/browse/FLINK-11905
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Operators, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Kurt Young
>Priority: Major
>  Labels: blink
>
> [https://travis-ci.org/apache/flink/builds/505693580?utm_source=slack_medium=notification]
>  
> {code:java}
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] COMPILATION ERROR : 
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[23,16]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: package sun.misc
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[24,15]
>  package sun.nio.ch is not visible
>   (package sun.nio.ch is declared in module java.base, which does not export 
> it to the unnamed module)
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[187,17]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: class 
> org.apache.flink.table.runtime.compression.BlockCompressionTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11733) Provide HadoopMapFunction for org.apache.hadoop.mapreduce.Mapper

2019-03-13 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792272#comment-16792272
 ] 

vinoyang commented on FLINK-11733:
--

[~fhueske] Of course, if there is another choice, I would like to avoid 
reflection, but I did not figure one out. So I will just listen to your opinion.

> Provide HadoopMapFunction for org.apache.hadoop.mapreduce.Mapper
> 
>
> Key: FLINK-11733
> URL: https://issues.apache.org/jira/browse/FLINK-11733
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hadoop Compatibility
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> Currently, Flink only supports 
> {{org.apache.flink.hadoopcompatibility.mapred.Mapper}} in the module 
> flink-hadoop-compatibility. I think we also need to support Hadoop's new Mapper 
> API: {{org.apache.hadoop.mapreduce.Mapper}}. We can implement a new 
> {{HadoopMapFunction}} to wrap {{org.apache.hadoop.mapreduce.Mapper}}.
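
For illustration, a rough sketch of what such a wrapper could look like 
(hypothetical code, not an actual Flink class; serialization of the mapper is 
glossed over):

{code:java}
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;
import org.apache.hadoop.mapreduce.Mapper;

public class HadoopMapFunction<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
        extends RichFlatMapFunction<Tuple2<KEYIN, VALUEIN>, Tuple2<KEYOUT, VALUEOUT>> {

    private final Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> mapper;

    public HadoopMapFunction(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> mapper) {
        this.mapper = mapper;
    }

    @Override
    public void flatMap(Tuple2<KEYIN, VALUEIN> value,
            Collector<Tuple2<KEYOUT, VALUEOUT>> out) throws Exception {
        // Mapper.map(...) is protected, so the public entry point is Mapper.run(Context).
        // A Context implementation would serve this single key/value pair through
        // nextKeyValue()/getCurrentKey()/getCurrentValue() and forward context.write(k, v)
        // to out.collect(new Tuple2<>(k, v)).
        mapper.run(createForwardingContext(value, out));
    }

    // Building this Context is the crux: Mapper.Context is an abstract inner class,
    // which is where the reflection discussion above comes from.
    private Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context createForwardingContext(
            Tuple2<KEYIN, VALUEIN> value, Collector<Tuple2<KEYOUT, VALUEOUT>> out) {
        throw new UnsupportedOperationException("sketch only");
    }
}
{code}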



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11909) Provide default failure/timeout handling strategy for AsyncIO functions

2019-03-13 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-11909:


Assignee: vinoyang

> Provide default failure/timeout handling strategy for AsyncIO functions
> ---
>
> Key: FLINK-11909
> URL: https://issues.apache.org/jira/browse/FLINK-11909
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Rong Rong
>Assignee: vinoyang
>Priority: Major
>
> Currently, Flink AsyncIO by default fails the entire job when an async function 
> invocation fails [1]. It would be nice to have a default AsyncIO 
> failure/timeout handling strategy, or to open up APIs for the AsyncFunction 
> timeout method to interact with the AsyncWaitOperator. For example (quoting 
> [~suez1224]):
> * FAIL_OPERATOR (default & current behavior)
> * FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N times)
> * EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11905) BlockCompressionTest does not compile with Java 9

2019-03-13 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young reassigned FLINK-11905:
--

Assignee: Kurt Young

> BlockCompressionTest does not compile with Java 9
> -
>
> Key: FLINK-11905
> URL: https://issues.apache.org/jira/browse/FLINK-11905
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Operators, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Kurt Young
>Priority: Major
>  Labels: blink
>
> [https://travis-ci.org/apache/flink/builds/505693580?utm_source=slack_medium=notification]
>  
> {code:java}
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] COMPILATION ERROR : 
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[23,16]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: package sun.misc
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[24,15]
>  package sun.nio.ch is not visible
>   (package sun.nio.ch is declared in module java.base, which does not export 
> it to the unnamed module)
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[187,17]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: class 
> org.apache.flink.table.runtime.compression.BlockCompressionTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11915) DataInputViewStream skip returns wrong value

2019-03-13 Thread Andrew Prudhomme (JIRA)
Andrew Prudhomme created FLINK-11915:


 Summary: DataInputViewStream skip returns wrong value
 Key: FLINK-11915
 URL: https://issues.apache.org/jira/browse/FLINK-11915
 Project: Flink
  Issue Type: Bug
Reporter: Andrew Prudhomme


The flink-core class org.apache.flink.api.java.typeutils.runtime.DataInputViewStream 
overrides the InputStream skip function. This function should return the 
actual number of bytes skipped, but there is a bug which makes it return a 
lower value.

The fix should be something simple like:
{code:java}
-  return n - counter - inputView.skipBytes((int) counter);
+  return n - (counter - inputView.skipBytes((int) counter));
{code}
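To see the bug with concrete numbers (assuming {{counter}} holds the bytes still 
to be skipped and {{skipBytes}} returns how many of them it actually skipped):
{code:java}
// n = 10 requested, counter = 8 remaining, skipBytes skips all 8:
//   old: 10 - 8 - 8   = -6   (nonsensical negative result)
//   new: 10 - (8 - 8) = 10   (all requested bytes were skipped)
// if skipBytes only manages 5 of the 8:
//   old: 10 - 8 - 5   = -3
//   new: 10 - (8 - 5) =  7   (the actual number of bytes skipped)
{code}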
For context, I ran into this when trying to decode an Avro record where the 
writer schema had fields not present in the reader schema. The decoder would 
attempt to skip the unneeded data in the stream, but would throw an 
EOFException because the return value was wrong.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11914) Expose a REST endpoint in JobManager to disconnect specific TaskManager

2019-03-13 Thread Shuyi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792207#comment-16792207
 ] 

Shuyi Chen commented on FLINK-11914:


Hi [~Zentol], [~trohrm...@apache.org], [~gyao], what do you think? Thanks a lot.

> Expose a REST endpoint in JobManager to disconnect specific TaskManager
> ---
>
> Key: FLINK-11914
> URL: https://issues.apache.org/jira/browse/FLINK-11914
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> We want to add the capability in the Flink web UI to kill each individual TM by 
> clicking a button; this would first require exposing the capability from the 
> REST API endpoint. The reason is that some TM might be running on a heavily 
> loaded YARN host over time, and we want to kill just that TM and have the Flink 
> JM reallocate a TM to restart the job graph. The other approach would be to 
> restart the entire YARN job, which is heavyweight.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-11912) Expose per partition Kafka lag metric in Flink Kafka connector

2019-03-13 Thread Shuyi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792155#comment-16792155
 ] 

Shuyi Chen edited comment on FLINK-11912 at 3/14/19 12:24 AM:
--

Hi [~tzulitai], I've attached a proposed tentative change (experimental) 
[here|https://github.com/apache/flink/commit/acaa46fdae6d1b3ba89caaef94ab6547be3688ea],
 could you please take a look and let me know if this is the right approach? 
Thanks a lot. 


was (Author: suez1224):
Hi [~tzulitai], I've attached a proposed tentative change (experimental) 
[here|https://github.com/apache/flink/commit/c37394acc01ea5a0c4e2681319ecbfaa63beead3],
 could you please take a look and let me know if this is the right approach? 
Thanks a lot. 

> Expose per partition Kafka lag metric in Flink Kafka connector
> --
>
> Key: FLINK-11912
> URL: https://issues.apache.org/jira/browse/FLINK-11912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Affects Versions: 1.6.4, 1.7.2
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> In production, it's important that we expose the Kafka lag by partition 
> metric in order for users to diagnose which Kafka partition is lagging. 
> However, although the Kafka lag by partition metrics are available in 
> KafkaConsumer after 0.10.2, Flink was not able to properly register them, 
> because the metrics are only available after the consumer starts polling data 
> from partitions. I would suggest the following fix:
> 1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
> 2) In the fetch loop, as KafkaConsumer discovers new partitions, manually add 
> the MetricName for those partitions that we want to register into 
> manualRegisteredMetricSet. 
> 3) In the fetch loop, check if manualRegisteredMetricSet is empty. If not, 
> try to search for the metrics available in KafkaConsumer, and if found, 
> register them and remove the entry from manualRegisteredMetricSet. 
> The overhead of the above approach is bounded and is only incurred when 
> discovering new partitions, and registration is done once the KafkaConsumer 
> has the metrics exposed.
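
A hedged sketch of steps 1)-3) above ({{manualRegisteredMetricSet}} mirrors the 
description; {{newlyDiscoveredPartitions}}, {{lagMetricNameFor}}, and the gauge 
wiring are assumptions):

{code:java}
// Inside KafkaConsumerThread.run() (sketch only; surrounding fields assumed):
Set<MetricName> manualRegisteredMetricSet = new HashSet<>();   // step 1)

while (running) {
    ConsumerRecords<byte[], byte[]> records = consumer.poll(pollTimeout);

    // step 2): on partition discovery, remember which lag metrics still need registering
    for (TopicPartition tp : newlyDiscoveredPartitions) {       // hypothetical variable
        manualRegisteredMetricSet.add(lagMetricNameFor(tp));    // hypothetical helper
    }

    // step 3): register pending metrics once the KafkaConsumer actually exposes them
    if (!manualRegisteredMetricSet.isEmpty()) {
        Map<MetricName, ? extends Metric> kafkaMetrics = consumer.metrics();
        manualRegisteredMetricSet.removeIf(name -> {
            Metric metric = kafkaMetrics.get(name);
            if (metric == null) {
                return false;                       // not exposed yet, retry next loop
            }
            flinkMetricGroup.gauge(name.name(), (Gauge<Object>) metric::metricValue);
            return true;                            // registered, drop from pending set
        });
    }
    // ... hand the records over to the fetcher ...
}
{code}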



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-11912) Expose per partition Kafka lag metric in Flink Kafka connector

2019-03-13 Thread Shuyi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792155#comment-16792155
 ] 

Shuyi Chen edited comment on FLINK-11912 at 3/14/19 12:22 AM:
--

Hi [~tzulitai], I've attached a proposed tentative change (experimental) 
[here|https://github.com/apache/flink/commit/c37394acc01ea5a0c4e2681319ecbfaa63beead3],
 could you please take a look and let me know if this is the right approach? 
Thanks a lot. 


was (Author: suez1224):
Hi [~tzulitai], I've attached a proposed tentative change (experimental) 
[here|https://github.com/apache/flink/commit/094135efcadf5c0ddb47eabd66091e20d26d1417],
 could you please take a look and let me know if this is the right approach? 
Thanks a lot. 

> Expose per partition Kafka lag metric in Flink Kafka connector
> --
>
> Key: FLINK-11912
> URL: https://issues.apache.org/jira/browse/FLINK-11912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Affects Versions: 1.6.4, 1.7.2
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> In production, it's important that we expose the Kafka lag by partition 
> metric in order for users to diagnose which Kafka partition is lagging. 
> However, although the Kafka lag by partition metrics are available in 
> KafkaConsumer after 0.10.2, Flink was not able to properly register them, 
> because the metrics are only available after the consumer starts polling data 
> from partitions. I would suggest the following fix:
> 1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
> 2) In the fetch loop, as KafkaConsumer discovers new partitions, manually add 
> the MetricName for those partitions that we want to register into 
> manualRegisteredMetricSet. 
> 3) In the fetch loop, check if manualRegisteredMetricSet is empty. If not, 
> try to search for the metrics available in KafkaConsumer, and if found, 
> register them and remove the entry from manualRegisteredMetricSet. 
> The overhead of the above approach is bounded and is only incurred when 
> discovering new partitions, and registration is done once the KafkaConsumer 
> has the metrics exposed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot commented on issue #7980: [FLINK-11913] Shading cassandra driver dependencies in cassandra connector

2019-03-13 Thread GitBox
flinkbot commented on issue #7980: [FLINK-11913] Shading cassandra driver 
dependencies in cassandra connector
URL: https://github.com/apache/flink/pull/7980#issuecomment-472641520
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11913) Cassandra connector should shade its usage of cassandra-driver classes to avoid conflicts

2019-03-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11913:
---
Labels: cassandra pull-request-available  (was: cassandra)

> Cassandra connector should shade its usage of cassandra-driver classes to 
> avoid conflicts
> --
>
> Key: FLINK-11913
> URL: https://issues.apache.org/jira/browse/FLINK-11913
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Cassandra
>Affects Versions: 1.7.2
>Reporter: Gustavo Momenté
>Priority: Major
>  Labels: cassandra, pull-request-available
>
> The Cassandra connector has some dependencies that need to be available and 
> should not conflict with other user or system code. It should thus shade all 
> its dependencies and become self-contained.
> A simple example would be when an application reads data from Cassandra using 
> the javax driver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] gustavo-momente opened a new pull request #7980: [FLINK-11913] Shading cassandra driver dependencies in cassandra connector

2019-03-13 Thread GitBox
gustavo-momente opened a new pull request #7980: [FLINK-11913] Shading 
cassandra driver dependencies in cassandra connector
URL: https://github.com/apache/flink/pull/7980
 
 
   ## What is the purpose of the change
   
   The Cassandra connector has some dependencies that need to be available and 
should not conflict with other user or system code. It should thus shade all 
its dependencies and become self-contained.
   
   A simple example would be when an application reads data from Cassandra using 
the javax driver.
   
   
   ## Brief change log
   
 - Shaded dependencies of `cassandra-driver-core`
 - Shaded dependencies of `cassandra-driver-mapping`
   
   ## Verifying this change
   
   In the shaded jar files, verify that the cassandra-driver `.class` files are 
found in the `org.apache.flink.cassandra.shaded.com.datastax.driver` package 
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:  no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
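
As a hedged illustration of the verification step, the following Java snippet
(hypothetical, not part of this PR) loads a driver class by its relocated name;
the relocated package comes from the PR description above, while the concrete
class chosen here (`Cluster`) is an assumption for illustration.

```java
public class ShadedDriverCheck {
    public static void main(String[] args) throws Exception {
        // Fails with ClassNotFoundException if the relocation did not happen.
        Class<?> clazz = Class.forName(
            "org.apache.flink.cassandra.shaded.com.datastax.driver.core.Cluster");
        System.out.println("Found relocated driver class: " + clazz.getName());
    }
}
```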


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-11914) Expose a REST endpoint in JobManager to disconnect specific TaskManager

2019-03-13 Thread Shuyi Chen (JIRA)
Shuyi Chen created FLINK-11914:
--

 Summary: Expose a REST endpoint in JobManager to disconnect 
specific TaskManager
 Key: FLINK-11914
 URL: https://issues.apache.org/jira/browse/FLINK-11914
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / REST
Reporter: Shuyi Chen
Assignee: Shuyi Chen


We want to add the capability in the Flink web UI to kill each individual TM by 
clicking a button; this would first require exposing the capability from the 
REST API endpoint. The reason is that some TMs might be running on a heavily 
loaded YARN host over time, and we want to kill just that TM and have the Flink 
JM reallocate a TM to restart the job graph. The other approach would be to 
restart the entire YARN job, which is heavyweight.
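
A minimal sketch of what a client call against such an endpoint could look
like; the path `/taskmanagers/<id>`, the host/port, and the DELETE verb are all
assumptions, since the endpoint does not exist yet.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class DisconnectTaskManager {
    public static void main(String[] args) throws Exception {
        String taskManagerId = args[0]; // TaskManager resource ID
        // Hypothetical endpoint; actual path and verb are to be designed.
        URL url = new URL("http://jobmanager:8081/taskmanagers/" + taskManagerId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("DELETE"); // assumed verb for "disconnect"
        System.out.println("Response code: " + conn.getResponseCode());
        conn.disconnect();
    }
}
```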



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-11912) Expose per partition Kafka lag metric in Flink Kafka connector

2019-03-13 Thread Shuyi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792155#comment-16792155
 ] 

Shuyi Chen edited comment on FLINK-11912 at 3/13/19 10:36 PM:
--

Hi [~tzulitai], I've attached a proposed tentative change (experimental) 
[here|https://github.com/apache/flink/commit/094135efcadf5c0ddb47eabd66091e20d26d1417],
 could you please take a look and let me know if this is the right approach? 
Thanks a lot. 


was (Author: suez1224):
Hi [~tzulitai], I've attached a proposed tentative change 
[here|https://github.com/apache/flink/commit/094135efcadf5c0ddb47eabd66091e20d26d1417],
 could you please take a look and let me know what you think? Thanks a lot. 

> Expose per partition Kafka lag metric in Flink Kafka connector
> --
>
> Key: FLINK-11912
> URL: https://issues.apache.org/jira/browse/FLINK-11912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Affects Versions: 1.6.4, 1.7.2
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> In production, it's important that we expose the Kafka lag by partition 
> metric in order for users to diagnose which Kafka partition is lagging. 
> However, although the per-partition Kafka lag metrics are available in the 
> KafkaConsumer after 0.10.2, Flink was not able to properly register them 
> because the metrics are only available after the consumer starts polling data 
> from partitions. I would suggest the following fix:
> 1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
> 2) In the fetch loop, as the KafkaConsumer discovers new partitions, manually 
> add the MetricName for each partition that we want to register to 
> manualRegisteredMetricSet. 
> 3) In the fetch loop, check whether manualRegisteredMetricSet is empty. If not, 
> search for the metrics available in the KafkaConsumer and, if found, 
> register them and remove the entries from manualRegisteredMetricSet. 
> The overhead of the above approach is bounded and is only incurred when 
> discovering new partitions, and registration is done once the KafkaConsumer 
> has the metrics exposed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] Fokko removed a comment on issue #7979: [FLINK-11911] Make KafkaTopicPartition a valid POJO

2019-03-13 Thread GitBox
Fokko removed a comment on issue #7979: [FLINK-11911] Make KafkaTopicPartition 
a valid POJO
URL: https://github.com/apache/flink/pull/7979#issuecomment-472632208
 
 
   @flinkbot approve description


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Fokko commented on issue #7979: [FLINK-11911] Make KafkaTopicPartition a valid POJO

2019-03-13 Thread GitBox
Fokko commented on issue #7979: [FLINK-11911] Make KafkaTopicPartition a valid 
POJO
URL: https://github.com/apache/flink/pull/7979#issuecomment-472632208
 
 
   @flinkbot approve description


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-11912) Expose per partition Kafka lag metric in Flink Kafka connector

2019-03-13 Thread Shuyi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792155#comment-16792155
 ] 

Shuyi Chen edited comment on FLINK-11912 at 3/13/19 10:29 PM:
--

Hi [~tzulitai], I've attached a proposed tentative change 
[here|https://github.com/apache/flink/commit/094135efcadf5c0ddb47eabd66091e20d26d1417],
 could you please take a look and let me know what you think? Thanks a lot. 


was (Author: suez1224):
Hi [~tzulitai], I've attached a proposed change 
[here|https://github.com/apache/flink/commit/094135efcadf5c0ddb47eabd66091e20d26d1417],
 could you please take a look and let me know what you think? Thanks a lot. 

> Expose per partition Kafka lag metric in Flink Kafka connector
> --
>
> Key: FLINK-11912
> URL: https://issues.apache.org/jira/browse/FLINK-11912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Affects Versions: 1.6.4, 1.7.2
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> In production, it's important that we expose the Kafka lag by partition 
> metric in order for users to diagnose which Kafka partition is lagging. 
> However, although the per-partition Kafka lag metrics are available in the 
> KafkaConsumer after 0.10.2, Flink was not able to properly register them 
> because the metrics are only available after the consumer starts polling data 
> from partitions. I would suggest the following fix:
> 1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
> 2) In the fetch loop, as the KafkaConsumer discovers new partitions, manually 
> add the MetricName for each partition that we want to register to 
> manualRegisteredMetricSet. 
> 3) In the fetch loop, check whether manualRegisteredMetricSet is empty. If not, 
> search for the metrics available in the KafkaConsumer and, if found, 
> register them and remove the entries from manualRegisteredMetricSet. 
> The overhead of the above approach is bounded and is only incurred when 
> discovering new partitions, and registration is done once the KafkaConsumer 
> has the metrics exposed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11912) Expose per partition Kafka lag metric in Flink Kafka connector

2019-03-13 Thread Shuyi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792155#comment-16792155
 ] 

Shuyi Chen commented on FLINK-11912:


Hi [~tzulitai], I've attached a proposed change 
[here|https://github.com/apache/flink/commit/094135efcadf5c0ddb47eabd66091e20d26d1417],
 could you please take a look and let me know what you think? Thanks a lot. 

> Expose per partition Kafka lag metric in Flink Kafka connector
> --
>
> Key: FLINK-11912
> URL: https://issues.apache.org/jira/browse/FLINK-11912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Affects Versions: 1.6.4, 1.7.2
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> In production, it's important that we expose the Kafka lag by partition 
> metric in order for users to diagnose which Kafka partition is lagging. 
> However, although the per-partition Kafka lag metrics are available in the 
> KafkaConsumer after 0.10.2, Flink was not able to properly register them 
> because the metrics are only available after the consumer starts polling data 
> from partitions. I would suggest the following fix:
> 1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
> 2) In the fetch loop, as the KafkaConsumer discovers new partitions, manually 
> add the MetricName for each partition that we want to register to 
> manualRegisteredMetricSet. 
> 3) In the fetch loop, check whether manualRegisteredMetricSet is empty. If not, 
> search for the metrics available in the KafkaConsumer and, if found, 
> register them and remove the entries from manualRegisteredMetricSet. 
> The overhead of the above approach is bounded and is only incurred when 
> discovering new partitions, and registration is done once the KafkaConsumer 
> has the metrics exposed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11912) Expose per partition Kafka lag metric in Flink Kafka connector

2019-03-13 Thread Shuyi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuyi Chen updated FLINK-11912:
---
Summary: Expose per partition Kafka lag metric in Flink Kafka connector  
(was: Expose per partition Kafka lag metric in Flink Kafka consumer)

> Expose per partition Kafka lag metric in Flink Kafka connector
> --
>
> Key: FLINK-11912
> URL: https://issues.apache.org/jira/browse/FLINK-11912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Affects Versions: 1.6.4, 1.7.2
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> In production, it's important that we expose the Kafka lag by partition 
> metric in order for users to diagnose which Kafka partition is lagging. 
> However, although the per-partition Kafka lag metrics are available in the 
> KafkaConsumer, Flink was not able to properly register them because the metrics 
> are only available after the consumer starts polling data from partitions. I 
> would suggest the following fix:
> 1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
> 2) In the fetch loop, as the KafkaConsumer discovers new partitions, manually 
> add the MetricName for each partition that we want to register to 
> manualRegisteredMetricSet. 
> 3) In the fetch loop, check whether manualRegisteredMetricSet is empty. If not, 
> search for the metrics available in the KafkaConsumer and, if found, 
> register them and remove the entries from manualRegisteredMetricSet. 
> The overhead of the above approach is bounded and is only incurred when 
> discovering new partitions, and registration is done once the KafkaConsumer 
> has the metrics exposed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11913) Cassandra connector should shade its usage of cassandra-driver classes to avoid conflicts

2019-03-13 Thread JIRA
Gustavo Momenté created FLINK-11913:
---

 Summary: Cassandra connector should shade its usage of 
cassandra-driver classes to avoid conflicts
 Key: FLINK-11913
 URL: https://issues.apache.org/jira/browse/FLINK-11913
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Cassandra
Affects Versions: 1.7.2
Reporter: Gustavo Momenté


The Cassandra connector has some dependencies that need to be available and 
should not conflict with other user or system code. It should thus shade all 
its dependencies and become self-contained.

A simple example would be when an application reads data from Cassandra using 
the javax driver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11912) Expose per partition Kafka lag metric in Flink Kafka connector

2019-03-13 Thread Shuyi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuyi Chen updated FLINK-11912:
---
Description: 
In production, it's important that we expose the Kafka lag by partition metric 
in order for users to diagnose which Kafka partition is lagging. However, 
although the per-partition Kafka lag metrics are available in the KafkaConsumer 
after 0.10.2, Flink was not able to properly register them because the metrics 
are only available after the consumer starts polling data from partitions. I 
would suggest the following fix:
1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
2) In the fetch loop, as the KafkaConsumer discovers new partitions, manually add 
the MetricName for each partition that we want to register to 
manualRegisteredMetricSet.
3) In the fetch loop, check whether manualRegisteredMetricSet is empty. If not, 
search for the metrics available in the KafkaConsumer and, if found, register 
them and remove the entries from manualRegisteredMetricSet.

The overhead of the above approach is bounded and is only incurred when 
discovering new partitions, and registration is done once the KafkaConsumer has 
the metrics exposed.

  was:
In production, it's important that we expose the Kafka lag by partition metric 
in order for users to diagnose which Kafka partition is lagging. However, 
although the per-partition Kafka lag metrics are available in the KafkaConsumer, 
Flink was not able to properly register them because the metrics are only 
available after the consumer starts polling data from partitions. I would 
suggest the following fix:
1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
2) In the fetch loop, as the KafkaConsumer discovers new partitions, manually add 
the MetricName for each partition that we want to register to 
manualRegisteredMetricSet.
3) In the fetch loop, check whether manualRegisteredMetricSet is empty. If not, 
search for the metrics available in the KafkaConsumer and, if found, register 
them and remove the entries from manualRegisteredMetricSet.

The overhead of the above approach is bounded and is only incurred when 
discovering new partitions, and registration is done once the KafkaConsumer has 
the metrics exposed.


> Expose per partition Kafka lag metric in Flink Kafka connector
> --
>
> Key: FLINK-11912
> URL: https://issues.apache.org/jira/browse/FLINK-11912
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Affects Versions: 1.6.4, 1.7.2
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> In production, it's important that we expose the Kafka lag by partition 
> metric in order for users to diagnose which Kafka partition is lagging. 
> However, although the per-partition Kafka lag metrics are available in the 
> KafkaConsumer after 0.10.2, Flink was not able to properly register them 
> because the metrics are only available after the consumer starts polling data 
> from partitions. I would suggest the following fix:
> 1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
> 2) In the fetch loop, as the KafkaConsumer discovers new partitions, manually 
> add the MetricName for each partition that we want to register to 
> manualRegisteredMetricSet. 
> 3) In the fetch loop, check whether manualRegisteredMetricSet is empty. If not, 
> search for the metrics available in the KafkaConsumer and, if found, 
> register them and remove the entries from manualRegisteredMetricSet. 
> The overhead of the above approach is bounded and is only incurred when 
> discovering new partitions, and registration is done once the KafkaConsumer 
> has the metrics exposed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11912) Expose per partition Kafka lag metric in Flink Kafka consumer

2019-03-13 Thread Shuyi Chen (JIRA)
Shuyi Chen created FLINK-11912:
--

 Summary: Expose per partition Kafka lag metric in Flink Kafka 
consumer
 Key: FLINK-11912
 URL: https://issues.apache.org/jira/browse/FLINK-11912
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Kafka
Affects Versions: 1.7.2, 1.6.4
Reporter: Shuyi Chen
Assignee: Shuyi Chen


In production, it's important that we expose the Kafka lag by partition metric 
in order for users to diagnose which Kafka partition is lagging. However, 
although the per-partition Kafka lag metrics are available in the KafkaConsumer, 
Flink was not able to properly register them because the metrics are only 
available after the consumer starts polling data from partitions. I would 
suggest the following fix:
1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
2) In the fetch loop, as the KafkaConsumer discovers new partitions, manually add 
the MetricName for each partition that we want to register to 
manualRegisteredMetricSet.
3) In the fetch loop, check whether manualRegisteredMetricSet is empty. If not, 
search for the metrics available in the KafkaConsumer and, if found, register 
them and remove the entries from manualRegisteredMetricSet.

The overhead of the above approach is bounded and is only incurred when 
discovering new partitions, and registration is done once the KafkaConsumer has 
the metrics exposed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot commented on issue #7979: Fd patch kafka topic partition

2019-03-13 Thread GitBox
flinkbot commented on issue #7979: Fd patch kafka topic partition
URL: https://github.com/apache/flink/pull/7979#issuecomment-472626266
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Fokko opened a new pull request #7979: Fd patch kafka topic partition

2019-03-13 Thread GitBox
Fokko opened a new pull request #7979: Fd patch kafka topic partition
URL: https://github.com/apache/flink/pull/7979
 
 
   ## What is the purpose of the change
   
   KafkaTopicPartition is not a POJO, and therefore it cannot be serialized 
efficiently. This happens when using the KafkaDeserializationSchema.
   
   When enforcing POJOs:
   ```
   java.lang.UnsupportedOperationException: Generic types have been disabled in 
the ExecutionConfig and type 
org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition is 
treated as a generic type.
at 
org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:86)
at 
org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:107)
at 
org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:52)
at 
org.apache.flink.api.java.typeutils.ListTypeInfo.createSerializer(ListTypeInfo.java:102)
at 
org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:288)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:289)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:219)
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.initializeState(FlinkKafkaConsumerBase.java:856)
at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:278)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:738)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:289)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
   ```
   
   And in the logs:
   ```
   2019-03-13 16:41:28,217 INFO  
org.apache.flink.api.java.typeutils.TypeExtractor - class 
org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition does 
not contain a setter for field topic
   2019-03-13 16:41:28,221 INFO  
org.apache.flink.api.java.typeutils.TypeExtractor - Class class 
org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition 
cannot be used as a POJO type because not all fields are valid POJO fields, and 
must be processed as GenericType. Please read the Flink documentation on "Data 
Types & Serialization" for details of the effect on performance.
   ```
   
   ## Brief change log
   
   Add:
   - An empty default constructor
   - Make the fields public
   
   ## Verifying this change
   
   Reran the job, and the log messages above disappeared.
   
   Related to: 
https://stackoverflow.com/questions/54008805/in-logs-i-see-kafkatopicpartition-cannot-be-used-as-a-pojo-what-that-it-means
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: yes
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? JavaDocs
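
For illustration, here is a minimal sketch (a simplified stand-in, not Flink's
actual KafkaTopicPartition) of what the TypeExtractor needs in order to treat a
class as a POJO: a public no-arg constructor plus public fields or
getter/setter pairs. The "Generic types have been disabled" exception above is
what appears when the job runs with `ExecutionConfig.disableGenericTypes()`,
which forbids the Kryo fallback.

```java
public class TopicPartitionPojo {
    // Public fields satisfy the POJO rules without needing getters/setters.
    public String topic;
    public int partition;

    // The empty default constructor required by Flink's TypeExtractor.
    public TopicPartitionPojo() {}

    public TopicPartitionPojo(String topic, int partition) {
        this.topic = topic;
        this.partition = partition;
    }
}
```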
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-11883) Harmonize the version of maven-shade-plugin

2019-03-13 Thread Fokko Driesprong (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fokko Driesprong closed FLINK-11883.

Resolution: Won't Fix

> Harmonize the version of maven-shade-plugin
> ---
>
> Key: FLINK-11883
> URL: https://issues.apache.org/jira/browse/FLINK-11883
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.7.2
>Reporter: Fokko Driesprong
>Assignee: Fokko Driesprong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11911) KafkaTopicPartition is not a valid POJO

2019-03-13 Thread Fokko Driesprong (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fokko Driesprong updated FLINK-11911:
-
Affects Version/s: (was: 1.7.2)
   1.8.0

> KafkaTopicPartition is not a valid POJO
> ---
>
> Key: FLINK-11911
> URL: https://issues.apache.org/jira/browse/FLINK-11911
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.8.0
>Reporter: Fokko Driesprong
>Assignee: Fokko Driesprong
>Priority: Major
> Fix For: 1.8.0
>
>
> KafkaTopicPartition is not a POJO, and therefore it cannot be serialized 
> efficiently. This happens when using the KafkaDeserializationSchema.
> When enforcing POJOs:
> ```
> java.lang.UnsupportedOperationException: Generic types have been disabled in 
> the ExecutionConfig and type 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition is 
> treated as a generic type.
>   at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:86)
>   at 
> org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:107)
>   at 
> org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:52)
>   at 
> org.apache.flink.api.java.typeutils.ListTypeInfo.createSerializer(ListTypeInfo.java:102)
>   at 
> org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:288)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:289)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:219)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.initializeState(FlinkKafkaConsumerBase.java:856)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:278)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:738)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:289)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>   at java.lang.Thread.run(Thread.java:748)
> ```
> And in the logs:
> ```
> 2019-03-13 16:41:28,217 INFO  
> org.apache.flink.api.java.typeutils.TypeExtractor - class 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition 
> does not contain a setter for field topic
> 2019-03-13 16:41:28,221 INFO  
> org.apache.flink.api.java.typeutils.TypeExtractor - Class class 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition 
> cannot be used as a POJO type because not all fields are valid POJO fields, 
> and must be processed as GenericType. Please read the Flink documentation on 
> "Data Types & Serialization" for details of the effect on performance.
> ```



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11911) KafkaTopicPartition is not a valid POJO

2019-03-13 Thread Fokko Driesprong (JIRA)
Fokko Driesprong created FLINK-11911:


 Summary: KafkaTopicPartition is not a valid POJO
 Key: FLINK-11911
 URL: https://issues.apache.org/jira/browse/FLINK-11911
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.7.2
Reporter: Fokko Driesprong
Assignee: Fokko Driesprong
 Fix For: 1.8.0


KafkaTopicPartition is not a POJO, and therefore it cannot be serialized 
efficiently. This happens when using the KafkaDeserializationSchema.

When enforcing POJOs:
```
java.lang.UnsupportedOperationException: Generic types have been disabled in 
the ExecutionConfig and type 
org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition is 
treated as a generic type.
at 
org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:86)
at 
org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:107)
at 
org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:52)
at 
org.apache.flink.api.java.typeutils.ListTypeInfo.createSerializer(ListTypeInfo.java:102)
at 
org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:288)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:289)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:219)
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.initializeState(FlinkKafkaConsumerBase.java:856)
at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:278)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:738)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:289)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
```

And in the logs:
```
2019-03-13 16:41:28,217 INFO  org.apache.flink.api.java.typeutils.TypeExtractor 
- class 
org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition does 
not contain a setter for field topic
2019-03-13 16:41:28,221 INFO  org.apache.flink.api.java.typeutils.TypeExtractor 
- Class class 
org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition 
cannot be used as a POJO type because not all fields are valid POJO fields, and 
must be processed as GenericType. Please read the Flink documentation on "Data 
Types & Serialization" for details of the effect on performance.
```



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] Fokko commented on issue #7962: [FLINK-11883] Harmonize the version of maven-shade-plugin

2019-03-13 Thread GitBox
Fokko commented on issue #7962: [FLINK-11883] Harmonize the version of 
maven-shade-plugin
URL: https://github.com/apache/flink/pull/7962#issuecomment-472624464
 
 
   Thanks for the explanation @zentol


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Fokko closed pull request #7962: [FLINK-11883] Harmonize the version of maven-shade-plugin

2019-03-13 Thread GitBox
Fokko closed pull request #7962: [FLINK-11883] Harmonize the version of 
maven-shade-plugin
URL: https://github.com/apache/flink/pull/7962
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-2491) Checkpointing only works if all operators/tasks are still running

2019-03-13 Thread Maximilian Michels (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-2491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792126#comment-16792126
 ] 

Maximilian Michels commented on FLINK-2491:
---

I think we should take the time to finally address this limitation. [~mbalassi] 
I will reassign this unless you're planning to work on this any time soon.

> Checkpointing only works if all operators/tasks are still running
> -
>
> Key: FLINK-2491
> URL: https://issues.apache.org/jira/browse/FLINK-2491
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 0.10.0
>Reporter: Robert Metzger
>Assignee: Márton Balassi
>Priority: Critical
> Attachments: fix_checkpoint_not_working_if_tasks_are_finished.patch
>
>
> While implementing a test case for the Kafka Consumer, I came across the 
> following bug:
> Consider the following topology, with the operator parallelism in parentheses:
> Source (2) --> Sink (1).
> In this setup, the {{snapshotState()}} method is called on the source, but 
> not on the Sink.
> The sink receives the generated data.
> Only one of the two sources is generating data.
> I've implemented a test case for this, you can find it here: 
> https://github.com/rmetzger/flink/blob/para_checkpoint_bug/flink-tests/src/test/java/org/apache/flink/test/checkpointing/ParallelismChangeCheckpoinedITCase.java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot commented on issue #7978: [FLINK-11910] [Yarn] add customizable yarn application type

2019-03-13 Thread GitBox
flinkbot commented on issue #7978: [FLINK-11910] [Yarn] add customizable yarn 
application type
URL: https://github.com/apache/flink/pull/7978#issuecomment-472607121
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] HuangZhenQiu opened a new pull request #7978: [FLINK-11910] [Yarn] add customizable yarn application type

2019-03-13 Thread GitBox
HuangZhenQiu opened a new pull request #7978: [FLINK-11910] [Yarn] add 
customizable yarn application type
URL: https://github.com/apache/flink/pull/7978
 
 
   ## What is the purpose of the change
   
   Let users customize the YARN application type tag by using dynamic properties.
   
   ## Brief change log
  - Use flink-version as the default key for users to specify the Flink version
  - If flink-version is set, the value will be added after the original "Apache 
Flink ".
   
   ## Verifying this change
   
   This change is verified in YARNSessionCapacitySchedulerITCase
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
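
A minimal sketch of the proposed behavior; the class and method names are
assumptions, only the "flink-version" key and the "Apache Flink " prefix come
from the description above.

```java
import java.util.Map;

final class ApplicationTypeResolver {
    // Derives the YARN application type from user-supplied dynamic properties.
    static String applicationType(Map<String, String> dynamicProperties) {
        String version = dynamicProperties.get("flink-version");
        return version == null ? "Apache Flink" : "Apache Flink " + version;
    }
}
```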
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11910) Make Yarn Application Type Customizable with Flink Version

2019-03-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11910:
---
Labels: pull-request-available  (was: )

> Make Yarn Application Type Customizable with Flink Version
> --
>
> Key: FLINK-11910
> URL: https://issues.apache.org/jira/browse/FLINK-11910
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.6.3, 1.6.4, 1.7.2
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Minor
>  Labels: pull-request-available
>
> Internally, our organization supports multiple versions of Flink in 
> production. It will be more convenient for us to distinguish jobs running on 
> different versions by using the Application Type. 
> The simple solution is to let users use dynamic properties to set 
> "flink-version". If the property is set, we add it as a suffix to "Apache 
> Flink" by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11910) Make Yarn Application Type Customizable with Flink Version

2019-03-13 Thread Zhenqiu Huang (JIRA)
Zhenqiu Huang created FLINK-11910:
-

 Summary: Make Yarn Application Type Customizable with Flink Version
 Key: FLINK-11910
 URL: https://issues.apache.org/jira/browse/FLINK-11910
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / YARN
Affects Versions: 1.7.2, 1.6.4, 1.6.3
Reporter: Zhenqiu Huang
Assignee: Zhenqiu Huang


Internally, our organization supports multiple versions of Flink in production. 
It will be more convenient for us to distinguish jobs running on different 
versions by using the Application Type. 

The simple solution is to let users use dynamic properties to set 
"flink-version". If the property is set, we add it as a suffix to "Apache Flink" 
by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-12) [GitHub] Rework Configuration Objects

2019-03-13 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-12:

Component/s: Runtime / Configuration

> [GitHub] Rework Configuration Objects
> -
>
> Key: FLINK-12
> URL: https://issues.apache.org/jira/browse/FLINK-12
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: GitHub Import
>Assignee: Stephan Ewen
>Priority: Major
>  Labels: github-import
> Fix For: 0.7.0-incubating
>
>
> Currently, the configurations are implemented in a hacky way. Everything is 
> represented as a serialized string and there is no clean interface, such that 
> different flavors of configurations (global-, delegating-, default) are 
> inconsistent.
> I propose to rework the configuration as a map of objects, which are 
> serialized on demand with either a serialization library or default 
> serialization mechanisms. Factoring out the interface of a Configuration 
> makes it possible to keep all flavors consistent.
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/12
> Created by: [StephanEwen|https://github.com/StephanEwen]
> Labels: enhancement, 
> Created at: Mon Apr 29 23:43:11 CEST 2013
> State: open
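
A minimal sketch of the proposed design (a hypothetical class, not Flink's
actual Configuration): a map of typed objects that could be serialized on
demand instead of eagerly stringified values.

```java
import java.util.HashMap;
import java.util.Map;

class ObjectConfiguration {
    private final Map<String, Object> values = new HashMap<>();

    // Store any typed value; serialization would happen only on demand.
    <T> void set(String key, T value) {
        values.put(key, value);
    }

    @SuppressWarnings("unchecked")
    <T> T get(String key, T defaultValue) {
        Object v = values.get(key);
        return v == null ? defaultValue : (T) v;
    }
}
```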



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11906) Getting error while submitting job

2019-03-13 Thread Ken Krugler (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791994#comment-16791994
 ] 

Ken Krugler commented on FLINK-11906:
-

Hi Ramesh - for a problem like this, please first post to the Flink user list. 
Then, after feedback from Flink devs, if it's identified as a bug you could 
open a Jira issue. Thanks!

> Getting error while submitting job 
> ---
>
> Key: FLINK-11906
> URL: https://issues.apache.org/jira/browse/FLINK-11906
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.7.2
> Environment: Production
>Reporter: Ramesh Srinivasalu
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Getting the below error while submitting a Java program to the Flink runner. 
> Any help would be greatly appreciated.
>  
>  
> [INFO] --- exec-maven-plugin:1.4.0:java (default-cli) @ cap_scoring ---
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.
> Submitting job with JobID: ae56656f79644d4c181395b9322d9dc0. Waiting for job 
> completion.
> [WARNING]
> java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: Pipeline execution failed
>  at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:121)
>  at org.apache.beam.sdk.Pipeline.run(Pipeline.java:297)
>  at org.apache.beam.sdk.Pipeline.run(Pipeline.java:283)
>  at 
> com.att.streams.tdata.cap_scoring.CapInjestAndScore.main(CapInjestAndScore.java:410)
>  ... 6 more
> Caused by: org.apache.flink.client.program.ProgramInvocationException: The 
> program execution failed: Couldn't retrieve the JobExecutionResult from the 
> JobManager.
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:478)
>  at 
> org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:442)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:429)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
>  at 
> org.apache.flink.client.RemoteExecutor.executePlanWithJars(RemoteExecutor.java:211)
>  at 
> org.apache.flink.client.RemoteExecutor.executePlan(RemoteExecutor.java:188)
>  at 
> org.apache.flink.api.java.RemoteEnvironment.execute(RemoteEnvironment.java:172)
>  at 
> org.apache.beam.runners.flink.FlinkPipelineExecutionEnvironment.executePipeline(FlinkPipelineExecutionEnvironment.java:114)
>  at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:118)
>  ... 9 more
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Couldn't 
> retrieve the JobExecutionResult from the JobManager.
>  at 
> org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:309)
>  at 
> org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
>  ... 18 more
> Caused by: 
> org.apache.flink.runtime.client.JobClientActorConnectionTimeoutException: 
> Lost connection to the JobManager.
>  at 
> org.apache.flink.runtime.client.JobClientActor.handleMessage(JobClientActor.java:219)
>  at 
> org.apache.flink.runtime.akka.FlinkUntypedActor.handleLeaderSessionID(FlinkUntypedActor.java:101)
>  at 
> org.apache.flink.runtime.akka.FlinkUntypedActor.onReceive(FlinkUntypedActor.java:68)
>  at 
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
>  at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>  at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
>  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>  at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>  at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>  at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>  at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>  at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>  at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>  at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>  at 
> 

[GitHub] [flink] flinkbot commented on issue #7977: [hotfix][runtime] Delete unused code from ExecutionGraph

2019-03-13 Thread GitBox
flinkbot commented on issue #7977: [hotfix][runtime] Delete unused code from 
ExecutionGraph
URL: https://github.com/apache/flink/pull/7977#issuecomment-472536271
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] GJL opened a new pull request #7977: [hotfix][runtime] Delete unused code from ExecutionGraph

2019-03-13 Thread GitBox
GJL opened a new pull request #7977: [hotfix][runtime] Delete unused code from 
ExecutionGraph
URL: https://github.com/apache/flink/pull/7977
 
 
   ## What is the purpose of the change
   
   *This deletes unused code from ExecutionGraph.*
   
   
   ## Brief change log
   
 - *See commits*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (**yes** / no / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-11909) Provide default failure/timeout handling strategy for AsyncIO functions

2019-03-13 Thread Rong Rong (JIRA)
Rong Rong created FLINK-11909:
-

 Summary: Provide default failure/timeout handling strategy for 
AsyncIO functions
 Key: FLINK-11909
 URL: https://issues.apache.org/jira/browse/FLINK-11909
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream
Reporter: Rong Rong


Currently, Flink AsyncIO by default fails the entire job when an async function 
invocation fails [1]. It would be nice to have some default AsyncIO 
failure/timeout handling strategies, or to open up some APIs for the 
AsyncFunction timeout method to interact with the AsyncWaitOperator. For 
example (quote [~suez1224]):

* FAIL_OPERATOR (default & current behavior)
* FIX_INTERVAL_RETRY (retry with a configurable fixed interval up to N times)
* EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)


[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/asyncio.html#timeout-handling
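
To make the strategies concrete, here is a minimal sketch (a hypothetical API,
not part of Flink) of the three options and the delay each would impose before
the n-th retry.

```java
enum AsyncFailureStrategy { FAIL_OPERATOR, FIX_INTERVAL_RETRY, EXP_BACKOFF_RETRY }

final class RetryDelays {
    // Delay in ms before the n-th retry (n >= 1), capped at maxMs.
    // Returns -1 when no retry should be attempted.
    static long delayMs(AsyncFailureStrategy s, int n, long baseMs, long maxMs) {
        switch (s) {
            case FIX_INTERVAL_RETRY:
                return Math.min(maxMs, baseMs);
            case EXP_BACKOFF_RETRY:
                // Doubles per attempt; the shift is clamped to avoid overflow.
                return Math.min(maxMs, baseMs << Math.min(n - 1, 20));
            default: // FAIL_OPERATOR: fail the operator immediately
                return -1L;
        }
    }
}
```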



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11908) Port window classes into flink-api-java

2019-03-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11908:
---
Labels: pull-request-available  (was: )

> Port window classes into flink-api-java
> ---
>
> Key: FLINK-11908
> URL: https://issues.apache.org/jira/browse/FLINK-11908
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Table SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
>
> As discussed in FLINK-11068, it is good to open a separate issue for porting 
> the window classes before opening a PR for the {{Table}} classes. This can 
> make our PR smaller thus will be better to be reviewed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot commented on issue #7976: [FLINK-11908] Port window classes into flink-api-java

2019-03-13 Thread GitBox
flinkbot commented on issue #7976: [FLINK-11908] Port window classes into 
flink-api-java
URL: https://github.com/apache/flink/pull/7976#issuecomment-472496692
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 opened a new pull request #7976: [FLINK-11908] Port window classes into flink-api-java

2019-03-13 Thread GitBox
hequn8128 opened a new pull request #7976: [FLINK-11908] Port window classes 
into flink-api-java
URL: https://github.com/apache/flink/pull/7976
 
 
   
   ## What is the purpose of the change
   
   This pull request ports window classes into flink-api-java module.
   
   
   ## Brief change log
   
 - Convert the Tumble/Slide/Session/Over related classes to interfaces and 
rename each original class from XXX to XXXImpl. For example, add a 
TumbleWithSize interface and rename the original TumbleWithSize to 
TumbleWithSizeImpl (a minimal sketch of this split follows this list). We 
can't port TumbleWithSize directly into the api-java module because 
ExpressionParser is used in it.
 - Deprecate `def window(window: Window): WindowedTable` and add `def 
window(window: GroupWindow): GroupWindowedTable`
 - Deprecate class WindowedTable and add GroupWindowedTable
 - Add test cases, such as WindowValidationTest, to test failures during 
window creation.
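
   A minimal, self-contained sketch of the interface/Impl split (names mirror 
the change log above; the window-building body is a hypothetical stand-in for 
the real ExpressionParser logic):

   ```java
   // API module: only the interface is exposed.
   interface TumbleWithSize {
       String on(String timeField); // stand-in return type for the real builder step
   }

   // Planner module: the implementation keeps the parsing logic.
   final class TumbleWithSizeImpl implements TumbleWithSize {
       private final String size;

       TumbleWithSizeImpl(String size) {
           this.size = size;
       }

       @Override
       public String on(String timeField) {
           // The real Impl would invoke ExpressionParser here; a plain string keeps this runnable.
           return "TUMBLE(" + size + ") ON " + timeField;
       }
   }
   ```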
   
   
   ## Verifying this change
   
   This change is already covered by existing Window IT/UT test cases.
   Furthermore, WindowValidationTest has been added to test failures during 
window creation.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11654) Multiple transactional KafkaProducers writing to same cluster have clashing transaction IDs

2019-03-13 Thread JIRA


[ 
https://issues.apache.org/jira/browse/FLINK-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791832#comment-16791832
 ] 

Jürgen Kreileder commented on FLINK-11654:
--

[~aljoscha] There might be an additional issue here. I'm occasionally seeing 
this problem with a single sink which has a unique name/uid. Might parallelism 
be an issue?

> Multiple transactional KafkaProducers writing to same cluster have clashing 
> transaction IDs
> ---
>
> Key: FLINK-11654
> URL: https://issues.apache.org/jira/browse/FLINK-11654
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.7.1
>Reporter: Jürgen Kreileder
>Priority: Major
> Fix For: 1.9.0
>
>
> We run multiple jobs on a cluster which write a lot to the same Kafka topic 
> from identically named sinks. When EXACTLY_ONCE semantic is enabled for the 
> KafkaProducers we run into a lot of ProducerFencedExceptions and all jobs go 
> into a restart cycle.
> Example exception from the Kafka log:
>  
> {code:java}
> [2019-02-18 18:05:28,485] ERROR [ReplicaManager broker=1] Error processing 
> append operation on partition finding-commands-dev-1-0 
> (kafka.server.ReplicaManager)
> org.apache.kafka.common.errors.ProducerFencedException: Producer's epoch is 
> no longer valid. There is probably another producer with a newer epoch. 483 
> (request epoch), 484 (server epoch)
> {code}
> The reason for this is the way FlinkKafkaProducer initializes the 
> TransactionalIdsGenerator:
> The IDs are only guaranteed to be unique for a single Job. But they can clash 
> between different Jobs (and Clusters).
>  
>  
> {code:java}
> --- 
> a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java
> +++ 
> b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java
> @@ -819,6 +819,7 @@ public class FlinkKafkaProducer
>                 nextTransactionalIdHintState = 
> context.getOperatorStateStore().getUnionListState(
>                         NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR);
>                 transactionalIdsGenerator = new TransactionalIdsGenerator(
> + // the prefix probably should include job id and maybe cluster id
>                         getRuntimeContext().getTaskName() + "-" + 
> ((StreamingRuntimeContext) getRuntimeContext()).getOperatorUniqueID(),
>                         getRuntimeContext().getIndexOfThisSubtask(),
>                         
> getRuntimeContext().getNumberOfParallelSubtasks(),{code}
>  
>  
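
To see why this clashes across jobs: the operator UID is derived 
deterministically from the topology, so two identical jobs derive identical 
prefixes. A small plain-Java illustration (not Flink code) of how the quoted 
prefix composes:
{code:java}
import java.util.ArrayList;
import java.util.List;

final class TransactionalIdDemo {

    // Mirrors the quoted prefix: taskName + "-" + operatorUniqueID, plus the subtask index.
    static List<String> idsFor(String taskName, String operatorUid, int parallelism) {
        List<String> ids = new ArrayList<>();
        for (int subtask = 0; subtask < parallelism; subtask++) {
            ids.add(taskName + "-" + operatorUid + "-" + subtask);
        }
        return ids;
    }

    public static void main(String[] args) {
        // Two jobs with identically named sinks and identical topologies produce
        // identical IDs, leading to ProducerFencedException on the shared cluster.
        System.out.println(idsFor("kafka-sink", "sample-operator-uid", 2));
        System.out.println(idsFor("kafka-sink", "sample-operator-uid", 2));
    }
}
{code}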



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11908) Port window classes into flink-api-java

2019-03-13 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791812#comment-16791812
 ] 

Hequn Cheng commented on FLINK-11908:
-

Hi [~twalthr], one thing I need to confirm with you: I find that 
{{ExpressionParser}} is used in {{TumbleWithSize}}, which makes it hard to port 
{{TumbleWithSize}} directly into the api-java module. As a choice, I think we 
can convert it into an interface and use reflection to create the 
{{TumbleWithSizeImpl}} in Tumble.over(). The code may look like:
{code:java}
@PublicEvolving
public class Tumble {
    public static TumbleWithSize over(String size) {
        try {
            Class<?> clazz = Class.forName("org.apache.flink.table.api.TumbleWithSizeImpl");
            Constructor<?> con = clazz.getConstructor(String.class);
            return (TumbleWithSize) con.newInstance(size);
        } catch (Throwable t) {
            throw new TableException("Instantiating TumbleWithSizeImpl failed.", t);
        }
    }
}
{code}
What do you think?
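
If the per-call reflection cost ever matters, a minimal variant caches the 
looked-up constructor once per class load (assuming the Impl class name stays 
fixed; a lookup failure then surfaces as an ExceptionInInitializerError on 
first use):
{code:java}
@PublicEvolving
public class Tumble {

    // Looked up once; Class.forName/getConstructor are not repeated per call.
    private static final Constructor<?> TUMBLE_CONSTRUCTOR = lookupConstructor();

    private static Constructor<?> lookupConstructor() {
        try {
            return Class.forName("org.apache.flink.table.api.TumbleWithSizeImpl")
                .getConstructor(String.class);
        } catch (Throwable t) {
            throw new TableException("TumbleWithSizeImpl is not on the classpath.", t);
        }
    }

    public static TumbleWithSize over(String size) {
        try {
            return (TumbleWithSize) TUMBLE_CONSTRUCTOR.newInstance(size);
        } catch (Throwable t) {
            throw new TableException("Instantiating TumbleWithSizeImpl failed.", t);
        }
    }
}
{code}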

> Port window classes into flink-api-java
> ---
>
> Key: FLINK-11908
> URL: https://issues.apache.org/jira/browse/FLINK-11908
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Table SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> As discussed in FLINK-11068, it is good to open a separate issue for porting 
> the window classes before opening a PR for the {{Table}} classes. This keeps 
> the PR smaller and thus easier to review.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] tweise commented on issue #7896: [FLINK-9007] [kinesis] [e2e] Add Kinesis end-to-end test

2019-03-13 Thread GitBox
tweise commented on issue #7896: [FLINK-9007] [kinesis] [e2e] Add Kinesis 
end-to-end test
URL: https://github.com/apache/flink/pull/7896#issuecomment-472473747
 
 
   @StefanRRichter thanks for taking a look.
   
   It's good to learn that we have an effort to provide a Java-based e2e 
framework. It should simplify new tests and make existing ones more 
maintainable.
   
   This Kinesis test has been under discussion for a while, and it fills a 
current gap in coverage. I would like to merge it and also backport it to 
1.8.x. Happy to 
make changes to adopt the new framework when it is ready, but we should not 
block on it.
   
   As for separating test logic and program, that's more or less already the 
case. The pipeline code, the test driver and the Kinesis client (in the future 
"resource") live in separate classes. Should be easy to port to the new 
framework when it is ready.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11068) Convert the API classes *Table, *Window to interfaces

2019-03-13 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791806#comment-16791806
 ] 

Hequn Cheng commented on FLINK-11068:
-

I have created FLINK-11908 for porting the window classes, and we can discuss 
window-related problems there. I will copy the comments above into FLINK-11908. 

> Convert the API classes *Table, *Window to interfaces
> -
>
> Key: FLINK-11068
> URL: https://issues.apache.org/jira/browse/FLINK-11068
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Table SQL
>Reporter: Timo Walther
>Assignee: Hequn Cheng
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This includes: Table, GroupedTable, WindowedTable, WindowGroupedTable, 
> OverWindowedTable, Window, OverWindow
> We can keep the "Table" Scala implementation in a planner module until it has 
> been converted to Java.
> We can add a method to the planner later to give us a concrete instance. This 
> is one possibility to have a smooth transition period instead of changing all 
> classes at once.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11906) Getting error while submitting job

2019-03-13 Thread Ramesh Srinivasalu (JIRA)
Ramesh Srinivasalu created FLINK-11906:
--

 Summary: Getting error while submitting job 
 Key: FLINK-11906
 URL: https://issues.apache.org/jira/browse/FLINK-11906
 Project: Flink
  Issue Type: Bug
  Components: Command Line Client
Affects Versions: 1.7.2
 Environment: Production
Reporter: Ramesh Srinivasalu


Getting the error below while submitting a Java program to the Flink runner. 
Any help would be greatly appreciated.

 

 

[INFO] --- exec-maven-plugin:1.4.0:java (default-cli) @ cap_scoring ---
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
details.
Submitting job with JobID: ae56656f79644d4c181395b9322d9dc0. Waiting for job 
completion.
[WARNING]
java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
 at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Pipeline execution failed
 at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:121)
 at org.apache.beam.sdk.Pipeline.run(Pipeline.java:297)
 at org.apache.beam.sdk.Pipeline.run(Pipeline.java:283)
 at 
com.att.streams.tdata.cap_scoring.CapInjestAndScore.main(CapInjestAndScore.java:410)
 ... 6 more
Caused by: org.apache.flink.client.program.ProgramInvocationException: The 
program execution failed: Couldn't retrieve the JobExecutionResult from the 
JobManager.
 at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:478)
 at 
org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
 at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:442)
 at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:429)
 at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
 at 
org.apache.flink.client.RemoteExecutor.executePlanWithJars(RemoteExecutor.java:211)
 at org.apache.flink.client.RemoteExecutor.executePlan(RemoteExecutor.java:188)
 at 
org.apache.flink.api.java.RemoteEnvironment.execute(RemoteEnvironment.java:172)
 at 
org.apache.beam.runners.flink.FlinkPipelineExecutionEnvironment.executePipeline(FlinkPipelineExecutionEnvironment.java:114)
 at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:118)
 ... 9 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Couldn't 
retrieve the JobExecutionResult from the JobManager.
 at org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:309)
 at 
org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
 at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
 ... 18 more
Caused by: 
org.apache.flink.runtime.client.JobClientActorConnectionTimeoutException: Lost 
connection to the JobManager.
 at 
org.apache.flink.runtime.client.JobClientActor.handleMessage(JobClientActor.java:219)
 at 
org.apache.flink.runtime.akka.FlinkUntypedActor.handleLeaderSessionID(FlinkUntypedActor.java:101)
 at 
org.apache.flink.runtime.akka.FlinkUntypedActor.onReceive(FlinkUntypedActor.java:68)
 at 
akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
 at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
 at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
 at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
 at akka.actor.ActorCell.invoke(ActorCell.scala:487)
 at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
 at akka.dispatch.Mailbox.run(Mailbox.scala:220)
 at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
 at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:03 min
[INFO] Finished at: 2019-03-13T15:03:55Z
[INFO] Final Memory: 32M/2099M
[INFO] 
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.4.0:java 
(default-cli) on project cap_scoring: An exception occured while executing the 
Java class. null: 

[jira] [Created] (FLINK-11908) Port window classes into flink-api-java

2019-03-13 Thread Hequn Cheng (JIRA)
Hequn Cheng created FLINK-11908:
---

 Summary: Port window classes into flink-api-java
 Key: FLINK-11908
 URL: https://issues.apache.org/jira/browse/FLINK-11908
 Project: Flink
  Issue Type: Improvement
  Components: API / Table SQL
Reporter: Hequn Cheng
Assignee: Hequn Cheng


As discussed in FLINK-11068, it is good to open a separate issue for porting 
the window classes before opening a PR for the {{Table}} classes. This keeps 
the PR smaller and thus easier to review.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11907) GenericTypeInfoTest fails on Java 9

2019-03-13 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-11907:


 Summary: GenericTypeInfoTest fails on Java 9
 Key: FLINK-11907
 URL: https://issues.apache.org/jira/browse/FLINK-11907
 Project: Flink
  Issue Type: Sub-task
  Components: API / Type Serialization System, Tests
Affects Versions: 1.9.0
Reporter: Chesnay Schepler


Output difference:
{code:java}
    pojos:java.util.List
    key:int
    sqlDate:java.sql.Date
    bigInt:java.math.BigInteger
    signum:int
    mag:[I
-   bitCount:int
-   bitLength:int
-   lowestSetBit:int
-   firstNonzeroIntNum:int
+   bitCountPlusOne:int
+   bitLengthPlusOne:int
+   lowestSetBitPlusTwo:int
+   firstNonzeroIntNumPlusTwo:int
    bigDecimalKeepItNull:java.math.BigDecimal
    intVal:java.math.BigInteger
    signum:int
    mag:[I
-   bitCount:int
-   bitLength:int
-   lowestSetBit:int
-   firstNonzeroIntNum:int
+   bitCountPlusOne:int
+   bitLengthPlusOne:int
+   lowestSetBitPlusTwo:int
+   firstNonzeroIntNumPlusTwo:int
    scale:int
    scalaBigInt:scala.math.BigInt
    bigInteger:java.math.BigInteger
    signum:int
    mag:[I
-   bitCount:int
-   bitLength:int
-   lowestSetBit:int
-   firstNonzeroIntNum:int
+   bitCountPlusOne:int
+   bitLengthPlusOne:int
+   lowestSetBitPlusTwo:int
+   firstNonzeroIntNumPlusTwo:int
    mixed:java.util.List
    
makeMeGeneric:org.apache.flink.test.operators.util.CollectionDataSets$PojoWithDateAndEnum
    group:java.lang.String
+   value:[B
+   coder:byte
+   hash:int
+   date:java.util.Date
+   cat:org.apache.flink.test.operators.util.CollectionDataSets$Category 
(is enum)

     
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11068) Convert the API classes *Table, *Window to interfaces

2019-03-13 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791790#comment-16791790
 ] 

Hequn Cheng commented on FLINK-11068:
-

Hi [~twalthr], great to have your suggestions. Thanks for merging FLINK-11449 
and for the great work. (y) 

I think it is good to open a separate PR for porting the windows and to fix the 
GroupWindow issue in it in a backward-compatible way. I will start a new jira 
and open a PR for it soon.

One thing I need to confirm with you: I find that {{ExpressionParser}} is used 
in {{TumbleWithSize}}, which makes it hard to port {{TumbleWithSize}} directly 
into the api-java module. As a choice, I think we can convert it into an 
interface and use reflection to create the {{TumbleWithSizeImpl}} in 
Tumble.over(). The code may look like:
{code:java}
@PublicEvolving
public class Tumble {
    public static TumbleWithSize over(String size) {
        try {
            Class<?> clazz = Class.forName("org.apache.flink.table.api.TumbleWithSizeImpl");
            Constructor<?> con = clazz.getConstructor(String.class);
            return (TumbleWithSize) con.newInstance(size);
        } catch (Throwable t) {
            throw new TableException("Instantiating TumbleWithSizeImpl failed.", t);
        }
    }
}
{code}

What do you think?

> Convert the API classes *Table, *Window to interfaces
> -
>
> Key: FLINK-11068
> URL: https://issues.apache.org/jira/browse/FLINK-11068
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Table SQL
>Reporter: Timo Walther
>Assignee: Hequn Cheng
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This includes: Table, GroupedTable, WindowedTable, WindowGroupedTable, 
> OverWindowedTable, Window, OverWindow
> We can keep the "Table" Scala implementation in a planner module until it has 
> been converted to Java.
> We can add a method to the planner later to give us a concrete instance. This 
> is one possibility to have a smooth transition period instead of changing all 
> classes at once.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] asfgit closed pull request #7389: [FLINK-11237] [table] Enhance LocalExecutor to wrap TableEnvironment w/ classloader

2019-03-13 Thread GitBox
asfgit closed pull request #7389: [FLINK-11237] [table] Enhance LocalExecutor 
to wrap TableEnvironment w/ classloader
URL: https://github.com/apache/flink/pull/7389
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-11237) Enhance LocalExecutor to wrap TableEnvironment w/ user classloader

2019-03-13 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-11237.

   Resolution: Fixed
Fix Version/s: 1.9.0

Fixed in 1.9.0: 4973d8a4121e1eb711a8acba02fb7ced813dac41

> Enhance LocalExecutor to wrap TableEnvironment w/ user classloader
> --
>
> Key: FLINK-11237
> URL: https://issues.apache.org/jira/browse/FLINK-11237
> Project: Flink
>  Issue Type: Improvement
>  Components: SQL / Client
>Reporter: Eron Wright 
>Assignee: Eron Wright 
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The SQL Client's {{LocalExecutor}} calls into the table environment to 
> execute queries, explain statements, and much more.   Any call that involves 
> resolving a descriptor to a factory implementation must be wrapped in the 
> user classloader.   Some of the calls already are wrapped (for resolving 
> UDFs).  With new functionality coming for resolving external catalogs with a 
> descriptor, other call sites must be wrapped.
> Note that the {{TableEnvironment}} resolves the tables defined within an 
> external catalog lazily (at query time).
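
The wrapping pattern itself is small; a minimal sketch of the idea (the helper 
name and functional shape are assumptions, not the exact Flink code):
{code:java}
import java.util.function.Supplier;

final class ClassLoaderUtil {

    // Runs `action` with the user-code classloader installed as the thread's
    // context classloader, restoring the previous one afterwards.
    static <T> T wrapClassLoader(ClassLoader userClassLoader, Supplier<T> action) {
        final Thread thread = Thread.currentThread();
        final ClassLoader previous = thread.getContextClassLoader();
        thread.setContextClassLoader(userClassLoader);
        try {
            return action.get();
        } finally {
            thread.setContextClassLoader(previous);
        }
    }
}
{code}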



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11237) Enhance LocalExecutor to wrap TableEnvironment w/ user classloader

2019-03-13 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-11237:
-
Component/s: (was: API / Table SQL)
 SQL / Client

> Enhance LocalExecutor to wrap TableEnvironment w/ user classloader
> --
>
> Key: FLINK-11237
> URL: https://issues.apache.org/jira/browse/FLINK-11237
> Project: Flink
>  Issue Type: Sub-task
>  Components: SQL / Client
>Reporter: Eron Wright 
>Assignee: Eron Wright 
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The SQL Client's {{LocalExecutor}} calls into the table environment to 
> execute queries, explain statements, and much more.   Any call that involves 
> resolving a descriptor to a factory implementation must be wrapped in the 
> user classloader.   Some of the calls already are wrapped (for resolving 
> UDFs).  With new functionality coming for resolving external catalogs with a 
> descriptor, other call sites must be wrapped.
> Note that the {{TableEnvironment}} resolves the tables defined within an 
> external catalog lazily (at query time).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11237) Enhance LocalExecutor to wrap TableEnvironment w/ user classloader

2019-03-13 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-11237:
-
Issue Type: Improvement  (was: Sub-task)
Parent: (was: FLINK-10744)

> Enhance LocalExecutor to wrap TableEnvironment w/ user classloader
> --
>
> Key: FLINK-11237
> URL: https://issues.apache.org/jira/browse/FLINK-11237
> Project: Flink
>  Issue Type: Improvement
>  Components: SQL / Client
>Reporter: Eron Wright 
>Assignee: Eron Wright 
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The SQL Client's {{LocalExecutor}} calls into the table environment to 
> execute queries, explain statements, and much more.   Any call that involves 
> resolving a descriptor to a factory implementation must be wrapped in the 
> user classloader.   Some of the calls already are wrapped (for resolving 
> UDFs).  With new functionality coming for resolving external catalogs with a 
> descriptor, other call sites must be wrapped.
> Note that the {{TableEnvironment}} resolves the tables defined within an 
> external catalog lazily (at query time).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11107) [state] Avoid memory stateBackend to create arbitrary folders under HA path when no checkpoint path configured

2019-03-13 Thread Yun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791765#comment-16791765
 ] 

Yun Tang commented on FLINK-11107:
--

Users have also met this problem and reported it on the [Chinese mailing 
list|https://lists.apache.org/thread.html/eb00ee4a5a4c8181121bffef2998ead3ddf6d26d52d2e824ae5ec0e8@%3Cuser-zh.flink.apache.org%3E].

> [state] Avoid memory stateBackend to create arbitrary folders under HA path 
> when no checkpoint path configured
> --
>
> Key: FLINK-11107
> URL: https://issues.apache.org/jira/browse/FLINK-11107
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.6.2, 1.7.0
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.3, 1.6.5
>
>
> Currently, the memory state backend creates a folder named with a random UUID 
> under the HA directory if no checkpoint path was ever configured (the logic 
> lives in {{StateBackendLoader#fromApplicationOrConfigOrDefault}}). However, 
> the default memory state backend is created not only on the JM side, but also 
> on each task manager's side, which means many folders with random UUIDs would 
> be created under the HA directory. This results in exceptions like:
> {noformat}
> The directory item limit of /tmp/flink/ha is exceeded: limit=1048576 
> items=1048576{noformat}
> If this happens, no new jobs can be submitted unless we clean up those 
> directories manually.
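
A minimal sketch of the user-side workaround, assuming Flink 1.7-style APIs: 
configure an explicit checkpoint directory (or set {{state.checkpoints.dir}} in 
flink-conf.yaml) so the random-UUID fallback under the HA path is never used:
{code:java}
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExplicitCheckpointPath {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // With an explicit checkpoint directory, no random UUID folder is
        // created under the HA storage path.
        env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));
        // ... build and execute the job as usual
    }
}
{code}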



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot edited a comment on issue #7974: [FLINK-11902][rest] Do not wrap all exceptions in RestHandlerException

2019-03-13 Thread GitBox
flinkbot edited a comment on issue #7974: [FLINK-11902][rest] Do not wrap all 
exceptions in RestHandlerException 
URL: https://github.com/apache/flink/pull/7974#issuecomment-472421030
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @GJL [committer]
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @GJL [committer]
   * ❓ 3. Needs [attention] from.
   * ✅ 4. The change fits into the overall [architecture].
   - Approved by @GJL [committer]
   * ✅ 5. Overall code [quality] is good.
   - Approved by @GJL [committer]
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] GJL commented on a change in pull request #7974: [FLINK-11902][rest] Do not wrap all exceptions in RestHandlerException

2019-03-13 Thread GitBox
GJL commented on a change in pull request #7974: [FLINK-11902][rest] Do not 
wrap all exceptions in RestHandlerException 
URL: https://github.com/apache/flink/pull/7974#discussion_r265156558
 
 

 ##
 File path: flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/handlers/JarRunHandlerTest.java
 ##
 @@ -90,6 +90,8 @@ public void testRunJar() throws Exception {
 	if (expected.isPresent()) {
 		// implies the job was actually submitted
 		assertTrue(expected.get().getMessage().contains("ProgramInvocationException"));
 +		// original cause is preserved in stack trace
 +		assertTrue(expected.get().getMessage().contains("ZipException"));
 
 Review comment:
   I would prefer hamcrest for a proper failure message:
   
   ```
   import static org.hamcrest.Matchers.containsString;
   import static org.junit.Assert.assertThat;
   ...
   assertThat(expected.get().getMessage(), containsString("ZipException"));
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11905) BlockCompressionTest does not compile with Java 9

2019-03-13 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-11905:
-
Component/s: (was: API / Table SQL)
 Runtime / Operators

> BlockCompressionTest does not compile with Java 9
> -
>
> Key: FLINK-11905
> URL: https://issues.apache.org/jira/browse/FLINK-11905
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Operators, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: blink
>
> [https://travis-ci.org/apache/flink/builds/505693580?utm_source=slack_medium=notification]
>  
> {code:java}
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] COMPILATION ERROR : 
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[23,16]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: package sun.misc
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[24,15]
>  package sun.nio.ch is not visible
>   (package sun.nio.ch is declared in module java.base, which does not export 
> it to the unnamed module)
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[187,17]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: class 
> org.apache.flink.table.runtime.compression.BlockCompressionTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11905) BlockCompressionTest does not compile with Java 9

2019-03-13 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791752#comment-16791752
 ] 

Timo Walther commented on FLINK-11905:
--

CC: [~ykt836]

> BlockCompressionTest does not compile with Java 9
> -
>
> Key: FLINK-11905
> URL: https://issues.apache.org/jira/browse/FLINK-11905
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: blink
>
> [https://travis-ci.org/apache/flink/builds/505693580?utm_source=slack_medium=notification]
>  
> {code:java}
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] COMPILATION ERROR : 
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[23,16]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: package sun.misc
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[24,15]
>  package sun.nio.ch is not visible
>   (package sun.nio.ch is declared in module java.base, which does not export 
> it to the unnamed module)
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[187,17]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: class 
> org.apache.flink.table.runtime.compression.BlockCompressionTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-6131) Add side inputs for DataStream API

2019-03-13 Thread Ashish Pokharel (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791749#comment-16791749
 ] 

Ashish Pokharel commented on FLINK-6131:


Thanks Fabian! Looking forward to trying out the Blink integration as well ...

> Add side inputs for DataStream API
> --
>
> Key: FLINK-6131
> URL: https://issues.apache.org/jira/browse/FLINK-6131
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>
> This is an umbrella issue for tracking the implementation of FLIP-17: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-17+Side+Inputs+for+DataStream+API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11905) BlockCompressionTest does not compile with Java 9

2019-03-13 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-11905:
-
Labels: blink  (was: )

> BlockCompressionTest does not compile with Java 9
> -
>
> Key: FLINK-11905
> URL: https://issues.apache.org/jira/browse/FLINK-11905
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: blink
>
> [https://travis-ci.org/apache/flink/builds/505693580?utm_source=slack_medium=notification]
>  
> {code:java}
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] COMPILATION ERROR : 
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[23,16]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: package sun.misc
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[24,15]
>  package sun.nio.ch is not visible
>   (package sun.nio.ch is declared in module java.base, which does not export 
> it to the unnamed module)
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[187,17]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: class 
> org.apache.flink.table.runtime.compression.BlockCompressionTest{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11905) BlockCompressionTest does not compile with Java 9

2019-03-13 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-11905:


 Summary: BlockCompressionTest does not compile with Java 9
 Key: FLINK-11905
 URL: https://issues.apache.org/jira/browse/FLINK-11905
 Project: Flink
  Issue Type: Bug
  Components: API / Table SQL, Tests
Affects Versions: 1.9.0
Reporter: Chesnay Schepler


[https://travis-ci.org/apache/flink/builds/505693580?utm_source=slack_medium=notification]

 
{code:java}
13:58:16.804 [INFO] 
-
13:58:16.804 [ERROR] COMPILATION ERROR : 
13:58:16.804 [INFO] 
-
13:58:16.804 [ERROR] 
/home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[23,16]
 cannot find symbol
  symbol:   class Cleaner
  location: package sun.misc
13:58:16.804 [ERROR] 
/home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[24,15]
 package sun.nio.ch is not visible
  (package sun.nio.ch is declared in module java.base, which does not export it 
to the unnamed module)
13:58:16.804 [ERROR] 
/home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[187,17]
 cannot find symbol
  symbol:   class Cleaner
  location: class 
org.apache.flink.table.runtime.compression.BlockCompressionTest{code}
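
For reference, the usual fix for this class of failure is a reflective cleaner 
that works on both Java 8 and Java 9+; a minimal sketch, assuming the test only 
needs to free a direct ByteBuffer:
{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

final class DirectBufferCleaner {

    static void free(ByteBuffer buffer) throws Exception {
        try {
            // Java 9+: sun.misc.Unsafe#invokeCleaner(ByteBuffer) is the supported entry point.
            Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
            Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
            theUnsafe.setAccessible(true);
            Method invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
            invokeCleaner.invoke(theUnsafe.get(null), buffer);
        } catch (NoSuchMethodException e) {
            // Java 8: fall back to DirectBuffer#cleaner().clean(), again via reflection
            // so that neither sun.misc.Cleaner nor sun.nio.ch is referenced at compile time.
            Method cleanerMethod = buffer.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buffer);
            cleaner.getClass().getMethod("clean").invoke(cleaner);
        }
    }

    public static void main(String[] args) throws Exception {
        free(ByteBuffer.allocateDirect(1024));
    }
}
{code}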



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11901) Update the year in NOTICE files

2019-03-13 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-11901.

   Resolution: Fixed
Fix Version/s: 1.8.0

Fixed in 1.9.0: 64086ab0e314481cf1ed800ebb5125256bc26e7d & 
b5ddf2ac80c3621300a832058d14e553184dc88a
Fixed in 1.8.0: cc7aaab11c2b3802f45853092f72866ee208e0cf & 
f7e7fbd72f78ef4da9b16ce9a6a022bd788ea4fa

> Update the year in NOTICE files
> ---
>
> Key: FLINK-11901
> URL: https://issues.apache.org/jira/browse/FLINK-11901
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{NOTICE}} files are still starting with:
> {code}
> flink-table-planner
> Copyright 2014-2018 The Apache Software Foundation
> {code}
> We need to update the year to 2019.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] asfgit closed pull request #7975: [FLINK-11901][build] Update NOTICE files with year 2019

2019-03-13 Thread GitBox
asfgit closed pull request #7975:  [FLINK-11901][build] Update NOTICE files 
with year 2019
URL: https://github.com/apache/flink/pull/7975
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11733) Provide HadoopMapFunction for org.apache.hadoop.mapreduce.Mapper

2019-03-13 Thread Fabian Hueske (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791730#comment-16791730
 ] 

Fabian Hueske commented on FLINK-11733:
---

OK, I was thinking of something else but it doesn't seem to work.

We can of course use reflection, but it might come with a performance penalty.
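
For orientation, the existing old-API wrapper is already usable like this (a 
minimal sketch using the real mapred wrapper; the requested new-API wrapper 
would mirror this shape for org.apache.hadoop.mapreduce.Mapper):
{code:java}
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.mapred.HadoopMapFunction;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

import java.io.IOException;

public class OldApiMapperExample {

    /** An old-API (mapred) Mapper that upper-cases its value. */
    public static final class UpperCaser
            implements Mapper<LongWritable, Text, LongWritable, Text> {

        @Override
        public void map(LongWritable key, Text value,
                OutputCollector<LongWritable, Text> out, Reporter reporter)
                throws IOException {
            out.collect(key, new Text(value.toString().toUpperCase()));
        }

        @Override
        public void configure(JobConf job) {}

        @Override
        public void close() {}
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Tuple2<LongWritable, Text>> input = env.fromElements(
                Tuple2.of(new LongWritable(0L), new Text("hello")));
        // The mapred Mapper is wrapped as a regular Flink FlatMapFunction.
        DataSet<Tuple2<LongWritable, Text>> result =
                input.flatMap(new HadoopMapFunction<>(new UpperCaser()));
        result.print();
    }
}
{code}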

> Provide HadoopMapFunction for org.apache.hadoop.mapreduce.Mapper
> 
>
> Key: FLINK-11733
> URL: https://issues.apache.org/jira/browse/FLINK-11733
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hadoop Compatibility
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> Currently, Flink only supports 
> {{org.apache.flink.hadoopcompatibility.mapred.Mapper}} in the module 
> flink-hadoop-compatibility. I think we also need to support Hadoop's new 
> Mapper API: {{org.apache.hadoop.mapreduce.Mapper}}. We can implement a new 
> {{HadoopMapFunction}} to wrap {{org.apache.hadoop.mapreduce.Mapper}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot edited a comment on issue #7975: [FLINK-11901][build] Update NOTICE files with year 2019

2019-03-13 Thread GitBox
flinkbot edited a comment on issue #7975:  [FLINK-11901][build] Update NOTICE 
files with year 2019
URL: https://github.com/apache/flink/pull/7975#issuecomment-472424089
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @zentol [PMC]
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @zentol [PMC]
   * ❓ 3. Needs [attention] from.
   * ✅ 4. The change fits into the overall [architecture].
   - Approved by @zentol [PMC]
   * ✅ 5. Overall code [quality] is good.
   - Approved by @zentol [PMC]
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] twalthr commented on issue #7975: [FLINK-11901][build] Update NOTICE files with year 2019

2019-03-13 Thread GitBox
twalthr commented on issue #7975:  [FLINK-11901][build] Update NOTICE files 
with year 2019
URL: https://github.com/apache/flink/pull/7975#issuecomment-472432284
 
 
   Thanks @zentol. Will merge this to master and 1.8 branch...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11901) Update the year in NOTICE files

2019-03-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11901:
---
Labels: pull-request-available  (was: )

> Update the year in NOTICE files
> ---
>
> Key: FLINK-11901
> URL: https://issues.apache.org/jira/browse/FLINK-11901
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Blocker
>  Labels: pull-request-available
>
> The {{NOTICE}} files are still starting with:
> {code}
> flink-table-planner
> Copyright 2014-2018 The Apache Software Foundation
> {code}
> We need to update the year to 2019.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] sunhaibotb commented on a change in pull request #7959: [FLINK-11876] Introduce new TwoInputSelectable, BoundedOneInput and BoundedTwoInput interfaces

2019-03-13 Thread GitBox
sunhaibotb commented on a change in pull request #7959: [FLINK-11876] Introduce 
new TwoInputSelectable, BoundedOneInput and BoundedTwoInput interfaces
URL: https://github.com/apache/flink/pull/7959#discussion_r265115073
 
 

 ##
 File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/BoundedTwoInput.java
 ##
 @@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+/**
+ * Interface for the two-input operator that can process EndOfInput event.
+ */
+@PublicEvolving
+public interface BoundedTwoInput {
+
+   /**
+* It is notified that no more data will arrive on the input identified
+* by {@code inputId}.
+*/
+   void endInput(InputIdentifier inputId) throws Exception;
 
 Review comment:
   Okay. I think that `BoundedTwoInput` can be renamed to `BoundedInput`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #7975: [FLINK-11901][build] Update NOTICE files with year 2019

2019-03-13 Thread GitBox
flinkbot commented on issue #7975:  [FLINK-11901][build] Update NOTICE files 
with year 2019
URL: https://github.com/apache/flink/pull/7975#issuecomment-472424089
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] twalthr opened a new pull request #7975: [FLINK-11901][build] Update NOTICE files with year 2019

2019-03-13 Thread GitBox
twalthr opened a new pull request #7975:  [FLINK-11901][build] Update NOTICE 
files with year 2019
URL: https://github.com/apache/flink/pull/7975
 
 
   ## What is the purpose of the change
   
   Provides a script for updating the NOTICE file year and updates it to 2019.
   
   
   ## Brief change log
   
   - Script added
   - NOTICE files updated
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunhaibotb commented on a change in pull request #7959: [FLINK-11876] Introduce new TwoInputSelectable, BoundedOneInput and BoundedTwoInput interfaces

2019-03-13 Thread GitBox
sunhaibotb commented on a change in pull request #7959: [FLINK-11876] Introduce 
new TwoInputSelectable, BoundedOneInput and BoundedTwoInput interfaces
URL: https://github.com/apache/flink/pull/7959#discussion_r265126375
 
 

 ##
 File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/BoundedTwoInput.java
 ##
 @@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+/**
+ * Interface for the two-input operator that can process EndOfInput event.
+ */
+@PublicEvolving
+public interface BoundedTwoInput {
+
+   /**
+* It is notified that no more data will arrive on the input identified
+* by {@code inputId}.
+*/
+   void endInput(InputIdentifier inputId) throws Exception;
 
 Review comment:
   For a one-input operator, this interface method does not need a parameter, 
and taking one seems confusing.
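
   A minimal sketch of the parameter-less one-input counterpart (hypothetical 
shape, following the naming in this PR):

   ```java
   /**
    * Interface for the one-input operator that can process the EndOfInput event.
    */
   @PublicEvolving
   public interface BoundedOneInput {

       /**
        * It is notified that no more data will arrive on the (single) input.
        */
       void endInput() throws Exception;
   }
   ```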


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #7974: [FLINK-11902][rest] Do not wrap all exceptions in RestHandlerException

2019-03-13 Thread GitBox
flinkbot commented on issue #7974: [FLINK-11902][rest] Do not wrap all 
exceptions in RestHandlerException 
URL: https://github.com/apache/flink/pull/7974#issuecomment-472421030
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11902) JarRunHandler wraps all exceptions in a RestHandlerException

2019-03-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11902:
---
Labels: pull-request-available  (was: )

> JarRunHandler wraps all exceptions in a RestHandlerException
> 
>
> Key: FLINK-11902
> URL: https://issues.apache.org/jira/browse/FLINK-11902
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.7.2, 1.8.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.3, 1.8.1
>
>
> The {{JarRunHandler}} wraps every exception during job submission in a 
> {{RestHandlerException}}, which should only be used for exceptions where we 
> are aware of the underlying cause.
> As a result the stacktraces from exceptions thrown during job-submission are 
> not forwarded to users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] zentol opened a new pull request #7974: [FLINK-11902][rest] Do not wrap all exceptions in RestHandlerException

2019-03-13 Thread GitBox
zentol opened a new pull request #7974: [FLINK-11902][rest] Do not wrap all 
exceptions in RestHandlerException 
URL: https://github.com/apache/flink/pull/7974
 
 
   
   
   ## What is the purpose of the change
   
   Modifies the `JarRunHandler` to not blanket-wrap all exceptions in a 
`RestHandlerException`, syncing the behavior with the `JobSubmitHandler`. The 
blanket wrapping prevented stacktraces from being forwarded to users.
   
   Additionally contains a small hotfix to improve the logging message when 
dealing with unhandled exceptions.
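
   The gist of the change, as a minimal self-contained sketch (the exception 
types are stand-ins, not the actual Flink classes):

   ```java
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.CompletionException;

   final class SelectiveWrapping {

       /** Stand-in for RestHandlerException: a failure we understand and can message nicely. */
       static final class HandlerException extends RuntimeException {
           HandlerException(String message, Throwable cause) { super(message, cause); }
       }

       static CompletableFuture<String> submit(CompletableFuture<String> jobSubmission) {
           return jobSubmission.handle((result, failure) -> {
               if (failure == null) {
                   return result;
               }
               if (failure instanceof IllegalArgumentException) {
                   // A known cause: wrap with a user-facing message, keeping the cause.
                   throw new HandlerException("Invalid jar arguments", failure);
               }
               // Everything else propagates unwrapped so its full stack trace reaches the client.
               throw new CompletionException(failure);
           });
       }
   }
   ```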
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-11904) Improve MemoryStateBackendTest by using JUnit's Parameterized

2019-03-13 Thread Congxian Qiu(klion26) (JIRA)
Congxian Qiu(klion26) created FLINK-11904:
-

 Summary: Improve MemoryStateBackendTest by using JUnit's 
Parameterized
 Key: FLINK-11904
 URL: https://issues.apache.org/jira/browse/FLINK-11904
 Project: Flink
  Issue Type: Test
  Components: Tests
Reporter: Congxian Qiu(klion26)
Assignee: Congxian Qiu(klion26)


Currently, there are two classes, {{MemoryStateBackendTest}} and 
{{AsyncMemoryStateBackendTest}}; the only difference between them is whether 
async mode is used.

We can improve this by using JUnit's Parameterized, as sketched below.
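
A minimal sketch of the merged test, assuming the backend constructor flag is 
the only difference between the two classes:
{code:java}
import org.apache.flink.runtime.state.memory.MemoryStateBackend;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import java.util.Arrays;
import java.util.Collection;

import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class MemoryStateBackendModeTest {

    @Parameters(name = "async = {0}")
    public static Collection<Boolean> asyncModes() {
        return Arrays.asList(true, false);
    }

    private final boolean asyncMode;

    public MemoryStateBackendModeTest(boolean asyncMode) {
        this.asyncMode = asyncMode;
    }

    @Test
    public void backendUsesConfiguredSnapshotMode() {
        // Each test body runs once per parameter; the flag replaces the duplicated class.
        MemoryStateBackend backend = new MemoryStateBackend(asyncMode);
        assertEquals(asyncMode, backend.isUsingAsynchronousSnapshots());
    }
}
{code}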



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

