[GitHub] [flink] FangYongs closed pull request #21965: [FLINK-31121] Support discarding too large records in kafka sink

2023-03-24 Thread via GitHub


FangYongs closed pull request #21965: [FLINK-31121] Support discarding too 
large records in kafka sink
URL: https://github.com/apache/flink/pull/21965


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] FangYongs commented on pull request #21965: [FLINK-31121] Support discarding too large records in kafka sink

2023-03-24 Thread via GitHub


FangYongs commented on PR #21965:
URL: https://github.com/apache/flink/pull/21965#issuecomment-1483726416

   Thanks @tzulitai, I will close this PR and open a new one.
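For context, the idea behind "discarding too large records" can be sketched with a minimal, hypothetical filter. All names and the size limit below are illustrative assumptions, not the actual PR code (which would live in Flink's Kafka sink writer): records whose serialized size exceeds a configured limit are dropped and counted instead of failing the job.

```java
// Hypothetical sketch of "discard too-large records" in a Kafka sink:
// instead of letting the producer fail with a record-too-large error,
// drop the record and count it. All names here are illustrative only.
final class TooLargeRecordFilter {
    private final int maxRecordSizeBytes;
    private long discardedRecords;

    TooLargeRecordFilter(int maxRecordSizeBytes) {
        this.maxRecordSizeBytes = maxRecordSizeBytes;
    }

    /** Returns true if the record may be sent, false if it was discarded. */
    boolean accept(byte[] serializedValue) {
        if (serializedValue != null && serializedValue.length > maxRecordSizeBytes) {
            discardedRecords++; // a real sink would also bump a metric and log
            return false;
        }
        return true;
    }

    long getDiscardedRecords() {
        return discardedRecords;
    }
}
```

A real implementation would tie the limit to the producer's `max.request.size` and expose the counter as a sink metric; this sketch only shows the drop-and-count decision.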





[GitHub] [flink] DavidLiu001 commented on a diff in pull request #22241: [FLINK-31542][connectors] Fix FatalExceptionClassifier semantics of r…

2023-03-24 Thread via GitHub


DavidLiu001 commented on code in PR #22241:
URL: https://github.com/apache/flink/pull/22241#discussion_r1148303506


##
flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/throwable/FatalExceptionClassifier.java:
##
@@ -44,6 +44,8 @@ public FatalExceptionClassifier(
 public boolean isFatal(Throwable err, Consumer<Exception> throwableConsumer) {
 if (validator.test(err)) {
 throwableConsumer.accept(throwableMapper.apply(err));
+return true;
+} else {

Review Comment:
   > And should we add unit tests for this method?
   
   The previous test cases related to `FatalExceptionClassifier` already cover the 
`isFatal` method; this PR is a minor change.






[GitHub] [flink] DavidLiu001 commented on a diff in pull request #22241: [FLINK-31542][connectors] Fix FatalExceptionClassifier semantics of r…

2023-03-24 Thread via GitHub


DavidLiu001 commented on code in PR #22241:
URL: https://github.com/apache/flink/pull/22241#discussion_r1148303070


##
flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/throwable/FatalExceptionClassifier.java:
##
@@ -44,6 +44,8 @@ public FatalExceptionClassifier(
 public boolean isFatal(Throwable err, Consumer<Exception> throwableConsumer) {
 if (validator.test(err)) {
 throwableConsumer.accept(throwableMapper.apply(err));
+return true;
+} else {

Review Comment:
   > I don't think we should return `false` right after `validator.test` is `false`, 
as it won't go into the `chainedClassifier` check. Instead, we should return false 
at line 55.
   
   Good comment; the PR was updated. One small difference: the `chainedClassifier` 
check already covers both `false` and `true`, so it seems the new `return false` is 
not needed.






[jira] [Commented] (FLINK-31006) job is not finished when using pipeline mode to run bounded source like kafka/pulsar

2023-03-24 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704869#comment-17704869
 ] 

Ran Tao commented on FLINK-31006:
-

[~tzulitai] Not the same problem; this issue is that noMoreSplit was not reset 
during JM failover. [~jackylau] Since the Kafka connector in the Flink main repo is 
code-frozen, could you close this PR and create a new one in the 
flink-connector-kafka repo if you can reproduce it? WDYT?

> job is not finished when using pipeline mode to run bounded source like 
> kafka/pulsar
> 
>
> Key: FLINK-31006
> URL: https://issues.apache.org/jira/browse/FLINK-31006
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: image-2023-02-10-13-20-52-890.png, 
> image-2023-02-10-13-23-38-430.png, image-2023-02-10-13-24-46-929.png, 
> image-2023-03-04-01-04-18-658.png, image-2023-03-04-01-05-25-335.png, 
> image-2023-03-04-01-07-04-927.png, image-2023-03-04-01-07-36-168.png, 
> image-2023-03-04-01-08-29-042.png, image-2023-03-04-01-09-24-199.png
>
>
> When I do failover tests (like killing the JM/TM) while using pipeline mode to run 
> a bounded source like Kafka, I found the job does not finish, even after every 
> partition's data has been consumed.
>  
> After digging into the code, I found this logic does not run when the JM recovers: 
> the partition infos are unchanged, so noMoreNewPartitionSplits is not set to 
> true, and then this will not run:
>  
> !image-2023-02-10-13-23-38-430.png!
>  
> !image-2023-02-10-13-24-46-929.png!
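The failure mode in the quoted report can be illustrated with a minimal sketch. The class and method names below are hypothetical simplifications (the real code is Flink's Kafka source enumerator): if the "no more splits" flag is only set when *new* partitions are discovered, a recovered JobManager that restores an unchanged partition set from the checkpoint never sets the flag, so a bounded job never finishes. One possible fix, sketched here under that assumption, derives the flag from boundedness rather than from a partition delta:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the bug: a bounded-source enumerator that only
// signals "no more splits" when it discovers NEW partitions. After a
// JobManager failover the partition set is unchanged, so the signal is
// never sent and the job never finishes.
final class BoundedEnumeratorSketch {
    private final boolean bounded;
    private final Set<String> knownPartitions = new HashSet<>();
    boolean noMoreNewPartitionSplits;

    BoundedEnumeratorSketch(boolean bounded) {
        this.bounded = bounded;
    }

    // After JM failover the partition set comes back from the checkpoint,
    // but the (transient) flag does not.
    void restoreState(Set<String> checkpointedPartitions) {
        knownPartitions.addAll(checkpointedPartitions);
    }

    // Buggy variant: the flag is only set inside the "new partitions" branch.
    void onPartitionDiscoveryBuggy(Set<String> discovered) {
        Set<String> fresh = new HashSet<>(discovered);
        fresh.removeAll(knownPartitions);
        if (!fresh.isEmpty()) {
            knownPartitions.addAll(fresh);
            if (bounded) {
                noMoreNewPartitionSplits = true;
            }
        }
        // else: nothing changed (e.g. after JM recovery) -> flag stays false
    }

    // Fixed variant: for a bounded source, one completed discovery pass is
    // enough to know there will be no more splits, changed or not.
    void onPartitionDiscoveryFixed(Set<String> discovered) {
        knownPartitions.addAll(discovered);
        if (bounded) {
            noMoreNewPartitionSplits = true;
        }
    }
}
```

This is only a model of the state machine, not the actual enumerator code; it shows why the flag must not depend on observing a partition-set change.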



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-web] EricBrzezenski commented on pull request #625: [FLINK-31081][Documentation] Updating Jira Dashboard links to point to the #Issue-tracker

2023-03-24 Thread via GitHub


EricBrzezenski commented on PR #625:
URL: https://github.com/apache/flink-web/pull/625#issuecomment-1483698185

   Removed all html changes, and fixed the links in the Chinese documentation.





[GitHub] [flink] liuyongvs commented on pull request #21909: [FLINK-31006][connector/kafka] Fix noMoreNewPartitionSplits is not se…

2023-03-24 Thread via GitHub


liuyongvs commented on PR #21909:
URL: https://github.com/apache/flink/pull/21909#issuecomment-1483681339

   @tzulitai I talked with @PatrickRen offline; we did not have a suitable way to 
fix it before, and we will solve it next week.





[GitHub] [flink] liuyongvs commented on pull request #22144: [FLINK-31102][table] Add ARRAY_REMOVE function.

2023-03-24 Thread via GitHub


liuyongvs commented on PR #22144:
URL: https://github.com/apache/flink/pull/22144#issuecomment-1483680915

   hi @snuyanzin, will it be merged now?





[GitHub] [flink] liuyongvs commented on pull request #22250: [FLINK-26945][table] Add built-in DATE_SUB function.

2023-03-24 Thread via GitHub


liuyongvs commented on PR #22250:
URL: https://github.com/apache/flink/pull/22250#issuecomment-1483679919

   @flinkbot  run azure





[GitHub] [flink] liuyongvs commented on pull request #21909: [FLINK-31006][connector/kafka] Fix noMoreNewPartitionSplits is not se…

2023-03-24 Thread via GitHub


liuyongvs commented on PR #21909:
URL: https://github.com/apache/flink/pull/21909#issuecomment-1483678936

   hi @tzulitai, it is a different problem.





[GitHub] [flink] vtkhanh commented on a diff in pull request #22241: [FLINK-31542][connectors] Fix FatalExceptionClassifier semantics of r…

2023-03-24 Thread via GitHub


vtkhanh commented on code in PR #22241:
URL: https://github.com/apache/flink/pull/22241#discussion_r1148126469


##
flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/throwable/FatalExceptionClassifier.java:
##
@@ -44,6 +44,8 @@ public FatalExceptionClassifier(
 public boolean isFatal(Throwable err, Consumer<Exception> throwableConsumer) {
 if (validator.test(err)) {
 throwableConsumer.accept(throwableMapper.apply(err));
+return true;
+} else {

Review Comment:
   And should we add unit tests for this method?






[GitHub] [flink] vtkhanh commented on a diff in pull request #22241: [FLINK-31542][connectors] Fix FatalExceptionClassifier semantics of r…

2023-03-24 Thread via GitHub


vtkhanh commented on code in PR #22241:
URL: https://github.com/apache/flink/pull/22241#discussion_r1148124903


##
flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/throwable/FatalExceptionClassifier.java:
##
@@ -44,6 +44,8 @@ public FatalExceptionClassifier(
 public boolean isFatal(Throwable err, Consumer<Exception> throwableConsumer) {
 if (validator.test(err)) {
 throwableConsumer.accept(throwableMapper.apply(err));
+return true;
+} else {

Review Comment:
   I don't think we should return `false` right after `validator.test` is `false`, 
as it won't go into the `chainedClassifier` check. Instead, we should return false 
at line 55.






[jira] [Comment Edited] (FLINK-31006) job is not finished when using pipeline mode to run bounded source like kafka/pulsar

2023-03-24 Thread Tzu-Li (Gordon) Tai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704810#comment-17704810
 ] 

Tzu-Li (Gordon) Tai edited comment on FLINK-31006 at 3/24/23 9:00 PM:
--

Is this subsumed by https://issues.apache.org/jira/browse/FLINK-31319 (merged)? 
If yes, can we close this ticket as a duplicate?


was (Author: tzulitai):
Is this subsumed by https://issues.apache.org/jira/browse/FLINK-31319 (merged)?



[jira] [Commented] (FLINK-31006) job is not finished when using pipeline mode to run bounded source like kafka/pulsar

2023-03-24 Thread Tzu-Li (Gordon) Tai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704810#comment-17704810
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-31006:
-

Is this subsumed by https://issues.apache.org/jira/browse/FLINK-31319 (merged)?



[GitHub] [flink] tzulitai commented on pull request #21909: [FLINK-31006][connector/kafka] Fix noMoreNewPartitionSplits is not se…

2023-03-24 Thread via GitHub


tzulitai commented on PR #21909:
URL: https://github.com/apache/flink/pull/21909#issuecomment-1483399122

   I believe that this PR is subsumed by 
https://issues.apache.org/jira/browse/FLINK-31319.
   Can we close this PR @liuyongvs?





[GitHub] [flink-connector-kafka] tzulitai commented on pull request #5: [FLINK-25916][connector-kafka] Using upsert-kafka with a flush buffer…

2023-03-24 Thread via GitHub


tzulitai commented on PR #5:
URL: 
https://github.com/apache/flink-connector-kafka/pull/5#issuecomment-1483388583

   Hey folks @paul8263 @jaumebecks @mhv666, I can try to review this next week 
after some other work; I will need a bit of time to catch up. Thanks for your 
patience.
   





[jira] [Comment Edited] (FLINK-31599) Update Kafka dependency in flink-connector-kafka to 3.4.0

2023-03-24 Thread Tzu-Li (Gordon) Tai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704803#comment-17704803
 ] 

Tzu-Li (Gordon) Tai edited comment on FLINK-31599 at 3/24/23 8:34 PM:
--

Merged to {{apache/flink-connector-kafka:main}} via 
06789ec7b86a3b2edae1dc7f9f29d672ebd3610b


was (Author: tzulitai):
Merged to `apache/flink-connector-kafka:main` via 
06789ec7b86a3b2edae1dc7f9f29d672ebd3610b

> Update Kafka dependency in flink-connector-kafka to 3.4.0
> -
>
> Key: FLINK-31599
> URL: https://issues.apache.org/jira/browse/FLINK-31599
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Reporter: Alex Sorokoumov
>Assignee: Alex Sorokoumov
>Priority: Minor
>  Labels: pull-request-available
> Fix For: kafka-5.0.0
>
>
> There are a number of reasons to upgrade to the latest version.
>  
> First, the Kafka connector uses reflection, so internal changes in Kafka 
> clients' implementation might require changes in the connector. With more 
> frequent upgrades, the amount of work per upgrade is smaller.
>  
> Second, there were a number of relevant bug fixes since 3.2.3:
> * [KAFKA-14303] - Producer.send without record key and batch.size=0 goes into 
> infinite loop
> * [KAFKA-14379] - consumer should refresh preferred read replica on update 
> metadata
> * [KAFKA-14422] - Consumer rebalance stuck after new static member joins a 
> group with members not supporting static members
> * [KAFKA-14417] - Producer doesn't handle REQUEST_TIMED_OUT for 
> InitProducerIdRequest, treats as fatal error
> * [KAFKA-14532] - Correctly handle failed fetch when partitions unassigned
> * 





[jira] [Updated] (FLINK-31599) Update Kafka dependency in flink-connector-kafka to 3.4.0

2023-03-24 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-31599:

Fix Version/s: kafka-5.0.0



[jira] [Resolved] (FLINK-31599) Update Kafka dependency in flink-connector-kafka to 3.4.0

2023-03-24 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai resolved FLINK-31599.
-
Resolution: Fixed



[jira] [Commented] (FLINK-31599) Update Kafka dependency in flink-connector-kafka to 3.4.0

2023-03-24 Thread Tzu-Li (Gordon) Tai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704803#comment-17704803
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-31599:
-

Merged to `apache/flink-connector-kafka:main` via 
06789ec7b86a3b2edae1dc7f9f29d672ebd3610b



[GitHub] [flink-connector-kafka] tzulitai closed pull request #11: [FLINK-31599] Update kafka version to 3.4.0

2023-03-24 Thread via GitHub


tzulitai closed pull request #11: [FLINK-31599] Update kafka version to 3.4.0
URL: https://github.com/apache/flink-connector-kafka/pull/11





[GitHub] [flink-connector-kafka] tzulitai commented on pull request #11: [FLINK-31599] Update kafka version to 3.4.0

2023-03-24 Thread via GitHub


tzulitai commented on PR #11:
URL: 
https://github.com/apache/flink-connector-kafka/pull/11#issuecomment-1483354881

   Tested locally and everything passes. +1 LGTM. I re-triggered CI, will wait 
for a green light just to be sure.





[jira] [Updated] (FLINK-31599) Update Kafka dependency in flink-connector-kafka to 3.4.0

2023-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31599:
---
Labels: pull-request-available  (was: )



[GitHub] [flink-connector-kafka] Gerrrr commented on pull request #11: [FLINK-31599] Update kafka version to 3.4.0

2023-03-24 Thread via GitHub


Gerrrr commented on PR #11:
URL: 
https://github.com/apache/flink-connector-kafka/pull/11#issuecomment-1483281333

   The failed test was flaky prior to this patch: 
https://issues.apache.org/jira/browse/FLINK-25451.





[jira] [Assigned] (FLINK-31542) FatalExceptionClassifier#isFatal returns false if the exception is fatal

2023-03-24 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-31542:
-

Assignee: DavidLiu

> FatalExceptionClassifier#isFatal returns false if the exception is fatal
> 
>
> Key: FLINK-31542
> URL: https://issues.apache.org/jira/browse/FLINK-31542
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.16.0
>Reporter: Samuel Siebenmann
>Assignee: DavidLiu
>Priority: Minor
>  Labels: pull-request-available
>
> FatalExceptionClassifier#isFatal returns `false` if the passed throwable is 
> fatal and `true` if it is not:
> {code:java}
> public boolean isFatal(Throwable err, Consumer<Exception> throwableConsumer) {
> if (validator.test(err)) {
> throwableConsumer.accept(throwableMapper.apply(err));   
> return false;
> }
> if (chainedClassifier != null) { 
> return chainedClassifier.isFatal(err, throwableConsumer);
> } else {
> return true;
> }
> }
> {code}
> ([github|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/throwable/FatalExceptionClassifier.java#L44])
> However, the semantics of the method would indicate that it should return 
> `true` if the passed throwable is fatal and `false` if it is not (i.e. the 
> opposite of what is currently the case).
> Additionally, the method name doesn't clearly indicate its side effects.
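The corrected semantics described in the issue can be sketched as a minimal, self-contained classifier. This is a hypothetical simplification of the real class (which additionally offers static factory and chaining helpers), shown only to make the intended return values concrete: `true` when the throwable is fatal, delegation to the chain otherwise, `false` when nothing matches.

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;

// Minimal sketch of a fatal-exception classifier with the CORRECTED
// semantics: isFatal returns true when the throwable IS fatal.
final class SimpleFatalExceptionClassifier {
    private final Predicate<Throwable> validator;
    private final Function<Throwable, Exception> throwableMapper;
    private final SimpleFatalExceptionClassifier chainedClassifier;

    SimpleFatalExceptionClassifier(
            Predicate<Throwable> validator,
            Function<Throwable, Exception> throwableMapper,
            SimpleFatalExceptionClassifier chainedClassifier) {
        this.validator = validator;
        this.throwableMapper = throwableMapper;
        this.chainedClassifier = chainedClassifier;
    }

    boolean isFatal(Throwable err, Consumer<Exception> throwableConsumer) {
        if (validator.test(err)) {
            // This classifier matched: report the mapped exception and
            // signal that it is fatal.
            throwableConsumer.accept(throwableMapper.apply(err));
            return true;
        }
        if (chainedClassifier != null) {
            // Delegate to the next classifier in the chain.
            return chainedClassifier.isFatal(err, throwableConsumer);
        }
        // No classifier in the chain matched: not fatal.
        return false;
    }
}
```

Note this inverts the return values of the quoted snippet, matching the semantics the issue says the method name implies.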





[jira] [Commented] (FLINK-31542) FatalExceptionClassifier#isFatal returns false if the exception is fatal

2023-03-24 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704753#comment-17704753
 ] 

Danny Cranmer commented on FLINK-31542:
---

[~DavidLiu001] thanks for picking this up, I will take a look on Monday.



[jira] [Comment Edited] (FLINK-28440) EventTimeWindowCheckpointingITCase failed with restore

2023-03-24 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704750#comment-17704750
 ] 

Matthias Pohl edited comment on FLINK-28440 at 3/24/23 6:26 PM:


I tried to finalize FLINK-31593 but ran into an issue with the 
{{StatefulJobSavepointMigrationITCase}} for Flink version 1.17 with the RocksDB 
state backend and {{SnapshotType.CHECKPOINT}}. The following exception shows up in 
the logs:
{code}
34632 [Flat Map -> Sink: Unnamed (2/4)#30] WARN  org.apache.flink.runtime.taskmanager.Task [] - Flat Map -> Sink: Unnamed (2/4)#30 (98755c0995b5d43eae13ca81da224424_d1392353922252257afa15351a98bae9_1_30) switched from INITIALIZING to FAILED with failure cause:
java.lang.Exception: Exception while creating StreamOperatorStateContext.
    at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:258) ~[classes/:?]
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:256) ~[classes/:?]
    at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106) ~[classes/:?]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:747) ~[classes/:?]
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55) ~[classes/:?]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:722) ~[classes/:?]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:688) ~[classes/:?]
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952) ~[classes/:?]
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921) [classes/:?]
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745) [classes/:?]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562) [classes/:?]
    at java.lang.Thread.run(Thread.java:750) [?:1.8.0_345]
Caused by: org.apache.flink.util.FlinkException: Could not restore keyed state backend for StreamFlatMap_d1392353922252257afa15351a98bae9_(2/4) from any of the 1 provided restore options.
    at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160) ~[classes/:?]
    at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:355) ~[classes/:?]
    at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:166) ~[classes/:?]
    ... 11 more
Caused by: org.apache.flink.runtime.state.BackendBuildingException: Caught unexpected exception.
    at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackendBuilder.build(RocksDBKeyedStateBackendBuilder.java:405) ~[classes/:?]
    at org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createKeyedStateBackend(EmbeddedRocksDBStateBackend.java:510) ~[classes/:?]
    at org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createKeyedStateBackend(EmbeddedRocksDBStateBackend.java:99) ~[classes/:?]
    at org.apache.flink.state.changelog.AbstractChangelogStateBackend.lambda$createKeyedStateBackend$1(AbstractChangelogStateBackend.java:145) ~[classes/:?]
    at org.apache.flink.state.changelog.restore.ChangelogBackendRestoreOperation.restore(ChangelogBackendRestoreOperation.java:72) ~[classes/:?]
    at org.apache.flink.state.changelog.ChangelogStateBackend.restore(ChangelogStateBackend.java:94) ~[classes/:?]
    at org.apache.flink.state.changelog.AbstractChangelogStateBackend.createKeyedStateBackend(AbstractChangelogStateBackend.java:136) ~[classes/:?]
    at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:338)
 ~[classes/:?]
at 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
 ~[classes/:?]
at 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
 ~[classes/:?]
at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:355)
 ~[classes/:?]
at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:166)
 ~[classes/:?]
... 11 more
Caused by: java.io.FileNotFoundException: 

[jira] [Assigned] (FLINK-31611) Add delayed restart to failed jobs

2023-03-24 Thread Matyas Orhidi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matyas Orhidi reassigned FLINK-31611:
-

Assignee: Matyas Orhidi

> Add delayed restart to failed jobs
> --
>
> Key: FLINK-31611
> URL: https://issues.apache.org/jira/browse/FLINK-31611
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Assignee: Matyas Orhidi
>Priority: Major
> Fix For: kubernetes-operator-1.5.0
>
>
> Operator is able to restart failed jobs already using:
> {{kubernetes.operator.job.restart.failed: true}}
> It's beneficial however to keep a failed job around for a while for 
> inspection:
> {{kubernetes.operator.job.restart.failed.delay: 5m}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-kafka] mas-chen commented on pull request #16: [FLINK-30859] Externalize confluent avro related code

2023-03-24 Thread via GitHub


mas-chen commented on PR #16:
URL: 
https://github.com/apache/flink-connector-kafka/pull/16#issuecomment-1483223799

   BTW I noticed that the NOTICE files are outdated. I hope you don't mind that 
I change it in the same PR here: 
[a554982](https://github.com/apache/flink-connector-kafka/pull/16/commits/a55498297a3d20c0ccaf078cd8629ea22e23f820)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-31611) Add delayed restart to failed jobs

2023-03-24 Thread Matyas Orhidi (Jira)
Matyas Orhidi created FLINK-31611:
-

 Summary: Add delayed restart to failed jobs
 Key: FLINK-31611
 URL: https://issues.apache.org/jira/browse/FLINK-31611
 Project: Flink
  Issue Type: New Feature
Reporter: Matyas Orhidi


Operator is able to restart failed jobs already using:
{{kubernetes.operator.job.restart.failed: true}}

It's beneficial, however, to keep a failed job around for a while for inspection:
{{kubernetes.operator.job.restart.failed.delay: 5m}}
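
For illustration, the existing flag and the proposed delay might sit together in the operator's default configuration roughly like this (the `...delay` key is only what this ticket proposes, not a released option):

```yaml
# Existing option: automatically restart jobs that reach a terminal FAILED state.
kubernetes.operator.job.restart.failed: true
# Proposed option (hypothetical until this ticket is implemented):
# keep the failed job around for 5 minutes before restarting it.
kubernetes.operator.job.restart.failed.delay: 5m
```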





[jira] [Updated] (FLINK-31611) Add delayed restart to failed jobs

2023-03-24 Thread Matyas Orhidi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matyas Orhidi updated FLINK-31611:
--
Fix Version/s: kubernetes-operator-1.5.0

> Add delayed restart to failed jobs
> --
>
> Key: FLINK-31611
> URL: https://issues.apache.org/jira/browse/FLINK-31611
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matyas Orhidi
>Priority: Major
> Fix For: kubernetes-operator-1.5.0
>
>
> Operator is able to restart failed jobs already using:
> {{kubernetes.operator.job.restart.failed: true}}
> It's beneficial however to keep a failed job around for a while for 
> inspection:
> {{kubernetes.operator.job.restart.failed.delay: 5m}}





[jira] [Updated] (FLINK-31611) Add delayed restart to failed jobs

2023-03-24 Thread Matyas Orhidi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matyas Orhidi updated FLINK-31611:
--
Component/s: Kubernetes Operator

> Add delayed restart to failed jobs
> --
>
> Key: FLINK-31611
> URL: https://issues.apache.org/jira/browse/FLINK-31611
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Priority: Major
> Fix For: kubernetes-operator-1.5.0
>
>
> Operator is able to restart failed jobs already using:
> {{kubernetes.operator.job.restart.failed: true}}
> It's beneficial however to keep a failed job around for a while for 
> inspection:
> {{kubernetes.operator.job.restart.failed.delay: 5m}}





[GitHub] [flink] snuyanzin commented on a diff in pull request #21934: [FLINK-27998][table] Upgrade Calcite to 1.30.0

2023-03-24 Thread via GitHub


snuyanzin commented on code in PR #21934:
URL: https://github.com/apache/flink/pull/21934#discussion_r1147913809


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/typeutils/SymbolUtil.java:
##
@@ -473,6 +474,11 @@ public final class SymbolUtil {
 // BOUND
 addSymbolMapping(null, null, BoundType.OPEN, "BOUND", "OPEN");
 addSymbolMapping(null, null, BoundType.CLOSED, "BOUND", "CLOSED");
+
+// REX_UNKNOWN_AS
+addSymbolMapping(null, null, RexUnknownAs.TRUE, "REX_UNKNOWN_AS", 
"TRUE");

Review Comment:
   done



##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/SargJsonPlanTest.java:
##
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.api.TableEnvironment;
+import org.apache.flink.table.planner.utils.StreamTableTestUtil;
+import org.apache.flink.table.planner.utils.TableTestBase;
+
+import org.junit.Before;
+import org.junit.Test;
+
+/** Test json serialization for Sarg. */
+public class SargJsonPlanTest extends TableTestBase {

Review Comment:
   done






[GitHub] [flink] snuyanzin commented on a diff in pull request #21934: [FLINK-27998][table] Upgrade Calcite to 1.30.0

2023-03-24 Thread via GitHub


snuyanzin commented on code in PR #21934:
URL: https://github.com/apache/flink/pull/21934#discussion_r1147913544


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/metadata/FlinkRelMdSize.scala:
##
@@ -126,7 +126,7 @@ class FlinkRelMdSize private extends 
MetadataHandler[BuiltInMetadata.Size] {
 // get each column's RexNode (RexLiteral, RexInputRef or null)
 val projectNodes = (0 until fieldCount).map {
   i =>
-val initNode: RexNode = rel.getCluster.getRexBuilder.constantNull()
+val initNode: RexNode = 
rel.getCluster.getRexBuilder.makeNullLiteral(rel.getRowType)

Review Comment:
   done






[GitHub] [flink] rkhachatryan commented on a diff in pull request #22169: [FLINK-31399] AdaptiveScheduler is able to handle changes in job resource requirements.

2023-03-24 Thread via GitHub


rkhachatryan commented on code in PR #22169:
URL: https://github.com/apache/flink/pull/22169#discussion_r1147858602


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveScheduler.java:
##
@@ -1213,4 +1268,17 @@  T transitionToState(StateFactory 
targetState) {
 State getState() {
 return state;
 }
+
+/**
+ * Check for slots that are idle for more than {@link 
JobManagerOptions#SLOT_IDLE_TIMEOUT} and
+ * release them back to the ResourceManager.
+ */
+private void checkIdleSlotTimeout() {
+declarativeSlotPool.releaseIdleSlots(System.currentTimeMillis());
+getMainThreadExecutor()
+.schedule(
+this::checkIdleSlotTimeout,

Review Comment:
   Thanks. But the already scheduled tasks will still be there?
   
   edit: With the added check it's probably NIT though 
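
   The pattern being reviewed — a check that re-schedules itself on an executor — can be sketched as follows. This is a stand-alone illustrative sketch, not Flink's actual `AdaptiveScheduler` code; the names and the `isShutdown()` guard are assumptions standing in for the "added check" mentioned above.
   
   ```java
   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.atomic.AtomicInteger;
   
   /** Illustrative sketch of a self-rescheduling periodic check (not Flink code). */
   public class IdleCheckSketch {
       static final AtomicInteger releasedBatches = new AtomicInteger();
   
       static void checkIdleSlotTimeout(ScheduledExecutorService executor, long periodMs) {
           releasedBatches.incrementAndGet(); // stands in for releasing idle slots
           // Guard against re-scheduling after shutdown; without it, each invocation
           // would keep enqueueing follow-up tasks (the concern raised in the review).
           if (!executor.isShutdown()) {
               executor.schedule(
                       () -> checkIdleSlotTimeout(executor, periodMs),
                       periodMs,
                       TimeUnit.MILLISECONDS);
           }
       }
   
       public static void runFor(long totalMs, long periodMs) {
           ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
           checkIdleSlotTimeout(executor, periodMs);
           try {
               Thread.sleep(totalMs);
           } catch (InterruptedException e) {
               Thread.currentThread().interrupt();
           }
           executor.shutdownNow(); // already-queued follow-up tasks are dropped here
       }
   }
   ```
   
   `shutdownNow()` drains the executor's queue, so any follow-up task scheduled before shutdown is discarded rather than left behind.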






[GitHub] [flink] rkhachatryan commented on a diff in pull request #22251: [FLINK-31591] JobGraphWriter persists JobResourceRequirements

2023-03-24 Thread via GitHub


rkhachatryan commented on code in PR #22251:
URL: https://github.com/apache/flink/pull/22251#discussion_r1147851855


##
flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/DefaultJobGraphStore.java:
##
@@ -241,6 +243,22 @@ public void putJobGraph(JobGraph jobGraph) throws 
Exception {
 LOG.info("Added {} to {}.", jobGraph, jobGraphStateHandleStore);
 }
 
+@Override
+public void putJobResourceRequirements(
+JobID jobId, JobResourceRequirements jobResourceRequirements) 
throws Exception {
+synchronized (lock) {
+@Nullable final JobGraph jobGraph = recoverJobGraph(jobId);
+if (jobGraph == null) {
+throw new FileNotFoundException(
+String.format(
+"JobGraph for job [%s] was not found in 
JobGraphStore and is needed for attaching JobResourceRequirements.",
+jobId));
+}
+JobResourceRequirements.writeToJobGraph(jobGraph, 
jobResourceRequirements);
+putJobGraph(jobGraph);

Review Comment:
   I think you're right, the only impact will be overwriting resource 
requirements, which is acceptable.
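
   The read-modify-write being discussed can be illustrated with a self-contained sketch (a toy map-backed store, not the actual `DefaultJobGraphStore`; all names are illustrative): recover the stored graph under a lock, attach the requirements, and write the whole graph back, so a concurrent retry can at worst overwrite the requirements entry.
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   /** Toy sketch of persisting resource requirements by rewriting a stored job graph. */
   public class JobGraphStoreSketch {
       private final Object lock = new Object();
       // jobId -> job graph, modeled here as a mutable attribute map
       private final Map<String, Map<String, String>> store = new HashMap<>();
   
       public void putJobGraph(String jobId, Map<String, String> jobGraph) {
           synchronized (lock) {
               store.put(jobId, new HashMap<>(jobGraph));
           }
       }
   
       public void putJobResourceRequirements(String jobId, String requirements) {
           synchronized (lock) {
               Map<String, String> jobGraph = store.get(jobId); // "recoverJobGraph"
               if (jobGraph == null) {
                   // Mirrors the FileNotFoundException in the reviewed change.
                   throw new IllegalStateException(
                           "JobGraph for job [" + jobId + "] was not found");
               }
               // Attach the requirements and write the whole graph back; the only
               // effect of a concurrent retry is overwriting this entry.
               jobGraph.put("resource-requirements", requirements);
               store.put(jobId, jobGraph);
           }
       }
   
       public String getRequirements(String jobId) {
           synchronized (lock) {
               return store.get(jobId).get("resource-requirements");
           }
       }
   }
   ```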






[GitHub] [flink] pnowojski commented on pull request #21208: [FLINK-29822] fix wrong description in comments of StreamExecutionEnv…

2023-03-24 Thread via GitHub


pnowojski commented on PR #21208:
URL: https://github.com/apache/flink/pull/21208#issuecomment-1483101289

   @zoltar9264, could you regenerate the docs? 
   
   ```
   Documentation is outdated, please regenerate it according to the 
instructions in flink-docs/README.md
   ```





[GitHub] [flink] chucheng92 commented on pull request #22255: [FLINK-31598][table] Cleanup usage of deprecated TableEnvironment#registerTable

2023-03-24 Thread via GitHub


chucheng92 commented on PR #22255:
URL: https://github.com/apache/flink/pull/22255#issuecomment-1483100671

   > @chucheng92 , yes, I missed updating `SemiAntiJoinStreamITCase`. But I 
will leave `StreamTableEnvironmentImpl` as it is. Because simply replacing the 
`registerTable` with `createTemporaryView` will break the API compatibility 
(path identifier vs arbitrary name string). The usage of `registerTable` can be 
removed when we drop the deprecated methods in `StreamTableEnvironment`.
   
   Just to confirm: the current replacement only covers test classes and a few 
internal classes, and we will not replace the usage in the exposed API until we 
are ready to remove it?





[GitHub] [flink] chucheng92 commented on pull request #22255: [FLINK-31598][table] Cleanup usage of deprecated TableEnvironment#registerTable

2023-03-24 Thread via GitHub


chucheng92 commented on PR #22255:
URL: https://github.com/apache/flink/pull/22255#issuecomment-1483086948

   > @chucheng92 , yes, I missed updating `SemiAntiJoinStreamITCase`. But I 
will leave `StreamTableEnvironmentImpl` as it is. Because simply replacing the 
`registerTable` with `createTemporaryView` will break the API compatibility 
(path identifier vs arbitrary name string). The usage of `registerTable` can be 
removed when we drop the deprecated methods in `StreamTableEnvironment`.
   
   thanks for explanations.





[GitHub] [flink-connector-kafka] chucheng92 commented on pull request #10: [FLINK-30935][connector/kafka] Add kafka serializers version check when using SimpleVersionedSerializer

2023-03-24 Thread via GitHub


chucheng92 commented on PR #10:
URL: 
https://github.com/apache/flink-connector-kafka/pull/10#issuecomment-1483080448

   @tzulitai hi, Gordon, can you help to take a look please?





[GitHub] [flink] wuchong commented on pull request #22255: [FLINK-31598][table] Cleanup usage of deprecated TableEnvironment#registerTable

2023-03-24 Thread via GitHub


wuchong commented on PR #22255:
URL: https://github.com/apache/flink/pull/22255#issuecomment-1483032285

   @chucheng92 , yes, I missed updating `SemiAntiJoinStreamITCase`. But I will 
leave `StreamTableEnvironmentImpl` as it is. Because simply replacing the 
`registerTable` with `createTemporaryView` will break the API compatibility 
(path identifier vs arbitrary name string). The usage of `registerTable` can be 
removed when we drop the deprecated methods in `StreamTableEnvironment`. 
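
   The compatibility break mentioned above can be illustrated without any Flink dependency: `registerTable` took an arbitrary name string, while `createTemporaryView` parses its argument as a dot-separated object path, so a name containing a dot changes meaning. A minimal sketch of the two interpretations (illustrative; this is not Flink's actual path parser):
   
   ```java
   /** Illustrative contrast between a raw-name API and a path-parsing API. */
   public class ViewPathSketch {
       // registerTable-style: the whole string is one table name.
       static String asRawName(String name) {
           return name;
       }
   
       // createTemporaryView-style: the string is a dot-separated path
       // (e.g. catalog.database.view); the last segment is the view name.
       static String asPathLeaf(String path) {
           String[] parts = path.split("\\.");
           return parts[parts.length - 1];
       }
   }
   ```
   
   A table registered as `"my.table"` keeps that full string as its name under the first interpretation, but becomes a view named `table` under the second — which is why the deprecated call cannot simply be swapped out in `StreamTableEnvironmentImpl`.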





[GitHub] [flink] wuchong commented on a diff in pull request #22255: [FLINK-31598][table] Cleanup usage of deprecated TableEnvironment#registerTable

2023-03-24 Thread via GitHub


wuchong commented on code in PR #22255:
URL: https://github.com/apache/flink/pull/22255#discussion_r1147767963


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/utils/TableTestBase.scala:
##
@@ -1185,7 +1185,7 @@ abstract class TableTestUtil(
   name)
 val operation = new RichTableSourceQueryOperation(identifier, tableSource, 
statistic)
 val table = testingTableEnv.createTable(operation)
-testingTableEnv.registerTable(name, table)
+testingTableEnv.createTemporaryView(name, table)

Review Comment:
   Good catch. I will update the comment. 






[GitHub] [flink] tzulitai commented on pull request #21965: [FLINK-31121] Support discarding too large records in kafka sink

2023-03-24 Thread via GitHub


tzulitai commented on PR #21965:
URL: https://github.com/apache/flink/pull/21965#issuecomment-1483014288

   Update: apache/flink-connector-kafka is ready now. @FangYongs, can you close 
this PR and reopen it against the externalized repo? Once you do that, please 
feel free to ping me for a review of this change.





[jira] [Updated] (FLINK-31567) Promote release 1.17

2023-03-24 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-31567:
--
Description: 
Once the release has been finalized (FLINK-31562), the last step of the process 
is to promote the release within the project and beyond. Please wait for 24h 
after finalizing the release in accordance with the [ASF release 
policy|http://www.apache.org/legal/release-policy.html#release-announcements].

*Final checklist to declare this issue resolved:*
 # Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # Release announced on the user@ mailing list.
 # Blog post published, if applicable.
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # Release announced on social media.
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically - at least for minor releases  for both minor 
and major releases)
 # Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} directory
 # Updated the japicmp configuration
 ** corresponding SNAPSHOT branch japicmp reference version set to the just 
released version, and API compatibility checks for {{@PublicEvolving}} were 
enabled
 ** (minor version release only) master branch japicmp reference version set to 
the just released version
 ** (minor version release only) master branch japicmp exclusions have been 
cleared
 # Update the list of previous version in {{docs/config.toml}} on the master 
branch.
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _now deprecated_ Flink version (i.e. 1.15 if 1.17.0 is released)
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]

  was:
Once the release has been finalized (FLINK-31562), the last step of the process 
is to promote the release within the project and beyond. Please wait for 24h 
after finalizing the release in accordance with the [ASF release 
policy|http://www.apache.org/legal/release-policy.html#release-announcements].

*Final checklist to declare this issue resolved:*
 # Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # Release announced on the user@ mailing list.
 # Blog post published, if applicable.
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # Release announced on social media.
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically -at least for minor releases- for both minor 
and major releases)
 # Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} directory
 # Updated the japicmp configuration
 ** corresponding SNAPSHOT branch japicmp reference version set to the just 
released version, and API compatibility checks for {{@PublicEvolving}} were 
enabled
 ** (minor version release only) master branch japicmp reference version set to 
the just released version
 ** (minor version release only) master branch japicmp exclusions have been 
cleared
 # Update the list of previous version in {{docs/config.toml}} on the master 
branch.
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _previous_ Flink version
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]


> Promote release 1.17
> 
>
> Key: FLINK-31567
> URL: https://issues.apache.org/jira/browse/FLINK-31567
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-31562), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue resolved:*
>  # Website pull request to [list the 
> release|http://flink.apache.org/downloads.html] merged
>  # Release announced on the user@ mailing list.
>  # Blog post published, if applicable.
>  # Release recorded in 
> [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
>  # Release announced on social media.
>  # Completion declared on the dev@ mailing list.
>  # Update Homebrew: 
> [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
> automatically -at least for minor releases- for both minor and major 
> releases)
>  # Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} directory
>  # Updated the japicmp 

[jira] [Comment Edited] (FLINK-31567) Promote release 1.17

2023-03-24 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704609#comment-17704609
 ] 

Matthias Pohl edited comment on FLINK-31567 at 3/24/23 3:38 PM:


# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # (/) Release recorded in [reporter.apache.org: 
https://reporter.apache.org/addrelease.html?flink|https://reporter.apache.org/addrelease.html?flink]
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (?) Completion declared on the dev@ mailing list.
 # (/) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically -at least for minor releases- for both minor and major 
releases): [https://formulae.brew.sh/formula/apache-flink#default]
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # (/) Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the 
branch of the _previous_ Flink version: [PR 
#22270|https://github.com/apache/flink/pull/22270] (for 1.15)
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in [a9fa484a|https://github.com/apache/flink/commit/a9fa484a]


was (Author: mapohl):
# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # (/) Release recorded in [reporter.apache.org: 
https://reporter.apache.org/addrelease.html?flink|https://reporter.apache.org/addrelease.html?flink]
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (?) Completion declared on the dev@ mailing list.
 # (/) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically -at least for minor releases- for both minor and major 
releases): https://formulae.brew.sh/formula/apache-flink#default
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # (!) Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the 
branch of the _previous_ Flink version: [PR 
#22269|https://github.com/apache/flink/pull/22269] (for 1.16) and [PR 
#22270|https://github.com/apache/flink/pull/22270] (for 1.15)
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in [a9fa484a|https://github.com/apache/flink/commit/a9fa484a]

> Promote release 1.17
> 
>
> Key: FLINK-31567
> URL: https://issues.apache.org/jira/browse/FLINK-31567
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-31562), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue 

[GitHub] [flink] XComp closed pull request #22269: [FLINK-31567][docs] Deprecates 1.16 docs.

2023-03-24 Thread via GitHub


XComp closed pull request #22269: [FLINK-31567][docs] Deprecates 1.16 docs.
URL: https://github.com/apache/flink/pull/22269





[GitHub] [flink] XComp commented on pull request #22269: [FLINK-31567][docs] Deprecates 1.16 docs.

2023-03-24 Thread via GitHub


XComp commented on PR #22269:
URL: https://github.com/apache/flink/pull/22269#issuecomment-1483009806

   This branch will be closed without merging because we only deprecate the 
docs of versions that are no longer supported.





[GitHub] [flink] XComp commented on a diff in pull request #22269: [FLINK-31567][docs] Deprecates 1.16 docs.

2023-03-24 Thread via GitHub


XComp commented on code in PR #22269:
URL: https://github.com/apache/flink/pull/22269#discussion_r1147755191


##
docs/config.toml:
##
@@ -27,7 +27,7 @@ pygmentsUseClasses = true
   IsStable = true
 
   # Flag to indicate whether an outdated warning should be shown.
-  ShowOutDatedWarning = false
+  ShowOutDatedWarning = true

Review Comment:
   That's actually a good point. The warning itself states
   > This documentation is for an out-of-date version of Apache Flink. We 
recommend you use the latest [stable 
version](https://nightlies.apache.org/flink/flink-docs-stable/).
   
   which backs your claim. And it also makes sense because we can still do 
updates to the docs of 1.16.
   
   The release documentation says "Set show_outdated_warning: true in 
docs/config.toml in the branch of the previous Flink version" which is a bit 
unclear. I'm going to close this branch and update the release docs to make 
this more precise.






[GitHub] [flink] XComp merged pull request #22270: [FLINK-31567][docs] Deprecates 1.15 docs.

2023-03-24 Thread via GitHub


XComp merged PR #22270:
URL: https://github.com/apache/flink/pull/22270





[jira] [Comment Edited] (FLINK-31567) Promote release 1.17

2023-03-24 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704609#comment-17704609
 ] 

Matthias Pohl edited comment on FLINK-31567 at 3/24/23 3:31 PM:


# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # (/) Release recorded in [reporter.apache.org: 
https://reporter.apache.org/addrelease.html?flink|https://reporter.apache.org/addrelease.html?flink]
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (?) Completion declared on the dev@ mailing list.
 # (/) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically -at least for minor releases- for both minor and major 
releases): https://formulae.brew.sh/formula/apache-flink#default
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # (!) Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the 
branch of the _previous_ Flink version: [PR 
#22269|https://github.com/apache/flink/pull/22269] (for 1.16) and [PR 
#22270|https://github.com/apache/flink/pull/22270] (for 1.15)
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in [a9fa484a|https://github.com/apache/flink/commit/a9fa484a]


was (Author: mapohl):
# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # (?) Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (?) Completion declared on the dev@ mailing list.
 # (?) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically -at least for minor releases- for both minor and major releases)
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # (!) Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the 
branch of the _previous_ Flink version: [PR 
#22269|https://github.com/apache/flink/pull/22269] (for 1.16) and [PR 
#22270|https://github.com/apache/flink/pull/22270] (for 1.15)
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in [a9fa484a|https://github.com/apache/flink/commit/a9fa484a]

> Promote release 1.17
> 
>
> Key: FLINK-31567
> URL: https://issues.apache.org/jira/browse/FLINK-31567
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-31562), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue resolved:*
>  # Website pull request to 

[jira] [Closed] (FLINK-30118) Migrate DDB connector Integration Tests/ITCase to E2E module

2023-03-24 Thread Daren Wong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daren Wong closed FLINK-30118.
--
Resolution: Won't Fix

> Migrate DDB connector Integration Tests/ITCase to E2E module
> 
>
> Key: FLINK-30118
> URL: https://issues.apache.org/jira/browse/FLINK-30118
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / DynamoDB
>Reporter: Daren Wong
>Priority: Major
> Fix For: aws-connector-4.2.0
>
>
> Currently DDB connector 
> [ITCase|https://github.com/apache/flink-connector-aws/blob/53ea41008910237073804dc090d67a1e0852163d/flink-connector-dynamodb/src/test/java/org/apache/flink/connector/dynamodb/sink/DynamoDbSinkITCase.java#L77]
>  is implemented whereby it starts a [DDB docker 
> image|https://github.com/apache/flink-connector-aws/blob/main/flink-connector-dynamodb/src/test/java/org/apache/flink/connector/dynamodb/testutils/DynamoDbContainer.java]
>  and run through several test scenarios on it.
> The proposal is to move this ITCase to an e2e test that will be run as part 
> of Github Action. This will help speed up Maven builds without sacrificing 
> integration/e2e test to ensure quality.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] reswqa commented on pull request #22084: [FLINK-31293][runtime] LocalBufferPool request overdraft buffer only when no available buffer and pool size is reached

2023-03-24 Thread via GitHub


reswqa commented on PR #22084:
URL: https://github.com/apache/flink/pull/22084#issuecomment-1482982621

   Thanks @akalash and @1996fanrui for the review.
   
   > Is there an easy way to check that we allocate an overdraft buffer only 
when isRequestedSizeReached? I mean we can create test version of 
networkBufferPool to check this
   
   The main reasons why I don't think there's an easy way are:
   - `NetworkBufferPool` does not distinguish between `overdraft` and 
`non-overdraft` buffer requests.
   - `NetworkBufferPool` itself is not an interface, so it is not easy to make 
a clean mock for it (i.e. `TestingNetworkBufferPool`). It could be done 
through inheritance, but that would still leave our test partly dependent on 
its implementation.
   
   One solution I can think of is: we keep requesting and returning buffers 
until there are `currentPoolSize` buffers in `availableMemorySegment`. Next, we 
intercept the `requestPooledMemorySegment` method of a test version 
of `NetworkBufferPool` and keep requesting buffers. Then we check that this 
method is called only when `isRequestedSizeReached()` is satisfied. Is this 
really worth the complexity? The tests introduced in this PR already cover 
this part, at least to some extent.
   
   Perhaps it is enough if we add more checks to tests like 
`testDecreasePoolSize` to ensure that the condition is met before 
requesting the overdraft buffer. WDYT?
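   As a concrete illustration of the interception idea above, the counting can 
be modeled with plain stand-in classes. This is a hypothetical sketch, not 
Flink code: `StandInGlobalPool` and `StandInLocalPool` are invented stand-ins 
for `NetworkBufferPool`/`LocalBufferPool`, used only to show how a call 
counter lets a test assert that the global pool is hit only when no local 
buffer is available:

```java
import java.util.ArrayDeque;

// Hypothetical stand-in for NetworkBufferPool: counts how often segments are
// requested from the global pool, so a test can assert on it.
class StandInGlobalPool {
    int globalRequests = 0;

    Object requestPooledMemorySegment() {
        globalRequests++;
        return new Object();
    }
}

// Hypothetical stand-in for LocalBufferPool: serves recycled buffers first and
// only falls through to the global pool when none are available.
class StandInLocalPool {
    private final StandInGlobalPool global;
    private final ArrayDeque<Object> available = new ArrayDeque<>();
    private final int poolSize;
    private int requested = 0;

    StandInLocalPool(StandInGlobalPool global, int poolSize) {
        this.global = global;
        this.poolSize = poolSize;
    }

    boolean isRequestedSizeReached() {
        return requested >= poolSize;
    }

    Object requestBuffer() {
        if (!available.isEmpty()) {
            return available.poll(); // reuse a recycled buffer, no global call
        }
        requested++; // beyond poolSize this models an overdraft request
        return global.requestPooledMemorySegment();
    }

    void recycle(Object buffer) {
        available.add(buffer);
    }
}

public class OverdraftInterceptionDemo {
    public static void main(String[] args) {
        StandInGlobalPool global = new StandInGlobalPool();
        StandInLocalPool pool = new StandInLocalPool(global, 2);

        Object b1 = pool.requestBuffer();
        pool.requestBuffer();                      // pool size (2) now reached
        pool.recycle(b1);
        pool.requestBuffer();                      // served locally: no global call
        System.out.println(global.globalRequests); // 2

        pool.requestBuffer();                      // overdraft: global pool is hit
        System.out.println(global.globalRequests); // 3
        System.out.println(pool.isRequestedSizeReached()); // true
    }
}
```

   The same assertions could then be written against the counter instead of 
against implementation details of the pool.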
   
   





[GitHub] [flink] PatrickRen commented on a diff in pull request #22269: [FLINK-31567][docs] Deprecates 1.16 docs.

2023-03-24 Thread via GitHub


PatrickRen commented on code in PR #22269:
URL: https://github.com/apache/flink/pull/22269#discussion_r1147724573


##
docs/config.toml:
##
@@ -27,7 +27,7 @@ pygmentsUseClasses = true
   IsStable = true
 
   # Flag to indicate whether an outdated warning should be shown.
-  ShowOutDatedWarning = false
+  ShowOutDatedWarning = true

Review Comment:
   I'm not sure if we should mark 1.16 as an outdated version. My understanding 
is that only end-of-life versions like 1.15 and 1.16 are "outdated" 🤔 



##
docs/config.toml:
##
@@ -27,7 +27,7 @@ pygmentsUseClasses = true
   IsStable = true
 
   # Flag to indicate whether an outdated warning should be shown.
-  ShowOutDatedWarning = false
+  ShowOutDatedWarning = true

Review Comment:
   I'm not sure if we should mark 1.16 as an outdated version. My understanding 
is that only end-of-life versions like 1.15 and 1.14 are "outdated" 🤔 






[GitHub] [flink] zentol commented on a diff in pull request #22169: [FLINK-31399] AdaptiveScheduler is able to handle changes in job resource requirements.

2023-03-24 Thread via GitHub


zentol commented on code in PR #22169:
URL: https://github.com/apache/flink/pull/22169#discussion_r1147718435


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveScheduler.java:
##
@@ -1213,4 +1268,17 @@  T transitionToState(StateFactory 
targetState) {
 State getState() {
 return state;
 }
+
+/**
+ * Check for slots that are idle for more than {@link 
JobManagerOptions#SLOT_IDLE_TIMEOUT} and
+ * release them back to the ResourceManager.
+ */
+private void checkIdleSlotTimeout() {
+declarativeSlotPool.releaseIdleSlots(System.currentTimeMillis());
+getMainThreadExecutor()
+.schedule(
+this::checkIdleSlotTimeout,

Review Comment:
   > Should the scheduled task be cancelled, e.g. in case of losing leadership? 
Otherwise, won't we get as many scheduled tasks as leadership changes?
   
   I've added another branch to ensure we don't re-schedule the idleness check 
when the job was suspended.
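   
   The guard described here can be sketched in isolation. This is a 
hypothetical, simplified model (names like `SelfReschedulingCheck` are 
invented; it is not the actual AdaptiveScheduler code, which runs on the main 
thread executor): a periodic check re-arms itself only while the job is not 
suspended, so suspension or loss of leadership stops the chain of scheduled 
tasks.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical model of a self-rescheduling periodic check with a guard that
// prevents re-arming after the job was suspended.
public class SelfReschedulingCheck {
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean suspended = new AtomicBoolean(false);
    final AtomicInteger releases = new AtomicInteger();

    void checkIdleSlotTimeout() {
        if (suspended.get()) {
            return; // the extra branch: never re-arm once the job is suspended
        }
        releaseIdleSlots();
        // re-arm the next check; in Flink this would go through the
        // main thread executor with the configured slot idle timeout
        executor.schedule(this::checkIdleSlotTimeout, 10, TimeUnit.MILLISECONDS);
    }

    void releaseIdleSlots() {
        // stands in for declarativeSlotPool.releaseIdleSlots(currentTimeMillis)
        releases.incrementAndGet();
    }

    void suspend() {
        suspended.set(true);
        executor.shutdownNow();
    }

    public static void main(String[] args) {
        SelfReschedulingCheck check = new SelfReschedulingCheck();
        check.checkIdleSlotTimeout();              // runs once and arms the next check
        check.suspend();                           // chain stops; no further releases
        System.out.println(check.releases.get() >= 1); // true
    }
}
```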
   
   






[GitHub] [flink] tzulitai commented on pull request #22228: [FLINK-31049] [flink-connector-kafka]Add support for Kafka record headers to KafkaSink

2023-03-24 Thread via GitHub


tzulitai commented on PR #8:
URL: https://github.com/apache/flink/pull/8#issuecomment-1482955341

   Thanks for opening the PR @AlexAxeman! After you reopen the PR against 
`apache/flink-connector-kafka:main`, please feel free to ping me for a review 
on the change.





[GitHub] [flink] zentol commented on a diff in pull request #22251: [FLINK-31591] JobGraphWriter persists JobResourceRequirements

2023-03-24 Thread via GitHub


zentol commented on code in PR #22251:
URL: https://github.com/apache/flink/pull/22251#discussion_r1147690649


##
flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/DefaultJobGraphStore.java:
##
@@ -241,6 +243,22 @@ public void putJobGraph(JobGraph jobGraph) throws 
Exception {
 LOG.info("Added {} to {}.", jobGraph, jobGraphStateHandleStore);
 }
 
+@Override
+public void putJobResourceRequirements(
+JobID jobId, JobResourceRequirements jobResourceRequirements) 
throws Exception {
+synchronized (lock) {
+@Nullable final JobGraph jobGraph = recoverJobGraph(jobId);
+if (jobGraph == null) {
+throw new FileNotFoundException(
+String.format(
+"JobGraph for job [%s] was not found in 
JobGraphStore and is needed for attaching JobResourceRequirements.",
+jobId));
+}
+JobResourceRequirements.writeToJobGraph(jobGraph, 
jobResourceRequirements);
+putJobGraph(jobGraph);

Review Comment:
   That can indeed happen. It's unlikely to be a _problem_ though, because even 
in the worst case (i.e., a JM is writing the requirements while losing 
leadership, and another one gains leadership and starts reading job graphs 
before that write is complete, which is already unlikely) we might "forget" 
the last set of requirements after a failover.






[jira] [Comment Edited] (FLINK-31610) Refactoring of LocalBufferPool

2023-03-24 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704646#comment-17704646
 ] 

Weijie Guo edited comment on FLINK-31610 at 3/24/23 2:48 PM:
-

Thanks [~akalash] for driving this! 

From my point of view, this is really very meaningful work. {{BufferPool}} is 
a very low-level and basic component in Flink, but it is now so complicated 
that we can hardly be confident that there are no bugs in it. To be honest, 
issues like {{FLINK-31293}} and {{FLINK-29298}} are very difficult to 
troubleshoot, in part because our current implementation is too complex and 
there are no strong guarantees at the code level for some assumptions.

Back to this ticket, the simplifications in the description are in the right 
direction. The existence of {{numberOfRequestedOverdraftMemorySegments}} field 
makes our consistency maintenance very fragile. But there seems to be a small 
problem with removing it: Consider such a scenario, the {{CurrentPoolSize = 
5}}, {{numOfRequestedMemorySegments = 7}}, {{maxOverdraftBuffersPerGate = 2}}. 
If {{numberOfRequestedOverdraftMemorySegments = 0}}, then 2 buffers can be 
requested now. If the counter for requested overdraft-buffers is removed, is it 
still allowed to request buffers in this case, and if so, how many buffers can 
be requested at most?

But it is possible that this is the culprit of the current difficulty in 
maintaining consistency. IIUC, the solution you describe actually changes the 
definition of overdraft buffer from static to dynamic. If we consider the part 
exceeding {{CurrentPoolSize}} as overdraft, then things will be much simpler. 
If this part of overdraft buffers exceeds the upper limit(i.e. 
{{maxOverdraftBuffersPerGate}}), then no more buffers can be requested. 
However, there may be other problems in doing so, at least we have broken the 
previous behavior to some extent. I need to think in more detail about the 
correctness of this solution.

Any other comments are welcome. I'd like to also cc [~kevin.cyj] to see if he 
can give more input.



was (Author: weijie guo):
Thanks [~akalash] for driving this! 

From my point of view, this is really very meaningful work. {{BufferPool}} is 
a very low-level and basic component in Flink, but it is now so complicated 
that we can hardly be confident that there are no bugs in it. To be honest, 
issues like {{FLINK-31293}} and {{FLINK-29298}} are very difficult to 
troubleshoot, in part because our current implementation is too complex and 
there are no strong guarantees at the code level for some assumptions.

Back to this ticket, the simplifications in the description are in the right 
direction. The existence of {{numberOfRequestedOverdraftMemorySegments}} field 
makes our consistency maintenance very fragile. But there seems to be a small 
problem with removing it: Consider such a scenario, the {{CurrentPoolSize = 
5}}, {{numOfRequestedMemorySegments = 7}}, {{maxOverdraftBuffersPerGate = 2}}. 
If {{numberOfRequestedOverdraftMemorySegments = 0}}, then 2 buffers can be 
requested now. If the counter for requested overdraft-buffers is removed, is it 
still allowed to request buffers in this case, and if so, how many buffers can 
be requested at most?

But it is possible that this is the culprit of the current difficulty in 
maintaining consistency. IIUC, the solution you describe actually changes the 
definition of overdraft buffer from static to dynamic. If we consider the part 
exceeding {{CurrentPoolSize}} as overdraft, then things will be much simpler. 
If this part of the buffer exceeds the upper limit(i.e. 
{{maxOverdraftBuffersPerGate}}), then no more buffers can be requested. 
However, there may be other problems in doing so, at least we have broken the 
previous behavior to some extent. I need to think in more detail about the 
correctness of this solution.

Any other comments are welcome. I'd like to also cc [~kevin.cyj] to see if he 
can give more input.
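
A minimal sketch of the "dynamic" definition discussed above (a hypothetical 
model, not Flink code): everything requested beyond {{CurrentPoolSize}} counts 
as overdraft. In the scenario from the comment (CurrentPoolSize = 5, 
numOfRequestedMemorySegments = 7, maxOverdraftBuffersPerGate = 2) the dynamic 
definition already counts 2 overdraft segments, so no further request would be 
allowed, which is exactly where it diverges from the static counter 
(numberOfRequestedOverdraftMemorySegments = 0 would still allow 2 more).

```java
// Hypothetical model of the dynamic overdraft definition, not Flink code:
// overdraft = requested segments beyond the current pool size.
final class DynamicOverdraftModel {
    final int currentPoolSize;
    final int numberOfRequestedMemorySegments;
    final int maxOverdraftBuffersPerGate;

    DynamicOverdraftModel(int poolSize, int requested, int maxOverdraft) {
        this.currentPoolSize = poolSize;
        this.numberOfRequestedMemorySegments = requested;
        this.maxOverdraftBuffersPerGate = maxOverdraft;
    }

    int overdraftSegments() {
        return Math.max(0, numberOfRequestedMemorySegments - currentPoolSize);
    }

    boolean canRequestOverdraftSegment() {
        return overdraftSegments() < maxOverdraftBuffersPerGate;
    }
}

public class DynamicOverdraftDemo {
    public static void main(String[] args) {
        // Scenario from the comment: pool size 5, 7 segments requested,
        // max overdraft 2.
        DynamicOverdraftModel m = new DynamicOverdraftModel(5, 7, 2);
        System.out.println(m.overdraftSegments());          // 2
        System.out.println(m.canRequestOverdraftSegment()); // false
    }
}
```

Under the previous static counter, the same state would still permit two more 
requests, so the two definitions do change behavior to some extent, as noted 
above.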


> Refactoring of LocalBufferPool
> --
>
> Key: FLINK-31610
> URL: https://issues.apache.org/jira/browse/FLINK-31610
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.17.0
>Reporter: Anton Kalashnikov
>Priority: Major
>
> FLINK-31293 bug highlighted the issue with the internal mutual consistency of 
> different fields in LocalBufferPool. ex.:
> -  `numberOfRequestedOverdraftMemorySegments`
> -  `numberOfRequestedMemorySegments`
> -  `availableMemorySegment`
> -  `currentPoolSize`
> Most of the problem was fixed already(I hope) but it is a good idea to 
> reorganize the code in such a way that all invariants between all fields 
> inside will be clearly determined and difficult to break.
> As one example I can propose getting rid of 
> 

[jira] [Comment Edited] (FLINK-31610) Refactoring of LocalBufferPool

2023-03-24 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704646#comment-17704646
 ] 

Weijie Guo edited comment on FLINK-31610 at 3/24/23 2:46 PM:
-

Thanks [~akalash] for driving this! 

From my point of view, this is really very meaningful work. {{BufferPool}} is 
a very low-level and basic component in Flink, but it is now so complicated 
that we can hardly be confident that there are no bugs in it. To be honest, 
issues like {{FLINK-31293}} and {{FLINK-29298}} are very difficult to 
troubleshoot, in part because our current implementation is too complex and 
there are no strong guarantees at the code level for some assumptions.

Back to this ticket, the simplifications in the description are in the right 
direction. The existence of {{numberOfRequestedOverdraftMemorySegments}} field 
makes our consistency maintenance very fragile. But there seems to be a small 
problem with removing it: Consider such a scenario, the {{CurrentPoolSize = 
5}}, {{numOfRequestedMemorySegments = 7}}, {{maxOverdraftBuffersPerGate = 2}}. 
If {{numberOfRequestedOverdraftMemorySegments = 0}}, then 2 buffers can be 
requested now. If the counter for requested overdraft-buffers is removed, is it 
still allowed to request buffers in this case, and if so, how many buffers can 
be requested at most?

But it is possible that this is the culprit of the current difficulty in 
maintaining consistency. IIUC, the solution you describe actually changes the 
definition of overdraft buffer from static to dynamic. If we consider the part 
exceeding {{CurrentPoolSize}} as overdraft, then things will be much simpler. 
If this part of the buffer exceeds the upper limit(i.e. 
{{maxOverdraftBuffersPerGate}}), then no more buffers can be requested. 
However, there may be other problems in doing so, at least we have broken the 
previous behavior to some extent. I need to think in more detail about the 
correctness of this solution.

Any other comments are welcome. I'd like to also cc [~kevin.cyj] to see if he 
can give more input.



was (Author: weijie guo):
Thanks [~akalash] for driving this! 

From my point of view, this is really meaningful work. {{BufferPool}} is a 
very low-level and basic component in Flink, but it is now so complicated 
that we can hardly be confident that there are no bugs in it. To be honest, 
issues like {{FLINK-31293}} and {{FLINK-29298}} are very difficult to 
troubleshoot, in part because our current implementation is too complex and 
there are no strong guarantees at the code level for some assumptions.

Back to this ticket, the simplifications in the description are in the right 
direction. The existence of {{numberOfRequestedOverdraftMemorySegments}} field 
makes our consistency maintenance very fragile. But there seems to be a small 
problem with removing it: Consider such a scenario, the {{CurrentPoolSize = 
5}}, {{numOfRequestedMemorySegments = 7}}, {{maxOverdraftBuffersPerGate = 2}}. 
If {{numberOfRequestedOverdraftMemorySegments = 0}}, then 2 buffers can be 
requested now. If the count of overdraft buffers that have been requested is 
removed, is it still allowed to request buffers in this case, and if so, how 
many buffers can be requested at most?

But it is possible that this is the culprit of the current difficulty in 
maintaining consistency. IIUC, the solution you describe actually changes the 
definition of overdraft buffer from static to dynamic. If we consider the part 
exceeding {{CurrentPoolSize}} as overdraft, then things will be much simpler. 
If this part of the buffer exceeds the upper limit(i.e. 
{{maxOverdraftBuffersPerGate}}), then no more buffers can be requested. 
However, there may be other problems in doing so, at least we have broken the 
previous behavior to some extent. I need to think in more detail about the 
correctness of this solution.

Any other comments are welcome. I'd like to also cc [~kevin.cyj] to see if he 
can give more input.


> Refactoring of LocalBufferPool
> --
>
> Key: FLINK-31610
> URL: https://issues.apache.org/jira/browse/FLINK-31610
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.17.0
>Reporter: Anton Kalashnikov
>Priority: Major
>
> FLINK-31293 bug highlighted the issue with the internal mutual consistency of 
> different fields in LocalBufferPool. ex.:
> -  `numberOfRequestedOverdraftMemorySegments`
> -  `numberOfRequestedMemorySegments`
> -  `availableMemorySegment`
> -  `currentPoolSize`
> Most of the problem was fixed already(I hope) but it is a good idea to 
> reorganize the code in such a way that all invariants between all fields 
> inside will be clearly determined and difficult to break.
> As one example I can propose getting rid of 
> 

[jira] [Commented] (FLINK-31610) Refactoring of LocalBufferPool

2023-03-24 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704646#comment-17704646
 ] 

Weijie Guo commented on FLINK-31610:


Thanks [~akalash] for driving this! 

From my point of view, this is really meaningful work. {{BufferPool}} is a 
very low-level and basic component in Flink, but it is now so complicated 
that we can hardly be confident that there are no bugs in it. To be honest, 
issues like {{FLINK-31293}} and {{FLINK-29298}} are very difficult to 
troubleshoot, in part because our current implementation is too complex and 
there are no strong guarantees at the code level for some assumptions.

Back to this ticket, the simplifications in the description are in the right 
direction. The existence of {{numberOfRequestedOverdraftMemorySegments}} field 
makes our consistency maintenance very fragile. But there seems to be a small 
problem with removing it: Consider such a scenario, the {{CurrentPoolSize = 
5}}, {{numOfRequestedMemorySegments = 7}}, {{maxOverdraftBuffersPerGate = 2}}. 
If {{numberOfRequestedOverdraftMemorySegments = 0}}, then 2 buffers can be 
requested now. If the count of overdraft buffers that have been requested is 
removed, is it still allowed to request buffers in this case, and if so, how 
many buffers can be requested at most?

But it is possible that this is the culprit of the current difficulty in 
maintaining consistency. IIUC, the solution you describe actually changes the 
definition of overdraft buffer from static to dynamic. If we consider the part 
exceeding {{CurrentPoolSize}} as overdraft, then things will be much simpler. 
If this part of the buffer exceeds the upper limit(i.e. 
{{maxOverdraftBuffersPerGate}}), then no more buffers can be requested. 
However, there may be other problems in doing so, at least we have broken the 
previous behavior to some extent. I need to think in more detail about the 
correctness of this solution.

Any other comments are welcome. I'd like to also cc [~kevin.cyj] to see if he 
can give more input.
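
To make the scenario in the comment above concrete, here is a minimal, 
hypothetical sketch of the dynamic overdraft accounting being discussed (the 
class and method names are illustrative, not Flink's actual `LocalBufferPool` 
code): any segment requested beyond `currentPoolSize` counts as overdraft, and 
no separate overdraft counter is kept.

```java
// Hypothetical sketch: overdraft is derived from existing counters instead of
// being tracked in a separate numberOfRequestedOverdraftMemorySegments field.
final class PoolAccountingSketch {
    final int numberOfRequestedMemorySegments;
    final int currentPoolSize;
    final int maxOverdraftBuffersPerGate;

    PoolAccountingSketch(int requested, int poolSize, int maxOverdraft) {
        this.numberOfRequestedMemorySegments = requested;
        this.currentPoolSize = poolSize;
        this.maxOverdraftBuffersPerGate = maxOverdraft;
    }

    // Everything requested beyond the current pool size counts as overdraft.
    int overdraftBuffers() {
        return Math.max(0, numberOfRequestedMemorySegments - currentPoolSize);
    }

    // A further request is allowed only while the overdraft limit is not hit.
    boolean canRequestBuffer() {
        return overdraftBuffers() < maxOverdraftBuffersPerGate;
    }
}
```

With the numbers from the scenario ({{CurrentPoolSize = 5}}, 
{{numOfRequestedMemorySegments = 7}}, {{maxOverdraftBuffersPerGate = 2}}), 
`overdraftBuffers()` is already 2, so no further buffer may be requested, 
whereas a static counter still at 0 would have permitted two more. That 
difference is exactly the behavioral change questioned above.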


> Refactoring of LocalBufferPool
> --
>
> Key: FLINK-31610
> URL: https://issues.apache.org/jira/browse/FLINK-31610
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.17.0
>Reporter: Anton Kalashnikov
>Priority: Major
>
> FLINK-31293 bug highlighted the issue with the internal mutual consistency of 
> different fields in LocalBufferPool. ex.:
> -  `numberOfRequestedOverdraftMemorySegments`
> -  `numberOfRequestedMemorySegments`
> -  `availableMemorySegment`
> -  `currentPoolSize`
> Most of the problem was fixed already(I hope) but it is a good idea to 
> reorganize the code in such a way that all invariants between all fields 
> inside will be clearly determined and difficult to break.
> As one example I can propose getting rid of 
> numberOfRequestedOverdraftMemorySegments and using existing 
> numberOfRequestedMemorySegments instead. That means:
> - the pool will be available when `!availableMemorySegments.isEmpty() && 
> unavailableSubpartitionsCount == 0`
> - we don't request a new `ordinary` buffer when 
> `numberOfRequestedMemorySegments >=  currentPoolSize` but we request the 
> overdraft buffer instead
> - `setNumBuffers` should work automatically without any changes
> I think we can come up with a couple of such improvements to simplify the 
> code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] snuyanzin commented on a diff in pull request #21934: [FLINK-27998][table] Upgrade Calcite to 1.30.0

2023-03-24 Thread via GitHub


snuyanzin commented on code in PR #21934:
URL: https://github.com/apache/flink/pull/21934#discussion_r1147674488


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/runtime/stream/jsonplan/SargJsonPlanITCase.java:
##
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.runtime.stream.jsonplan;
+
+import org.apache.flink.table.planner.factories.TestValuesTableFactory;
+import org.apache.flink.table.planner.utils.JsonPlanTestBase;
+import org.apache.flink.types.Row;
+
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+
+/** Test for Sarg JsonPlan ser/de. */
+public class SargJsonPlanITCase extends JsonPlanTestBase {

Review Comment:
   I put it here since it covers 
`org.apache.flink.table.planner.plan.nodes.exec.serde.RexNodeJsonDeserializer#deserializeSarg`
 while `org.apache.flink.table.planner.plan.nodes.exec.stream.SargJsonPlanTest` 
doesn't



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #22271: [FLINK-28372][rpc] Migrate to Akka Artery

2023-03-24 Thread via GitHub


flinkbot commented on PR #22271:
URL: https://github.com/apache/flink/pull/22271#issuecomment-1482901309

   
   ## CI report:
   
   * d8170dd7b998933bccb11e61e347d9f5fdaed2fb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-28372) Investigate Akka Artery

2023-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28372:
---
Labels: pull-request-available  (was: )

> Investigate Akka Artery
> ---
>
> Key: FLINK-28372
> URL: https://issues.apache.org/jira/browse/FLINK-28372
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / RPC
>Reporter: Chesnay Schepler
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
>
> Our current Akka setup uses the deprecated netty-based stack. We need to 
> eventually migrate to Akka Artery.





[GitHub] [flink] ferenc-csaky opened a new pull request, #22271: [FLINK-28372][rpc] Migrate to Akka Artery

2023-03-24 Thread via GitHub


ferenc-csaky opened a new pull request, #22271:
URL: https://github.com/apache/flink/pull/22271

   ## What is the purpose of the change
   
   Changes the Akka remoting mechanism from the classic Netty-based one to Artery.
   
   
   ## Brief change log
   
   - Akka RPC does not depend on Netty anymore.
   - Changes in Akka configurations, as artery has some different config 
options, but mostly configs that are not needed anymore.
   
   
   ## Verifying this change
   
   After deploying a job, check the job and task managers on the Flink 
dashboard.
   
   This change is already covered by existing tests under the `flink-rpc` 
module.
   
   There are some parts that may require some discussion. I disabled the 
`RemoteAkkaRpcActorTest#failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable`
 test case, because with Artery, lifecycle monitoring is only triggered if the 
two RPC services are on different nodes. Also, in the current iteration I did 
not expose the `watch-failure-detector` related fields in `AkkaOptions`, which 
probably should be done, but first I wanted to get some opinions on the 
current approach in general.
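   
   For illustration, the kind of settings such a migration touches looks 
roughly like the following generic Akka Artery configuration (these are 
standard Akka configuration keys, shown here as a sketch, not the exact 
options wired up in this PR):
   
   ```
   akka.remote.artery {
     # Artery replaces the deprecated classic (netty.tcp) remoting stack.
     enabled = on
     # Supported transports: aeron-udp, tcp, tls-tcp.
     transport = tcp
     canonical.hostname = "127.0.0.1"
     canonical.port = 25520
   }
   ```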
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no (it removes Netty 
from `flink-rpc`)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: yes
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   





[jira] [Assigned] (FLINK-30859) Remove flink-connector-kafka from master branch

2023-03-24 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reassigned FLINK-30859:
---

Assignee: Mason Chen  (was: Tzu-Li (Gordon) Tai)

> Remove flink-connector-kafka from master branch
> ---
>
> Key: FLINK-30859
> URL: https://issues.apache.org/jira/browse/FLINK-30859
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Affects Versions: 1.18.0
>Reporter: Mason Chen
>Assignee: Mason Chen
>Priority: Major
>  Labels: pull-request-available
>
> Remove flink-connector-kafka from master branch since the repo has now been 
> externalized and 1.17 commits have been sync'ed.





[jira] [Commented] (FLINK-30859) Remove flink-connector-kafka from master branch

2023-03-24 Thread Tzu-Li (Gordon) Tai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704643#comment-17704643
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-30859:
-

Reassigned to [~mason6345] as we agreed that he will be doing most of the work 
for removing the code in apache/flink:main

> Remove flink-connector-kafka from master branch
> ---
>
> Key: FLINK-30859
> URL: https://issues.apache.org/jira/browse/FLINK-30859
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Affects Versions: 1.18.0
>Reporter: Mason Chen
>Assignee: Mason Chen
>Priority: Major
>  Labels: pull-request-available
>
> Remove flink-connector-kafka from master branch since the repo has now been 
> externalized and 1.17 commits have been sync'ed.





[GitHub] [flink-kubernetes-operator] mxm merged pull request #554: [FLINK-30575] Limit output ratio in case of near zero values

2023-03-24 Thread via GitHub


mxm merged PR #554:
URL: https://github.com/apache/flink-kubernetes-operator/pull/554





[jira] [Comment Edited] (FLINK-31567) Promote release 1.17

2023-03-24 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704609#comment-17704609
 ] 

Matthias Pohl edited comment on FLINK-31567 at 3/24/23 2:09 PM:


# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # (?) Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (?) Completion declared on the dev@ mailing list.
 # (?) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically - at least for minor releases  for both minor and major releases)
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # (!) Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the 
branch of the _previous_ Flink version: [PR 
#22269|https://github.com/apache/flink/pull/22269] (for 1.16) and [PR 
#22270|https://github.com/apache/flink/pull/22270] (for 1.15)
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in [a9fa484a|https://github.com/apache/flink/commit/a9fa484a]


was (Author: mapohl):
# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (?) Completion declared on the dev@ mailing list.
 # (?) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically - at least for minor releases  for both minor and major releases)
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # (!) Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the 
branch of the _previous_ Flink version: [PR 
#22269|https://github.com/apache/flink/pull/22269] (for 1.16) and [PR 
#22270|https://github.com/apache/flink/pull/22270] (for 1.15)
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in [a9fa484a|https://github.com/apache/flink/commit/a9fa484a]

> Promote release 1.17
> 
>
> Key: FLINK-31567
> URL: https://issues.apache.org/jira/browse/FLINK-31567
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-31562), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue resolved:*
>  # Website pull request to [list the 
> release|http://flink.apache.org/downloads.html] merged
>  # Release announced on the user@ mailing 

[jira] [Updated] (FLINK-31567) Promote release 1.17

2023-03-24 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-31567:
--
Description: 
Once the release has been finalized (FLINK-31562), the last step of the process 
is to promote the release within the project and beyond. Please wait for 24h 
after finalizing the release in accordance with the [ASF release 
policy|http://www.apache.org/legal/release-policy.html#release-announcements].

*Final checklist to declare this issue resolved:*
 # Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # Release announced on the user@ mailing list.
 # Blog post published, if applicable.
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # Release announced on social media.
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically - at least for minor releases  for both minor 
and major releases)
 # Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} directory
 # Updated the japicmp configuration
 ** corresponding SNAPSHOT branch japicmp reference version set to the just 
released version, and API compatibility checks for {{@PublicEvolving}} were 
enabled
 ** (minor version release only) master branch japicmp reference version set to 
the just released version
 ** (minor version release only) master branch japicmp exclusions have been 
cleared
 # Update the list of previous version in {{docs/config.toml}} on the master 
branch.
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _previous_ Flink version
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]

  was:
Once the release has been finalized (FLINK-31562), the last step of the process 
is to promote the release within the project and beyond. Please wait for 24h 
after finalizing the release in accordance with the [ASF release 
policy|http://www.apache.org/legal/release-policy.html#release-announcements].

*Final checklist to declare this issue resolved:*
 # Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # Release announced on the user@ mailing list.
 # Blog post published, if applicable.
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # Release announced on social media.
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically - at least for minor releases  for both minor 
and major releases)
 # Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} directory
 # Updated the japicmp configuration
 ** corresponding SNAPSHOT branch japicmp reference version set to the just 
released version, and API compatibility checks for {{@PublicEvolving}} were 
enabled
 ** (minor version release only) master branch japicmp reference version set to 
the just released version
 ** (minor version release only) master branch japicmp exclusions have been 
cleared
 # Update the list of previous version in {{docs/config.toml}} on the master 
branch.
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _previous_ Flink version
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]
{code:bash}
if [ "${currentBranch}" = "master" ]; then
  echo "flink_alias=release-1.16" >> ${GITHUB_ENV}
elif [ "${currentBranch}" = "release-1.14" ]; then
  echo "flink_alias=stable" >> ${GITHUB_ENV}
fi
->
if [ "${currentBranch}" = "master" ]; then
  echo "flink_alias=release-1.16" >> ${GITHUB_ENV}
elif [ "${currentBranch}" = "release-1.15" ]; then
  echo "flink_alias=stable" >> ${GITHUB_ENV}
fi
{code}

 


> Promote release 1.17
> 
>
> Key: FLINK-31567
> URL: https://issues.apache.org/jira/browse/FLINK-31567
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-31562), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue resolved:*
>  # Website pull request to [list the 
> release|http://flink.apache.org/downloads.html] merged
>  # Release announced on the user@ mailing list.
>  # Blog post published, if applicable.
>  # Release recorded in 
> [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
>  # Release announced 

[jira] [Comment Edited] (FLINK-31567) Promote release 1.17

2023-03-24 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704609#comment-17704609
 ] 

Matthias Pohl edited comment on FLINK-31567 at 3/24/23 2:08 PM:


# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (?) Completion declared on the dev@ mailing list.
 # (?) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically - at least for minor releases  for both minor and major releases)
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # (!) Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the 
branch of the _previous_ Flink version: [PR 
#22269|https://github.com/apache/flink/pull/22269] (for 1.16) and [PR 
#22270|https://github.com/apache/flink/pull/22270] (for 1.15)
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in [a9fa484a|https://github.com/apache/flink/commit/a9fa484a]


was (Author: mapohl):
# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically - at least for minor releases  for both minor 
and major releases)
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _previous_ Flink version: [PR 
#22269|https://github.com/apache/flink/pull/22269] (for 1.16) and [PR 
#22270|https://github.com/apache/flink/pull/22270] (for 1.15)
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]

> Promote release 1.17
> 
>
> Key: FLINK-31567
> URL: https://issues.apache.org/jira/browse/FLINK-31567
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-31562), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue resolved:*
>  # Website pull request to [list the 
> release|http://flink.apache.org/downloads.html] merged
>  # Release announced on the user@ mailing list.
>  # Blog post published, if applicable.
>  # Release recorded in 
> 

[GitHub] [flink] flinkbot commented on pull request #22270: [FLINK-31567][docs] Deprecates 1.15 docs.

2023-03-24 Thread via GitHub


flinkbot commented on PR #22270:
URL: https://github.com/apache/flink/pull/22270#issuecomment-1482859882

   
   ## CI report:
   
   * 15ca4c1977b6d27295cf1a895bf6f44d17cddbfd UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] flinkbot commented on pull request #22269: [FLINK-31567][docs] Deprecates 1.16 docs.

2023-03-24 Thread via GitHub


flinkbot commented on PR #22269:
URL: https://github.com/apache/flink/pull/22269#issuecomment-1482859651

   
   ## CI report:
   
   * f52fd387389f80e0c11a5d720be5430e88246e4e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Comment Edited] (FLINK-31567) Promote release 1.17

2023-03-24 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704609#comment-17704609
 ] 

Matthias Pohl edited comment on FLINK-31567 at 3/24/23 2:02 PM:


# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically - at least for minor releases  for both minor 
and major releases)
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _previous_ Flink version: [PR 
#22269|https://github.com/apache/flink/pull/22269] (for 1.16) and [PR 
#22270|https://github.com/apache/flink/pull/22270] (for 1.15)
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]


was (Author: mapohl):
# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically - at least for minor releases  for both minor 
and major releases)
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _previous_ Flink version
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]

> Promote release 1.17
> 
>
> Key: FLINK-31567
> URL: https://issues.apache.org/jira/browse/FLINK-31567
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-31562), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue resolved:*
>  # Website pull request to [list the 
> release|http://flink.apache.org/downloads.html] merged
>  # Release announced on the user@ mailing list.
>  # Blog post published, if applicable.
>  # Release recorded in 
> [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
>  # Release announced on social media.
>  # Completion declared on the dev@ mailing list.
>  # Update Homebrew: 
> 

[jira] [Assigned] (FLINK-31522) Introduce FlinkResultSet and related classes for jdbc driver

2023-03-24 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li reassigned FLINK-31522:
--

Assignee: Fang Yong

> Introduce FlinkResultSet and related classes for jdbc driver
> 
>
> Key: FLINK-31522
> URL: https://issues.apache.org/jira/browse/FLINK-31522
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Gateway, Table SQL / JDBC
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
>
> Introduce FlinkResultSet and related classes for jdbc driver to support data 
> iterator



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31538) Supports parse catalog/database and properties for uri

2023-03-24 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li reassigned FLINK-31538:
--

Assignee: Fang Yong

> Supports parse catalog/database and properties for uri
> --
>
> Key: FLINK-31538
> URL: https://issues.apache.org/jira/browse/FLINK-31538
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / JDBC
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
>
> Supports parse catalog/database and properties for uri





[GitHub] [flink] XComp opened a new pull request, #22270: [FLINK-31567][docs] Deprecates 1.15 docs.

2023-03-24 Thread via GitHub


XComp opened a new pull request, #22270:
URL: https://github.com/apache/flink/pull/22270

   We missed deprecating the 1.15 docs





[jira] [Comment Edited] (FLINK-31567) Promote release 1.17

2023-03-24 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704609#comment-17704609
 ] 

Matthias Pohl edited comment on FLINK-31567 at 3/24/23 1:54 PM:


# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically - at least for minor releases  for both minor 
and major releases)
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _previous_ Flink version
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]


was (Author: mapohl):
# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged

 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically - at least for minor releases  for both minor 
and major releases)
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _previous_ Flink version
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]

> Promote release 1.17
> 
>
> Key: FLINK-31567
> URL: https://issues.apache.org/jira/browse/FLINK-31567
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-31562), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue resolved:*
>  # Website pull request to [list the 
> release|http://flink.apache.org/downloads.html] merged
>  # Release announced on the user@ mailing list.
>  # Blog post published, if applicable.
>  # Release recorded in 
> [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
>  # Release announced on social media.
>  # Completion declared on the dev@ mailing list.
>  # Update Homebrew: 
> [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
> automatically - at least for minor releases  for both minor and major 
> releases)
> 

[jira] [Commented] (FLINK-31567) Promote release 1.17

2023-03-24 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704609#comment-17704609
 ] 

Matthias Pohl commented on FLINK-31567:
---

# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged

 # (/) Release announced on the user@ mailing list: [announcement 
link|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable: [blog 
post|https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/]
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically - at least for minor releases  for both minor 
and major releases)
 # (/) Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} 
directory: [422fd855|https://github.com/apache/flink-web/commit/422fd855]
 # (/) Updated the japicmp configuration: Done in FLINK-31568
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in 
[a5e05b7|https://github.com/apache/flink/commit/a5e05b713e237e91ca2128a7a517b93083178370]
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _previous_ Flink version
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]

> Promote release 1.17
> 
>
> Key: FLINK-31567
> URL: https://issues.apache.org/jira/browse/FLINK-31567
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-31562), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue resolved:*
>  # Website pull request to [list the 
> release|http://flink.apache.org/downloads.html] merged
>  # Release announced on the user@ mailing list.
>  # Blog post published, if applicable.
>  # Release recorded in 
> [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
>  # Release announced on social media.
>  # Completion declared on the dev@ mailing list.
>  # Update Homebrew: 
> [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
> automatically - at least for minor releases  for both minor and major 
> releases)
>  # Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} directory
>  # Updated the japicmp configuration
>  ** corresponding SNAPSHOT branch japicmp reference version set to the just 
> released version, and API compatibility checks for {{@PublicEvolving}} were 
> enabled
>  ** (minor version release only) master branch japicmp reference version set 
> to the just released version
>  ** (minor version release only) master branch japicmp exclusions have been 
> cleared
>  # Update the list of previous version in {{docs/config.toml}} on the master 
> branch.
>  # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch 
> of the _previous_ Flink version
>  # Update stable and master alias in 
> [https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]
> {code:bash}
> if [ "${currentBranch}" = "master" ]; then
>   echo "flink_alias=release-1.16" >> ${GITHUB_ENV}
> elif [ "${currentBranch}" = "release-1.14" ]; then
>   echo "flink_alias=stable" >> ${GITHUB_ENV}
> fi
> ->
> if [ "${currentBranch}" = "master" ]; then
>   echo "flink_alias=release-1.16" >> ${GITHUB_ENV}
> elif [ "${currentBranch}" = "release-1.15" ]; then
>   echo "flink_alias=stable" >> ${GITHUB_ENV}
> fi
> {code}
>  





[jira] [Commented] (FLINK-31610) Refactoring of LocalBufferPool

2023-03-24 Thread Anton Kalashnikov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704604#comment-17704604
 ] 

Anton Kalashnikov commented on FLINK-31610:
---

[~Weijie Guo], [~fanrui], what do you think about the idea in the description? 
Do you have any other ideas for simplifying the code?

CC [~pnowojski]

> Refactoring of LocalBufferPool
> --
>
> Key: FLINK-31610
> URL: https://issues.apache.org/jira/browse/FLINK-31610
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.17.0
>Reporter: Anton Kalashnikov
>Priority: Major
>
> FLINK-31293 bug highlighted the issue with the internal mutual consistency of 
> different fields in LocalBufferPool. ex.:
> -  `numberOfRequestedOverdraftMemorySegments`
> -  `numberOfRequestedMemorySegments`
> -  `availableMemorySegment`
> -  `currentPoolSize`
> Most of the problems were fixed already (I hope), but it is a good idea to 
> reorganize the code in such a way that all invariants between all fields 
> inside will be clearly determined and difficult to break.
> As one example I can propose getting rid of 
> numberOfRequestedOverdraftMemorySegments and using existing 
> numberOfRequestedMemorySegments instead. That means:
> - the pool will be available when `!availableMemorySegments.isEmpty() && 
> unavailableSubpartitionsCount == 0`
> - we don't request a new `ordinary` buffer when 
> `numberOfRequestedMemorySegments >=  currentPoolSize` but we request the 
> overdraft buffer instead
> - `setNumBuffers` should work automatically without any changes
> I think we can come up with a couple of such improvements to simplify the 
> code.





[jira] [Updated] (FLINK-18996) Avoid disorder for time interval join

2023-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18996:
---
Labels: auto-deprioritized-critical auto-deprioritized-major 
pull-request-available  (was: auto-deprioritized-critical 
auto-deprioritized-major)

> Avoid disorder for time interval join
> -
>
> Key: FLINK-18996
> URL: https://issues.apache.org/jira/browse/FLINK-18996
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Benchao Li
>Assignee: Lyn Zhang
>Priority: Major
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> pull-request-available
> Fix For: 1.18.0
>
>
> Currently, the time interval join will produce data with rowtime later than 
> watermark. If we use the rowtime again in downstream, e.t. window 
> aggregation, we'll lose some data.
>  
> reported from user-zh: 
> [http://apache-flink.147419.n8.nabble.com/Re-flink-interval-join-tc4458.html#none]





[GitHub] [flink] lincoln-lil commented on a diff in pull request #22014: [FLINK-18996][table-runtime] Emit join failed record realtime

2023-03-24 Thread via GitHub


lincoln-lil commented on code in PR #22014:
URL: https://github.com/apache/flink/pull/22014#discussion_r1147543075


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java:
##
@@ -542,6 +542,18 @@ public class ExecutionConfigOptions {
 + "In Flink 1.15.x the pattern was wrongly 
defined as '___' "
 + "which would prevent migrations in the 
future.");
 
+@Documentation.TableOption(execMode = Documentation.ExecMode.STREAMING)
+public static final ConfigOption
+TABLE_EXEC_INTERVAL_JOIN_MIN_CLEAN_UP_INTERVAL_MILLIS =

Review Comment:
   -> "TABLE_EXEC_INTERVAL_JOIN_MIN_CLEAN_UP_INTERVAL"



##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java:
##
@@ -542,6 +542,18 @@ public class ExecutionConfigOptions {
 + "In Flink 1.15.x the pattern was wrongly 
defined as '___' "
 + "which would prevent migrations in the 
future.");
 
+@Documentation.TableOption(execMode = Documentation.ExecMode.STREAMING)
+public static final ConfigOption
+TABLE_EXEC_INTERVAL_JOIN_MIN_CLEAN_UP_INTERVAL_MILLIS =
+key("table.exec.interval-join.min-cleanup-interval")
+.durationType()
+.defaultValue(Duration.ofMillis(0))
+.withDescription(
+"Specifies a minimum time interval for how 
long cleanup unmatched records in the interval join operator. "
++ "Before Flink 1.18, the default 
value of this param was the half of interval duration. "
++ "NOTE: This option greater than 
0 will cause records disorder and may cause downstream operator discard these 
records e.g. window operator. "

Review Comment:
   'records disorder' may cause some confusion, how about changing it to "Note: 
Set this option greater than 0 will cause unmatched records in outer joins to 
be output later than watermark, leading to possible discarding of these records 
by downstream watermark-dependent operators, such as window operators. The 
default value is 0, which means it will clean up unmatched records 
immediately." ?



##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecIntervalJoin.java:
##
@@ -347,6 +355,7 @@ private TwoInputTransformation 
createProcTimeJoin(
 IntervalJoinFunction joinFunction,
 JoinSpec joinSpec,
 IntervalJoinSpec.WindowBounds windowBounds,
+long minCleanUpInterval,

Review Comment:
   nit: -> minCleanUpIntervalMillis



##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecIntervalJoin.java:
##
@@ -66,6 +66,8 @@
 
 import java.util.List;
 
+import static 
org.apache.flink.table.api.config.ExecutionConfigOptions.TABLE_EXEC_INTERVAL_JOIN_MIN_CLEAN_UP_INTERVAL_MILLIS;

Review Comment:
   nit: unnecessary static import



##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/interval/RowTimeIntervalJoin.java:
##
@@ -35,6 +35,7 @@ public RowTimeIntervalJoin(
 long leftLowerBound,
 long leftUpperBound,
 long allowedLateness,
+long minCleanUpInterval,

Review Comment:
   ditto



##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/interval/ProcTimeIntervalJoin.java:
##
@@ -31,10 +31,19 @@ public ProcTimeIntervalJoin(
 FlinkJoinType joinType,
 long leftLowerBound,
 long leftUpperBound,
+long minCleanUpInterval,

Review Comment:
   ditto



##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecIntervalJoin.java:
##
@@ -357,6 +366,7 @@ private TwoInputTransformation 
createProcTimeJoin(
 joinSpec.getJoinType(),
 windowBounds.getLeftLowerBound(),
 windowBounds.getLeftUpperBound(),
+minCleanUpInterval,

Review Comment:
   ditto



##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/interval/TimeIntervalJoin.java:
##
@@ -87,13 +87,14 @@ abstract class TimeIntervalJoin extends 
KeyedCoProcessFunction

[jira] [Commented] (FLINK-31551) Support CrateDB in JDBC Connector

2023-03-24 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17704602#comment-17704602
 ] 

Marios Trivyzas commented on FLINK-31551:
-

[~libenchao] could you please take a look?

> Support CrateDB in JDBC Connector
> -
>
> Key: FLINK-31551
> URL: https://issues.apache.org/jira/browse/FLINK-31551
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>  Labels: pull-request-available
>
> Currently PostgreSQL is supported, but PostgresCatalog along with all the 
> relevant classes don't support CrateDB, so a new stack must be implemented.





[GitHub] [flink] XComp commented on pull request #22121: [FLINK-27051] fix CompletedCheckpoint.DiscardObject.discard is not idempotent

2023-03-24 Thread via GitHub


XComp commented on PR #22121:
URL: https://github.com/apache/flink/pull/22121#issuecomment-1482780981

   > My concern is that these functions being called multiple times could throw 
unnecessary exceptions:
   
   That's actually a good point - now I remember the history of this issue: The 
actual goal is to harden the contract of `StateObject#discardState` instead of 
introducing some workaround in the calling objects. That's why I started 
collecting the subtasks under FLINK-26606. FLINK-27051 is actually blocked by 
the other subtasks, I guess. WDYT?





[jira] [Created] (FLINK-31610) Refactoring of LocalBufferPool

2023-03-24 Thread Anton Kalashnikov (Jira)
Anton Kalashnikov created FLINK-31610:
-

 Summary: Refactoring of LocalBufferPool
 Key: FLINK-31610
 URL: https://issues.apache.org/jira/browse/FLINK-31610
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Network
Affects Versions: 1.17.0
Reporter: Anton Kalashnikov


FLINK-31293 bug highlighted the issue with the internal mutual consistency of 
different fields in LocalBufferPool. ex.:
-  `numberOfRequestedOverdraftMemorySegments`
-  `numberOfRequestedMemorySegments`
-  `availableMemorySegment`
-  `currentPoolSize`

Most of the problems were fixed already (I hope), but it is a good idea to 
reorganize the code in such a way that all invariants between all fields inside 
will be clearly determined and difficult to break.

As one example I can propose getting rid of 
numberOfRequestedOverdraftMemorySegments and using existing 
numberOfRequestedMemorySegments instead. That means:

- the pool will be available when `!availableMemorySegments.isEmpty() && 
unavailableSubpartitionsCount == 0`
- we don't request a new `ordinary` buffer when 
`numberOfRequestedMemorySegments >=  currentPoolSize` but we request the 
overdraft buffer instead
- `setNumBuffers` should work automatically without any changes

I think we can come up with a couple of such improvements to simplify the code.
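The proposed single-counter bookkeeping can be sketched in a few lines. This is a hypothetical simplification for illustration only, not the actual `LocalBufferPool` implementation; the class and method names (`BufferPoolSketch`, `maybeRequestBuffer`) are made up:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical sketch of the proposed invariants, not the real LocalBufferPool. */
class BufferPoolSketch {
    private final Deque<Object> availableMemorySegments = new ArrayDeque<>();
    private int numberOfRequestedMemorySegments; // single counter, no overdraft counter
    private int unavailableSubpartitionsCount;
    private int currentPoolSize;
    private final int maxOverdraftBuffers;

    BufferPoolSketch(int currentPoolSize, int maxOverdraftBuffers) {
        this.currentPoolSize = currentPoolSize;
        this.maxOverdraftBuffers = maxOverdraftBuffers;
    }

    /** Availability no longer consults a separate overdraft counter. */
    boolean isAvailable() {
        return !availableMemorySegments.isEmpty() && unavailableSubpartitionsCount == 0;
    }

    /** Requests beyond the pool size are treated as overdraft requests. */
    boolean isOverdraftRequest() {
        return numberOfRequestedMemorySegments >= currentPoolSize;
    }

    boolean maybeRequestBuffer() {
        if (isOverdraftRequest()
                && numberOfRequestedMemorySegments >= currentPoolSize + maxOverdraftBuffers) {
            return false; // overdraft budget exhausted
        }
        numberOfRequestedMemorySegments++; // same counter covers ordinary and overdraft
        availableMemorySegments.add(new Object()); // stand-in for a real MemorySegment
        return true;
    }

    /** setNumBuffers only adjusts the pool size; the counters stay consistent. */
    void setNumBuffers(int newPoolSize) {
        this.currentPoolSize = newPoolSize;
    }
}
```

With one request counter, whether a request is "ordinary" or "overdraft" becomes a derived property instead of separately tracked state, which is what makes the invariants hard to break.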





[GitHub] [flink-web] XComp commented on a diff in pull request #625: [FLINK-31081][Documentation] Updating Jira Dashboard links to point to the #Issue-tracker

2023-03-24 Thread via GitHub


XComp commented on code in PR #625:
URL: https://github.com/apache/flink-web/pull/625#discussion_r1147543316


##
docs/content.zh/how-to-contribute/documentation-style-guide.md:
##
@@ -13,7 +13,7 @@ weight: 21
 Flink 同时维护了 **英文** 和 **中文** 两种文档,当你拓展或者更新文档时,需要在 pull request 
中包含两种语言版本。如果你不熟悉中文,确保本次贡献补充了如下额外操作:
 
 * 开一个翻译的
-  [JIRA](https://issues.apache.org/jira/projects/FLINK/issues)
+[JIRA]({{< relref "community" >}}#issue-tracker)

Review Comment:
   Looks like that doesn't work properly. Tracker is translated into Chinese



##
docs/content.zh/how-to-contribute/contribute-code.md:
##
@@ -19,7 +19,7 @@ Apache Flink 是一个通过志愿者贡献的代码来维护、改进和扩展
 ## 寻找可贡献的内容
 
 如果你已经有好的想法可以贡献,可以直接参考下面的 "代码贡献步骤"。
-如果你在寻找可贡献的内容,可以通过 [Flink 
的问题跟踪列表](https://issues.apache.org/jira/projects/FLINK/issues) 浏览处于 open 
状态且未被分配的 Jira 工单,然后根据 "代码贡献步骤" 中的描述来参与贡献。
+如果你在寻找可贡献的内容,可以通过 [Flink 的问题跟踪列表]({{< relref "community" >}}#issue-tracker) 
浏览处于 open 状态且未被分配的 Jira 工单,然后根据 "代码贡献步骤" 中的描述来参与贡献。

Review Comment:
   The reference doesn't work. The Chinese version has a "tracker" translated 
into Chinese.



##
docs/content/getting-help.md:
##
@@ -58,7 +58,7 @@ Many members of the Flink community are active on [Stack 
Overflow](https://stack
 
 ## Found a Bug?
 
-If you observe an unexpected behavior that might be caused by a bug, you can 
search for reported bugs or file a bug report in [Flink's 
JIRA](https://issues.apache.org/jira/browse/FLINK).
+If you observe an unexpected behavior that might be caused by a bug, you can 
search for reported bugs or file a bug report in [Flink's JIRA]({{< relref 
"community#issue-tracker" >}}).

Review Comment:
   This one also exist in the Chinese version.



##
docs/content.zh/community.md:
##
@@ -120,7 +120,7 @@ under the License.
 
 ## 如何从 Apache Flink 获得帮助?
 
-我们可以通过多种方式从 Apache Flink 社区获得帮助。Flink committer 主要活跃在 
[邮件列表](#mailing-lists)。对于用户支持和问题咨询,则可以通过 用户邮件列表 获得帮助。你还可以加入社区专属的 
[Slack](#slack)。有些 Committer 同时会关注 [Stack 
Overflow](#stack-overflow)。请在提问的时候记得添加 
*[apache-flink](http://stackoverflow.com/questions/tagged/apache-flink)* 
的标签。问题反馈以及新特性的讨论则可以在 开发邮件列表 或者 
[Jira](https://issues.apache.org/jira/browse/FLINK) 上进行讨论。有兴趣对 Flink 进行贡献的人请查阅 
[贡献指南]({{< relref "how-to-contribute" >}}).
+我们可以通过多种方式从 Apache Flink 社区获得帮助。Flink committer 主要活跃在 
[邮件列表](#mailing-lists)。对于用户支持和问题咨询,则可以通过 用户邮件列表 获得帮助。你还可以加入社区专属的 
[Slack](#slack)。有些 Committer 同时会关注 [Stack 
Overflow](#stack-overflow)。请在提问的时候记得添加 
*[apache-flink](http://stackoverflow.com/questions/tagged/apache-flink)* 
的标签。问题反馈以及新特性的讨论则可以在 开发邮件列表 或者 [Jira](#issue-tracker) 上进行讨论。有兴趣对 Flink 
进行贡献的人请查阅 [贡献指南]({{< relref "how-to-contribute" >}}).

Review Comment:
   Looks like that doesn't work properly. Tracker is translated into Chinese



##
docs/content.zh/how-to-contribute/code-style-and-quality-pull-requests.md:
##
@@ -27,7 +27,7 @@ Please understand that contributions that do not follow this 
guide will take lon
 
 ## 1. JIRA issue and Naming
 
-Make sure that the pull request corresponds to a [JIRA 
issue](https://issues.apache.org/jira/projects/FLINK/issues).
+Make sure that the pull request corresponds to a [JIRA 
issue](https://issues.apache.org/jira/browse/FLINK).

Review Comment:
   Both links work. No need to edit this, I guess






[GitHub] [flink-kubernetes-operator] mxm opened a new pull request, #554: [FLINK-30575] Limit output ratio in case of near zero values

2023-03-24 Thread via GitHub


mxm opened a new pull request, #554:
URL: https://github.com/apache/flink-kubernetes-operator/pull/554

   When the rates drop to zero, we substitute near-zero values so that the same 
scaling logic can still be used. There is a gap in this logic when only the 
input rate drops to zero while the output rate is still measured to be greater 
than zero. This is because we do not pass on the output rate directly but the 
output / input ratio, which becomes very large in this special scenario.
   
   In this change we cap the output ratio when near-zero input rates are used. 
This has been tested to effectively prevent scale-up spikes in pipelines that 
only occasionally receive data.
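The capping idea can be illustrated with a small sketch. This is hypothetical: `NEAR_ZERO_RATE`, `MAX_OUTPUT_RATIO` and `computeOutputRatio` are illustrative names and values, not the operator's actual API or constants:

```java
/** Hypothetical sketch: cap the output/input ratio when the input rate is near zero. */
final class OutputRatioSketch {
    // Stand-ins for the real constants; chosen for illustration only.
    static final double NEAR_ZERO_RATE = 1e-10;
    static final double MAX_OUTPUT_RATIO = 1.0;

    static double computeOutputRatio(double inputRate, double outputRate) {
        double ratio = outputRate / Math.max(inputRate, NEAR_ZERO_RATE);
        if (inputRate <= NEAR_ZERO_RATE) {
            // Input dropped to (near) zero while output is still measured > 0:
            // without a cap the ratio explodes and produces a scale-up spike.
            return Math.min(ratio, MAX_OUTPUT_RATIO);
        }
        return ratio;
    }
}
```

The cap only takes effect in the near-zero branch, so pipelines with normal traffic keep their measured ratio unchanged.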
   





[jira] [Created] (FLINK-31609) Fatal error in ResourceManager caused YARNSessionFIFOSecuredITCase.testDetachedMode to fail

2023-03-24 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-31609:
-

 Summary: Fatal error in ResourceManager caused 
YARNSessionFIFOSecuredITCase.testDetachedMode to fail
 Key: FLINK-31609
 URL: https://issues.apache.org/jira/browse/FLINK-31609
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN
Affects Versions: 1.18.0
Reporter: Matthias Pohl


This looks like FLINK-30908. I created a follow-up ticket because we reached a 
new minor version.

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47547=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461

{code}
Mar 24 09:32:29 2023-03-24 09:31:50,001 ERROR 
org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl [] - Exception 
on heartbeat
Mar 24 09:32:29 java.io.InterruptedIOException: Interrupted waiting to send RPC 
request to server
Mar 24 09:32:29 java.io.InterruptedIOException: Interrupted waiting to send RPC 
request to server
Mar 24 09:32:29 at org.apache.hadoop.ipc.Client.call(Client.java:1461) 
~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 at org.apache.hadoop.ipc.Client.call(Client.java:1403) 
~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
 ~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
 ~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 at com.sun.proxy.$Proxy33.allocate(Unknown Source) 
~[?:?]
Mar 24 09:32:29 at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
 ~[hadoop-yarn-common-2.10.2.jar:?]
Mar 24 09:32:29 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) ~[?:1.8.0_292]
Mar 24 09:32:29 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[?:1.8.0_292]
Mar 24 09:32:29 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_292]
Mar 24 09:32:29 at java.lang.reflect.Method.invoke(Method.java:498) 
~[?:1.8.0_292]
Mar 24 09:32:29 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433)
 ~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
 ~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
 ~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
 ~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
 ~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 at com.sun.proxy.$Proxy34.allocate(Unknown Source) 
~[?:?]
Mar 24 09:32:29 at 
org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.allocate(AMRMClientImpl.java:297)
 ~[hadoop-yarn-client-2.10.2.jar:?]
Mar 24 09:32:29 at 
org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$HeartbeatThread.run(AMRMClientAsyncImpl.java:274)
 [hadoop-yarn-client-2.10.2.jar:?]
Mar 24 09:32:29 Caused by: java.lang.InterruptedException
Mar 24 09:32:29 at 
java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404) ~[?:1.8.0_292]
Mar 24 09:32:29 at 
java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:1.8.0_292]
Mar 24 09:32:29 at 
org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1177) 
~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 at org.apache.hadoop.ipc.Client.call(Client.java:1456) 
~[hadoop-common-2.10.2.jar:?]
Mar 24 09:32:29 ... 17 more
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #22268: [FLINK-30897][tests]Avoid timeouts in JUnit tests

2023-03-24 Thread via GitHub


flinkbot commented on PR #22268:
URL: https://github.com/apache/flink/pull/22268#issuecomment-1482725972

   
   ## CI report:
   
   * b039fa180db0bc408aef34fbe6bb056185d8f95e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Assigned] (FLINK-30897) Avoid timeouts in JUnit tests

2023-03-24 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-30897:
---

Assignee: lincoln lee

> Avoid timeouts in JUnit tests
> -
>
> Key: FLINK-30897
> URL: https://issues.apache.org/jira/browse/FLINK-30897
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Minor
>  Labels: pull-request-available
>
> As our [testing 
> guideline|https://flink.apache.org/contributing/code-style-and-quality-common.html#avoid-timeouts-in-junit-tests]
>  says, we should 'Avoid timeouts in JUnit tests' and instead depend on the 
> global timeout in Azure. There are 10 IT cases throughout the project that 
> use the 'Timeout Rule' and 22 tests that use 'Deadline' to set local 
> timeouts. We need to check whether we can change these usages one by one.
> List of related test classes:
> 'Timeout Rule':
> {code}
> flink-end-to-end-tests-common-kafka  (1 usage found)
>             org.apache.flink.tests.util.kafka  (1 usage found)
>                 SQLClientSchemaRegistryITCase.java  (1 usage found)
>                     78 @ClassRule public static final Timeout TIMEOUT = new 
> Timeout(10, TimeUnit.MINUTES);
>         flink-glue-schema-registry-avro-test_2.12  (1 usage found)
>             org.apache.flink.glue.schema.registry.test  (1 usage found)
>                 GlueSchemaRegistryAvroKinesisITCase.java  (1 usage found)
>                     74 @ClassRule public static final Timeout TIMEOUT = new 
> Timeout(10, TimeUnit.MINUTES);
>         flink-glue-schema-registry-json-test  (1 usage found)
>             org.apache.flink.glue.schema.registry.test.json  (1 usage found)
>                 GlueSchemaRegistryJsonKinesisITCase.java  (1 usage found)
>                     68 @ClassRule public static final Timeout TIMEOUT = new 
> Timeout(10, TimeUnit.MINUTES);
>         flink-runtime  (1 usage found)
>             org.apache.flink.runtime.io.disk  (1 usage found)
>                 BatchShuffleReadBufferPoolTest.java  (1 usage found)
>                     41 @Rule public Timeout timeout = new Timeout(60, 
> TimeUnit.SECONDS);
>         flink-streaming-java  (1 usage found)
>             org.apache.flink.streaming.api.operators.async  (1 usage found)
>                 AsyncWaitOperatorTest.java  (1 usage found)
>                     117 @Rule public Timeout timeoutRule = new Timeout(100, 
> TimeUnit.SECONDS);
>         flink-tests  (5 usages found)
>             org.apache.flink.runtime.operators.lifecycle  (3 usages found)
>                 BoundedSourceITCase.java  (1 usage found)
>                     75 @Rule public Timeout timeoutRule = new Timeout(10, 
> TimeUnit.MINUTES);
>                 PartiallyFinishedSourcesITCase.java  (1 usage found)
>                     79 @Rule public Timeout timeoutRule = new Timeout(10, 
> TimeUnit.MINUTES);
>                 StopWithSavepointITCase.java  (1 usage found)
>                     103 @Rule public Timeout timeoutRule = new Timeout(10, 
> TimeUnit.MINUTES);
>             org.apache.flink.test.runtime  (2 usages found)
>                 JoinDeadlockITCase.java  (1 usage found)
>                     39 @Rule public Timeout globalTimeout = new Timeout(120 * 
> 1000); // Set timeout for deadlocks
>                 SelfJoinDeadlockITCase.java  (1 usage found)
>                     46 @Rule public Timeout globalTimeout = new Timeout(120 * 
> 1000); // Set timeout for deadlocks
> {code}
> 'Deadline':
> {code}
> flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/session/SessionManagerImplTest.java:2
> flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/OneInputStreamTaskTest.java:2
> flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/io/checkpointing/CheckpointedInputGateTest.java:2
> flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/runtime/jobmanager/JMXJobManagerMetricTest.java:2
> flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/WebFrontendITCase.java:4
> flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/SQLClientSchemaRegistryITCase.java:2
> flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/SQLClientKafkaITCase.java:2
> flink-end-to-end-tests/flink-end-to-end-tests-hbase/src/test/java/org/apache/flink/tests/util/hbase/SQLClientHBaseITCase.java:2
> flink-end-to-end-tests/flink-metrics-availability-test/src/test/java/org/apache/flink/metrics/tests/MetricsAvailabilityITCase.java:6
> flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureRecoveryITCase.java:3
> 

[jira] [Updated] (FLINK-30897) Avoid timeouts in JUnit tests

2023-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-30897:
---
Labels: pull-request-available  (was: )

> Avoid timeouts in JUnit tests
> -
>
> Key: FLINK-30897
> URL: https://issues.apache.org/jira/browse/FLINK-30897
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: lincoln lee
>Priority: Minor
>  Labels: pull-request-available
>

[GitHub] [flink] lincoln-lil opened a new pull request, #22268: [FLINK-30897][tests]Avoid timeouts in JUnit tests

2023-03-24 Thread via GitHub


lincoln-lil opened a new pull request, #22268:
URL: https://github.com/apache/flink/pull/22268

   ## What is the purpose of the change
   Due to our [testing 
guideline](https://flink.apache.org/contributing/code-style-and-quality-common.html#avoid-timeouts-in-junit-tests),
 we should 'Avoid timeouts in JUnit tests' and instead depend on the global 
timeout in Azure. There are 10 IT cases throughout the project that use the 
'Timeout Rule' and 22 tests that use 'Deadline' to set local timeouts.
   This is a best-effort removal of these Timeout rule and local Deadline 
timeout usages in the related tests.
   
   ## Brief change log
   Remove the Timeout rule and the local Deadline timeout logic in the related tests.
   
   ## Verifying this change
   existing cases
   
   ## Does this pull request potentially affect one of the following parts:
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
@Public(Evolving): (no)
 - The serializers: (no )
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
 - Does this pull request introduce a new feature? (no)
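
   The pattern change behind this PR can be sketched as follows. This is a 
hypothetical polling helper, not Flink's actual test utility: instead of 
bounding a wait with a local Timeout rule or Deadline, the test polls until 
the condition holds and relies on the CI-level global timeout to abort a 
genuinely hung run:

```java
import java.util.function.BooleanSupplier;

// Hypothetical helper illustrating the pattern change; Flink's real test
// utilities have different names and signatures.
public class WaitWithoutDeadline {

    // Before: while (deadline.hasTimeLeft() && !condition.getAsBoolean()) { ... }
    // After: no local deadline -- the global build timeout is the only bound.
    static void waitUntilCondition(BooleanSupplier condition) throws InterruptedException {
        while (!condition.getAsBoolean()) {
            Thread.sleep(10); // back off briefly between polls
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final long start = System.currentTimeMillis();
        // A condition that becomes true after roughly 50 ms.
        waitUntilCondition(() -> System.currentTimeMillis() - start >= 50);
        System.out.println("condition met");
    }
}
```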
   





[GitHub] [flink] XComp commented on a diff in pull request #21736: [FLINK-27518][tests] Refactor migration tests to support version update automatically

2023-03-24 Thread via GitHub


XComp commented on code in PR #21736:
URL: https://github.com/apache/flink/pull/21736#discussion_r1147503902


##
flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/AbstractOperatorRestoreTestBase.java:
##
@@ -71,7 +80,17 @@
  * AbstractOperatorRestoreTestBase#migrateJob}, please create the 
corresponding test resource
  * directory and copy the _metadata file by hand.
  */
-public abstract class AbstractOperatorRestoreTestBase extends TestLogger {
+public abstract class AbstractOperatorRestoreTestBase extends TestLogger 
implements MigrationTest {
+
+private static final FlinkVersion MIGRATION_BASE_VERSION = 
FlinkVersion.v1_16;

Review Comment:
   Can't we add a utility method `FlinkVersion.previousMinorVersion()` that 
decrements the minor version by 1? That way we could derive the base version 
from the target version instead of keeping a fixed field that we would have to 
update again with each release.
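
   A minimal sketch of such a utility — hypothetical only, since the real 
`FlinkVersion` is an enum with different internals:

```java
// Hypothetical sketch of the suggested previousMinorVersion() utility;
// the actual FlinkVersion class in Flink is structured differently.
public class FlinkVersionExample {
    final int major;
    final int minor;

    FlinkVersionExample(int major, int minor) {
        this.major = major;
        this.minor = minor;
    }

    // Derive the migration base version by decrementing the minor version.
    FlinkVersionExample previousMinorVersion() {
        if (minor == 0) {
            throw new IllegalStateException("No previous minor version for " + this);
        }
        return new FlinkVersionExample(major, minor - 1);
    }

    @Override
    public String toString() {
        return "v" + major + "_" + minor;
    }

    public static void main(String[] args) {
        // e.g. derive the base version v1_16 from the target version v1_17
        System.out.println(new FlinkVersionExample(1, 17).previousMinorVersion());
    }
}
```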



##
flink-test-utils-parent/flink-migration-test-utils/src/main/resources/most_recently_published_version:
##
@@ -0,0 +1 @@
+v1.16

Review Comment:
   That one needs to be updated to v1.17 now after FLINK-31593 is merged.



##
flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileProcessingMigrationTest.java:
##
@@ -309,17 +288,34 @@ public void testMonitoringSourceRestore() throws 
Exception {
 
 testHarness.setup();
 
-testHarness.initializeState(
-OperatorSnapshotUtil.getResourceFilename(
-"monitoring-function-migration-test-"
-+ expectedModTime
-+ "-flink"
-+ testMigrateVersion
-+ "-snapshot"));
+// Get the exact filename
+Tuple2 fileNameAndModTime = 
getResourceFilename(testMigrateVersion);
+
+testHarness.initializeState(fileNameAndModTime.f0);
 
 testHarness.open();
 
-Assert.assertEquals((long) expectedModTime, 
monitoringFunction.getGlobalModificationTime());
+Assert.assertEquals(
+fileNameAndModTime.f1.longValue(), 
monitoringFunction.getGlobalModificationTime());
+}
+
+private Tuple2 getResourceFilename(FlinkVersion version) 
throws IOException {
+String resourceDirectory = 
OperatorSnapshotUtil.getResourceFilename("");
+Pattern fileNamePattern =
+Pattern.compile(
+"monitoring-function-migration-test-(\\d+)-flink"

Review Comment:
   I did some digging through the code. My impression is that we don't even 
need the modification time, do we? It only ends up in the filename :thinking: 






[GitHub] [flink] liuyongvs closed pull request #22064: [FLINK-26945][table] add DATE_SUB function.

2023-03-24 Thread via GitHub


liuyongvs closed pull request #22064: [FLINK-26945][table] add DATE_SUB 
function.
URL: https://github.com/apache/flink/pull/22064





[GitHub] [flink] flinkbot commented on pull request #22267: [FLINK-31602][table] Add built-in ARRAY_POSITION function.

2023-03-24 Thread via GitHub


flinkbot commented on PR #22267:
URL: https://github.com/apache/flink/pull/22267#issuecomment-1482704018

   
   ## CI report:
   
   * 238e8d2db8dd14feb18a02e5baaadaedddf7ca6e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] XComp commented on a diff in pull request #21736: [FLINK-27518][tests] Refactor migration tests to support version update automatically

2023-03-24 Thread via GitHub


XComp commented on code in PR #21736:
URL: https://github.com/apache/flink/pull/21736#discussion_r1147501300


##
flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/AbstractOperatorRestoreTestBase.java:
##
@@ -104,16 +123,32 @@ public abstract class AbstractOperatorRestoreTestBase 
extends TestLogger {
 .setNumberSlotsPerTaskManager(NUM_SLOTS_PER_TM)
 .build());
 
-private final boolean allowNonRestoredState;
 private final ScheduledExecutor scheduledExecutor =
 new 
ScheduledExecutorServiceAdapter(EXECUTOR_RESOURCE.getExecutor());
 
-protected AbstractOperatorRestoreTestBase() {
-this(true);
+protected AbstractOperatorRestoreTestBase(FlinkVersion flinkVersion) {
+this.flinkVersion = flinkVersion;
 }
 
-protected AbstractOperatorRestoreTestBase(boolean allowNonRestoredState) {
-this.allowNonRestoredState = allowNonRestoredState;
+protected void internalGenerateSnapshots(FlinkVersion targetVersion) 
throws Exception {

Review Comment:
   Ok, after I have done the data generation myself in FLINK-31593, I get how 
it should be implemented :+1: 






[jira] [Updated] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31602:
---
Labels: pull-request-available  (was: )

> Add ARRAY_POSITION supported in SQL & Table API
> ---
>
> Key: FLINK-31602
> URL: https://issues.apache.org/jira/browse/FLINK-31602
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> array_position(array, element) - Returns the (1-based) position of the first 
> occurrence of element in the array, as a long.
> Syntax:
> array_position(array, element)
> Arguments:
> array: An ARRAY to be handled.
> element: The value whose position is searched for.
> Returns:
> Returns the position of the first occurrence of element in the given array, 
> as a long.
> Returns 0 if the given value could not be found in the array.
> Returns null if either of the arguments is null.
> {code:sql}
> > SELECT array_position(array(3, 2, 1), 1);
>  3 {code}
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_position]
> postgresql 
> [https://www.postgresql.org/docs/12/functions-array.html#ARRAY-FUNCTIONS-TABLE]
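
The semantics described above (1-based position of the first occurrence, 0 if 
absent, null if either argument is null) can be sketched in plain Java. This 
is an illustrative reference sketch of the described behavior, not Flink's 
implementation:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical reference semantics for array_position; not Flink's code.
public class ArrayPositionExample {
    static Long arrayPosition(List<Integer> array, Integer element) {
        if (array == null || element == null) {
            return null; // null if either argument is null
        }
        for (int i = 0; i < array.size(); i++) {
            if (element.equals(array.get(i))) {
                return (long) (i + 1); // 1-based position of first occurrence
            }
        }
        return 0L; // element not found in the array
    }

    public static void main(String[] args) {
        System.out.println(arrayPosition(Arrays.asList(3, 2, 1), 1)); // 3
        System.out.println(arrayPosition(Arrays.asList(3, 2, 1), 5)); // 0
        System.out.println(arrayPosition(null, 1));                   // null
    }
}
```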





[GitHub] [flink] liuyongvs opened a new pull request, #22267: [FLINK-31602][table] Add built-in ARRAY_POSITION function.

2023-03-24 Thread via GitHub


liuyongvs opened a new pull request, #22267:
URL: https://github.com/apache/flink/pull/22267

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[GitHub] [flink-web] dannycranmer merged pull request #616: [hotfix] Update Flink version support policy as per https://lists.apa…

2023-03-24 Thread via GitHub


dannycranmer merged PR #616:
URL: https://github.com/apache/flink-web/pull/616




