[GitHub] [flink] 1996fanrui commented on pull request #20214: [FLINK-28454][kafka][docs] Fix the wrong timestamp unit of KafkaSource

2022-07-07 Thread GitBox


1996fanrui commented on PR #20214:
URL: https://github.com/apache/flink/pull/20214#issuecomment-1178583874

   Hi @PatrickRen, could you help review this in your free time?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-28454) Fix the wrong timestamp example of KafkaSource

2022-07-07 Thread fanrui (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanrui updated FLINK-28454:
---
Description: 
[https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/kafka/]

 

The timestamp unit of startingOffset is given in seconds in the KafkaSource doc, but it 
should be milliseconds. This will mislead Flink users.

 

!image-2022-07-08-13-04-59-993.png!

 

 

It should be milliseconds, because the timestamp is passed to 
OffsetSpec.fromTimestamp(), whose Javadoc says: *@param timestamp in 
milliseconds*
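
To see concretely why the unit matters, here is a small, self-contained Java sketch (illustrative only; the timestamp value is made up, and this does not call Flink itself). An epoch-seconds value passed where milliseconds are expected silently points at January 1970:

```java
import java.time.Instant;

public class TimestampUnits {
    public static void main(String[] args) {
        long seconds = 1657256176L;      // epoch seconds, as the 1.15 doc example implies
        long millis = seconds * 1000L;   // epoch milliseconds, the unit fromTimestamp expects

        // Misread as milliseconds, the seconds value lands in January 1970,
        // so the source would start from (nearly) the earliest offsets.
        System.out.println(Instant.ofEpochMilli(seconds)); // 1970-01-20T04:20:56.176Z
        System.out.println(Instant.ofEpochMilli(millis));  // 2022-07-08T04:56:16Z
    }
}
```

In the KafkaSource builder this corresponds to passing a milliseconds value, e.g. `OffsetsInitializer.timestamp(1657256176000L)` rather than `1657256176`.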

 

By the way, the timestamp is in milliseconds in the old source doc: 
[https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka/#kafka-consumers-start-position-configuration]

 

!image-2022-07-08-13-46-05-540.png!

 

!image-2022-07-08-13-42-49-902.png|width=1103,height=970!

 

!image-2022-07-08-13-43-14-278.png|width=697,height=692!

 

 

  was:
[https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/kafka/]

 

The timestamp unit of startingOffset is second in kafkaSource doc, but it 
should be milliseconds. It will mislead flink users.

 

!image-2022-07-08-13-04-59-993.png!

 

 


> Fix the wrong timestamp example of KafkaSource
> --
>
> Key: FLINK-28454
> URL: https://issues.apache.org/jira/browse/FLINK-28454
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Documentation
>Affects Versions: 1.13.6, 1.14.5, 1.15.1
>Reporter: fanrui
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.2
>
> Attachments: image-2022-07-08-13-04-59-993.png, 
> image-2022-07-08-13-42-49-902.png, image-2022-07-08-13-43-14-278.png, 
> image-2022-07-08-13-46-05-540.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/kafka/]
>  
> The timestamp unit of startingOffset is second in kafkaSource doc, but it 
> should be milliseconds. It will mislead flink users.
>  
> !image-2022-07-08-13-04-59-993.png!
>  
>  
> It should be milliseconds, because the timestamp is used in  
> OffsetSpec.fromTimestamp() , and the comment is : *@param timestamp in 
> milliseconds*
>  
> By the way, the timestamp is milliseconds in old source doc. link: 
> [https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka/#kafka-consumers-start-position-configuration]
>  
> !image-2022-07-08-13-46-05-540.png!
>  
> !image-2022-07-08-13-42-49-902.png|width=1103,height=970!
>  
> !image-2022-07-08-13-43-14-278.png|width=697,height=692!
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28454) Fix the wrong timestamp example of KafkaSource

2022-07-07 Thread fanrui (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanrui updated FLINK-28454:
---
Attachment: image-2022-07-08-13-46-05-540.png

> Fix the wrong timestamp example of KafkaSource
> --
>
> Key: FLINK-28454
> URL: https://issues.apache.org/jira/browse/FLINK-28454
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Documentation
>Affects Versions: 1.13.6, 1.14.5, 1.15.1
>Reporter: fanrui
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.2
>
> Attachments: image-2022-07-08-13-04-59-993.png, 
> image-2022-07-08-13-42-49-902.png, image-2022-07-08-13-43-14-278.png, 
> image-2022-07-08-13-46-05-540.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/kafka/]
>  
> The timestamp unit of startingOffset is second in kafkaSource doc, but it 
> should be milliseconds. It will mislead flink users.
>  
> !image-2022-07-08-13-04-59-993.png!
>  
>  





[jira] [Updated] (FLINK-28454) Fix the wrong timestamp example of KafkaSource

2022-07-07 Thread fanrui (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanrui updated FLINK-28454:
---
Attachment: image-2022-07-08-13-43-14-278.png

> Fix the wrong timestamp example of KafkaSource
> --
>
> Key: FLINK-28454
> URL: https://issues.apache.org/jira/browse/FLINK-28454
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Documentation
>Affects Versions: 1.13.6, 1.14.5, 1.15.1
>Reporter: fanrui
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.2
>
> Attachments: image-2022-07-08-13-04-59-993.png, 
> image-2022-07-08-13-42-49-902.png, image-2022-07-08-13-43-14-278.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/kafka/]
>  
> The timestamp unit of startingOffset is second in kafkaSource doc, but it 
> should be milliseconds. It will mislead flink users.
>  
> !image-2022-07-08-13-04-59-993.png!
>  
>  





[jira] [Updated] (FLINK-28454) Fix the wrong timestamp example of KafkaSource

2022-07-07 Thread fanrui (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanrui updated FLINK-28454:
---
Attachment: image-2022-07-08-13-42-49-902.png

> Fix the wrong timestamp example of KafkaSource
> --
>
> Key: FLINK-28454
> URL: https://issues.apache.org/jira/browse/FLINK-28454
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Documentation
>Affects Versions: 1.13.6, 1.14.5, 1.15.1
>Reporter: fanrui
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.2
>
> Attachments: image-2022-07-08-13-04-59-993.png, 
> image-2022-07-08-13-42-49-902.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/kafka/]
>  
> The timestamp unit of startingOffset is second in kafkaSource doc, but it 
> should be milliseconds. It will mislead flink users.
>  
> !image-2022-07-08-13-04-59-993.png!
>  
>  





[GitHub] [flink] flinkbot commented on pull request #20214: [FLINK-28454][kafka][docs] Fix the wrong timestamp unit of KafkaSource

2022-07-07 Thread GitBox


flinkbot commented on PR #20214:
URL: https://github.com/apache/flink/pull/20214#issuecomment-1178562814

   
   ## CI report:
   
   * 2e3e781743dcc3237221b231dedb2ba88d4c29cb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-28454) Fix the wrong timestamp example of KafkaSource

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28454:
---
Labels: pull-request-available  (was: )

> Fix the wrong timestamp example of KafkaSource
> --
>
> Key: FLINK-28454
> URL: https://issues.apache.org/jira/browse/FLINK-28454
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Documentation
>Affects Versions: 1.13.6, 1.14.5, 1.15.1
>Reporter: fanrui
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.2
>
> Attachments: image-2022-07-08-13-04-59-993.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/kafka/]
>  
> The timestamp unit of startingOffset is second in kafkaSource doc, but it 
> should be milliseconds. It will mislead flink users.
>  
> !image-2022-07-08-13-04-59-993.png!
>  
>  





[GitHub] [flink] 1996fanrui opened a new pull request, #20214: [FLINK-28454][kafka][docs] Fix the wrong timestamp unit of KafkaSource

2022-07-07 Thread GitBox


1996fanrui opened a new pull request, #20214:
URL: https://github.com/apache/flink/pull/20214

   ## What is the purpose of the change
   
   Fix the wrong timestamp unit of KafkaSource.
   
   ## Brief change log
   
   Fix the wrong timestamp unit of KafkaSource.
   
   ## Verifying this change
   
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? docs
   





[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #302: [FLINK-28297][FLINK-27914] Improve operator metric groups + Add JOSDK metric integration

2022-07-07 Thread GitBox


gyfora commented on code in PR #302:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/302#discussion_r916467085


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/OperatorJosdkMetrics.java:
##
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.operator.metrics;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.metrics.Counter;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.runtime.metrics.MetricRegistry;
+
+import io.javaoperatorsdk.operator.api.monitoring.Metrics;
+import io.javaoperatorsdk.operator.api.reconciler.RetryInfo;
+import io.javaoperatorsdk.operator.processing.event.Event;
+import io.javaoperatorsdk.operator.processing.event.ResourceID;
+import 
io.javaoperatorsdk.operator.processing.event.source.controller.ResourceEvent;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Implementation of {@link Metrics} to monitor and forward JOSDK metrics to 
{@link MetricRegistry}.
+ */
+public class OperatorJosdkMetrics implements Metrics {

Review Comment:
   It's not clear what that measures and what benefit it would bring. So I 
decided to simply not do it.






[jira] [Created] (FLINK-28454) Fix the wrong timestamp example of KafkaSource

2022-07-07 Thread fanrui (Jira)
fanrui created FLINK-28454:
--

 Summary: Fix the wrong timestamp example of KafkaSource
 Key: FLINK-28454
 URL: https://issues.apache.org/jira/browse/FLINK-28454
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Documentation
Affects Versions: 1.15.1, 1.14.5, 1.13.6
Reporter: fanrui
 Fix For: 1.16.0, 1.15.2
 Attachments: image-2022-07-08-13-04-59-993.png

[https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/kafka/]

 

The timestamp unit of startingOffset is given in seconds in the KafkaSource doc, but it 
should be milliseconds. This will mislead Flink users.

 

!image-2022-07-08-13-04-59-993.png!

 

 





[GitHub] [flink] flinkbot commented on pull request #20213: [hotfix][docs] Fix formatting in Chinese doc

2022-07-07 Thread GitBox


flinkbot commented on PR #20213:
URL: https://github.com/apache/flink/pull/20213#issuecomment-1178535668

   
   ## CI report:
   
   * a9467f4326596ef1d15d8685dc38d6308a309cbb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Commented] (FLINK-28364) Python Job support for Kubernetes Operator

2022-07-07 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564079#comment-17564079
 ] 

Nicholas Jiang commented on FLINK-28364:


[~dianfu][~gyfora][~thw], IMO, after 
[FLINK-28443|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-28443] 
is completed, this feature could be more easily worked.

> Python Job support for Kubernetes Operator
> --
>
> Key: FLINK-28364
> URL: https://issues.apache.org/jira/browse/FLINK-28364
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: wu3396
>Priority: Major
>
> *Describe the solution*
> Job types that I want to support pyflink for
> *Describe alternatives*
> like [here|https://github.com/spotify/flink-on-k8s-operator/pull/165]





[jira] [Comment Edited] (FLINK-28364) Python Job support for Kubernetes Operator

2022-07-07 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564079#comment-17564079
 ] 

Nicholas Jiang edited comment on FLINK-28364 at 7/8/22 4:21 AM:


[~dianfu], [~gyfora], [~thw], IMO, after 
[FLINK-28443|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-28443] 
is completed, this feature could be worked on more easily.


was (Author: nicholasjiang):
[~dianfu][~gyfora][~thw], IMO, after 
[FLINK-28443|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-28443] 
is completed, this feature could be more easily worked.

> Python Job support for Kubernetes Operator
> --
>
> Key: FLINK-28364
> URL: https://issues.apache.org/jira/browse/FLINK-28364
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: wu3396
>Priority: Major
>
> *Describe the solution*
> Job types that I want to support pyflink for
> *Describe alternatives*
> like [here|https://github.com/spotify/flink-on-k8s-operator/pull/165]





[GitHub] [flink] polaris6 opened a new pull request, #20213: [hotfix][docs] Fix formatting in Chinese doc

2022-07-07 Thread GitBox


polaris6 opened a new pull request, #20213:
URL: https://github.com/apache/flink/pull/20213

   Fix formatting in Chinese doc:
   
https://nightlies.apache.org/flink/flink-docs-master/zh/docs/deployment/resource-providers/standalone/overview/
   
   ## Before modification:
   
![image](https://user-images.githubusercontent.com/29967142/177916065-a46e3812-c8b4-42e9-801e-5ae6b3d92648.png)
   
   ## After modification:
   ![image](https://user-images.githubusercontent.com/29967142/177916106-c53958e0-2134-4d0f-aad2-8914b4297f78.png)





[jira] [Resolved] (FLINK-28247) Exception will be thrown when over window contains grouping in Hive Dialect

2022-07-07 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li resolved FLINK-28247.

Fix Version/s: 1.16.0
   Resolution: Fixed

> Exception will be thrown when over window contains grouping in Hive Dialect
> ---
>
> Key: FLINK-28247
> URL: https://issues.apache.org/jira/browse/FLINK-28247
> Project: Flink
>  Issue Type: Sub-task
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The exception can be reproduced by the following SQL when using Hive dialect:
> {code:java}
> create table t(category int, live int, comments int)
> SELECT grouping(category), lag(live) over(partition by grouping(category)) 
> FROM t GROUP BY category, live; {code}
> The reason is that it first calls 
> `HiveParserCalcitePlanner#genSelectForWindowing` to generate the window, 
> which then calls `HiveParserUtils#rewriteGroupingFunctionAST` to rewrite 
> the grouping function in the over window:
>  
> {code:java}
> // rewrite grouping function
> if (current.getType() == HiveASTParser.TOK_FUNCTION
> && current.getChildCount() >= 2) {
> HiveParserASTNode func = (HiveParserASTNode) current.getChild(0);
> if (func.getText().equals("grouping")) {
> visited.setValue(true);
> convertGrouping(
> current, grpByAstExprs, noneSet, legacyGrouping, found);
> }
> } 
> {code}
>  
> So `grouping(category)` will be converted to `grouping(0, 1)`. 
> After `HiveParserCalcitePlanner#genSelectForWindowing`, it will try to 
> rewrite it again:
>  
> {code:java}
> if (!qbp.getDestToGroupBy().isEmpty()) {
> // Special handling of grouping function
> expr =
> rewriteGroupingFunctionAST(
> getGroupByForClause(qbp, selClauseName),
> expr,
> !cubeRollupGrpSetPresent);
> } {code}
> And it will fall into `convertGrouping` again, since 
> `current.getChildCount() >= 2` is still true. But now it can't find any field 
> present in the group by, because the expression is already `grouping(0, 1)`.
>  





[jira] [Commented] (FLINK-28247) Exception will be thrown when over window contains grouping in Hive Dialect

2022-07-07 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564078#comment-17564078
 ] 

Rui Li commented on FLINK-28247:


Fixed in master: a069a306a0c95ba62bbc63f6afc995fd3216a326

> Exception will be thrown when over window contains grouping in Hive Dialect
> ---
>
> Key: FLINK-28247
> URL: https://issues.apache.org/jira/browse/FLINK-28247
> Project: Flink
>  Issue Type: Sub-task
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> The exception can be reproduced by the following SQL when using Hive dialect:
> {code:java}
> create table t(category int, live int, comments int)
> SELECT grouping(category), lag(live) over(partition by grouping(category)) 
> FROM t GROUP BY category, live; {code}
> The reason is that it first calls 
> `HiveParserCalcitePlanner#genSelectForWindowing` to generate the window, 
> which then calls `HiveParserUtils#rewriteGroupingFunctionAST` to rewrite 
> the grouping function in the over window:
>  
> {code:java}
> // rewrite grouping function
> if (current.getType() == HiveASTParser.TOK_FUNCTION
> && current.getChildCount() >= 2) {
> HiveParserASTNode func = (HiveParserASTNode) current.getChild(0);
> if (func.getText().equals("grouping")) {
> visited.setValue(true);
> convertGrouping(
> current, grpByAstExprs, noneSet, legacyGrouping, found);
> }
> } 
> {code}
>  
> So `grouping(category)` will be converted to `grouping(0, 1)`. 
> After `HiveParserCalcitePlanner#genSelectForWindowing`, it will try to 
> rewrite it again:
>  
> {code:java}
> if (!qbp.getDestToGroupBy().isEmpty()) {
> // Special handling of grouping function
> expr =
> rewriteGroupingFunctionAST(
> getGroupByForClause(qbp, selClauseName),
> expr,
> !cubeRollupGrpSetPresent);
> } {code}
> And it will fall into `convertGrouping` again, since 
> `current.getChildCount() >= 2` is still true. But now it can't find any field 
> present in the group by, because the expression is already `grouping(0, 1)`.
>  





[GitHub] [flink] Vancior commented on a diff in pull request #20190: [FLINK-28426][python] Supports m1 wheel package in PyFlink

2022-07-07 Thread GitBox


Vancior commented on code in PR #20190:
URL: https://github.com/apache/flink/pull/20190#discussion_r916446550


##
tools/azure-pipelines/build-python-wheels.yml:
##
@@ -14,15 +14,9 @@
 # limitations under the License.
 
 jobs:
-  - job: build_wheels
-strategy:
-  matrix:
-linux:
-  vm-label: 'ubuntu-20.04'
-mac:
-  vm-label: 'macOS-10.15'
+  - job: build_wheels_on_Linux
 pool:
-  vmImage: $(vm-label)
+  vmImage: 'ubuntu-20.04'

Review Comment:
   might be `'ubuntu-latest'`



##
flink-python/pyproject.toml:
##
@@ -0,0 +1,32 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+[build-system]
+# Minimum requirements for the build system to execute.
+requires = [
+"packaging==20.5; platform_machine=='arm64'",  # macos M1

Review Comment:
   packaging requirement might be relaxed to `>=20.5`






[GitHub] [flink] flinkbot commented on pull request #20212: [FLINK-28314][runtime-web] introduce "Cluster Environment" tab under history server

2022-07-07 Thread GitBox


flinkbot commented on PR #20212:
URL: https://github.com/apache/flink/pull/20212#issuecomment-1178524952

   
   ## CI report:
   
   * 247628bfcf13dd6ecd1393abbd3cc552c3525231 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] lirui-apache closed pull request #20071: [FLINK-28247][hive] fix "no any field presented in group by" exception when over window contains grouping function in Hive dialect

2022-07-07 Thread GitBox


lirui-apache closed pull request #20071: [FLINK-28247][hive] fix "no any field 
presented in group by" exception when over window contains grouping function in 
Hive dialect
URL: https://github.com/apache/flink/pull/20071





[jira] [Updated] (FLINK-28314) [UI] Introduce "Cluster Environment" tab under history server

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28314:
---
Labels: pull-request-available  (was: )

> [UI] Introduce "Cluster Environment" tab under history server
> -
>
> Key: FLINK-28314
> URL: https://issues.apache.org/jira/browse/FLINK-28314
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.16.0
>Reporter: Junhan Yang
>Assignee: Junhan Yang
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] yangjunhan commented on pull request #20212: [FLINK-28314][runtime-web] introduce "Cluster Environment" tab under history server

2022-07-07 Thread GitBox


yangjunhan commented on PR #20212:
URL: https://github.com/apache/flink/pull/20212#issuecomment-1178520078

   This PR includes UI changes regarding the 
[FLIP](https://cwiki.apache.org/confluence/display/FLINK/FLIP-241%3A+Completed+Jobs+Information+Enhancement).
 Could @simplejason help review this? Thanks!





[GitHub] [flink] yangjunhan opened a new pull request, #20212: [FLINK-28314][runtime-web] introduce "Cluster Environment" tab under history server

2022-07-07 Thread GitBox


yangjunhan opened a new pull request, #20212:
URL: https://github.com/apache/flink/pull/20212

   
   
   ## What is the purpose of the change
   
   Completed Job UI Enhancement based on FLIP-241 
(https://cwiki.apache.org/confluence/display/FLINK/FLIP-241%3A+Completed+Jobs+Information+Enhancement)
 :
   
   - [FLINK-26207](https://issues.apache.org/jira/browse/FLINK-26207)
   - [FLINK-28314](https://issues.apache.org/jira/browse/FLINK-28314)
   
   
   ## Brief change log
   
   - Refactor runtime-web modules by depending on a common injecting config 
object interface (pre-requisite for this UI change)
   - Introduce "Cluster Environment" tab under history server
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? not applicable
   





[GitHub] [flink] RocMarshal commented on pull request #17560: [FLINK-22315][table] Support ADD column/constraint/watermark for ALTER TABLE statement.

2022-07-07 Thread GitBox


RocMarshal commented on PR #17560:
URL: https://github.com/apache/flink/pull/17560#issuecomment-1178509222

   > @RocMarshal Thanks for your contribution, the syntax pr has merged into 
master, you can continue this work.
   
   @lsyldliu Nice work on FLINK-21683! Would you like to take the current ticket?





[GitHub] [flink] flinkbot commented on pull request #20211: [FLINK-28451][hive] Use UserCodeClassloader instead of the current thread's classloader to load function

2022-07-07 Thread GitBox


flinkbot commented on PR #20211:
URL: https://github.com/apache/flink/pull/20211#issuecomment-1178505039

   
   ## CI report:
   
   * d53356a9eb09e70a422297812adeea7528b39be7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] liuzhuang2017 commented on pull request #20208: [hotfix][flink-rpc] Fix the RemoteRpcInvocation class typo.

2022-07-07 Thread GitBox


liuzhuang2017 commented on PR #20208:
URL: https://github.com/apache/flink/pull/20208#issuecomment-1178503599

   @flinkbot run azure





[jira] [Commented] (FLINK-28177) Elasticsearch6DynamicSinkITCase.testWritingDocumentsNoPrimaryKey failed with 503 Service Unavailable

2022-07-07 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564066#comment-17564066
 ] 

Huang Xingbo commented on FLINK-28177:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37817=logs=4eda0b4a-bd0d-521a-0916-8285b9be9bb5=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9

> Elasticsearch6DynamicSinkITCase.testWritingDocumentsNoPrimaryKey failed with 
> 503 Service Unavailable
> 
>
> Key: FLINK-28177
> URL: https://issues.apache.org/jira/browse/FLINK-28177
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 2022-06-21T07:39:23.9065585Z Jun 21 07:39:23 [ERROR] Tests run: 4, Failures: 
> 0, Errors: 2, Skipped: 0, Time elapsed: 43.125 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase
> 2022-06-21T07:39:23.9068457Z Jun 21 07:39:23 [ERROR] 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase.testWritingDocumentsNoPrimaryKey
>   Time elapsed: 8.697 s  <<< ERROR!
> 2022-06-21T07:39:23.9069955Z Jun 21 07:39:23 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
> 2022-06-21T07:39:23.9071135Z Jun 21 07:39:23  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2022-06-21T07:39:23.9072225Z Jun 21 07:39:23  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2022-06-21T07:39:23.9073408Z Jun 21 07:39:23  at 
> org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:118)
> 2022-06-21T07:39:23.9075081Z Jun 21 07:39:23  at 
> org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:81)
> 2022-06-21T07:39:23.9076560Z Jun 21 07:39:23  at 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase.testWritingDocumentsNoPrimaryKey(Elasticsearch6DynamicSinkITCase.java:286)
> 2022-06-21T07:39:23.9078535Z Jun 21 07:39:23  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-06-21T07:39:23.9079534Z Jun 21 07:39:23  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-06-21T07:39:23.9080702Z Jun 21 07:39:23  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-06-21T07:39:23.9081838Z Jun 21 07:39:23  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-06-21T07:39:23.9082942Z Jun 21 07:39:23  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-06-21T07:39:23.9084127Z Jun 21 07:39:23  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-06-21T07:39:23.9085246Z Jun 21 07:39:23  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-06-21T07:39:23.9086380Z Jun 21 07:39:23  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-06-21T07:39:23.9087812Z Jun 21 07:39:23  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-06-21T07:39:23.9088843Z Jun 21 07:39:23  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-06-21T07:39:23.9089823Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-06-21T07:39:23.9103797Z Jun 21 07:39:23  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-06-21T07:39:23.9105022Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-06-21T07:39:23.9106065Z Jun 21 07:39:23  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-06-21T07:39:23.9107500Z Jun 21 07:39:23  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-06-21T07:39:23.9108591Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-06-21T07:39:23.9109575Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-06-21T07:39:23.9110606Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-06-21T07:39:23.9111634Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-06-21T07:39:23.9112653Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-06-21T07:39:23.9113922Z Jun 21 07:39:23  at 
> 

[jira] [Updated] (FLINK-28177) Elasticsearch6DynamicSinkITCase.testWritingDocumentsNoPrimaryKey failed with 503 Service Unavailable

2022-07-07 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-28177:
-
Priority: Critical  (was: Major)

> Elasticsearch6DynamicSinkITCase.testWritingDocumentsNoPrimaryKey failed with 
> 503 Service Unavailable
> 
>
> Key: FLINK-28177
> URL: https://issues.apache.org/jira/browse/FLINK-28177
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-06-21T07:39:23.9065585Z Jun 21 07:39:23 [ERROR] Tests run: 4, Failures: 
> 0, Errors: 2, Skipped: 0, Time elapsed: 43.125 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase
> 2022-06-21T07:39:23.9068457Z Jun 21 07:39:23 [ERROR] 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase.testWritingDocumentsNoPrimaryKey
>   Time elapsed: 8.697 s  <<< ERROR!
> 2022-06-21T07:39:23.9069955Z Jun 21 07:39:23 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
> 2022-06-21T07:39:23.9071135Z Jun 21 07:39:23  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2022-06-21T07:39:23.9072225Z Jun 21 07:39:23  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2022-06-21T07:39:23.9073408Z Jun 21 07:39:23  at 
> org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:118)
> 2022-06-21T07:39:23.9075081Z Jun 21 07:39:23  at 
> org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:81)
> 2022-06-21T07:39:23.9076560Z Jun 21 07:39:23  at 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase.testWritingDocumentsNoPrimaryKey(Elasticsearch6DynamicSinkITCase.java:286)
> 2022-06-21T07:39:23.9078535Z Jun 21 07:39:23  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-06-21T07:39:23.9079534Z Jun 21 07:39:23  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-06-21T07:39:23.9080702Z Jun 21 07:39:23  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-06-21T07:39:23.9081838Z Jun 21 07:39:23  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-06-21T07:39:23.9082942Z Jun 21 07:39:23  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-06-21T07:39:23.9084127Z Jun 21 07:39:23  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-06-21T07:39:23.9085246Z Jun 21 07:39:23  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-06-21T07:39:23.9086380Z Jun 21 07:39:23  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-06-21T07:39:23.9087812Z Jun 21 07:39:23  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-06-21T07:39:23.9088843Z Jun 21 07:39:23  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-06-21T07:39:23.9089823Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-06-21T07:39:23.9103797Z Jun 21 07:39:23  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-06-21T07:39:23.9105022Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-06-21T07:39:23.9106065Z Jun 21 07:39:23  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-06-21T07:39:23.9107500Z Jun 21 07:39:23  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-06-21T07:39:23.9108591Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-06-21T07:39:23.9109575Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-06-21T07:39:23.9110606Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-06-21T07:39:23.9111634Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-06-21T07:39:23.9112653Z Jun 21 07:39:23  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-06-21T07:39:23.9113922Z Jun 21 07:39:23  at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> 2022-06-21T07:39:23.9115083Z Jun 21 07:39:23  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 

[jira] [Updated] (FLINK-28451) HiveFunctionDefinitionFactory load udf using user classloader instead of thread context classloader

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28451:
---
Labels: pull-request-available  (was: )

> HiveFunctionDefinitionFactory load udf using user classloader instead of 
> thread context classloader
> ---
>
> Key: FLINK-28451
> URL: https://issues.apache.org/jira/browse/FLINK-28451
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Currently `HiveFunctionDefinitionFactory` loads user-defined functions using 
> the thread context classloader. After introducing a user classloader to load 
> function classes in FLINK-27659, this may no longer work if the user jar is 
> not on the thread context classloader. The following exception will be thrown:
> {code:java}
>  23478 [main] WARN  org.apache.flink.table.client.cli.CliClient [] - Could 
> not execute SQL 
> statement.org.apache.flink.table.client.gateway.SqlExecutionException: Failed 
> to parse statement: SELECT id, func1(str) FROM (VALUES (1, 'Hello World')) AS 
> T(id, str) ;at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:174)
>  ~[classes/:?]at 
> org.apache.flink.table.client.cli.SqlCommandParserImpl.parseCommand(SqlCommandParserImpl.java:45)
>  ~[classes/:?]at 
> org.apache.flink.table.client.cli.SqlMultiLineParser.parse(SqlMultiLineParser.java:71)
>  ~[classes/:?]at 
> org.jline.reader.impl.LineReaderImpl.acceptLine(LineReaderImpl.java:2964) 
> ~[jline-reader-3.21.0.jar:?]at 
> org.jline.reader.impl.LineReaderImpl$1.apply(LineReaderImpl.java:3778) 
> ~[jline-reader-3.21.0.jar:?]at 
> org.jline.reader.impl.LineReaderImpl.readLine(LineReaderImpl.java:679) 
> ~[jline-reader-3.21.0.jar:?]at 
> org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:296)
>  [classes/:?]at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:281)
>  [classes/:?]at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:229)
>  [classes/:?]at 
> org.apache.flink.table.client.cli.CliClientITCase.runSqlStatements(CliClientITCase.java:172)
>  [test-classes/:?]at 
> org.apache.flink.table.client.cli.CliClientITCase.testSqlStatements(CliClientITCase.java:137)
>  [test-classes/:?]at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) ~[?:1.8.0_291]at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_291]at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_291]at java.lang.reflect.Method.invoke(Method.java:498) 
> ~[?:1.8.0_291]at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> [junit-4.13.2.jar:4.13.2]at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> [classes/:?]at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) 
> [junit-4.13.2.jar:4.13.2]at 
> 
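The classloader change described in the issue can be sketched as follows. This is a minimal, hypothetical illustration, not Flink's actual API: the class and method names below are invented. The point is that the caller passes the user-code ClassLoader explicitly instead of relying on whatever classloader the current thread happens to carry.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class UdfLoaderSketch {
    // Before: relies on whatever ClassLoader the calling thread carries.
    // Fails with ClassNotFoundException if the user jar is not on it.
    static Class<?> loadWithContextClassLoader(String className)
            throws ClassNotFoundException {
        return Class.forName(className, true,
                Thread.currentThread().getContextClassLoader());
    }

    // After: the caller supplies the user-code ClassLoader that actually
    // contains the user's jar, so the lookup succeeds even when the jar
    // is not visible to the thread context classloader.
    static Class<?> loadWithUserClassLoader(String className, ClassLoader userClassLoader)
            throws ClassNotFoundException {
        return Class.forName(className, true, userClassLoader);
    }

    public static void main(String[] args) throws Exception {
        // A stand-in for a user-code classloader; a real one would wrap
        // the URLs of the user's jar files.
        ClassLoader userLoader = new URLClassLoader(new URL[0],
                UdfLoaderSketch.class.getClassLoader());
        System.out.println(
                loadWithUserClassLoader("java.lang.String", userLoader).getName());
    }
}
```

A JDK class resolves through either path; the difference only shows up for a class that exists solely in the user jar.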

[GitHub] [flink] luoyuxia opened a new pull request, #20211: [FLINK-28451][hive] Use UserCodeClassloader instead of the current thread's classloader to load function

2022-07-07 Thread GitBox


luoyuxia opened a new pull request, #20211:
URL: https://github.com/apache/flink/pull/20211

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[jira] [Closed] (FLINK-28442) SplitGenerator should use Ordered Packing

2022-07-07 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-28442.

Resolution: Fixed

master: 38257d6595596c8e7f9c8060ba4ea0b252594139

> SplitGenerator should use Ordered Packing
> -
>
> Key: FLINK-28442
> URL: https://issues.apache.org/jira/browse/FLINK-28442
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> SplitGenerator should use Ordered Packing
> - The full part data in the stream is consumed in an orderly way
> - Ordered batch reads can make subsequent calculations more efficient, such 
> as aggregation by partial primary key groupBy
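The "ordered packing" idea above can be sketched as a simple bin-packing pass that never reorders files: a new split starts only when the current one would overflow, so consumption order is preserved. This is an illustrative sketch with invented names, not the Table Store's actual `SplitGenerator` code.

```java
import java.util.ArrayList;
import java.util.List;

public class OrderedPackingSketch {
    // Packs file sizes into splits in their existing order; a new split
    // (bin) is opened when adding the next file would exceed the target.
    static List<List<Long>> pack(List<Long> fileSizes, long targetSplitSize) {
        List<List<Long>> splits = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        long currentSize = 0;
        for (long size : fileSizes) {
            if (!current.isEmpty() && currentSize + size > targetSplitSize) {
                splits.add(current);          // close the bin; order preserved
                current = new ArrayList<>();
                currentSize = 0;
            }
            current.add(size);
            currentSize += size;
        }
        if (!current.isEmpty()) {
            splits.add(current);
        }
        return splits;
    }

    public static void main(String[] args) {
        // Unordered packing could combine [5,3] and [6,2] more tightly,
        // but ordered packing keeps the files in sequence.
        System.out.println(pack(List.of(5L, 6L, 3L, 2L), 8L));
    }
}
```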



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-28247) Exception will be thrown when over window contains grouping in Hive Dialect

2022-07-07 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li reassigned FLINK-28247:
--

Assignee: luoyuxia

> Exception will be thrown when over window contains grouping in Hive Dialect
> ---
>
> Key: FLINK-28247
> URL: https://issues.apache.org/jira/browse/FLINK-28247
> Project: Flink
>  Issue Type: Sub-task
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> The exception will be reprodued by the following sql when using Hive Dialect:
> {code:java}
> create table t(category int, live int, comments int)
> SELECT grouping(category), lag(live) over(partition by grouping(category)) 
> FROM t GROUP BY category, live; {code}
> The reason is that it will first call 
> `HiveParserCalcitePlanner#genSelectForWindowing` to generate the window, 
> which will then call `HiveParserUtils#rewriteGroupingFunctionAST` to rewrite 
> the grouping function in the over window:
>  
> {code:java}
> // rewrite grouping function
> if (current.getType() == HiveASTParser.TOK_FUNCTION
> && current.getChildCount() >= 2) {
> HiveParserASTNode func = (HiveParserASTNode) current.getChild(0);
> if (func.getText().equals("grouping")) {
> visited.setValue(true);
> convertGrouping(
> current, grpByAstExprs, noneSet, legacyGrouping, found);
> }
> } 
> {code}
>  
> So `grouping(category)` will be converted to `grouping(0, 1)`. 
> After `HiveParserCalcitePlanner#genSelectForWindowing`, it will try to 
> rewrite it again:
>  
> {code:java}
> if (!qbp.getDestToGroupBy().isEmpty()) {
> // Special handling of grouping function
> expr =
> rewriteGroupingFunctionAST(
> getGroupByForClause(qbp, selClauseName),
> expr,
> !cubeRollupGrpSetPresent);
> } {code}
> And it will fall into `convertGrouping` again, as 
> `current.getChildCount() >= 2` is still true. But then it can't find any 
> field present in the GROUP BY clause, because the expression is 
> `grouping(0, 1)` now.
>  
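The double-rewrite failure described above can be reduced to a toy example: a rewrite pass that maps `grouping(<column>)` to group-by ordinals succeeds once, but applying it a second time to the already-rewritten node fails because the ordinal arguments no longer match any GROUP BY field. This is a hypothetical sketch, not the `HiveParserUtils` code.

```java
import java.util.List;

public class GroupingRewriteSketch {
    // Rewrites grouping(<col>) into grouping(<ordinal>), where the
    // ordinal is the column's position in the GROUP BY list.
    static String rewrite(String expr, List<String> groupByCols) {
        if (!expr.startsWith("grouping(")) {
            return expr;
        }
        String arg = expr.substring("grouping(".length(), expr.length() - 1);
        int idx = groupByCols.indexOf(arg);
        if (idx < 0) {
            // A second pass lands here: the argument is already an
            // ordinal, not a GROUP BY column.
            throw new IllegalStateException("field not found in GROUP BY: " + arg);
        }
        return "grouping(" + idx + ")";
    }

    public static void main(String[] args) {
        List<String> groupBy = List.of("category", "live");
        String once = rewrite("grouping(category)", groupBy);
        System.out.println(once);   // rewritten to an ordinal form
        rewrite(once, groupBy);     // second rewrite throws
    }
}
```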





[GitHub] [flink-table-store] JingsongLi merged pull request #206: [FLINK-28442] SplitGenerator should use Ordered Packing

2022-07-07 Thread GitBox


JingsongLi merged PR #206:
URL: https://github.com/apache/flink-table-store/pull/206





[jira] [Commented] (FLINK-27703) FileChannelManagerImplTest.testDirectoriesCleanupOnKillWithoutCallerHook failed with The marker file was not found within 10000 msecs

2022-07-07 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564062#comment-17564062
 ] 

Huang Xingbo commented on FLINK-27703:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37836=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8

> FileChannelManagerImplTest.testDirectoriesCleanupOnKillWithoutCallerHook 
> failed with The marker file was not found within 10000 msecs
> -
>
> Key: FLINK-27703
> URL: https://issues.apache.org/jira/browse/FLINK-27703
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Assignee: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> 2022-05-19T09:08:49.8088232Z May 19 09:08:49 [ERROR] Failures: 
> 2022-05-19T09:08:49.8090850Z May 19 09:08:49 [ERROR]   
> FileChannelManagerImplTest.testDirectoriesCleanupOnKillWithoutCallerHook:97->testDirectoriesCleanupOnKill:127
>  The marker file was not found within 10000 msecs
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35834=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8=9744





[GitHub] [flink] flinkbot commented on pull request #20210: [FLINK-27205][docs-zh] Translate "Concepts -> Glossary" page into Chinese.

2022-07-07 Thread GitBox


flinkbot commented on PR #20210:
URL: https://github.com/apache/flink/pull/20210#issuecomment-1178486408

   
   ## CI report:
   
   * 84db3782d879ab99d18863e7877923e4aa32be38 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] lsyldliu commented on pull request #17560: [FLINK-22315][table] Support ADD column/constraint/watermark for ALTER TABLE statement.

2022-07-07 Thread GitBox


lsyldliu commented on PR #17560:
URL: https://github.com/apache/flink/pull/17560#issuecomment-1178485595

   @RocMarshal Thanks for your contribution. The syntax PR has been merged into 
master, so you can continue this work.





[jira] [Updated] (FLINK-28325) DataOutputSerializer#writeBytes increase position twice

2022-07-07 Thread Li Peng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Peng updated FLINK-28325:

Attachment: image.png

> DataOutputSerializer#writeBytes increase position twice
> ---
>
> Key: FLINK-28325
> URL: https://issues.apache.org/jira/browse/FLINK-28325
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Reporter: huweihua
>Priority: Minor
> Attachments: image-2022-06-30-18-14-50-827.png, 
> image-2022-06-30-18-15-18-590.png, image.png
>
>
> Hi, I was looking at the code and found that DataOutputSerializer.writeBytes 
> increases the position twice. This looks like a bug to me; please let me know 
> if it is intentional.
> org.apache.flink.core.memory.DataOutputSerializer#writeBytes
>  
> !image-2022-06-30-18-14-50-827.png!!image-2022-06-30-18-15-18-590.png!
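The double-increment pattern the report describes can be reproduced with a minimal sketch (invented class, simplified fields — not the actual `DataOutputSerializer` code): each per-byte write already advances `position`, and a trailing `position += len` advances it a second time, leaving a gap of unused bytes in the buffer.

```java
public class DoubleAdvanceSketch {
    private final byte[] buffer = new byte[64];
    int position;

    void writeByte(int b) {
        buffer[position++] = (byte) b;       // advances position by one
    }

    // Buggy pattern: every writeByte call already advances position,
    // so the trailing "position += length" double-counts the bytes.
    void writeBytes(String s) {
        for (int i = 0; i < s.length(); i++) {
            writeByte(s.charAt(i));
        }
        position += s.length();              // redundant second advance
    }

    public static void main(String[] args) {
        DoubleAdvanceSketch d = new DoubleAdvanceSketch();
        d.writeBytes("abc");
        System.out.println(d.position);      // prints 6, not 3
    }
}
```

Dropping either the increment inside `writeByte` or the trailing `position += s.length()` would make the position track the bytes actually written.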





[jira] [Commented] (FLINK-28453) KafkaSourceLegacyITCase.testBrokerFailure hang on azure

2022-07-07 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564059#comment-17564059
 ] 

Huang Xingbo commented on FLINK-28453:
--

cc [~renqs]

> KafkaSourceLegacyITCase.testBrokerFailure hang on azure
> ---
>
> Key: FLINK-28453
> URL: https://issues.apache.org/jira/browse/FLINK-28453
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 2022-07-07T17:55:16.4876240Z "main" #1 prio=5 os_prio=0 
> tid=0x7fd0b000b800 nid=0x258c waiting on condition [0x7fd0b77f2000]
> 2022-07-07T17:55:16.4876830Zjava.lang.Thread.State: WAITING (parking)
> 2022-07-07T17:55:16.4877232Z  at sun.misc.Unsafe.park(Native Method)
> 2022-07-07T17:55:16.4877916Z  - parking to wait for  <0xa46ffae8> (a 
> java.util.concurrent.CompletableFuture$Signaller)
> 2022-07-07T17:55:16.4878495Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2022-07-07T17:55:16.4879100Z  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2022-07-07T17:55:16.4879714Z  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> 2022-07-07T17:55:16.4880318Z  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2022-07-07T17:55:16.4880921Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2022-07-07T17:55:16.4881505Z  at 
> org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:67)
> 2022-07-07T17:55:16.4882182Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runBrokerFailureTest(KafkaConsumerTestBase.java:1506)
> 2022-07-07T17:55:16.4883003Z  at 
> org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase.testBrokerFailure(KafkaSourceLegacyITCase.java:94)
> 2022-07-07T17:55:16.4883648Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-07-07T17:55:16.4884261Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-07-07T17:55:16.4884907Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-07-07T17:55:16.4885487Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-07-07T17:55:16.4886039Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-07-07T17:55:16.4886697Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-07-07T17:55:16.4887406Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-07-07T17:55:16.4888051Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-07-07T17:55:16.4888678Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-07-07T17:55:16.4889330Z  at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnFailureStatement.evaluate(RetryRule.java:135)
> 2022-07-07T17:55:16.4889974Z  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-07-07T17:55:16.4890554Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-07-07T17:55:16.4891101Z  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-07-07T17:55:16.4891705Z  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-07-07T17:55:16.4892306Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-07-07T17:55:16.4892901Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-07-07T17:55:16.4893525Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-07-07T17:55:16.4894166Z  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-07-07T17:55:16.4894712Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-07-07T17:55:16.4895267Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-07-07T17:55:16.4895913Z  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-07-07T17:55:16.4896468Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-07-07T17:55:16.4897105Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-07-07T17:55:16.4897722Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-07-07T17:55:16.4898385Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-07-07T17:55:16.4898967Z  at 
> 

[jira] [Created] (FLINK-28453) KafkaSourceLegacyITCase.testBrokerFailure hang on azure

2022-07-07 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-28453:


 Summary: KafkaSourceLegacyITCase.testBrokerFailure hang on azure
 Key: FLINK-28453
 URL: https://issues.apache.org/jira/browse/FLINK-28453
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.16.0
Reporter: Huang Xingbo



{code:java}
2022-07-07T17:55:16.4876240Z "main" #1 prio=5 os_prio=0 tid=0x7fd0b000b800 
nid=0x258c waiting on condition [0x7fd0b77f2000]
2022-07-07T17:55:16.4876830Zjava.lang.Thread.State: WAITING (parking)
2022-07-07T17:55:16.4877232Zat sun.misc.Unsafe.park(Native Method)
2022-07-07T17:55:16.4877916Z- parking to wait for  <0xa46ffae8> (a 
java.util.concurrent.CompletableFuture$Signaller)
2022-07-07T17:55:16.4878495Zat 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
2022-07-07T17:55:16.4879100Zat 
java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
2022-07-07T17:55:16.4879714Zat 
java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
2022-07-07T17:55:16.4880318Zat 
java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
2022-07-07T17:55:16.4880921Zat 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
2022-07-07T17:55:16.4881505Zat 
org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:67)
2022-07-07T17:55:16.4882182Zat 
org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runBrokerFailureTest(KafkaConsumerTestBase.java:1506)
2022-07-07T17:55:16.4883003Zat 
org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase.testBrokerFailure(KafkaSourceLegacyITCase.java:94)
2022-07-07T17:55:16.4883648Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2022-07-07T17:55:16.4884261Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2022-07-07T17:55:16.4884907Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2022-07-07T17:55:16.4885487Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2022-07-07T17:55:16.4886039Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
2022-07-07T17:55:16.4886697Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2022-07-07T17:55:16.4887406Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
2022-07-07T17:55:16.4888051Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2022-07-07T17:55:16.4888678Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2022-07-07T17:55:16.4889330Zat 
org.apache.flink.testutils.junit.RetryRule$RetryOnFailureStatement.evaluate(RetryRule.java:135)
2022-07-07T17:55:16.4889974Zat 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
2022-07-07T17:55:16.4890554Zat 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
2022-07-07T17:55:16.4891101Zat 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2022-07-07T17:55:16.4891705Zat 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
2022-07-07T17:55:16.4892306Zat 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
2022-07-07T17:55:16.4892901Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
2022-07-07T17:55:16.4893525Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
2022-07-07T17:55:16.4894166Zat 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
2022-07-07T17:55:16.4894712Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
2022-07-07T17:55:16.4895267Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
2022-07-07T17:55:16.4895913Zat 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
2022-07-07T17:55:16.4896468Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
2022-07-07T17:55:16.4897105Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2022-07-07T17:55:16.4897722Zat 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2022-07-07T17:55:16.4898385Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
2022-07-07T17:55:16.4898967Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
2022-07-07T17:55:16.4899507Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2022-07-07T17:55:16.4900014Zat 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2022-07-07T17:55:16.4900557Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
2022-07-07T17:55:16.4901061Zat 

[GitHub] [flink] flinkbot commented on pull request #20209: [FLINK-28307][runtime-web] Completed jobs UI enhancement

2022-07-07 Thread GitBox


flinkbot commented on PR #20209:
URL: https://github.com/apache/flink/pull/20209#issuecomment-1178481593

   
   ## CI report:
   
   * a2672368ed4e20b6ee03ad5545485918e89c9fcf UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:

   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] yangjunhan closed pull request #20209: [FLINK-28307][runtime-web] Completed jobs UI enhancement

2022-07-07 Thread GitBox


yangjunhan closed pull request #20209: [FLINK-28307][runtime-web] Completed 
jobs UI enhancement
URL: https://github.com/apache/flink/pull/20209





[GitHub] [flink] liuzhuang2017 opened a new pull request, #20210: [FLINK-27205][docs-zh] Translate "Concepts -> Glossary" page into Chinese.

2022-07-07 Thread GitBox


liuzhuang2017 opened a new pull request, #20210:
URL: https://github.com/apache/flink/pull/20210

   ## What is the purpose of the change
   
   - Translate Glossary page into Chinese: 
https://nightlies.apache.org/flink/flink-docs-master/zh/docs/concepts/glossary/.
   - The markdown file is located in docs/concepts/glossary.md.
   - In 
https://nightlies.apache.org/flink/flink-docs-master/zh/docs/concepts/glossary/,
 most of the page has already been translated, but a small part remains in 
English. See 
[FLINK-13037](https://issues.apache.org/jira/browse/FLINK-13037) for details.
   
   
   ## Brief change log
   
   - Translate "Concepts -> Glossary" page into Chinese.
   
   
   
   ## Verifying this change
   
   - This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
   - Dependencies (does it add or upgrade a dependency): **no**
   - The public API, i.e., is any changed class annotated with 
@Public(Evolving): **no**
   - The serializers:  **no**
   - The runtime per-record code paths (performance sensitive): **no**
   - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: **no**
   - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**
   





[GitHub] [flink] yangjunhan commented on pull request #20209: [FLINK-28307][runtime-web] Completed jobs UI enhancement

2022-07-07 Thread GitBox


yangjunhan commented on PR #20209:
URL: https://github.com/apache/flink/pull/20209#issuecomment-1178478120

   This PR includes UI changes regarding the [Completed Jobs Information 
Enhancement 
FLIP](https://cwiki.apache.org/confluence/display/FLINK/FLIP-241%3A+Completed+Jobs+Information+Enhancement).
 Could @simplejason help review this? Thanks!





[jira] [Updated] (FLINK-28307) FLIP-241: Completed Jobs Information Enhancement

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28307:
---
Labels: pull-request-available  (was: )

> FLIP-241: Completed Jobs Information Enhancement
> 
>
> Key: FLINK-28307
> URL: https://issues.apache.org/jira/browse/FLINK-28307
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Streaming and Batch users have different interests in probing a job. While 
> streaming users mainly care about the instant status of a running job (tps, 
> delay, backpressure, etc.), batch users care more about the overall job 
> status during the entire execution (queueing / execution time, total data 
> amount, etc.).
> As Flink grows into a unified streaming & batch processor and is adopted by 
> more and more batch users, the experience of inspecting completed jobs has 
> become more important than ever.
> We compared Flink with other popular batch processors and spotted several 
> potential improvements. Most of these changes involve WebUI & REST API 
> changes, which should be discussed and voted on as FLIPs. However, creating 
> separate FLIPs for each improvement might be overkill, because the changes 
> needed by each improvement are quite small. Thus, we include all of these 
> potential improvements in this one FLIP.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] yangjunhan opened a new pull request, #20209: [FLINK-28307][runtime-web] Completed jobs UI enhancement

2022-07-07 Thread GitBox


yangjunhan opened a new pull request, #20209:
URL: https://github.com/apache/flink/pull/20209

   
   
   ## What is the purpose of the change
   
   Completed Job UI Enhancement based on FLIP-241 
(https://cwiki.apache.org/confluence/display/FLINK/FLIP-241%3A+Completed+Jobs+Information+Enhancement)
 and its related jira issues:
   
   - FLINK-26207
   - FLINK-28314
   - FLINK-28315
   - FLINK-28316
   
   ## Brief change log
   
   - Introduce aggregate stats in tables of the subtasks and taskmanagers
   - Add external taskmanager log links in the vertex drawer under history 
server
   - Introduce "Cluster Environment" tab under history server
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   





[GitHub] [flink-kubernetes-operator] SteNicholas commented on pull request #302: [FLINK-28297][FLINK-27914] Improve operator metric groups + Add JOSDK metric integration

2022-07-07 Thread GitBox


SteNicholas commented on PR #302:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/302#issuecomment-1178475779

   @gyfora, thanks for the improvements to the JOSDK metrics. I have verified 
them and they look good to me.
   ```
   -- Counters 
---
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.resource.default.basic-example.JOSDK.Resource.Event.Count:
 1
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.resource.default.basic-example.JOSDK.Reconciliation.finished.Count:
 20
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.resource.default.basic-example.JOSDK.Resource.Event.ADDED.Count:
 1
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.resource.default.basic-example.JOSDK.Reconciliation.Count:
 20
   
   -- Gauges 
-
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.namespace.default.FlinkDeployment.Count:
 1
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.GarbageCollector.G1
 Young Generation.Count: 4
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.Direct.MemoryUsed:
 16777225
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.namespace.default.FlinkDeployment.READY.Count:
 1
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.Mapped.TotalCapacity:
 0
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.Mapped.MemoryUsed:
 0
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.GarbageCollector.G1
 Old Generation.Count: 0
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.Metaspace.Max:
 -1
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.namespace.default.FlinkDeployment.DEPLOYING.Count:
 0
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.Direct.TotalCapacity:
 16777224
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.namespace.default.FlinkDeployment.ERROR.Count:
 0
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.Heap.Max:
 2061500416
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.NonHeap.Max:
 -1
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.Direct.Count:
 5
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.namespace.default.FlinkDeployment.DEPLOYED_NOT_READY.Count:
 0
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.GarbageCollector.G1
 Old Generation.Time: 0
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.NonHeap.Committed:
 63680512
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.Mapped.Count:
 0
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.CPU.Load:
 0.0
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.GarbageCollector.G1
 Young Generation.Time: 89
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.namespace.default.FlinkDeployment.MISSING.Count:
 0
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.NonHeap.Used:
 59893976
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.ClassLoader.ClassesUnloaded:
 6
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.CPU.Time:
 1313000
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.Heap.Committed:
 132120576
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.Memory.Metaspace.Committed:
 43536384
   
flink-kubernetes-operator-59fb9cfd78-2wwfm.k8soperator.default.flink-kubernetes-operator.system.Status.JVM.ClassLoader.ClassesLoaded:
 7894
   

[jira] [Commented] (FLINK-26621) flink-tests failed on azure due to Error occurred in starting fork

2022-07-07 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564056#comment-17564056
 ] 

Huang Xingbo commented on FLINK-26621:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37849=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba

> flink-tests failed on azure due to Error occurred in starting fork
> --
>
> Key: FLINK-26621
> URL: https://issues.apache.org/jira/browse/FLINK-26621
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.4, 1.16.0, 1.15.2
>Reporter: Yun Gao
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> 2022-03-11T16:20:12.6929558Z Mar 11 16:20:12 [WARNING] The requested profile 
> "skip-webui-build" could not be activated because it does not exist.
> 2022-03-11T16:20:12.6939269Z Mar 11 16:20:12 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test 
> (integration-tests) on project flink-tests: There are test failures.
> 2022-03-11T16:20:12.6940062Z Mar 11 16:20:12 [ERROR] 
> 2022-03-11T16:20:12.6940954Z Mar 11 16:20:12 [ERROR] Please refer to 
> /__w/2/s/flink-tests/target/surefire-reports for the individual test results.
> 2022-03-11T16:20:12.6941875Z Mar 11 16:20:12 [ERROR] Please refer to dump 
> files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2022-03-11T16:20:12.6942966Z Mar 11 16:20:12 [ERROR] ExecutionException Error 
> occurred in starting fork, check output in log
> 2022-03-11T16:20:12.6943919Z Mar 11 16:20:12 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException Error occurred in starting fork, check output in log
> 2022-03-11T16:20:12.6945023Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532)
> 2022-03-11T16:20:12.6945878Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479)
> 2022-03-11T16:20:12.6946761Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322)
> 2022-03-11T16:20:12.6947532Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266)
> 2022-03-11T16:20:12.6953051Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314)
> 2022-03-11T16:20:12.6954035Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159)
> 2022-03-11T16:20:12.6954917Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932)
> 2022-03-11T16:20:12.6955749Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 2022-03-11T16:20:12.6956542Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 2022-03-11T16:20:12.6957456Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 2022-03-11T16:20:12.6958232Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> 2022-03-11T16:20:12.6959038Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> 2022-03-11T16:20:12.6960553Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
> 2022-03-11T16:20:12.6962116Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
> 2022-03-11T16:20:12.6963009Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120)
> 2022-03-11T16:20:12.6963737Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355)
> 2022-03-11T16:20:12.6964644Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
> 2022-03-11T16:20:12.6965647Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
> 2022-03-11T16:20:12.6966732Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216)
> 2022-03-11T16:20:12.6967818Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.cli.MavenCli.main(MavenCli.java:160)
> 

[jira] [Commented] (FLINK-26721) PulsarSourceITCase.testSavepoint failed on azure pipeline

2022-07-07 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564055#comment-17564055
 ] 

Huang Xingbo commented on FLINK-26721:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37849=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203

> PulsarSourceITCase.testSavepoint failed on azure pipeline
> -
>
> Key: FLINK-26721
> URL: https://issues.apache.org/jira/browse/FLINK-26721
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0
>Reporter: Yun Gao
>Assignee: Yufan Sheng
>Priority: Blocker
>  Labels: build-stability, test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> Mar 18 05:49:52 [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 315.581 s <<< FAILURE! - in 
> org.apache.flink.connector.pulsar.source.PulsarSourceITCase
> Mar 18 05:49:52 [ERROR] 
> org.apache.flink.connector.pulsar.source.PulsarSourceITCase.testSavepoint(TestEnvironment,
>  DataStreamSourceExternalContext, CheckpointingMode)[1]  Time elapsed: 
> 140.803 s  <<< FAILURE!
> Mar 18 05:49:52 java.lang.AssertionError: 
> Mar 18 05:49:52 
> Mar 18 05:49:52 Expecting
> Mar 18 05:49:52   
> Mar 18 05:49:52 to be completed within 2M.
> Mar 18 05:49:52 
> Mar 18 05:49:52 exception caught while trying to get the future result: 
> java.util.concurrent.TimeoutException
> Mar 18 05:49:52   at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
> Mar 18 05:49:52   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> Mar 18 05:49:52   at 
> org.assertj.core.internal.Futures.assertSucceededWithin(Futures.java:109)
> Mar 18 05:49:52   at 
> org.assertj.core.api.AbstractCompletableFutureAssert.internalSucceedsWithin(AbstractCompletableFutureAssert.java:400)
> Mar 18 05:49:52   at 
> org.assertj.core.api.AbstractCompletableFutureAssert.succeedsWithin(AbstractCompletableFutureAssert.java:396)
> Mar 18 05:49:52   at 
> org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.checkResultWithSemantic(SourceTestSuiteBase.java:766)
> Mar 18 05:49:52   at 
> org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.restartFromSavepoint(SourceTestSuiteBase.java:399)
> Mar 18 05:49:52   at 
> org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.testSavepoint(SourceTestSuiteBase.java:241)
> Mar 18 05:49:52   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 18 05:49:52   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 18 05:49:52   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 18 05:49:52   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 18 05:49:52   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> Mar 18 05:49:52   at 
> 

[jira] [Updated] (FLINK-28388) Python doc build breaking nightly docs

2022-07-07 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-28388:

Affects Version/s: 1.16.0
   (was: shaded-7.0)

> Python doc build breaking nightly docs
> --
>
> Key: FLINK-28388
> URL: https://issues.apache.org/jira/browse/FLINK-28388
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Documentation
>Affects Versions: 1.16.0
>Reporter: Márton Balassi
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> For the past 5 days the nightly doc builds via GHA are broken:
> https://github.com/apache/flink/actions/workflows/docs.yml
> {noformat}
> Exception occurred:
>   File "/root/flink/flink-python/pyflink/java_gateway.py", line 86, in 
> launch_gateway
> raise Exception("It's launching the PythonGatewayServer during Python UDF 
> execution "
> Exception: It's launching the PythonGatewayServer during Python UDF execution 
> which is unexpected. It usually happens when the job codes are in the top 
> level of the Python script file and are not enclosed in a `if name == 'main'` 
> statement.
> The full traceback has been saved in /tmp/sphinx-err-3thh_wi2.log, if you 
> want to report the issue to the developers.
> Please also report this if it was a user error, so that a better error 
> message can be provided next time.
> A bug report can be filed in the tracker at 
> . Thanks!
> Makefile:76: recipe for target 'html' failed
> make: *** [html] Error 2
> ==sphinx checks... [FAILED]===
> Error: Process completed with exit code 1.
> {noformat}
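The guard that this error message asks for can be sketched as follows (a minimal illustration; `build_job` and its body are hypothetical placeholders, not pyflink code). Module-level job code runs whenever a tool such as Sphinx imports the file; wrapping it in a `__main__` guard prevents that:

```python
# Minimal sketch of the guard the error message refers to.
# `build_job` is a hypothetical placeholder, not actual pyflink code.

def build_job():
    # job-definition code would go here
    return "job defined"

if __name__ == '__main__':
    # runs only when the script is executed directly,
    # not when Sphinx (or anything else) imports the module
    print(build_job())
```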





[GitHub] [flink] PatrickRen commented on a diff in pull request #20205: [FLINK-28250][Connector/Kafka] Fix exactly-once Kafka Sink out of memory

2022-07-07 Thread GitBox


PatrickRen commented on code in PR #20205:
URL: https://github.com/apache/flink/pull/20205#discussion_r916412313


##
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaCommitter.java:
##
@@ -112,6 +112,8 @@ public void 
commit(Collection> requests)
 e);
 recyclable.ifPresent(Recyclable::close);

Review Comment:
   What about removing this line, since the `recyclable` will be closed 
eventually anyway?
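The reviewer's point — that an explicit close on the error path is redundant when the resource is unconditionally closed later — can be sketched as follows (illustrative Python under stated assumptions, not Flink's actual `KafkaCommitter` code; the `Recyclable` and `commit` names here are hypothetical stand-ins):

```python
# Illustrative sketch, not Flink's actual KafkaCommitter: a single
# unconditional close in `finally` covers both success and error paths,
# making an extra close() inside the error handler redundant.

class Recyclable:
    def __init__(self):
        self.closed = False

    def close(self):
        # idempotent: calling close() twice is harmless
        self.closed = True

def commit(recyclable, do_commit):
    try:
        do_commit()
    except RuntimeError:
        # error handling would go here; no explicit close() needed
        pass
    finally:
        recyclable.close()
```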






[jira] [Commented] (FLINK-27703) FileChannelManagerImplTest.testDirectoriesCleanupOnKillWithoutCallerHook failed with The marker file was not found within 10000 msecs

2022-07-07 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564053#comment-17564053
 ] 

Huang Xingbo commented on FLINK-27703:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37849=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8

> FileChannelManagerImplTest.testDirectoriesCleanupOnKillWithoutCallerHook 
> failed with The marker file was not found within 10000 msecs
> -
>
> Key: FLINK-27703
> URL: https://issues.apache.org/jira/browse/FLINK-27703
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Assignee: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> 2022-05-19T09:08:49.8088232Z May 19 09:08:49 [ERROR] Failures: 
> 2022-05-19T09:08:49.8090850Z May 19 09:08:49 [ERROR]   
> FileChannelManagerImplTest.testDirectoriesCleanupOnKillWithoutCallerHook:97->testDirectoriesCleanupOnKill:127
>  The marker file was not found within 10000 msecs
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35834=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8=9744





[jira] [Closed] (FLINK-28441) The parsing result of configuration `pipeline.cached-files` is incorrect

2022-07-07 Thread xiaozilong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaozilong closed FLINK-28441.
--
Resolution: Invalid

> The parsing result of configuration `pipeline.cached-files` is incorrect
> 
>
> Key: FLINK-28441
> URL: https://issues.apache.org/jira/browse/FLINK-28441
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.13.0
>Reporter: xiaozilong
>Priority: Major
>
> The configuration is parsed by the method 
> `org.apache.flink.configuration.ConfigurationUtils#parseMap`, which splits 
> each entry on ':'. Unfortunately, when the value itself contains the 
> character ':', the parsing result is incorrect.
> e.g.
> input: name:file2,path:hdfs:///tmp/file2
> output: [path, hdfs, ///tmp/file2]
> (showing only the entry whose value contains the character ':')
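The pitfall described above can be reproduced with a small sketch (illustrative Python, not Flink's actual `parseMap` implementation): splitting on every ':' breaks values that themselves contain colons, while splitting only on the first ':' preserves them.

```python
# Illustrative sketch of the reported pitfall, not Flink's actual parseMap.

def parse_entry_naive(entry):
    # splits on every ':' -- a value like an HDFS URI falls apart
    return entry.split(":")

def parse_entry_limited(entry):
    # split only on the first ':' so the value may contain colons
    key, _, value = entry.partition(":")
    return {key: value}

print(parse_entry_naive("path:hdfs:///tmp/file2"))    # ['path', 'hdfs', '///tmp/file2']
print(parse_entry_limited("path:hdfs:///tmp/file2"))  # {'path': 'hdfs:///tmp/file2'}
```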





[jira] [Closed] (FLINK-28388) Python doc build breaking nightly docs

2022-07-07 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-28388.
---
Fix Version/s: 1.16.0
   Resolution: Fixed

Fixed in master via 974fea145cc6189eae3410203339aa2950dbbd4d

> Python doc build breaking nightly docs
> --
>
> Key: FLINK-28388
> URL: https://issues.apache.org/jira/browse/FLINK-28388
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Documentation
>Affects Versions: shaded-7.0
>Reporter: Márton Balassi
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> For the past 5 days the nightly doc builds via GHA are broken:
> https://github.com/apache/flink/actions/workflows/docs.yml
> {noformat}
> Exception occurred:
>   File "/root/flink/flink-python/pyflink/java_gateway.py", line 86, in 
> launch_gateway
> raise Exception("It's launching the PythonGatewayServer during Python UDF 
> execution "
> Exception: It's launching the PythonGatewayServer during Python UDF execution 
> which is unexpected. It usually happens when the job codes are in the top 
> level of the Python script file and are not enclosed in a `if name == 'main'` 
> statement.
> The full traceback has been saved in /tmp/sphinx-err-3thh_wi2.log, if you 
> want to report the issue to the developers.
> Please also report this if it was a user error, so that a better error 
> message can be provided next time.
> A bug report can be filed in the tracker at 
> . Thanks!
> Makefile:76: recipe for target 'html' failed
> make: *** [html] Error 2
> ==sphinx checks... [FAILED]===
> Error: Process completed with exit code 1.
> {noformat}





[jira] [Commented] (FLINK-26621) flink-tests failed on azure due to Error occurred in starting fork

2022-07-07 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17564052#comment-17564052
 ] 

Huang Xingbo commented on FLINK-26621:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37863&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba

> flink-tests failed on azure due to Error occurred in starting fork
> --
>
> Key: FLINK-26621
> URL: https://issues.apache.org/jira/browse/FLINK-26621
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.4, 1.16.0, 1.15.2
>Reporter: Yun Gao
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> 2022-03-11T16:20:12.6929558Z Mar 11 16:20:12 [WARNING] The requested profile 
> "skip-webui-build" could not be activated because it does not exist.
> 2022-03-11T16:20:12.6939269Z Mar 11 16:20:12 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test 
> (integration-tests) on project flink-tests: There are test failures.
> 2022-03-11T16:20:12.6940062Z Mar 11 16:20:12 [ERROR] 
> 2022-03-11T16:20:12.6940954Z Mar 11 16:20:12 [ERROR] Please refer to 
> /__w/2/s/flink-tests/target/surefire-reports for the individual test results.
> 2022-03-11T16:20:12.6941875Z Mar 11 16:20:12 [ERROR] Please refer to dump 
> files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2022-03-11T16:20:12.6942966Z Mar 11 16:20:12 [ERROR] ExecutionException Error 
> occurred in starting fork, check output in log
> 2022-03-11T16:20:12.6943919Z Mar 11 16:20:12 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException Error occurred in starting fork, check output in log
> 2022-03-11T16:20:12.6945023Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532)
> 2022-03-11T16:20:12.6945878Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479)
> 2022-03-11T16:20:12.6946761Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322)
> 2022-03-11T16:20:12.6947532Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266)
> 2022-03-11T16:20:12.6953051Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314)
> 2022-03-11T16:20:12.6954035Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159)
> 2022-03-11T16:20:12.6954917Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932)
> 2022-03-11T16:20:12.6955749Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 2022-03-11T16:20:12.6956542Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 2022-03-11T16:20:12.6957456Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 2022-03-11T16:20:12.6958232Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> 2022-03-11T16:20:12.6959038Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> 2022-03-11T16:20:12.6960553Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
> 2022-03-11T16:20:12.6962116Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
> 2022-03-11T16:20:12.6963009Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120)
> 2022-03-11T16:20:12.6963737Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355)
> 2022-03-11T16:20:12.6964644Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
> 2022-03-11T16:20:12.6965647Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
> 2022-03-11T16:20:12.6966732Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216)
> 2022-03-11T16:20:12.6967818Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.cli.MavenCli.main(MavenCli.java:160)
> 

[GitHub] [flink] dianfu closed pull request #20189: [FLINK-28388][docs][tests] Fix documentation build

2022-07-07 Thread GitBox


dianfu closed pull request #20189: [FLINK-28388][docs][tests] Fix documentation 
build
URL: https://github.com/apache/flink/pull/20189


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-28451) HiveFunctionDefinitionFactory load udf using user classloader instead of thread context classloader

2022-07-07 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17564051#comment-17564051
 ] 

luoyuxia edited comment on FLINK-28451 at 7/8/22 2:28 AM:
--

[~lsy] Thanks for reporting it. I'll fix it. Previously, I also had a similar 
issue FLINK-28430 .


was (Author: luoyuxia):
[~lsy] Thanks for reporting it. I'll fix it. Previously, I also had a similar 
issue [FLINK-28430|https://issues.apache.org/jira/browse/FLINK-28430] .

> HiveFunctionDefinitionFactory load udf using user classloader instead of 
> thread context classloader
> ---
>
> Key: FLINK-28451
> URL: https://issues.apache.org/jira/browse/FLINK-28451
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.16.0
>
>
> Currently `HiveFunctionDefinitionFactory` load user defined function using 
> thread context classloader, after we introducing user classloader to load 
> function class from FLINK-27659, this maybe doesn't work if the user jar not 
> in thread classloader. The following exception will be thrown:
> {code:java}
>  23478 [main] WARN  org.apache.flink.table.client.cli.CliClient [] - Could 
> not execute SQL 
> statement.org.apache.flink.table.client.gateway.SqlExecutionException: Failed 
> to parse statement: SELECT id, func1(str) FROM (VALUES (1, 'Hello World')) AS 
> T(id, str) ;at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:174)
>  ~[classes/:?]at 
> org.apache.flink.table.client.cli.SqlCommandParserImpl.parseCommand(SqlCommandParserImpl.java:45)
>  ~[classes/:?]at 
> org.apache.flink.table.client.cli.SqlMultiLineParser.parse(SqlMultiLineParser.java:71)
>  ~[classes/:?]at 
> org.jline.reader.impl.LineReaderImpl.acceptLine(LineReaderImpl.java:2964) 
> ~[jline-reader-3.21.0.jar:?]at 
> org.jline.reader.impl.LineReaderImpl$1.apply(LineReaderImpl.java:3778) 
> ~[jline-reader-3.21.0.jar:?]at 
> org.jline.reader.impl.LineReaderImpl.readLine(LineReaderImpl.java:679) 
> ~[jline-reader-3.21.0.jar:?]at 
> org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:296)
>  [classes/:?]at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:281)
>  [classes/:?]at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:229)
>  [classes/:?]at 
> org.apache.flink.table.client.cli.CliClientITCase.runSqlStatements(CliClientITCase.java:172)
>  [test-classes/:?]at 
> org.apache.flink.table.client.cli.CliClientITCase.testSqlStatements(CliClientITCase.java:137)
>  [test-classes/:?]at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) ~[?:1.8.0_291]at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_291]at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_291]at java.lang.reflect.Method.invoke(Method.java:498) 
> ~[?:1.8.0_291]at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> [junit-4.13.2.jar:4.13.2]at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> [classes/:?]at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) 
> [junit-4.13.2.jar:4.13.2]at 
> 

[jira] [Commented] (FLINK-28451) HiveFunctionDefinitionFactory load udf using user classloader instead of thread context classloader

2022-07-07 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17564051#comment-17564051
 ] 

luoyuxia commented on FLINK-28451:
--

[~lsy] Thanks for reporting it. I'll fix it. Previously, I also had a similar 
issue [FLINK-28430|https://issues.apache.org/jira/browse/FLINK-28430] .

> HiveFunctionDefinitionFactory load udf using user classloader instead of 
> thread context classloader
> ---
>
> Key: FLINK-28451
> URL: https://issues.apache.org/jira/browse/FLINK-28451
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.16.0
>
>
> Currently `HiveFunctionDefinitionFactory` load user defined function using 
> thread context classloader, after we introducing user classloader to load 
> function class from FLINK-27659, this maybe doesn't work if the user jar not 
> in thread classloader. The following exception will be thrown:
> {code:java}
>  23478 [main] WARN  org.apache.flink.table.client.cli.CliClient [] - Could 
> not execute SQL 
> statement.org.apache.flink.table.client.gateway.SqlExecutionException: Failed 
> to parse statement: SELECT id, func1(str) FROM (VALUES (1, 'Hello World')) AS 
> T(id, str) ;at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:174)
>  ~[classes/:?]at 
> org.apache.flink.table.client.cli.SqlCommandParserImpl.parseCommand(SqlCommandParserImpl.java:45)
>  ~[classes/:?]at 
> org.apache.flink.table.client.cli.SqlMultiLineParser.parse(SqlMultiLineParser.java:71)
>  ~[classes/:?]at 
> org.jline.reader.impl.LineReaderImpl.acceptLine(LineReaderImpl.java:2964) 
> ~[jline-reader-3.21.0.jar:?]at 
> org.jline.reader.impl.LineReaderImpl$1.apply(LineReaderImpl.java:3778) 
> ~[jline-reader-3.21.0.jar:?]at 
> org.jline.reader.impl.LineReaderImpl.readLine(LineReaderImpl.java:679) 
> ~[jline-reader-3.21.0.jar:?]at 
> org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:296)
>  [classes/:?]at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:281)
>  [classes/:?]at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:229)
>  [classes/:?]at 
> org.apache.flink.table.client.cli.CliClientITCase.runSqlStatements(CliClientITCase.java:172)
>  [test-classes/:?]at 
> org.apache.flink.table.client.cli.CliClientITCase.testSqlStatements(CliClientITCase.java:137)
>  [test-classes/:?]at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) ~[?:1.8.0_291]at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_291]at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_291]at java.lang.reflect.Method.invoke(Method.java:498) 
> ~[?:1.8.0_291]at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> [junit-4.13.2.jar:4.13.2]at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> [classes/:?]at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) 
> [junit-4.13.2.jar:4.13.2]at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) 
> [junit-4.13.2.jar:4.13.2]at 
> 

[GitHub] [flink-docker] Myasuka commented on a diff in pull request #117: [FLINK-28057] Fix LD_PRELOAD on ARM images

2022-07-07 Thread GitBox


Myasuka commented on code in PR #117:
URL: https://github.com/apache/flink-docker/pull/117#discussion_r916413243


##
1.15/scala_2.12-java11-debian/docker-entrypoint.sh:
##
@@ -91,7 +91,12 @@ prepare_configuration() {
 
 maybe_enable_jemalloc() {
 if [ "${DISABLE_JEMALLOC:-false}" == "false" ]; then
-export LD_PRELOAD=$LD_PRELOAD:/usr/lib/x86_64-linux-gnu/libjemalloc.so
+# Maybe use export LD_PRELOAD=$LD_PRELOAD:/usr/lib/$(uname -i)-linux-gnu/libjemalloc.so
+if [[ `uname -i` == 'aarch64' ]]; then
+export LD_PRELOAD=$LD_PRELOAD:/usr/lib/aarch64-linux-gnu/libjemalloc.so

Review Comment:
   Based on current comments, we can use `uname -m` to get the machine hardware 
name, and check whether the file existed. If not, we fall back to default 
location `/usr/lib/x86_64-linux-gnu/libjemalloc.so`.
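The approach suggested in this comment can be sketched as follows (a hypothetical illustration, not the merged flink-docker code): derive the jemalloc path from the machine hardware name reported by `uname -m`, checking for file existence and falling back to the default x86_64 location.

```shell
# Hypothetical sketch of the review suggestion above: derive the jemalloc
# path from the machine hardware name (`uname -m`) and fall back to the
# default x86_64 location when the architecture-specific file is absent.
maybe_enable_jemalloc() {
    if [ "${DISABLE_JEMALLOC:-false}" = "false" ]; then
        JEMALLOC_PATH="/usr/lib/$(uname -m)-linux-gnu/libjemalloc.so"
        if [ ! -f "$JEMALLOC_PATH" ]; then
            # Fall back to the default location used by the x86_64 images.
            JEMALLOC_PATH="/usr/lib/x86_64-linux-gnu/libjemalloc.so"
        fi
        export LD_PRELOAD="$LD_PRELOAD:$JEMALLOC_PATH"
    fi
}
```

Unlike hard-coding per-architecture branches, this keeps a single code path for x86_64, aarch64, and any other architecture Debian multiarch supports.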






[GitHub] [flink-docker] Myasuka commented on a diff in pull request #117: [FLINK-28057] Fix LD_PRELOAD on ARM images

2022-07-07 Thread GitBox


Myasuka commented on code in PR #117:
URL: https://github.com/apache/flink-docker/pull/117#discussion_r916412298


##
docker-entrypoint.sh:
##
@@ -91,7 +91,12 @@ prepare_configuration() {
 
 maybe_enable_jemalloc() {
 if [ "${DISABLE_JEMALLOC:-false}" == "false" ]; then
-export LD_PRELOAD=$LD_PRELOAD:/usr/lib/x86_64-linux-gnu/libjemalloc.so
+# Maybe use export LD_PRELOAD=$LD_PRELOAD:/usr/lib/$(uname -i)-linux-gnu/libjemalloc.so
+if [[ `uname -i` == 'aarch64' ]]; then
+export LD_PRELOAD=$LD_PRELOAD:/usr/lib/aarch64-linux-gnu/libjemalloc.so

Review Comment:
   From the description of 
[uname](https://www.gnu.org/software/coreutils/manual/html_node/uname-invocation.html#uname-invocation),
 we can see `uname -m` is a better choice.
   






[GitHub] [flink] sunshineJK commented on a diff in pull request #20127: [FLINK-26270] Flink SQL write data to kafka by CSV format , whether d…

2022-07-07 Thread GitBox


sunshineJK commented on code in PR #20127:
URL: https://github.com/apache/flink/pull/20127#discussion_r916412128


##
flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvFormatOptions.java:
##
@@ -87,5 +87,12 @@ public class CsvFormatOptions {
 "Optional null literal string that is interpreted 
as a\n"
 + "null value (disabled by default)");
 
+public static final ConfigOption<Boolean> DISABLE_WRITE_BIGDECIMAL_AS_SCIENTIFIC_NOTATION =
+ConfigOptions.key("disable-write-bigdecimal-as-scientific-notation")
+.booleanType()
+.defaultValue(false)
+.withDescription(
+"Disabled representation of data of type Bigdecimal as scientific notation (default is false, which requires conversion to scientific notation), e.g. A Bigdecimal number of 100000, If false the result is '1E+5', if true the result is 100000.");
+
 private CsvFormatOptions() {}

Review Comment:
   Hi @afedulov, I've changed this parameter and now the default is false.
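For context on what the option toggles, `java.math.BigDecimal` itself already distinguishes the two renderings; a minimal, self-contained illustration (not Flink code):

```java
import java.math.BigDecimal;

// Minimal illustration (not Flink code) of the two renderings the option
// above chooses between: BigDecimal.toString() may use scientific
// notation, while toPlainString() always writes the plain form.
public class BigDecimalNotationDemo {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("1E+5");
        System.out.println(value.toString());      // prints "1E+5"
        System.out.println(value.toPlainString()); // prints "100000"
    }
}
```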






[GitHub] [flink] zoltar9264 commented on pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …

2022-07-07 Thread GitBox


zoltar9264 commented on PR #20152:
URL: https://github.com/apache/flink/pull/20152#issuecomment-1178465547

   Thanks a lot for your advice @rkhachatryan , I feel like I can learn a lot 
from it. Since there are still some personal matters this week, I will be 
considering your suggestions one by one next week. Thanks again !





[GitHub] [flink] flinkbot commented on pull request #20208: [hotfix][flink-rpc] Fix the RemoteRpcInvocation class typo.

2022-07-07 Thread GitBox


flinkbot commented on PR #20208:
URL: https://github.com/apache/flink/pull/20208#issuecomment-1178458237

   
   ## CI report:
   
   * 52f8097b674005819e871edbfc66d8cfb9989c9d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] liuzhuang2017 commented on pull request #20208: [hotfix][flink-rpc] Fix the RemoteRpcInvocation class typo.

2022-07-07 Thread GitBox


liuzhuang2017 commented on PR #20208:
URL: https://github.com/apache/flink/pull/20208#issuecomment-1178457848

   @zentol , Sorry to bother you, can you help me review this pull request? 
Thank you.





[GitHub] [flink-kubernetes-operator] SteNicholas commented on a diff in pull request #302: [FLINK-28297][FLINK-27914] Improve operator metric groups + Add JOSDK metric integration

2022-07-07 Thread GitBox


SteNicholas commented on code in PR #302:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/302#discussion_r916401412


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/KubernetesOperatorMetricOptions.java:
##
@@ -23,8 +23,22 @@
 /** Configuration options for metrics. */
 public class KubernetesOperatorMetricOptions {
 public static final ConfigOption<String> SCOPE_NAMING_KUBERNETES_OPERATOR =
-ConfigOptions.key("metrics.scope.k8soperator")
-.defaultValue("<host>.k8soperator.<namespace>.<name>")
+ConfigOptions.key("metrics.scope.k8soperator.system")

Review Comment:
   Does the `KubernetesOperatorMetricOptions` need to generate the 
configuration document in `flink-kubernetes-docs`?



##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/OperatorJosdkMetrics.java:
##
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.operator.metrics;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.metrics.Counter;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.runtime.metrics.MetricRegistry;
+
+import io.javaoperatorsdk.operator.api.monitoring.Metrics;
+import io.javaoperatorsdk.operator.api.reconciler.RetryInfo;
+import io.javaoperatorsdk.operator.processing.event.Event;
+import io.javaoperatorsdk.operator.processing.event.ResourceID;
+import 
io.javaoperatorsdk.operator.processing.event.source.controller.ResourceEvent;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Implementation of {@link Metrics} to monitor and forward JOSDK metrics to 
{@link MetricRegistry}.
+ */
+public class OperatorJosdkMetrics implements Metrics {

Review Comment:
   Why doesn't the implementation override the `timeControllerExecution` interface? 
Don't we need to support metrics for the execution?






[GitHub] [flink] liuzhuang2017 commented on pull request #20208: [hotfix][flink-rpc] Fix the RemoteRpcInvocation class typo.

2022-07-07 Thread GitBox


liuzhuang2017 commented on PR #20208:
URL: https://github.com/apache/flink/pull/20208#issuecomment-1178456083

   @flinkbot run azure





[GitHub] [flink] liuzhuang2017 opened a new pull request, #20208: [hotfix][flink-rpc] Fix the RemoteRpcInvocation class typo.

2022-07-07 Thread GitBox


liuzhuang2017 opened a new pull request, #20208:
URL: https://github.com/apache/flink/pull/20208

   
   ## What is the purpose of the change
   
   - Change the keyword `th` to `the`.
   
   
   ## Brief change log
   
   - Change the keyword `th` to `the`.
   
   
   ## Verifying this change
   
   - No need to test.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[GitHub] [flink] luoyuxia commented on pull request #19727: [FLINK-27618][sql] Flink supports CUME_DIST function

2022-07-07 Thread GitBox


luoyuxia commented on PR #19727:
URL: https://github.com/apache/flink/pull/19727#issuecomment-1178453567

   @flinkbot run azure





[GitHub] [flink] zoltar9264 commented on pull request #20093: [FLINK-28172][changelog] Scatter dstl files into separate directories…

2022-07-07 Thread GitBox


zoltar9264 commented on PR #20093:
URL: https://github.com/apache/flink/pull/20093#issuecomment-1178453865

   Thanks @rkhachatryan . Done.





[GitHub] [flink] JesseAtSZ commented on pull request #20091: [FLINK-27570][runtime] Fix initialize base locations for checkpoint

2022-07-07 Thread GitBox


JesseAtSZ commented on PR #20091:
URL: https://github.com/apache/flink/pull/20091#issuecomment-1178450932

   @rkhachatryan If we modify `FINALIZE_CHECKPOINT_FAILURE` has a great impact, 
we can change `initializeBaseLocationsForCheckpoint` first.





[GitHub] [flink] flinkbot commented on pull request #20207: Update generating_watermarks.md

2022-07-07 Thread GitBox


flinkbot commented on PR #20207:
URL: https://github.com/apache/flink/pull/20207#issuecomment-1178449071

   
   ## CI report:
   
   * 11628bdae7022960a7700d4c710225327d5b2675 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] wingwuyao opened a new pull request, #20207: Update generating_watermarks.md

2022-07-07 Thread GitBox


wingwuyao opened a new pull request, #20207:
URL: https://github.com/apache/flink/pull/20207

   There seems to be a problem with the spelling of 'punctuated'; this PR fixes that.
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[jira] [Closed] (FLINK-28439) Support SUBSTR and REGEXP built-in function in Table API

2022-07-07 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-28439.
---
  Assignee: LuNing Wang
Resolution: Fixed

Merged to master via 90c8284b10ecad84d2c954efe0a98e8fe87cf1dd

> Support SUBSTR and REGEXP built-in function in Table API 
> -
>
> Key: FLINK-28439
> URL: https://issues.apache.org/jira/browse/FLINK-28439
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Reporter: LuNing Wang
>Assignee: LuNing Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] dianfu merged pull request #20201: [FLINK-28439][table][python] Support SUBSTR and REGEXP built-in function in Table API

2022-07-07 Thread GitBox


dianfu merged PR #20201:
URL: https://github.com/apache/flink/pull/20201





[GitHub] [flink] wingwuyao closed pull request #20199: Update and rename docs/content/docs/dev/datastream/event-time/generat…

2022-07-07 Thread GitBox


wingwuyao closed pull request #20199: Update and rename 
docs/content/docs/dev/datastream/event-time/generat…
URL: https://github.com/apache/flink/pull/20199





[GitHub] [flink] lincoln-lil commented on pull request #19983: [FLINK-27878][datastream] Add Retry Support For Async I/O In DataStream API

2022-07-07 Thread GitBox


lincoln-lil commented on PR #19983:
URL: https://github.com/apache/flink/pull/19983#issuecomment-1178431723

   @flinkbot run azure
   





[jira] [Updated] (FLINK-28452) "Reduce" and "aggregate" operations of "window_all" are supported in Python datastream API

2022-07-07 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-28452:

Affects Version/s: (was: 1.15.1)

> "Reduce" and "aggregate" operations of "window_all" are supported in Python 
> datastream API
> --
>
> Key: FLINK-28452
> URL: https://issues.apache.org/jira/browse/FLINK-28452
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream, API / Python
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
> Fix For: 1.16.0
>
>
> The window allocator based on "window_all" adds "reduce" and "aggregate" 
> operations to align with the reduce and aggregate operations already 
> supported in the Java API.





[jira] [Assigned] (FLINK-28452) "Reduce" and "aggregate" operations of "window_all" are supported in Python datastream API

2022-07-07 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-28452:
---

Assignee: zhangjingcun

> "Reduce" and "aggregate" operations of "window_all" are supported in Python 
> datastream API
> --
>
> Key: FLINK-28452
> URL: https://issues.apache.org/jira/browse/FLINK-28452
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream, API / Python
>Affects Versions: 1.15.1
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
> Fix For: 1.16.0
>
>
> The window allocator based on "window_all" adds "reduce" and "aggregate" 
> operations to align with the reduce and aggregate operations already 
> supported in the Java API.





[jira] [Created] (FLINK-28452) "Reduce" and "aggregate" operations of "window_all" are supported in Python datastream API

2022-07-07 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-28452:


 Summary: "Reduce" and "aggregate" operations of "window_all" are 
supported in Python datastream API
 Key: FLINK-28452
 URL: https://issues.apache.org/jira/browse/FLINK-28452
 Project: Flink
  Issue Type: New Feature
  Components: API / DataStream, API / Python
Affects Versions: 1.15.1
Reporter: zhangjingcun
 Fix For: 1.16.0


The window allocator based on "window_all" adds "reduce" and "aggregate" 
operations to align with the reduce and aggregate operations already supported 
in the Java API.
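The semantics being aligned here can be sketched in plain Java without any Flink dependency — a minimal, illustrative model of what a non-keyed tumbling-window reduce computes (the class and method names below are invented for this sketch, not the actual DataStream or PyFlink API):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal model of a non-keyed ("window_all") tumbling-window reduce:
// every element of the stream, regardless of key, is bucketed by
// (timestamp / windowSize) and each bucket is folded with a reduce function.
public class WindowAllReduceSketch {

    /** events are {timestampMillis, value} pairs; returns windowIndex -> reduced sum. */
    public static Map<Long, Long> reduceByWindow(List<long[]> events, long windowSizeMillis) {
        Map<Long, Long> reduced = new LinkedHashMap<>();
        for (long[] e : events) {
            long window = e[0] / windowSizeMillis;   // tumbling window index
            reduced.merge(window, e[1], Long::sum);  // the reduce function: sum
        }
        return reduced;
    }

    public static void main(String[] args) {
        List<long[]> events = List.of(
                new long[] {1000L, 1L},
                new long[] {2000L, 2L},    // same 5s window as the first element
                new long[] {6000L, 10L});  // next window
        System.out.println(reduceByWindow(events, 5000L)); // {0=3, 1=10}
    }
}
```

The sketch only illustrates the bucket-and-fold semantics; in the actual APIs the windowing, triggering, and state handling are done by the runtime.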





[GitHub] [flink] flinkbot commented on pull request #20206: [FLINK-25909][runtime][security] Add HBaseDelegationTokenProvider

2022-07-07 Thread GitBox


flinkbot commented on PR #20206:
URL: https://github.com/apache/flink/pull/20206#issuecomment-1178425468

   
   ## CI report:
   
   * 9ccb31920dd2de1f7c4aedce50fd1e194fc5ac80 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-25909) Move HBase token obtain functionality into HBaseDelegationTokenProvider

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25909:
---
Labels: pull-request-available  (was: )

> Move HBase token obtain functionality into HBaseDelegationTokenProvider
> ---
>
> Key: FLINK-25909
> URL: https://issues.apache.org/jira/browse/FLINK-25909
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.15.0
>Reporter: Gabor Somogyi
>Assignee: jackwangcs
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] JackWangCS commented on a diff in pull request #20206: [FLINK-25909][runtime][security] Add HBaseDelegationTokenProvider

2022-07-07 Thread GitBox


JackWangCS commented on code in PR #20206:
URL: https://github.com/apache/flink/pull/20206#discussion_r916381471


##
flink-runtime/src/main/resources/META-INF/services/org.apache.flink.runtime.security.token.DelegationTokenProvider:
##
@@ -14,3 +14,4 @@
 # limitations under the License.
 
 org.apache.flink.runtime.security.token.HadoopFSDelegationTokenProvider
+org.apache.flink.runtime.security.token.HBaseDelegationTokenProvider

Review Comment:
   The HBaseDelegationTokenProvider is still placed here for compatibility; we 
can move it to hbase-connector in the future.






[GitHub] [flink] JackWangCS opened a new pull request, #20206: [FLINK-25908][runtime][security] Add HBaseDelegationTokenProvider

2022-07-07 Thread GitBox


JackWangCS opened a new pull request, #20206:
URL: https://github.com/apache/flink/pull/20206

   ## What is the purpose of the change
   
   This PR adds HBaseDelegationTokenProvider to obtain HBase Delegation tokens 
using the new delegation token framework.
   
   
   ## Brief change log
   
 - Add `HBaseDelegationTokenProvider`
 - Add `HBaseDelegationTokenProvider` to the `META-INF` service 
registration
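The `META-INF` registration in the second bullet relies on Java's standard `ServiceLoader` mechanism; a minimal self-contained sketch of how such providers get discovered (the `TokenProvider` interface and class names are illustrative, not Flink's actual classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Illustrative sketch of the ServiceLoader lookup behind a
// META-INF/services registration: at runtime every class name listed under
// META-INF/services/<interface FQN> on the classpath is instantiated and
// returned by the iterator below.
interface TokenProvider {
    String serviceName();
}

public class ProviderDiscovery {

    static List<String> discoverProviders() {
        List<String> names = new ArrayList<>();
        // Without a META-INF/services entry on the classpath this loop finds
        // nothing; with a registration line like the one added in the PR,
        // the provider would appear here.
        for (TokenProvider p : ServiceLoader.load(TokenProvider.class)) {
            names.add(p.serviceName());
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(discoverProviders()); // [] when nothing is registered
    }
}
```

This is why adding the single line to the services file is sufficient for the runtime to pick up the new provider.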
   
   
   ## Verifying this change
 - This provider's functionality is covered by some existing tests, but we 
don't have dedicated tests for HBase delegation token obtaining. We may need 
to add new end-to-end tests to cover it.
   

   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? (JavaDocs / not documented)
   





[GitHub] [flink] bzhaoopenstack commented on pull request #20193: [FLINK-28433][connector/jdbc]Add mariadb jdbc connection validation

2022-07-07 Thread GitBox


bzhaoopenstack commented on PR #20193:
URL: https://github.com/apache/flink/pull/20193#issuecomment-1178417684

   > @bzhaoopenstack Thanks for the PR, but in order to add support for MariaDB 
I would also like to see a test and documentation in place.
   
   Thanks very much for the reply and suggestions. I will add the tests and 
documentation. Once I'm finished, I would appreciate your review and advice. 
Thank you. ;-)





[jira] [Commented] (FLINK-10206) Add hbase sink connector

2022-07-07 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564023#comment-17564023
 ] 

Jing Ge commented on FLINK-10206:
-

There is a new HBaseDynamicTableSink since the last comment. Could we close 
this issue? Is there any specific requirement that HBaseDynamicTableSink could 
not cover?

> Add hbase sink connector
> 
>
> Key: FLINK-10206
> URL: https://issues.apache.org/jira/browse/FLINK-10206
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase
>Affects Versions: 1.6.0
>Reporter: Igloo
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Now, there is a hbase source connector for batch operation. 
>  
> In some cases, we need to save Streaming/Batch results into hbase.  Just like 
> cassandra streaming/Batch sink implementations. 
>  
> Design documentation: 
> [https://docs.google.com/document/d/1of0cYd73CtKGPt-UL3WVFTTBsVEre-TNRzoAt5u2PdQ/edit?usp=sharing]
>  





[GitHub] [flink] flinkbot commented on pull request #20205: [FLINK-28250][Connector/Kafka] Fix exactly-once Kafka Sink out of memory

2022-07-07 Thread GitBox


flinkbot commented on PR #20205:
URL: https://github.com/apache/flink/pull/20205#issuecomment-1178341140

   
   ## CI report:
   
   * 7304f9dd3c2c65dd985f5de831496b091b58b556 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-28250) exactly-once sink kafka cause out of memory

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28250:
---
Labels: pull-request-available  (was: )

> exactly-once sink kafka cause out of memory
> ---
>
> Key: FLINK-28250
> URL: https://issues.apache.org/jira/browse/FLINK-28250
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
> Environment: *flink version: flink-1.15.0*
> *tm: 8* parallelism, 1 slot, 2g
> centos7
>Reporter: jinshuangxian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2022-06-25-22-07-35-686.png, 
> image-2022-06-25-22-07-54-649.png, image-2022-06-25-22-08-04-891.png, 
> image-2022-06-25-22-08-15-024.png
>
>
> *my sql code:*
> CREATE TABLE sourceTable (
> data bytes
> )WITH(
> 'connector'='kafka',
> 'topic'='topic1',
> 'properties.bootstrap.servers'='host1',
> 'properties.group.id'='gorup1',
> 'scan.startup.mode'='latest-offset',
> 'format'='raw'
> );
>  
> CREATE TABLE sinkTable (
> data bytes
> )
> WITH (
> 'connector'='kafka',
> 'topic'='topic2',
> 'properties.bootstrap.servers'='host2',
> 'properties.transaction.timeout.ms'='3',
> 'sink.semantic'='exactly-once',
> 'sink.transactional-id-prefix'='xx-kafka-sink-a',
> 'format'='raw'
> );
> insert into sinkTable
> select data
> from sourceTable;
>  
> *Problem:*
> After the program has run online for about half an hour, frequent full GCs 
> appear.
>  
> {*}Troubleshooting{*}:
> I used 'jmap -dump:live,format=b,file=/tmp/dump2.hprof' to dump the 
> problematic TM's memory and found 115 FlinkKafkaInternalProducers, which is 
> not normal.
> !image-2022-06-25-22-07-54-649.png!!image-2022-06-25-22-07-35-686.png!
> After reading the KafkaCommitter code, I found that the producer is not 
> recycled after a successful commit; it is only recycled in exceptional cases.
> !image-2022-06-25-22-08-04-891.png!
> I added a few lines of code. After testing online, the program works 
> normally and the OOM problem is solved.
> !image-2022-06-25-22-08-15-024.png!





[GitHub] [flink] charles-tan opened a new pull request, #20205: [FLINK-28250][Connector/Kafka] Fix exactly-once Kafka Sink out of memory

2022-07-07 Thread GitBox


charles-tan opened a new pull request, #20205:
URL: https://github.com/apache/flink/pull/20205

   
   
   ## What is the purpose of the change
   
   The goal of this PR is to address a memory leak where the Kafka producer 
does not get cleaned up after a Kafka commit succeeds.
   
   
   ## Brief change log
   
   
[7304f9d](https://github.com/charles-tan/flink/commit/7304f9dd3c2c65dd985f5de831496b091b58b556):
 recycles the producer after a successful Kafka commit by adding a finally 
block to the existing try/catch.
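The change log above describes a standard try/finally resource-recycling fix: release the producer on the success path as well as the failure path. A minimal illustrative sketch (the pool and method names are invented here, not the actual `KafkaCommitter` API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the leak and the fix: before the change the producer was only
// returned to its pool in the exception branches, so every *successful*
// commit leaked one producer instance. A finally block releases it on both paths.
public class CommitterSketch {

    public static final Deque<Object> producerPool = new ArrayDeque<>();

    public static void commit(Object producer, boolean simulateFailure) {
        try {
            if (simulateFailure) {
                throw new RuntimeException("commit failed");
            }
            // commit succeeded; previously nothing recycled the producer here
        } catch (RuntimeException e) {
            // error handling (retry / signal the failure) would go here
        } finally {
            producerPool.push(producer); // recycled on both paths
        }
    }

    public static void main(String[] args) {
        commit(new Object(), false);  // success path
        commit(new Object(), true);   // failure path
        System.out.println(producerPool.size()); // 2 -> no leaked producers
    }
}
```

With the recycling only in the catch branch, the success-path producers accumulate exactly as the heap dump in the linked issue showed.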
   
   
   ## Verifying this change
   
   Manually verified the change by running a 1-node cluster with 1 JobManager 
and 1 TaskManager, running the following job for several hours without any 
memory issues:
   ```
   import org.apache.flink.api.common.eventtime.WatermarkStrategy;
   import org.apache.flink.api.common.serialization.SimpleStringSchema;
   import org.apache.flink.connector.base.DeliveryGuarantee;
   import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
   import org.apache.flink.connector.kafka.sink.KafkaSink;
   import org.apache.flink.connector.kafka.source.KafkaSource;
   import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
   import org.apache.flink.streaming.api.CheckpointingMode;
   import org.apache.flink.streaming.api.datastream.DataStream;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

   import java.util.Properties;

   public class FlinkTest {
       public static void main(String[] args) throws Exception {
           StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
           KafkaSource<String> source = KafkaSource.<String>builder()
                   .setBootstrapServers("localhost:9092")
                   .setTopics("input")
                   .setGroupId("my-group" + System.currentTimeMillis())
                   .setStartingOffsets(OffsetsInitializer.earliest())
                   .setValueOnlyDeserializer(new SimpleStringSchema())
                   .build();
           DataStream<String> sourceStream = env.fromSource(
                   source,
                   WatermarkStrategy.forMonotonousTimestamps(), "Kafka Source");
           KafkaRecordSerializationSchema<String> serializer = KafkaRecordSerializationSchema.builder()
                   .setValueSerializationSchema(new SimpleStringSchema())
                   .setTopic("output")
                   .build();
           Properties sinkProps = new Properties();
           sinkProps.put("transaction.timeout.ms", 18);
           KafkaSink<String> sink = KafkaSink.<String>builder()
                   .setBootstrapServers("localhost:9092")
                   .setKafkaProducerConfig(sinkProps)
                   .setRecordSerializer(serializer)
                   .setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                   .build();
           sourceStream.sinkTo(sink);
           env.enableCheckpointing(1);
           env.getCheckpointConfig().setCheckpointTimeout(6);
           env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
           env.getCheckpointConfig().setMinPauseBetweenCheckpoints(500L);
           env.getCheckpointConfig().setTolerableCheckpointFailureNumber(1);
           env.execute("tester");
       }
   }
   ```
   Took heap dumps using the `jmap` tool and observed that the TaskManager 
memory was stable. Link to the relevant mailing list thread: 
https://lists.apache.org/thread/c86cd8qyqb6qxy639hkzbozkwv2qxk84.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   





[jira] [Updated] (FLINK-10740) FLIP-27: Refactor Source Interface

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10740:
---
Labels: auto-deprioritized-major auto-unassigned pull-request-available 
stale-minor usability  (was: auto-deprioritized-major auto-unassigned 
pull-request-available usability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> FLIP-27: Refactor Source Interface
> --
>
> Key: FLINK-10740
> URL: https://issues.apache.org/jira/browse/FLINK-10740
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataSet, API / DataStream
>Reporter: Aljoscha Krettek
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, stale-minor, usability
>
> Please see the FLIP for any details: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface





[jira] [Updated] (FLINK-10206) Add hbase sink connector

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10206:
---
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add hbase sink connector
> 
>
> Key: FLINK-10206
> URL: https://issues.apache.org/jira/browse/FLINK-10206
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase
>Affects Versions: 1.6.0
>Reporter: Igloo
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Now, there is a hbase source connector for batch operation. 
>  
> In some cases, we need to save Streaming/Batch results into hbase.  Just like 
> cassandra streaming/Batch sink implementations. 
>  
> Design documentation: 
> [https://docs.google.com/document/d/1of0cYd73CtKGPt-UL3WVFTTBsVEre-TNRzoAt5u2PdQ/edit?usp=sharing]
>  





[jira] [Updated] (FLINK-27891) Add ARRAY_APPEND and ARRAY_PREPEND supported in SQL & Table API

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-27891:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Add ARRAY_APPEND and ARRAY_PREPEND supported in SQL & Table API
> ---
>
> Key: FLINK-27891
> URL: https://issues.apache.org/jira/browse/FLINK-27891
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> {{ARRAY_APPEND}} - adds an element to the end of the array and returns the 
> resulting array
> {{ARRAY_PREPEND}} - adds an element to the beginning of the array and returns 
> the resulting array
> Syntax:
> {code:sql}
> ARRAY_APPEND(  ,  );
> ARRAY_PREPEND(  ,  );
> {code}
> Arguments:
> array: An ARRAY to which to add a new element.
> new_element: A new element.
> Returns:
> An array. If array is NULL, the result is NULL.
> Examples:
> {code:sql}
> SELECT array_append(array[1, 2, 3], 4);
> -- array[1, 2, 3, 4]
> select array_append(cast(null as int array), 2);
> -- null
> SELECT array_prepend(array[1, 2, 3], 4);
> -- array[4, 1, 2, 3]
> SELECT array_prepend(array[1, 2, 3], null);
> -- array[null, 1, 2, 3]
> {code}
> See more:
> {{ARRAY_APPEND}}
>Snowflake 
> https://docs.snowflake.com/en/sql-reference/functions/array_append.html
>PostgreSQL 
> https://www.postgresql.org/docs/14/functions-array.html#ARRAY-FUNCTIONS-TABLE
> {{ARRAY_PREPEND}}
>Snowflake 
> https://docs.snowflake.com/en/sql-reference/functions/array_prepend.html
>PostgreSQL 
> https://www.postgresql.org/docs/14/functions-array.html#ARRAY-FUNCTIONS-TABLE





[jira] [Updated] (FLINK-9219) Add support for OpenGIS features in Table & SQL API

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-9219:
--
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add support for OpenGIS features in Table & SQL API
> ---
>
> Key: FLINK-9219
> URL: https://issues.apache.org/jira/browse/FLINK-9219
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> CALCITE-1968 added core functionality for handling 
> spatial/geographical/geometry data. It should not be too hard to expose these 
> features also in Flink's Table & SQL API. We would need a new {{GEOMETRY}} 
> data type and connect the function APIs.
> Right now the following functions are supported by Calcite:
> {code}
> ST_AsText, ST_AsWKT, ST_Boundary, ST_Buffer, ST_Contains, 
> ST_ContainsProperly, ST_Crosses, ST_Disjoint, ST_Distance, ST_DWithin, 
> ST_Envelope, ST_EnvelopesIntersect, ST_Equals, ST_GeometryType, 
> ST_GeometryTypeCode, ST_GeomFromText, ST_Intersects, ST_Is3D, 
> ST_LineFromText, ST_MakeLine, ST_MakePoint, ST_MLineFromText, 
> ST_MPointFromText, ST_MPolyFromText, ST_Overlaps, ST_Point, ST_PointFromText, 
> ST_PolyFromText, ST_SetSRID, ST_Touches, ST_Transform, ST_Union, ST_Within, 
> ST_Z
> {code}





[jira] [Updated] (FLINK-11463) Rework end-to-end tests in Java

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11463:
---
Labels: auto-deprioritized-major auto-unassigned pull-request-available 
stale-minor  (was: auto-deprioritized-major auto-unassigned 
pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Rework end-to-end tests in Java
> ---
>
> Key: FLINK-11463
> URL: https://issues.apache.org/jira/browse/FLINK-11463
> Project: Flink
>  Issue Type: New Feature
>  Components: Test Infrastructure
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, stale-minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This is the (long-term) umbrella issue for reworking our end-to-end tests in 
> Java on top of a new set of utilities.
> Below are some areas where problems have been identified that I want to 
> address with a prototype soon. This prototype primarily aims to introduce 
> certain patterns to be built upon in the future.
> h2. Environments
> h4. Problem
> Our current tests work directly against flink-dist and set up local clusters 
> with/without HA. Similar issues apply to Kafka and Elasticsearch.
> This prevents us from re-using tests for other environments (Yarn, Docker) 
> and distributed settings.
> We also frequently have issues with cleaning up resources as it is the 
> responsibility of the test itself.
> h4. Proposal
> Introduce a common interface for a given resource type (i.e. Flink, Kafka) 
> that tests will work against.
> These resources should be implemented as jUnit external resources to allow 
> reasonable life-cycle management.
> Tests get access to an instance of this resource through a factory method.
> Each resource implementation has a dedicated factory that is loaded with a 
> {{ServiceLoader}}. Factories evaluate system-properties to determine whether 
> the implementation should be loaded, and then optionally configure the 
> resource.
> Example:
> {code}
> public interface FlinkResource {
>   ... common methods ...
> /**
>* Returns the configured FlinkResource implementation, or a {@link 
> LocalStandaloneFlinkResource} if none is configured.
>*
>*@return configured FlinkResource, or {@link 
> LocalStandaloneFlinkResource} if none is configured
>*/
>   static FlinkResource get() {
>   // load factories
>   // evaluate system properties
>   // return instance
>   }
> }
> public interface FlinkResourceFactory {
>   /**
>* Returns a {@link FlinkResource} instance. If the instance could not 
> be instantiated (for example, because a
>* mandatory parameter was missing), then an empty {@link Optional} 
> should be returned.
>*
>* @return FlinkResource instance, or an empty Optional if the instance 
> could not be instantiated
>*/
>   Optional create();
> }
> {code}
> As example, running {{mvn verify -De2e.flink.mode=localStandalone}} could 
> load a FlinkResource that sets up a local standalone cluster, while for {{mvn 
> verify -De2e.flink.mode=distributedStandalone -De2e.flink.hosts=...}} it 
> would connect to the given host and setup a distributed cluster.
> Tests are not _required_ to work against the common interface, and may be 
> hard-wired to run against specific implementations. Simply put, the resource 
> implementations should be public.
> h4. Future considerations
> The factory method may be extended to allow tests to specify a set of 
> conditions that must be fulfilled, for example HA to be enabled. If this 
> requirement cannot be fulfilled the test should be skipped.
> h2. Split Management
> h4. Problem
> End-to-end tests are run in separate {{cron--e2e}} branches. To 
> accommodate the Travis time limits we run a total of 6 jobs each covering a 
> subset of the tests.
> These so-called splits are currently managed in the respective branches, and 
> not on master/release branches.
> This is a rather hidden detail that not everyone is aware of, nor is it 
> easily discoverable. This has resulted several times in newly added tests not 
> actually being run. Furthermore, if the arguments for tests are modified 
> these changes have to be replicated to each branch.
> h4. Proposal
> Use jUnit Categories to assign each test explicitly to one of 

[jira] [Updated] (FLINK-12130) Apply command line options to configuration before installing security modules

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-12130:
---
Labels: auto-deprioritized-major auto-unassigned pull-request-available 
stale-minor  (was: auto-deprioritized-major auto-unassigned 
pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Apply command line options to configuration before installing security modules
> --
>
> Key: FLINK-12130
> URL: https://issues.apache.org/jira/browse/FLINK-12130
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Reporter: jiasheng55
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, stale-minor
>
> Currently, if the user configures Kerberos credentials through the command 
> line, it won't work.
> {code:java}
> // flink run -m yarn-cluster -yD 
> security.kerberos.login.keytab=/path/to/keytab -yD 
> security.kerberos.login.principal=xxx /path/to/test.jar
> {code}
> The above command would cause a security failure if you do not have a ticket 
> cache from kinit.
> Maybe we could call 
> _org.apache.flink.client.cli.AbstractCustomCommandLine#applyCommandLineOptionsToConfiguration_
>   before _SecurityUtils.install(new 
> SecurityConfiguration(cli.configuration));_
> Here is a demo patch: 
> [https://github.com/jiasheng55/flink/commit/ef6880dba8a1f36849f5d1bb308405c421b29986]
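To make the ordering issue concrete, here is a toy, self-contained Java sketch. It is not Flink's actual API: the config key is taken from the report, but `installSecurity` and the map-based configuration are made-up stand-ins. It only shows why applying the command-line options must happen before the security install step.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of why ordering matters: if the "security install" step
// reads the configuration before the -yD overrides are applied, the keytab
// setting is invisible to it. Names are illustrative, not Flink's API.
public class ConfigOrderSketch {
    // Stand-in for SecurityUtils.install: snapshot what the security module
    // would see at install time.
    static String installSecurity(Map<String, String> conf) {
        return conf.getOrDefault("security.kerberos.login.keytab", "<none>");
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();  // flink-conf.yaml defaults
        Map<String, String> cli = new HashMap<>();   // -yD overrides
        cli.put("security.kerberos.login.keytab", "/path/to/keytab");

        // Wrong order: install first, then apply CLI options -> keytab missed.
        String before = installSecurity(conf);
        conf.putAll(cli);
        // Proposed order: apply CLI options first, then install -> keytab seen.
        String after = installSecurity(conf);

        System.out.println("install before applying CLI options saw: " + before);
        System.out.println("install after applying CLI options saw:  " + after);
    }
}
```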



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-13213) MinIdleStateRetentionTime/MaxIdleStateRetentionTime in TableConfig will be removed after call toAppendStream/toRetractStream without QueryConfig parameters

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13213:
---
Labels: auto-deprioritized-major auto-unassigned pull-request-available 
stale-minor  (was: auto-deprioritized-major auto-unassigned 
pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> MinIdleStateRetentionTime/MaxIdleStateRetentionTime in TableConfig will be 
> removed after call toAppendStream/toRetractStream without QueryConfig 
> parameters
> ---
>
> Key: FLINK-13213
> URL: https://issues.apache.org/jira/browse/FLINK-13213
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Jing Zhang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, stale-minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are two `toAppendStream` methods in `StreamTableEnvironment`:
> 1.  def toAppendStream[T: TypeInformation](table: Table): DataStream[T]
> 2.   def toAppendStream[T: TypeInformation](table: Table, queryConfig: 
> StreamQueryConfig): DataStream[T]
> After converting a `Table` to a `DataStream` by calling the first method or 
> toRetractStream, the MinIdleStateRetentionTime/MaxIdleStateRetentionTime in 
> TableConfig will be removed.





[jira] [Updated] (FLINK-13301) Some PlannerExpression resultType is not consistent with Calcite Type inference

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13301:
---
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Some PlannerExpression resultType is not consistent with Calcite Type 
> inference
> ---
>
> Key: FLINK-13301
> URL: https://issues.apache.org/jira/browse/FLINK-13301
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jing Zhang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> Some PlannerExpression result types are not consistent with Calcite's type 
> inference. The problem can be reproduced by running the following example: 
> {code:java}
> // prepare source Data
> val testData = new mutable.MutableList[(Int)]
> testData.+=((3))
> val t = env.fromCollection(testData).toTable(tEnv).as('a)
> // register a TableSink
> val fieldNames = Array("f0")
> val fieldTypes: Array[TypeInformation[_]] = Array(Types.INT())
> //val fieldTypes: Array[TypeInformation[_]] = Array(Types.LONG())
> val sink = new MemoryTableSourceSinkUtil.UnsafeMemoryAppendTableSink
> tEnv.registerTableSink("targetTable", sink.configure(fieldNames, 
> fieldTypes))
> 
> t.select('a.floor()).insertInto("targetTable")
> env.execute()
> {code}
> The cause is that the result type of `floor` is LONG_TYPE_INFO, while 
> Calcite's `SqlFloorFunction` infers the return type to be the type of the 
> first argument (INT in the above case).
> If `fieldTypes` is Array(Types.INT()), the following error will be 
> thrown in the compile phase.
> {code:java}
> org.apache.flink.table.api.ValidationException: Field types of query result 
> and registered TableSink [targetTable] do not match.
> Query result schema: [_c0: Long]
> TableSink schema:[f0: Integer]
>   at 
> org.apache.flink.table.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:59)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:158)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:157)
>   at scala.Option.map(Option.scala:146)
>   at 
> org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:157)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:129)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:129)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> {code}
> And if `fieldTypes` is changed to Array(Types.LONG()), the other error will 
> be thrown at runtime.
> {code:java}
> org.apache.flink.table.api.TableException: Result field does not match 
> requested type. Requested: Long; Actual: Integer
>   at 
> org.apache.flink.table.planner.Conversions$$anonfun$generateRowConverterFunction$2.apply(Conversions.scala:103)
>   at 
> org.apache.flink.table.planner.Conversions$$anonfun$generateRowConverterFunction$2.apply(Conversions.scala:98)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> org.apache.flink.table.planner.Conversions$.generateRowConverterFunction(Conversions.scala:98)
>   at 
> org.apache.flink.table.planner.DataStreamConversions$.getConversionMapper(DataStreamConversions.scala:135)
>   at 
> org.apache.flink.table.planner.DataStreamConversions$.convert(DataStreamConversions.scala:91)
> {code}
> {color:red}The above inconsistency also exists in `Floor`, `Ceil`, `Mod`, 
> and so on.{color}





[jira] [Updated] (FLINK-11934) Remove the deprecate methods in TableEnvironment

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11934:
---
Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
stale-assigned  (was: auto-deprioritized-major auto-deprioritized-minor 
auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Remove the deprecate methods in TableEnvironment
> 
>
> Key: FLINK-11934
> URL: https://issues.apache.org/jira/browse/FLINK-11934
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Hequn Cheng
>Assignee: Daisy Tsang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, stale-assigned
>
> There are several {{getTableEnvironment()}} methods in {{TableEnvironment}} 
> that were deprecated during 1.8. As release-1.8 has been cut, we can 
> remove these deprecated methods now.





[jira] [Updated] (FLINK-13218) '*.count not supported in TableApi query

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13218:
---
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> '*.count not supported in TableApi query
> 
>
> Key: FLINK-13218
> URL: https://issues.apache.org/jira/browse/FLINK-13218
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> The following query is not supported yet:
> {code:java}
> val t = StreamTestData.get3TupleDataStream(env).toTable(tEnv, 'a, 'b, 'c)
>   .groupBy('b)
>   .select('b, 'a.sum, '*.count)
> {code}
> The following exception will be thrown.
> {code:java}
> org.apache.flink.table.api.ValidationException: Cannot resolve field [*], 
> input field list:[a, b, c].
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.failForField(ReferenceResolverRule.java:80)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$null$4(ReferenceResolverRule.java:75)
>   at java.util.Optional.orElseThrow(Optional.java:290)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$null$5(ReferenceResolverRule.java:75)
>   at java.util.Optional.orElseGet(Optional.java:267)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$visit$6(ReferenceResolverRule.java:74)
>   at java.util.Optional.orElseGet(Optional.java:267)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.visit(ReferenceResolverRule.java:71)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.visit(ReferenceResolverRule.java:51)
>   at 
> ...
> {code}





[jira] [Updated] (FLINK-13247) Implement external shuffle service for YARN

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13247:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Implement external shuffle service for YARN
> ---
>
> Key: FLINK-13247
> URL: https://issues.apache.org/jira/browse/FLINK-13247
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Network
>Reporter: MalcolmSanders
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Flink batch job users could achieve better cluster utilization and job 
> throughput through an external shuffle service, because the producers of 
> intermediate result partitions can be released once the partitions have been 
> persisted on disk. In 
> [FLINK-10653|https://issues.apache.org/jira/browse/FLINK-10653], [~zjwang] 
> introduced a pluggable shuffle manager architecture which abstracts the 
> process of data transfer between stages out of the Flink runtime as a 
> shuffle service. I propose a YARN implementation of the Flink external 
> shuffle service, since YARN is widely used in various companies.
> The basic idea is as follows:
> (1) Producers write intermediate result partitions to local disks assigned 
> by the NodeManager;
> (2) YARN shuffle servers, deployed on each NodeManager as an auxiliary 
> service, are informed of intermediate result partition descriptions by the 
> producers;
> (3) Consumers fetch intermediate result partitions from the YARN shuffle 
> servers.
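The three-step flow above can be sketched in plain Java, with in-memory maps standing in for the NodeManager-local disk and the auxiliary shuffle server's registry. All names here are illustrative; none of this is Flink's or YARN's actual shuffle API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the proposed flow: a producer persists a partition, registers
// its descriptor with the node-local shuffle server, and a consumer later
// fetches it from that server instead of from the (released) producer.
public class YarnShuffleSketch {
    // Stand-in for the NodeManager-assigned local disk.
    static final Map<String, byte[]> localDisk = new HashMap<>();
    // Stand-in for the auxiliary shuffle server's partition registry.
    static final Map<String, String> registry = new HashMap<>();

    // Steps (1) + (2): write the partition and announce its location.
    static void produce(String partitionId, byte[] data) {
        String path = "/local/" + partitionId;
        localDisk.put(path, data);
        registry.put(partitionId, path);
        // After this point the producer's slot could be released.
    }

    // Step (3): the consumer fetches via the shuffle server.
    static byte[] fetch(String partitionId) {
        String path = registry.get(partitionId);
        if (path == null) {
            throw new IllegalStateException("unknown partition " + partitionId);
        }
        return localDisk.get(path);
    }

    public static void main(String[] args) {
        produce("p-0", new byte[]{1, 2, 3});
        System.out.println("fetched " + fetch("p-0").length + " bytes");
    }
}
```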





[jira] [Updated] (FLINK-14527) Add integration tests for PostgreSQL and MySQL dialects in Flink JDBC module

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14527:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add integration tests for PostgreSQL and MySQL dialects in Flink JDBC module
> 
>
> Key: FLINK-14527
> URL: https://issues.apache.org/jira/browse/FLINK-14527
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Jark Wu
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Currently, we already support the PostgreSQL, MySQL, and Derby dialects in 
> flink-jdbc as sink and source. However, we only have integration tests for 
> Derby. 
> We should add integration tests for the PostgreSQL and MySQL dialects too. 
> Maybe we can use the JUnit {{Parameterized}} feature to avoid duplicated 
> testing code.
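As a rough illustration of the {{Parameterized}} idea (one test body driven by a list of dialect parameters), here is a plain-Java sketch. The dialect names and quoting rules are illustrative stand-ins, not flink-jdbc's dialect API, and the loop in `main` plays the role that JUnit's parameterized runner would automate.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the Parameterized idea: one shared test body, many dialects,
// instead of a copy of the test per dialect. Dialect names and quoting
// templates are illustrative only.
public class DialectTestSketch {
    // Each parameter set: {dialect name, identifier-quoting template}.
    static final List<String[]> DIALECTS = Arrays.asList(
        new String[]{"derby",    "\"%s\""},  // Derby uses double quotes
        new String[]{"mysql",    "`%s`"},    // MySQL uses backticks
        new String[]{"postgres", "\"%s\""}); // PostgreSQL uses double quotes

    // The behavior under test, shared by all dialects.
    static String quoteIdentifier(String dialect, String id) {
        for (String[] d : DIALECTS) {
            if (d[0].equals(dialect)) {
                return String.format(d[1], id);
            }
        }
        throw new IllegalArgumentException("unknown dialect: " + dialect);
    }

    public static void main(String[] args) {
        // Iterate over the parameter sets; JUnit's @Parameterized runner
        // would instantiate and run the test class once per entry instead.
        for (String[] d : DIALECTS) {
            System.out.println(d[0] + " -> " + quoteIdentifier(d[0], "user"));
        }
    }
}
```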





[jira] [Updated] (FLINK-17231) Deprecate possible implicit fallback from LB to NodePort When retrieving Endpoint of LB Service

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-17231:
---
Labels: auto-unassigned pull-request-available stale-assigned  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Deprecate possible implicit fallback from LB to NodePort When retrieving 
> Endpoint of LB Service
> ---
>
> Key: FLINK-17231
> URL: https://issues.apache.org/jira/browse/FLINK-17231
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: liuzhuo
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-assigned
>
> Currently, if people use the LB Service, when it comes to retrieving the 
> Endpoint for the external Service, we immediately fall back to returning the 
> NodePort address and port if we fail to get the LB address in a single try. 
> This kind of toleration can confuse users, since the NodePort address/port 
> may be inaccessible for reasons such as network security policy, and users 
> may not know what really happens behind the scenes.
> This ticket proposes to always return the LB address/port and otherwise 
> throw an Exception indicating that the LB is not ready or abnormal.





[jira] [Updated] (FLINK-13503) Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null

2022-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13503:
---
Labels: auto-deprioritized-major auto-unassigned pull-request-available 
stale-minor  (was: auto-deprioritized-major auto-unassigned 
pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add contract in `LookupableTableSource` to specify the behavior when 
> lookupKeys contains null
> -
>
> Key: FLINK-13503
> URL: https://issues.apache.org/jira/browse/FLINK-13503
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC, Table SQL / API
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Jing Zhang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, stale-minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think we should add a contract in `LookupableTableSource` that specifies 
> the expected behavior when the lookup keys contain a null value.
> For example, suppose one input record of the eval method is (null, 1), 
> meaning a lookup on columns (a, b). There are at least three possibilities 
> here:
>   * ignore the null value, that is, in the above example, look up only `b = 1`
>   * treat null as `is null`, that is, in the above example, look up `a is 
> null and b = 1`
>   * return empty records, that is, in the above example, look up `a = 
> null and b = 1`
> In fact, the current code shows different behaviors. 
> For example, in the JDBC connector,
> the query template in `JdbcLookUpFunction` looks like:
> SELECT c, d, e, f from T where a = ? and b = ?
> If (null, 1) is passed to the `eval` method, it generates the following query:
> SELECT c, d, e, f from T where a = null and b = 1
> which always outputs empty records.
> BTW, is this behavior reasonable?
> The `InMemoryLookupableTableSource` behaves like point 2 in the above 
> list,
> and some private connectors in Blink behave like point 1.
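The three candidate contracts can be made concrete with a small, self-contained Java sketch over an in-memory table. The method names are hypothetical and this is not the `LookupableTableSource` API; it only demonstrates how differently the same key (null, 1) behaves under each contract.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

// Looks up the key (a=null, b=1) against columns (a, b) under the three
// candidate null-handling contracts. Pure-Java illustration only.
public class NullKeySemantics {
    static final List<Object[]> TABLE = Arrays.asList(
        new Object[]{null, 1}, new Object[]{2, 1}, new Object[]{3, 4});

    // Contract 1: ignore null keys, i.e. match only on b = 1.
    static List<Object[]> ignoreNull(Integer a, Integer b) {
        List<Object[]> out = new ArrayList<>();
        for (Object[] row : TABLE) {
            if ((a == null || a.equals(row[0]))
                    && (b == null || b.equals(row[1]))) {
                out.add(row);
            }
        }
        return out;
    }

    // Contract 2: treat a null key as "IS NULL", i.e. a IS NULL AND b = 1.
    static List<Object[]> nullAsIsNull(Integer a, Integer b) {
        List<Object[]> out = new ArrayList<>();
        for (Object[] row : TABLE) {
            if (Objects.equals(row[0], a) && Objects.equals(row[1], b)) {
                out.add(row);
            }
        }
        return out;
    }

    // Contract 3: SQL equality, where "a = NULL" is never true, so the result
    // is always empty (what "WHERE a = ? AND b = ?" produces via JDBC).
    static List<Object[]> sqlEquality(Integer a, Integer b) {
        List<Object[]> out = new ArrayList<>();
        for (Object[] row : TABLE) {
            boolean matchA = a != null && a.equals(row[0]); // NULL = x is unknown
            boolean matchB = b != null && b.equals(row[1]);
            if (matchA && matchB) {
                out.add(row);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println("ignoreNull:   " + ignoreNull(null, 1).size() + " rows");
        System.out.println("nullAsIsNull: " + nullAsIsNull(null, 1).size() + " rows");
        System.out.println("sqlEquality:  " + sqlEquality(null, 1).size() + " rows");
    }
}
```

Over the sample table {(null, 1), (2, 1), (3, 4)}, the three contracts return 2, 1, and 0 rows respectively, which is exactly the divergence the ticket describes.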




