Re: [PR] [FLINK-34401][docs-zh] Translate "Flame Graphs" page into Chinese [flink]

2024-02-20 Thread via GitHub


Zakelly commented on code in PR #24279:
URL: https://github.com/apache/flink/pull/24279#discussion_r1497039627


##
docs/content.zh/docs/ops/debugging/flame_graphs.md:
##
@@ -27,36 +27,37 @@ under the License.
 
 # Flame Graphs

Review Comment:
   I suggest just translating it directly as 火焰图; that also saves the trouble of dealing with all the spaces.
   ```suggestion
   # 火焰图 Flame Graphs
   ```



##
docs/content.zh/docs/ops/debugging/flame_graphs.md:
##
@@ -27,36 +27,37 @@ under the License.
 
 # Flame Graphs
 
-[Flame Graphs](http://www.brendangregg.com/flamegraphs.html) are a 
visualization that effectively surfaces answers to questions like:
-- Which methods are currently consuming CPU resources?
-- How does consumption by one method compare to the others?
-- Which series of calls on the stack led to executing a particular method?
+[Flame Graphs](http://www.brendangregg.com/flamegraphs.html) 
是一种有效的可视化工具,可以回答以下问题:
+
+- 目前哪些方法正在消耗CPU资源?
+- 一个方法的消耗与其他方法相比如何?
+- 哪一系列的堆栈调用导致了特定方法的执行?
 
 {{< img src="/fig/flame_graph_on_cpu.png" class="img-fluid" width="90%" >}}
 {{% center %}}
 Flame Graph
 {{% /center %}}
 
-Flame Graphs are constructed by sampling stack traces a number of times. Each 
method call is presented by a bar, where the length of the bar is proportional 
to the number of times it is present in the samples.
+Flame Graphs是通过多次采样堆栈跟踪来构建的。每个方法调用都由一个条形图表示,其中条形图的长度与其在样本中出现的次数成比例。
 
-Starting with Flink 1.13, Flame Graphs are natively supported in Flink. In 
order to produce a Flame Graph, navigate to the job graph of a running job, 
select an operator of interest and in the menu to the right click on the Flame 
Graph tab:  
+从Flink 1.13版本开始,Flink原生支持Flame Graphs。要生成一个Flame 
Graph,请导航到正在运行的作业的作业图,选择感兴趣的算子,并在右侧菜单中点击“Flame Graph”选项卡: 

Review Comment:
   ```suggestion
   从 Flink 1.13 版本开始,Flink 原生支持火焰图。要生成一个火焰图,请导航到正在运行的作业图,选择感兴趣的算子,并在右侧菜单中点击 "Flame Graph" 选项卡: 
   ```



##
docs/content.zh/docs/ops/debugging/flame_graphs.md:
##
@@ -65,26 +66,25 @@ The Off-CPU Flame Graph visualizes blocking calls found in 
the samples. A distin
 Off-CPU Flame Graph
 {{% /center %}}
 
-Mixed mode Flame Graphs are constructed from stack traces of threads in all 
possible states.
+混合模式的 Flame Graphs 是由处于所有可能状态的线程的堆栈跟踪构建而成。
 
 {{< img src="/fig/flame_graph_mixed.png" class="img-fluid" width="90%" >}}
 {{% center %}}
-Flame Graph in Mixed Mode
+混合模式的 Flame Graph 
 {{% /center %}}
 
-##  Sampling process
+##  采样过程
 
-The collection of stack traces is done purely within the JVM, so only method 
calls within the Java runtime are visible (no system calls).
+堆栈跟踪的收集纯粹在JVM内部进行,因此只能看到Java运行时内的方法调用(没有系统调用)。
 
-Flame Graph construction is performed at the level of an individual 
[operator]({{< ref "docs/concepts/glossary" >}}#operator) by default,
-i.e. all [task]({{< ref "docs/concepts/glossary" >}}#task) threads of that 
operator are sampled in parallel and their stack traces are combined.
-If a method call consumes 100% of the resources in one of the parallel tasks 
but none in the others,
-the bottleneck might be obscured by being averaged out.
+默认情况下,Flame Graph 的构建是在单个[operator]({{< ref "docs/concepts/glossary" 
>}}#operator)级别上进行的,
+即该算子的所有[task]({{< ref "docs/concepts/glossary" >}}#task)线程并行采样,并将它们的堆栈跟踪合并起来。
+如果某个方法调用在其中一个并行任务中占用了100%的资源,但在其他任务中没有占用,则可能会被平均化而掩盖住瓶颈。
 
-Starting with Flink 1.17, Flame Graph provides "drill down" visualizations to 
the task level.
-Select a subtask of interest, and you can see the flame graph of the 
corresponding subtask.
+从Flink 1.17版本开始,Flame Graph 提供了 "drill down" 可视化到任务级别的功能。

Review Comment:
   ```suggestion
   Flink 从 1.17 版本开始提供了单并发级别火焰图可视化的功能。
   ```
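As an aside for readers of this thread, here is a minimal, self-contained Java sketch of the sampling idea described in the hunk above (this is not Flink's implementation; the sample count and interval are arbitrary): stack traces are captured repeatedly, and each method's occurrence count is what determines the length of its bar in the flame graph.

```java
import java.util.HashMap;
import java.util.Map;

public class StackSampler {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Long> frameCounts = new HashMap<>();
        for (int i = 0; i < 100; i++) { // take 100 samples
            for (StackTraceElement[] stack : Thread.getAllStackTraces().values()) {
                for (StackTraceElement frame : stack) {
                    // count how often each method shows up across all samples
                    frameCounts.merge(
                            frame.getClassName() + "#" + frame.getMethodName(),
                            1L,
                            Long::sum);
                }
            }
            Thread.sleep(10); // sampling interval
        }
        // a method's count is proportional to its bar length in the flame graph
        frameCounts.forEach((frame, count) -> System.out.println(count + "\t" + frame));
    }
}
```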



 
 

Re: [PR] [FLINK-33728] Do not rewatch when KubernetesResourceManagerDriver wat… [flink]

2024-02-20 Thread via GitHub


zhougit86 commented on PR #24163:
URL: https://github.com/apache/flink/pull/24163#issuecomment-1956068736

   > @zhougit86,
   > 
   > Thanks for addressing my comments. The PR LGTM.
   > 
   > The CI failures seem unrelated to this PR. I have just rebased the PR onto 
the latest master branch. Let's see if that resolves the CI failures.
   > 
   > I also have another two minor comments, which I addressed myself in a 
fixup commit. Please check them out.
   > 
   > If you have no objections on my changes, I'll merge this PR once it passes 
the CI.
   
   Thanks for your advice, really appreciate it!





[jira] [Commented] (FLINK-34351) Release Testing: Verify FLINK-33397 Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread Yubin Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819104#comment-17819104
 ] 

Yubin Li commented on FLINK-34351:
--

[~qingyue] I found some typos in the original test scripts; stage 4 should be 
as follows:
{code:java}
CREATE TABLE source_t (
    a INT,
    b BIGINT,
    c STRING
) WITH (
    'connector' = 'datagen',
    'rows-per-second' = '5'
);

CREATE TABLE sink_t (
    b BIGINT PRIMARY KEY NOT ENFORCED,
    cnt BIGINT,
    avg_a DOUBLE,
    min_c STRING
) WITH (
    'connector' = 'print'
);

COMPILE PLAN '/path/to/agg-plan.json' FOR
INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '1s') */
    b,
    COUNT(*) AS cnt,
    AVG(a) FILTER (WHERE a > 1) AS avg_a,
    MIN(c) AS min_c
FROM source_t GROUP BY b;
{code}
After careful testing, the feature works as expected.

!image-2024-02-21-15-24-43-212.png|width=294,height=197!

!image-2024-02-21-15-24-22-289.png|width=418,height=127!

> Release Testing: Verify FLINK-33397 Support Configuring Different State TTLs 
> using SQL Hint
> ---
>
> Key: FLINK-34351
> URL: https://issues.apache.org/jira/browse/FLINK-34351
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: Jane Chan
>Assignee: Yubin Li
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-21-15-24-22-289.png, 
> image-2024-02-21-15-24-43-212.png
>
>






[jira] [Updated] (FLINK-34351) Release Testing: Verify FLINK-33397 Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34351:
-
Attachment: image-2024-02-21-15-24-22-289.png

> Release Testing: Verify FLINK-33397 Support Configuring Different State TTLs 
> using SQL Hint
> ---
>
> Key: FLINK-34351
> URL: https://issues.apache.org/jira/browse/FLINK-34351
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: Jane Chan
>Assignee: Yubin Li
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-21-15-24-22-289.png, 
> image-2024-02-21-15-24-43-212.png
>
>






[jira] [Updated] (FLINK-34351) Release Testing: Verify FLINK-33397 Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34351:
-
Attachment: image-2024-02-21-15-24-43-212.png

> Release Testing: Verify FLINK-33397 Support Configuring Different State TTLs 
> using SQL Hint
> ---
>
> Key: FLINK-34351
> URL: https://issues.apache.org/jira/browse/FLINK-34351
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: Jane Chan
>Assignee: Yubin Li
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-21-15-24-22-289.png, 
> image-2024-02-21-15-24-43-212.png
>
>






Re: [PR] [FLINK-34457][state] Rename options for latency tracking [flink]

2024-02-20 Thread via GitHub


masteryhx commented on code in PR #24323:
URL: https://github.com/apache/flink/pull/24323#discussion_r1496994348


##
flink-core/src/main/java/org/apache/flink/configuration/StateLatencyTrackOptions.java:
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.configuration;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.annotation.docs.Documentation;
+
+/**
+ * A collection of all configuration options that relate to the latency 
tracking for state access.
+ */
+@PublicEvolving
+public class StateLatencyTrackOptions {
+
+@Documentation.Section(Documentation.Sections.STATE_LATENCY_TRACKING)
+public static final ConfigOption<Boolean> LATENCY_TRACK_ENABLED =
+        ConfigOptions.key("state.latency-track.keyed-state-enabled")
+                .booleanType()
+                .defaultValue(false)
+                .withDeprecatedKeys("state.backend.latency-track.keyed-state-enabled")

Review Comment:
   minor suggestion: how about just using `StateBackendOptions.LATENCY_TRACK_ENABLED.key()` to help us track the deprecated options?
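A minimal sketch of what that suggestion would look like in the new class, assuming `StateBackendOptions.LATENCY_TRACK_ENABLED` still carries the old key `state.backend.latency-track.keyed-state-enabled`:

```java
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.StateBackendOptions;

public class StateLatencyTrackOptionsSketch {
    // Referencing the old option's key instead of a hard-coded string keeps
    // the deprecation traceable to the option it replaces.
    public static final ConfigOption<Boolean> LATENCY_TRACK_ENABLED =
            ConfigOptions.key("state.latency-track.keyed-state-enabled")
                    .booleanType()
                    .defaultValue(false)
                    .withDeprecatedKeys(StateBackendOptions.LATENCY_TRACK_ENABLED.key());
}
```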






[jira] [Closed] (FLINK-34452) Release Testing: Verify FLINK-15959 Add min number of slots configuration to limit total number of slots

2024-02-20 Thread xiangyu feng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangyu feng closed FLINK-34452.


> Release Testing: Verify FLINK-15959 Add min number of slots configuration to 
> limit total number of slots
> 
>
> Key: FLINK-34452
> URL: https://issues.apache.org/jira/browse/FLINK-34452
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: xiangyu feng
>Assignee: Weijie Guo
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Test Suggestion:
>  # Prepare configuration options:
>  ** taskmanager.numberOfTaskSlots = 2,
>  ** slotmanager.number-of-slots.min = 7,
>  ** slot.idle.timeout = 5
>  # Setup a Flink session Cluster on Yarn or Native Kubernetes based on 
> following docs:
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/yarn/#starting-a-flink-session-on-yarn]
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#starting-a-flink-session-on-kubernetes]
>  # Verify that 4 TaskManagers will be registered even though no jobs have been 
> submitted
>  # Verify that these TaskManagers will not be destroyed after 50 seconds.
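As a hedged illustration of the suggested setup, here are the same options expressed as a programmatic Flink Configuration (keys copied from the ticket; in a real test they would go into flink-conf.yaml). Note that slot.idle.timeout is in milliseconds, and with 2 slots per TaskManager a minimum of 7 slots implies ceil(7 / 2) = 4 TaskManagers, which is why step 3 expects 4 registrations:
{code:java}
import org.apache.flink.configuration.Configuration;

public class MinSlotsTestConfig {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setString("taskmanager.numberOfTaskSlots", "2");
        conf.setString("slotmanager.number-of-slots.min", "7");
        // milliseconds; a tiny value makes idle slots time out almost
        // immediately, which is what the last verification step relies on
        conf.setString("slot.idle.timeout", "5");
        System.out.println(conf);
    }
}
{code}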





[jira] [Resolved] (FLINK-34452) Release Testing: Verify FLINK-15959 Add min number of slots configuration to limit total number of slots

2024-02-20 Thread xiangyu feng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangyu feng resolved FLINK-34452.
--
Resolution: Fixed

> Release Testing: Verify FLINK-15959 Add min number of slots configuration to 
> limit total number of slots
> 
>
> Key: FLINK-34452
> URL: https://issues.apache.org/jira/browse/FLINK-34452
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: xiangyu feng
>Assignee: Weijie Guo
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Test Suggestion:
>  # Prepare configuration options:
>  ** taskmanager.numberOfTaskSlots = 2,
>  ** slotmanager.number-of-slots.min = 7,
>  ** slot.idle.timeout = 5
>  # Setup a Flink session Cluster on Yarn or Native Kubernetes based on 
> following docs:
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/yarn/#starting-a-flink-session-on-yarn]
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#starting-a-flink-session-on-kubernetes]
>  # Verify that 4 TaskManagers will be registered even though no jobs have been 
> submitted
>  # Verify that these TaskManagers will not be destroyed after 50 seconds.





[jira] [Commented] (FLINK-34452) Release Testing: Verify FLINK-15959 Add min number of slots configuration to limit total number of slots

2024-02-20 Thread xiangyu feng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819097#comment-17819097
 ] 

xiangyu feng commented on FLINK-34452:
--

[~lincoln.86xy] Hi, I will close this ticket as resolved.

> Release Testing: Verify FLINK-15959 Add min number of slots configuration to 
> limit total number of slots
> 
>
> Key: FLINK-34452
> URL: https://issues.apache.org/jira/browse/FLINK-34452
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: xiangyu feng
>Assignee: Weijie Guo
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Test Suggestion:
>  # Prepare configuration options:
>  ** taskmanager.numberOfTaskSlots = 2,
>  ** slotmanager.number-of-slots.min = 7,
>  ** slot.idle.timeout = 5
>  # Setup a Flink session Cluster on Yarn or Native Kubernetes based on 
> following docs:
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/yarn/#starting-a-flink-session-on-yarn]
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#starting-a-flink-session-on-kubernetes]
>  # Verify that 4 TaskManagers will be registered even though no jobs have been 
> submitted
>  # Verify that these TaskManagers will not be destroyed after 50 seconds.





[jira] [Commented] (FLINK-34452) Release Testing: Verify FLINK-15959 Add min number of slots configuration to limit total number of slots

2024-02-20 Thread xiangyu feng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819096#comment-17819096
 ] 

xiangyu feng commented on FLINK-34452:
--

[~Weijie Guo] Thx for your work!

> Release Testing: Verify FLINK-15959 Add min number of slots configuration to 
> limit total number of slots
> 
>
> Key: FLINK-34452
> URL: https://issues.apache.org/jira/browse/FLINK-34452
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: xiangyu feng
>Assignee: Weijie Guo
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Test Suggestion:
>  # Prepare configuration options:
>  ** taskmanager.numberOfTaskSlots = 2,
>  ** slotmanager.number-of-slots.min = 7,
>  ** slot.idle.timeout = 5
>  # Setup a Flink session Cluster on Yarn or Native Kubernetes based on 
> following docs:
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/yarn/#starting-a-flink-session-on-yarn]
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#starting-a-flink-session-on-kubernetes]
>  # Verify that 4 TaskManagers will be registered even though no jobs have been 
> submitted
>  # Verify that these TaskManagers will not be destroyed after 50 seconds.





Re: [PR] [FLINK-34371][runtime] Support EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute to optimize task deployment [flink]

2024-02-20 Thread via GitHub


mohitjain2504 commented on code in PR #24272:
URL: https://github.com/apache/flink/pull/24272#discussion_r1496989288


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/OperatorAttributesBuilder.java:
##
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators;
+
+import org.apache.flink.annotation.Experimental;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+/** The builder class for {@link OperatorAttributes}. */
+@Experimental
+public class OperatorAttributesBuilder {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(OperatorAttributesBuilder.class);
+@Nullable private Boolean outputOnlyAfterEndOfStream = null;

Review Comment:
   Do we want to make this value false by default and remove the `@Nullable`? 






Re: [PR] [FLINK-34202][python] Optimize Python nightly CI time [flink]

2024-02-20 Thread via GitHub


HuangXingBo merged PR #24321:
URL: https://github.com/apache/flink/pull/24321





Re: [PR] [FLINK-29802][state] Changelog supports native savepoint [flink]

2024-02-20 Thread via GitHub


masteryhx commented on PR #22744:
URL: https://github.com/apache/flink/pull/22744#issuecomment-1955978228

   Thanks for the review.
   I rebased and squashed some commits.
   I will merge it after the CI passes.





[jira] [Updated] (FLINK-33581) FLIP-381: Deprecate configuration getters/setters that return/set complex Java objects

2024-02-20 Thread Junrui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junrui Li updated FLINK-33581:
--
Release Note: 
The non-ConfigOption objects in StreamExecutionEnvironment, 
CheckpointConfig, and ExecutionConfig, and their corresponding getter/setter 
interfaces, are now deprecated in Flink 1.19, and these objects and methods 
are planned to be removed in Flink 2.0. The deprecated interfaces include the 
getter and setter methods of RestartStrategy, CheckpointStorage, and 
StateBackend.

More details can be found at 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=278464992.

  was:
The non-ConfigOption objects in the StreamExecutionEnvironment, 
CheckpointConfig, and ExecutionConfig and their corresponding getter/setter 
interfaces is now be deprecated in FLINK-1.19. And these objects and methods is 
planned to be removed in Flink-2.0. The deprecated interfaces include the 
getter and setter methods of RestartStrategy, CheckpointStorage, and 
StateBackend.

More details can be found at 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=278464992.
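As a hedged illustration of the ConfigOption-based style the updated release note points to (the option values below are arbitrary examples, not recommendations):
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ConfigBasedSetup {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // instead of env.setRestartStrategy(...):
        conf.setString("restart-strategy.type", "fixed-delay");
        conf.setString("restart-strategy.fixed-delay.attempts", "3");
        // instead of env.setStateBackend(...):
        conf.setString("state.backend.type", "hashmap");
        // instead of checkpointConfig.setCheckpointStorage(...):
        conf.setString("state.checkpoint-storage", "filesystem");
        conf.setString("state.checkpoints.dir", "file:///tmp/checkpoints");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        System.out.println(env.getConfig());
    }
}
{code}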


> FLIP-381: Deprecate configuration getters/setters that return/set complex 
> Java objects
> --
>
> Key: FLINK-33581
> URL: https://issues.apache.org/jira/browse/FLINK-33581
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataStream
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: image-2023-11-30-17-59-42-650.png
>
>
> Deprecate the non-ConfigOption objects in the StreamExecutionEnvironment, 
> CheckpointConfig, and ExecutionConfig, and ultimately removing them in FLINK 
> 2.0





[jira] [Comment Edited] (FLINK-34452) Release Testing: Verify FLINK-15959 Add min number of slots configuration to limit total number of slots

2024-02-20 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819086#comment-17819086
 ] 

Weijie Guo edited comment on FLINK-34452 at 2/21/24 6:20 AM:
-

[~xiangyu0xf] I have tested this feature on a yarn cluster:

1. Set {{taskmanager.numberOfTaskSlots=2}}, 
{{slotmanager.number-of-slots.min=7}}, and {{slot.idle.timeout=5}}.
2. Start a yarn session cluster.
3. The ActiveResourceManager's log shows that 4 workers (TMs) were requested and 
successfully registered.
4. These workers are not destroyed even after the idle timeout.

Everything works as expected.


was (Author: weijie guo):
I have tested this feature on a yarn cluster:

1. Set {{taskmanager.numberOfTaskSlots=2}}, 
{{slotmanager.number-of-slots.min=7}}, and {{slot.idle.timeout=5}}.
2. Start a yarn session cluster.
3. The ActiveResourceManager's log shows that 4 workers (TMs) were requested and 
successfully registered.
4. These workers are not destroyed even after the idle timeout.

Everything works as expected.

> Release Testing: Verify FLINK-15959 Add min number of slots configuration to 
> limit total number of slots
> 
>
> Key: FLINK-34452
> URL: https://issues.apache.org/jira/browse/FLINK-34452
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: xiangyu feng
>Assignee: Weijie Guo
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Test Suggestion:
>  # Prepare configuration options:
>  ** taskmanager.numberOfTaskSlots = 2,
>  ** slotmanager.number-of-slots.min = 7,
>  ** slot.idle.timeout = 5
>  # Setup a Flink session Cluster on Yarn or Native Kubernetes based on 
> following docs:
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/yarn/#starting-a-flink-session-on-yarn]
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#starting-a-flink-session-on-kubernetes]
>  # Verify that 4 TaskManagers will be registered even though no jobs have been 
> submitted
>  # Verify that these TaskManagers will not be destroyed after 50 seconds.





Re: [PR] [hotfix][test] Replaces String#replaceAll with String#replace to avoid regex compilation [flink]

2024-02-20 Thread via GitHub


snuyanzin commented on PR #24351:
URL: https://github.com/apache/flink/pull/24351#issuecomment-1955970997

   @XComp since you've also participated in
   https://github.com/apache/flink/pull/24315#discussion_r1493015749,
   could you please have a look here?





[jira] [Commented] (FLINK-34452) Release Testing: Verify FLINK-15959 Add min number of slots configuration to limit total number of slots

2024-02-20 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819086#comment-17819086
 ] 

Weijie Guo commented on FLINK-34452:


I have tested this feature on a yarn cluster:

1. Set {{taskmanager.numberOfTaskSlots=2}}, 
{{slotmanager.number-of-slots.min=7}}, and {{slot.idle.timeout=5}}.
2. Start a yarn session cluster.
3. The ActiveResourceManager's log shows that 4 workers (TMs) were requested and 
successfully registered.
4. These workers are not destroyed even after the idle timeout.

Everything works as expected.

> Release Testing: Verify FLINK-15959 Add min number of slots configuration to 
> limit total number of slots
> 
>
> Key: FLINK-34452
> URL: https://issues.apache.org/jira/browse/FLINK-34452
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: xiangyu feng
>Assignee: Weijie Guo
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> Test Suggestion:
>  # Prepare configuration options:
>  ** taskmanager.numberOfTaskSlots = 2,
>  ** slotmanager.number-of-slots.min = 7,
>  ** slot.idle.timeout = 5
>  # Setup a Flink session Cluster on Yarn or Native Kubernetes based on 
> following docs:
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/yarn/#starting-a-flink-session-on-yarn]
>  ** 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#starting-a-flink-session-on-kubernetes]
>  # Verify that 4 TaskManagers will be registered even though no jobs have been 
> submitted
>  # Verify that these TaskManagers will not be destroyed after 50 seconds.





[jira] [Commented] (FLINK-33397) FLIP-373: Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819085#comment-17819085
 ] 

Jane Chan commented on FLINK-33397:
---

I've drafted a release note for this feature. cc [~lincoln.86xy] and 
[~xuyangzhong] 

> FLIP-373: Support Configuring Different State TTLs using SQL Hint
> -
>
> Key: FLINK-33397
> URL: https://issues.apache.org/jira/browse/FLINK-33397
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: Jane Chan
>Assignee: xuyang
>Priority: Major
> Fix For: 1.19.0
>
>
> Please refer to 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-373%3A+Support+Configuring+Different+State+TTLs+using+SQL+Hint
>  
> |https://cwiki.apache.org/confluence/display/FLINK/FLIP-373%3A+Support+Configuring+Different+State+TTLs+using+SQL+Hint]
>  for more details.





[jira] [Updated] (FLINK-33397) FLIP-373: Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-33397:
--
Release Note: 
This is a new feature in Apache Flink 1.19 that enhances the flexibility and 
user experience when managing SQL state time-to-live (TTL) settings. Users can 
now specify custom TTL values for regular joins and group aggregations directly 
within their queries by [utilizing the STATE_TTL 
hint](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/hints/#state-ttl-hints).

This improvement means that you no longer need to alter your compiled plan to 
set specific TTLs for these operators. With the introduction of STATE_TTL 
hints, you can streamline your workflow and dynamically adjust the TTL based on 
your operational requirements.
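A minimal sketch of the hint in use through the Table API (the datagen tables here are illustrative, not from the original; the hint syntax follows the linked documentation):
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class StateTtlHintExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        tEnv.executeSql(
                "CREATE TABLE orders (customer_id INT, amount DOUBLE) "
                        + "WITH ('connector' = 'datagen', 'rows-per-second' = '1')");
        tEnv.executeSql(
                "CREATE TABLE customers (id INT, name STRING) "
                        + "WITH ('connector' = 'datagen', 'rows-per-second' = '1')");
        // keep join state for the orders input for 1 day and for the customers
        // input for 30 days, without editing the compiled plan
        tEnv.executeSql(
                "SELECT /*+ STATE_TTL('orders' = '1d', 'customers' = '30d') */ "
                        + "orders.customer_id, customers.name "
                        + "FROM orders JOIN customers ON orders.customer_id = customers.id")
                .print(); // streaming query: print() emits rows until cancelled
    }
}
{code}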

> FLIP-373: Support Configuring Different State TTLs using SQL Hint
> -
>
> Key: FLINK-33397
> URL: https://issues.apache.org/jira/browse/FLINK-33397
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: Jane Chan
>Assignee: xuyang
>Priority: Major
> Fix For: 1.19.0
>
>
> Please refer to 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-373%3A+Support+Configuring+Different+State+TTLs+using+SQL+Hint
>  
> |https://cwiki.apache.org/confluence/display/FLINK/FLIP-373%3A+Support+Configuring+Different+State+TTLs+using+SQL+Hint]
>  for more details.





Re: [PR] [FLINK-34371][runtime] Support EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute to optimize task deployment [flink]

2024-02-20 Thread via GitHub


Sxnan commented on PR #24272:
URL: https://github.com/apache/flink/pull/24272#issuecomment-1955959322

   @yunfengzhou-hub Thanks for the update. LGTM!





Re: [PR] [FLINK-33728] Do not rewatch when KubernetesResourceManagerDriver wat… [flink]

2024-02-20 Thread via GitHub


xintongsong commented on code in PR #24163:
URL: https://github.com/apache/flink/pull/24163#discussion_r1496940478


##
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManagerDriver.java:
##
@@ -90,7 +92,7 @@ public class KubernetesResourceManagerDriver
 /** Current max pod index. When creating a new pod, it should increase one. */
 private long currentMaxPodId = 0;
 
-private Optional<KubernetesWatch> podsWatchOpt;
+private CompletableFuture<KubernetesWatch> podsWatchOptFuture;

Review Comment:
   ```suggestion
    private CompletableFuture<KubernetesWatch> podsWatchOptFuture =
            FutureUtils.completedExceptionally(
                    new ResourceManagerException(
                            "KubernetesResourceManagerDriver is not initialized."));
   ```



##
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManagerDriver.java:
##
@@ -450,17 +462,30 @@ public void handleError(Throwable throwable) {
         if (throwable instanceof KubernetesTooOldResourceVersionException) {
             getMainThreadExecutor()
                     .execute(
-                            () -> {
-                                if (running) {
-                                    podsWatchOpt.ifPresent(KubernetesWatch::close);
-                                    log.info("Creating a new watch on TaskManager pods.");
-                                    try {
-                                        podsWatchOpt = watchTaskManagerPods();
-                                    } catch (Exception e) {
-                                        getResourceEventHandler().onError(e);
-                                    }
-                                }
-                            });
+                            () ->
+                                    podsWatchOptFuture.whenCompleteAsync(
+                                            (KubernetesWatch watch, Throwable throwable1) -> {
+                                                if (running) {
+                                                    try {
+                                                        if (watch != null) {
+                                                            watch.close();
+                                                        }
+                                                    } catch (Exception e) {
+                                                        log.warn(
+                                                                "Error when get old watch to close, which is not supposed to happen",
+                                                                e);
+                                                    }
+                                                    log.info("Creating a new watch on TaskManager pods.");
+                                                    try {
+                                                        podsWatchOptFuture = watchTaskManagerPods();
+                                                    } catch (Exception e) {
+                                                        getResourceEventHandler().onError(e);
+                                                    }
+                                                }
+                                            },
+                                            getMainThreadExecutor()));

Review Comment:
   The outer `getMainThreadExecutor().execute()` is unnecessary.



##
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManagerDriver.java:
##
@@ -90,7 +92,7 @@ public class KubernetesResourceManagerDriver
 /** Current max pod index. When creating a new pod, it should increase one. */
 private long currentMaxPodId = 0;
 
-private Optional<KubernetesWatch> podsWatchOpt;
+private CompletableFuture<KubernetesWatch> podsWatchOptFuture;

Review Comment:
   This helps prevent NPE if `podsWatchOptFuture` is accidentally being used 
before being initialized.
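A toy, Flink-free illustration of the point (`FutureUtils.completedExceptionally(...)` essentially produces the future built by hand below): chaining on a pre-failed future degrades gracefully, whereas a null field would throw a NullPointerException.

```java
import java.util.concurrent.CompletableFuture;

public class PreFailedFutureDemo {
    public static void main(String[] args) {
        // what FutureUtils.completedExceptionally(...) returns, spelled out:
        CompletableFuture<String> watchFuture = new CompletableFuture<>();
        watchFuture.completeExceptionally(
                new IllegalStateException("driver is not initialized"));

        // safe to chain on before initialization; no NullPointerException
        watchFuture.whenComplete((watch, failure) -> {
            if (failure != null) {
                System.out.println("handled gracefully: " + failure.getMessage());
            }
        });
    }
}
```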






Re: [PR] [FLINK-34479][documentation] Fix missed changelog configs in the documentation [flink]

2024-02-20 Thread via GitHub


flinkbot commented on PR #24355:
URL: https://github.com/apache/flink/pull/24355#issuecomment-1955946589

   
   ## CI report:
   
   * 5bf7811bbd65c6dec0557bd6384bad3577b77572 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-34458][checkpointing] Rename options for Generalized incremental checkpoints (changelog) [flink]

2024-02-20 Thread via GitHub


masteryhx commented on code in PR #24324:
URL: https://github.com/apache/flink/pull/24324#discussion_r1496946596


##
docs/content.zh/docs/deployment/config.md:
##
@@ -470,7 +470,7 @@ using State Changelog.
 
  FileSystem-based Changelog options
 
-These settings take effect when the `state.backend.changelog.storage`  is set 
to `filesystem` (see [above](#state-backend-changelog-storage)).

Review Comment:
   Thanks for the PR.
   I found the section was missing and opened PRs to fix it:
   https://github.com/apache/flink/pull/24354
   https://github.com/apache/flink/pull/24355
   It would be great if you could review those first.
   The title should also be updated after the PRs above are merged.






[jira] [Updated] (FLINK-34219) Introduce a new join operator to support minibatch

2024-02-20 Thread Shuai Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuai Xu updated FLINK-34219:
-
Release Note: Support minibatch regular join to reduce intermediate results 
and resolve record amplification in cascading join scenarios.
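As a hedged sketch, minibatch execution is driven by the existing mini-batch options (values illustrative; any additional switches specific to the new join operator are defined by FLIP-415 and not shown here):
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MiniBatchConfigSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // buffer input records and fire at most every 5 s or every 5000 rows
        tEnv.getConfig().set("table.exec.mini-batch.enabled", "true");
        tEnv.getConfig().set("table.exec.mini-batch.allow-latency", "5 s");
        tEnv.getConfig().set("table.exec.mini-batch.size", "5000");
    }
}
{code}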

> Introduce a new join operator to support minibatch
> --
>
> Key: FLINK-34219
> URL: https://issues.apache.org/jira/browse/FLINK-34219
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Reporter: Shuai Xu
>Assignee: Shuai Xu
>Priority: Major
> Fix For: 1.19.0
>
>
> This is the parent task of FLIP-415.





Re: [PR] [FLINK-34479][documentation] Fix missed changelog configs in the documentation [flink]

2024-02-20 Thread via GitHub


flinkbot commented on PR #24354:
URL: https://github.com/apache/flink/pull/24354#issuecomment-1955941725

   
   ## CI report:
   
   * ebb5f3501f6b24b40da46b102bccbefa40e000c2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[PR] [FLINK-34479][documentation] Fix missed changelog configs in the documentation [flink]

2024-02-20 Thread via GitHub


masteryhx opened a new pull request, #24355:
URL: https://github.com/apache/flink/pull/24355

   
   
   ## What is the purpose of the change
   
   Fix missed changelog configs in the documentation
   
   
   ## Brief change log
   
- Add changelog related configs in the doc.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / **docs** / 
JavaDocs / not documented)
   





[jira] [Updated] (FLINK-34479) Fix missed changelog configs in the documentation

2024-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34479:
---
Labels: pull-request-available  (was: )

> Fix missed changelog configs in the documentation
> -
>
> Key: FLINK-34479
> URL: https://issues.apache.org/jira/browse/FLINK-34479
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.19.0, 1.20.0
>Reporter: Hangxiang Yu
>Assignee: Hangxiang Yu
>Priority: Major
>  Labels: pull-request-available
>
> state_backend_changelog_section has been missed in the documentation





[PR] [FLINK-34479][documentation] Fix missed changelog configs in the documentation [flink]

2024-02-20 Thread via GitHub


masteryhx opened a new pull request, #24354:
URL: https://github.com/apache/flink/pull/24354

   
   
   ## What is the purpose of the change
   
   Fix missed changelog configs in the documentation
   
   
   ## Brief change log
   
- Add changelog related configs in the doc.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / **docs** / 
JavaDocs / not documented)
   





[jira] [Created] (FLINK-34479) Fix missed changelog configs in the documentation

2024-02-20 Thread Hangxiang Yu (Jira)
Hangxiang Yu created FLINK-34479:


 Summary: Fix missed changelog configs in the documentation
 Key: FLINK-34479
 URL: https://issues.apache.org/jira/browse/FLINK-34479
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.19.0, 1.20.0
Reporter: Hangxiang Yu
Assignee: Hangxiang Yu


state_backend_changelog_section has been missed in the documentation





Re: [PR] [FLINK-34463] Open catalog in CatalogManager should use proper context classloader [flink]

2024-02-20 Thread via GitHub


jrthe42 commented on code in PR #24328:
URL: https://github.com/apache/flink/pull/24328#discussion_r1496930017


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java:
##
@@ -310,7 +311,10 @@ public void createCatalog(String catalogName, 
CatalogDescriptor catalogDescripto
 }
 
 Catalog catalog = initCatalog(catalogName, catalogDescriptor);
-catalog.open();
+try (TemporaryClassLoaderContext context =

Review Comment:
   @hackergin thanks, a test case is added to verify the change.
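For context, a minimal sketch of the pattern in the diff above (`userClassLoader` stands in for the real CatalogManager field): the user code classloader is installed as the thread context classloader for the duration of `open()`, and the previous one is restored automatically when the try block exits.

```java
import org.apache.flink.table.catalog.Catalog;
import org.apache.flink.util.TemporaryClassLoaderContext;

public class CatalogOpenSketch {
    static void openWithUserClassLoader(Catalog catalog, ClassLoader userClassLoader) {
        // of(...) swaps in userClassLoader and restores the previous
        // context classloader on close()
        try (TemporaryClassLoaderContext ignored =
                TemporaryClassLoaderContext.of(userClassLoader)) {
            catalog.open();
        }
    }
}
```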






[jira] [Updated] (FLINK-33581) FLIP-381: Deprecate configuration getters/setters that return/set complex Java objects

2024-02-20 Thread Junrui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junrui Li updated FLINK-33581:
--
Release Note: 
The non-ConfigOption objects in the StreamExecutionEnvironment, 
CheckpointConfig, and ExecutionConfig and their corresponding getter/setter 
interfaces is now be deprecated in FLINK-1.19. And these objects and methods is 
planned to be removed in Flink-2.0. The deprecated interfaces include the 
getter and setter methods of RestartStrategy, CheckpointStorage, and 
StateBackend.

More details can be found at 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=278464992.

  was:
The non-ConfigOption objects in the StreamExecutionEnvironment, 
CheckpointConfig, and ExecutionConfig and their corresponding getter/setter 
interfaces is now be deprecated in FLINK-1.19. And these objects and methods is 
planned to be removed in Flink-2.0. Detailed information regarding the 
deprecations is as follows:

1. RestartStrategy:
Suggested alternative: 
Users can configure the RestartStrategyOptions related ConfigOptions, such as 
"restart-strategy.type", in the configuration, instead of passing a 
RestartStrategyConfiguration object.

2.CheckpointStorage:
Suggested alternative: 
Users can configure "state.checkpoint-storage" in the configuration as the 
fully qualified name of the checkpoint storage or use some FLINK-provided 
checkpoint storage shortcut names such as "jobmanager" and "filesystem", and 
provide the necessary configuration options for building that storage, instead 
of passing a CheckpointStorage object.

3.StateBackend:
Suggested alternative: 
Users can configure "state.backend.type" in the configuration as the fully 
qualified name of the state backend or use some FLINK-provided state backend 
shortcut names such as "hashmap" and "rocksdb", and provide the necessary 
configuration options for building that StateBackend, instead of passing a 
StateBackend object.

More details can be found at 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=278464992.


> FLIP-381: Deprecate configuration getters/setters that return/set complex 
> Java objects
> --
>
> Key: FLINK-33581
> URL: https://issues.apache.org/jira/browse/FLINK-33581
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataStream
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: image-2023-11-30-17-59-42-650.png
>
>
> Deprecate the non-ConfigOption objects in the StreamExecutionEnvironment, 
> CheckpointConfig, and ExecutionConfig, and ultimately removing them in FLINK 
> 2.0





[jira] [Commented] (FLINK-33545) KafkaSink implementation can cause dataloss during broker issue when not using EXACTLY_ONCE if there's any batching

2024-02-20 Thread Kevin Tseng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819079#comment-17819079
 ] 

Kevin Tseng commented on FLINK-33545:
-

Hi [~mason6345] 

I have replicated the behavior you referred to in
{code:java}
https://github.com/apache/flink-connector-kafka/blob/15f2662eccf461d9d539ed87a78c9851cd17fa43/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java#L1107
{code}
based on the logic displayed, it seems *FlinkKafkaProducer* (the now 
deprecated producer) also considered the possibility of a race condition:

*pendingRecords* is only increased in *FlinkKafkaProducer::invoke*, akin to 
*KafkaWriter::write* / *FlinkKafkaInternalProducer::send*,

and decreased only in *FlinkKafkaProducer::acknowledgeMessage*, which is only 
invoked within *Callback::onCompletion*, akin to 
*KafkaWriter$WriterCallback::onCompletion*.

The changes in the updated implementation are (a sketch follows the list):
 # add pendingRecords typed AtomicLong (same as FlinkKafkaProducer)
 # increase the variable in FlinkKafkaInternalProducer::send
 # create an intermediate callback class TrackingCallback that decorates 
callback parameter of FlinkKafkaInternalProducer::send
 # decrease pendingRecords in the TrackingCallback decorated class
 # check the pendingRecords variable after flush, within 
FlinkKafkaInternalProducer::flush to ensure nothing else was sent while 
flushing has taken place, and throw IllegalStateException (same exception as 
FlinkKafkaProducer)
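
A minimal sketch of steps 3 and 4 under the assumptions above (the class and field names come from the description; only the Kafka Callback interface is real):
{code:java}
import java.util.concurrent.atomic.AtomicLong;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

public class TrackingCallback implements Callback {
    private final AtomicLong pendingRecords;
    private final Callback delegate; // caller-supplied callback, may be null

    public TrackingCallback(AtomicLong pendingRecords, Callback delegate) {
        this.pendingRecords = pendingRecords;
        this.delegate = delegate;
    }

    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        // decremented only once the broker acks or fails the record,
        // mirroring FlinkKafkaProducer::acknowledgeMessage
        pendingRecords.decrementAndGet();
        if (delegate != null) {
            delegate.onCompletion(metadata, exception);
        }
    }
}
{code}
In this sketch, send would increment pendingRecords and wrap the user callback in a TrackingCallback, and flush would throw an IllegalStateException if pendingRecords is still non-zero after flushing (steps 2 and 5).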

 

 

> KafkaSink implementation can cause dataloss during broker issue when not 
> using EXACTLY_ONCE if there's any batching
> ---
>
> Key: FLINK-33545
> URL: https://issues.apache.org/jira/browse/FLINK-33545
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.18.0
>Reporter: Kevin Tseng
>Assignee: Kevin Tseng
>Priority: Major
>  Labels: pull-request-available
>
> In the current implementation of KafkaSource and KafkaSink there are some 
> assumptions that were made:
>  # KafkaSource completely relies on Checkpoint to manage and track its offset 
> in *KafkaSourceReader* class
>  # KafkaSink in *KafkaWriter* class only performs catch-flush when 
> *DeliveryGuarantee.EXACTLY_ONCE* is specified.
> KafkaSource is assuming that checkpoint should be properly fenced and 
> everything it had read up till checkpoint initiation will be processed or 
> recorded by operators downstream, including the TwoPhaseCommiter such as 
> *KafkaSink*
> *KafkaSink* goes by the model of:
>  
> {code:java}
> flush -> prepareCommit -> commit{code}
>  
> In a scenario that:
>  * KafkaSource ingested records #1 to #100
>  * KafkaSink only had chance to send records #1 to #96
>  * with a batching interval of 5ms
> when checkpoint has been initiated, flush will only confirm the sending of 
> record #1 to #96.
> This allows checkpoint to proceed as there's no error, and record #97 to 100 
> will be batched after first flush.
> Now, if broker goes down / has issue that caused the internal KafkaProducer 
> to not be able to send out the record after a batch, and is on a constant 
> retry-cycle (default value of KafkaProducer retries is Integer.MAX_VALUE), 
> *WriterCallback* error handling will never be triggered until the next 
> checkpoint flush.
> This can be tested by creating a faulty Kafka cluster and run the following 
> code:
> {code:java}
> Properties props = new Properties(); 
> props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVER);
> props.put(ProducerConfig.CLIENT_ID_CONFIG, "example-producer");
> props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, 
> StringSerializer.class.getName());
> props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, 
> StringSerializer.class.getName()); 
> props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); 
> props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, Integer.MAX_VALUE); 
> props.put(ProducerConfig.ACKS_CONFIG, "all"); 
> final KafkaProducer<String, String> producer = new KafkaProducer<>(props);
> try {
> for (int i = 0; i < 10; i++) {
> System.out.printf("sending record #%d\n", i);
> String data = UUID.randomUUID().toString();
> final ProducerRecord<String, String> record = new 
> ProducerRecord<>(TOPIC, Integer.toString(i), data);
> producer.send(record, new CB(Integer.toString(i), data));
> Thread.sleep(10_000); // sleep for 10 seconds
> }
> } catch (Exception e) {
> e.printStackTrace();
> } finally {
> System.out.println("flushing");
> producer.flush();
> System.out.println("closing");
> producer.close();
> }{code}
> Once callback returns due to network timeout, it will cause Flink to restart 
> from previously saved 

[jira] [Updated] (FLINK-33581) FLIP-381: Deprecate configuration getters/setters that return/set complex Java objects

2024-02-20 Thread Junrui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junrui Li updated FLINK-33581:
--
Release Note: 
The non-ConfigOption objects in the StreamExecutionEnvironment, 
CheckpointConfig, and ExecutionConfig and their corresponding getter/setter 
interfaces is now be deprecated in FLINK-1.19. And these objects and methods is 
planned to be removed in Flink-2.0. Detailed information regarding the 
deprecations is as follows:

1. RestartStrategy:
Suggested alternative: 
Users can configure the RestartStrategyOptions related ConfigOptions, such as 
"restart-strategy.type", in the configuration, instead of passing a 
RestartStrategyConfiguration object.

2.CheckpointStorage:
Suggested alternative: 
Users can configure "state.checkpoint-storage" in the configuration as the 
fully qualified name of the checkpoint storage or use some FLINK-provided 
checkpoint storage shortcut names such as "jobmanager" and "filesystem", and 
provide the necessary configuration options for building that storage, instead 
of passing a CheckpointStorage object.

3.StateBackend:
Suggested alternative: 
Users can configure "state.backend.type" in the configuration as the fully 
qualified name of the state backend or use some FLINK-provided state backend 
shortcut names such as "hashmap" and "rocksdb", and provide the necessary 
configuration options for building that StateBackend, instead of passing a 
StateBackend object.

More details can be found at 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=278464992.

  was:
The non-ConfigOption objects in the StreamExecutionEnvironment, 
CheckpointConfig, and ExecutionConfig and their corresponding getter/setter 
interfaces is now be deprecated in FLINK-1.19. And these objects and methods is 
planned to be removed in Flink-2.0. Detailed information regarding the 
deprecations is as follows:

1. RestartStrategy:

Suggested alternative: 
Users can configure the RestartStrategyOptions related ConfigOptions, such as 
"restart-strategy.type", in the configuration, instead of passing a 
RestartStrategyConfiguration object.

2.CheckpointStorage:

Suggested alternative: 
Users can configure "state.checkpoint-storage" in the configuration as the 
fully qualified name of the checkpoint storage or use some FLINK-provided 
checkpoint storage shortcut names such as "jobmanager" and "filesystem", and 
provide the necessary configuration options for building that storage, instead 
of passing a CheckpointStorage object.

3.StateBackend:

Suggested alternative: 
Users can configure "state.backend.type" in the configuration as the fully 
qualified name of the state backend or use some FLINK-provided state backend 
shortcut names such as "hashmap" and "rocksdb", and provide the necessary 
configuration options for building that StateBackend, instead of passing a 
StateBackend object.


More details can be found at 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=278464992.


> FLIP-381: Deprecate configuration getters/setters that return/set complex 
> Java objects
> --
>
> Key: FLINK-33581
> URL: https://issues.apache.org/jira/browse/FLINK-33581
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataStream
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: image-2023-11-30-17-59-42-650.png
>
>
> Deprecate the non-ConfigOption objects in the StreamExecutionEnvironment, 
> CheckpointConfig, and ExecutionConfig, and ultimately removing them in FLINK 
> 2.0





[jira] [Updated] (FLINK-33581) FLIP-381: Deprecate configuration getters/setters that return/set complex Java objects

2024-02-20 Thread Junrui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junrui Li updated FLINK-33581:
--
Release Note: 
The non-ConfigOption objects in the StreamExecutionEnvironment, 
CheckpointConfig, and ExecutionConfig and their corresponding getter/setter 
interfaces is now be deprecated in FLINK-1.19. And these objects and methods is 
planned to be removed in Flink-2.0. Detailed information regarding the 
deprecations is as follows:

1. RestartStrategy:

Suggested alternative: 
Users can configure the RestartStrategyOptions related ConfigOptions, such as 
"restart-strategy.type", in the configuration, instead of passing a 
RestartStrategyConfiguration object.

2.CheckpointStorage:

Suggested alternative: 
Users can configure "state.checkpoint-storage" in the configuration as the 
fully qualified name of the checkpoint storage or use some FLINK-provided 
checkpoint storage shortcut names such as "jobmanager" and "filesystem", and 
provide the necessary configuration options for building that storage, instead 
of passing a CheckpointStorage object.

3.StateBackend:

Suggested alternative: 
Users can configure "state.backend.type" in the configuration as the fully 
qualified name of the state backend or use some FLINK-provided state backend 
shortcut names such as "hashmap" and "rocksdb", and provide the necessary 
configuration options for building that StateBackend, instead of passing a 
StateBackend object.


More details can be found at 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=278464992.

  was:
The non-ConfigOption objects in the StreamExecutionEnvironment, 
CheckpointConfig, and ExecutionConfig, and their corresponding getter/setter 
interfaces are now deprecated in Flink 1.19. These objects and methods are 
planned to be removed in Flink 2.0. Detailed information regarding the 
deprecations is as follows:

1. RestartStrategy:
Class:
org.apache.flink.api.common.restartstrategy.RestartStrategies
org.apache.flink.api.common.restartstrategy.RestartStrategies.RestartStrategyConfiguration
org.apache.flink.api.common.restartstrategy.RestartStrategies.FixedDelayRestartStrategyConfiguration
org.apache.flink.api.common.restartstrategy.RestartStrategies.ExponentialDelayRestartStrategyConfiguration
org.apache.flink.api.common.restartstrategy.RestartStrategies.FailureRateRestartStrategyConfiguration
org.apache.flink.api.common.restartstrategy.RestartStrategies.FallbackRestartStrategyConfiguration

Method:
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment#setRestartStrategy(RestartStrategies.RestartStrategyConfiguration
 restartStrategyConfiguration)
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment#getRestartStrategy()
org.apache.flink.api.common.ExecutionConfig#getRestartStrategy()
org.apache.flink.api.common.ExecutionConfig#setRestartStrategy(RestartStrategies.RestartStrategyConfiguration
 restartStrategyConfiguration)
pyflink.common.execution_config.ExecutionConfig#set_restart_strategy(self, 
restart_strategy_configuration: RestartStrategyConfiguration)
pyflink.datastream.stream_execution_environment.StreamExecutionEnvironment#set_restart_strategy(self,
 restart_strategy_configuration: RestartStrategyConfiguration)

Field:
org.apache.flink.api.common.ExecutionConfig#restartStrategyConfiguration

Suggested alternative: 
Users can configure the RestartStrategyOptions related ConfigOptions, such as 
"restart-strategy.type", in the configuration, instead of passing a 
RestartStrategyConfiguration object.

2. CheckpointStorage:
Method:
org.apache.flink.streaming.api.environment.CheckpointConfig#setCheckpointStorage(CheckpointStorage
 storage)
org.apache.flink.streaming.api.environment.CheckpointConfig#setCheckpointStorage(String
 checkpointDirectory)
org.apache.flink.streaming.api.environment.CheckpointConfig#setCheckpointStorage(URI
 checkpointDirectory)
org.apache.flink.streaming.api.environment.CheckpointConfig#setCheckpointStorage(Path
 checkpointDirectory)
org.apache.flink.streaming.api.environment.CheckpointConfig#getCheckpointStorage()
pyflink.datastream.checkpoint_config.CheckpointConfig#set_checkpoint_storage(self,
 storage: CheckpointStorage)
pyflink.datastream.checkpoint_config.CheckpointConfig#set_checkpoint_storage_dir(self,
 checkpoint_path: str)
pyflink.datastream.checkpoint_config.CheckpointConfig#get_checkpoint_storage(self)

Suggested alternative: 
Users can configure "state.checkpoint-storage" in the configuration as the 
fully qualified name of the checkpoint storage or use some FLINK-provided 
checkpoint storage shortcut names such as "jobmanager" and "filesystem", and 
provide the necessary configuration options for building that storage, instead 
of passing a CheckpointStorage object.

3. StateBackend:
Method:
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment#setStateBackend(StateBackend
 backend)

[jira] [Commented] (FLINK-33817) Allow ReadDefaultValues = False for non primitive types on Proto3

2024-02-20 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819075#comment-17819075
 ] 

Benchao Li commented on FLINK-33817:


According to what [~dsaisharath] provided, I tend to include this improvement 
in master. I'll review the PR shortly and hope to get it into 1.20.0.

[~nathantalewis] This is an improvement rather than a bugfix, so it will only 
be merged to the latest development version and does not apply to bugfix 
versions such as 1.17.x and 1.18.x. Does this sound good to you?

> Allow ReadDefaultValues = False for non primitive types on Proto3
> -
>
> Key: FLINK-33817
> URL: https://issues.apache.org/jira/browse/FLINK-33817
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.18.0
>Reporter: Sai Sharath Dandi
>Priority: Major
>  Labels: pull-request-available
>
> *Background*
>  
> The current Protobuf format 
> [implementation|https://github.com/apache/flink/blob/c3e2d163a637dca5f49522721109161bd7ebb723/flink-formats/flink-protobuf/src/main/java/org/apache/flink/formats/protobuf/deserialize/ProtoToRowConverter.java]
>  always sets ReadDefaultValues=False when using Proto3 version. This can 
> cause severe performance degradation for large Protobuf schemas with OneOf 
> fields as the entire generated code needs to be executed during 
> deserialization even when certain fields are not present in the data to be 
> deserialized and all the subsequent nested fields can be skipped. Proto3 
> supports hasXXX() methods for checking field presence for non-primitive types 
> since Proto version 
> [3.15|https://github.com/protocolbuffers/protobuf/releases/tag/v3.15.0]. In 
> our company's internal performance benchmarks, we've seen almost a 10x 
> difference in performance for one of our real production use cases when 
> setting ReadDefaultValues=False with the proto3 version. The exact 
> difference in performance depends on the schema complexity and data payload, 
> but we should allow users to set readDefaultValues=False in general.
>  
> *Solution*
>  
> Support using ReadDefaultValues=False when using the Proto3 version. We need 
> to be careful to check for field presence only on non-primitive types if 
> ReadDefaultValues is false and the version used is Proto3.
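
For readers unfamiliar with proto3 field presence, a minimal sketch of the 
check this relies on (the message and field names are hypothetical, not taken 
from the actual format code):

{code:java}
// Assumes generated proto3 (>= 3.15) classes: a message Order with a
// message-typed field "address" of type Address. Both names are made up
// purely for illustration.
static void convertOrder(Order order) {
    if (order.hasAddress()) {
        // Field was present in the serialized bytes: convert the nested message.
        Address address = order.getAddress();
        // ... convert nested fields ...
    }
    // Field absent: with readDefaultValues=false the whole nested subtree
    // can be skipped instead of walking generated default values.
}
{code}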



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32070) FLIP-306 Unified File Merging Mechanism for Checkpoints

2024-02-20 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan updated FLINK-32070:

Fix Version/s: 1.20.0
   (was: 1.19.0)

> FLIP-306 Unified File Merging Mechanism for Checkpoints
> ---
>
> Key: FLINK-32070
> URL: https://issues.apache.org/jira/browse/FLINK-32070
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
> Fix For: 1.20.0
>
>
> The FLIP: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-306%3A+Unified+File+Merging+Mechanism+for+Checkpoints]
>  
> The creation of multiple checkpoint files can lead to a 'file flood' problem, 
> in which a large number of files are written to the checkpoint storage in a 
> short amount of time. This can cause issues in large clusters with high 
> workloads, such as the creation and deletion of many files increasing the 
> amount of file meta modification on DFS, leading to single-machine hotspot 
> issues for meta maintainers (e.g. NameNode in HDFS). Additionally, the 
> performance of object storage (e.g. Amazon S3 and Alibaba OSS) can 
> significantly decrease when listing objects, which is necessary for object 
> name de-duplication before creating an object, further affecting the 
> performance of directory manipulation in the file system's perspective of 
> view (See [hadoop-aws module 
> documentation|https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#:~:text=an%20intermediate%20state.-,Warning%20%232%3A%20Directories%20are%20mimicked,-The%20S3A%20clients],
>  section 'Warning #2: Directories are mimicked').
> While many solutions have been proposed for individual types of state files 
> (e.g. FLINK-11937 for keyed state (RocksDB) and FLINK-26803 for channel 
> state), the file flood problems from each type of checkpoint file are similar 
> and lack systematic view and solution. Therefore, the goal of this FLIP is to 
> establish a unified file merging mechanism to address the file flood problem 
> during checkpoint creation for all types of state files, including keyed, 
> non-keyed, channel, and changelog state. This will significantly improve the 
> system stability and availability of fault tolerance in Flink.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-24696][docs-zh]Translate how to configure unaligned checkpoints into Chinese [flink]

2024-02-20 Thread via GitHub


Libra-JL commented on PR #22582:
URL: https://github.com/apache/flink/pull/22582#issuecomment-1955904136

   > @Libra-JL Thanks for your update! I noticed that there are still some 
format changes in 
`docs/layouts/shortcodes/generated/checkpointing_configuration.html`; would you 
please remove them?
   
   Sorry, I forgot what the original format should look like, so I replaced 
this file with the latest version from the master branch.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-6755) Allow triggering Checkpoints through command line client

2024-02-20 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819073#comment-17819073
 ] 

lincoln lee commented on FLINK-6755:


[~Zakelly] Thanks for your updates!

> Allow triggering Checkpoints through command line client
> 
>
> Key: FLINK-6755
> URL: https://issues.apache.org/jira/browse/FLINK-6755
> Project: Flink
>  Issue Type: New Feature
>  Components: Command Line Client, Runtime / Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Assignee: Zakelly Lan
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.19.0
>
>
> The command line client currently only allows triggering (and canceling with) 
> Savepoints. 
> While this is good if we want to fork or modify the pipelines in a 
> non-checkpoint compatible way, now with incremental checkpoints this becomes 
> wasteful for simple job restarts/pipeline updates. 
> I suggest we add a new command: 
> ./bin/flink checkpoint <jobId> [checkpointDirectory]
> and a new flag -c for the cancel command to indicate we want to trigger a 
> checkpoint:
> ./bin/flink cancel -c [targetDirectory] 
> Otherwise this can work similarly to the current savepoint-taking logic; we 
> could probably even piggyback on the current messages by adding a boolean flag 
> indicating whether it should be a savepoint or a checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33634] Add Conditions to Flink CRD's Status field [flink-kubernetes-operator]

2024-02-20 Thread via GitHub


gyfora commented on PR #749:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/749#issuecomment-1955883131

   > > > > > > @gyfora yes, I am having JIRA account using which I can login to 
https://issues.apache.org/jira/projects/FLINK/.
   > > > > > 
   > > > > > 
   > > > > > okay then can you please tell me the account name? :D
   > > > > 
   > > > > 
   > > > > account name : **lajithk**
   > > > 
   > > > 
   > > > It seems like you need to create a confluence account 
(cwiki.apache.org) once you have that I can give you permissions to create a 
FLIP page
   > > 
   > > 
   > > I have been trying to create a Confluence account at 
https://cwiki.apache.org/confluence ; it says to register via the log-in page, 
but I don't see any option to register on the login page. On further digging I 
noticed something like 
https://cwiki.apache.org/confluence/display/DIRxTRIPLESEC/User+Registration . 
Is that something I have to follow up on, or is there another path I can take 
for registration?
   > 
   > @gyfora , could you please point me to anyone I can reach out to for 
assistance with getting an account created on 
https://cwiki.apache.org/confluence ? Thank you in advance.
   
   It seems like Confluence access for non-committers has recently been 
disabled. Based on a recent discussion, we are adopting a new (somewhat 
simpler) process for new FLIPs: 
https://lists.apache.org/thread/rkpvlnwj9gv1hvx1dyklx6k88qpnvk2t
   
   ```
   Contributors create a Google Doc, make it view-only, and post it to the 
mailing list for a discussion thread. 
   When the discussions have been resolved, the contributor asks on the Dev 
mailing list for a committer/PMC to copy the contents from the Google Doc and 
create a FLIP number for them. 
   The contributor can then use that FLIP to start a VOTE thread.
   ```
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34478) NoSuchMethod error for "flink cancel $jobId" via Command Line

2024-02-20 Thread Liu Yi (Jira)
Liu Yi created FLINK-34478:
--

 Summary: NoSuchMethod error for "flink cancel $jobId" via Command 
Line
 Key: FLINK-34478
 URL: https://issues.apache.org/jira/browse/FLINK-34478
 Project: Flink
  Issue Type: Bug
  Components: Command Line Client
Affects Versions: 1.18.1
Reporter: Liu Yi


On 1.18.1 in standalone mode (launched by "\{flink}/bin/start-cluster.sh"), I hit 
" [java.lang.NoSuchMethodError: 'boolean 
org.apache.commons.cli.CommandLine.hasOption(org.apache.commons.cli.Option)'|https://www.google.com/search?q=java.lang.NoSuchMethodError:+%27boolean+org.apache.commons.cli.CommandLine.hasOption%28org.apache.commons.cli.Option%29%27]
 " when trying to cancel a job submitted via the UI by executing the Command 
Line "{*}{flink}/bin/flink cancel _$jobId_{*}". While clicking on "Cancel Job" 
link in the UI can cancel the job just fine, and "flink run" command line also 
works fine.

Has anyone seen same/similar behavior?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34371][runtime] Support EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute to optimize task deployment [flink]

2024-02-20 Thread via GitHub


yunfengzhou-hub commented on PR #24272:
URL: https://github.com/apache/flink/pull/24272#issuecomment-1955843349

   Thanks for the comments @Sxnan @mohitjain2504 . I have updated the PR 
according to the comments.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-24696][docs-zh]Translate how to configure unaligned checkpoints into Chinese [flink]

2024-02-20 Thread via GitHub


Zakelly commented on code in PR #22582:
URL: https://github.com/apache/flink/pull/22582#discussion_r1496822966


##
docs/layouts/shortcodes/generated/checkpointing_configuration.html:
##
@@ -12,55 +12,55 @@
 state.backend.incremental
 false
 Boolean
-Option whether the state backend should create incremental 
checkpoints, if possible. For an incremental checkpoint, only a diff from the 
previous checkpoint is stored, rather than the complete checkpoint state. Once 
enabled, the state size shown in web UI or fetched from rest API only 
represents the delta checkpoint size instead of full checkpoint size. Some 
state backends may not support incremental checkpoints and ignore this 
option.
+选择是否启用 State Backend 创建增量 Checkpoint(如果可能)。对于增量 
Checkpoint,只会存储与上一个 Checkpoint 的差异,而不是完整的 Checkpoint State。一旦启用,Web UI 中显示的 
State 大小或从 REST API 获取的 State 大小仅表示增量 Checkpoint 大小,而不是完整的 Checkpoint 大小。某些 
State Backend 可能不支持增量 Checkpoint 并忽略此选项。

Review Comment:
   Ah, are you referring to the 'Affects Version/s'? There is a corresponding 
git tag for each version in this repo. And never mind, you are right about 
merging this into the `master` branch.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-24696][docs-zh]Translate how to configure unaligned checkpoints into Chinese [flink]

2024-02-20 Thread via GitHub


Zakelly commented on PR #22582:
URL: https://github.com/apache/flink/pull/22582#issuecomment-1955792248

   @Libra-JL Thanks for your update! I noticed that there are still some 
format changes in 
`docs/layouts/shortcodes/generated/checkpointing_configuration.html`; would you 
please remove them?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-33945) Cleanup usage of deprecated org.apache.flink.table.api.dataview.ListView#ListView

2024-02-20 Thread Jacky Lau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacky Lau closed FLINK-33945.
-
Resolution: Fixed

merged 060dcf8fc65e5fc3ade5b6037f7dfdc7919ba2b2

> Cleanup usage of deprecated 
> org.apache.flink.table.api.dataview.ListView#ListView
> -
>
> Key: FLINK-33945
> URL: https://issues.apache.org/jira/browse/FLINK-33945
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-6755) Allow triggering Checkpoints through command line client

2024-02-20 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819044#comment-17819044
 ] 

Zakelly Lan commented on FLINK-6755:


[~lincoln.86xy] Thanks for the reminder! I've added a release note.

> Allow triggering Checkpoints through command line client
> 
>
> Key: FLINK-6755
> URL: https://issues.apache.org/jira/browse/FLINK-6755
> Project: Flink
>  Issue Type: New Feature
>  Components: Command Line Client, Runtime / Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Assignee: Zakelly Lan
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.19.0
>
>
> The command line client currently only allows triggering (and canceling with) 
> Savepoints. 
> While this is good if we want to fork or modify the pipelines in a 
> non-checkpoint compatible way, now with incremental checkpoints this becomes 
> wasteful for simple job restarts/pipeline updates. 
> I suggest we add a new command: 
> ./bin/flink checkpoint <jobId> [checkpointDirectory]
> and a new flag -c for the cancel command to indicate we want to trigger a 
> checkpoint:
> ./bin/flink cancel -c [targetDirectory] 
> Otherwise this can work similarly to the current savepoint-taking logic; we 
> could probably even piggyback on the current messages by adding a boolean flag 
> indicating whether it should be a savepoint or a checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-6755) Allow triggering Checkpoints through command line client

2024-02-20 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan updated FLINK-6755:
---
Release Note: 
The command line interface now supports triggering a checkpoint manually. 
Usage:
  ./bin/flink checkpoint $JOB_ID [-full]
By specifying the '-full' option, a full checkpoint is triggered. Otherwise, an 
incremental checkpoint is triggered if the job is configured to take 
incremental checkpoints periodically.

  was:
The command line interface now supports triggering a checkpoint manually. 
Usage:
  ./bin/flink checkpoint [-full]
By specifying the '-full' option, a full checkpoint is triggered. Otherwise, an 
incremental checkpoint is triggered if the job is configured to take 
incremental checkpoints periodically.


> Allow triggering Checkpoints through command line client
> 
>
> Key: FLINK-6755
> URL: https://issues.apache.org/jira/browse/FLINK-6755
> Project: Flink
>  Issue Type: New Feature
>  Components: Command Line Client, Runtime / Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Assignee: Zakelly Lan
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.19.0
>
>
> The command line client currently only allows triggering (and canceling with) 
> Savepoints. 
> While this is good if we want to fork or modify the pipelines in a 
> non-checkpoint compatible way, now with incremental checkpoints this becomes 
> wasteful for simple job restarts/pipeline updates. 
> I suggest we add a new command: 
> ./bin/flink checkpoint <jobId> [checkpointDirectory]
> and a new flag -c for the cancel command to indicate we want to trigger a 
> checkpoint:
> ./bin/flink cancel -c [targetDirectory] 
> Otherwise this can work similarly to the current savepoint-taking logic; we 
> could probably even piggyback on the current messages by adding a boolean flag 
> indicating whether it should be a savepoint or a checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-6755) Allow triggering Checkpoints through command line client

2024-02-20 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan updated FLINK-6755:
---
Release Note: 
The command line interface now supports triggering a checkpoint manually. 
Usage:
  ./bin/flink checkpoint [-full]
By specifying the '-full' option, a full checkpoint is triggered. Otherwise, an 
incremental checkpoint is triggered if the job is configured to take 
incremental checkpoints periodically.

> Allow triggering Checkpoints through command line client
> 
>
> Key: FLINK-6755
> URL: https://issues.apache.org/jira/browse/FLINK-6755
> Project: Flink
>  Issue Type: New Feature
>  Components: Command Line Client, Runtime / Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Assignee: Zakelly Lan
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.19.0
>
>
> The command line client currently only allows triggering (and canceling with) 
> Savepoints. 
> While this is good if we want to fork or modify the pipelines in a 
> non-checkpoint compatible way, now with incremental checkpoints this becomes 
> wasteful for simple job restarts/pipeline updates. 
> I suggest we add a new command: 
> ./bin/flink checkpoint <jobId> [checkpointDirectory]
> and a new flag -c for the cancel command to indicate we want to trigger a 
> checkpoint:
> ./bin/flink cancel -c [targetDirectory] 
> Otherwise this can work similarly to the current savepoint-taking logic; we 
> could probably even piggyback on the current messages by adding a boolean flag 
> indicating whether it should be a savepoint or a checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34462][table-planner] Improve the exception for negative parameters in window TVF [flink]

2024-02-20 Thread via GitHub


flinkbot commented on PR #24353:
URL: https://github.com/apache/flink/pull/24353#issuecomment-1955775208

   
   ## CI report:
   
   * e9fdf0fa1186ec746ec19c29ba4325754de4a5b7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Draft][bugfix] Move the deserialization of shuffleDescriptor to a separate … [flink]

2024-02-20 Thread via GitHub


caodizhou commented on PR #24115:
URL: https://github.com/apache/flink/pull/24115#issuecomment-1955773927

   @flinkbot  run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33932][checkpointing] Add retry mechanism in RocksDBStateUploader [flink]

2024-02-20 Thread via GitHub


xiangyuf commented on PR #23986:
URL: https://github.com/apache/flink/pull/23986#issuecomment-1955770762

   > Sorry for jumping in but may I ask about the current status?
   
   Hi @Zakelly , I'm still working on 
[FLIP-414](https://cwiki.apache.org/confluence/display/FLINK/FLIP-414%3A+Support+Retry+Mechanism+in+RocksDBStateDataTransfer)
 to support a more general retry mechanism for all state backends.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-24696][docs-zh]Translate how to configure unaligned checkpoints into Chinese [flink]

2024-02-20 Thread via GitHub


Libra-JL commented on code in PR #22582:
URL: https://github.com/apache/flink/pull/22582#discussion_r1495794819


##
docs/layouts/shortcodes/generated/checkpointing_configuration.html:
##
@@ -12,55 +12,55 @@
 state.backend.incremental
 false
 Boolean
-Option whether the state backend should create incremental 
checkpoints, if possible. For an incremental checkpoint, only a diff from the 
previous checkpoint is stored, rather than the complete checkpoint state. Once 
enabled, the state size shown in web UI or fetched from rest API only 
represents the delta checkpoint size instead of full checkpoint size. Some 
state backends may not support incremental checkpoints and ignore this 
option.
+选择是否启用 State Backend 创建增量 Checkpoint(如果可能)。对于增量 
Checkpoint,只会存储与上一个 Checkpoint 的差异,而不是完整的 Checkpoint State。一旦启用,Web UI 中显示的 
State 大小或从 REST API 获取的 State 大小仅表示增量 Checkpoint 大小,而不是完整的 Checkpoint 大小。某些 
State Backend 可能不支持增量 Checkpoint 并忽略此选项。

Review Comment:
   Thank you for your review. I will make modifications based on your 
suggestions. By the way, I see that the affected version pointed out in the 
Jira issue does not exist as a branch in the code repository, and I am not sure 
whether this is correct.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34462) Session window with negative parameter throws unclear exception

2024-02-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34462:
---
Labels: pull-request-available  (was: )

> Session window with negative parameter throws unclear exception
> ---
>
> Key: FLINK-34462
> URL: https://issues.apache.org/jira/browse/FLINK-34462
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Shuai Xu
>Priority: Major
>  Labels: pull-request-available
>
> Setting an invalid parameter in a session window produces an unclear error.
> {code:java}
> // add test in WindowAggregateITCase
> def testEventTimeSessionWindowWithInvalidName(): Unit = {
>   val sql =
> """
>   |SELECT
>   |  window_start,
>   |  window_end,
>   |  COUNT(*),
>   |  SUM(`bigdec`),
>   |  MAX(`double`),
>   |  MIN(`float`),
>   |  COUNT(DISTINCT `string`),
>   |  concat_distinct_agg(`string`)
>   |FROM TABLE(
>   |   SESSION(TABLE T1, DESCRIPTOR(rowtime), INTERVAL '-5' SECOND))
>   |GROUP BY window_start, window_end
> """.stripMargin
>   val sink = new TestingAppendSink
>   tEnv.sqlQuery(sql).toDataStream.addSink(sink)
>   env.execute()
> } 
> {code}
> {code:java}
> java.lang.AssertionError: Sql optimization: Assertion error: null at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:79)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
>  at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
>  at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
>  at scala.collection.Iterator.foreach(Iterator.scala:937) at 
> scala.collection.Iterator.foreach$(Iterator.scala:937) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1425) at 
> scala.collection.IterableLike.foreach(IterableLike.scala:70) at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:69) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156) at 
> scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154) at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:55)
>  at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:176)
>  at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:83)
>  at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:320)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:178)
>  at 
> org.apache.flink.table.api.bridge.internal.AbstractStreamTableEnvironmentImpl.toStreamInternal(AbstractStreamTableEnvironmentImpl.java:224)
>  at 
> org.apache.flink.table.api.bridge.internal.AbstractStreamTableEnvironmentImpl.toStreamInternal(AbstractStreamTableEnvironmentImpl.java:219)
>  at 
> org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl.toDataStream(StreamTableEnvironmentImpl.scala:151)
>  at 
> org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl.toDataStream(StreamTableEnvironmentImpl.scala:128)
>  at 
> org.apache.flink.table.api.bridge.scala.TableConversions.toDataStream(TableConversions.scala:60)
>  at 
> org.apache.flink.table.planner.runtime.stream.sql.WindowAggregateITCase.testEventTimeSessionWindowWithInvalidName(WindowAggregateITCase.scala:1239)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) at 
> java.util.Iterator.forEachRemaining(Iterator.java:116) at 
> scala.collection.convert.Wrappers$IteratorWrapper.forEachRemaining(Wrappers.scala:26)
>  at 
> 

[PR] [FLINK-34462][table-planner] Improve the exception for negative parameters in window TVF [flink]

2024-02-20 Thread via GitHub


xishuaidelin opened a new pull request, #24353:
URL: https://github.com/apache/flink/pull/24353

   
   
   ## What is the purpose of the change
   
   This pull request aims to improve the exception for negative parameters in 
window TVF. 
   
   ## Brief change log
   
   Check arguments in windowSpec.
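   
   For context, the kind of argument check described might look like the 
following sketch (hypothetical helper names; not the exact code in this PR):
   
   ```java
   import java.time.Duration;
   
   import org.apache.flink.table.api.ValidationException;
   
   // Illustrative validation of a window TVF interval parameter, e.g. the
   // SESSION gap: reject zero and negative durations with a clear message.
   final class WindowParamChecks {
       static Duration requirePositive(String paramName, Duration value) {
           if (value.isNegative() || value.isZero()) {
               throw new ValidationException(
                       paramName + " must be positive, but was: " + value);
           }
           return value;
       }
   }
   ```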
   
   ## Verifying this change
   
   This change added tests and can be verified in WindowAggregateTest.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-34444) Add new endpoint handler to flink

2024-02-20 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan reassigned FLINK-34444:
---

Assignee: Mason Chen

> Add new endpoint handler to flink
> -
>
> Key: FLINK-34444
> URL: https://issues.apache.org/jira/browse/FLINK-34444
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics, Runtime / REST
>Reporter: Mason Chen
>Assignee: Mason Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34445) Integrate new endpoint with Flink UI metrics section

2024-02-20 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan reassigned FLINK-34445:
---

Assignee: Mason Chen

> Integrate new endpoint with Flink UI metrics section
> 
>
> Key: FLINK-34445
> URL: https://issues.apache.org/jira/browse/FLINK-34445
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: Mason Chen
>Assignee: Mason Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34064) Expose JobManagerOperatorMetrics via REST API

2024-02-20 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan reassigned FLINK-34064:
---

Assignee: Mason Chen

> Expose JobManagerOperatorMetrics via REST API
> -
>
> Key: FLINK-34064
> URL: https://issues.apache.org/jira/browse/FLINK-34064
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.18.0
>Reporter: Mason Chen
>Assignee: Mason Chen
>Priority: Major
>
> Add a REST API to fetch coordinator metrics.
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-417%3A+Expose+JobManagerOperatorMetrics+via+REST+API]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33989) Insert Statement With Filter Operation Generates Extra Tombstone using Upsert Kafka Connector

2024-02-20 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819043#comment-17819043
 ] 

Benchao Li commented on FLINK-33989:


I think this is related to FLINK-9528, and it's by-design behavior.

> Insert Statement With Filter Operation Generates Extra Tombstone using Upsert 
> Kafka Connector
> -
>
> Key: FLINK-33989
> URL: https://issues.apache.org/jira/browse/FLINK-33989
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Runtime
>Affects Versions: 1.17.2
>Reporter: Flaviu Cicio
>Priority: Major
>
> Given the following Flink SQL tables:
> {code:sql}
> CREATE TABLE input (
>   id STRING NOT NULL, 
>   current_value STRING NOT NULL, 
>   PRIMARY KEY (id) NOT ENFORCED
> ) WITH (
>   'connector' = 'upsert-kafka', 
>   'topic' = 'input', 
>   'key.format' = 'raw', 
>   'properties.bootstrap.servers' = 'kafka:29092', 
>   'properties.group.id' = 'your_group_id', 
>   'value.format' = 'json'
> );
> CREATE TABLE output (
>   id STRING NOT NULL, 
>   current_value STRING NOT NULL, 
>   PRIMARY KEY (id) NOT ENFORCED
> ) WITH (
>   'connector' = 'upsert-kafka', 
>   'topic' = 'output', 
>   'key.format' = 'raw', 
>   'properties.bootstrap.servers' = 'kafka:29092', 
>   'properties.group.id' = 'your_group_id', 
>   'value.format' = 'json'
> ); {code}
> And, the following entries are present in the input Kafka topic:
> {code:json}
> [
>   {
>     "id": "1",
>     "current_value": "abc"
>   },
>   {
>     "id": "1",
>     "current_value": "abcd"
>   }
> ]{code}
> If we execute the following statement:
> {code:sql}
> INSERT INTO output SELECT id, current_value FROM input; {code}
> The following entries are published to the output Kafka topic:
> {code:json}
> [
>   {
>     "id": "1",
>     "current_value": "abc"
>   },
>   {
>     "id": "1",
>     "current_value": "abcd"
>   }
> ]{code}
> But, if we execute the following statement:
> {code:sql}
> INSERT INTO output SELECT id, current_value FROM input WHERE id IN ('1'); 
> {code}
> The following entries are published:
> {code:json}
> [
>   {
>     "id": "1",
>     "current_value": "abc"
>   },
>   null,
>   {
>     "id": "1",
>     "current_value": "abcd"
>   }
> ]{code}
> We would expect the result to be the same for both insert statements.
> As we can see, there is an extra tombstone generated as a result of the 
> second statement.
>  
> Moreover, if we make a select on the input table:
> {code:sql}
> SELECT * FROM input;
> {code}
> We will get the following entries:
> ||op||id||current_value||
> |I|1|abc|
> |-U|1|abc|
> |+U|1|abcd|
> We expected to see only the insert and the update_after entries.
> The update_before is added at DeduplicateFunctionHelper#122.
> This is easily reproducible with this test that we added in the 
> UpsertKafkaTableITCase from flink-connector-kafka:
> {code:java}
> @Test
> public void testAggregateFilterOmit() throws Exception {
> String topic = COUNT_FILTER_TOPIC + "_" + format;
> createTestTopic(topic, 1, 1);
> env.setParallelism(1);
> // ---------- test ----------
> countFilterToUpsertKafkaOmitUpdateBefore(topic);
> // ---------- clean up ----------
> deleteTestTopic(topic);
> }
> private void countFilterToUpsertKafkaOmitUpdateBefore(String table) 
> throws Exception {
> String bootstraps = getBootstrapServers();
> List<Row> data =
> Arrays.asList(
> Row.of(1, "Hi"),
> Row.of(1, "Hello"),
> Row.of(2, "Hello world"),
> Row.of(2, "Hello world, how are you?"),
> Row.of(2, "I am fine."),
> Row.of(3, "Luke Skywalker"),
> Row.of(3, "Comment#1"),
> Row.of(3, "Comment#2"),
> Row.of(4, "Comment#3"),
> Row.of(4, null));
> final String createSource =
> String.format(
> "CREATE TABLE aggfilter_%s ("
> + "  `id` INT,\n"
> + "  `comment` STRING\n"
> + ") WITH ("
> + "  'connector' = 'values',"
> + "  'data-id' = '%s'"
> + ")",
> format, TestValuesTableFactory.registerData(data));
> tEnv.executeSql(createSource);
> final String createSinkTable =
> String.format(
> "CREATE TABLE %s (\n"
> + "  `id` INT,\n"
>

[jira] [Closed] (FLINK-34472) loading class of protobuf format descriptor by "Class.forName(className, true, Thread.currentThread().getContextClassLoader())" may not find the class because the current

2024-02-20 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li closed FLINK-34472.
--
Fix Version/s: (was: 1.20.0)
   Resolution: Duplicate

[~jeremymu] Thanks for the confirmation. I'm closing this one as a duplicate; 
further discussion can go to FLINK-32418.

> loading class of protobuf format descriptor by "Class.forName(className, 
> true, Thread.currentThread().getContextClassLoader())" may not find the class 
> because the current thread class loader may not contain this class
> -
>
> Key: FLINK-34472
> URL: https://issues.apache.org/jira/browse/FLINK-34472
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.2, 1.18.1
> Environment: 1. standalone session
> 2. flink on k8s 
>Reporter: jeremyMu
>Priority: Major
> Attachments: exception1.png, sourcecode.jpg, sourcecode2.png, 
> sourcecode3.png
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> 1. Stack information:
> Caused by: java.lang.IllegalArgumentException: get test.ProtobufTest1 
> descriptors error!
>     at 
> org.apache.flink.formats.protobuf.util.PbFormatUtils.getDescriptor(PbFormatUtils.java:94)
>     at 
> org.apache.flink.formats.protobuf.serialize.PbRowDataSerializationSchema.<init>(PbRowDataSerializationSchema.java:49)
>     at 
> org.apache.flink.formats.protobuf.PbEncodingFormat.createRuntimeEncoder(PbEncodingFormat.java:47)
>     at 
> org.apache.flink.formats.protobuf.PbEncodingFormat.createRuntimeEncoder(PbEncodingFormat.java:31)
>     at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink.createSerialization(KafkaDynamicSink.java:388)
>     at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink.getSinkRuntimeProvider(KafkaDynamicSink.java:194)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.createSinkTransformation(CommonExecSink.java:150)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:176)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:159)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:85)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> 2. My suggestion:
> Add a member variable to PbEncodingFormat holding a class loader instance, 
> which is passed in through the DynamicTableFactory.Context method 
> getClassLoader().
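
A minimal sketch of that suggestion (hypothetical class name and wiring; not 
the actual patch):

{code:java}
import org.apache.flink.table.factories.DynamicTableFactory;

// Illustrative holder: capture the user classloader from the factory context
// instead of relying on Thread.currentThread().getContextClassLoader().
class PbDescriptorClassLoaderHolder {
    private final ClassLoader userClassLoader;

    PbDescriptorClassLoaderHolder(DynamicTableFactory.Context context) {
        this.userClassLoader = context.getClassLoader();
    }

    Class<?> loadOuterClass(String className) throws ClassNotFoundException {
        // Descriptor lookup now uses the planner-provided classloader.
        return Class.forName(className, true, userClassLoader);
    }
}
{code}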



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33945][table] Cleanup usage of deprecated org.apache.flink.tab… [flink]

2024-02-20 Thread via GitHub


HuangXingBo merged PR #23997:
URL: https://github.com/apache/flink/pull/23997


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-6755) Allow triggering Checkpoints through command line client

2024-02-20 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819041#comment-17819041
 ] 

lincoln lee commented on FLINK-6755:


[~Zakelly] Should we add release notes for this ticket?

> Allow triggering Checkpoints through command line client
> 
>
> Key: FLINK-6755
> URL: https://issues.apache.org/jira/browse/FLINK-6755
> Project: Flink
>  Issue Type: New Feature
>  Components: Command Line Client, Runtime / Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Assignee: Zakelly Lan
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.19.0
>
>
> The command line client currently only allows triggering (and canceling with) 
> Savepoints. 
> While this is good if we want to fork or modify the pipelines in a 
> non-checkpoint compatible way, now with incremental checkpoints this becomes 
> wasteful for simple job restarts/pipeline updates. 
> I suggest we add a new command: 
> ./bin/flink checkpoint <jobId> [checkpointDirectory]
> and a new flag -c for the cancel command to indicate we want to trigger a 
> checkpoint:
> ./bin/flink cancel -c [targetDirectory] 
> Otherwise this can work similarly to the current savepoint-taking logic; we 
> could probably even piggyback on the current messages by adding a boolean flag 
> indicating whether it should be a savepoint or a checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34298) Release Testing Instructions: Verify FLINK-33397 Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread Yubin Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819038#comment-17819038
 ] 

Yubin Li commented on FLINK-34298:
--

[~qingyue] Thanks for your detailed instructions

> Release Testing Instructions: Verify FLINK-33397 Support Configuring 
> Different State TTLs using SQL Hint
> 
>
> Key: FLINK-34298
> URL: https://issues.apache.org/jira/browse/FLINK-34298
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Jane Chan
>Priority: Blocker
> Fix For: 1.19.0
>
>
> This ticket describes how to verify FLINK-33397: Support Configuring 
> Different State TTLs using SQL Hint.
>  
> The verification steps are as follows.
> 1. Start the standalone session cluster and sql client.
> 2. Execute the following DDL statements.
> {code:sql}
> CREATE TABLE `default_catalog`.`default_database`.`Orders` (
>   `order_id` INT,
>   `line_order_id` INT
> ) WITH (
>   'connector' = 'datagen', 
>   'rows-per-second' = '5'
> ); 
> CREATE TABLE `default_catalog`.`default_database`.`LineOrders` (
>   `line_order_id` INT,
>   `ship_mode` STRING
> ) WITH (
>   'connector' = 'datagen',
>   'rows-per-second' = '5'
> );
> CREATE TABLE `default_catalog`.`default_database`.`OrdersShipInfo` (
>   `order_id` INT,
>   `line_order_id` INT,
>   `ship_mode` STRING ) WITH (
>   'connector' = 'print'
> ); {code}
> 3. Compile and verify the INSERT INTO statement with the STATE_TTL hint 
> applied to join
> {code:sql}
> -- SET the pipeline level state TTL to 24h
> SET 'table.exec.state.ttl' = '24h';
> -- Configure different state TTL for join operator
> COMPILE PLAN '/path/to/join-plan.json' FOR
> INSERT INTO OrdersShipInfo 
> SELECT /*+STATE_TTL('a' = '2d', 'b' = '12h')*/ a.order_id, a.line_order_id, 
> b.ship_mode 
> FROM Orders a JOIN LineOrders b ON a.line_order_id = b.line_order_id;
> {code}
> The generated JSON file *should* contain the following "state" JSON array for 
> StreamJoin ExecNode.
> {code:json}
> {
> "id" : 5,
> "type" : "stream-exec-join_1",
> "joinSpec" : {
>   ...
> },
> "state" : [ {
>   "index" : 0,
>   "ttl" : "2 d",
>   "name" : "leftState"
> }, {
>   "index" : 1,
>   "ttl" : "12 h",
>   "name" : "rightState"
> } ],
> "inputProperties": [...],
> "outputType": ...,
> "description": ...
> }
> {code}
> 4. Compile and verify the INSERT INTO statement with the STATE_TTL hint 
> applied to group aggregate
> {code:sql}
> CREATE TABLE source_t (
> a INT,
> b BIGINT,
> c STRING
> ) WITH (
> 'connector' = 'datagen',
> 'rows-per-second' = '5'
> );
> CREATE TABLE sink_t (
> b BIGINT PRIMARY KEY NOT ENFORCED,
> cnt BIGINT,
> avg_a DOUBLE,
> min_c STRING
> ) WITH (
> 'connector' = 'datagen',
> 'rows-per-second' = '5'
> );
> COMPILE PLAN '/path/to/agg-plan.json' FOR
> INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '1s') */
> b, 
> COUNT(*) AS cnt,
> AVG(a) FILTER (WHERE a > 1) AS avg_a,
> MIN(c) AS min_c 
> FROM source_t GROUP BY b
> {code}
>  
> The generated JSON file *should* contain the following "state" JSON array for 
> StreamExecGroupAggregate ExecNode.
> {code:json}
> "state" : [ {
>   "index" : 0,
>   "ttl" : "1 s",
>   "name" : "groupAggregateState"
> } ]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33196) Support DB2 JDBC driver

2024-02-20 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-33196:
---
Fix Version/s: jdbc-3.2.0
   (was: 1.19.0)

> Support DB2 JDBC driver 
> 
>
> Key: FLINK-33196
> URL: https://issues.apache.org/jira/browse/FLINK-33196
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: david radley
>Priority: Major
> Fix For: jdbc-3.2.0
>
>
> The 
> [docs|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/jdbc/]
>  show the currently supported JDBC drivers. I propose that we add DB2 to this 
> list. 
>  
> I'm happy to provide the implementation. It looks like I need to create a new 
> DB2 dialect in 
> [https://github.com/apache/flink-connector-jdbc/tree/main/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/databases|http://example.com/]
>  , with appropriate DB2 specific tests and update the documentation. 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-34298) Release Testing Instructions: Verify FLINK-33397 Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread Yubin Li (Jira)


[ https://issues.apache.org/jira/browse/FLINK-34298 ]


Yubin Li deleted comment on FLINK-34298:
--

was (Author: liyubin117):
[~qingyue] Thanks, I will finish the tests as soon as possible.

> Release Testing Instructions: Verify FLINK-33397 Support Configuring 
> Different State TTLs using SQL Hint
> 
>
> Key: FLINK-34298
> URL: https://issues.apache.org/jira/browse/FLINK-34298
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Jane Chan
>Priority: Blocker
> Fix For: 1.19.0
>
>
> This ticket describes how to verify FLINK-33397: Support Configuring 
> Different State TTLs using SQL Hint.
>  
> The verification steps are as follows.
> 1. Start the standalone session cluster and sql client.
> 2. Execute the following DDL statements.
> {code:sql}
> CREATE TABLE `default_catalog`.`default_database`.`Orders` (
>   `order_id` INT,
>   `line_order_id` INT
> ) WITH (
>   'connector' = 'datagen', 
>   'rows-per-second' = '5'
> ); 
> CREATE TABLE `default_catalog`.`default_database`.`LineOrders` (
>   `line_order_id` INT,
>   `ship_mode` STRING
> ) WITH (
>   'connector' = 'datagen',
>   'rows-per-second' = '5'
> );
> CREATE TABLE `default_catalog`.`default_database`.`OrdersShipInfo` (
>   `order_id` INT,
>   `line_order_id` INT,
>   `ship_mode` STRING ) WITH (
>   'connector' = 'print'
> ); {code}
> 3. Compile and verify the INSERT INTO statement with the STATE_TTL hint 
> applied to join
> {code:sql}
> -- SET the pipeline level state TTL to 24h
> SET 'table.exec.state.ttl' = '24h';
> -- Configure different state TTL for join operator
> COMPILE PLAN '/path/to/join-plan.json' FOR
> INSERT INTO OrdersShipInfo 
> SELECT /*+STATE_TTL('a' = '2d', 'b' = '12h')*/ a.order_id, a.line_order_id, 
> b.ship_mode 
> FROM Orders a JOIN LineOrders b ON a.line_order_id = b.line_order_id;
> {code}
> The generated JSON file *should* contain the following "state" JSON array for 
> StreamJoin ExecNode.
> {code:json}
> {
> "id" : 5,
> "type" : "stream-exec-join_1",
> "joinSpec" : {
>   ...
> },
> "state" : [ {
>   "index" : 0,
>   "ttl" : "2 d",
>   "name" : "leftState"
> }, {
>   "index" : 1,
>   "ttl" : "12 h",
>   "name" : "rightState"
> } ],
> "inputProperties": [...],
> "outputType": ...,
> "description": ...
> }
> {code}
> 4. Compile and verify the INSERT INTO statement with the STATE_TTL hint 
> applied to group aggregate
> {code:sql}
> CREATE TABLE source_t (
> a INT,
> b BIGINT,
> c STRING
> ) WITH (
> 'connector' = 'datagen',
> 'rows-per-second' = '5'
> );
> CREATE TABLE sink_t (
> b BIGINT PRIMARY KEY NOT ENFORCED,
> cnt BIGINT,
> avg_a DOUBLE,
> min_c STRING
> ) WITH (
> 'connector' = 'datagen',
> 'rows-per-second' = '5'
> );
> COMPILE PLAN '/path/to/agg-plan.json' FOR
> INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '1s') */
> b, 
> COUNT(*) AS cnt,
> AVG(a) FILTER (WHERE a > 1) AS avg_a,
> MIN(c) AS min_c 
> FROM source_t GROUP BY b
> {code}
>  
> The generated JSON file *should* contain the following "state" JSON array for 
> StreamExecGroupAggregate ExecNode.
> {code:json}
> "state" : [ {
>   "index" : 0,
>   "ttl" : "1 s",
>   "name" : "groupAggregateState"
> } ]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34298) Release Testing Instructions: Verify FLINK-33397 Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread Yubin Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819037#comment-17819037
 ] 

Yubin Li commented on FLINK-34298:
--

[~qingyue] Thanks, I will finish the tests as soon as possible.

> Release Testing Instructions: Verify FLINK-33397 Support Configuring 
> Different State TTLs using SQL Hint
> 
>
> Key: FLINK-34298
> URL: https://issues.apache.org/jira/browse/FLINK-34298
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Jane Chan
>Priority: Blocker
> Fix For: 1.19.0
>
>
> This ticket describes how to verify FLINK-33397: Support Configuring 
> Different State TTLs using SQL Hint.
>  
> The verification steps are as follows.
> 1. Start the standalone session cluster and sql client.
> 2. Execute the following DDL statements.
> {code:sql}
> CREATE TABLE `default_catalog`.`default_database`.`Orders` (
>   `order_id` INT,
>   `line_order_id` INT
> ) WITH (
>   'connector' = 'datagen', 
>   'rows-per-second' = '5'
> ); 
> CREATE TABLE `default_catalog`.`default_database`.`LineOrders` (
>   `line_order_id` INT,
>   `ship_mode` STRING
> ) WITH (
>   'connector' = 'datagen',
>   'rows-per-second' = '5'
> );
> CREATE TABLE `default_catalog`.`default_database`.`OrdersShipInfo` (
>   `order_id` INT,
>   `line_order_id` INT,
>   `ship_mode` STRING ) WITH (
>   'connector' = 'print'
> ); {code}
> 3. Compile and verify the INSERT INTO statement with the STATE_TTL hint 
> applied to join
> {code:sql}
> -- SET the pipeline level state TTL to 24h
> SET 'table.exec.state.ttl' = '24h';
> -- Configure different state TTL for join operator
> COMPILE PLAN '/path/to/join-plan.json' FOR
> INSERT INTO OrdersShipInfo 
> SELECT /*+STATE_TTL('a' = '2d', 'b' = '12h')*/ a.order_id, a.line_order_id, 
> b.ship_mode 
> FROM Orders a JOIN LineOrders b ON a.line_order_id = b.line_order_id;
> {code}
> The generated JSON file *should* contain the following "state" JSON array for 
> StreamJoin ExecNode.
> {code:json}
> {
> "id" : 5,
> "type" : "stream-exec-join_1",
> "joinSpec" : {
>   ...
> },
> "state" : [ {
>   "index" : 0,
>   "ttl" : "2 d",
>   "name" : "leftState"
> }, {
>   "index" : 1,
>   "ttl" : "12 h",
>   "name" : "rightState"
> } ],
> "inputProperties": [...],
> "outputType": ...,
> "description": ...
> }
> {code}
> 4. Compile and verify the INSERT INTO statement with the STATE_TTL hint 
> applied to group aggregate
> {code:sql}
> CREATE TABLE source_t (
> a INT,
> b BIGINT,
> c STRING
> ) WITH (
> 'connector' = 'datagen',
> 'rows-per-second' = '5'
> );
> CREARE TABLE sink_t (
> b BIGINT PRIMARY KEY NOT ENFORCED,
> cnt BIGINT,
> avg_a DOUBLE,
> min_c STRING
> ) WITH (
> 'connector' = 'datagen',
> 'rows-per-second' = '5'
> );
> COMPILE PLAN '/path/to/agg-plan.json' FOR
> INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '1s') */
> b, 
> COUNT(*) AS cnt,
> AVG(a) FILTER (WHERE a > 1) AS avg_a
> MIN(c) AS min_c 
> FROM source_t GROUP BY b
> {code}
>  
> The generated JSON file *should* contain the following "state" JSON array for 
> StreamExecGroupAggregate ExecNode.
> {code:json}
> "state" : [ {
>   "index" : 0,
>   "ttl" : "1 s",
>   "name" : "groupAggregateState"
> } ]
> {code}
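> Once the plan verifies correctly, it can be submitted for execution. A minimal sketch, assuming the standard EXECUTE PLAN statement and the plan path compiled above:
> {code:sql}
> -- Run the job from the previously compiled plan file
> EXECUTE PLAN '/path/to/agg-plan.json';
> {code}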



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34219) Introduce a new join operator to support minibatch

2024-02-20 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819035#comment-17819035
 ] 

lincoln lee commented on FLINK-34219:
-

[~xu_shuai_] Should we also add release notes for this ticket?

> Introduce a new join operator to support minibatch
> --
>
> Key: FLINK-34219
> URL: https://issues.apache.org/jira/browse/FLINK-34219
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Reporter: Shuai Xu
>Assignee: Shuai Xu
>Priority: Major
> Fix For: 1.19.0
>
>
> This is the parent task of FLIP-415.
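> 
> For reference, a minimal sketch of the session configuration expected to exercise the minibatch code path, assuming the new join operator is driven by the existing minibatch options (the exact activation switch may differ in the final release):
> {code:sql}
> SET 'table.exec.mini-batch.enabled' = 'true';
> SET 'table.exec.mini-batch.allow-latency' = '5s';
> SET 'table.exec.mini-batch.size' = '5000';
> {code}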



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33397) FLIP-373: Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819034#comment-17819034
 ] 

lincoln lee commented on FLINK-33397:
-

[~qingyue][~xuyangzhong] Should we add release notes for this ticket since it's 
a new feature?

> FLIP-373: Support Configuring Different State TTLs using SQL Hint
> -
>
> Key: FLINK-33397
> URL: https://issues.apache.org/jira/browse/FLINK-33397
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: Jane Chan
>Assignee: xuyang
>Priority: Major
> Fix For: 1.19.0
>
>
> Please refer to 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-373%3A+Support+Configuring+Different+State+TTLs+using+SQL+Hint
>  
> |https://cwiki.apache.org/confluence/display/FLINK/FLIP-373%3A+Support+Configuring+Different+State+TTLs+using+SQL+Hint]
>  for more details.
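> 
> For reference, a minimal sketch of the hint this FLIP introduces, assuming the STATE_TTL hint syntax from the FLIP (table and column names are illustrative):
> {code:sql}
> SELECT /*+ STATE_TTL('Orders' = '1d', 'LineOrders' = '12h') */ o.order_id, l.ship_mode
> FROM Orders o JOIN LineOrders l ON o.line_order_id = l.line_order_id;
> {code}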



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34470) Transactional message + Table api kafka source with 'latest-offset' scan bound mode causes indefinitely hanging

2024-02-20 Thread dongwoo.kim (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongwoo.kim updated FLINK-34470:

Description: 
h2. Summary
Hi, we have faced an issue with transactional messages and the Table API Kafka source. If we configure *'scan.bounded.mode'* to *'latest-offset'*, the Flink SQL request hangs and then times out. We can always reproduce this unexpected behavior by following the steps below.
This is related to this [issue|https://issues.apache.org/jira/browse/FLINK-33484] too.

h2. How to reproduce
1. Deploy a transactional producer and stop it after producing a certain amount of messages.
2. Configure *'scan.bounded.mode'* to *'latest-offset'* and submit a simple query, such as counting the produced messages (see the sketch below).
3. The Flink SQL job gets stuck and times out.
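A minimal sketch of the source table used in step 2, assuming the standard Kafka connector options (topic, server, and format are illustrative):
{code:sql}
CREATE TABLE kafka_source (
  id STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'tx-topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.isolation.level' = 'read_committed',
  'scan.startup.mode' = 'earliest-offset',
  'scan.bounded.mode' = 'latest-offset',
  'format' = 'json'
);
-- e.g. SELECT COUNT(*) FROM kafka_source;
{code}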

h2. Cause
A transactional producer always writes [control records|https://kafka.apache.org/documentation/#controlbatch] at the end of a transaction, and these control messages are not returned by {*}consumer.poll(){*} (they are filtered out internally). In the *KafkaPartitionSplitReader* code, a split is finished only when the condition *lastRecord.offset() >= stoppingOffset - 1* is met. This works well with non-transactional messages or in a streaming environment, but in some batch use cases it causes unexpected hanging.

h2. Proposed solution
{code:java}
// Finish the split once the consumer has advanced past the stopping offset,
// even when the trailing records (control records) are never returned by poll().
if (consumer.position(tp) >= stoppingOffset) {
    recordsBySplits.setPartitionStoppingOffset(tp, stoppingOffset);
    finishSplitAtRecord(
            tp,
            stoppingOffset,
            lastRecord.offset(),
            finishedPartitions,
            recordsBySplits);
}
{code}
Replacing the if condition with *consumer.position(tp) >= stoppingOffset* [here|https://github.com/apache/flink-connector-kafka/blob/15f2662eccf461d9d539ed87a78c9851cd17fa43/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReader.java#L137] can solve the problem. 
*consumer.position(tp)* returns the next record's offset if one exists, and the last record's offset otherwise. 
With this change, KafkaPartitionSplitReader can finish the split even when the stopping offset is configured to a control record's offset. 

I would be happy to implement this fix if we can reach an agreement. 
Thanks

  was:
h2. Summary
Hi, we have faced an issue with transactional messages and the Table API Kafka source. If we configure *'scan.bounded.mode'* to *'latest-offset'*, the Flink SQL request hangs and then times out. We can always reproduce this unexpected behavior by following the steps below.
This is related to this [issue|https://issues.apache.org/jira/browse/FLINK-33484] too.

h2. How to reproduce
1. Deploy a transactional producer and stop it after producing a certain amount of messages.
2. Configure *'scan.bounded.mode'* to *'latest-offset'* and submit a simple query, such as counting the produced messages.
3. The Flink SQL job gets stuck and times out.

h2. Cause
A transactional producer always writes [control records|https://kafka.apache.org/documentation/#controlbatch] at the end of a transaction, and these control messages are not returned by {*}consumer.poll(){*} (they are filtered out internally). In the *KafkaPartitionSplitReader* code, a split is finished only when the condition *lastRecord.offset() >= stoppingOffset - 1* is met. This works well with non-transactional messages or in a streaming environment, but in some batch use cases it causes unexpected hanging.

h2. Proposed solution
{code:java}
if (consumer.position(tp) >= stoppingOffset) {
    recordsBySplits.setPartitionStoppingOffset(tp, stoppingOffset);
    finishSplitAtRecord(
            tp,
            stoppingOffset,
            lastRecord.offset(),
            finishedPartitions,
            recordsBySplits);
}
{code}
Replacing the if condition with *consumer.position(tp) >= stoppingOffset* [here|https://github.com/apache/flink-connector-kafka/blob/15f2662eccf461d9d539ed87a78c9851cd17fa43/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReader.java#L137] can solve the problem. 
*consumer.position(tp)* returns the next record's offset if one exists, and the last record's offset otherwise. 
With this change, KafkaPartitionSplitReader can finish the split even when the stopping offset is configured to a control record's offset. 

I would be happy to implement this fix if we can reach an agreement. 
Thanks


> Transactional message + Table api kafka source with 'latest-offset' scan 
> bound mode causes indefinitely hanging
> 

[jira] [Updated] (FLINK-34470) Transactional message + Table api kafka source with 'latest-offset' scan bound mode causes indefinitely hanging

2024-02-20 Thread dongwoo.kim (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongwoo.kim updated FLINK-34470:

Description: 
h2. Summary
Hi, we have faced an issue with transactional messages and the Table API Kafka source. If we configure *'scan.bounded.mode'* to *'latest-offset'*, the Flink SQL request hangs and then times out. We can always reproduce this unexpected behavior by following the steps below.
This is related to this [issue|https://issues.apache.org/jira/browse/FLINK-33484] too.

h2. How to reproduce
1. Deploy a transactional producer and stop it after producing a certain amount of messages.
2. Configure *'scan.bounded.mode'* to *'latest-offset'* and submit a simple query, such as counting the produced messages.
3. The Flink SQL job gets stuck and times out.

h2. Cause
A transactional producer always writes [control records|https://kafka.apache.org/documentation/#controlbatch] at the end of a transaction, and these control messages are not returned by {*}consumer.poll(){*} (they are filtered out internally). In the *KafkaPartitionSplitReader* code, a split is finished only when the condition *lastRecord.offset() >= stoppingOffset - 1* is met. This works well with non-transactional messages or in a streaming environment, but in some batch use cases it causes unexpected hanging.

h2. Proposed solution
{code:java}
if (consumer.position(tp) >= stoppingOffset) {
    recordsBySplits.setPartitionStoppingOffset(tp, stoppingOffset);
    finishSplitAtRecord(
            tp,
            stoppingOffset,
            lastRecord.offset(),
            finishedPartitions,
            recordsBySplits);
}
{code}
Replacing the if condition with *consumer.position(tp) >= stoppingOffset* [here|https://github.com/apache/flink-connector-kafka/blob/15f2662eccf461d9d539ed87a78c9851cd17fa43/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReader.java#L137] can solve the problem. 
*consumer.position(tp)* returns the next record's offset if one exists, and the last record's offset otherwise. 
With this change, KafkaPartitionSplitReader can finish the split even when the stopping offset is configured to a control record's offset. 

I would be happy to implement this fix if we can reach an agreement. 
Thanks

  was:
h2. Summary
Hi, we have faced an issue with transactional messages and the Table API Kafka source. If we configure *'scan.bounded.mode'* to *'latest-offset'*, the Flink SQL request hangs and then times out. We can always reproduce this unexpected behavior by following the steps below.
This is related to this [issue|https://issues.apache.org/jira/browse/FLINK-33484] too.

h2. How to reproduce
1. Deploy a transactional producer and stop it after producing a certain amount of messages.
2. Configure *'scan.bounded.mode'* to *'latest-offset'* and submit a simple query, such as counting the produced messages.
3. The Flink SQL job gets stuck and times out.

h2. Cause
A transactional producer always writes [control records|https://kafka.apache.org/documentation/#controlbatch] at the end of a transaction, and these control messages are not returned by {*}consumer.poll(){*} (they are filtered out internally). In the *KafkaPartitionSplitReader* code, a split is finished only when the condition *lastRecord.offset() >= stoppingOffset - 1* is met. This works well with non-transactional messages or in a streaming environment, but in some batch use cases it causes unexpected hanging.

h2. Proposed solution
{code:java}
if (consumer.position(tp) >= stoppingOffset) {
    recordsBySplits.setPartitionStoppingOffset(tp, stoppingOffset);
    finishSplitAtRecord(
            tp,
            stoppingOffset,
            lastRecord.offset(),
            finishedPartitions,
            recordsBySplits);
}
{code}
Replacing the if condition with *consumer.position(tp) >= stoppingOffset* [here|https://github.com/apache/flink-connector-kafka/blob/15f2662eccf461d9d539ed87a78c9851cd17fa43/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReader.java#L137] can solve the problem. 
*consumer.position(tp)* returns the next record's offset if one exists, and the last record's offset otherwise. 
With this change, KafkaPartitionSplitReader can finish the split even when the stopping offset is configured to a control record's offset. 

I would be happy to implement this fix if we can reach an agreement. 
Thanks


> Transactional message + Table api kafka source with 'latest-offset' scan 
> bound mode causes indefinitely hanging
> 

[jira] [Updated] (FLINK-34470) Transactional message + Table api kafka source with 'latest-offset' scan bound mode causes indefinitely hanging

2024-02-20 Thread dongwoo.kim (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongwoo.kim updated FLINK-34470:

Description: 
h2. Summary
Hi, we have faced an issue with transactional messages and the Table API Kafka source. If we configure *'scan.bounded.mode'* to *'latest-offset'*, the Flink SQL request hangs and then times out. We can always reproduce this unexpected behavior by following the steps below.
This is related to this [issue|https://issues.apache.org/jira/browse/FLINK-33484] too.

h2. How to reproduce
1. Deploy a transactional producer and stop it after producing a certain amount of messages.
2. Configure *'scan.bounded.mode'* to *'latest-offset'* and submit a simple query, such as counting the produced messages.
3. The Flink SQL job gets stuck and times out.

h2. Cause
A transactional producer always writes [control records|https://kafka.apache.org/documentation/#controlbatch] at the end of a transaction, and these control messages are not returned by {*}consumer.poll(){*} (they are filtered out internally). In the *KafkaPartitionSplitReader* code, a split is finished only when the condition *lastRecord.offset() >= stoppingOffset - 1* is met. This works well with non-transactional messages or in a streaming environment, but in some batch use cases it causes unexpected hanging.

h2. Proposed solution
{code:java}
if (consumer.position(tp) >= stoppingOffset) {
    recordsBySplits.setPartitionStoppingOffset(tp, stoppingOffset);
    finishSplitAtRecord(
            tp,
            stoppingOffset,
            lastRecord.offset(),
            finishedPartitions,
            recordsBySplits);
}
{code}
Replacing the if condition with *consumer.position(tp) >= stoppingOffset* [here|https://github.com/apache/flink-connector-kafka/blob/15f2662eccf461d9d539ed87a78c9851cd17fa43/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReader.java#L137] can solve the problem. 
*consumer.position(tp)* returns the next record's offset if one exists, and the last record's offset otherwise. 
With this change, KafkaPartitionSplitReader can finish the split even when the stopping offset is configured to a control record's offset. 

I would be happy to implement this fix if we can reach an agreement. 
Thanks

  was:
h2. Summary
Hi, we have faced an issue with transactional messages and the Table API Kafka source. If we configure *'scan.bounded.mode'* to *'latest-offset'*, the Flink SQL request hangs and then times out. We can always reproduce this unexpected behavior by following the steps below.
This is related to this [issue|https://issues.apache.org/jira/browse/FLINK-33484] too.

h2. How to reproduce
1. Deploy a transactional producer and stop it after producing a certain amount of messages.
2. Configure *'scan.bounded.mode'* to *'latest-offset'* and submit a simple query, such as counting the produced messages.
3. The Flink SQL job gets stuck and times out.

h2. Cause
A transactional producer always writes [control records|https://kafka.apache.org/documentation/#controlbatch] at the end of a transaction, and these control messages are not returned by {*}consumer.poll(){*} (they are filtered out internally). In the *KafkaPartitionSplitReader* code, it finishes a split only when the condition *lastRecord.offset() >= stoppingOffset - 1* is met. This works well with non-transactional messages or in a streaming environment, but in some batch use cases it causes unexpected hanging.

h2. Proposed solution
Adding the *consumer.position(tp) >= stoppingOffset* condition to the if statement [here|https://github.com/apache/flink-connector-kafka/blob/15f2662eccf461d9d539ed87a78c9851cd17fa43/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReader.java#L137] can solve the problem. 
This way, KafkaPartitionSplitReader can finish the split even when the stopping offset is configured to a control record's offset. 

I would be happy to implement this fix if we can reach an agreement. 
Thanks


> Transactional message + Table api kafka source with 'latest-offset' scan 
> bound mode causes indefinitely hanging
> ---
>
> Key: FLINK-34470
> URL: https://issues.apache.org/jira/browse/FLINK-34470
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.17.1
>Reporter: dongwoo.kim
>Priority: Major
>
> h2. Summary  
> Hi we have faced issue with transactional message and table api kafka source. 
> If we configure *'scan.bounded.mode'* to *'latest-offset'* flink sql's 
> request timeouts after hanging. We can always reproduce 

Re: [PR] [FLINK-34147][table-planner] Enhance the java doc of TimestampData to distinguish the different usage of Instant and LocalDateTime [flink]

2024-02-20 Thread via GitHub


swuferhong commented on PR #24339:
URL: https://github.com/apache/flink/pull/24339#issuecomment-1955682413

   Hi, @lirui  Could you help review this, please?
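   
   For context, a minimal sketch of the distinction the doc change is about, assuming the existing factory methods on org.apache.flink.table.data.TimestampData:
   ```java
   import java.time.Instant;
   import java.time.LocalDateTime;
   import org.apache.flink.table.data.TimestampData;
   
   // An Instant names an absolute point on the time-line (epoch-based),
   // matching the semantics of TIMESTAMP_LTZ.
   TimestampData ltz = TimestampData.fromInstant(Instant.parse("2024-02-20T00:00:00Z"));
   
   // A LocalDateTime is a time-zone-free wall-clock reading,
   // matching the semantics of TIMESTAMP (without time zone).
   TimestampData ts = TimestampData.fromLocalDateTime(LocalDateTime.of(2024, 2, 20, 0, 0));
   ```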


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34472) loading class of protobuf format descriptor by "Class.forName(className, true, Thread.currentThread().getContextClassLoader())" may can not find class because the current thread class loader may not contain this class

2024-02-20 Thread jeremyMu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jeremyMu updated FLINK-34472:
-
Priority: Major  (was: Minor)

> loading class of protobuf format descriptor by "Class.forName(className, 
> true, Thread.currentThread().getContextClassLoader())" may can not find class 
> because the current thread class loader may not contain this class
> -
>
> Key: FLINK-34472
> URL: https://issues.apache.org/jira/browse/FLINK-34472
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.2, 1.18.1
> Environment: 1、standalone session
> 2、flink on k8s 
>Reporter: jeremyMu
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: exception1.png, sourcecode.jpg, sourcecode2.png, 
> sourcecode3.png
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> 1、Stack information:
> Caused by: java.lang.IllegalArgumentException: get test.ProtobufTest1 
> descriptors error!
>     at 
> org.apache.flink.formats.protobuf.util.PbFormatUtils.getDescriptor(PbFormatUtils.java:94)
>     at 
> org.apache.flink.formats.protobuf.serialize.PbRowDataSerializationSchema.<init>(PbRowDataSerializationSchema.java:49)
>     at 
> org.apache.flink.formats.protobuf.PbEncodingFormat.createRuntimeEncoder(PbEncodingFormat.java:47)
>     at 
> org.apache.flink.formats.protobuf.PbEncodingFormat.createRuntimeEncoder(PbEncodingFormat.java:31)
>     at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink.createSerialization(KafkaDynamicSink.java:388)
>     at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink.getSinkRuntimeProvider(KafkaDynamicSink.java:194)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.createSinkTransformation(CommonExecSink.java:150)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:176)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:159)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:85)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> 2、My suggestion:
> Add a classloader member variable to PbEncodingFormat, populated via the 
> DynamicTableFactory.Context method getClassLoader(), as illustrated below. 
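> {code:java}
> // Hypothetical sketch of the suggestion (the method shape and names are
> // assumptions based on the stack trace above, not the actual patch).
> // Resolve the generated protobuf class with the classloader supplied by the
> // planner (DynamicTableFactory.Context#getClassLoader()) instead of the
> // thread context classloader.
> public static Descriptors.Descriptor getDescriptor(String className, ClassLoader classLoader) {
>     try {
>         Class<?> pbClass = Class.forName(className, true, classLoader);
>         return (Descriptors.Descriptor) pbClass.getMethod("getDescriptor").invoke(null);
>     } catch (Exception e) {
>         throw new IllegalArgumentException(
>                 String.format("get %s descriptors error!", className), e);
>     }
> }
> {code}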



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34472) loading class of protobuf format descriptor by "Class.forName(className, true, Thread.currentThread().getContextClassLoader())" may can not find class because the current thread class loader may not contain this class

2024-02-20 Thread jeremyMu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jeremyMu updated FLINK-34472:
-
Environment: 
1、standalone session
2、flink on k8s 

> loading class of protobuf format descriptor by "Class.forName(className, 
> true, Thread.currentThread().getContextClassLoader())" may can not find class 
> because the current thread class loader may not contain this class
> -
>
> Key: FLINK-34472
> URL: https://issues.apache.org/jira/browse/FLINK-34472
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.2, 1.18.1
> Environment: 1、standalone session
> 2、flink on k8s 
>Reporter: jeremyMu
>Priority: Minor
> Fix For: 1.20.0
>
> Attachments: exception1.png, sourcecode.jpg, sourcecode2.png, 
> sourcecode3.png
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> 1、Stack information:
> Caused by: java.lang.IllegalArgumentException: get test.ProtobufTest1 
> descriptors error!
>     at 
> org.apache.flink.formats.protobuf.util.PbFormatUtils.getDescriptor(PbFormatUtils.java:94)
>     at 
> org.apache.flink.formats.protobuf.serialize.PbRowDataSerializationSchema.<init>(PbRowDataSerializationSchema.java:49)
>     at 
> org.apache.flink.formats.protobuf.PbEncodingFormat.createRuntimeEncoder(PbEncodingFormat.java:47)
>     at 
> org.apache.flink.formats.protobuf.PbEncodingFormat.createRuntimeEncoder(PbEncodingFormat.java:31)
>     at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink.createSerialization(KafkaDynamicSink.java:388)
>     at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink.getSinkRuntimeProvider(KafkaDynamicSink.java:194)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.createSinkTransformation(CommonExecSink.java:150)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:176)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:159)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:85)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> 2、My suggestion:
> Add a classloader member variable to PbEncodingFormat, populated via the 
> DynamicTableFactory.Context method getClassLoader(). 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34305) Release Testing Instructions: Verify FLINK-33261 Support Setting Parallelism for Table/SQL Sources

2024-02-20 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17818802#comment-17818802
 ] 

lincoln lee edited comment on FLINK-34305 at 2/21/24 12:57 AM:
---

[~sudewei.sdw] Sounds good! Actually we don't have a specific deadline, but at 
the moment there is only this testing Instruction left, so the completion of 
this issue could help get 1.19 out sooner, thanks for your work! 


was (Author: lincoln.86xy):
[~sudewei.sdw] Sound good! Actually we don't have a specific deadline, but at 
the moment there is only this testing Instruction left, so the completion of 
this issue could help get 1.19 out sooner, thanks for your work! 

> Release Testing Instructions: Verify FLINK-33261 Support Setting Parallelism 
> for Table/SQL Sources 
> ---
>
> Key: FLINK-34305
> URL: https://issues.apache.org/jira/browse/FLINK-34305
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: SuDewei
>Priority: Blocker
> Fix For: 1.19.0
>
>
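> For reference, a minimal sketch of what this feature is expected to look like, assuming the scan.parallelism option proposed in FLIP-367 (connector support may vary):
> {code:sql}
> CREATE TABLE src (
>   a INT
> ) WITH (
>   'connector' = 'datagen',
>   'scan.parallelism' = '2'
> );
> {code}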




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Bump org.postgresql:postgresql from 42.5.4 to 42.6.1 in /flink-autoscaler-plugin-jdbc [flink-kubernetes-operator]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #779:
URL: https://github.com/apache/flink-kubernetes-operator/pull/779

   Bumps [org.postgresql:postgresql](https://github.com/pgjdbc/pgjdbc) from 
42.5.4 to 42.6.1.
   
   Release notes
   Sourced from org.postgresql:postgresql's releases (https://github.com/pgjdbc/pgjdbc/releases).
   
   v42.6.0
   Changes
   - bump version for next release @davecramer (#2859)
   - rename changelog for 42.6.0 to the correct name @davecramer (#2858)
   - Update CHANGELOG for release @davecramer (#2851)
   - Fix pks12docs @davecramer (#2857)
   - Remove stray whitespace and use code formatting @dennis-benzinger-hybris (#2854)
   
   Features
   - fix: use PhantomReferences instead of Object.finalize to track Connection leaks @vlsi (#2847)
   - fix: reduce memory overhead of .finalize() methods in PgConnection and StreamWrapper @vlsi (#2817)
   - refactor:(loom) replace the usages of synchronized with ReentrantLock @rbygrave (#2635)
   
   Documentation
   - Update site for release 42.5.4 @davecramer (#2813)
   - Update docs to reflect changes from 42.5.3 @davecramer (#2811)
   - Add copy examples @davecramer (#2762)
   - added alias /about/license.html @davecramer (#2765)
   - re-add slonik duke image @davecramer (#2760)
   - show snapshot dir instead of xml @davecramer (#2759)
   - make changelogs more compact, only show the link to the changelog @davecramer (#2758)
   - edit changelogs to make them correct and readable @davecramer (#2743)
   
   Maintenance
   - chore: fix usage of deprecated APIs in tests @vlsi (#2849)
   - test: increase timeouts for resolving Maven dependencies in OSGi tests @vlsi (#2848)
   - chore: pass same hashcode to test task only @vlsi (#2822)
   - chore: resolve jacocoReport failure @vlsi (#2820)
   - chore: configure Release Drafter to use releases from a single branch only @vlsi (#2819)
   - feat: add Release Drafter for preparing release notes on GitHub @vlsi (#2818)
   - Make sure that github CI runs tests on all PRs @davecramer (#2808)
   - fix: Update function volatility in SchemaTest setup @rafiss (#2806)
   - chore: split /build.gradle.kts to build-logic/ plugins @vlsi (#2755)
   - chore: tune down the number of CI jobs for PR builds from 7 to 5 @vlsi (#2761)
   
   Dependencies
   - fix(deps): update dependency com.google.errorprone:error_prone_core to v2.19.1 @renovate (#2910)
   - fix(deps): update dependency net.ltgt.errorprone:net.ltgt.errorprone.gradle.plugin to v3.1.0 @renovate (#2913)
   - fix(deps): update dependency checkstyle to v10.12.0 @renovate 
Re: [PR] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 [flink]

2024-02-20 Thread via GitHub


flinkbot commented on PR #24352:
URL: https://github.com/apache/flink/pull/24352#issuecomment-1955493115

   
   ## CI report:
   
   * 26fc668b0cdb4cf3cccea9637449c37e042877d5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 [flink]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #24352:
URL: https://github.com/apache/flink/pull/24352

   Bumps org.apache.commons:commons-compress from 1.24.0 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.24.0&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink/network/alerts).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 [flink-connector-pulsar]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #83:
URL: https://github.com/apache/flink-connector-pulsar/pull/83

   Bumps org.apache.commons:commons-compress from 1.24.0 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.24.0&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-pulsar/network/alerts).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 [flink-connector-hive]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #17:
URL: https://github.com/apache/flink-connector-hive/pull/17

   Bumps org.apache.commons:commons-compress from 1.24.0 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.24.0&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-hive/network/alerts).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 [flink-connector-jdbc]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #105:
URL: https://github.com/apache/flink-connector-jdbc/pull/105

   Bumps org.apache.commons:commons-compress from 1.24.0 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.24.0&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-jdbc/network/alerts).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 [flink-connector-kafka]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #87:
URL: https://github.com/apache/flink-connector-kafka/pull/87

   Bumps org.apache.commons:commons-compress from 1.25.0 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.25.0&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-kafka/network/alerts).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 [flink-connector-mongodb]

2024-02-20 Thread via GitHub


boring-cyborg[bot] commented on PR #29:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/29#issuecomment-1955474211

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 [flink-connector-mongodb]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #29:
URL: https://github.com/apache/flink-connector-mongodb/pull/29

   Bumps org.apache.commons:commons-compress from 1.24.0 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.24.0&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-mongodb/network/alerts).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.apache.commons:commons-compress from 1.23.0 to 1.26.0 [flink-connector-opensearch]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #41:
URL: https://github.com/apache/flink-connector-opensearch/pull/41

   Bumps org.apache.commons:commons-compress from 1.23.0 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.23.0&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts 
page](https://github.com/apache/flink-connector-opensearch/network/alerts).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Bump org.apache.commons:commons-compress from 1.23.0 to 1.24.0 [flink-connector-hbase]

2024-02-20 Thread via GitHub


dependabot[bot] commented on PR #38:
URL: 
https://github.com/apache/flink-connector-hbase/pull/38#issuecomment-1955470093

   Superseded by #41.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.postgresql:postgresql from 42.5.1 to 42.7.2 in /flink-connector-jdbc [flink-connector-jdbc]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #104:
URL: https://github.com/apache/flink-connector-jdbc/pull/104

   Bumps [org.postgresql:postgresql](https://github.com/pgjdbc/pgjdbc) from 
42.5.1 to 42.7.2.
   
   Release notes
   Sourced from https://github.com/pgjdbc/pgjdbc/releases;>org.postgresql:postgresql's 
releases.
   
   v42.7.1
   Fixed regressions since 42.7.0
   
   Revert Use canonical DateStyle name (https://redirect.github.com/pgjdbc/pgjdbc/issues/2925;>#2925) 
https://github.com/vlsi;>@​vlsi (https://redirect.github.com/pgjdbc/pgjdbc/issues/3035;>#3035)
   Revert feat: support SET statements combining with other queries 
with semicolon in PreparedStatement https://github.com/vlsi;>@​vlsi (https://redirect.github.com/pgjdbc/pgjdbc/issues/3010;>#3010)
   chore: use java.release=8 when building pgjdbc from the generated source 
distribution https://github.com/vlsi;>@​vlsi (https://redirect.github.com/pgjdbc/pgjdbc/issues/3038;>#3038), the 
driver uses Java 8 methods only
   
   Changes
   
   Apply connectTimeout before SSLSocket.startHandshake to avoid infinite 
wait in case the connection is broken https://github.com/davecramer;>@​davecramer (https://redirect.github.com/pgjdbc/pgjdbc/issues/3040;>#3040)
   perf: improve performance of PreparedStatement.setBlob, BlobInputStream, 
and BlobOutputStream with dynamic buffer sizing @vlsi (#3044)
   fix: avoid timezone conversions when sending LocalDateTime to the database @vlsi (#2852) (see the sketch after these notes)
   fix: support waffle-jna 2.x and 3.x by using reflective approach for ManagedSecBufferDesc @chrullrich (#2720)
   
   Maintenance
   
   chore: bump Gradle to 8.5 @vlsi (#3045)
   chore: use Java 17 for building pgjdbc, and use --release 8 to target Java 8, add tests with Java 21 and 22 @vlsi (#3026)
   fedora/rpm: move source build to java-17-openjdk-devel @praiskup (#3036)
   Update site 42 7 0 @davecramer (#3004)
   prepared for release 42.7.1 update changelogs @davecramer (#3037)
   
   ⬆️ Dependencies
   
   fix(deps): update dependency org.checkerframework:org.checkerframework.gradle.plugin to v0.6.36 @renovate-bot (#3060)
   chore(deps): update plugin biz.aqute.bnd.builder to v7 @renovate-bot (#3034)
   fix(deps): update dependency com.github.spotbugs:com.github.spotbugs.gradle.plugin to v6 @renovate-bot (#3056)
   fix(deps): update dependency com.github.spotbugs:com.github.spotbugs.gradle.plugin to v5.2.5 @renovate-bot (#3032)
   chore(deps): update codecov/codecov-action digest to b0466b4 @renovate-bot (#3059)
   fix(deps): update checkerframework to v3.41.0 @renovate-bot (#3058)
   fix(deps): update logback to v1.2.13 @renovate-bot (#3053)
   chore(deps): update codecov/codecov-action digest to 438fa9e @renovate-bot (#3051)
   fix(deps): update dependency spotbugs to v4.8.2 @renovate-bot (#3052)
   chore: bump Gradle to 8.5 @vlsi (#3045)
   fix(deps): update dependency org.ops4j.pax.url:pax-url-aether to v2.6.14 @renovate-bot (#3030)
   chore(deps): update plugin org.nosphere.gradle.github.actions to v1.4.0 @renovate-bot (#3031)
   chore(deps): update dependency ubuntu to v22 @renovate-bot (#3033)
   fix(deps): update checkerframework 
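
   To illustrate the LocalDateTime item above: a minimal JDBC sketch (not pgjdbc's internal fix; the table and connection details are hypothetical) of the JDBC 4.2 style of binding that keeps the wall-clock value intact rather than routing it through java.sql.Timestamp and the JVM default time zone:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.LocalDateTime;

public class LocalDateTimeBindSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details, for illustration only.
        try (Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://localhost:5432/demo", "demo", "secret");
                PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO events (created_at) VALUES (?)")) {
            // JDBC 4.2: bind java.time.LocalDateTime directly. A detour
            // through java.sql.Timestamp would involve the JVM default
            // time zone, the kind of conversion the fix above avoids.
            ps.setObject(1, LocalDateTime.of(2024, 2, 20, 12, 30));
            ps.executeUpdate();
        }
    }
}
```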

[PR] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 [flink-connector-elasticsearch]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #92:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/92

   Bumps org.apache.commons:commons-compress from 1.24.0 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.24.0&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts 
page](https://github.com/apache/flink-connector-elasticsearch/network/alerts).
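
   An aside for reviewers (not part of Dependabot's message): commons-compress is the archive/compression library being bumped. A minimal usage sketch, assuming the library is on the classpath and that a local sample.tar.gz exists (hypothetical file name):

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.commons.compress.archivers.ArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;

public class TarGzListSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical input file; any .tar.gz archive works here.
        try (InputStream in = new BufferedInputStream(
                        Files.newInputStream(Paths.get("sample.tar.gz")));
                TarArchiveInputStream tar =
                        new TarArchiveInputStream(new GzipCompressorInputStream(in))) {
            ArchiveEntry entry;
            while ((entry = tar.getNextEntry()) != null) {
                // Print each entry's name and declared size.
                System.out.println(entry.getName() + " (" + entry.getSize() + " bytes)");
            }
        }
    }
}
```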
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 [flink-connector-rabbitmq]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #25:
URL: https://github.com/apache/flink-connector-rabbitmq/pull/25

   Bumps org.apache.commons:commons-compress from 1.24.0 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.24.0&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-rabbitmq/network/alerts).
   
   





[PR] Bump org.apache.commons:commons-compress from 1.23.0 to 1.26.0 [flink-connector-hbase]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #41:
URL: https://github.com/apache/flink-connector-hbase/pull/41

   Bumps org.apache.commons:commons-compress from 1.23.0 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.23.0&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-hbase/network/alerts).
   
   





Re: [PR] Bump org.apache.commons:commons-compress from 1.23.0 to 1.24.0 [flink-connector-hbase]

2024-02-20 Thread via GitHub


dependabot[bot] closed pull request #38: Bump 
org.apache.commons:commons-compress from 1.23.0 to 1.24.0
URL: https://github.com/apache/flink-connector-hbase/pull/38





[PR] Bump org.apache.commons:commons-compress from 1.21 to 1.26.0 [flink-connector-aws]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #131:
URL: https://github.com/apache/flink-connector-aws/pull/131

   Bumps org.apache.commons:commons-compress from 1.21 to 1.26.0.
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.21&new-version=1.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-aws/network/alerts).
   
   





Re: [PR] [hotfix][test] Replaces String#replaceAll with String#replace to avoid regex compilation [flink]

2024-02-20 Thread via GitHub


flinkbot commented on PR #24351:
URL: https://github.com/apache/flink/pull/24351#issuecomment-1955288674

   
   ## CI report:
   
   * 8e55ffde1ae93b89d09306ff45b682f5edcb93cf UNKNOWN
   
   
   Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[PR] [hotfix][test] Replaces String#replaceAll with String#replace to avoid regex compilation [flink]

2024-02-20 Thread via GitHub


snuyanzin opened a new pull request, #24351:
URL: https://github.com/apache/flink/pull/24351

   …
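
   The title carries the whole change. As a generic illustration (a sketch, not the actual Flink test code) of the difference being exploited:

```java
public class ReplaceVsReplaceAll {
    public static void main(String[] args) {
        String path = "a.b.c";

        // String#replaceAll treats its first argument as a regular
        // expression: the dot must be escaped, and a regex is parsed.
        String viaRegex = path.replaceAll("\\.", "/");

        // String#replace treats both arguments as literal character
        // sequences: no escaping and no regex semantics to worry about.
        String viaLiteral = path.replace(".", "/");

        System.out.println(viaRegex);    // a/b/c
        System.out.println(viaLiteral);  // a/b/c
    }
}
```

   One caveat worth hedging: on older JDKs String#replace is itself backed by a literal Pattern, so the gain there is mostly avoiding regex parsing and escaping mistakes; newer JDKs skip the regex machinery entirely for the literal variant.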
   
   

Re: [PR] [FLINK-33517] Implement restore tests for Values node [flink]

2024-02-20 Thread via GitHub


bvarghese1 commented on code in PR #24114:
URL: https://github.com/apache/flink/pull/24114#discussion_r1496573717


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/ValuesTestPrograms.java:
##
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+
+/** {@link TableTestProgram} definitions for testing {@link StreamExecValues}. */
+public class ValuesTestPrograms {
+
+    static final TableTestProgram VALUES_TEST =
+            TableTestProgram.of("values-test", "validates values node")
+                    .setupTableSink(
+                            SinkTestStep.newBuilder("sink_t")
+                                    .addSchema("b INT", "a INT", "c VARCHAR")
+                                    .consumedBeforeRestore("+I[1, 2, Hi]", "+I[3, 4, Hello]")

Review Comment:
   Changed the test to be a non-restore test. Also, this test does not depend on a savepoint; it only takes a JSON compiled plan and executes it.
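
   For context, executing a compiled plan through Flink's Table API looks roughly like the sketch below (the plan file path is hypothetical, and the actual restore-test harness wraps this differently):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.PlanReference;
import org.apache.flink.table.api.TableEnvironment;

public class CompiledPlanSketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Load a previously compiled JSON plan and execute it directly,
        // without re-planning the original SQL statement.
        env.loadPlan(PlanReference.fromFile("/path/to/values-test.json"))
                .execute();
    }
}
```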






Re: [PR] [FLINK-34331][ci] Adds reusable workflow that is used to load the runner configuration based on the projects owner [flink]

2024-02-20 Thread via GitHub


snuyanzin commented on code in PR #24347:
URL: https://github.com/apache/flink/pull/24347#discussion_r1496405112


##
.github/workflows/template.workflow-init.yml:
##
@@ -0,0 +1,45 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Workflow

Review Comment:
   Should we have it? It seems the `name` var is self-explanatory, isn't it?






Re: [PR] [FLINK-34331][ci] Adds reusable workflow that is used to load the runner configuration based on the projects owner [flink]

2024-02-20 Thread via GitHub


snuyanzin commented on code in PR #24347:
URL: https://github.com/apache/flink/pull/24347#discussion_r1496403718


##
.github/workflows/template.workflow-init.yml:
##
@@ -0,0 +1,45 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Workflow
+
+name: "Apache Flink Workflow Initialization"
+
+on:
+  workflow_call:
+outputs:
+  runner_config:
+description: "The runs-on configuration that can be used in the 
runs-on parameter."
+value: ${{ jobs.workflow_init.outputs.runner_config }}
+
+permissions: read-all
+
+jobs:
+  workflow_init:
+name: "Initialize Workflow"
+# no need to fix a specific ubuntu version here

Review Comment:
   Should we add some clarifying comments on where the version should be changed in case of an upgrade? Maybe extract it into a dedicated var?






Re: [PR] Bump ip from 2.0.0 to 2.0.1 in /flink-runtime-web/web-dashboard [flink]

2024-02-20 Thread via GitHub


flinkbot commented on PR #24350:
URL: https://github.com/apache/flink/pull/24350#issuecomment-1954947599

   
   ## CI report:
   
   * 7f681063532f9c87ddec9e4c72b53471cf2a98f3 UNKNOWN
   
   
   Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[PR] Bump ip from 2.0.0 to 2.0.1 in /flink-runtime-web/web-dashboard [flink]

2024-02-20 Thread via GitHub


dependabot[bot] opened a new pull request, #24350:
URL: https://github.com/apache/flink/pull/24350

   Bumps [ip](https://github.com/indutny/node-ip) from 2.0.0 to 2.0.1.
   
   Commits
   
   - 3b0994a (https://github.com/indutny/node-ip/commit/3b0994a74eca51df01f08c40d6a65ba0e1845d04): 2.0.1
   - 32f468f (https://github.com/indutny/node-ip/commit/32f468f1245574785ec080705737a579be1223aa): lib: fixed CVE-2023-42282 and added unit test
   See full diff in the compare view: https://github.com/indutny/node-ip/compare/v2.0.0...v2.0.1
   
   
   
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ip&package-manager=npm_and_yarn&previous-version=2.0.0&new-version=2.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink/network/alerts).
   
   





[jira] [Commented] (FLINK-34064) Expose JobManagerOperatorMetrics via REST API

2024-02-20 Thread Mason Chen (Jira)


[ https://issues.apache.org/jira/browse/FLINK-34064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17818968#comment-17818968 ]

Mason Chen commented on FLINK-34064:


[~mxm] [~fanrui] [~thw] Can you assign this ticket to me? Thanks!

> Expose JobManagerOperatorMetrics via REST API
> -
>
> Key: FLINK-34064
> URL: https://issues.apache.org/jira/browse/FLINK-34064
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.18.0
>Reporter: Mason Chen
>Priority: Major
>
> Add a REST API to fetch coordinator metrics.
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-417%3A+Expose+JobManagerOperatorMetrics+via+REST+API]
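
For readers unfamiliar with the proposal, a client-side query against such an endpoint might look like the sketch below; the path follows FLIP-417's naming and the IDs are placeholders, so treat both as assumptions until the API lands:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JmOperatorMetricsSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical job/vertex IDs and endpoint path, modeled on FLIP-417.
        String jobId = "ca0c7f4c6dd4ce3e3a1b2c5e3c8e1a9b";
        String vertexId = "bc764cd8ddf7a0cff126f51c16239658";
        URI uri = URI.create("http://localhost:8081/jobs/" + jobId
                + "/vertices/" + vertexId + "/jm-operator-metrics");

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(uri).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON description of the metrics
    }
}
```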



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33817][flink-protobuf] Set ReadDefaultValues=False by default in proto3 for performance improvement [flink]

2024-02-20 Thread via GitHub


sharath1709 commented on PR #24035:
URL: https://github.com/apache/flink/pull/24035#issuecomment-1954901105

   @libenchao @maosuhan Gentle ping on the PR review





Re: [PR] [BP-1.17][FLINK-32262] Add MAP_ENTRIES support [flink]

2024-02-20 Thread via GitHub


hanyuzheng7 closed pull request #23098: [BP-1.17][FLINK-32262] Add MAP_ENTRIES 
support 
URL: https://github.com/apache/flink/pull/23098




