Re: [PR] [FLINK-34648] add waitChangeResultRequest and WaitChangeResultResponse to avoid RPC timeout. [flink-cdc]

2024-04-17 Thread via GitHub


lvyanquan commented on code in PR #3128:
URL: https://github.com/apache/flink-cdc/pull/3128#discussion_r1568374817


##
flink-cdc-common/src/main/java/org/apache/flink/cdc/common/pipeline/PipelineOptions.java:
##
@@ -85,5 +89,12 @@ public class PipelineOptions {
                     .withDescription(
                             "The unique ID for schema operator. This ID will be used for inter-operator communications and must be unique across operators.");
 
+    public static final ConfigOption<Duration> PIPELINE_SCHEMA_OPERATOR_RPC_TIMEOUT =
+            ConfigOptions.key("schema-operator.rpc-timeout")
+                    .durationType()
+                    .defaultValue(Duration.ofSeconds(SCHEMA_OPERATOR_RPC_TIMEOUT_SECOND_DEFAULT))
+                    .withDescription(
+                            "The timeout time for SchemaOperator to wait for schema change. The default value is 3 min.");
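For context, a minimal usage sketch of how such a Duration-typed option is
read, assuming Flink's core `ConfigOptions`/`Configuration` API (which the CDC
configuration classes mirror); the 3-minute default below is inferred from the
description text, not copied from the PR:

```java
import java.time.Duration;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.Configuration;

public class RpcTimeoutExample {
    // Mirrors the option defined in the diff above; the 3-minute default is
    // an assumption based on the option's description text.
    static final ConfigOption<Duration> RPC_TIMEOUT =
            ConfigOptions.key("schema-operator.rpc-timeout")
                    .durationType()
                    .defaultValue(Duration.ofMinutes(3))
                    .withDescription("Timeout for SchemaOperator to wait for schema change.");

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set(RPC_TIMEOUT, Duration.ofMinutes(5)); // pipeline-level override
        Duration timeout = conf.get(RPC_TIMEOUT);     // falls back to the default if unset
        System.out.println("schema-operator.rpc-timeout = " + timeout);
    }
}
```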

Review Comment:
   Addressed it and rebased to master.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-35130) Simplify AvailabilityNotifierImpl to support speculative scheduler and improve performance

2024-04-17 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan reassigned FLINK-35130:
-

Assignee: Yuxin Tan

> Simplify AvailabilityNotifierImpl to support speculative scheduler and 
> improve performance
> --
>
> Key: FLINK-35130
> URL: https://issues.apache.org/jira/browse/FLINK-35130
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.20.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>
> The AvailabilityNotifierImpl in SingleInputGate has maps storing the channel 
> ids, but the map key is the result partition id, which changes with the 
> attempt number when speculation is enabled. This can be resolved by using 
> `inputChannels` to look up channels, since the map key of `inputChannels` 
> does not vary across attempts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34908][pipeline-connector][doris] Fix Mysql pipeline to doris will lost precision for timestamp [flink-cdc]

2024-04-17 Thread via GitHub


gong commented on code in PR #3207:
URL: https://github.com/apache/flink-cdc/pull/3207#discussion_r1568450309


##
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-doris/src/main/java/org/apache/flink/cdc/connectors/doris/sink/DorisRowConverter.java:
##
@@ -103,7 +103,8 @@ static SerializationConverter createExternalConverter(DataType type, ZoneId pipe
                 return (index, val) ->
                         val.getTimestamp(index, DataTypeChecks.getPrecision(type))
                                 .toLocalDateTime()
-                                .format(DorisEventSerializer.DATE_TIME_FORMATTER);
+                                .toString()
+                                .replace('T', ' ');
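For context on the precision loss, a small self-contained sketch using plain
java.time; the second-granularity pattern below is a hypothetical stand-in for
`DorisEventSerializer.DATE_TIME_FORMATTER`, whose actual pattern is not shown
in this thread:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampPrecisionDemo {
    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2021, 1, 1, 8, 1, 11, 123_456_000);

        // A pattern without fractional seconds silently drops the microseconds.
        DateTimeFormatter secondsOnly = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        System.out.println(ts.format(secondsOnly));           // 2021-01-01 08:01:11

        // toString() keeps the fractional part (ISO-8601 with a 'T' separator),
        // so replacing 'T' with a space preserves the full precision.
        System.out.println(ts.toString().replace('T', ' '));  // 2021-01-01 08:01:11.123456
    }
}
```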

Review Comment:
   @yuxiqian ok



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34953][ci] Add github ci for flink-web to auto commit build files [flink-web]

2024-04-17 Thread via GitHub


MartijnVisser commented on code in PR #732:
URL: https://github.com/apache/flink-web/pull/732#discussion_r1568486997


##
.github/workflows/docs.yml:
##
@@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+name: "Flink Web CI"
+on:
+  pull_request:
+    branches:
+      - asf-site
+  push:
+    branches:
+      - asf-site
+  workflow_dispatch:
+
+jobs:
+  build-documentation:
+    if: github.repository == 'apache/flink-web'
+    runs-on: ubuntu-latest
+    permissions:
+      # Give the default GITHUB_TOKEN write permission to commit and push the changed files back to the repository.
+      contents: write
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+        with:
+          submodules: true
+          fetch-depth: 0
+
+      - name: Setup Hugo
+        uses: peaceiris/actions-hugo@v3
+        with:
+          hugo-version: '0.119.0'
+          extended: true
+
+      - name: Build website
+        run: |
+          # Remove old content folder and create new one
+          rm -r -f content && mkdir content
+
+          # Build the website
+          hugo --source docs --destination target
+
+          # Move newly generated static HTML to the content serving folder
+          mv docs/target/* content
+
+          # Copy quickstarts, rewrite rules and Google Search Console identifier
+          cp -r _include/. content
+
+          # Get the current commit author
+          echo "author=$(git log -1 --pretty=\"%an <%ae>\")" >> $GITHUB_OUTPUT
+
+      - name: Commit and push website build
+        if: ${{ github.event_name == 'push' || github.event_name == 'workflow_dispatch' }}
+        uses: stefanzweifel/git-auto-commit-action@v5

Review Comment:
   Per https://infra.apache.org/github-actions-policy.html we must review and 
pin this



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35025][Runtime/State] Abstract stream operators for async state processing [flink]

2024-04-17 Thread via GitHub


fredia merged PR #24657:
URL: https://github.com/apache/flink/pull/24657


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34216) Support fine-grained configuration to control filter push down for MongoDB Connector

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-34216:
--
Fix Version/s: mongodb-1.3.0
   (was: mongodb-1.2.0)

> Support fine-grained configuration to control filter push down for MongoDB 
> Connector
> 
>
> Key: FLINK-34216
> URL: https://issues.apache.org/jira/browse/FLINK-34216
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.0.2
>Reporter: jiabao.sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: mongodb-1.3.0
>
>
> Support fine-grained configuration to control filter push down for MongoDB 
> Connector.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35092][cdc][starrocks] Add starrocks integration test cases [flink-cdc]

2024-04-17 Thread via GitHub


yuxiqian commented on PR #3231:
URL: https://github.com/apache/flink-cdc/pull/3231#issuecomment-2060583059

   cc @banmoy


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35130) Simplify AvailabilityNotifierImpl to support speculative scheduler and improve performance

2024-04-17 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan updated FLINK-35130:
--
Description: 
The AvailabilityNotifierImpl in SingleInputGate has maps storing the channel 
ids, but the map key is the result partition id, which changes with the 
attempt number when speculation is enabled. This can be resolved by using 
`inputChannels` to look up channels, since the map key of `inputChannels` does 
not vary across attempts. 
In addition, using that map instead can also improve performance for 
large-scale jobs because no extra maps are created.

  was:The AvailabilityNotifierImpl in SingleInputGate has maps storing the 
channel ids. But the map key is the result partition id, which will change 
according to the different attempt numbers when speculation is enabled.  This 
can be resolved by using `inputChannels` to get channel and the map key of 
inputChannels will not vary with the attempts. In addition, using that map 
instead can also improve performance for large scale jobs because 


> Simplify AvailabilityNotifierImpl to support speculative scheduler and 
> improve performance
> --
>
> Key: FLINK-35130
> URL: https://issues.apache.org/jira/browse/FLINK-35130
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.20.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>
> The AvailabilityNotifierImpl in SingleInputGate has maps storing the channel 
> ids, but the map key is the result partition id, which changes with the 
> attempt number when speculation is enabled. This can be resolved by using 
> `inputChannels` to look up channels, since the map key of `inputChannels` 
> does not vary across attempts. 
> In addition, using that map instead can also improve performance for 
> large-scale jobs because no extra maps are created.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34953][ci] Add github ci for flink-web to auto commit build files [flink-web]

2024-04-17 Thread via GitHub


MartijnVisser commented on code in PR #732:
URL: https://github.com/apache/flink-web/pull/732#discussion_r1568486342


##
.github/workflows/docs.yml:
##
@@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+name: "Flink Web CI"
+on:
+  pull_request:
+    branches:
+      - asf-site

Review Comment:
   I don't think this should run on PRs, but only on pushes? Otherwise you 
would commit code during PR creation?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35092) Add integrated test for Doris / Starrocks sink pipeline connector

2024-04-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35092:
---
Labels: pull-request-available  (was: )

> Add integrated test for Doris / Starrocks sink pipeline connector
> -
>
> Key: FLINK-35092
> URL: https://issues.apache.org/jira/browse/FLINK-35092
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xiqian YU
>Priority: Minor
>  Labels: pull-request-available
>
> Currently, no integration tests are applied to the Doris pipeline connector 
> (there is only one DorisRowConverterTest case for now). Adding IT cases 
> would improve the Doris connector's code quality and reliability.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35031) Latency marker emitting under async execution model

2024-04-17 Thread Yanfei Lei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanfei Lei updated FLINK-35031:
---
Summary: Latency marker emitting under async execution model  (was: Event 
timer firing under async execution model)

> Latency marker emitting under async execution model
> ---
>
> Key: FLINK-35031
> URL: https://issues.apache.org/jira/browse/FLINK-35031
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Yanfei Lei
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35131) Support and Release Connectors for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35131:
-

 Summary: Support and Release Connectors for Flink 1.19
 Key: FLINK-35131
 URL: https://issues.apache.org/jira/browse/FLINK-35131
 Project: Flink
  Issue Type: Improvement
Reporter: Danny Cranmer


This is the parent task to contain connector support and releases for Flink 
1.19.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35138) Release flink-connector-kafka vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35138:
-

 Summary: Release flink-connector-kafka vX.X.X for Flink 1.19
 Key: FLINK-35138
 URL: https://issues.apache.org/jira/browse/FLINK-35138
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


https://github.com/apache/flink-connector-kafka



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35136) Release flink-connector-hbase vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35136:
-

 Summary: Release flink-connector-hbase vX.X.X for Flink 1.19
 Key: FLINK-35136
 URL: https://issues.apache.org/jira/browse/FLINK-35136
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / HBase
Reporter: Danny Cranmer






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35140) Release flink-connector-opensearch vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35140:
--
Description: 
[https://github.com/apache/flink-connector-opensearch]

 

> Release flink-connector-opensearch vX.X.X for Flink 1.19
> 
>
> Key: FLINK-35140
> URL: https://issues.apache.org/jira/browse/FLINK-35140
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Opensearch
>Reporter: Danny Cranmer
>Priority: Major
>
> [https://github.com/apache/flink-connector-opensearch]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35140) Release flink-connector-opensearch vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35140:
-

 Summary: Release flink-connector-opensearch vX.X.X for Flink 1.19
 Key: FLINK-35140
 URL: https://issues.apache.org/jira/browse/FLINK-35140
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Opensearch
Reporter: Danny Cranmer






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35140) Release flink-connector-opensearch vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-35140:
-

Assignee: Sergey Nuyanzin

> Release flink-connector-opensearch vX.X.X for Flink 1.19
> 
>
> Key: FLINK-35140
> URL: https://issues.apache.org/jira/browse/FLINK-35140
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Opensearch
>Reporter: Danny Cranmer
>Assignee: Sergey Nuyanzin
>Priority: Major
>
> [https://github.com/apache/flink-connector-opensearch]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35139) Release flink-connector-mongodb vX.X.X for Flink 1.19

2024-04-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35139:
---
Labels: pull-request-available  (was: )

> Release flink-connector-mongodb vX.X.X for Flink 1.19
> -
>
> Key: FLINK-35139
> URL: https://issues.apache.org/jira/browse/FLINK-35139
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / MongoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> https://github.com/apache/flink-connector-mongodb



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35139) Release flink-connector-mongodb vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35139:
--
Fix Version/s: mongodb-1.2.0

> Release flink-connector-mongodb vX.X.X for Flink 1.19
> -
>
> Key: FLINK-35139
> URL: https://issues.apache.org/jira/browse/FLINK-35139
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / MongoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: mongodb-1.2.0
>
>
> https://github.com/apache/flink-connector-mongodb



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34930] Fork existing code from bahir-flink [flink-connector-kudu]

2024-04-17 Thread via GitHub


ferenc-csaky commented on PR #1:
URL: 
https://github.com/apache/flink-connector-kudu/pull/1#issuecomment-2060816220

   > @ferenc-csaky I've asked the Legal team for a confirmation on this topic, 
since the ASF owns both Bahir and Flink code. You can track 
https://issues.apache.org/jira/browse/LEGAL-675
   
   Thank you for taking the time to look into it! For the time being, I 
migrated the Bahir header to the NOTICE. If it is okay to change it, I'll 
remove that commit.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34633][table] Support unnesting array constants [flink]

2024-04-17 Thread via GitHub


LadyForest commented on code in PR #24510:
URL: https://github.com/apache/flink/pull/24510#discussion_r1568258000


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/rules/logical/UncollectToTableFunctionScanRule.java:
##
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.functions.bridging.BridgingSqlFunction;
+import org.apache.flink.table.planner.utils.ShortcutUtils;
+import org.apache.flink.table.runtime.functions.table.UnnestRowsFunction;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptCluster;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.plan.RelRule;
+import org.apache.calcite.plan.hep.HepRelVertex;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Uncollect;
+import org.apache.calcite.rel.logical.LogicalFilter;
+import org.apache.calcite.rel.logical.LogicalProject;
+import org.apache.calcite.rel.logical.LogicalTableFunctionScan;
+import org.apache.calcite.rel.logical.LogicalValues;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.sql.SqlFunction;
+import org.immutables.value.Value;
+
+import java.util.Collections;
+
+import static 
org.apache.flink.table.types.logical.utils.LogicalTypeUtils.toRowType;
+
+/**
+ * Planner rule that converts [[Uncollect]] values to
+ * [[org.apache.calcite.rel.core.TableFunctionScan]].
+ */

Review Comment:
   Nit: replace with java doc link
   
   ```suggestion
   /**
    * Planner rule that converts {@link Uncollect} values to {@link
    * org.apache.calcite.rel.core.TableFunctionScan}.
    */
   ```



##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/plan/batch/sql/UnnestTest.scala:
##
@@ -19,8 +19,34 @@ package org.apache.flink.table.planner.plan.batch.sql
 
 import org.apache.flink.table.planner.plan.common.UnnestTestBase
 import org.apache.flink.table.planner.utils.TableTestUtil
+import org.apache.flink.types.Row
+import org.apache.flink.util.CollectionUtil
+
+import org.assertj.core.api.Assertions.assertThat
+import org.junit.jupiter.api.Test
 
 class UnnestTest extends UnnestTestBase(true) {
 
   override def getTableTestUtil: TableTestUtil = batchTestUtil()
+
+  @Test
+  def testUnnestWithValuesBatch(): Unit = {
+    val src = util.tableEnv.sqlQuery("SELECT * FROM UNNEST(ARRAY[1,2,3])")
+    val rows: java.util.List[Row] = CollectionUtil.iteratorToList(src.execute.collect)
+    assertThat(rows.size()).isEqualTo(3)
+    assertThat(rows.get(0).toString).isEqualTo("+I[1]")
+    assertThat(rows.get(1).toString).isEqualTo("+I[2]")
+    assertThat(rows.get(2).toString).isEqualTo("+I[3]")
+  }
+
+  @Test
+  def testUnnestWithValuesBatch2(): Unit = {
+    val src =
+      util.tableEnv.sqlQuery("SELECT * FROM (VALUES('a')) CROSS JOIN UNNEST(ARRAY[1, 2, 3])")
+    val rows: java.util.List[Row] = CollectionUtil.iteratorToList(src.execute.collect)
+    assertThat(rows.size()).isEqualTo(3)
+    assertThat(rows.get(0).toString).isEqualTo("+I[a, 1]")
+    assertThat(rows.get(1).toString).isEqualTo("+I[a, 2]")
+    assertThat(rows.get(2).toString).isEqualTo("+I[a, 3]")
+  }

Review Comment:
   `org.apache.flink.table.planner.plan.batch.sql.UnnestTest` and 
`org.apache.flink.table.planner.plan.stream.sql.UnnestTest` mainly focus on 
plan tests. 
   
   Would you mind moving these tests to 
`org.apache.flink.table.planner.runtime.batch.sql.UnnestITCase` and 
`org.apache.flink.table.planner.runtime.stream.sql.UnnestITCase` respectively?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35133) Release flink-connector-cassandra v3.x.x for Flink 1.18/1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35133:
--
Summary: Release flink-connector-cassandra v3.x.x for Flink 1.18/1.19  
(was: Release flink-connector-cassandra v4.3.0 for Flink 1.18/1.19)

> Release flink-connector-cassandra v3.x.x for Flink 1.18/1.19
> 
>
> Key: FLINK-35133
> URL: https://issues.apache.org/jira/browse/FLINK-35133
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-cassandra



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35133) Release flink-connector-cassandra v3.x.x for Flink 1.18/1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35133:
--
Component/s: Connectors / Cassandra

> Release flink-connector-cassandra v3.x.x for Flink 1.18/1.19
> 
>
> Key: FLINK-35133
> URL: https://issues.apache.org/jira/browse/FLINK-35133
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Cassandra
>Reporter: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-cassandra



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-31664] Implement ARRAY_INTERSECT function [flink]

2024-04-17 Thread via GitHub


MartijnVisser commented on PR #24526:
URL: https://github.com/apache/flink/pull/24526#issuecomment-2060771945

   > Furthermore, array_except was merged in version 1.20, and since 1.20 is 
currently only a snapshot version and not officially released, there’s no 
concern of causing compatibility issues due to changing behaviors, so it should 
indeed be corrected. What are your thoughts?
   
   I think you're right. @dawidwys @snuyanzin WDYT?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35134) Release flink-connector-elasticsearch vX.X.X for Flink 1.18/1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35134:
-

 Summary: Release flink-connector-elasticsearch vX.X.X for Flink 
1.18/1.19
 Key: FLINK-35134
 URL: https://issues.apache.org/jira/browse/FLINK-35134
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / ElasticSearch
Reporter: Danny Cranmer


https://github.com/apache/flink-connector-elasticsearch



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34908][pipeline-connector][doris] Fix Mysql pipeline to doris will lost precision for timestamp [flink-cdc]

2024-04-17 Thread via GitHub


gong commented on code in PR #3207:
URL: https://github.com/apache/flink-cdc/pull/3207#discussion_r1568504940


##
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-doris/src/test/java/org/apache/flink/cdc/connectors/doris/sink/DorisRowConverterTest.java:
##
@@ -86,7 +113,9 @@ public void testExternalConvert() {
             row.add(converter.serialize(i, recordData));
         }
         Assert.assertEquals(
-                "[true, 1.2, 1.2345, 1, 32, 64, 128, 2021-01-01 08:00:00, 2021-01-01, a, doris]",
+                "[true, 1.2, 1.2345, 1, 32, 64, 128, 2021-01-01 08:00:00.00, 2021-01-01, a, doris, 2021-01-01 "
+                        + "08:01:11.00, 2021-01-01 08:01:11.123000, 2021-01-01 08:01:11.123456, 2021-01-01 "
+                        + "16:01:11.00, 2021-01-01 16:01:11.123000, 2021-01-01 16:01:11.123456]",

Review Comment:
   I set pipelineZoneId to GMT+08:00.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34936][Checkpointing] Register reused shared state handle to FileMergingSnapshotManager [flink]

2024-04-17 Thread via GitHub


fredia merged PR #24644:
URL: https://github.com/apache/flink/pull/24644


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-35137) Release flink-connector-jdbc vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-35137:
-

Assignee: Danny Cranmer

> Release flink-connector-jdbc vX.X.X for Flink 1.19
> --
>
> Key: FLINK-35137
> URL: https://issues.apache.org/jira/browse/FLINK-35137
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-jdbc



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35131) Support and Release Connectors for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-35131:
-

Assignee: Danny Cranmer

> Support and Release Connectors for Flink 1.19
> -
>
> Key: FLINK-35131
> URL: https://issues.apache.org/jira/browse/FLINK-35131
> Project: Flink
>  Issue Type: Improvement
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>
> This is the parent task to contain connector support and releases for Flink 
> 1.19.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35025][Runtime/State] Abstract stream operators for async state processing [flink]

2024-04-17 Thread via GitHub


Zakelly commented on PR #24657:
URL: https://github.com/apache/flink/pull/24657#issuecomment-2060481241

   Thanks @fredia and @yunfengzhou-hub for your detailed review! Really 
appreciate it.  


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-35025) Wire AsyncExecutionController to AbstractStreamOperator

2024-04-17 Thread Yanfei Lei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanfei Lei reassigned FLINK-35025:
--

Assignee: Zakelly Lan

> Wire AsyncExecutionController to AbstractStreamOperator
> ---
>
> Key: FLINK-35025
> URL: https://issues.apache.org/jira/browse/FLINK-35025
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Yanfei Lei
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35133) Release flink-connector-cassandra v4.3.0 for Flink 1.18/1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35133:
-

 Summary: Release flink-connector-cassandra v4.3.0 for Flink 
1.18/1.19
 Key: FLINK-35133
 URL: https://issues.apache.org/jira/browse/FLINK-35133
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


https://github.com/apache/flink-connector-cassandra



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35132) Release flink-connector-aws v4.3.0 for Flink 1.18/1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-35132:
-

Assignee: Danny Cranmer

> Release flink-connector-aws v4.3.0 for Flink 1.18/1.19
> --
>
> Key: FLINK-35132
> URL: https://issues.apache.org/jira/browse/FLINK-35132
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / AWS
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
> Fix For: aws-connector-4.3.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35132) Release flink-connector-aws v4.3.0 for Flink 1.18/1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35132:
--
Component/s: Connectors / AWS

> Release flink-connector-aws v4.3.0 for Flink 1.18/1.19
> --
>
> Key: FLINK-35132
> URL: https://issues.apache.org/jira/browse/FLINK-35132
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / AWS
>Reporter: Danny Cranmer
>Priority: Major
> Fix For: aws-connector-4.3.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35132) Release flink-connector-aws v4.3.0 for Flink 1.18/1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35132:
-

 Summary: Release flink-connector-aws v4.3.0 for Flink 1.18/1.19
 Key: FLINK-35132
 URL: https://issues.apache.org/jira/browse/FLINK-35132
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer
 Fix For: aws-connector-4.3.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35130][runtime] Simplify AvailabilityNotifierImpl to support speculative scheduler and improve performance [flink]

2024-04-17 Thread via GitHub


TanYuxin-tyx opened a new pull request, #24674:
URL: https://github.com/apache/flink/pull/24674

   
   
   
   
   ## What is the purpose of the change
   
   *The AvailabilityNotifierImpl in SingleInputGate has maps storing the 
channel ids, but the map key is the result partition id, which changes with 
the attempt number when speculation is enabled. This can be resolved by using 
`inputChannels` to look up channels, since the map key of `inputChannels` does 
not vary across attempts.
   In addition, using that map instead can also improve performance for 
large-scale jobs because no extra maps are created.*
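
   To illustrate the keying problem the change addresses, here is a hedged, 
simplified sketch; the types below are hypothetical stand-ins, not Flink's 
actual classes:

```java
import java.util.HashMap;
import java.util.Map;

public class StableKeyLookupDemo {
    // Hypothetical stand-in for a result partition id: it embeds the attempt
    // number, so it changes when speculative execution starts a new attempt.
    record AttemptScopedId(int partition, int attempt) {}

    public static void main(String[] args) {
        Map<AttemptScopedId, String> byAttemptId = new HashMap<>();
        byAttemptId.put(new AttemptScopedId(0, 0), "channel-0");

        // A speculative re-attempt produces attempt 1 for the same partition,
        // so the lookup against the attempt-scoped key misses.
        System.out.println(byAttemptId.get(new AttemptScopedId(0, 1))); // null

        // A key that stays stable across attempts keeps the lookup working,
        // which is the property the inputChannels map's key provides.
        Map<Integer, String> byPartition = new HashMap<>();
        byPartition.put(0, "channel-0");
        System.out.println(byPartition.get(0)); // channel-0
    }
}
```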
   
   
   ## Brief change log
   
 - *Simplify AvailabilityNotifierImpl*
 - *Remove useless method in AvailabilityNotifierI*
   
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
*HybridShuffleITCase*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35132) Release flink-connector-aws v4.3.0 for Flink 1.18/1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35132:
--
Description: https://github.com/apache/flink-connector-aws

> Release flink-connector-aws v4.3.0 for Flink 1.18/1.19
> --
>
> Key: FLINK-35132
> URL: https://issues.apache.org/jira/browse/FLINK-35132
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / AWS
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
> Fix For: aws-connector-4.3.0
>
>
> https://github.com/apache/flink-connector-aws



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34987][state] Introduce Internal State for Async State API [flink]

2024-04-17 Thread via GitHub


masteryhx commented on PR #24651:
URL: https://github.com/apache/flink/pull/24651#issuecomment-2060813581

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32877][Filesystem][Rebased] Add HTTP options to java-storage client [flink]

2024-04-17 Thread via GitHub


flinkbot commented on PR #24673:
URL: https://github.com/apache/flink/pull/24673#issuecomment-2060595395

   
   ## CI report:
   
   * 057a0cd1d1fd47dc886b6e0ce6bb1ae1fd652b57 UNKNOWN
   
   
   Bot commands
   The @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34930] Fork existing code from bahir-flink [flink-connector-kudu]

2024-04-17 Thread via GitHub


MartijnVisser commented on PR #1:
URL: 
https://github.com/apache/flink-connector-kudu/pull/1#issuecomment-2060731374

   @ferenc-csaky I've asked the Legal team for a confirmation on this topic, 
since the ASF owns both Bahir and Flink code. You can track 
https://issues.apache.org/jira/browse/LEGAL-675


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34908][pipeline-connector][doris] Fix Mysql pipeline to doris will lost precision for timestamp [flink-cdc]

2024-04-17 Thread via GitHub


yuxiqian commented on code in PR #3207:
URL: https://github.com/apache/flink-cdc/pull/3207#discussion_r1568485784


##
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-doris/src/test/java/org/apache/flink/cdc/connectors/doris/sink/DorisRowConverterTest.java:
##
@@ -86,7 +113,9 @@ public void testExternalConvert() {
             row.add(converter.serialize(i, recordData));
         }
         Assert.assertEquals(
-                "[true, 1.2, 1.2345, 1, 32, 64, 128, 2021-01-01 08:00:00, 2021-01-01, a, doris]",
+                "[true, 1.2, 1.2345, 1, 32, 64, 128, 2021-01-01 08:00:00.00, 2021-01-01, a, doris, 2021-01-01 "
+                        + "08:01:11.00, 2021-01-01 08:01:11.123000, 2021-01-01 08:01:11.123456, 2021-01-01 "
+                        + "16:01:11.00, 2021-01-01 16:01:11.123000, 2021-01-01 16:01:11.123456]",

Review Comment:
   This assertion will fail on any machine that doesn't use the UTC+8 
timezone. Maybe dynamically generate the expected data based on the local 
timezone?
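
   A minimal sketch of that suggestion, using plain java.time (the names here 
are hypothetical, not from the PR): compute the expected string from the same 
zone the converter is configured with, instead of hardcoding UTC+8 values:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class ZoneAwareExpectation {
    public static void main(String[] args) {
        // Whatever zone the test configures for the converter.
        ZoneId pipelineZoneId = ZoneId.systemDefault();
        Instant fixed = Instant.parse("2021-01-01T00:01:11.123456Z");

        // The expected value is derived in the configured zone, so the
        // assertion holds on any machine regardless of its local timezone.
        LocalDateTime expected = LocalDateTime.ofInstant(fixed, pipelineZoneId);
        System.out.println(expected.toString().replace('T', ' '));
    }
}
```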



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34936][Checkpointing] Register reused shared state handle to FileMergingSnapshotManager [flink]

2024-04-17 Thread via GitHub


Zakelly commented on code in PR #24644:
URL: https://github.com/apache/flink/pull/24644#discussion_r1568486250


##
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java:
##
@@ -349,6 +350,13 @@ private long uploadSnapshotFiles(
                         ? CheckpointedStateScope.EXCLUSIVE
                         : CheckpointedStateScope.SHARED;
 
+            // Report the reuse of state handle to stream factory, which is essential for file
+            // merging mechanism.
+            checkpointStreamFactory.reusePreviousStateHandle(

Review Comment:
   Nope. This could be optimized later. (This PR considers the file reusing 
only, not newly introduced files)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35141) Release flink-connector-pulsar vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35141:
-

 Summary: Release flink-connector-pulsar vX.X.X for Flink 1.19
 Key: FLINK-35141
 URL: https://issues.apache.org/jira/browse/FLINK-35141
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Pulsar
Reporter: Danny Cranmer


https://github.com/apache/flink-connector-pulsar



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35138) Release flink-connector-kafka vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35138:
--
Component/s: Connectors / Kafka

> Release flink-connector-kafka vX.X.X for Flink 1.19
> ---
>
> Key: FLINK-35138
> URL: https://issues.apache.org/jira/browse/FLINK-35138
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-kafka



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35139) Release flink-connector-mongodb vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35139:
-

 Summary: Release flink-connector-mongodb vX.X.X for Flink 1.19
 Key: FLINK-35139
 URL: https://issues.apache.org/jira/browse/FLINK-35139
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / MongoDB
Reporter: Danny Cranmer


https://github.com/apache/flink-connector-mongodb



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35135) Release flink-connector-gcp-pubsub vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35135:
--
Component/s: Connectors / Google Cloud PubSub

> Release flink-connector-gcp-pubsub vX.X.X for Flink 1.19
> 
>
> Key: FLINK-35135
> URL: https://issues.apache.org/jira/browse/FLINK-35135
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Google Cloud PubSub
>Reporter: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-gcp-pubsub



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35028][runtime] Timer firing under async execution model [flink]

2024-04-17 Thread via GitHub


yunfengzhou-hub commented on code in PR #24672:
URL: https://github.com/apache/flink/pull/24672#discussion_r1568257919


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/InternalTimerServiceAsyncImpl.java:
##
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators;
+
+import org.apache.flink.runtime.asyncprocessing.AsyncExecutionController;
+import org.apache.flink.runtime.asyncprocessing.RecordContext;
+import org.apache.flink.runtime.metrics.groups.TaskIOMetricGroup;
+import org.apache.flink.runtime.state.KeyGroupRange;
+import org.apache.flink.runtime.state.KeyGroupedInternalPriorityQueue;
+import org.apache.flink.streaming.runtime.tasks.ProcessingTimeService;
+import org.apache.flink.streaming.runtime.tasks.StreamTaskCancellationContext;
+import org.apache.flink.util.CloseableIterator;
+import org.apache.flink.util.function.BiConsumerWithException;
+
+import static org.apache.flink.runtime.asyncprocessing.KeyAccountingUnit.EMPTY_RECORD;
+
+/**
+ * An implementation of {@link InternalTimerService} that is used by {@link
+ * org.apache.flink.streaming.runtime.operators.asyncprocessing.AbstractAsyncStateStreamOperator}.
+ * The timer service will set {@link RecordContext} for the timers before invoking action to
+ * preserve the execution order between timer firing and records processing.
+ *
+ * @see <a
+ *     href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-425%3A+Asynchronous+Execution+Model#FLIP425:AsynchronousExecutionModel-Timers">FLIP-425
+ *     timers section</a>.
+ * @param <K> Type of timer's key.
+ * @param <N> Type of the namespace to which timers are scoped.
+ */
+public class InternalTimerServiceAsyncImpl<K, N> extends InternalTimerServiceImpl<K, N> {
+
+    private AsyncExecutionController<K> asyncExecutionController;
+
+    InternalTimerServiceAsyncImpl(
+            TaskIOMetricGroup taskIOMetricGroup,
+            KeyGroupRange localKeyGroupRange,
+            KeyContext keyContext,
+            ProcessingTimeService processingTimeService,
+            KeyGroupedInternalPriorityQueue<TimerHeapInternalTimer<K, N>> processingTimeTimersQueue,
+            KeyGroupedInternalPriorityQueue<TimerHeapInternalTimer<K, N>> eventTimeTimersQueue,
+            StreamTaskCancellationContext cancellationContext,
+            AsyncExecutionController<K> asyncExecutionController) {
+        super(
+                taskIOMetricGroup,
+                localKeyGroupRange,
+                keyContext,
+                processingTimeService,
+                processingTimeTimersQueue,
+                eventTimeTimersQueue,
+                cancellationContext);
+        this.asyncExecutionController = asyncExecutionController;
+        this.processingTimeCallback = this::onAsyncProcessingTime;
+    }
+
+    private void onAsyncProcessingTime(long time) throws Exception {
+        // null out the timer in case the Triggerable calls registerProcessingTimeTimer()
+        // inside the callback.
+        nextTimer = null;
+
+        InternalTimer<K, N> timer;
+
+        while ((timer = processingTimeTimersQueue.peek()) != null
+                && timer.getTimestamp() <= time
+                && !cancellationContext.isCancelled()) {
+            RecordContext<K> recordCtx =
+                    asyncExecutionController.buildContext(EMPTY_RECORD, timer.getKey());
+            recordCtx.retain();
+            asyncExecutionController.setCurrentContext(recordCtx);
+            keyContext.setCurrentKey(timer.getKey());
+            processingTimeTimersQueue.poll();
+            final InternalTimer<K, N> timerToTrigger = timer;
+            asyncExecutionController.syncPointRequestWithCallback(
+                    () -> triggerTarget.onProcessingTime(timerToTrigger));
+            taskIOMetricGroup.getNumFiredTimers().inc();
+            recordCtx.release();
+        }
+
+        if (timer != null && nextTimer == null) {
+            nextTimer =
+                    processingTimeService.registerTimer(
+                            timer.getTimestamp(), this::onAsyncProcessingTime);
+        }
+    }
+
+    /**
+     * Advance one watermark, this will fire some event timers.
+     *
+     * @param time the time in watermark.
+     */
+    public 

[jira] [Commented] (FLINK-35031) Event timer firing under async execution model

2024-04-17 Thread Yunfeng Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837980#comment-17837980
 ] 

Yunfeng Zhou commented on FLINK-35031:
--

What is the relationship between this ticket and FLINK-35028? The PR of 
FLINK-35028 has provided implementations to support event time triggers.

> Event timer firing under async execution model
> --
>
> Key: FLINK-35031
> URL: https://issues.apache.org/jira/browse/FLINK-35031
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Yanfei Lei
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35092][cdc][starrocks] Add starrocks integration test cases [flink-cdc]

2024-04-17 Thread via GitHub


yuxiqian opened a new pull request, #3231:
URL: https://github.com/apache/flink-cdc/pull/3231

   This closes [FLINK-35092](https://issues.apache.org/jira/browse/FLINK-35092).
   
   Currently, no integration tests that run on a real Docker container are 
applied to the StarRocks pipeline connector. Adding one should help improve 
the pipeline connectors' reliability.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35031) Event timer firing under async execution model

2024-04-17 Thread Yanfei Lei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838022#comment-17838022
 ] 

Yanfei Lei commented on FLINK-35031:


[~yunfengzhou] At first I planned to implement event timer and processing timer 
separately, but during the implementation process, I found that they were very 
closely related, so I implemented them together.

I will change this ticket to preserve the order between records and "Latency 
Marker".

> Event timer firing under async execution model
> --
>
> Key: FLINK-35031
> URL: https://issues.apache.org/jira/browse/FLINK-35031
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Yanfei Lei
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35130) Simplify AvailabilityNotifierImpl to support speculative scheduler and improve performance

2024-04-17 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan updated FLINK-35130:
--
Description: The AvailabilityNotifierImpl in SingleInputGate has maps 
storing the channel ids, but the map key is the result partition id, which 
changes with the attempt number when speculation is enabled. This can be 
resolved by using `inputChannels` to look up channels, since the map key of 
`inputChannels` does not vary across attempts. In addition, using that map 
instead can also improve performance for large scale jobs because   (was: The 
AvailabilityNotifierImpl in SingleInputGate has maps storing the channel ids. 
But the map key is the result partition id, which will change according to the 
different attempt numbers when speculation is enabled.  This can be resolved by 
using `inputChannels` to get channel and the map key of inputChannels will not 
vary with the attempts.)

> Simplify AvailabilityNotifierImpl to support speculative scheduler and 
> improve performance
> --
>
> Key: FLINK-35130
> URL: https://issues.apache.org/jira/browse/FLINK-35130
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.20.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>
> The AvailabilityNotifierImpl in SingleInputGate has maps storing the channel 
> ids, but the map key is the result partition id, which changes with the 
> attempt number when speculation is enabled. This can be resolved by using 
> `inputChannels` to look up channels, since the map key of `inputChannels` 
> does not vary across attempts. In addition, using that map instead can also 
> improve performance for large scale jobs because 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35124) Connector Release Fails to run Checkstyle

2024-04-17 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838045#comment-17838045
 ] 

Etienne Chauchot commented on FLINK-35124:
--

[~dannycranmer] yes, it was because it led to an empty tools directory in the 
source release. I'll take a look at the suppressions.xml

> Connector Release Fails to run Checkstyle
> -
>
> Key: FLINK-35124
> URL: https://issues.apache.org/jira/browse/FLINK-35124
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Danny Cranmer
>Priority: Major
>
> During a release of the AWS connectors the build was failing at the 
> \{{./tools/releasing/shared/stage_jars.sh}} step due to a checkstyle error.
>  
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:3.1.2:check (validate) on 
> project flink-connector-aws: Failed during checkstyle execution: Unable to 
> find suppressions file at location: /tools/maven/suppressions.xml: Could not 
> find resource '/tools/maven/suppressions.xml'. -> [Help 1] {code}
>  
> Looks like it is caused by this 
> [https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34127) Kafka connector repo runs a duplicate of `IntegrationTests` framework tests

2024-04-17 Thread Mason Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837982#comment-17837982
 ] 

Mason Chen commented on FLINK-34127:


It seems to be a bug in junit. https://github.com/junit-team/junit5/issues/3782

> Kafka connector repo runs a duplicate of `IntegrationTests` framework tests
> ---
>
> Key: FLINK-34127
> URL: https://issues.apache.org/jira/browse/FLINK-34127
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI, Connectors / Kafka
>Affects Versions: kafka-3.0.2
>Reporter: Mason Chen
>Assignee: Mason Chen
>Priority: Major
>
> I found this behavior while troubleshooting CI flakiness. These 
> integration tests make heavy use of the CI since they require Kafka, 
> Zookeeper, and Docker containers. We can further stabilize CI by not 
> redundantly running this set of tests.
> `grep -E ".*testIdleReader\[TestEnvironment.*" 14_Compile\ and\ test.txt` 
> returns:
> ```
> 2024-01-17T00:51:05.2943150Z Test 
> org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceITTest.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: 
> [org.apache.flink.connector.kafka.testutils.DynamicKafkaSourceExternalContext@43e9a8a2],
>  Semantic: [EXACTLY_ONCE]] is running.
> 2024-01-17T00:51:07.6922535Z Test 
> org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceITTest.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: 
> [org.apache.flink.connector.kafka.testutils.DynamicKafkaSourceExternalContext@43e9a8a2],
>  Semantic: [EXACTLY_ONCE]] successfully run.
> 2024-01-17T00:56:27.1326332Z Test 
> org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceITTest.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: 
> [org.apache.flink.connector.kafka.testutils.DynamicKafkaSourceExternalContext@2db4a84a],
>  Semantic: [EXACTLY_ONCE]] is running.
> 2024-01-17T00:56:28.4000830Z Test 
> org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceITTest.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: 
> [org.apache.flink.connector.kafka.testutils.DynamicKafkaSourceExternalContext@2db4a84a],
>  Semantic: [EXACTLY_ONCE]] successfully run.
> 2024-01-17T00:56:58.7830792Z Test 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: [KafkaSource-PARTITION], Semantic: 
> [EXACTLY_ONCE]] is running.
> 2024-01-17T00:56:59.0544092Z Test 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: [KafkaSource-PARTITION], Semantic: 
> [EXACTLY_ONCE]] successfully run.
> 2024-01-17T00:56:59.3910987Z Test 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: [KafkaSource-TOPIC], Semantic: 
> [EXACTLY_ONCE]] is running.
> 2024-01-17T00:56:59.6025298Z Test 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: [KafkaSource-TOPIC], Semantic: 
> [EXACTLY_ONCE]] successfully run.
> 2024-01-17T00:57:37.8378640Z Test 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: [KafkaSource-PARTITION], Semantic: 
> [EXACTLY_ONCE]] is running.
> 2024-01-17T00:57:38.0144732Z Test 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: [KafkaSource-PARTITION], Semantic: 
> [EXACTLY_ONCE]] successfully run.
> 2024-01-17T00:57:38.2004796Z Test 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: [KafkaSource-TOPIC], Semantic: 
> [EXACTLY_ONCE]] is running.
> 2024-01-17T00:57:38.4072815Z Test 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase.IntegrationTests.testIdleReader[TestEnvironment:
>  [MiniCluster], ExternalContext: [KafkaSource-TOPIC], Semantic: 
> [EXACTLY_ONCE]] successfully run.
> 2024-01-17T01:06:11.2933375Z Test 
> org.apache.flink.tests.util.kafka.KafkaSourceE2ECase.testIdleReader[TestEnvironment:
>  [FlinkContainers], ExternalContext: [KafkaSource-PARTITION], Semantic: 
> [EXACTLY_ONCE]] is running.
> 2024-01-17T01:06:12.1790031Z Test 
> org.apache.flink.tests.util.kafka.KafkaSourceE2ECase.testIdleReader[TestEnvironment:
>  [FlinkContainers], ExternalContext: [KafkaSource-PARTITION], Semantic: 
> [EXACTLY_ONCE]] successfully run.
> 2024-01-17T01:06:12.5703927Z 

[PR] Add HTTP options to java-storage client [flink]

2024-04-17 Thread via GitHub


JTaky opened a new pull request, #24673:
URL: https://github.com/apache/flink/pull/24673

   
   
   ## What is the purpose of the change
   
   Provide a way to pass HTTP timeout configuration through to the 
gcs-cloud-storage library in use.
   
   ## Brief change log
   
   * Exposed the `gs.http.connect-timeout` and `gs.http.read-timeout` configuration options.
   * Plumbed the HTTP timeout configuration through to the gcs-cloud-storage client (see the sketch below).
   * Rebased and resolved conflicts of the original PR 
(https://github.com/apache/flink/pull/23226) made by @singhravidutt; I have no 
rights to push fixes to the original PR.
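   As a rough sketch of how these options could be plumbed through (the option
definitions and the helper below are illustrative assumptions, not this PR's
actual code):

```java
import java.time.Duration;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.Configuration;

import com.google.cloud.http.HttpTransportOptions;
import com.google.cloud.storage.StorageOptions;

/** Hypothetical helper showing how the Flink options map onto the GCS client. */
final class GcsHttpTimeoutSketch {

    // Option keys as named in this PR; these definitions are illustrative only.
    static final ConfigOption<Duration> CONNECT_TIMEOUT =
            ConfigOptions.key("gs.http.connect-timeout").durationType().noDefaultValue();
    static final ConfigOption<Duration> READ_TIMEOUT =
            ConfigOptions.key("gs.http.read-timeout").durationType().noDefaultValue();

    static StorageOptions.Builder applyHttpTimeouts(
            Configuration flinkConfig, StorageOptions.Builder builder) {
        HttpTransportOptions.Builder transport = HttpTransportOptions.newBuilder();
        // HttpTransportOptions expects timeouts in milliseconds.
        flinkConfig.getOptional(CONNECT_TIMEOUT)
                .ifPresent(t -> transport.setConnectTimeout((int) t.toMillis()));
        flinkConfig.getOptional(READ_TIMEOUT)
                .ifPresent(t -> transport.setReadTimeout((int) t.toMillis()));
        return builder.setTransportOptions(transport.build());
    }
}
```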
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
GSRecoverableWriterTest*.
   
   ## Does this pull request potentially affect one of the following parts:
   
   Dependencies (does it add or upgrade a dependency): no
   The public API, i.e., is any changed class annotated with @Public(Evolving): 
no
   The serializers: no
   The runtime per-record code paths (performance sensitive): no
   Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
   The S3 file system connector: no
   
   ## Documentation
   
   * Does this pull request introduce a new feature? yes
   * If yes, how is the feature documented? docs
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35130) Simplify AvailabilityNotifierImpl to support speculative scheduler and improve performance

2024-04-17 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-35130:
-

 Summary: Simplify AvailabilityNotifierImpl to support speculative 
scheduler and improve performance
 Key: FLINK-35130
 URL: https://issues.apache.org/jira/browse/FLINK-35130
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Network
Affects Versions: 1.20.0
Reporter: Yuxin Tan


The AvailabilityNotifierImpl in SingleInputGate keeps maps storing the channel 
ids, but the map key is the result partition id, which changes with the attempt 
number when speculative execution is enabled. This can be resolved by using 
`inputChannels` to look up the channel, since the key of the `inputChannels` map 
does not vary with the attempts.
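A rough sketch of the idea with hypothetical names (not the actual runtime
classes):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Keying the notifier by the full ResultPartitionID is fragile because that id
// embeds the producer's execution attempt; keying by the stable partition id
// alone -- as the gate's inputChannels map already does -- is not.
final class AvailabilityNotifierSketch {
    private final Map<String, Runnable> channelsByStablePartitionId = new HashMap<>();

    void register(String stablePartitionId, Runnable channelWakeUp) {
        channelsByStablePartitionId.put(stablePartitionId, channelWakeUp);
    }

    void notifyAvailable(String stablePartitionId) {
        // Works regardless of which attempt produced the data.
        Runnable channelWakeUp = channelsByStablePartitionId.get(stablePartitionId);
        if (channelWakeUp != null) {
            channelWakeUp.run();
        }
    }
}
{code}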



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35135) Release flink-connector-gcp-pubsub vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35135:
--
Summary: Release flink-connector-gcp-pubsub vX.X.X for Flink 1.19  (was: 
Release flink-connector-gcp-pubsub vX.X.X for Flink 1.18/1.19)

> Release flink-connector-gcp-pubsub vX.X.X for Flink 1.19
> 
>
> Key: FLINK-35135
> URL: https://issues.apache.org/jira/browse/FLINK-35135
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-gcp-pubsub



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35133) Release flink-connector-cassandra v3.x.x for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35133:
--
Summary: Release flink-connector-cassandra v3.x.x for Flink 1.19  (was: 
Release flink-connector-cassandra v3.x.x for Flink 1.18/1.19)

> Release flink-connector-cassandra v3.x.x for Flink 1.19
> ---
>
> Key: FLINK-35133
> URL: https://issues.apache.org/jira/browse/FLINK-35133
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Cassandra
>Reporter: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-cassandra



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35135) Release flink-connector-gcp-pubsub vX.X.X for Flink 1.18/1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35135:
-

 Summary: Release flink-connector-gcp-pubsub vX.X.X for Flink 
1.18/1.19
 Key: FLINK-35135
 URL: https://issues.apache.org/jira/browse/FLINK-35135
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


https://github.com/apache/flink-connector-gcp-pubsub



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35136) Release flink-connector-hbase vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35136:
--
Description: https://github.com/apache/flink-connector-hbase

> Release flink-connector-hbase vX.X.X for Flink 1.19
> ---
>
> Key: FLINK-35136
> URL: https://issues.apache.org/jira/browse/FLINK-35136
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Reporter: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-hbase



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35132) Release flink-connector-aws v4.3.0 for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838058#comment-17838058
 ] 

Danny Cranmer commented on FLINK-35132:
---

https://lists.apache.org/thread/0nw9smt23crx4gwkf6p1dd4jwvp1g5s0

> Release flink-connector-aws v4.3.0 for Flink 1.19
> -
>
> Key: FLINK-35132
> URL: https://issues.apache.org/jira/browse/FLINK-35132
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / AWS
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
> Fix For: aws-connector-4.3.0
>
>
> https://github.com/apache/flink-connector-aws



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35139) Release flink-connector-mongodb vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-35139:
-

Assignee: Danny Cranmer

> Release flink-connector-mongodb vX.X.X for Flink 1.19
> -
>
> Key: FLINK-35139
> URL: https://issues.apache.org/jira/browse/FLINK-35139
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / MongoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-mongodb



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34214) FLIP-377: Support fine-grained configuration to control filter push down for Table/SQL Sources

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-34214:
--
Fix Version/s: mongodb-1.3.0
   (was: mongodb-1.2.0)

> FLIP-377: Support fine-grained configuration to control filter push down for 
> Table/SQL Sources
> --
>
> Key: FLINK-34214
> URL: https://issues.apache.org/jira/browse/FLINK-34214
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC, Connectors / MongoDB
>Affects Versions: mongodb-1.0.2, jdbc-3.1.2
>Reporter: jiabao.sun
>Assignee: jiabao.sun
>Priority: Major
> Fix For: jdbc-3.1.3, mongodb-1.3.0
>
>
> This improvement implements [FLIP-377 Support fine-grained configuration to 
> control filter push down for Table/SQL 
> Sources|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=276105768]
> This FLIP has 2 goals:
>  * Introduces a new configuration filter.handling.policy to the JDBC and 
> MongoDB connector.
>  * Suggests a convention option name if other connectors are going to add an 
> option for the same purpose.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34987][state] Introduce Internal State for Async State API [flink]

2024-04-17 Thread via GitHub


masteryhx commented on PR #24651:
URL: https://github.com/apache/flink/pull/24651#issuecomment-2060873955

   Rebased again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34615]Split `ExternalizedCheckpointCleanup` out of `Checkpoint… [flink]

2024-04-17 Thread via GitHub


spoon-lz commented on code in PR #24461:
URL: https://github.com/apache/flink/pull/24461#discussion_r1568676454


##
flink-python/docs/reference/pyflink.datastream/checkpoint.rst:
##
@@ -81,7 +81,7 @@ The default limit of concurrently happening checkpoints: one.
 CheckpointConfig.set_checkpoint_storage
 CheckpointConfig.set_checkpoint_storage_dir
 CheckpointConfig.get_checkpoint_storage
-ExternalizedCheckpointCleanup
+ExternalizedCheckpointRetention

Review Comment:
   In Python, the class name is currently changed directly. Do we need to keep 
it consistent with Java and create a new `ExternalizedCheckpointRetention.py`?
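   For reference, a minimal sketch of the renamed Java API (package location
assumed; the old `ExternalizedCheckpointCleanup` is assumed to stay as a
deprecated alias on the Java side):

```java
import org.apache.flink.core.execution.ExternalizedCheckpointRetention;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RetentionExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000);
        // Renamed enum from this PR, replacing ExternalizedCheckpointCleanup.
        env.getCheckpointConfig()
                .setExternalizedCheckpointRetention(
                        ExternalizedCheckpointRetention.RETAIN_ON_CANCELLATION);
    }
}
```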



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34953][ci] Add github ci for flink-web to auto commit build files [flink-web]

2024-04-17 Thread via GitHub


GOODBOY008 commented on code in PR #732:
URL: https://github.com/apache/flink-web/pull/732#discussion_r1568682051


##
.github/workflows/docs.yml:
##
@@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+name: "Flink Web CI"
+on:
+  pull_request:
+    branches:
+      - asf-site

Review Comment:
   I want to enable a `doc build check` for PRs, to catch doc errors without the 
auto commit. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35120][doris] Add Doris integration test cases [flink-cdc]

2024-04-17 Thread via GitHub


yuxiqian commented on PR #3227:
URL: https://github.com/apache/flink-cdc/pull/3227#issuecomment-2061021229

   @leonardBang Seems it's the `flink-cdc-pipeline-connector-values` test that 
keeps failing recently. Will investigate this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35142) Release flink-connector-rabbitmq vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-35142:
-

 Summary: Release flink-connector-rabbitmq vX.X.X for Flink 1.19
 Key: FLINK-35142
 URL: https://issues.apache.org/jira/browse/FLINK-35142
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / RabbitMQ
Reporter: Danny Cranmer


https://github.com/apache/flink-connector-rabbitmq



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35143) Expose newly added tables capture in mysql pipeline connector

2024-04-17 Thread Hongshun Wang (Jira)
Hongshun Wang created FLINK-35143:
-

 Summary: Expose newly added tables capture in mysql pipeline 
connector
 Key: FLINK-35143
 URL: https://issues.apache.org/jira/browse/FLINK-35143
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: Hongshun Wang


Currently, the MySQL pipeline connector still doesn't allow capturing newly 
added tables.
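For reference, the DataStream-level MySQL source already exposes such a switch.
A sketch of that existing API (connection values are placeholders; the pipeline
connector would need a corresponding option):

{code:java}
import org.apache.flink.cdc.connectors.mysql.source.MySqlSource;
import org.apache.flink.cdc.debezium.JsonDebeziumDeserializationSchema;

// Existing builder flag on the DataStream-level source:
MySqlSource<String> source =
        MySqlSource.<String>builder()
                .hostname("localhost")
                .port(3306)
                .databaseList("app_db")
                .tableList("app_db.orders", "app_db.shipments")
                .username("flinkuser")
                .password("***")
                .scanNewlyAddedTableEnabled(true) // capture tables added after startup
                .deserializer(new JsonDebeziumDeserializationSchema())
                .build();
{code}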



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35026][runtime][config] Introduce async execution configurations [flink]

2024-04-17 Thread via GitHub


fredia commented on code in PR #24667:
URL: https://github.com/apache/flink/pull/24667#discussion_r1568656329


##
flink-core/src/main/java/org/apache/flink/configuration/ExecutionOptions.java:
##
@@ -181,4 +182,73 @@ public class ExecutionOptions {
 + " operators. NOTE: It takes effect only 
in the BATCH runtime mode and requires sorted inputs"
 + SORT_INPUTS.key()
 + " to be enabled.");
+
+/**
+ * A flag to enable or disable async mode related components when tasks 
initialize. As long as
+ * this option is enabled, the state access of Async state APIs will be 
executed asynchronously.
+ * Otherwise, the state access of Async state APIs will be executed 
synchronously. For Sync
+ * state APIs, the state access is always executed synchronously, enable 
this option would bring
+ * some overhead.
+ *
+ * Note: This is an experimental feature(FLIP-425) under evaluation.
+ */
+@Experimental
+public static final ConfigOption<Boolean> ASYNC_STATE_ENABLED =
+ConfigOptions.key("execution.async-mode.enabled")
+.booleanType()
+.defaultValue(false)
+.withDescription(
+"A flag to enable or disable async mode related 
components when tasks initialize."
++ " As long as this option is enabled, the 
state access of Async state APIs will be executed asynchronously."
++ " Otherwise, the state access of Async 
state APIs will be executed synchronously."
++ " For Sync state APIs, the state access 
is always executed synchronously, enable this option would bring some 
overhead.\n"

Review Comment:
   Thanks for your insight. After discussing offline, we think it would be 
better to expose this on the DataStream API in some way, so that users can 
enable async execution in a more fine-grained manner.
   So this option is removed now.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34444] Initial implementation of JM operator metric rest api [flink]

2024-04-17 Thread via GitHub


afedulov commented on code in PR #24564:
URL: https://github.com/apache/flink/pull/24564#discussion_r1568653505


##
docs/static/generated/rest_v1_dispatcher.yml:
##
@@ -1089,6 +1089,37 @@ paths:
 application/json:
   schema:
 $ref: '#/components/schemas/JobVertexBackPressureInfo'
+  /jobs/{jobid}/vertices/{vertexid}/coordinator-metrics:
+get:
+  description: Provides access to job manager operator metrics

Review Comment:
   I see, thanks for the clarification. I believe the main issue with the 
original proposal was that it also implied that the user would need to supply 
operator ID (as reflected in the FLIP's rejected approaches 
`/jobs//vertices//operators//metrics`). This would 
necessitate an additional step to identify which operator serves as the 
coordinator.
   
   It seems the challenge of distinguishing between the coordinator's metrics 
and other types of JobManager operators that may emerge in the future remains.  
Suppose we consolidate everything under the `/jm-operator-metrics` endpoint. 
When focusing on the coordinator's metrics for autoscaling purposes, how will 
API users distinguish these from other metrics retrieved from 
`/jm-operator-metrics`? Can we be sure that the metrics of interest are always 
uniquely identified by their names, preventing any overlap with those emitted 
by other operators?
   
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34908][pipeline-connector][doris] Fix Mysql pipeline to doris will lost precision for timestamp [flink-cdc]

2024-04-17 Thread via GitHub


gong commented on PR #3207:
URL: https://github.com/apache/flink-cdc/pull/3207#issuecomment-2061042815

   @PatrickRen Hello, please help rerun the CI. The CI failure is not related to 
this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34583) Bug for dynamic table option hints with multiple CTEs

2024-04-17 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838130#comment-17838130
 ] 

xuyang commented on FLINK-34583:


Hi, [~xccui] can you provide more details about this bug? I tried to run this 
test in my local env with Flink 1.18-SNAPSHOT, but could not reproduce it.
{code:java}
// run it in org.apache.flink.table.planner.plan.stream.sql.CalcTest
@Test
def test(): Unit = {
  util.tableEnv.executeSql(s"""
  |create temporary table T1 (
  |  a int,
  |  b int,
  |  c int)
  |  with ( 'connector' = 'values' )
  |""".stripMargin)
  util.verifyExecPlan(
"with q1 as (SELECT * FROM T1 /*+ OPTIONS('changelog-mode' = 'I,D') */ 
WHERE a > 10)," +
  "q2 as (SELECT a, b, c FROM q1 where b > 10)," +
  "q3 as (select a,b,c from q1 where c > 20)," +
  "q4 as (select * from q2 join q3 on q2.a = q3.a) SELECT * FROM q4");
}

// result
Join(joinType=[InnerJoin], where=[(a = a0)], select=[a, b, c, a0, b0, c0], 
leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey])
:- Exchange(distribution=[hash[a]])
:  +- Calc(select=[a, b, c], where=[((a > 10) AND (b > 10))])
: +- TableSourceScan(table=[[default_catalog, default_database, T1, 
filter=[]]], fields=[a, b, c], hints=[[[OPTIONS 
options:{changelog-mode=I,D}]]])(reuse_id=[1])
+- Exchange(distribution=[hash[a]])
   +- Calc(select=[a, b, c], where=[((a > 10) AND (c > 20))])
  +- Reused(reference_id=[1]){code}

> Bug for dynamic table option hints with multiple CTEs
> -
>
> Key: FLINK-34583
> URL: https://issues.apache.org/jira/browse/FLINK-34583
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.1
>Reporter: Xingcan Cui
>Priority: Major
>
> The table options hints don't work well with multiple WITH clauses referring 
> to the same table. Please see the following example.
>  
> The following query with hints works well.
> {code:java}
> SELECT * FROM T1 /*+ OPTIONS('foo' = 'bar') */ WHERE...;{code}
> The following query with multiple WITH clauses also works well.
> {code:java}
> WITH T2 AS (SELECT * FROM T1 /*+ OPTIONS('foo' = 'bar') */ WHERE...),
> T3 AS (SELECT ... FROM T2 WHERE...)
> SELECT * FROM T3;{code}
> The following query with multiple WITH clauses referring to the same original 
> table failed to recognize the hints.
> {code:java}
> WITH T2 AS (SELECT * FROM T1 /*+ OPTIONS('foo' = 'bar') */ WHERE...),
> T3 AS (SELECT ... FROM T2 WHERE...),
> T4 AS (SELECT ... FROM T2 WHERE...),
> T5 AS (SELECT ... FROM T3 JOIN T4 ON...)
> SELECT * FROM T5;{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35139][Connectors/MongoDB] Drop support for Flink 1.17 [flink-connector-mongodb]

2024-04-17 Thread via GitHub


boring-cyborg[bot] commented on PR #35:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/35#issuecomment-2060869153

   Awesome work, congrats on your first merged pull request!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35139][Connectors/MongoDB] Drop support for Flink 1.17 [flink-connector-mongodb]

2024-04-17 Thread via GitHub


Jiabao-Sun merged PR #35:
URL: https://github.com/apache/flink-connector-mongodb/pull/35


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35130][runtime] Simplify AvailabilityNotifierImpl to support speculative scheduler and improve performance [flink]

2024-04-17 Thread via GitHub


flinkbot commented on PR #24674:
URL: https://github.com/apache/flink/pull/24674#issuecomment-2060781312

   
   ## CI report:
   
   * 70df259709d9640d587eef6c257d9c812b8dfb38 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35132) Release flink-connector-aws v4.3.0 for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35132:
--
Summary: Release flink-connector-aws v4.3.0 for Flink 1.19  (was: Release 
flink-connector-aws v4.3.0 for Flink 1.18/1.19)

> Release flink-connector-aws v4.3.0 for Flink 1.19
> -
>
> Key: FLINK-35132
> URL: https://issues.apache.org/jira/browse/FLINK-35132
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / AWS
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
> Fix For: aws-connector-4.3.0
>
>
> https://github.com/apache/flink-connector-aws



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35025) Wire AsyncExecutionController to AbstractStreamOperator

2024-04-17 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838093#comment-17838093
 ] 

Zakelly Lan commented on FLINK-35025:
-

Merged into master via c7be45d0...fe8dde4e

> Wire AsyncExecutionController to AbstractStreamOperator
> ---
>
> Key: FLINK-35025
> URL: https://issues.apache.org/jira/browse/FLINK-35025
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Yanfei Lei
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34936) Register reused state handles to FileMergingSnapshotManager

2024-04-17 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838092#comment-17838092
 ] 

Zakelly Lan commented on FLINK-34936:
-

Merged into master via 31ea1a93

> Register reused state handles to FileMergingSnapshotManager
> ---
>
> Key: FLINK-34936
> URL: https://issues.apache.org/jira/browse/FLINK-34936
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Jinzhong Li
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The shared state files should be registered into the 
> FileMergingSnapshotManager, so that these files can be properly cleaned up  
> when checkpoint aborted/subsumed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35115) Kinesis connector writes wrong Kinesis sequence number at stop with savepoint

2024-04-17 Thread Vadim Vararu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838121#comment-17838121
 ] 

Vadim Vararu commented on FLINK-35115:
--

[~a.pilipenko] Yes, I can reproduce this consistently.

 

I've enabled this logger:

 
{code:java}
logger.kinesis.name = org.apache.flink.streaming.connectors.kinesis
logger.kinesis.level = DEBUG {code}
and got these last logs on TM before triggering the stop-with-savepoint (the 
log at 2024-04-17 14:05:11,753 is the last checkpoint):

 

 
{code:java}
2024-04-17 14:05:06,330 DEBUG 
org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - 
Subtask 0 is trying to discover new shards that were created due to resharding 
...

2024-04-17 14:05:11,753 DEBUG 
org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - 
Snapshotting state ...

2024-04-17 14:05:11,753 DEBUG 
org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - 
Snapshotted state, last processed sequence numbers: 
{StreamShardMetadata{streamName='kinesis-dev-1-20210513-v3-contract-impression',
 shardId='shardId-', parentShardId='null', 
adjacentParentShardId='null', startingHashKey='0', 
endingHashKey='340282366920938463463374607431768211455', 
startingSequenceNumber='49618213417511572504838906841289148356109207047268990978',
 
endingSequenceNumber='null'}=49646826022549514041791139259235973731492142339223191554},
 checkpoint id: 1, timestamp: 1713351911711

2024-04-17 14:05:16,652 DEBUG 
org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - 
Subtask 0 is trying to discover new shards that were created due to resharding 
...

2024-04-17 14:05:26,930 DEBUG 
org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - 
Subtask 0 is trying to discover new shards that were created due to resharding 
...

2024-04-17 14:05:27,032 DEBUG 
org.apache.flink.streaming.connectors.kinesis.internals.ShardConsumer [] - 
stream: kinesis-dev-1-20210513-v3-contract-impression, shard: 
shardId-, millis behind latest: 0, batch size: 120

2024-04-17 14:05:37,229 DEBUG 
org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - 
Subtask 0 is trying to discover new shards that were created due to resharding 
...

2024-04-17 14:05:43,079 DEBUG 
org.apache.flink.streaming.connectors.kinesis.internals.ShardConsumer [] - 
stream: kinesis-dev-1-20210513-v3-contract-impression, shard: 
shardId-, millis behind latest: 0, batch size: 1

2024-04-17 14:05:47,752 DEBUG 
org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - 
Subtask 0 is trying to discover new shards that were created due to resharding 
...

2024-04-17 14:05:50,677 DEBUG 
org.apache.flink.streaming.connectors.kinesis.internals.ShardConsumer [] - 
stream: kinesis-dev-1-20210513-v3-contract-impression, shard: 
shardId-, millis behind latest: 0, batch size: 1{code}
now I trigger the stop-with-savepoint:
{code:java}
2024-04-17 14:05:52,168 INFO 
org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - 
Starting shutdown of shard consumer threads and AWS SDK resources of subtask 0 
...

2024-04-17 14:05:52,169 DEBUG 
org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - 
Cancelled discovery

2024-04-17 14:05:52,169 INFO 
org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - 
Shutting down the shard consumer threads of subtask 0 ...

2024-04-17 14:05:52,645 DEBUG 
org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - 
snapshotState() called on closed source; returning null.

2024-04-17 14:05:52,669 INFO 
org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - 
Starting shutdown of shard consumer threads and AWS SDK resources of subtask 0 
...

2024-04-17 14:05:52,670 INFO 
org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher [] - 
Shutting down the shard consumer threads of subtask 0 ...  {code}
and here I start from the savepoint:
{code:java}
2024-04-17 14:12:56,691 INFO  
org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Setting 
restore state in the FlinkKinesisConsumer. Using the following offsets: 
{org.apache.flink.streaming.connectors.kinesis.model.StreamShardMetadata$EquivalenceWrapper@f5191c51=49646826022549514041791139259235973731492142339223191554}

2024-04-17 14:12:58,370 INFO  
org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer [] - Subtask 
0 is seeding the fetcher with restored shard 
StreamShardHandle{streamName='kinesis-dev-1-20210513-v3-contract-impression', 
shard='{ShardId: shardId-,HashKeyRange: {StartingHashKey: 
0,EndingHashKey: 340282366920938463463374607431768211455},SequenceNumberRange: 
{StartingSequenceNumber: 
49618213417511572504838906841289148356109207047268990978,}}'}, starting 

[jira] [Created] (FLINK-35144) Support multi source sync for FlinkCDC

2024-04-17 Thread Congxian Qiu (Jira)
Congxian Qiu created FLINK-35144:


 Summary: Support multi source sync for FlinkCDC
 Key: FLINK-35144
 URL: https://issues.apache.org/jira/browse/FLINK-35144
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Affects Versions: cdc-3.1.0
Reporter: Congxian Qiu


Currently, a Flink CDC pipeline can only support a single source, so we need to 
start multiple pipelines when there are several sources. 

For upstreams that use sharding, we need to sync multiple sources in one 
pipeline; the current pipeline can't do this because it only supports a single 
source.

This issue aims to support syncing multiple sources in one pipeline.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34953][ci] Add github ci for flink-web to auto commit build files [flink-web]

2024-04-17 Thread via GitHub


MartijnVisser commented on code in PR #732:
URL: https://github.com/apache/flink-web/pull/732#discussion_r1568720908


##
.github/workflows/docs.yml:
##
@@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+name: "Flink Web CI"
+on:
+  pull_request:
+    branches:
+      - asf-site

Review Comment:
   But in this setup, you will always build and commit the docs as well. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34953][ci] Add github ci for flink-web to auto commit build files [flink-web]

2024-04-17 Thread via GitHub


GOODBOY008 commented on code in PR #732:
URL: https://github.com/apache/flink-web/pull/732#discussion_r1568682051


##
.github/workflows/docs.yml:
##
@@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+name: "Flink Web CI"
+on:
+  pull_request:
+    branches:
+      - asf-site

Review Comment:
   I want to enable a `website build check` for PRs, to catch doc errors without 
the auto commit. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35130) Simplify AvailabilityNotifierImpl to support speculative scheduler and improve performance

2024-04-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35130:
---
Labels: pull-request-available  (was: )

> Simplify AvailabilityNotifierImpl to support speculative scheduler and 
> improve performance
> --
>
> Key: FLINK-35130
> URL: https://issues.apache.org/jira/browse/FLINK-35130
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.20.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
>
> The AvailabilityNotifierImpl in SingleInputGate keeps maps storing the channel 
> ids, but the map key is the result partition id, which changes with the 
> attempt number when speculative execution is enabled. This can be resolved by 
> using `inputChannels` to look up the channel, since the key of the 
> `inputChannels` map does not vary with the attempts. 
> In addition, using that map instead can also improve performance for 
> large-scale jobs because no extra maps are created.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30371) JdbcOutputFormat is at risk of database connection leaks

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-30371:
--
Fix Version/s: jdbc-3.3.0
   (was: jdbc-3.2.0)

> JdbcOutputFormat is at risk of database connection leaks
> 
>
> Key: FLINK-30371
> URL: https://issues.apache.org/jira/browse/FLINK-30371
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0, 1.16.1, jdbc-3.0.0
>Reporter: Echo Lee
>Assignee: Echo Lee
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: jdbc-3.3.0
>
>
> When writing to the target table fails for some reason, for example, the 
> target table does not exist.
> The internal call sequence of JdbcOutputFormat is:
> JdbcOutputFormat#flush(throws IOException) --> JdbcOutputFormat#close --> 
> JdbcOutputFormat#flush(throws RuntimeException).
> Will not call the close method of the database connection, when the restart 
> strategy is fixeddelay, maxNumberRestartAttempts is Integer.MAX, this will 
> cause the number of database connections to continue to rise and reach the 
> limit.
>  
> {code:java}
> 2022-12-07 10:49:32,050 ERROR 
> org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat [] - JDBC 
> executeBatch error, retry times = 3
> java.sql.BatchUpdateException: ORA-00942: table or view does not exist{code}
> {code:java}
> Caused by: java.sql.SQLException: Listener refused the connection with the 
> following error:
> ORA-12519, TNS:no appropriate service handler found {code}
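The leak-safe ordering boils down to closing the connection in a finally block.
A minimal sketch of the fix idea (hypothetical names, not the connector's actual
code):

{code:java}
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;

abstract class LeakSafeOutputFormatSketch {
    private Connection connection;
    private boolean closed;

    abstract void flush() throws IOException; // may throw, e.g. missing table

    public synchronized void close() throws IOException {
        if (closed) {
            return;
        }
        try {
            flush();
        } finally {
            closed = true;
            if (connection != null) {
                try {
                    connection.close(); // always runs, so the connection cannot leak
                } catch (SQLException e) {
                    // log and continue; closing must not mask the flush failure
                } finally {
                    connection = null;
                }
            }
        }
    }
}
{code}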



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35134) Release flink-connector-elasticsearch vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35134:
--
Summary: Release flink-connector-elasticsearch vX.X.X for Flink 1.19  (was: 
Release flink-connector-elasticsearch vX.X.X for Flink 1.18/1.19)

> Release flink-connector-elasticsearch vX.X.X for Flink 1.19
> ---
>
> Key: FLINK-35134
> URL: https://issues.apache.org/jira/browse/FLINK-35134
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / ElasticSearch
>Reporter: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-elasticsearch



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Adding MongoDB Connector v1.2.0 [flink-web]

2024-04-17 Thread via GitHub


dannycranmer opened a new pull request, #735:
URL: https://github.com/apache/flink-web/pull/735

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35139) Release flink-connector-mongodb v1.2.0 for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838086#comment-17838086
 ] 

Danny Cranmer commented on FLINK-35139:
---

https://lists.apache.org/thread/s18g7obgp4sbdtl73571976vqvy1ftk8

> Release flink-connector-mongodb v1.2.0 for Flink 1.19
> -
>
> Key: FLINK-35139
> URL: https://issues.apache.org/jira/browse/FLINK-35139
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / MongoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: mongodb-1.2.0
>
>
> https://github.com/apache/flink-connector-mongodb



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35128][cdc-connector][cdc-base] Re-calculate the starting changelog offset after the new table added [flink-cdc]

2024-04-17 Thread via GitHub


loserwang1024 commented on code in PR #3230:
URL: https://github.com/apache/flink-cdc/pull/3230#discussion_r1568614060


##
flink-cdc-connect/flink-cdc-source-connectors/flink-cdc-base/src/main/java/org/apache/flink/cdc/connectors/base/source/meta/split/StreamSplit.java:
##
@@ -159,7 +159,15 @@ public String toString() {
 // ---
 public static StreamSplit appendFinishedSplitInfos(
 StreamSplit streamSplit, List<FinishedSnapshotSplitInfo> splitInfos) {
+// re-calculate the starting changelog offset after the new table added
+Offset startingOffset = streamSplit.getStartingOffset();
+for (FinishedSnapshotSplitInfo splitInfo : splitInfos) {
+if (splitInfo.getHighWatermark().isBefore(startingOffset)) {
+startingOffset = splitInfo.getHighWatermark();
+}
+}
 splitInfos.addAll(streamSplit.getFinishedSnapshotSplitInfos());
+
 return new StreamSplit(
 streamSplit.splitId,
 streamSplit.getStartingOffset(),

Review Comment:
   That seems right.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35137) Release flink-connector-jdbc vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35137:
--
Component/s: Connectors / JDBC

> Release flink-connector-jdbc vX.X.X for Flink 1.19
> --
>
> Key: FLINK-35137
> URL: https://issues.apache.org/jira/browse/FLINK-35137
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-jdbc



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35144) Support various source sync for FlinkCDC in one pipeline

2024-04-17 Thread Congxian Qiu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Congxian Qiu updated FLINK-35144:
-
Summary: Support various source sync for FlinkCDC in one pipeline  (was: 
Support multi source sync for FlinkCDC)

> Support various source sync for FlinkCDC in one pipeline
> 
>
> Key: FLINK-35144
> URL: https://issues.apache.org/jira/browse/FLINK-35144
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Congxian Qiu
>Priority: Major
>
> Currently, a Flink CDC pipeline can only support a single source, so we need 
> to start multiple pipelines when there are several sources. 
> For upstreams that use sharding, we need to sync multiple sources in one 
> pipeline; the current pipeline can't do this because it only supports a 
> single source.
> This issue aims to support syncing multiple sources in one pipeline.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35139) Release flink-connector-mongodb v1.2.0 for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35139:
--
Summary: Release flink-connector-mongodb v1.2.0 for Flink 1.19  (was: 
Release flink-connector-mongodb vX.X.X for Flink 1.19)

> Release flink-connector-mongodb v1.2.0 for Flink 1.19
> -
>
> Key: FLINK-35139
> URL: https://issues.apache.org/jira/browse/FLINK-35139
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / MongoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: mongodb-1.2.0
>
>
> https://github.com/apache/flink-connector-mongodb



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35026][runtime][config] Introduce async execution configurations [flink]

2024-04-17 Thread via GitHub


fredia commented on code in PR #24667:
URL: https://github.com/apache/flink/pull/24667#discussion_r1568659638


##
flink-core/src/main/java/org/apache/flink/configuration/ExecutionOptions.java:
##
@@ -181,4 +182,73 @@ public class ExecutionOptions {
 + " operators. NOTE: It takes effect only 
in the BATCH runtime mode and requires sorted inputs"
 + SORT_INPUTS.key()
 + " to be enabled.");
+
+/**
+ * A flag to enable or disable async mode related components when tasks 
initialize. As long as
+ * this option is enabled, the state access of Async state APIs will be 
executed asynchronously.
+ * Otherwise, the state access of Async state APIs will be executed 
synchronously. For Sync
+ * state APIs, the state access is always executed synchronously, enable 
this option would bring
+ * some overhead.
+ *
+ * Note: This is an experimental feature(FLIP-425) under evaluation.
+ */
+@Experimental
+public static final ConfigOption<Boolean> ASYNC_STATE_ENABLED =
+ConfigOptions.key("execution.async-mode.enabled")
+.booleanType()
+.defaultValue(false)
+.withDescription(
+"A flag to enable or disable async mode related 
components when tasks initialize."
++ " As long as this option is enabled, the 
state access of Async state APIs will be executed asynchronously."
++ " Otherwise, the state access of Async 
state APIs will be executed synchronously."
++ " For Sync state APIs, the state access 
is always executed synchronously, enable this option would bring some 
overhead.\n"
++ " Note: This is an experimental feature 
under evaluation.");
+
+/**
+ * The max limit of in-flight records number in async execution mode, 
'in-flight' refers to the
+ * records that have entered the operator but have not yet been processed 
and emitted to the
+ * downstream. If the in-flight records number exceeds the limit, the 
newly records entering
+ * will be blocked until the in-flight records number drops below the 
limit.
+ */
+@Experimental
+public static final ConfigOption<Integer> ASYNC_INFLIGHT_RECORDS_LIMIT =
+ConfigOptions.key("execution.async-mode.in-flight-records-limit")
+.intType()
+.defaultValue(6000)
+.withDescription(
+"The max limit of in-flight records number in 
async execution mode, 'in-flight' refers"
++ " to the records that have entered the 
operator but have not yet been processed and"
++ " emitted to the downstream. If the 
in-flight records number exceeds the limit,"
++ " the newly records entering will be 
blocked until the in-flight records number drops below the limit.");
+
+/**
+ * The size of buffer under async execution mode. Async execution mode 
provides a buffer
+ * mechanism to reduce state access. When the number of state requests in 
the buffer exceeds the
+ * batch size, a batched state execution would be triggered. Larger batch 
sizes will bring
+ * higher end-to-end latency, this option works with {@link 
#ASYNC_BUFFER_TIMEOUT} to control
+ * the frequency of triggering.
+ */
+@Experimental
+public static final ConfigOption<Integer> ASYNC_BUFFER_SIZE =
+ConfigOptions.key("execution.async-mode.buffer-size")
+.intType()
+.defaultValue(1000)
+.withDescription(
+"The size of buffer under async execution mode. 
Async execution mode provides a buffer mechanism to reduce state access."
++ " When the number of state requests in 
the active buffer exceeds the batch size,"
++ " a batched state execution would be 
triggered. Larger batch sizes will bring higher end-to-end latency,"
++ " this option works with 
'execution.async-state.buffer-timeout' to control the frequency of 
triggering.");
+
+/**
+ * The timeout of buffer triggering in milliseconds. If the buffer has not 
reached the {@link
+ * #ASYNC_BUFFER_SIZE} within 'buffer-timeout' milliseconds, a trigger 
will perform actively.
+ */
+@Experimental
+public static final ConfigOption<Long> ASYNC_BUFFER_TIMEOUT =
+ConfigOptions.key("execution.async-state.buffer-timeout")

Review Comment:
   Unified those options under `execution.async-mode`, because we also provide a 
`Record-order` mode to preserve the order of records without state access.
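   As a toy model of how the buffer options described above interact
(hypothetical names; the real implementation fires the timeout from a timer,
while this sketch only checks it on enqueue):

```java
import java.util.ArrayList;
import java.util.List;

final class AsyncBufferSketch {
    private final int bufferSize;        // cf. the buffer-size option
    private final long bufferTimeoutMs;  // cf. the buffer-timeout option
    private final List<Runnable> buffer = new ArrayList<>();
    private long firstEnqueueTime = -1L;

    AsyncBufferSketch(int bufferSize, long bufferTimeoutMs) {
        this.bufferSize = bufferSize;
        this.bufferTimeoutMs = bufferTimeoutMs;
    }

    void enqueue(Runnable stateRequest) {
        if (buffer.isEmpty()) {
            firstEnqueueTime = System.currentTimeMillis();
        }
        buffer.add(stateRequest);
        boolean full = buffer.size() >= bufferSize;
        boolean timedOut =
                System.currentTimeMillis() - firstEnqueueTime >= bufferTimeoutMs;
        if (full || timedOut) {
            buffer.forEach(Runnable::run); // execute the batched state requests
            buffer.clear();
            firstEnqueueTime = -1L;
        }
    }
}
```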



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] [FLINK-35026][runtime][config] Introduce async execution configurations [flink]

2024-04-17 Thread via GitHub


fredia commented on PR #24667:
URL: https://github.com/apache/flink/pull/24667#issuecomment-2060999083

   @yunfengzhou-hub @Zakelly Thanks for the detailed review, I have rebased 
this PR and addressed some comments, PTAL if you are free.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35144) Support various sources sync for FlinkCDC in one pipeline

2024-04-17 Thread Congxian Qiu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Congxian Qiu updated FLINK-35144:
-
Summary: Support various sources sync for FlinkCDC in one pipeline  (was: 
Support various source sync for FlinkCDC in one pipeline)

> Support various sources sync for FlinkCDC in one pipeline
> -
>
> Key: FLINK-35144
> URL: https://issues.apache.org/jira/browse/FLINK-35144
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Congxian Qiu
>Priority: Major
>
> Currently, a Flink CDC pipeline can only support a single source, so we need 
> to start multiple pipelines when there are several sources. 
> For upstreams that use sharding, we need to sync multiple sources in one 
> pipeline; the current pipeline can't do this because it only supports a 
> single source.
> This issue aims to support syncing multiple sources in one pipeline.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35137) Release flink-connector-jdbc vX.X.X for Flink 1.19

2024-04-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-35137:
--
Fix Version/s: jdbc-3.2.0

> Release flink-connector-jdbc vX.X.X for Flink 1.19
> --
>
> Key: FLINK-35137
> URL: https://issues.apache.org/jira/browse/FLINK-35137
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
> Fix For: jdbc-3.2.0
>
>
> https://github.com/apache/flink-connector-jdbc



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35028][runtime] Timer firing under async execution model [flink]

2024-04-17 Thread via GitHub


Zakelly commented on code in PR #24672:
URL: https://github.com/apache/flink/pull/24672#discussion_r1568659887


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/InternalTimeServiceManager.java:
##
@@ -51,6 +52,21 @@ <N> InternalTimerService<N> getInternalTimerService(
 TypeSerializer<N> namespaceSerializer,
 Triggerable<K, N> triggerable);
 
+/**
+ * Creates an {@link InternalTimerServiceAsyncImpl} for handling a group 
of timers identified by
+ * the given {@code name}. The timers are scoped to a key and namespace. 
Mainly used by async
+ * operators.
+ *
+ * Some essential order preservation will be added when the given 
{@link Triggerable} is
+ * invoked.
+ */
+<N> InternalTimerService<N> getAsyncInternalTimerService(
+String name,
+TypeSerializer<K> keySerializer,
+TypeSerializer<N> namespaceSerializer,
+Triggerable<K, N> triggerable,
+AsyncExecutionController<K> asyncExecutionController);

Review Comment:
   It is better to provide the type parameter `K` from here and any other 
methods of this class.
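   For instance, the method could declare the key type itself (sketch only):

```java
<K, N> InternalTimerService<N> getAsyncInternalTimerService(
        String name,
        TypeSerializer<K> keySerializer,
        TypeSerializer<N> namespaceSerializer,
        Triggerable<K, N> triggerable,
        AsyncExecutionController<K> asyncExecutionController);
```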



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35128][cdc-connector][cdc-base] Re-calculate the starting changelog offset after the new table added [flink-cdc]

2024-04-17 Thread via GitHub


yuxiqian commented on code in PR #3230:
URL: https://github.com/apache/flink-cdc/pull/3230#discussion_r1568608219


##
flink-cdc-connect/flink-cdc-source-connectors/flink-cdc-base/src/main/java/org/apache/flink/cdc/connectors/base/source/meta/split/StreamSplit.java:
##
@@ -159,7 +159,15 @@ public String toString() {
 // ---
 public static StreamSplit appendFinishedSplitInfos(
 StreamSplit streamSplit, List<FinishedSnapshotSplitInfo> splitInfos) {
+// re-calculate the starting changelog offset after the new table added
+Offset startingOffset = streamSplit.getStartingOffset();
+for (FinishedSnapshotSplitInfo splitInfo : splitInfos) {
+if (splitInfo.getHighWatermark().isBefore(startingOffset)) {
+startingOffset = splitInfo.getHighWatermark();
+}
+}
 splitInfos.addAll(streamSplit.getFinishedSnapshotSplitInfos());
+
 return new StreamSplit(
 streamSplit.splitId,
 streamSplit.getStartingOffset(),

Review Comment:
   CMIIW, but it seems the newly added code just calculates the earliest 
starting offset into `startingOffset` but doesn't actually use it to generate 
the new `StreamSplit`. Maybe a change was missed here?
   
   ```suggestion
   return new StreamSplit(
   streamSplit.splitId,
   startingOffset,
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-35025) Wire AsyncExecutionController to AbstractStreamOperator

2024-04-17 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan resolved FLINK-35025.
-
Resolution: Fixed

> Wire AsyncExecutionController to AbstractStreamOperator
> ---
>
> Key: FLINK-35025
> URL: https://issues.apache.org/jira/browse/FLINK-35025
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Yanfei Lei
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35120][doris] Add Doris integration test cases [flink-cdc]

2024-04-17 Thread via GitHub


leonardBang commented on PR #3227:
URL: https://github.com/apache/flink-cdc/pull/3227#issuecomment-2061013529

   The CI failed, could you take a look ? @yuxiqian 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35028][runtime] Timer firing under async execution model [flink]

2024-04-17 Thread via GitHub


fredia commented on code in PR #24672:
URL: https://github.com/apache/flink/pull/24672#discussion_r1568731699


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/InternalTimerServiceImpl.java:
##
@@ -117,6 +120,7 @@ public class InternalTimerServiceImpl implements 
InternalTimerService {
 startIdx = Math.min(keyGroupIdx, startIdx);
 }
 this.localKeyGroupRangeStartIdx = startIdx;
+this.processingTimeCallback = this::onProcessingTime;

Review Comment:
   The original idea was to avoid rewriting `startTimerService()` and 
`registerProcessingTimeTimer` by introducing `processingTimeCallback`; since it 
needs to be reassigned in the subclass, it cannot be marked as `final`.
   
   As for `onAsyncProcessingTime`, I changed it back to `onProcessingTime`. BTW, 
`onProcessingTime()` is a private method.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


