Re: [PR] [FLINK-34517][table]fix environment configs ignored when calling procedure operation [flink]

2024-04-28 Thread via GitHub


JustinLeesin commented on PR #24397:
URL: https://github.com/apache/flink/pull/24397#issuecomment-2082006498

   > @JustinLeesin Could you please cherry-pick it to the release-1.19 branch? And if CI passes, please let me know.
   
   Hi, it's done on the release-1.19 branch. Please help review it if convenient: PR https://github.com/apache/flink/pull/24656


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34517][table]fix environment configs ignored when calling procedure operation [flink]

2024-04-28 Thread via GitHub


JustinLeesin commented on PR #24397:
URL: https://github.com/apache/flink/pull/24397#issuecomment-2082005693

   > > @JustinLeesin Could you please cherry-pick it to the release-1.19 branch? And if CI passes, please let me know.
   > 
   > Sorry to reply so late, I will work on it soon.
   
   Hi, it's done on the release-1.19 branch. Please help review it if convenient: PR #24656





Re: [PR] [FLINK-32852][JUnit5 Migration] Migrate the scheduler package of flink-runtime module to JUnit5 [flink]

2024-04-28 Thread via GitHub


1996fanrui commented on code in PR #24732:
URL: https://github.com/apache/flink/pull/24732#discussion_r1582594242


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/BackgroundTaskTest.java:
##
@@ -18,55 +18,52 @@
 
 package org.apache.flink.runtime.scheduler.adaptive;
 
-import org.apache.flink.core.testutils.FlinkMatchers;
 import org.apache.flink.core.testutils.OneShotLatch;
-import org.apache.flink.testutils.executor.TestExecutorResource;
-import org.apache.flink.util.TestLogger;
+import org.apache.flink.testutils.executor.TestExecutorExtension;
 
-import org.hamcrest.Matchers;
-import org.junit.ClassRule;
-import org.junit.Test;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.RegisterExtension;
 
 import java.util.concurrent.ArrayBlockingQueue;
 import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 
-import static org.junit.Assert.assertThat;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
+import static org.apache.flink.core.testutils.FlinkAssertions.assertThatFuture;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
 
 /** Tests for the {@link BackgroundTask}. */
-public class BackgroundTaskTest extends TestLogger {
+class BackgroundTaskTest {
 
-@ClassRule
-public static final TestExecutorResource<ExecutorService> TEST_EXECUTOR_RESOURCE =
-new TestExecutorResource<>(() -> Executors.newFixedThreadPool(2));
+@RegisterExtension
+public static final TestExecutorExtension<ExecutorService> TEST_EXECUTOR_EXTENSION =
+new TestExecutorExtension<>(() -> Executors.newFixedThreadPool(2));
 
 @Test
 public void testFinishedBackgroundTaskIsTerminated() {
 final BackgroundTask finishedBackgroundTask = BackgroundTask.finishedBackgroundTask();
 
-assertTrue(finishedBackgroundTask.getTerminationFuture().isDone());
+assertThatFuture(finishedBackgroundTask.getTerminationFuture()).isDone();
 finishedBackgroundTask.getTerminationFuture().join();
 }
 
 @Test
 public void testFinishedBackgroundTaskDoesNotContainAResult() {
 final BackgroundTask finishedBackgroundTask = BackgroundTask.finishedBackgroundTask();
 
-assertTrue(finishedBackgroundTask.getResultFuture().isCompletedExceptionally());
+assertThatFuture(finishedBackgroundTask.getResultFuture()).isCompletedExceptionally();
 }
 
 @Test
 public void testNormalCompletionOfBackgroundTask() {

Review Comment:
   All `public` modifiers can be removed.
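   For reference, a minimal, self-contained sketch of the JUnit 5 convention being pointed at (class and method names here are illustrative, not from the PR):
   
       import org.junit.jupiter.api.Test;
   
       import static org.assertj.core.api.Assertions.assertThat;
   
       /** JUnit 5 discovers package-private test classes and methods, so no `public` is needed. */
       class VisibilityConventionTest {
   
           @Test
           void packagePrivateTestIsDiscovered() {
               assertThat(1 + 1).isEqualTo(2);
           }
       }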



##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/SsgNetworkMemoryCalculationUtilsTest.java:
##
@@ -182,21 +182,20 @@ private void triggerComputeNumOfSubpartitions(IntermediateResult result) {
 private void assertNetworkMemory(
 List<SlotSharingGroup> slotSharingGroups, List<MemorySize> networkMemory) {
 
-assertEquals(slotSharingGroups.size(), networkMemory.size());
+assertThat(networkMemory.size()).isEqualTo(slotSharingGroups.size());

Review Comment:
   hasSameSizeAs
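   A sketch of the suggested form, using the variables from the diff above:
   
       // hasSameSizeAs compares the collections directly and, on failure,
       // prints both actual collections instead of two bare integers.
       assertThat(networkMemory).hasSameSizeAs(slotSharingGroups);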



##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/SsgNetworkMemoryCalculationUtilsTest.java:
##
@@ -270,16 +269,16 @@ private void createExecutionGraphAndEnrichNetworkMemory(
 createJobGraph(
 slotSharingGroups, Arrays.asList(4, 5, 6), resultPartitionType))
 .setShuffleMaster(SHUFFLE_MASTER)
-.build(EXECUTOR_RESOURCE.getExecutor());
+.build(EXECUTOR_EXTENSION.getExecutor());
 }
 
 private static JobGraph createJobGraph(
 final List<SlotSharingGroup> slotSharingGroups,
 List<Integer> parallelisms,
 ResultPartitionType resultPartitionType) {
 
-assertThat(slotSharingGroups.size(), is(3));
-assertThat(parallelisms.size(), is(3));
+assertThat(slotSharingGroups.size()).isEqualTo(3);
+assertThat(parallelisms.size()).isEqualTo(3);

Review Comment:
   hasSize
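   A sketch of the suggested form, again using the variables from the diff:
   
       // Asserting on the collections rather than on size() yields clearer failures.
       assertThat(slotSharingGroups).hasSize(3);
       assertThat(parallelisms).hasSize(3);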



##
flink-runtime/src/test/java/org/apache/flink/runtime/testutils/InternalMiniClusterExtension.java:
##
@@ -109,4 +117,24 @@ public Object resolveParameter(
 }
 throw new ParameterResolutionException("Unsupported parameter");
 }
+
+@Override
+public void beforeEach(ExtensionContext context) throws Exception {
+miniClusterResource.before();
+}
+
+@Override
+public void afterEach(ExtensionContext context) throws Exception {
+miniClusterResource.after();
+}
+
+@Override
+public void before(ExtensionContext context) throws Exception {
+miniClusterResource.before();
+}
+
+@Override
+public void after(ExtensionContext context) throws Exception {
+miniClusterResource.after();
+}

Review Comment:

[jira] [Commented] (FLINK-35236) Flink 1.19 Translation error on the execution_mode/order-of-processing

2024-04-28 Thread hongxu han (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841860#comment-17841860
 ] 

hongxu han commented on FLINK-35236:


 [GitHub Pull Request #24726|https://github.com/apache/flink/pull/24726]

> Flink 1.19 Translation error on the execution_mode/order-of-processing
> --
>
> Key: FLINK-35236
> URL: https://issues.apache.org/jira/browse/FLINK-35236
> Project: Flink
>  Issue Type: Bug
>  Components: chinese-translation
>Affects Versions: 1.19.0
>Reporter: hongxu han
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-25-16-53-34-007.png, 
> image-2024-04-25-16-53-49-052.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/dev/datastream/execution_mode/#order-of-processing]
> !image-2024-04-25-16-53-34-007.png!
> !image-2024-04-25-16-53-49-052.png!
> It should read: regular input, i.e., input that comes neither from the broadcast input nor from the keyed input.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35194][table] Support describe job with job id [flink]

2024-04-28 Thread via GitHub


lsyldliu commented on code in PR #24728:
URL: https://github.com/apache/flink/pull/24728#discussion_r1582558688


##
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/operation/OperationExecutor.java:
##
@@ -774,6 +777,53 @@ public ResultFetcher callShowJobsOperation(
 resultRows);
 }
 
+public ResultFetcher callDescribeJobOperation(
+TableEnvironmentInternal tableEnv,
+OperationHandle operationHandle,
+DescribeJobOperation describeJobOperation)
+throws SqlExecutionException {
+Configuration configuration = tableEnv.getConfig().getConfiguration();
+Duration clientTimeout = configuration.get(ClientOptions.CLIENT_TIMEOUT);
+String jobId = describeJobOperation.getJobId();
+Optional<JobStatusMessage> jobStatusOp =
+runClusterAction(
+configuration,
+operationHandle,
+clusterClient -> {
+try {
+JobID expectedJobId = JobID.fromHexString(jobId);
+return clusterClient.listJobs()
+.get(clientTimeout.toMillis(), TimeUnit.MILLISECONDS)
+.stream()
+.filter(job -> expectedJobId.equals(job.getJobId()))
+.findFirst();
+} catch (Exception e) {
+throw new SqlExecutionException(
+"Failed to get jobs in the cluster.", e);

Review Comment:
   String.format("Failed to get job %s in the cluster.", jobId)



##
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/operation/OperationExecutor.java:
##
@@ -774,6 +777,53 @@ public ResultFetcher callShowJobsOperation(
 resultRows);
 }
 
+public ResultFetcher callDescribeJobOperation(
+TableEnvironmentInternal tableEnv,
+OperationHandle operationHandle,
+DescribeJobOperation describeJobOperation)
+throws SqlExecutionException {
+Configuration configuration = tableEnv.getConfig().getConfiguration();
+Duration clientTimeout = configuration.get(ClientOptions.CLIENT_TIMEOUT);
+String jobId = describeJobOperation.getJobId();
+Optional<JobStatusMessage> jobStatusOp =
+runClusterAction(
+configuration,
+operationHandle,
+clusterClient -> {
+try {
+JobID expectedJobId = JobID.fromHexString(jobId);
+return clusterClient.listJobs()
+.get(clientTimeout.toMillis(), TimeUnit.MILLISECONDS)
+.stream()
+.filter(job -> expectedJobId.equals(job.getJobId()))
+.findFirst();
+} catch (Exception e) {
+throw new SqlExecutionException(
+"Failed to get jobs in the cluster.", e);
+}
+});
+
+if (!jobStatusOp.isPresent()) {
+throw new SqlExecutionException("The job described by " + jobId + " does not exist.");

Review Comment:
   String.format("Described job %s  does not exist in the cluster.", jobId)?



##
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/SqlGatewayServiceITCase.java:
##
@@ -511,6 +511,57 @@ void testShowJobsOperation(@InjectClusterClient RestClusterClient restCluster
 .isBetween(timeOpStart, timeOpSucceed);
 }
 
+@Test
+void testDescribeJobOperation(@InjectClusterClient RestClusterClient<?> restClusterClient)
+throws Exception {
+SessionHandle sessionHandle = service.openSession(defaultSessionEnvironment);
+Configuration configuration = new Configuration(MINI_CLUSTER.getClientConfiguration());
+
+String pipelineName = "test-describe-job";
+configuration.set(PipelineOptions.NAME, pipelineName);
+
+// running jobs
+String sourceDdl = "CREATE TABLE source (a STRING) WITH 
('connector'='datagen');";
+String sinkDdl = "CREATE TABLE sink (a STRING) WITH 
('connector'='blackhole');";
+String insertSql = "INSERT INTO sink SELECT * FROM source;";
+
+service.executeStatement(sessionHandle, sourceDdl, -1, configuration);
+service.executeStatement(sessionHandle, sinkDdl, -1, configuration);
+
+long timeOpStart = System.currentTimeMillis();
+OperationHandle insertsOperationHandle =
+service.executeS

Re: [PR] [FLINK-34916][table] Support `ALTER CATALOG SET` syntax [flink]

2024-04-28 Thread via GitHub


liyubin117 commented on PR #24735:
URL: https://github.com/apache/flink/pull/24735#issuecomment-2081957087

   @LadyForest Hi, CI passed, looking forward to your review. Thanks!





[jira] [Commented] (FLINK-35222) Adding getJobType for AccessExecutionGraph

2024-04-28 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841854#comment-17841854
 ] 

Rui Fan commented on FLINK-35222:
-

Merged to master (1.20.0) via: 5263f9cbf20536e9a81a0044f22b033e78a908ea

> Adding getJobType for AccessExecutionGraph
> --
>
> Key: FLINK-35222
> URL: https://issues.apache.org/jira/browse/FLINK-35222
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Adding a getJobType method to the AccessExecutionGraph interface; all 
> implementations need to override it.
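A minimal sketch of the described addition; the interface name is suffixed with `Sketch` because the member list and javadoc here are illustrative, while JobType is Flink's existing STREAMING/BATCH enum:

    import org.apache.flink.runtime.jobgraph.JobType;

    /** Sketch only: the actual change adds this accessor to AccessExecutionGraph. */
    public interface AccessExecutionGraphSketch {
        /** Returns the JobType of this graph, e.g. JobType.STREAMING or JobType.BATCH. */
        JobType getJobType();
    }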





[jira] [Commented] (FLINK-35224) Show the JobType on Flink WebUI

2024-04-28 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841856#comment-17841856
 ] 

Rui Fan commented on FLINK-35224:
-

Merged to master (1.20.0) via: 86de622c807413f2a7a1664113e39798f0fed81d

> Show the JobType on Flink WebUI
> ---
>
> Key: FLINK-35224
> URL: https://issues.apache.org/jira/browse/FLINK-35224
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
> Fix For: 1.20.0
>
>






[jira] [Resolved] (FLINK-35223) Add jobType in JobDetailsInfo related rest api

2024-04-28 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-35223.
-
Resolution: Fixed

> Add jobType in JobDetailsInfo related rest api
> --
>
> Key: FLINK-35223
> URL: https://issues.apache.org/jira/browse/FLINK-35223
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Resolved] (FLINK-35224) Show the JobType on Flink WebUI

2024-04-28 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-35224.
-
Resolution: Fixed

> Show the JobType on Flink WebUI
> ---
>
> Key: FLINK-35224
> URL: https://issues.apache.org/jira/browse/FLINK-35224
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
> Fix For: 1.20.0
>
>






[jira] [Resolved] (FLINK-35222) Adding getJobType for AccessExecutionGraph

2024-04-28 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-35222.
-
Resolution: Fixed

> Adding getJobType for AccessExecutionGraph
> --
>
> Key: FLINK-35222
> URL: https://issues.apache.org/jira/browse/FLINK-35222
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Adding a getJobType method to the AccessExecutionGraph interface; all 
> implementations need to override it.





[jira] [Commented] (FLINK-35223) Add jobType in JobDetailsInfo related rest api

2024-04-28 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841855#comment-17841855
 ] 

Rui Fan commented on FLINK-35223:
-

Merged to master (1.20.0) via: 917352ed35eff562a032a6e3038aaaf1177d2a8a

> Add jobType in JobDetailsInfo related rest api
> --
>
> Key: FLINK-35223
> URL: https://issues.apache.org/jira/browse/FLINK-35223
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






Re: [PR] [FLINK-35223][rest] Add jobType in JobDetailsInfo related rest api [flink]

2024-04-28 Thread via GitHub


1996fanrui commented on PR #24718:
URL: https://github.com/apache/flink/pull/24718#issuecomment-2081946736

   Thanks @RocMarshal for the review!
   
   > Would we keep the same typing style in the commit messages?
   
   Updated, and the CI is green for now. Merging~





Re: [PR] [FLINK-35223][rest] Add jobType in JobDetailsInfo related rest api [flink]

2024-04-28 Thread via GitHub


1996fanrui merged PR #24718:
URL: https://github.com/apache/flink/pull/24718





Re: [PR] [FLINK-35221][hive] Support SQL 2011 reserved keywords as identifiers in HiveParser [flink]

2024-04-28 Thread via GitHub


WencongLiu closed pull request #24710: [FLINK-35221][hive] Support SQL 2011 
reserved keywords as identifiers in HiveParser
URL: https://github.com/apache/flink/pull/24710





Re: [PR] [FLINK-35240][Connectors][format]Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record for csv format [flink]

2024-04-28 Thread via GitHub


GOODBOY008 commented on PR #24730:
URL: https://github.com/apache/flink/pull/24730#issuecomment-2081927244

   @robobario Thanks for your kind suggestions. PR is updated, PTAL~
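   For context, a sketch of what the PR title describes, assuming the flag in question is Jackson's SerializationFeature.FLUSH_AFTER_WRITE_VALUE (enabled by default), which makes the writer flush the underlying stream once per written record:
   
       import com.fasterxml.jackson.databind.ObjectWriter;
       import com.fasterxml.jackson.databind.SerializationFeature;
       import com.fasterxml.jackson.dataformat.csv.CsvMapper;
       import com.fasterxml.jackson.dataformat.csv.CsvSchema;
   
       import java.util.Collections;
   
       public class CsvFlushExample {
           public static void main(String[] args) throws Exception {
               CsvMapper mapper = new CsvMapper();
               CsvSchema schema = CsvSchema.builder().addColumn("a").build();
               // Disable per-record flushing; callers flush explicitly instead.
               ObjectWriter writer =
                       mapper.writer(schema).without(SerializationFeature.FLUSH_AFTER_WRITE_VALUE);
               System.out.println(writer.writeValueAsString(Collections.singletonMap("a", "v")));
           }
       }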





Re: [PR] [BP-3.1][minor][docs] Fix route definition example in core concept docs [flink-cdc]

2024-04-28 Thread via GitHub


yuxiqian commented on PR #3270:
URL: https://github.com/apache/flink-cdc/pull/3270#issuecomment-2081901235

   Rebased, cc @PatrickRen





Re: [PR] [hotfix][docs] Fix file reference [flink]

2024-04-28 Thread via GitHub


1996fanrui merged PR #24734:
URL: https://github.com/apache/flink/pull/24734





Re: [PR] [hotfix][docs] Fix typo in checkpoints documentation [flink]

2024-04-28 Thread via GitHub


1996fanrui merged PR #24738:
URL: https://github.com/apache/flink/pull/24738





Re: [PR] [FLINK-33211][table] support flink table lineage [flink]

2024-04-28 Thread via GitHub


HuangZhenQiu commented on PR #24618:
URL: https://github.com/apache/flink/pull/24618#issuecomment-2081875974

   @PatrickRen 
   Thanks for reviewing the PR. For testing purposes, I only added a lineage provider implementation for the values-related source functions and input format. I will add a lineage provider for Hive in a separate PR.





Re: [PR] [FLINK-33211][table] support flink table lineage [flink]

2024-04-28 Thread via GitHub


HuangZhenQiu commented on code in PR #24618:
URL: https://github.com/apache/flink/pull/24618#discussion_r1582530548


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/lineage/TableDataSetSchemaFacet.java:
##
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.lineage;
+
+import org.apache.flink.streaming.api.lineage.DatasetSchemaFacet;
+import org.apache.flink.streaming.api.lineage.DatasetSchemaField;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.Map;
+
+/** Default implementation for DatasetSchemaFacet. */
+public class TableDataSetSchemaFacet implements DatasetSchemaFacet {

Review Comment:
   I agree. It is just for exposing the data in a structured way. After the 
implementation, I feel we probably don't need to expose CatalogContext and 
CatalogBaseTable to users. 






[jira] [Updated] (FLINK-34379) table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError

2024-04-28 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu updated FLINK-34379:
--
Fix Version/s: 1.20.0

> table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError
> --
>
> Key: FLINK-34379
> URL: https://issues.apache.org/jira/browse/FLINK-34379
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.2, 1.18.1
> Environment: 1.17.1
>Reporter: zhu
>Assignee: Jeyhun Karimov
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> When using batch computing, I UNION ALL about 50 tables and then join the 
> result with another table. When the execution plan is compiled, an 
> OutOfMemoryError: Java heap space is thrown; this was not a problem in 
> 1.15.2, but both 1.17.2 and 1.18.1 throw the same error, which causes the 
> jobmanager to restart. It has been found that this is caused by 
> table.optimizer.dynamic-filtering.enabled, which defaults to true. When I 
> set table.optimizer.dynamic-filtering.enabled to false, the plan compiles 
> and executes normally.
> code
> TableEnvironment.create(EnvironmentSettings.newInstance()
> .withConfiguration(configuration)
> .inBatchMode().build())
> sql=select att,filename,'table0' as mo_name from table0 UNION All select 
> att,filename,'table1' as mo_name from table1 UNION All select 
> att,filename,'table2' as mo_name from table2 UNION All select 
> att,filename,'table3' as mo_name from table3 UNION All select 
> att,filename,'table4' as mo_name from table4 UNION All select 
> att,filename,'table5' as mo_name from table5 UNION All select 
> att,filename,'table6' as mo_name from table6 UNION All select 
> att,filename,'table7' as mo_name from table7 UNION All select 
> att,filename,'table8' as mo_name from table8 UNION All select 
> att,filename,'table9' as mo_name from table9 UNION All select 
> att,filename,'table10' as mo_name from table10 UNION All select 
> att,filename,'table11' as mo_name from table11 UNION All select 
> att,filename,'table12' as mo_name from table12 UNION All select 
> att,filename,'table13' as mo_name from table13 UNION All select 
> att,filename,'table14' as mo_name from table14 UNION All select 
> att,filename,'table15' as mo_name from table15 UNION All select 
> att,filename,'table16' as mo_name from table16 UNION All select 
> att,filename,'table17' as mo_name from table17 UNION All select 
> att,filename,'table18' as mo_name from table18 UNION All select 
> att,filename,'table19' as mo_name from table19 UNION All select 
> att,filename,'table20' as mo_name from table20 UNION All select 
> att,filename,'table21' as mo_name from table21 UNION All select 
> att,filename,'table22' as mo_name from table22 UNION All select 
> att,filename,'table23' as mo_name from table23 UNION All select 
> att,filename,'table24' as mo_name from table24 UNION All select 
> att,filename,'table25' as mo_name from table25 UNION All select 
> att,filename,'table26' as mo_name from table26 UNION All select 
> att,filename,'table27' as mo_name from table27 UNION All select 
> att,filename,'table28' as mo_name from table28 UNION All select 
> att,filename,'table29' as mo_name from table29 UNION All select 
> att,filename,'table30' as mo_name from table30 UNION All select 
> att,filename,'table31' as mo_name from table31 UNION All select 
> att,filename,'table32' as mo_name from table32 UNION All select 
> att,filename,'table33' as mo_name from table33 UNION All select 
> att,filename,'table34' as mo_name from table34 UNION All select 
> att,filename,'table35' as mo_name from table35 UNION All select 
> att,filename,'table36' as mo_name from table36 UNION All select 
> att,filename,'table37' as mo_name from table37 UNION All select 
> att,filename,'table38' as mo_name from table38 UNION All select 
> att,filename,'table39' as mo_name from table39 UNION All select 
> att,filename,'table40' as mo_name from table40 UNION All select 
> att,filename,'table41' as mo_name from table41 UNION All select 
> att,filename,'table42' as mo_name from table42 UNION All select 
> att,filename,'table43' as mo_name from table43 UNION All select 
> att,filename,'table44' as mo_name from table44 UNION All select 
> att,filename,'table45' as mo_name from table45 UNION All select 
> att,filename,'table46' as mo_name from table46 UNION All select 
> att,filename,'table47' as mo_name from table47 UNION All select 
> att,filename,'table48' as mo_name from table48 UNION All select 
> att,filename,'table49' as mo_name from table49 UNION All select 
> att,filename,'table50' as mo_name from table50 UNION All select 
> att,filename,'table51' as mo_name from table51 UNION All select 
> att,filename,'table52' as mo_name from table52 UN
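A sketch of the workaround reported above, restating the configuration from the description (the option key is copied from the report; class and method names are the standard Table API):

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class DynamicFilteringWorkaround {
        public static void main(String[] args) {
            Configuration configuration = new Configuration();
            // Reported workaround: the option defaults to true; disabling it
            // lets the large UNION ALL plan compile without the OOM.
            configuration.setString("table.optimizer.dynamic-filtering.enabled", "false");
            TableEnvironment tEnv =
                    TableEnvironment.create(
                            EnvironmentSettings.newInstance()
                                    .withConfiguration(configuration)
                                    .inBatchMode()
                                    .build());
        }
    }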

[jira] [Resolved] (FLINK-34379) table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError

2024-04-28 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu resolved FLINK-34379.
---
Resolution: Fixed

> table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError
> --
>
> Key: FLINK-34379
> URL: https://issues.apache.org/jira/browse/FLINK-34379
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.2, 1.18.1
> Environment: 1.17.1
>Reporter: zhu
>Assignee: Jeyhun Karimov
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> When using batch computing, I UNION ALL about 50 tables and then join the 
> result with another table. When the execution plan is compiled, an 
> OutOfMemoryError: Java heap space is thrown; this was not a problem in 
> 1.15.2, but both 1.17.2 and 1.18.1 throw the same error, which causes the 
> jobmanager to restart. It has been found that this is caused by 
> table.optimizer.dynamic-filtering.enabled, which defaults to true. When I 
> set table.optimizer.dynamic-filtering.enabled to false, the plan compiles 
> and executes normally.
> code
> TableEnvironment.create(EnvironmentSettings.newInstance()
> .withConfiguration(configuration)
> .inBatchMode().build())
> sql=select att,filename,'table0' as mo_name from table0 UNION All select 
> att,filename,'table1' as mo_name from table1 UNION All select 
> att,filename,'table2' as mo_name from table2 UNION All select 
> att,filename,'table3' as mo_name from table3 UNION All select 
> att,filename,'table4' as mo_name from table4 UNION All select 
> att,filename,'table5' as mo_name from table5 UNION All select 
> att,filename,'table6' as mo_name from table6 UNION All select 
> att,filename,'table7' as mo_name from table7 UNION All select 
> att,filename,'table8' as mo_name from table8 UNION All select 
> att,filename,'table9' as mo_name from table9 UNION All select 
> att,filename,'table10' as mo_name from table10 UNION All select 
> att,filename,'table11' as mo_name from table11 UNION All select 
> att,filename,'table12' as mo_name from table12 UNION All select 
> att,filename,'table13' as mo_name from table13 UNION All select 
> att,filename,'table14' as mo_name from table14 UNION All select 
> att,filename,'table15' as mo_name from table15 UNION All select 
> att,filename,'table16' as mo_name from table16 UNION All select 
> att,filename,'table17' as mo_name from table17 UNION All select 
> att,filename,'table18' as mo_name from table18 UNION All select 
> att,filename,'table19' as mo_name from table19 UNION All select 
> att,filename,'table20' as mo_name from table20 UNION All select 
> att,filename,'table21' as mo_name from table21 UNION All select 
> att,filename,'table22' as mo_name from table22 UNION All select 
> att,filename,'table23' as mo_name from table23 UNION All select 
> att,filename,'table24' as mo_name from table24 UNION All select 
> att,filename,'table25' as mo_name from table25 UNION All select 
> att,filename,'table26' as mo_name from table26 UNION All select 
> att,filename,'table27' as mo_name from table27 UNION All select 
> att,filename,'table28' as mo_name from table28 UNION All select 
> att,filename,'table29' as mo_name from table29 UNION All select 
> att,filename,'table30' as mo_name from table30 UNION All select 
> att,filename,'table31' as mo_name from table31 UNION All select 
> att,filename,'table32' as mo_name from table32 UNION All select 
> att,filename,'table33' as mo_name from table33 UNION All select 
> att,filename,'table34' as mo_name from table34 UNION All select 
> att,filename,'table35' as mo_name from table35 UNION All select 
> att,filename,'table36' as mo_name from table36 UNION All select 
> att,filename,'table37' as mo_name from table37 UNION All select 
> att,filename,'table38' as mo_name from table38 UNION All select 
> att,filename,'table39' as mo_name from table39 UNION All select 
> att,filename,'table40' as mo_name from table40 UNION All select 
> att,filename,'table41' as mo_name from table41 UNION All select 
> att,filename,'table42' as mo_name from table42 UNION All select 
> att,filename,'table43' as mo_name from table43 UNION All select 
> att,filename,'table44' as mo_name from table44 UNION All select 
> att,filename,'table45' as mo_name from table45 UNION All select 
> att,filename,'table46' as mo_name from table46 UNION All select 
> att,filename,'table47' as mo_name from table47 UNION All select 
> att,filename,'table48' as mo_name from table48 UNION All select 
> att,filename,'table49' as mo_name from table49 UNION All select 
> att,filename,'table50' as mo_name from table50 UNION All select 
> att,filename,'table51' as mo_name from table51 UNION All select 
> att,filename,'table52' as mo_name from table52 UNIO

[jira] [Assigned] (FLINK-34379) table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError

2024-04-28 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu reassigned FLINK-34379:
-

Assignee: Jeyhun Karimov

> table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError
> --
>
> Key: FLINK-34379
> URL: https://issues.apache.org/jira/browse/FLINK-34379
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.2, 1.18.1
> Environment: 1.17.1
>Reporter: zhu
>Assignee: Jeyhun Karimov
>Priority: Critical
>  Labels: pull-request-available
>
> When using batch computing, I UNION ALL about 50 tables and then join the 
> result with another table. When the execution plan is compiled, an 
> OutOfMemoryError: Java heap space is thrown; this was not a problem in 
> 1.15.2, but both 1.17.2 and 1.18.1 throw the same error, which causes the 
> jobmanager to restart. It has been found that this is caused by 
> table.optimizer.dynamic-filtering.enabled, which defaults to true. When I 
> set table.optimizer.dynamic-filtering.enabled to false, the plan compiles 
> and executes normally.
> code
> TableEnvironment.create(EnvironmentSettings.newInstance()
> .withConfiguration(configuration)
> .inBatchMode().build())
> sql=select att,filename,'table0' as mo_name from table0 UNION All select 
> att,filename,'table1' as mo_name from table1 UNION All select 
> att,filename,'table2' as mo_name from table2 UNION All select 
> att,filename,'table3' as mo_name from table3 UNION All select 
> att,filename,'table4' as mo_name from table4 UNION All select 
> att,filename,'table5' as mo_name from table5 UNION All select 
> att,filename,'table6' as mo_name from table6 UNION All select 
> att,filename,'table7' as mo_name from table7 UNION All select 
> att,filename,'table8' as mo_name from table8 UNION All select 
> att,filename,'table9' as mo_name from table9 UNION All select 
> att,filename,'table10' as mo_name from table10 UNION All select 
> att,filename,'table11' as mo_name from table11 UNION All select 
> att,filename,'table12' as mo_name from table12 UNION All select 
> att,filename,'table13' as mo_name from table13 UNION All select 
> att,filename,'table14' as mo_name from table14 UNION All select 
> att,filename,'table15' as mo_name from table15 UNION All select 
> att,filename,'table16' as mo_name from table16 UNION All select 
> att,filename,'table17' as mo_name from table17 UNION All select 
> att,filename,'table18' as mo_name from table18 UNION All select 
> att,filename,'table19' as mo_name from table19 UNION All select 
> att,filename,'table20' as mo_name from table20 UNION All select 
> att,filename,'table21' as mo_name from table21 UNION All select 
> att,filename,'table22' as mo_name from table22 UNION All select 
> att,filename,'table23' as mo_name from table23 UNION All select 
> att,filename,'table24' as mo_name from table24 UNION All select 
> att,filename,'table25' as mo_name from table25 UNION All select 
> att,filename,'table26' as mo_name from table26 UNION All select 
> att,filename,'table27' as mo_name from table27 UNION All select 
> att,filename,'table28' as mo_name from table28 UNION All select 
> att,filename,'table29' as mo_name from table29 UNION All select 
> att,filename,'table30' as mo_name from table30 UNION All select 
> att,filename,'table31' as mo_name from table31 UNION All select 
> att,filename,'table32' as mo_name from table32 UNION All select 
> att,filename,'table33' as mo_name from table33 UNION All select 
> att,filename,'table34' as mo_name from table34 UNION All select 
> att,filename,'table35' as mo_name from table35 UNION All select 
> att,filename,'table36' as mo_name from table36 UNION All select 
> att,filename,'table37' as mo_name from table37 UNION All select 
> att,filename,'table38' as mo_name from table38 UNION All select 
> att,filename,'table39' as mo_name from table39 UNION All select 
> att,filename,'table40' as mo_name from table40 UNION All select 
> att,filename,'table41' as mo_name from table41 UNION All select 
> att,filename,'table42' as mo_name from table42 UNION All select 
> att,filename,'table43' as mo_name from table43 UNION All select 
> att,filename,'table44' as mo_name from table44 UNION All select 
> att,filename,'table45' as mo_name from table45 UNION All select 
> att,filename,'table46' as mo_name from table46 UNION All select 
> att,filename,'table47' as mo_name from table47 UNION All select 
> att,filename,'table48' as mo_name from table48 UNION All select 
> att,filename,'table49' as mo_name from table49 UNION All select 
> att,filename,'table50' as mo_name from table50 UNION All select 
> att,filename,'table51' as mo_name from table51 UNION All select 
> att,filename,'table52' as mo_name from table52 UNION All select 
> att,

[jira] [Commented] (FLINK-34379) table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError

2024-04-28 Thread dalongliu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841833#comment-17841833
 ] 

dalongliu commented on FLINK-34379:
---

[~jeyhunkarimov] Could you please backport it to release-1.19, 1.18, and 1.17?

> table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError
> --
>
> Key: FLINK-34379
> URL: https://issues.apache.org/jira/browse/FLINK-34379
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.2, 1.18.1
> Environment: 1.17.1
>Reporter: zhu
>Priority: Critical
>  Labels: pull-request-available
>
> When using batch computing, I UNION ALL about 50 tables and then join the 
> result with another table. When the execution plan is compiled, an 
> OutOfMemoryError: Java heap space is thrown; this was not a problem in 
> 1.15.2, but both 1.17.2 and 1.18.1 throw the same error, which causes the 
> jobmanager to restart. It has been found that this is caused by 
> table.optimizer.dynamic-filtering.enabled, which defaults to true. When I 
> set table.optimizer.dynamic-filtering.enabled to false, the plan compiles 
> and executes normally.
> code
> TableEnvironment.create(EnvironmentSettings.newInstance()
> .withConfiguration(configuration)
> .inBatchMode().build())
> sql=select att,filename,'table0' as mo_name from table0 UNION All select 
> att,filename,'table1' as mo_name from table1 UNION All select 
> att,filename,'table2' as mo_name from table2 UNION All select 
> att,filename,'table3' as mo_name from table3 UNION All select 
> att,filename,'table4' as mo_name from table4 UNION All select 
> att,filename,'table5' as mo_name from table5 UNION All select 
> att,filename,'table6' as mo_name from table6 UNION All select 
> att,filename,'table7' as mo_name from table7 UNION All select 
> att,filename,'table8' as mo_name from table8 UNION All select 
> att,filename,'table9' as mo_name from table9 UNION All select 
> att,filename,'table10' as mo_name from table10 UNION All select 
> att,filename,'table11' as mo_name from table11 UNION All select 
> att,filename,'table12' as mo_name from table12 UNION All select 
> att,filename,'table13' as mo_name from table13 UNION All select 
> att,filename,'table14' as mo_name from table14 UNION All select 
> att,filename,'table15' as mo_name from table15 UNION All select 
> att,filename,'table16' as mo_name from table16 UNION All select 
> att,filename,'table17' as mo_name from table17 UNION All select 
> att,filename,'table18' as mo_name from table18 UNION All select 
> att,filename,'table19' as mo_name from table19 UNION All select 
> att,filename,'table20' as mo_name from table20 UNION All select 
> att,filename,'table21' as mo_name from table21 UNION All select 
> att,filename,'table22' as mo_name from table22 UNION All select 
> att,filename,'table23' as mo_name from table23 UNION All select 
> att,filename,'table24' as mo_name from table24 UNION All select 
> att,filename,'table25' as mo_name from table25 UNION All select 
> att,filename,'table26' as mo_name from table26 UNION All select 
> att,filename,'table27' as mo_name from table27 UNION All select 
> att,filename,'table28' as mo_name from table28 UNION All select 
> att,filename,'table29' as mo_name from table29 UNION All select 
> att,filename,'table30' as mo_name from table30 UNION All select 
> att,filename,'table31' as mo_name from table31 UNION All select 
> att,filename,'table32' as mo_name from table32 UNION All select 
> att,filename,'table33' as mo_name from table33 UNION All select 
> att,filename,'table34' as mo_name from table34 UNION All select 
> att,filename,'table35' as mo_name from table35 UNION All select 
> att,filename,'table36' as mo_name from table36 UNION All select 
> att,filename,'table37' as mo_name from table37 UNION All select 
> att,filename,'table38' as mo_name from table38 UNION All select 
> att,filename,'table39' as mo_name from table39 UNION All select 
> att,filename,'table40' as mo_name from table40 UNION All select 
> att,filename,'table41' as mo_name from table41 UNION All select 
> att,filename,'table42' as mo_name from table42 UNION All select 
> att,filename,'table43' as mo_name from table43 UNION All select 
> att,filename,'table44' as mo_name from table44 UNION All select 
> att,filename,'table45' as mo_name from table45 UNION All select 
> att,filename,'table46' as mo_name from table46 UNION All select 
> att,filename,'table47' as mo_name from table47 UNION All select 
> att,filename,'table48' as mo_name from table48 UNION All select 
> att,filename,'table49' as mo_name from table49 UNION All select 
> att,filename,'table50' as mo_name from table50 UNION All select 
> att,filename,'table51' as mo_name from table51 UNION All select 
> att,filename,

[jira] [Commented] (FLINK-34379) table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError

2024-04-28 Thread dalongliu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841832#comment-17841832
 ] 

dalongliu commented on FLINK-34379:
---

Merged in master: cca14d4210634d481cacb11354e870807d570561

> table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError
> --
>
> Key: FLINK-34379
> URL: https://issues.apache.org/jira/browse/FLINK-34379
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.2, 1.18.1
> Environment: 1.17.1
>Reporter: zhu
>Priority: Critical
>  Labels: pull-request-available
>
> When using batch computing, I UNION ALL about 50 tables and then join the 
> result with another table. When the execution plan is compiled, an 
> OutOfMemoryError: Java heap space is thrown; this was not a problem in 
> 1.15.2, but both 1.17.2 and 1.18.1 throw the same error, which causes the 
> jobmanager to restart. It has been found that this is caused by 
> table.optimizer.dynamic-filtering.enabled, which defaults to true. When I 
> set table.optimizer.dynamic-filtering.enabled to false, the plan compiles 
> and executes normally.
> code
> TableEnvironment.create(EnvironmentSettings.newInstance()
> .withConfiguration(configuration)
> .inBatchMode().build())
> sql=select att,filename,'table0' as mo_name from table0 UNION All select 
> att,filename,'table1' as mo_name from table1 UNION All select 
> att,filename,'table2' as mo_name from table2 UNION All select 
> att,filename,'table3' as mo_name from table3 UNION All select 
> att,filename,'table4' as mo_name from table4 UNION All select 
> att,filename,'table5' as mo_name from table5 UNION All select 
> att,filename,'table6' as mo_name from table6 UNION All select 
> att,filename,'table7' as mo_name from table7 UNION All select 
> att,filename,'table8' as mo_name from table8 UNION All select 
> att,filename,'table9' as mo_name from table9 UNION All select 
> att,filename,'table10' as mo_name from table10 UNION All select 
> att,filename,'table11' as mo_name from table11 UNION All select 
> att,filename,'table12' as mo_name from table12 UNION All select 
> att,filename,'table13' as mo_name from table13 UNION All select 
> att,filename,'table14' as mo_name from table14 UNION All select 
> att,filename,'table15' as mo_name from table15 UNION All select 
> att,filename,'table16' as mo_name from table16 UNION All select 
> att,filename,'table17' as mo_name from table17 UNION All select 
> att,filename,'table18' as mo_name from table18 UNION All select 
> att,filename,'table19' as mo_name from table19 UNION All select 
> att,filename,'table20' as mo_name from table20 UNION All select 
> att,filename,'table21' as mo_name from table21 UNION All select 
> att,filename,'table22' as mo_name from table22 UNION All select 
> att,filename,'table23' as mo_name from table23 UNION All select 
> att,filename,'table24' as mo_name from table24 UNION All select 
> att,filename,'table25' as mo_name from table25 UNION All select 
> att,filename,'table26' as mo_name from table26 UNION All select 
> att,filename,'table27' as mo_name from table27 UNION All select 
> att,filename,'table28' as mo_name from table28 UNION All select 
> att,filename,'table29' as mo_name from table29 UNION All select 
> att,filename,'table30' as mo_name from table30 UNION All select 
> att,filename,'table31' as mo_name from table31 UNION All select 
> att,filename,'table32' as mo_name from table32 UNION All select 
> att,filename,'table33' as mo_name from table33 UNION All select 
> att,filename,'table34' as mo_name from table34 UNION All select 
> att,filename,'table35' as mo_name from table35 UNION All select 
> att,filename,'table36' as mo_name from table36 UNION All select 
> att,filename,'table37' as mo_name from table37 UNION All select 
> att,filename,'table38' as mo_name from table38 UNION All select 
> att,filename,'table39' as mo_name from table39 UNION All select 
> att,filename,'table40' as mo_name from table40 UNION All select 
> att,filename,'table41' as mo_name from table41 UNION All select 
> att,filename,'table42' as mo_name from table42 UNION All select 
> att,filename,'table43' as mo_name from table43 UNION All select 
> att,filename,'table44' as mo_name from table44 UNION All select 
> att,filename,'table45' as mo_name from table45 UNION All select 
> att,filename,'table46' as mo_name from table46 UNION All select 
> att,filename,'table47' as mo_name from table47 UNION All select 
> att,filename,'table48' as mo_name from table48 UNION All select 
> att,filename,'table49' as mo_name from table49 UNION All select 
> att,filename,'table50' as mo_name from table50 UNION All select 
> att,filename,'table51' as mo_name from table51 UNION All select 
> att,filename,'table52' as mo

Re: [PR] [FLINK-34379][table] Fix OutOfMemoryError with large queries [flink]

2024-04-28 Thread via GitHub


lsyldliu merged PR #24600:
URL: https://github.com/apache/flink/pull/24600





Re: [PR] [FLINK-35255]DataSinkWriterOperator should override snapshotState and processWatermark method [flink-cdc]

2024-04-28 Thread via GitHub


yanghuaiGit commented on PR #3279:
URL: https://github.com/apache/flink-cdc/pull/3279#issuecomment-2081852389

   @PatrickRen Excuse me, do you have time to take a look at this?





[PR] [FLINK-35255]DataSinkWriterOperator should override snapshotState and processWatermark method [flink-cdc]

2024-04-28 Thread via GitHub


yanghuaiGit opened a new pull request, #3279:
URL: https://github.com/apache/flink-cdc/pull/3279

   … for the snapshotState and processWatermark methods





Re: [PR] [FLINK-35255]DataSinkWriterOperator should override snapshotState and processWatermark method [flink-cdc]

2024-04-28 Thread via GitHub


PatrickRen commented on PR #3271:
URL: https://github.com/apache/flink-cdc/pull/3271#issuecomment-2081847614

   @yanghuaiGit Could you also backport this commit to release-3.1? Thanks





[jira] [Commented] (FLINK-35255) DataSinkWriterOperator should override snapshotState and processWatermark method

2024-04-28 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841827#comment-17841827
 ] 

Qingsheng Ren commented on FLINK-35255:
---

flink-cdc master: 23a67dcdb985f135e984e8e1afa8d1159946f95f

> DataSinkWriterOperator should override snapshotState and processWatermark 
> method
> 
>
> Key: FLINK-35255
> URL: https://issues.apache.org/jira/browse/FLINK-35255
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: yanghuai
>Assignee: yanghuai
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> DataSinkWriterOperator only overrides 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator#initializeState, 
> but does not override 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator#snapshotState 
> and 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator#processWatermark.
>  
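A minimal sketch of the kind of fix the issue calls for: forwarding both calls to the operator that DataSinkWriterOperator wraps. The class skeleton and field name below are illustrative assumptions, not the actual patch:

    import org.apache.flink.runtime.state.StateSnapshotContext;
    import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
    import org.apache.flink.streaming.api.watermark.Watermark;

    class DelegatingOperatorSketch<T> extends AbstractStreamOperator<T> {

        // Assumed field: the wrapped operator that must also see these calls.
        private final AbstractStreamOperator<T> wrappedOperator;

        DelegatingOperatorSketch(AbstractStreamOperator<T> wrappedOperator) {
            this.wrappedOperator = wrappedOperator;
        }

        @Override
        public void snapshotState(StateSnapshotContext context) throws Exception {
            wrappedOperator.snapshotState(context);
        }

        @Override
        public void processWatermark(Watermark mark) throws Exception {
            wrappedOperator.processWatermark(mark);
        }
    }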





Re: [PR] [FLINK-35255]DataSinkWriterOperator should override snapshotState and processWatermark method [flink-cdc]

2024-04-28 Thread via GitHub


PatrickRen merged PR #3271:
URL: https://github.com/apache/flink-cdc/pull/3271





[jira] [Assigned] (FLINK-35255) DataSinkWriterOperator should override snapshotState and processWatermark method

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35255:
-

Assignee: yanghuai

> DataSinkWriterOperator should override snapshotState and processWatermark 
> method
> 
>
> Key: FLINK-35255
> URL: https://issues.apache.org/jira/browse/FLINK-35255
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: yanghuai
>Assignee: yanghuai
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> DataSinkWriterOperator only overrides 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator#initializeState, 
> but does not override 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator#snapshotState 
> and 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator#processWatermark.
>  





[jira] [Updated] (FLINK-35255) DataSinkWriterOperator should override snapshotState and processWatermark method

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren updated FLINK-35255:
--
Fix Version/s: cdc-3.1.0

> DataSinkWriterOperator should override snapshotState and processWatermark 
> method
> 
>
> Key: FLINK-35255
> URL: https://issues.apache.org/jira/browse/FLINK-35255
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: yanghuai
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> DataSinkWriterOperator only overrides 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator#initializeState, 
> but does not override 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator#snapshotState 
> and 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator#processWatermark.
>  





[jira] [Commented] (FLINK-34878) [Feature][Pipeline] Flink CDC pipeline transform supports CASE WHEN

2024-04-28 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841825#comment-17841825
 ] 

Qingsheng Ren commented on FLINK-34878:
---

flink-cdc master: 75a553eb924a9108f57ce6c388aef69facc73af0

> [Feature][Pipeline] Flink CDC pipeline transform supports CASE WHEN
> ---
>
> Key: FLINK-34878
> URL: https://issues.apache.org/jira/browse/FLINK-34878
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Flink CDC Issue Import
>Assignee: Wenkai Qi
>Priority: Major
>  Labels: github-import, pull-request-available
> Fix For: cdc-3.1.0
>
>
> ### Search before asking
> - [X] I searched in the 
> [issues|https://github.com/ververica/flink-cdc-connectors/issues] and found 
> nothing similar.
> ### Motivation
> To be supplemented.
> ### Solution
> To be supplemented.
> ### Alternatives
> None.
> ### Anything else?
> To be supplemented.
> ### Are you willing to submit a PR?
> - [X] I'm willing to submit a PR!
>  Imported from GitHub 
> Url: https://github.com/apache/flink-cdc/issues/3079
> Created by: [aiwenmo|https://github.com/aiwenmo]
> Labels: enhancement, 
> Created at: Mon Feb 26 23:47:53 CST 2024
> State: open





[jira] [Assigned] (FLINK-34878) [Feature][Pipeline] Flink CDC pipeline transform supports CASE WHEN

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-34878:
-

Assignee: Wenkai Qi

> [Feature][Pipeline] Flink CDC pipeline transform supports CASE WHEN
> ---
>
> Key: FLINK-34878
> URL: https://issues.apache.org/jira/browse/FLINK-34878
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Flink CDC Issue Import
>Assignee: Wenkai Qi
>Priority: Major
>  Labels: github-import, pull-request-available
> Fix For: cdc-3.1.0
>
>
> ### Search before asking
> - [X] I searched in the 
> [issues|https://github.com/ververica/flink-cdc-connectors/issues] and found 
> nothing similar.
> ### Motivation
> To be supplemented.
> ### Solution
> To be supplemented.
> ### Alternatives
> None.
> ### Anything else?
> To be supplemented.
> ### Are you willing to submit a PR?
> - [X] I'm willing to submit a PR!
>  Imported from GitHub 
> Url: https://github.com/apache/flink-cdc/issues/3079
> Created by: [aiwenmo|https://github.com/aiwenmo]
> Labels: enhancement, 
> Created at: Mon Feb 26 23:47:53 CST 2024
> State: open





Re: [PR] [FLINK-34878][cdc] Flink CDC pipeline transform supports CASE WHEN [flink-cdc]

2024-04-28 Thread via GitHub


PatrickRen merged PR #3228:
URL: https://github.com/apache/flink-cdc/pull/3228


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34878) [Feature][Pipeline] Flink CDC pipeline transform supports CASE WHEN

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren updated FLINK-34878:
--
Fix Version/s: cdc-3.1.0

> [Feature][Pipeline] Flink CDC pipeline transform supports CASE WHEN
> ---
>
> Key: FLINK-34878
> URL: https://issues.apache.org/jira/browse/FLINK-34878
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Flink CDC Issue Import
>Priority: Major
>  Labels: github-import, pull-request-available
> Fix For: cdc-3.1.0
>
>
> ### Search before asking
> - [X] I searched in the 
> [issues|https://github.com/ververica/flink-cdc-connectors/issues] and found 
> nothing similar.
> ### Motivation
> To be supplemented.
> ### Solution
> To be supplemented.
> ### Alternatives
> None.
> ### Anything else?
> To be supplemented.
> ### Are you willing to submit a PR?
> - [X] I'm willing to submit a PR!
>  Imported from GitHub 
> Url: https://github.com/apache/flink-cdc/issues/3079
> Created by: [aiwenmo|https://github.com/aiwenmo]
> Labels: enhancement, 
> Created at: Mon Feb 26 23:47:53 CST 2024
> State: open



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35259) FlinkCDC Pipeline transform can't deal timestamp field

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35259:
-

Assignee: Wenkai Qi

> FlinkCDC Pipeline transform can't deal timestamp field
> --
>
> Key: FLINK-35259
> URL: https://issues.apache.org/jira/browse/FLINK-35259
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: 3.1.0
>Reporter: Wenkai Qi
>Assignee: Wenkai Qi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When the original table contains fields of type Timestamp, they cannot be 
> converted properly.
> When the added calculation columns contain fields of type Timestamp, they 
> cannot be converted properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35259) FlinkCDC Pipeline transform can't deal timestamp field

2024-04-28 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841824#comment-17841824
 ] 

Qingsheng Ren commented on FLINK-35259:
---

flink-cdc master: 0108d0e5d18c55873c3d67e9caad58b9d2148d6a

> FlinkCDC Pipeline transform can't deal timestamp field
> --
>
> Key: FLINK-35259
> URL: https://issues.apache.org/jira/browse/FLINK-35259
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: 3.1.0
>Reporter: Wenkai Qi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When the original table contains fields of type Timestamp, they cannot be 
> converted properly.
> When the added calculation columns contain fields of type Timestamp, they 
> cannot be converted properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [hotfix][docs] Fix typo in checkpoints documentation [flink]

2024-04-28 Thread via GitHub


caicancai commented on PR #24738:
URL: https://github.com/apache/flink/pull/24738#issuecomment-2081833742

   > Hi, @caicancai Thanks for catching this! Would you mind checking the whole 
doc for this case? It would be appreciated!
   
   Thanks for the reminder. I have searched globally and this is the only place.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34470) Transactional message + Table api kafka source with 'latest-offset' scan bound mode causes indefinitely hanging

2024-04-28 Thread dongwoo.kim (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841823#comment-17841823
 ] 

dongwoo.kim commented on FLINK-34470:
-

Hi [~m.orazow], 
I’ve just made a PR and would appreciate your review.
I also left a comment about the metric issue in the discussion area and would 
appreciate any feedback on that.
Thanks

Best,
Dongwoo

> Transactional message + Table api kafka source with 'latest-offset' scan 
> bound mode causes indefinitely hanging
> ---
>
> Key: FLINK-34470
> URL: https://issues.apache.org/jira/browse/FLINK-34470
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: kafka-3.1.0
>Reporter: dongwoo.kim
>Priority: Major
>  Labels: pull-request-available
>
> h2. Summary  
> Hi, we have faced an issue with transactional messages and the Table API 
> Kafka source. If we configure *'scan.bounded.mode'* to *'latest-offset'*, the 
> Flink SQL request times out after hanging. We can always reproduce this 
> unexpected behavior by following the steps below.
> This is related to this 
> [issue|https://issues.apache.org/jira/browse/FLINK-33484] too.
> h2. How to reproduce
> 1. Deploy a transactional producer and stop it after producing a certain 
> amount of messages
> 2. Configure *'scan.bounded.mode'* to *'latest-offset'* and submit a simple 
> query such as getting the count of the produced messages
> 3. The Flink SQL job gets stuck and times out.
> h2. Cause
> A transactional producer always produces [control 
> records|https://kafka.apache.org/documentation/#controlbatch] at the end of 
> the transaction, and these control messages are not returned by 
> {*}consumer.poll(){*} (they are filtered internally). In the 
> *KafkaPartitionSplitReader* code, a split is finished only when the 
> *lastRecord.offset() >= stoppingOffset - 1* condition is met. This might work 
> well with non-transactional messages or in a streaming environment, but in 
> some batch use cases it causes unexpected hanging.
> h2. Proposed solution
> {code:java}
> if (consumer.position(tp) >= stoppingOffset) {
> recordsBySplits.setPartitionStoppingOffset(tp, 
> stoppingOffset);
> finishSplitAtRecord(
> tp,
> stoppingOffset,
> lastRecord.offset(),
> finishedPartitions,
> recordsBySplits);
> }
> {code}
> Replacing the if condition with *consumer.position(tp) >= stoppingOffset* in 
> [here|https://github.com/apache/flink-connector-kafka/blob/15f2662eccf461d9d539ed87a78c9851cd17fa43/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReader.java#L137]
>  can solve the problem. 
> *consumer.position(tp)* returns the next record's offset if it exists, and the 
> last record's offset if the next record doesn't exist. 
> With this, KafkaPartitionSplitReader is able to finish the split even when the 
> stopping offset is configured to a control record's offset. 
> I would be happy to implement this fix if we can reach an agreement. 
> Thanks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35158][runtime] Error handling in StateFuture's callback [flink]

2024-04-28 Thread via GitHub


fredia commented on code in PR #24698:
URL: https://github.com/apache/flink/pull/24698#discussion_r1582504623


##
flink-core-api/src/main/java/org/apache/flink/api/common/state/v2/StateFuture.java:
##
@@ -49,7 +49,7 @@ public interface StateFuture {
  * @param action the action to perform before completing the returned 
StateFuture.
  * @return the new StateFuture.
  */
-StateFuture thenAccept(Consumer action);
+StateFuture thenAccept(ConsumerWithException action);

Review Comment:
   Here we classify exceptions into two categories:
   1. Exceptions in user code: Users are **not forced to** handle exceptions. 
For example, users can handle various internal logic exceptions in callbacks, 
or they can directly hand them over to `thenXXX()` without handling them, and 
finally the exceptions will be thrown by the mailbox.
   2. Exceptions in the asynchronous framework: directly let the job fail.
   
   We don't want to be completely aligned with `CompletableFuture`, because 
`CompletableFuture` forces callers to handle checked exceptions. For example, 
the following code is not allowed in `CompletableFuture`, but is allowed in 
`StateFuture`:
   
   ```Java
   CompletableFuture<Void> future = new CompletableFuture<>();
   future.thenAccept((v) -> {
       throw new IOException("test");  // not allowed
   });
   StateFutureImpl<Void> stateFuture = new StateFutureImpl<>(null, exceptionHandler);
   stateFuture.thenAccept(
       (v) -> {
           throw new IOException("test"); // allowed
       }
   );
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34470) Transactional message + Table api kafka source with 'latest-offset' scan bound mode causes indefinitely hanging

2024-04-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34470:
---
Labels: pull-request-available  (was: )

> Transactional message + Table api kafka source with 'latest-offset' scan 
> bound mode causes indefinitely hanging
> ---
>
> Key: FLINK-34470
> URL: https://issues.apache.org/jira/browse/FLINK-34470
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: kafka-3.1.0
>Reporter: dongwoo.kim
>Priority: Major
>  Labels: pull-request-available
>
> h2. Summary  
> Hi, we have faced an issue with transactional messages and the Table API 
> Kafka source. If we configure *'scan.bounded.mode'* to *'latest-offset'*, the 
> Flink SQL request times out after hanging. We can always reproduce this 
> unexpected behavior by following the steps below.
> This is related to this 
> [issue|https://issues.apache.org/jira/browse/FLINK-33484] too.
> h2. How to reproduce
> 1. Deploy a transactional producer and stop it after producing a certain 
> amount of messages
> 2. Configure *'scan.bounded.mode'* to *'latest-offset'* and submit a simple 
> query such as getting the count of the produced messages
> 3. The Flink SQL job gets stuck and times out.
> h2. Cause
> A transactional producer always produces [control 
> records|https://kafka.apache.org/documentation/#controlbatch] at the end of 
> the transaction, and these control messages are not returned by 
> {*}consumer.poll(){*} (they are filtered internally). In the 
> *KafkaPartitionSplitReader* code, a split is finished only when the 
> *lastRecord.offset() >= stoppingOffset - 1* condition is met. This might work 
> well with non-transactional messages or in a streaming environment, but in 
> some batch use cases it causes unexpected hanging.
> h2. Proposed solution
> {code:java}
> if (consumer.position(tp) >= stoppingOffset) {
> recordsBySplits.setPartitionStoppingOffset(tp, 
> stoppingOffset);
> finishSplitAtRecord(
> tp,
> stoppingOffset,
> lastRecord.offset(),
> finishedPartitions,
> recordsBySplits);
> }
> {code}
> Replacing the if condition with *consumer.position(tp) >= stoppingOffset* in 
> [here|https://github.com/apache/flink-connector-kafka/blob/15f2662eccf461d9d539ed87a78c9851cd17fa43/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReader.java#L137]
>  can solve the problem. 
> *consumer.position(tp)* returns the next record's offset if it exists, and the 
> last record's offset if the next record doesn't exist. 
> With this, KafkaPartitionSplitReader is able to finish the split even when the 
> stopping offset is configured to a control record's offset. 
> I would be happy to implement this fix if we can reach an agreement. 
> Thanks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34470][Connectors/Kafka] Fix indefinite blocking by adjusting stopping condition in split reader [flink-connector-kafka]

2024-04-28 Thread via GitHub


boring-cyborg[bot] commented on PR #100:
URL: 
https://github.com/apache/flink-connector-kafka/pull/100#issuecomment-2081822026

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-34470][Connectors/Kafka] Fix indefinite blocking by adjusting stopping condition in split reader [flink-connector-kafka]

2024-04-28 Thread via GitHub


dongwoo6kim opened a new pull request, #100:
URL: https://github.com/apache/flink-connector-kafka/pull/100

   ## Problem 
   When using the Flink Kafka connector in batch scenarios, consuming 
transactional messages can cause indefinite hanging.
   This issue can be easily reproduced with the following steps. 
   1. Produce transactional messages and commit them.
   2. Configure `scan.bounded.mode` to `latest-offset` and run a consumer using 
the Flink Kafka connector
   
   ## Cause
   The previous stopping condition in the `KafkaPartitionSplitReader` compared 
the offset of the last record with the `stoppingOffset`. This approach works 
for streaming use cases and batch processing of non-transactional messages. 
However, in scenarios involving transactional messages, this is insufficient.   
   
   [Control messages](https://kafka.apache.org/documentation/#controlbatch), 
which are not visible to clients, can occupy the entire range between the last 
record's offset and the stoppingOffset, which leads to indefinite blocking.
   
   ## Workaround
   I've modified the stopping condition to use `consumer.position(tp)`, which 
effectively skips any control messages present in the current poll, pointing 
directly to the next record's offset. 
   To handle edge cases, particularly when `properties.max.poll.records` is set 
to `1`, I've adjusted the fetch method to always check all assigned partitions, 
even if no records are returned in a poll.
   
    Edge case example 
   Consider partition `0`, where offsets `13` and `14` are valid records and 
`15` is a control record. If `stoppingOffset` is set to 15 for partition `0` and 
`properties.max.poll.records` is configured to `1`, checking only partitions 
that return records would miss offset 15. By consistently reviewing all 
assigned partitions, the consumer’s position jumps over the control record in 
the subsequent poll, allowing the reader to escape, as sketched below.
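   
   A rough sketch of the adjusted check under this proposal (names such as 
`getStoppingOffset` mirror the Jira snippet and are illustrative, not the exact 
patch):
   
   ```java
   // Run for every assigned partition after each poll, even when the poll
   // returned no records for that partition:
   for (TopicPartition tp : consumer.assignment()) {
       long stoppingOffset = getStoppingOffset(tp); // hypothetical lookup
       // position(tp) points past trailing control records, so the split can
       // finish even when no further data records will ever arrive.
       if (consumer.position(tp) >= stoppingOffset) {
           recordsBySplits.setPartitionStoppingOffset(tp, stoppingOffset);
           finishSplitAtRecord(tp, stoppingOffset, stoppingOffset - 1,
                   finishedPartitions, recordsBySplits);
       }
   }
   ```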
   
   ## Discussion
   To address the metric issue in 
[FLINK-33484](https://issues.apache.org/jira/browse/FLINK-33484), I think we 
need to make wrapper class of  `ConsumerRecord` for example 
`ConsumerRecordWithOffsetJump`.
   ```java
   public ConsumerRecordWithOffsetJump(ConsumerRecord<K, V> record, long offsetJump) {
       this.record = record;
       this.offsetJump = offsetJump;
   }
   ```
   And we may need a new `KafkaPartitionSplitReader` that implements 
`SplitReader<ConsumerRecordWithOffsetJump<byte[], byte[]>, 
KafkaPartitionSplit>`. 
   So when a record is emitted, it should set the current offset not just to 
`record.offset() + 1` but to `record.offset() + record.jumpValue` in 
[here](https://github.com/apache/flink-connector-kafka/blob/369e7be46a70fd50d68746498aed82105741e7d6/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaRecordEmitter.java#L54).
  
 `jumpValue` is typically 1, except for the last record of each poll where 
it's calculated as `consumer.position() - lastRecord.offset()`.
   If this sounds good to everyone, I'm happy to work on this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][docs] Fix typo in checkpoints documentation [flink]

2024-04-28 Thread via GitHub


flinkbot commented on PR #24738:
URL: https://github.com/apache/flink/pull/24738#issuecomment-2081819806

   
   ## CI report:
   
   * 2c141212fb8f6eadbbbdc3cf84710cfec18f0302 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][docs] Fix typo in checkpoints documentation [flink]

2024-04-28 Thread via GitHub


caicancai commented on PR #24738:
URL: https://github.com/apache/flink/pull/24738#issuecomment-2081815309

   cc @1996fanrui 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [hotfix][docs] Fix typo in checkpoints documentation [flink]

2024-04-28 Thread via GitHub


caicancai opened a new pull request, #24738:
URL: https://github.com/apache/flink/pull/24738

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35261) Flink CDC pipeline transform doesn't support decimal-type comparison

2024-04-28 Thread yux (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yux updated FLINK-35261:

Summary: Flink CDC pipeline transform doesn't support decimal-type 
comparison  (was: Flink CDC pipeline transform doesn't support decimal 
comparison)

> Flink CDC pipeline transform doesn't support decimal-type comparison
> 
>
> Key: FLINK-35261
> URL: https://issues.apache.org/jira/browse/FLINK-35261
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: yux
>Priority: Major
>
> It would be convenient if we could filter by comparing decimal columns to 
> number literals like:
> {{transform:}}
> {{  - source-table: XXX}}
> {{    filter: price > 50}}
> where price is a Decimal-typed column. However, currently such an expression 
> is not supported, and a runtime exception is thrown as follows:
>  
> Caused by: org.apache.flink.api.common.InvalidProgramException: Expression 
> cannot be compiled. This is a bug. Please file an issue.
> Expression: import static 
> org.apache.flink.cdc.runtime.functions.SystemFunctionUtils.*;PRICEALPHA > 50
>     at 
> org.apache.flink.cdc.runtime.operators.transform.TransformExpressionCompiler.lambda$compileExpression$0(TransformExpressionCompiler.java:62)
>  
> ~[blob_p-c3b34ad5a5a3a0bc443bb738e308b20b1da04a1f-8d419e2c927baeb0eeb40fb35c0a52dc:3.2-SNAPSHOT]
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4868)
>  
> ~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3533)
>  
> ~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2282)
>  
> ~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2159)
>  
> ~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049)
>  
> ~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache.get(LocalCache.java:3966)
>  
> ~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4863)
>  
> ~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
>     at 
> org.apache.flink.cdc.runtime.operators.transform.TransformExpressionCompiler.compileExpression(TransformExpressionCompiler.java:46)
>  
> ~[blob_p-c3b34ad5a5a3a0bc443bb738e308b20b1da04a1f-8d419e2c927baeb0eeb40fb35c0a52dc:3.2-SNAPSHOT]
>     ... 17 more
> Caused by: org.codehaus.commons.compiler.CompileException: Line 1, Column 89: 
> Cannot compare types "java.math.BigDecimal" and "int"



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35261) Flink CDC pipeline transform doesn't support decimal comparison

2024-04-28 Thread yux (Jira)
yux created FLINK-35261:
---

 Summary: Flink CDC pipeline transform doesn't support decimal 
comparison
 Key: FLINK-35261
 URL: https://issues.apache.org/jira/browse/FLINK-35261
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: yux


It would be convenient if we could filter by comparing decimal columns to number 
literals like:

{{transform:}}

{{  - source-table: XXX}}

{{    filter: price > 50}}

where price is a Decimal-typed column. However, currently such an expression is 
not supported, and a runtime exception is thrown as follows:

 

Caused by: org.apache.flink.api.common.InvalidProgramException: Expression 
cannot be compiled. This is a bug. Please file an issue.
Expression: import static 
org.apache.flink.cdc.runtime.functions.SystemFunctionUtils.*;PRICEALPHA > 50
    at 
org.apache.flink.cdc.runtime.operators.transform.TransformExpressionCompiler.lambda$compileExpression$0(TransformExpressionCompiler.java:62)
 
~[blob_p-c3b34ad5a5a3a0bc443bb738e308b20b1da04a1f-8d419e2c927baeb0eeb40fb35c0a52dc:3.2-SNAPSHOT]
    at 
org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4868)
 
~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
    at 
org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3533)
 
~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
    at 
org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2282)
 
~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
    at 
org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2159)
 
~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
    at 
org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049)
 
~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
    at 
org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache.get(LocalCache.java:3966)
 
~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
    at 
org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4863)
 
~[blob_p-d4bee45ff47f13d247ff78e6f3d164170cf71835-4816fa00c7fd16a7096498e5ed3caaee:3.2-SNAPSHOT]
    at 
org.apache.flink.cdc.runtime.operators.transform.TransformExpressionCompiler.compileExpression(TransformExpressionCompiler.java:46)
 
~[blob_p-c3b34ad5a5a3a0bc443bb738e308b20b1da04a1f-8d419e2c927baeb0eeb40fb35c0a52dc:3.2-SNAPSHOT]
    ... 17 more
Caused by: org.codehaus.commons.compiler.CompileException: Line 1, Column 89: 
Cannot compare types "java.math.BigDecimal" and "int"
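
The root cause is that Janino cannot apply a relational operator to 
java.math.BigDecimal and int. A minimal sketch of one possible fix for the 
generated expression, routing the comparison through compareTo (illustrative 
only, not the actual code generation):

{code:java}
import java.math.BigDecimal;

public class DecimalFilterSketch {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("50.5");
        // Instead of generating `price > 50`, generate a compareTo-based check:
        boolean pass = price.compareTo(BigDecimal.valueOf(50)) > 0;
        System.out.println(pass); // prints: true
    }
}
{code}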



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32622) Do not add mini-batch assigner operator if it is useless

2024-04-28 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841817#comment-17841817
 ] 

Jing Ge commented on FLINK-32622:
-

[~jeyhunkarimov] could you please backport it to 1.19?

> Do not add mini-batch assigner operator if it is useless
> 
>
> Key: FLINK-32622
> URL: https://issues.apache.org/jira/browse/FLINK-32622
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Fang Yong
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Currently, if users configure mini-batch for their SQL jobs, Flink will always 
> add a mini-batch assigner operator to the job plan even if there are no 
> agg/join operators in the job. The mini-batch operator will generate useless 
> events and cause performance issues. If mini-batch is useless for a specific 
> job, Flink should not add the mini-batch assigner even when users turn on the 
> mini-batch mechanism. 
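
For example, with mini-batch enabled globally, a plain projection/filter job 
still gets the assigner today even though nothing downstream batches. The 
options below are the standard Flink mini-batch options; the table names are 
illustrative:

{code:sql}
-- enable mini-batch globally
SET 'table.exec.mini-batch.enabled' = 'true';
SET 'table.exec.mini-batch.allow-latency' = '5 s';
SET 'table.exec.mini-batch.size' = '5000';

-- no aggregation or join: a mini-batch assigner here only adds overhead
INSERT INTO sink_table
SELECT id, UPPER(name) FROM source_table WHERE id > 0;
{code}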



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-32622) Do not add mini-batch assigner operator if it is useless

2024-04-28 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge resolved FLINK-32622.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Do not add mini-batch assigner operator if it is useless
> 
>
> Key: FLINK-32622
> URL: https://issues.apache.org/jira/browse/FLINK-32622
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Fang Yong
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Currently, if users configure mini-batch for their SQL jobs, Flink will always 
> add a mini-batch assigner operator to the job plan even if there are no 
> agg/join operators in the job. The mini-batch operator will generate useless 
> events and cause performance issues. If mini-batch is useless for a specific 
> job, Flink should not add the mini-batch assigner even when users turn on the 
> mini-batch mechanism. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32622) Do not add mini-batch assigner operator if it is useless

2024-04-28 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841816#comment-17841816
 ] 

Jing Ge commented on FLINK-32622:
-

master: b1544e4e513d2b75b350c20dbb1c17a8232c22fd

> Do not add mini-batch assigner operator if it is useless
> 
>
> Key: FLINK-32622
> URL: https://issues.apache.org/jira/browse/FLINK-32622
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Fang Yong
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
>
> Currently, if users configure mini-batch for their SQL jobs, Flink will always 
> add a mini-batch assigner operator to the job plan even if there are no 
> agg/join operators in the job. The mini-batch operator will generate useless 
> events and cause performance issues. If mini-batch is useless for a specific 
> job, Flink should not add the mini-batch assigner even when users turn on the 
> mini-batch mechanism. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-32622) Do not add mini-batch assigner operator if it is useless

2024-04-28 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-32622:
---

Assignee: Jeyhun Karimov

> Do not add mini-batch assigner operator if it is useless
> 
>
> Key: FLINK-32622
> URL: https://issues.apache.org/jira/browse/FLINK-32622
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Fang Yong
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
>
> Currently, if users configure mini-batch for their SQL jobs, Flink will always 
> add a mini-batch assigner operator to the job plan even if there are no 
> agg/join operators in the job. The mini-batch operator will generate useless 
> events and cause performance issues. If mini-batch is useless for a specific 
> job, Flink should not add the mini-batch assigner even when users turn on the 
> mini-batch mechanism. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32622][table-planner] Optimize mini-batch assignment [flink]

2024-04-28 Thread via GitHub


JingGe merged PR #23470:
URL: https://github.com/apache/flink/pull/23470


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35191][table] Support alter materialized table related syntaxes: suspend, resume, refresh, set and reset [flink]

2024-04-28 Thread via GitHub


flinkbot commented on PR #24737:
URL: https://github.com/apache/flink/pull/24737#issuecomment-2081799039

   
   ## CI report:
   
   * bd8de05cf3e81e54afd57e25f0ed7cf985925604 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-35258) Broken links to Doris in Flink CDC Documentation

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren resolved FLINK-35258.
---
Resolution: Fixed

>  Broken links to Doris in Flink CDC Documentation
> -
>
> Key: FLINK-35258
> URL: https://issues.apache.org/jira/browse/FLINK-35258
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Qingsheng Ren
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> These broken links are detected by CI: 
>  
> {code:java}
> ERROR: 3 dead links found!
> 535  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD/
>  → Status: 404
> 536  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE/
>  → Status: 404
> 537  [✖] 
> https://doris.apache.org/docs/dev/sql-manual/sql-reference/Data-Types/BOOLEAN/
>  → Status: 404 
> ERROR: 3 dead links found!
> 1008  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD/
>  → Status: 404
> 1009  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE/
>  → Status: 404
> 1010  [✖] 
> https://doris.apache.org/docs/dev/sql-manual/sql-reference/Data-Types/BOOLEAN/
>  → Status: 404{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35258) Broken links to Doris in Flink CDC Documentation

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35258:
-

Assignee: yux  (was: Qingsheng Ren)

>  Broken links to Doris in Flink CDC Documentation
> -
>
> Key: FLINK-35258
> URL: https://issues.apache.org/jira/browse/FLINK-35258
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Qingsheng Ren
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> These broken links are detected by CI: 
>  
> {code:java}
> ERROR: 3 dead links found!
> 535  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD/
>  → Status: 404
> 536  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE/
>  → Status: 404
> 537  [✖] 
> https://doris.apache.org/docs/dev/sql-manual/sql-reference/Data-Types/BOOLEAN/
>  → Status: 404 
> ERROR: 3 dead links found!
> 1008  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD/
>  → Status: 404
> 1009  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE/
>  → Status: 404
> 1010  [✖] 
> https://doris.apache.org/docs/dev/sql-manual/sql-reference/Data-Types/BOOLEAN/
>  → Status: 404{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35258) Broken links to Doris in Flink CDC Documentation

2024-04-28 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841813#comment-17841813
 ] 

Qingsheng Ren commented on FLINK-35258:
---

flink-cdc master: a513e9f82e815409a8c91bdc999196d1658137f0

release-3.1: 65a6880e9c67c1dcc2ac506334c475583f6f86b1

release-3.0: 326739905ab8152cbde8289a9f29f18e25a016b7

>  Broken links to Doris in Flink CDC Documentation
> -
>
> Key: FLINK-35258
> URL: https://issues.apache.org/jira/browse/FLINK-35258
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> These broken links are detected by CI: 
>  
> {code:java}
> ERROR: 3 dead links found!
> 535  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD/
>  → Status: 404
> 536  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE/
>  → Status: 404
> 537  [✖] 
> https://doris.apache.org/docs/dev/sql-manual/sql-reference/Data-Types/BOOLEAN/
>  → Status: 404 
> ERROR: 3 dead links found!
> 1008  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD/
>  → Status: 404
> 1009  [✖] 
> https://doris.apache.org/zh-CN/docs/dev/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE/
>  → Status: 404
> 1010  [✖] 
> https://doris.apache.org/docs/dev/sql-manual/sql-reference/Data-Types/BOOLEAN/
>  → Status: 404{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35191) Support alter materialized table related syntaxes: suspend, resume, refresh, set and reset

2024-04-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35191:
---
Labels: pull-request-available  (was: )

> Support alter materialized table related syntaxes: suspend, resume, refresh, 
> set and reset
> --
>
> Key: FLINK-35191
> URL: https://issues.apache.org/jira/browse/FLINK-35191
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: dalongliu
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> SQL statement as follows:
> {code:SQL}
> // suspend
> ALTER MATERIALIZED TABLE [catalog_name.][db_name.]table_name SUSPEND
>
> // resume
> ALTER MATERIALIZED TABLE [catalog_name.][db_name.]table_name RESUME
>   [WITH('key1' = 'val1', 'key2' = 'val2')]
>
> // refresh
> ALTER MATERIALIZED TABLE [catalog_name.][db_name.]table_name REFRESH
>   [PARTITION (key1=val1, key2=val2, ...)]
>
> // set freshness
> ALTER MATERIALIZED TABLE [catalog_name.][db_name.]table_name
>   SET FRESHNESS = INTERVAL '' { SECOND | MINUTE | HOUR | DAY }
>
> // set refresh mode
> ALTER MATERIALIZED TABLE [catalog_name.][db_name.]table_name
>   SET REFRESH_MODE = { FULL | CONTINUOUS }
>
> // set options
> ALTER MATERIALIZED TABLE [catalog_name.][db_name.]table_name SET ('key' = 'val');
>
> // reset options
> ALTER MATERIALIZED TABLE [catalog_name.][db_name.]table_name RESET ('key');
> {code}
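
Concrete statements following the grammar above might look like this (table 
name, partition key, and option keys are made up for illustration):

{code:SQL}
ALTER MATERIALIZED TABLE my_catalog.my_db.daily_sales SUSPEND;
ALTER MATERIALIZED TABLE my_catalog.my_db.daily_sales RESUME WITH ('sink.parallelism' = '4');
ALTER MATERIALIZED TABLE my_catalog.my_db.daily_sales REFRESH PARTITION (ds = '2024-04-28');
ALTER MATERIALIZED TABLE my_catalog.my_db.daily_sales SET FRESHNESS = INTERVAL '10' MINUTE;
ALTER MATERIALIZED TABLE my_catalog.my_db.daily_sales SET REFRESH_MODE = FULL;
ALTER MATERIALIZED TABLE my_catalog.my_db.daily_sales SET ('key' = 'val');
ALTER MATERIALIZED TABLE my_catalog.my_db.daily_sales RESET ('key');
{code}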



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35191][table] Support alter materialized table related syntaxes: suspend, resume, refresh, set and reset [flink]

2024-04-28 Thread via GitHub


hackergin opened a new pull request, #24737:
URL: https://github.com/apache/flink/pull/24737

   ## What is the purpose of the change
   
   *Support alter materialized table related syntaxes: suspend, resume, 
refresh, set and reset*
   
   Note: 
   For the sake of current implementation convenience, ALTER IF NOT EXISTS is 
not supported.
   
   ## Brief change log
   
 - *Support parsing ALTER MATERIALIZED TABLE related statements.*

   ## Verifying this change
   
   This change can be verified by adding more test cases in 
*MaterializedTableStatementParserTest*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (Will add relevant documents in a 
separate PR.)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-35251) SchemaDerivation serializes mapping incorrectly on checkpoint

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren resolved FLINK-35251.
---
Resolution: Fixed

> SchemaDerivation serializes mapping incorrectly on checkpoint
> -
>
> Key: FLINK-35251
> URL: https://issues.apache.org/jira/browse/FLINK-35251
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> When the schema registry is serialized on checkpoint, SchemaDerivation 
> mistakenly uses `out.write()` instead of `out.writeInt()` to serialize the 
> size of the derivation mapping. This causes an EOFException when the job is 
> recovered from a checkpoint.
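
A self-contained sketch of the failure mode (not the actual SchemaDerivation 
code; stream names are illustrative):

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

public class WriteVsWriteIntSketch {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.write(3); // BUG: writes a single byte; out.writeInt(3) writes four

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        in.readInt(); // expects four bytes, finds one -> java.io.EOFException
    }
}
{code}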



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-35251) SchemaDerivation serializes mapping incorrectly on checkpoint

2024-04-28 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841551#comment-17841551
 ] 

Qingsheng Ren edited comment on FLINK-35251 at 4/29/24 2:25 AM:


fixed in 
master(3.2) via 9cc3451ddf6546317c4c3af1efbc68c2d745ef7d
3.1 via : 711bc0038cf46a8a07cce7a1d777a4fd370f7ee3


was (Author: leonard xu):
fixed in 
master(3.2) via 9cc3451ddf6546317c4c3af1efbc68c2d745ef7d
3.1 via : TODO

> SchemaDerivation serializes mapping incorrectly on checkpoint
> -
>
> Key: FLINK-35251
> URL: https://issues.apache.org/jira/browse/FLINK-35251
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> When the schema registry is serialized on checkpoint, SchemaDerivation 
> mistakenly uses `out.write()` instead of `out.writeInt()` to serialize the 
> size of the derivation mapping. This causes an EOFException when the job is 
> recovered from a checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35256) Pipeline transform ignores column type nullability

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren resolved FLINK-35256.
---
Resolution: Fixed

> Pipeline transform ignores column type nullability
> --
>
> Key: FLINK-35256
> URL: https://issues.apache.org/jira/browse/FLINK-35256
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: yux
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
> Attachments: log.txt
>
>
> Flink CDC 3.1.0 brought the transform feature, allowing column type / value 
> transformation prior to the data routing process. However, after the 
> transformation, columns marked as `NOT NULL` lost their annotation, 
> causing some downstream sinks to fail since they require the primary key to 
> be NOT NULL.
> Here's a minimal reproducible example of this problem:
> ```yaml
> source:
>   type: mysql
>   ...
> sink:
>   type: starrocks
>   name: StarRocks Sink
>   ...
> pipeline:
>   name: Sync MySQL Database to StarRocks
>   parallelism: 4
> transform:
>   - source-table: reicigo.\.*
>     projection: ID, UPPER(ID) AS UPID
> ```
> In the MySQL source table, the primary key column `ID` is marked as `NOT NULL`, 
> but this information was lost downstream, causing the following exception 
> (see attachment).
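
A small sketch of the expected behavior in terms of Flink's type API (the 
column follows the example above; the exact flink-cdc type classes may differ):

{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class NullabilitySketch {
    public static void main(String[] args) {
        DataType id = DataTypes.VARCHAR(10).notNull();
        // After projecting `ID, UPPER(ID) AS UPID`, ID should stay NOT NULL;
        // the bug dropped the constraint, yielding a nullable VARCHAR(10).
        System.out.println(id); // prints: VARCHAR(10) NOT NULL
    }
}
{code}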



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-35256) Pipeline transform ignores column type nullability

2024-04-28 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841641#comment-17841641
 ] 

Qingsheng Ren edited comment on FLINK-35256 at 4/29/24 2:23 AM:


flink-cdc master: 6258bec5bb31533c501bedffef36805b0fd95cf5

flink-cdc release-3.1: de19f7bc6d2745930995846fdfbc6c6b60de5f74


was (Author: renqs):
flink-cdc master: 6258bec5bb31533c501bedffef36805b0fd95cf5

> Pipeline transform ignores column type nullability
> --
>
> Key: FLINK-35256
> URL: https://issues.apache.org/jira/browse/FLINK-35256
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: yux
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
> Attachments: log.txt
>
>
> Flink CDC 3.1.0 brought the transform feature, allowing column type / value 
> transformation prior to the data routing process. However, after the 
> transformation, columns marked as `NOT NULL` lost their annotation, 
> causing some downstream sinks to fail since they require the primary key to 
> be NOT NULL.
> Here's a minimal reproducible example of this problem:
> ```yaml
> source:
>   type: mysql
>   ...
> sink:
>   type: starrocks
>   name: StarRocks Sink
>   ...
> pipeline:
>   name: Sync MySQL Database to StarRocks
>   parallelism: 4
> transform:
>   - source-table: reicigo.\.*
>     projection: ID, UPPER(ID) AS UPID
> ```
> In the MySQL source table, the primary key column `ID` is marked as `NOT NULL`, 
> but this information was lost downstream, causing the following exception 
> (see attachment).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-35260) Translate "Watermark alignment "page into Chinese

2024-04-28 Thread hongxu han (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841811#comment-17841811
 ] 

hongxu han edited comment on FLINK-35260 at 4/29/24 2:23 AM:
-

[https://nightlies.apache.org/flink/flink-docs-release-1.19/zh/docs/dev/datastream/event-time/generating_watermarks/]

Could you assign this to me?


was (Author: maomao):
[https://nightlies.apache.org/flink/flink-docs-release-1.19/zh/docs/dev/datastream/event-time/generating_watermarks/]

> Translate "Watermark alignment "page into Chinese
> -
>
> Key: FLINK-35260
> URL: https://issues.apache.org/jira/browse/FLINK-35260
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.19.0
>Reporter: hongxu han
>Priority: Major
> Attachments: image-2024-04-29-10-21-00-565.png
>
>
> The Watermark alignment page lacks a Chinese translation.
> !image-2024-04-29-10-21-00-565.png|width=408,height=215!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35260) Translate "Watermark alignment "page into Chinese

2024-04-28 Thread hongxu han (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841811#comment-17841811
 ] 

hongxu han commented on FLINK-35260:


[https://nightlies.apache.org/flink/flink-docs-release-1.19/zh/docs/dev/datastream/event-time/generating_watermarks/]

> Translate "Watermark alignment "page into Chinese
> -
>
> Key: FLINK-35260
> URL: https://issues.apache.org/jira/browse/FLINK-35260
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.19.0
>Reporter: hongxu han
>Priority: Major
> Attachments: image-2024-04-29-10-21-00-565.png
>
>
> The Watermark alignment page lacks a Chinese translation.
> !image-2024-04-29-10-21-00-565.png|width=408,height=215!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP-3.1][FLINK-35256][runtime] Fix transform node does not respect type nullability [flink-cdc]

2024-04-28 Thread via GitHub


PatrickRen merged PR #3277:
URL: https://github.com/apache/flink-cdc/pull/3277


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [minor][docs] Fix route definition example in core concept docs [flink-cdc]

2024-04-28 Thread via GitHub


PatrickRen merged PR #3269:
URL: https://github.com/apache/flink-cdc/pull/3269


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35260) Translate "Watermark alignment "page into Chinese

2024-04-28 Thread hongxu han (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongxu han updated FLINK-35260:
---
 Attachment: image-2024-04-29-10-21-00-565.png
Description: 
The Watermark alignment page lacks a Chinese translation.

!image-2024-04-29-10-21-00-565.png|width=408,height=215!

  was:Watermark alignment lack of chinese translation


> Translate "Watermark alignment "page into Chinese
> -
>
> Key: FLINK-35260
> URL: https://issues.apache.org/jira/browse/FLINK-35260
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.19.0
>Reporter: hongxu han
>Priority: Major
> Attachments: image-2024-04-29-10-21-00-565.png
>
>
> The Watermark alignment page lacks a Chinese translation.
> !image-2024-04-29-10-21-00-565.png|width=408,height=215!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35260) Translate "Watermark alignment "page into Chinese

2024-04-28 Thread hongxu han (Jira)
hongxu han created FLINK-35260:
--

 Summary: Translate "Watermark alignment "page into Chinese
 Key: FLINK-35260
 URL: https://issues.apache.org/jira/browse/FLINK-35260
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Documentation
Affects Versions: 1.19.0
Reporter: hongxu han


The Watermark alignment page lacks a Chinese translation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35252) Update the operators marked as deprecated in the instance program on the official website

2024-04-28 Thread hongxu han (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongxu han updated FLINK-35252:
---
Component/s: Examples
 (was: Documentation / Training / Exercises)

> Update the operators marked as deprecated in the instance program on the 
> official website
> -
>
> Key: FLINK-35252
> URL: https://issues.apache.org/jira/browse/FLINK-35252
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: hongxu han
>Priority: Major
> Attachments: image-2024-04-28-10-05-37-671.png, 
> image-2024-04-28-10-07-11-736.png, image-2024-04-28-10-07-36-248.png, 
> image-2024-04-28-10-08-14-928.png, image-2024-04-28-10-09-32-184.png
>
>
> Update the operators marked as deprecated in the example programs on the 
> official website.
> !image-2024-04-28-10-07-36-248.png|width=386,height=199!
> !image-2024-04-28-10-08-14-928.png|width=448,height=82!
> The recommended usage now is Duration.ofSeconds(5).
> !image-2024-04-28-10-09-32-184.png|width=474,height=78!
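
Assuming the examples use the deprecated Time-based window parameters, the 
migration would look like this (illustrative; the actual example code may 
differ):

{code:java}
import java.time.Duration;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;

public class DurationMigrationSketch {
    public static void main(String[] args) {
        // Old, deprecated style:
        //   TumblingEventTimeWindows.of(Time.seconds(5))
        // Recommended style, using java.time.Duration:
        TumblingEventTimeWindows window =
                TumblingEventTimeWindows.of(Duration.ofSeconds(5));
        System.out.println(window);
    }
}
{code}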



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35236) Flink 1.19 Translation error on the execution_mode/order-of-processing

2024-04-28 Thread hongxu han (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841799#comment-17841799
 ] 

hongxu han commented on FLINK-35236:


Hi [~Weijie Guo], can you help me review it?

Thanks. 

> Flink 1.19 Translation error on the execution_mode/order-of-processing
> --
>
> Key: FLINK-35236
> URL: https://issues.apache.org/jira/browse/FLINK-35236
> Project: Flink
>  Issue Type: Bug
>  Components: chinese-translation
>Affects Versions: 1.19.0
>Reporter: hongxu han
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-25-16-53-34-007.png, 
> image-2024-04-25-16-53-49-052.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/dev/datastream/execution_mode/#order-of-processing]
> !image-2024-04-25-16-53-34-007.png!
> !image-2024-04-25-16-53-49-052.png!
> 应为,常规输入:既不从广播输入也不从 keyed 输入 (i.e., it should read: regular input: neither from the broadcast input nor from the keyed input)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34916][table] Support `ALTER CATALOG SET` syntax [flink]

2024-04-28 Thread via GitHub


liyubin117 commented on PR #24735:
URL: https://github.com/apache/flink/pull/24735#issuecomment-2081711322

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Ensure that all yaml has a unified format [flink-kubernetes-operator]

2024-04-28 Thread via GitHub


caicancai closed pull request #820: [hotfix] Ensure that all yaml has a unified 
format
URL: https://github.com/apache/flink-kubernetes-operator/pull/820


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35240][Connectors][format]Disable FLUSH_AFTER_WRITE_VALUE to avoid flush per record for csv format [flink]

2024-04-28 Thread via GitHub


robobario commented on code in PR #24730:
URL: https://github.com/apache/flink/pull/24730#discussion_r1582425139


##
flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvBulkWriter.java:
##
@@ -51,13 +54,19 @@ class CsvBulkWriter implements BulkWriter {
 checkNotNull(mapper);
 checkNotNull(schema);
 
+// Prevent Jackson's writeValue() method calls from closing the stream.
+mapper.getFactory().disable(JsonGenerator.Feature.AUTO_CLOSE_TARGET);
+mapper.disable(SerializationFeature.FLUSH_AFTER_WRITE_VALUE);
+
 this.converter = checkNotNull(converter);
 this.stream = checkNotNull(stream);
 this.converterContext = converterContext;
 this.csvWriter = mapper.writer(schema);
-
-// Prevent Jackson's writeValue() method calls from closing the stream.
-mapper.getFactory().disable(JsonGenerator.Feature.AUTO_CLOSE_TARGET);
+try {
+this.generator = csvWriter.createGenerator(stream, 
JsonEncoding.UTF8);

Review Comment:
   The generator manages some resources with their own buffers, so I think we 
should `close` the generator during `finish()`. The underlying `CsvEncoder` 
uses a char buffer that Jackson [can recycle](https://github.com/FasterXML/jackson-dataformats-text/blob/3d3165e58b90618a5fbccf630f1604a383afe78c/csv/src/main/java/com/fasterxml/jackson/dataformat/csv/impl/CsvEncoder.java#L1015) using a thread-local pool.
   
   Disabling `AUTO_CLOSE_TARGET` will still prevent the closing of the 
underlying stream.



##
flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvBulkWriter.java:
##
@@ -98,11 +107,12 @@ static  CsvBulkWriter forPojo(Class 
pojoClass, FSDataOutputStr
 @Override
 public void addElement(T element) throws IOException {
 final R r = converter.convert(element, converterContext);
-csvWriter.writeValue(stream, r);
+csvWriter.writeValue(generator, r);
 }
 
 @Override
 public void flush() throws IOException {
+generator.flush();
 stream.flush();

Review Comment:
   `generator.flush()` also flushes the underlying stream, so the following 
`stream.flush()` call is redundant.
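
   For context, a self-contained sketch of the generator lifecycle these two 
comments describe, using Jackson's `jackson-dataformat-csv` (the 
`ObjectWriter#createGenerator` overload needs Jackson 2.11+); the class and 
data below are illustrative, not the PR's actual code:

```java
import java.io.ByteArrayOutputStream;
import java.util.Collections;

import com.fasterxml.jackson.core.JsonEncoding;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectWriter;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.dataformat.csv.CsvMapper;
import com.fasterxml.jackson.dataformat.csv.CsvSchema;

public class CsvGeneratorSketch {
    public static void main(String[] args) throws Exception {
        CsvMapper mapper = new CsvMapper();
        // Keep writeValue() from closing or flushing the target per record.
        mapper.getFactory().disable(JsonGenerator.Feature.AUTO_CLOSE_TARGET);
        mapper.disable(SerializationFeature.FLUSH_AFTER_WRITE_VALUE);

        CsvSchema schema = CsvSchema.builder().addColumn("word").build();
        ObjectWriter writer = mapper.writer(schema);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        JsonGenerator generator = writer.createGenerator(out, JsonEncoding.UTF8);

        // Reuse one generator across records instead of writeValue(stream, r).
        writer.writeValue(generator, Collections.singletonMap("word", "hello"));
        writer.writeValue(generator, Collections.singletonMap("word", "world"));

        // flush() on the generator also flushes the underlying stream.
        generator.flush();
        // close() lets Jackson recycle the encoder's char buffer; with
        // AUTO_CLOSE_TARGET disabled, the stream itself stays open.
        generator.close();

        System.out.println(out.toString("UTF-8"));
    }
}
```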






Re: [PR] [FLINK-35165] AdaptiveBatch Scheduler should not restrict the default source parall… [flink]

2024-04-28 Thread via GitHub


flinkbot commented on PR #24736:
URL: https://github.com/apache/flink/pull/24736#issuecomment-2081579518

   
   ## CI report:
   
   * 7a3264c0dcc8b19c4b7eb70a73d430d1ad2e76f6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-35165) AdaptiveBatch Scheduler should not restrict the default source parallelism to the max parallelism set

2024-04-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35165:
---
Labels: pull-request-available  (was: )

> AdaptiveBatch Scheduler should not restrict the default source parallelism to 
> the max parallelism set
> -
>
> Key: FLINK-35165
> URL: https://issues.apache.org/jira/browse/FLINK-35165
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Venkata krishnan Sowrirajan
>Priority: Major
>  Labels: pull-request-available
>
> Copy-pasting the reasoning mentioned on this [discussion 
> thread|https://lists.apache.org/thread/o887xhvvmn2rg5tyymw348yl2mqt23o7].
> Let me state why I think 
> "{_}jobmanager.adaptive-batch-scheduler.default-source-parallelism{_}" should 
> not be bound by the 
> "{_}jobmanager.adaptive-batch-scheduler.max-parallelism{_}".
>  * Source vertex is unique and does not have any upstream vertices - 
> downstream vertices read shuffled data partitioned by key, which is not the 
> case for the source vertex
>  * Limiting source parallelism by downstream vertices' max parallelism is 
> incorrect
>  * If we say that for "semantic consistency" the source vertex parallelism has 
> to be bound by the overall job's max parallelism, it can lead to the following 
> issues:
>  ** High filter selectivity with huge amounts of data to read
>  ** Setting a high "*jobmanager.adaptive-batch-scheduler.max-parallelism*" so 
> that source parallelism can be set higher can lead to small blocks and 
> sub-optimal performance.
>  ** Setting a high "*jobmanager.adaptive-batch-scheduler.max-parallelism*" 
> requires careful tuning of network buffer configurations, which is unnecessary 
> when not otherwise required, just so that the source parallelism can be set 
> high.





[PR] [FLINK-35165] AdaptiveBatch Scheduler should not restrict the default source parall… [flink]

2024-04-28 Thread via GitHub


venkata91 opened a new pull request, #24736:
URL: https://github.com/apache/flink/pull/24736

   …elism to the max parallelism set
   
   
   ## What is the purpose of the change
   
   With the AdaptiveBatchScheduler, the current behavior is that if both 
`execution.batch.adaptive.auto-parallelism.default-source-parallelism` and 
`execution.batch.adaptive.auto-parallelism.max-parallelism` are specified, and 
the former is greater than the latter, the source parallelism is capped at 
`execution.batch.adaptive.auto-parallelism.max-parallelism`.
   
   - The source vertex is unique in that it has no upstream vertices: 
downstream vertices read shuffled data partitioned by key, which is not the 
case for the source vertex.
   - Limiting source parallelism by the downstream vertices' max parallelism is 
therefore incorrect.
   
   For example, in the case of high filter selectivity with huge amounts of 
data to read, this has the following issues:
   - Setting a high `execution.batch.adaptive.auto-parallelism.max-parallelism` 
just so that the source parallelism can be set higher can lead to small blocks 
and sub-optimal performance.
   - Setting a high `execution.batch.adaptive.auto-parallelism.max-parallelism` 
also requires careful tuning of network buffer configurations, which is 
unnecessary when the higher max parallelism is not otherwise required.
   
   The proposed solution is to decouple the configs 
`execution.batch.adaptive.auto-parallelism.default-source-parallelism` and 
`execution.batch.adaptive.auto-parallelism.max-parallelism` and not bind the 
value of the source parallelism to 
`execution.batch.adaptive.auto-parallelism.max-parallelism`.
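
   For illustration, a minimal sketch of the two options involved; the class 
name and values are hypothetical, and under the proposed change the source 
default of 2000 would no longer be capped at the max of 1000:

```java
import org.apache.flink.configuration.Configuration;

public class SchedulerConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Hypothetical values: a source default above the max parallelism.
        conf.setString(
                "execution.batch.adaptive.auto-parallelism.default-source-parallelism",
                "2000");
        conf.setString(
                "execution.batch.adaptive.auto-parallelism.max-parallelism",
                "1000");
        // Today the scheduler caps the source at 1000; with this change the
        // source default of 2000 would be used as-is.
        System.out.println(conf);
    }
}
```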
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





Re: [PR] [hotfix] Ensure that all yaml has a unified format [flink-kubernetes-operator]

2024-04-28 Thread via GitHub


gyfora commented on PR #820:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/820#issuecomment-2081571402

   @caicancai I think these changes are a bit random and do not make too much 
sense. Can you include it together with some other fix instead?





Re: [PR] [FLINK-34916][table] Support `ALTER CATALOG SET` syntax [flink]

2024-04-28 Thread via GitHub


liyubin117 commented on PR #24735:
URL: https://github.com/apache/flink/pull/24735#issuecomment-2081561616

   @flinkbot run azure





Re: [PR] [FLINK-34916][table] Support `ALTER CATALOG SET` syntax [flink]

2024-04-28 Thread via GitHub


liyubin117 commented on PR #24735:
URL: https://github.com/apache/flink/pull/24735#issuecomment-2081561574

   @flinkbot run azure





Re: [PR] [FLINK-34916][table] Support `ALTER CATALOG SET` syntax [flink]

2024-04-28 Thread via GitHub


flinkbot commented on PR #24735:
URL: https://github.com/apache/flink/pull/24735#issuecomment-2081503746

   
   ## CI report:
   
   * 247f5afe51f1a1c13fdeefec61f303ac1d98f9f8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-35259) FlinkCDC Pipeline transform can't deal timestamp field

2024-04-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35259:
---
Labels: pull-request-available  (was: )

> FlinkCDC Pipeline transform can't deal timestamp field
> --
>
> Key: FLINK-35259
> URL: https://issues.apache.org/jira/browse/FLINK-35259
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: 3.1.0
>Reporter: Wenkai Qi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When the original table contains fields of type Timestamp, they cannot be 
> converted properly.
> When the added computed columns contain fields of type Timestamp, they 
> cannot be converted properly either.





[jira] [Updated] (FLINK-34916) Support `ALTER CATALOG SET` syntax

2024-04-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34916:
---
Labels: pull-request-available  (was: )

> Support `ALTER CATALOG SET` syntax
> --
>
> Key: FLINK-34916
> URL: https://issues.apache.org/jira/browse/FLINK-34916
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-22-18-30-33-182.png
>
>
> Set one or more properties in the specified catalog. If a particular property 
> is already set in the catalog, override the old value with the new one.
> !image-2024-03-22-18-30-33-182.png|width=736,height=583!





[PR] [FLINK-34916][table] Support `ALTER CATALOG SET` syntax [flink]

2024-04-28 Thread via GitHub


liyubin117 opened a new pull request, #24735:
URL: https://github.com/apache/flink/pull/24735

   ## What is the purpose of the change
   
   Set one or more properties in the specified catalog. If a particular 
property is already set in the catalog, override the old value with the new one.
   
   ## Brief change log
   
   * ALTER CATALOG catalog_name SET (key1=val1, ...)
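
   For illustration, a minimal sketch of the new statement, assuming a Flink 
build that includes this PR; the catalog name and property below are 
hypothetical:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AlterCatalogSetSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Create a catalog to alter; 'cat2' and its property are hypothetical.
        tEnv.executeSql("CREATE CATALOG cat2 WITH ('type' = 'generic_in_memory')");

        // Override an existing property value (or add a new one) on 'cat2'.
        tEnv.executeSql("ALTER CATALOG cat2 SET ('default-database' = 'db_new')");
    }
}
```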
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   flink-table/flink-sql-client/src/test/resources/sql/catalog_database.q
   flink-table/flink-sql-gateway/src/test/resources/sql/catalog_database.q
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? yes





[jira] [Created] (FLINK-35259) FlinkCDC Pipeline transform can't deal timestamp field

2024-04-28 Thread Wenkai Qi (Jira)
Wenkai Qi created FLINK-35259:
-

 Summary: FlinkCDC Pipeline transform can't deal timestamp field
 Key: FLINK-35259
 URL: https://issues.apache.org/jira/browse/FLINK-35259
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
Affects Versions: 3.1.0
Reporter: Wenkai Qi
 Fix For: 3.1.0


When the original table contains fields of type Timestamp, they cannot be 
converted properly.

When the added computed columns contain fields of type Timestamp, they cannot 
be converted properly either.





[jira] [Updated] (FLINK-35256) Pipeline transform ignores column type nullability

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren updated FLINK-35256:
--
Fix Version/s: cdc-3.1.0

> Pipeline transform ignores column type nullability
> --
>
> Key: FLINK-35256
> URL: https://issues.apache.org/jira/browse/FLINK-35256
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: yux
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
> Attachments: log.txt
>
>
> Flink CDC 3.1.0 brought the transform feature, allowing column type / value 
> transformation prior to the data routing process. However, after the 
> transformation, columns marked as `NOT NULL` lose their annotation, causing 
> some downstream sinks to fail since they require primary keys to be NOT NULL.
> Here's a minimal reproducible example of the problem:
> ```yaml
> source:
>   type: mysql
>   ...
>
> sink:
>   type: starrocks
>   name: StarRocks Sink
>   ...
>
> pipeline:
>   name: Sync MySQL Database to StarRocks
>   parallelism: 4
>
> transform:
>   - source-table: reicigo.\.*
>     projection: ID, UPPER(ID) AS UPID
> ```
> In the MySQL source table, the primary key column `ID` is marked as `NOT NULL`, 
> but this information is lost downstream, causing the following exception 
> (see attachment).





Re: [PR] [FLINK-35256][runtime] Fix transform node does not respect type nullability [flink-cdc]

2024-04-28 Thread via GitHub


PatrickRen commented on PR #3272:
URL: https://github.com/apache/flink-cdc/pull/3272#issuecomment-2081482756

   @yuxiqian Could you also backport the patch onto `release-3.1`? Thanks





[jira] [Assigned] (FLINK-35256) Pipeline transform ignores column type nullability

2024-04-28 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35256:
-

Assignee: yux

> Pipeline transform ignores column type nullability
> --
>
> Key: FLINK-35256
> URL: https://issues.apache.org/jira/browse/FLINK-35256
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: yux
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Attachments: log.txt
>
>
> Flink CDC 3.1.0 brought the transform feature, allowing column type / value 
> transformation prior to the data routing process. However, after the 
> transformation, columns marked as `NOT NULL` lose their annotation, causing 
> some downstream sinks to fail since they require primary keys to be NOT NULL.
> Here's a minimal reproducible example of the problem:
> ```yaml
> source:
>   type: mysql
>   ...
>
> sink:
>   type: starrocks
>   name: StarRocks Sink
>   ...
>
> pipeline:
>   name: Sync MySQL Database to StarRocks
>   parallelism: 4
>
> transform:
>   - source-table: reicigo.\.*
>     projection: ID, UPPER(ID) AS UPID
> ```
> In the MySQL source table, the primary key column `ID` is marked as `NOT NULL`, 
> but this information is lost downstream, causing the following exception 
> (see attachment).





[jira] [Commented] (FLINK-35256) Pipeline transform ignores column type nullability

2024-04-28 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841641#comment-17841641
 ] 

Qingsheng Ren commented on FLINK-35256:
---

flink-cdc master: 6258bec5bb31533c501bedffef36805b0fd95cf5

> Pipeline transform ignores column type nullability
> --
>
> Key: FLINK-35256
> URL: https://issues.apache.org/jira/browse/FLINK-35256
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: yux
>Priority: Major
>  Labels: pull-request-available
> Attachments: log.txt
>
>
> Flink CDC 3.1.0 brought the transform feature, allowing column type / value 
> transformation prior to the data routing process. However, after the 
> transformation, columns marked as `NOT NULL` lose their annotation, causing 
> some downstream sinks to fail since they require primary keys to be NOT NULL.
> Here's a minimal reproducible example of the problem:
> ```yaml
> source:
>   type: mysql
>   ...
>
> sink:
>   type: starrocks
>   name: StarRocks Sink
>   ...
>
> pipeline:
>   name: Sync MySQL Database to StarRocks
>   parallelism: 4
>
> transform:
>   - source-table: reicigo.\.*
>     projection: ID, UPPER(ID) AS UPID
> ```
> In the MySQL source table, the primary key column `ID` is marked as `NOT NULL`, 
> but this information is lost downstream, causing the following exception 
> (see attachment).





Re: [PR] [FLINK-35256][runtime] Fix transform node does not respect type nullability [flink-cdc]

2024-04-28 Thread via GitHub


PatrickRen merged PR #3272:
URL: https://github.com/apache/flink-cdc/pull/3272





Re: [PR] [BP-3.1][FLINK-35258][cdc][doc] Fix broken links to Doris in documentation [flink-cdc]

2024-04-28 Thread via GitHub


PatrickRen merged PR #3274:
URL: https://github.com/apache/flink-cdc/pull/3274





[jira] [Resolved] (FLINK-34915) Complete `DESCRIBE CATALOG` syntax

2024-04-28 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34915.
---
Fix Version/s: 1.20.0
   Resolution: Fixed

> Complete `DESCRIBE CATALOG` syntax
> --
>
> Key: FLINK-34915
> URL: https://issues.apache.org/jira/browse/FLINK-34915
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-03-22-18-29-00-454.png, 
> image-2024-04-07-17-54-51-203.png
>
>
> Describe the metadata of an existing catalog. The metadata information 
> includes the catalog’s name, type, and comment. If the optional {{EXTENDED}} 
> option is specified, catalog properties are also returned.
> NOTICE: The parser part of this syntax has been implemented in FLIP-69, but 
> it is not actually available; we can complete the syntax in this FLIP. 
> !image-2024-04-07-17-54-51-203.png|width=545,height=332!
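
For illustration, a minimal sketch of the completed syntax, assuming a Flink 
build (1.20.0) that includes this change; `default_catalog` is the built-in 
in-memory catalog:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DescribeCatalogSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Prints the catalog's name, type, and comment.
        tEnv.executeSql("DESCRIBE CATALOG default_catalog").print();

        // EXTENDED additionally returns the catalog properties.
        tEnv.executeSql("DESCRIBE CATALOG EXTENDED default_catalog").print();
    }
}
```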





[jira] [Closed] (FLINK-34915) Complete `DESCRIBE CATALOG` syntax

2024-04-28 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34915.
-

> Complete `DESCRIBE CATALOG` syntax
> --
>
> Key: FLINK-34915
> URL: https://issues.apache.org/jira/browse/FLINK-34915
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-03-22-18-29-00-454.png, 
> image-2024-04-07-17-54-51-203.png
>
>
> Describe the metadata of an existing catalog. The metadata information 
> includes the catalog’s name, type, and comment. If the optional {{EXTENDED}} 
> option is specified, catalog properties are also returned.
> NOTICE: The parser part of this syntax has been implemented in FLIP-69, but 
> it is not actually available; we can complete the syntax in this FLIP. 
> !image-2024-04-07-17-54-51-203.png|width=545,height=332!





[jira] [Commented] (FLINK-34915) Complete `DESCRIBE CATALOG` syntax

2024-04-28 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841635#comment-17841635
 ] 

Jane Chan commented on FLINK-34915:
---

Fixed in master e412402ca4dfc438e28fb990dc53ea7809430aee

> Complete `DESCRIBE CATALOG` syntax
> --
>
> Key: FLINK-34915
> URL: https://issues.apache.org/jira/browse/FLINK-34915
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-22-18-29-00-454.png, 
> image-2024-04-07-17-54-51-203.png
>
>
> Describe the metadata of an existing catalog. The metadata information 
> includes the catalog’s name, type, and comment. If the optional {{EXTENDED}} 
> option is specified, catalog properties are also returned.
> NOTICE: The parser part of this syntax has been implemented in FLIP-69, but 
> it is not actually available; we can complete the syntax in this FLIP. 
> !image-2024-04-07-17-54-51-203.png|width=545,height=332!





Re: [PR] [FLINK-34915][table] Complete `DESCRIBE CATALOG` syntax [flink]

2024-04-28 Thread via GitHub


LadyForest merged PR #24630:
URL: https://github.com/apache/flink/pull/24630





Re: [PR] [FLINK-35256][runtime] Fix transform node does not respect type nullability [flink-cdc]

2024-04-28 Thread via GitHub


lvyanquan commented on PR #3272:
URL: https://github.com/apache/flink-cdc/pull/3272#issuecomment-2081442070

   I've tested that if the namespace of the TableId is not set, the 
`__namespace_name__` column will be the string `null`, so it won't cause an 
error even if the MetadataColumns are declared NOT NULL.
   





[PR] [FLINK-35221][hive] Support SQL 2011 reserved keywords as identifiers in HiveParser [flink-connector-hive]

2024-04-28 Thread via GitHub


WencongLiu opened a new pull request, #18:
URL: https://github.com/apache/flink-connector-hive/pull/18

   ## What is the purpose of the change
   
   According to Hive user documentation[1], starting from version 0.13.0, Hive 
prohibits the use of reserved keywords as identifiers. Moreover, versions 2.1.0 
and earlier allow using SQL11 reserved keywords as identifiers by setting 
`hive.support.sql11.reserved.keywords=false` in hive-site.xml. This 
compatibility feature facilitates jobs that utilize keywords as identifiers.
   
   HiveParser in Flink, relying on Hive version 2.3.9, lacks the option to 
treat SQL11 reserved keywords as identifiers. This poses a challenge for users 
migrating SQL from Hive 1.x to Flink SQL, as they might encounter scenarios 
where keywords are used as identifiers. Addressing this issue is necessary to 
support such cases.
   
   [1] [LanguageManual DDL - Apache Hive - Apache Software 
Foundation](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL)
   
   
   ## Brief change log
   
 - *Modify the ANTLR Files for Parsing Hive Syntax.*
 - *Add introduction in docs.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? docs
   




