[jira] [Updated] (FLINK-35152) Flink CDC Doris Sink Auto create table event should support setting auto partition fields for each table

2024-04-18 Thread tumengyao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tumengyao updated FLINK-35152:
--
Summary: Flink CDC  Doris Sink Auto create table event should support 
setting auto partition fields for each table  (was: Flink CDC  Doris/Starrocks 
Sink Auto create table event should support setting auto partition fields for 
each table)

> Flink CDC  Doris Sink Auto create table event should support setting auto 
> partition fields for each table
> -
>
> Key: FLINK-35152
> URL: https://issues.apache.org/jira/browse/FLINK-35152
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: 3.1.0
>Reporter: tumengyao
>Priority: Minor
>  Labels: Doris
>
> In some scenarios, when creating a physical table in Doris, appropriate 
> partition fields need to be selected to speed up queries and computations. 
> Partitioned tables also enable additional use cases, such as hot/cold data 
> tiering.
> Currently, the create table event of the Flink CDC Doris Sink creates tables 
> without any partitions.
> The Auto Partition feature supported by Doris 2.1.x simplifies the creation 
> and management of partitions. We only need to add a few configuration items 
> to the Flink CDC job that tell the Doris Sink which fields to use as 
> partition keys in the create table event; the resulting table in Doris is 
> then partitioned.
> Here's an example:
> source: MySQL
> source_table:
> CREATE TABLE table1 (
> col1 INT AUTO_INCREMENT PRIMARY KEY,
> col2 DECIMAL(18, 2),
> col3 VARCHAR(500),
> col4 TEXT,
> col5 DATETIME DEFAULT CURRENT_TIMESTAMP
> );
> If you want to specify the partitioning of table test.table1, you need to add 
> sink-table-partition-key, sink-table-partition-func-call-expr and 
> sink-table-partition-type entries to mysql_to_doris.yaml:
> route:
>   source-table: test.table1
>   sink-table: ods.ods_table1
>   sink-table-partition-key: col5
>   sink-table-partition-func-call-expr: date_trunc(`col5`, 'month')
>   sink-table-partition-type: auto range
> The auto range partition in Doris 2.1.x does not support null partition 
> values, so null values of test.table1.col5 have to be mapped to a default, 
> e.g. "when test.table1.col5 is null then '1990-01-01 00:00:00' else 
> test.table1.col5 end", as sketched below.
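> For illustration, the null handling could be written as a plain SQL CASE 
> expression over col5 (a sketch only; how this mapping would be configured in 
> the Flink CDC pipeline is not specified in this ticket):
> {code:sql}
> -- Map NULL col5 values to a sentinel timestamp so the auto range partition accepts them
> CASE WHEN `col5` IS NULL THEN '1990-01-01 00:00:00' ELSE `col5` END
> {code}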
> Now, after submitting the mysql_to_doris.yaml Flink CDC job, an ods.ods_table1 
> table should appear in the Doris database.
> The DDL of this table is as follows:
> CREATE TABLE table1 (
>   col1 INT,
>   col5 DATETIME NOT NULL,
>   col2 DECIMAL(18, 2),
>   col3 VARCHAR(500),
>   col4 STRING
> ) UNIQUE KEY(`col1`, `col5`)
> AUTO PARTITION BY RANGE date_trunc(`col5`, 'month')()
> DISTRIBUTED BY HASH(`col1`) BUCKETS AUTO
> PROPERTIES (
> ...
> );



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34633][table] Support unnesting array constants [flink]

2024-04-18 Thread via GitHub


jeyhunkarimov commented on PR #24510:
URL: https://github.com/apache/flink/pull/24510#issuecomment-2063179382

   Thanks a lot for the review @LadyForest 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35028][runtime] Timer firing under async execution model [flink]

2024-04-18 Thread via GitHub


fredia commented on code in PR #24672:
URL: https://github.com/apache/flink/pull/24672#discussion_r1570140913


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/InternalTimerServiceAsyncImpl.java:
##
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators;
+
+import org.apache.flink.runtime.asyncprocessing.AsyncExecutionController;
+import org.apache.flink.runtime.asyncprocessing.RecordContext;
+import org.apache.flink.runtime.metrics.groups.TaskIOMetricGroup;
+import org.apache.flink.runtime.state.KeyGroupRange;
+import org.apache.flink.runtime.state.KeyGroupedInternalPriorityQueue;
+import org.apache.flink.streaming.runtime.tasks.ProcessingTimeService;
+import org.apache.flink.streaming.runtime.tasks.StreamTaskCancellationContext;
+import org.apache.flink.util.CloseableIterator;
+import org.apache.flink.util.function.BiConsumerWithException;
+
+import static org.apache.flink.runtime.asyncprocessing.KeyAccountingUnit.EMPTY_RECORD;
+
+/**
+ * An implementation of {@link InternalTimerService} that is used by {@link
+ * org.apache.flink.streaming.runtime.operators.asyncprocessing.AbstractAsyncStateStreamOperator}.
+ * The timer service will set {@link RecordContext} for the timers before invoking action to
+ * preserve the execution order between timer firing and records processing.
+ *
+ * @see <a
+ *     href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-425%3A+Asynchronous+Execution+Model#FLIP425:AsynchronousExecutionModel-Timers">FLIP-425
+ *     timers section</a>.
+ * @param <K> Type of timer's key.
+ * @param <N> Type of the namespace to which timers are scoped.
+ */
+public class InternalTimerServiceAsyncImpl<K, N> extends InternalTimerServiceImpl<K, N> {
+
+    private AsyncExecutionController<K> asyncExecutionController;
+
+    InternalTimerServiceAsyncImpl(
+            TaskIOMetricGroup taskIOMetricGroup,
+            KeyGroupRange localKeyGroupRange,
+            KeyContext keyContext,
+            ProcessingTimeService processingTimeService,
+            KeyGroupedInternalPriorityQueue<TimerHeapInternalTimer<K, N>> processingTimeTimersQueue,
+            KeyGroupedInternalPriorityQueue<TimerHeapInternalTimer<K, N>> eventTimeTimersQueue,
+            StreamTaskCancellationContext cancellationContext,
+            AsyncExecutionController<K> asyncExecutionController) {
+        super(
+                taskIOMetricGroup,
+                localKeyGroupRange,
+                keyContext,
+                processingTimeService,
+                processingTimeTimersQueue,
+                eventTimeTimersQueue,
+                cancellationContext);
+        this.asyncExecutionController = asyncExecutionController;
+        this.processingTimeCallback = this::onProcessingTime;
+    }
+
+    private void onProcessingTime(long time) throws Exception {
+        // null out the timer in case the Triggerable calls registerProcessingTimeTimer()
+        // inside the callback.
+        nextTimer = null;
+
+        InternalTimer<K, N> timer;
+
+        while ((timer = processingTimeTimersQueue.peek()) != null
+                && timer.getTimestamp() <= time
+                && !cancellationContext.isCancelled()) {
+            RecordContext<K> recordCtx =
+                    asyncExecutionController.buildContext(EMPTY_RECORD, timer.getKey());
+            recordCtx.retain();
+            asyncExecutionController.setCurrentContext(recordCtx);
+            keyContext.setCurrentKey(timer.getKey());
+            processingTimeTimersQueue.poll();
+            final InternalTimer<K, N> timerToTrigger = timer;
+            asyncExecutionController.syncPointRequestWithCallback(
+                    () -> triggerTarget.onProcessingTime(timerToTrigger));
+            taskIOMetricGroup.getNumFiredTimers().inc();
+            recordCtx.release();
+        }
+
+        if (timer != null && nextTimer == null) {
+            nextTimer =
+                    processingTimeService.registerTimer(
+                            timer.getTimestamp(), this::onProcessingTime);
+        }
+    }
+
+    /**
+     * Advance one watermark, this will fire some event timers.
+     *
+     * @param time the time in watermark.
+     */
+    @Override
+    public void advance

[jira] [Updated] (FLINK-35153) Internal Async State Implementation and StateDescriptor for Map/List State

2024-04-18 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan updated FLINK-35153:

Parent: FLINK-34974
Issue Type: Sub-task  (was: Bug)

> Internal Async State Implementation and StateDescriptor for Map/List State
> --
>
> Key: FLINK-35153
> URL: https://issues.apache.org/jira/browse/FLINK-35153
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zakelly Lan
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35153) Internal Async State Implementation and StateDescriptor for Map/List State

2024-04-18 Thread Zakelly Lan (Jira)
Zakelly Lan created FLINK-35153:
---

 Summary: Internal Async State Implementation and StateDescriptor 
for Map/List State
 Key: FLINK-35153
 URL: https://issues.apache.org/jira/browse/FLINK-35153
 Project: Flink
  Issue Type: Bug
Reporter: Zakelly Lan






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35151) Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full

2024-04-18 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838499#comment-17838499
 ] 

Leonard Xu commented on FLINK-35151:


Thanks [~pacinogong] for the report. [~ruanhang1993] Would you like to take a 
look at this issue?

> Flink mysql cdc will  stuck when suspend binlog split and ChangeEventQueue is 
> full
> --
>
> Key: FLINK-35151
> URL: https://issues.apache.org/jira/browse/FLINK-35151
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
> Environment: I used the master branch to reproduce it.
>Reporter: Xin Gong
>Priority: Major
> Attachments: dumpstack.txt
>
>
> Flink MySQL CDC can get stuck when the binlog split is suspended while the 
> ChangeEventQueue is full.
> The reason is that binlog production is too fast. 
> MySqlSplitReader#suspendBinlogReaderIfNeed executes 
> BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false 
> after MySqlSourceReader receives the binlog split update event.
> When the currentReader is a BinlogSplitReader and 
> MySqlSplitReader#pollSplitRecords finds dataIt to be null, it executes 
> closeBinlogReader. closeBinlogReader calls 
> statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, 
> because BinaryLogClient#connectLock is not released while 
> MySqlStreamingChangeEventSource is adding an element to the full queue.
>  
> To reproduce, set the StatefulTaskContext#queue capacity to 1 and run the UT 
> NewlyAddedTableITCase#testRemoveAndAddNewTable.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35028][runtime] Timer firing under async execution model [flink]

2024-04-18 Thread via GitHub


fredia commented on PR #24672:
URL: https://github.com/apache/flink/pull/24672#issuecomment-2063204411

   @Zakelly @yunfengzhou-hub Thanks for the detailed review, I updated the PR 
and addressed some comments, would you please take a look again?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-35097) Table API Filesystem connector with 'raw' format repeats last line

2024-04-18 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-35097:


Assignee: Kumar Mallikarjuna

> Table API Filesystem connector with 'raw' format repeats last line
> --
>
> Key: FLINK-35097
> URL: https://issues.apache.org/jira/browse/FLINK-35097
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.17.1
> Environment: I ran the above test with 1.17.1. I checked for existing 
> bug tickets and release notes, but did not find anything, so assuming this 
> affects 1.18 and 1.19.
>Reporter: David Perkins
>Assignee: Kumar Mallikarjuna
>Priority: Major
>  Labels: pull-request-available
>
> When using the Filesystem connector with the 'raw' format to read text data 
> that contains newlines, a row is returned for every line, but every row 
> contains the contents of the last line.
> For example, with the following file.
> {quote}
> line 1
> line 2
> line 3
> {quote}
> And the table definition
> {quote}
> CREATE TABLE MyRawTable (
>   `doc` STRING
> ) WITH (
>   'path' = 'file:///path/to/data',
>   'format' = 'raw',
>   'connector' = 'filesystem'
> );
> {quote}
> Selecting `*` from the table produces three rows all with "line 3" for `doc`.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35097][table] Fix 'raw' format deserialization [flink]

2024-04-18 Thread via GitHub


twalthr commented on code in PR #24661:
URL: https://github.com/apache/flink/pull/24661#discussion_r1570166387


##
flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/formats/raw/RawFormatSerDeSchemaTest.java:
##
@@ -197,12 +247,12 @@ public static TestSpec type(DataType fieldType) {
 return new TestSpec(fieldType);
 }
 
-public TestSpec value(Object value) {
-this.value = value;
+public TestSpec values(Object[] values) {

Review Comment:
   Make this a var arg to avoid the need for `new X[]{}` in the test specs; 
this will improve code readability.
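   For illustration, the var-arg version of the builder method from the diff 
above could look like this (a sketch based on the existing body, not the final 
patch):
   
       public TestSpec values(Object... values) {
           // same body as the array-based version; callers can now write values(1, 2, 3)
           // instead of values(new Object[] {1, 2, 3})
           this.values = values;
           return this;
       }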



##
flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/formats/raw/RawFormatSerDeSchemaTest.java:
##
@@ -197,12 +247,12 @@ public static TestSpec type(DataType fieldType) {
 return new TestSpec(fieldType);
 }
 
-public TestSpec value(Object value) {
-this.value = value;
+public TestSpec values(Object[] values) {
+this.values = values;
 return this;
 }
 
-public TestSpec binary(byte[] bytes) {
+public TestSpec binary(byte[][] bytes) {

Review Comment:
   Same as above, make this a var arg.
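   Analogously, a var-arg signature for the binary expectation could look like 
this (a sketch; the field name `expectedBytes` is assumed for illustration and 
is not shown in the diff):
   
       public TestSpec binary(byte[]... bytes) {
           this.expectedBytes = bytes;
           return this;
       }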



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35153) Internal Async State Implementation and StateDescriptor for Map/List State

2024-04-18 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan updated FLINK-35153:

Component/s: Runtime / State Backends

> Internal Async State Implementation and StateDescriptor for Map/List State
> --
>
> Key: FLINK-35153
> URL: https://issues.apache.org/jira/browse/FLINK-35153
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Zakelly Lan
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34574] Add CPU and memory size autoscaler quota [flink-kubernetes-operator]

2024-04-18 Thread via GitHub


gaborgsomogyi commented on code in PR #789:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/789#discussion_r1570203474


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingExecutor.java:
##
@@ -129,8 +133,10 @@ public boolean scaleResource(
 scalingSummaries,
 autoScalerEventHandler);
 
-if (scalingWouldExceedClusterResources(
-configOverrides.newConfigWithOverrides(conf),
+var memoryTuningEnabled = 
conf.get(AutoScalerOptions.MEMORY_TUNING_ENABLED);
+if (scalingWouldExceedMaxResources(
+memoryTuningEnabled ? 
configOverrides.newConfigWithOverrides(conf) : conf,

Review Comment:
   I had forgotten it, but it's added now.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35115][Connectors/Kinesis] Allow kinesis consumer to snapshotState after operator had been cancelled [flink-connector-aws]

2024-04-18 Thread via GitHub


vahmed-hamdy commented on code in PR #138:
URL: 
https://github.com/apache/flink-connector-aws/pull/138#discussion_r1570201177


##
flink-connector-aws/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisConsumerTest.java:
##
@@ -312,6 +316,148 @@ public void testListStateChangedAfterSnapshotState() 
throws Exception {
 }
 }
 
+@Test
+public void testSnapshotStateDuringStopWithSavepoint() throws Exception {
+
+// 
--
+// setup config, initial state and expected state snapshot
+// 
--
+Properties config = TestUtils.getStandardProperties();
+
+ArrayList> initialState = 
new ArrayList<>(1);
+initialState.add(
+Tuple2.of(
+KinesisDataFetcher.convertToStreamShardMetadata(
+new StreamShardHandle(
+"fakeStream1",
+new Shard()
+.withShardId(
+KinesisShardIdGenerator
+
.generateFromShardOrder(0,
+new SequenceNumber("11")));
+
+ArrayList> 
expectedStateSnapshot1 =
+new ArrayList<>(1);
+expectedStateSnapshot1.add(
+Tuple2.of(
+KinesisDataFetcher.convertToStreamShardMetadata(
+new StreamShardHandle(
+"fakeStream1",
+new Shard()
+.withShardId(
+KinesisShardIdGenerator
+
.generateFromShardOrder(0,
+new SequenceNumber("12")));
+ArrayList> 
expectedStateSnapshot2 =
+new ArrayList<>(1);
+expectedStateSnapshot2.add(
+Tuple2.of(
+KinesisDataFetcher.convertToStreamShardMetadata(
+new StreamShardHandle(
+"fakeStream1",
+new Shard()
+.withShardId(
+KinesisShardIdGenerator
+
.generateFromShardOrder(0,
+new SequenceNumber("13")));
+
+// 
--
+// mock operator state backend and initial state for initializeState()
+// 
--
+
+TestingListState> 
listState =
+new TestingListState<>();
+for (Tuple2 state : initialState) 
{
+listState.add(state);
+}
+
+OperatorStateStore operatorStateStore = mock(OperatorStateStore.class);

Review Comment:
   It is against [coding standards to use 
mockito](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#avoid-mockito---use-reusable-test-implementations).
 I am aware that the standard is already broken in this test suite, but I 
believe we shouldn't add more debt.
   Can we try using an [existing test util 
instead](https://github.com/apache/flink/blob/43a3d50ce3982b9abf04b81407fed46c5c25f819/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/utils/MockOperatorStateStore.java#L34)?
   
   Also, we could extend the existing `TestableFlinkKinesisConsumer` hierarchy.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35154) Javadoc aggregate fails

2024-04-18 Thread Dmitriy Linevich (Jira)
Dmitriy Linevich created FLINK-35154:


 Summary: Javadoc aggregate fails
 Key: FLINK-35154
 URL: https://issues.apache.org/jira/browse/FLINK-35154
 Project: Flink
  Issue Type: Bug
Reporter: Dmitriy Linevich


The javadoc plugin fails with a "cannot find symbol" error when using
{code:java}
javadoc:aggregate{code}
ERROR:

!image-2024-04-18-15-20-56-467.png!

 

The javadoc plugin version needs to be increased from 2.9.1 to 2.10.4 (the 
current version has a bug similar to 
[this one|https://github.com/checkstyle/checkstyle/issues/291]).
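For reference, the fix is roughly the following version bump of the plugin 
declaration (a sketch; the exact pom location and surrounding configuration are 
not shown here):
{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <!-- bumped from 2.9.1, which fails on some generated sources -->
  <version>2.10.4</version>
</plugin>
{code}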



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35154) Javadoc aggregate fails

2024-04-18 Thread Dmitriy Linevich (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838529#comment-17838529
 ] 

Dmitriy Linevich commented on FLINK-35154:
--

[~MartijnVisser] [~trohrmann] 

Hi,

Please assign me to this task.

> Javadoc aggregate fails
> ---
>
> Key: FLINK-35154
> URL: https://issues.apache.org/jira/browse/FLINK-35154
> Project: Flink
>  Issue Type: Bug
>Reporter: Dmitriy Linevich
>Priority: Minor
>
> Javadoc plugin fails with error cannot find symbol. Using
> {code:java}
> javadoc:aggregate{code}
> ERROR:
> !image-2024-04-18-15-20-56-467.png!
>  
> Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
> version exists bug same with 
> [this|https://github.com/checkstyle/checkstyle/issues/291])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35154) Javadoc aggregate fails

2024-04-18 Thread Dmitriy Linevich (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Linevich updated FLINK-35154:
-
Description: 
Javadoc plugin fails with error "cannot find symbol". Using
{code:java}
javadoc:aggregate{code}
ERROR:

!image-2024-04-18-15-20-56-467.png!

 

Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
version exists bug same with 
[this|https://github.com/checkstyle/checkstyle/issues/291])

  was:
Javadoc plugin fails with error cannot find symbol. Using
{code:java}
javadoc:aggregate{code}
ERROR:

!image-2024-04-18-15-20-56-467.png!

 

Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
version exists bug same with 
[this|https://github.com/checkstyle/checkstyle/issues/291])


> Javadoc aggregate fails
> ---
>
> Key: FLINK-35154
> URL: https://issues.apache.org/jira/browse/FLINK-35154
> Project: Flink
>  Issue Type: Bug
>Reporter: Dmitriy Linevich
>Priority: Minor
>
> Javadoc plugin fails with error "cannot find symbol". Using
> {code:java}
> javadoc:aggregate{code}
> ERROR:
> !image-2024-04-18-15-20-56-467.png!
>  
> Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
> version exists bug same with 
> [this|https://github.com/checkstyle/checkstyle/issues/291])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33460][Connector/JDBC] Support property authentication connection. [flink-connector-jdbc]

2024-04-18 Thread via GitHub


RocMarshal commented on PR #115:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/115#issuecomment-2063308586

   hi, @snuyanzin @eskabetxe Could you help review this if you have some free 
time?
   Thank you~


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35154) Javadoc aggregate fails

2024-04-18 Thread Dmitriy Linevich (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Linevich updated FLINK-35154:
-
Description: 
Javadoc plugin fails with error "cannot find symbol". Using
{code:java}
javadoc:aggregate{code}
ERROR:
{code:java}
[WARNING] The requested profile "include-hadoop" could not be activated because 
it does not exist.
[WARNING] The requested profile "arm" could not be activated because it does 
not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) on 
project flink-parent: An error has occurred in JavaDocs report generation: 
[ERROR] Exit code: 1 - 
/{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
 error: type GenericEnumSymbol does not take parameters
[ERROR] public enum EventType implements 
org.apache.avro.generic.GenericEnumSymbol {
[ERROR]    {code}
                                                                       ^

 

Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
version exists bug same with 
[this|https://github.com/checkstyle/checkstyle/issues/291])

  was:
Javadoc plugin fails with error "cannot find symbol". Using
{code:java}
javadoc:aggregate{code}
ERROR:

!image-2024-04-18-15-20-56-467.png!

 

Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
version exists bug same with 
[this|https://github.com/checkstyle/checkstyle/issues/291])


> Javadoc aggregate fails
> ---
>
> Key: FLINK-35154
> URL: https://issues.apache.org/jira/browse/FLINK-35154
> Project: Flink
>  Issue Type: Bug
>Reporter: Dmitriy Linevich
>Priority: Minor
>
> Javadoc plugin fails with error "cannot find symbol". Using
> {code:java}
> javadoc:aggregate{code}
> ERROR:
> {code:java}
> [WARNING] The requested profile "include-hadoop" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "arm" could not be activated because it does 
> not exist.
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) 
> on project flink-parent: An error has occurred in JavaDocs report generation: 
> [ERROR] Exit code: 1 - 
> /{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
>  error: type GenericEnumSymbol does not take parameters
> [ERROR] public enum EventType implements 
> org.apache.avro.generic.GenericEnumSymbol {
> [ERROR]    {code}
>                                                                        ^
>  
> Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
> version exists bug same with 
> [this|https://github.com/checkstyle/checkstyle/issues/291])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35115][Connectors/Kinesis] Allow kinesis consumer to snapshotState after operator had been cancelled [flink-connector-aws]

2024-04-18 Thread via GitHub


dannycranmer commented on code in PR #138:
URL: 
https://github.com/apache/flink-connector-aws/pull/138#discussion_r1570265098


##
flink-connector-aws/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisConsumerTest.java:
##
@@ -312,6 +316,148 @@ public void testListStateChangedAfterSnapshotState() 
throws Exception {
 }
 }
 
+@Test
+public void testSnapshotStateDuringStopWithSavepoint() throws Exception {
+
+// 
--
+// setup config, initial state and expected state snapshot
+// 
--
+Properties config = TestUtils.getStandardProperties();
+
+ArrayList> initialState = 
new ArrayList<>(1);
+initialState.add(
+Tuple2.of(
+KinesisDataFetcher.convertToStreamShardMetadata(
+new StreamShardHandle(
+"fakeStream1",
+new Shard()
+.withShardId(
+KinesisShardIdGenerator
+
.generateFromShardOrder(0,
+new SequenceNumber("11")));
+
+ArrayList> 
expectedStateSnapshot1 =
+new ArrayList<>(1);
+expectedStateSnapshot1.add(
+Tuple2.of(
+KinesisDataFetcher.convertToStreamShardMetadata(
+new StreamShardHandle(
+"fakeStream1",
+new Shard()
+.withShardId(
+KinesisShardIdGenerator
+
.generateFromShardOrder(0,
+new SequenceNumber("12")));
+ArrayList> 
expectedStateSnapshot2 =
+new ArrayList<>(1);
+expectedStateSnapshot2.add(
+Tuple2.of(
+KinesisDataFetcher.convertToStreamShardMetadata(
+new StreamShardHandle(
+"fakeStream1",
+new Shard()
+.withShardId(
+KinesisShardIdGenerator
+
.generateFromShardOrder(0,
+new SequenceNumber("13")));
+
+// 
--
+// mock operator state backend and initial state for initializeState()
+// 
--
+
+TestingListState> 
listState =
+new TestingListState<>();
+for (Tuple2 state : initialState) 
{
+listState.add(state);
+}
+
+OperatorStateStore operatorStateStore = mock(OperatorStateStore.class);

Review Comment:
   Since Mockito is already used in this class, and this connector is on the 
deprecation path, I am ok with using it. However, the test is very long and it 
is hard to know what is going on. Can we break it out into methods and reduce 
the complexity of the test case for readability?
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-35124) Connector Release Fails to run Checkstyle

2024-04-18 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-35124:
-

Assignee: Danny Cranmer

> Connector Release Fails to run Checkstyle
> -
>
> Key: FLINK-35124
> URL: https://issues.apache.org/jira/browse/FLINK-35124
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>
> During a release of the AWS connectors the build was failing at the 
> \{{./tools/releasing/shared/stage_jars.sh}} step due to a checkstyle error.
>  
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:3.1.2:check (validate) on 
> project flink-connector-aws: Failed during checkstyle execution: Unable to 
> find suppressions file at location: /tools/maven/suppressions.xml: Could not 
> find resource '/tools/maven/suppressions.xml'. -> [Help 1] {code}
>  
> Looks like it is caused by this 
> [https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35088) watermark alignment maxAllowedWatermarkDrift and updateInterval param need check

2024-04-18 Thread elon_X (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838534#comment-17838534
 ] 

elon_X commented on FLINK-35088:


[~martijnvisser] [~masc]  I'm sorry for the late reply. I have conducted a 
retest based on Flink version 1.18 and found that the problem still persists. 
Then, I checked the latest code on the main Flink branch and found that there 
is no validation for these two parameters. What are your thoughts on this?

> watermark alignment maxAllowedWatermarkDrift and updateInterval param need 
> check
> 
>
> Key: FLINK-35088
> URL: https://issues.apache.org/jira/browse/FLINK-35088
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core, Runtime / Coordination
>Affects Versions: 1.16.1
>Reporter: elon_X
>Priority: Major
> Attachments: image-2024-04-11-20-12-29-951.png
>
>
> When I use watermark alignment:
> 1. Setting maxAllowedWatermarkDrift to a negative number initially led me to 
> believe it could be used to delay consumption of the source, so I tried it. 
> The upstream data flow then hung indefinitely.
> Root cause:
> {code:java}
> long maxAllowedWatermark = globalCombinedWatermark.getTimestamp()
>     + watermarkAlignmentParams.getMaxAllowedWatermarkDrift(); {code}
> If maxAllowedWatermarkDrift is negative, then in SourceOperator 
> maxAllowedWatermark < lastEmittedWatermark, and the SourceReader will be 
> blocked indefinitely and cannot recover.
> I'm not sure whether this is a supported feature of watermark alignment. If 
> it's not, additional parameter validation should be implemented to throw an 
> exception on the client side if the value is negative.
> 2. The updateInterval parameter also lacks validation. If I set it to 0, the 
> job manager throws an exception when starting the job. The validation is 
> performed by the JDK class java.util.concurrent.ScheduledThreadPoolExecutor, 
> which throws the exception.
> {code:java}
> java.lang.IllegalArgumentException: null
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleAtFixedRate(ScheduledThreadPoolExecutor.java:565)
>  ~[?:1.8.0_351]
>   at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinator.(SourceCoordinator.java:191)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProvider.getCoordinator(SourceCoordinatorProvider.java:92)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$DeferrableCoordinator.createNewInternalCoordinator(RecreateOnResetOperatorCoordinator.java:333)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.(RecreateOnResetOperatorCoordinator.java:59)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.(RecreateOnResetOperatorCoordinator.java:42)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$Provider.create(RecreateOnResetOperatorCoordinator.java:201)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$Provider.create(RecreateOnResetOperatorCoordinator.java:195)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.create(OperatorCoordinatorHolder.java:529)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.create(OperatorCoordinatorHolder.java:494)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.createOperatorCoordinatorHolder(ExecutionJobVertex.java:286)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.initialize(ExecutionJobVertex.java:223)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertex(DefaultExecutionGraph.java:901)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertices(DefaultExecutionGraph.java:891)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:848)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGr

[jira] [Commented] (FLINK-35124) Connector Release Fails to run Checkstyle

2024-04-18 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838537#comment-17838537
 ] 

Danny Cranmer commented on FLINK-35124:
---

This resulted in bad source archives being generated for JDBC and MongoDB 
connectors. I am going to revert this change for now:
 * [https://lists.apache.org/thread/q6dmc5dbz7kcfvpo99pj2sh5mzhffgl5]
 * [https://lists.apache.org/thread/s18g7obgp4sbdtl73571976vqvy1ftk8]

> Connector Release Fails to run Checkstyle
> -
>
> Key: FLINK-35124
> URL: https://issues.apache.org/jira/browse/FLINK-35124
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>
> During a release of the AWS connectors the build was failing at the 
> \{{./tools/releasing/shared/stage_jars.sh}} step due to a checkstyle error.
>  
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:3.1.2:check (validate) on 
> project flink-connector-aws: Failed during checkstyle execution: Unable to 
> find suppressions file at location: /tools/maven/suppressions.xml: Could not 
> find resource '/tools/maven/suppressions.xml'. -> [Help 1] {code}
>  
> Looks like it is caused by this 
> [https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35124] Include Maven build configuration in the pristine source clone [flink-connector-shared-utils]

2024-04-18 Thread via GitHub


dannycranmer opened a new pull request, #40:
URL: https://github.com/apache/flink-connector-shared-utils/pull/40

   Reverting 
https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a
 to ensure Maven build configuration is included in the source archive 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35124) Connector Release Fails to run Checkstyle

2024-04-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35124:
---
Labels: pull-request-available  (was: )

> Connector Release Fails to run Checkstyle
> -
>
> Key: FLINK-35124
> URL: https://issues.apache.org/jira/browse/FLINK-35124
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> During a release of the AWS connectors the build was failing at the 
> \{{./tools/releasing/shared/stage_jars.sh}} step due to a checkstyle error.
>  
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:3.1.2:check (validate) on 
> project flink-connector-aws: Failed during checkstyle execution: Unable to 
> find suppressions file at location: /tools/maven/suppressions.xml: Could not 
> find resource '/tools/maven/suppressions.xml'. -> [Help 1] {code}
>  
> Looks like it is caused by this 
> [https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35002) GitHub action request timeout to ArtifactService

2024-04-18 Thread Ryan Skraba (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Skraba updated FLINK-35002:

Summary: GitHub action request timeout  to ArtifactService  (was: GitHub 
action/upload-artifact@v4 can timeout)

> GitHub action request timeout  to ArtifactService
> -
>
> Key: FLINK-35002
> URL: https://issues.apache.org/jira/browse/FLINK-35002
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Ryan Skraba
>Priority: Major
>  Labels: github-actions, test-stability
>
> A timeout can occur when uploading a successfully built artifact:
>  * [https://github.com/apache/flink/actions/runs/8516411871/job/23325392650]
> {code:java}
> 2024-04-02T02:20:15.6355368Z With the provided path, there will be 1 file 
> uploaded
> 2024-04-02T02:20:15.6360133Z Artifact name is valid!
> 2024-04-02T02:20:15.6362872Z Root directory input is valid!
> 2024-04-02T02:20:20.6975036Z Attempt 1 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 3000 ms...
> 2024-04-02T02:20:28.7084937Z Attempt 2 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 4785 ms...
> 2024-04-02T02:20:38.5015936Z Attempt 3 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 7375 ms...
> 2024-04-02T02:20:50.8901508Z Attempt 4 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 14988 ms...
> 2024-04-02T02:21:10.9028438Z ##[error]Failed to CreateArtifact: Failed to 
> make request after 5 attempts: Request timeout: 
> /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact
> 2024-04-02T02:22:59.9893296Z Post job cleanup.
> 2024-04-02T02:22:59.9958844Z Post job cleanup. {code}
> (This is unlikely to be something we can fix, but we can track it.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35154) Javadoc aggregate fails

2024-04-18 Thread Dmitriy Linevich (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Linevich updated FLINK-35154:
-
Description: 
Javadoc plugin fails with error. Using
{code:java}
javadoc:aggregate{code}
ERROR:
{code:java}
[WARNING] The requested profile "include-hadoop" could not be activated because 
it does not exist.
[WARNING] The requested profile "arm" could not be activated because it does 
not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) on 
project flink-parent: An error has occurred in JavaDocs report generation: 
[ERROR] Exit code: 1 - 
/{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
 error: type GenericEnumSymbol does not take parameters
[ERROR] public enum EventType implements 
org.apache.avro.generic.GenericEnumSymbol {
[ERROR]    {code}
 

For our flink 1.17 ERROR
{code:java}
[ERROR] 
/{flinkProjectDir}/flink-connectors/flink-sql-connector-hive-3.1.3/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java:21:
 error: cannot find symbol
[ERROR] import static 
org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_NAME;
[ERROR] ^
[ERROR]   symbol:   static DEFAULT_DATABASE_NAME
[ERROR]   location: class{code}
 

Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
version exists bug same with 
[this|https://github.com/checkstyle/checkstyle/issues/291])

  was:
Javadoc plugin fails with error. Using
{code:java}
javadoc:aggregate{code}
ERROR:
{code:java}
[WARNING] The requested profile "include-hadoop" could not be activated because 
it does not exist.
[WARNING] The requested profile "arm" could not be activated because it does 
not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) on 
project flink-parent: An error has occurred in JavaDocs report generation: 
[ERROR] Exit code: 1 - 
/{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
 error: type GenericEnumSymbol does not take parameters
[ERROR] public enum EventType implements 
org.apache.avro.generic.GenericEnumSymbol {
[ERROR]    {code}
 

For our flink 1.17 ERROR

 
{code:java}
[ERROR] 
/{flinkProjectDir}/flink-connectors/flink-sql-connector-hive-3.1.3/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java:21:
 error: cannot find symbol
[ERROR] import static 
org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_NAME;
[ERROR] ^
[ERROR]   symbol:   static DEFAULT_DATABASE_NAME
[ERROR]   location: class{code}
 

Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
version exists bug same with 
[this|https://github.com/checkstyle/checkstyle/issues/291])


> Javadoc aggregate fails
> ---
>
> Key: FLINK-35154
> URL: https://issues.apache.org/jira/browse/FLINK-35154
> Project: Flink
>  Issue Type: Bug
>Reporter: Dmitriy Linevich
>Priority: Minor
>
> Javadoc plugin fails with error. Using
> {code:java}
> javadoc:aggregate{code}
> ERROR:
> {code:java}
> [WARNING] The requested profile "include-hadoop" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "arm" could not be activated because it does 
> not exist.
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) 
> on project flink-parent: An error has occurred in JavaDocs report generation: 
> [ERROR] Exit code: 1 - 
> /{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
>  error: type GenericEnumSymbol does not take parameters
> [ERROR] public enum EventType implements 
> org.apache.avro.generic.GenericEnumSymbol {
> [ERROR]    {code}
>  
> For our flink 1.17 ERROR
> {code:java}
> [ERROR] 
> /{flinkProjectDir}/flink-connectors/flink-sql-connector-hive-3.1.3/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java:21:
>  error: cannot find symbol
> [ERROR] import static 
> org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_NAME;
> [ERROR] ^
> [ERROR]   symbol:   static DEFAULT_DATABASE_NAME
> [ERROR]   location: class{code}
>  
> Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
> version exists bug same with 
> [this|https://github.com/checkstyle/checkstyle/issues/291])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35154) Javadoc aggregate fails

2024-04-18 Thread Dmitriy Linevich (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Linevich updated FLINK-35154:
-
Description: 
Javadoc plugin fails with error "cannot find symbol". Using
{code:java}
javadoc:aggregate{code}
ERROR:
{code:java}
[WARNING] The requested profile "include-hadoop" could not be activated because 
it does not exist.
[WARNING] The requested profile "arm" could not be activated because it does 
not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) on 
project flink-parent: An error has occurred in JavaDocs report generation: 
[ERROR] Exit code: 1 - 
/{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
 error: type GenericEnumSymbol does not take parameters
[ERROR] public enum EventType implements 
org.apache.avro.generic.GenericEnumSymbol {
[ERROR]    {code}
 

For our flink 1.17 ERROR

 
{code:java}
[ERROR] 
/{flinkProjectDir}/flink-connectors/flink-sql-connector-hive-3.1.3/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java:21:
 error: cannot find symbol
[ERROR] import static 
org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_NAME;
[ERROR] ^
[ERROR]   symbol:   static DEFAULT_DATABASE_NAME
[ERROR]   location: class{code}
 

Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
version exists bug same with 
[this|https://github.com/checkstyle/checkstyle/issues/291])

  was:
Javadoc plugin fails with error "cannot find symbol". Using
{code:java}
javadoc:aggregate{code}
ERROR:
{code:java}
[WARNING] The requested profile "include-hadoop" could not be activated because 
it does not exist.
[WARNING] The requested profile "arm" could not be activated because it does 
not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) on 
project flink-parent: An error has occurred in JavaDocs report generation: 
[ERROR] Exit code: 1 - 
/{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
 error: type GenericEnumSymbol does not take parameters
[ERROR] public enum EventType implements 
org.apache.avro.generic.GenericEnumSymbol {
[ERROR]    {code}
                                                                       ^

 

Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
version exists bug same with 
[this|https://github.com/checkstyle/checkstyle/issues/291])


> Javadoc aggregate fails
> ---
>
> Key: FLINK-35154
> URL: https://issues.apache.org/jira/browse/FLINK-35154
> Project: Flink
>  Issue Type: Bug
>Reporter: Dmitriy Linevich
>Priority: Minor
>
> Javadoc plugin fails with error "cannot find symbol". Using
> {code:java}
> javadoc:aggregate{code}
> ERROR:
> {code:java}
> [WARNING] The requested profile "include-hadoop" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "arm" could not be activated because it does 
> not exist.
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) 
> on project flink-parent: An error has occurred in JavaDocs report generation: 
> [ERROR] Exit code: 1 - 
> /{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
>  error: type GenericEnumSymbol does not take parameters
> [ERROR] public enum EventType implements 
> org.apache.avro.generic.GenericEnumSymbol {
> [ERROR]    {code}
>  
> For our flink 1.17 ERROR
>  
> {code:java}
> [ERROR] 
> /{flinkProjectDir}/flink-connectors/flink-sql-connector-hive-3.1.3/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java:21:
>  error: cannot find symbol
> [ERROR] import static 
> org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_NAME;
> [ERROR] ^
> [ERROR]   symbol:   static DEFAULT_DATABASE_NAME
> [ERROR]   location: class{code}
>  
> Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
> version exists bug same with 
> [this|https://github.com/checkstyle/checkstyle/issues/291])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35154) Javadoc aggregate fails

2024-04-18 Thread Dmitriy Linevich (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Linevich updated FLINK-35154:
-
Description: 
Javadoc plugin fails with error. Using
{code:java}
javadoc:aggregate{code}
ERROR:
{code:java}
[WARNING] The requested profile "include-hadoop" could not be activated because 
it does not exist.
[WARNING] The requested profile "arm" could not be activated because it does 
not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) on 
project flink-parent: An error has occurred in JavaDocs report generation: 
[ERROR] Exit code: 1 - 
/{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
 error: type GenericEnumSymbol does not take parameters
[ERROR] public enum EventType implements 
org.apache.avro.generic.GenericEnumSymbol {
[ERROR]    {code}
 

For our flink 1.17 ERROR

 
{code:java}
[ERROR] 
/{flinkProjectDir}/flink-connectors/flink-sql-connector-hive-3.1.3/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java:21:
 error: cannot find symbol
[ERROR] import static 
org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_NAME;
[ERROR] ^
[ERROR]   symbol:   static DEFAULT_DATABASE_NAME
[ERROR]   location: class{code}
 

Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
version exists bug same with 
[this|https://github.com/checkstyle/checkstyle/issues/291])

  was:
Javadoc plugin fails with error "cannot find symbol". Using
{code:java}
javadoc:aggregate{code}
ERROR:
{code:java}
[WARNING] The requested profile "include-hadoop" could not be activated because 
it does not exist.
[WARNING] The requested profile "arm" could not be activated because it does 
not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) on 
project flink-parent: An error has occurred in JavaDocs report generation: 
[ERROR] Exit code: 1 - 
/{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
 error: type GenericEnumSymbol does not take parameters
[ERROR] public enum EventType implements 
org.apache.avro.generic.GenericEnumSymbol {
[ERROR]    {code}
 

For our flink 1.17 ERROR

 
{code:java}
[ERROR] 
/{flinkProjectDir}/flink-connectors/flink-sql-connector-hive-3.1.3/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java:21:
 error: cannot find symbol
[ERROR] import static 
org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_NAME;
[ERROR] ^
[ERROR]   symbol:   static DEFAULT_DATABASE_NAME
[ERROR]   location: class{code}
 

Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
version exists bug same with 
[this|https://github.com/checkstyle/checkstyle/issues/291])


> Javadoc aggregate fails
> ---
>
> Key: FLINK-35154
> URL: https://issues.apache.org/jira/browse/FLINK-35154
> Project: Flink
>  Issue Type: Bug
>Reporter: Dmitriy Linevich
>Priority: Minor
>
> Javadoc plugin fails with error. Using
> {code:java}
> javadoc:aggregate{code}
> ERROR:
> {code:java}
> [WARNING] The requested profile "include-hadoop" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "arm" could not be activated because it does 
> not exist.
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate (default-cli) 
> on project flink-parent: An error has occurred in JavaDocs report generation: 
> [ERROR] Exit code: 1 - 
> /{flinkProjectDir}/flink-end-to-end-tests/flink-confluent-schema-registry/target/generated-sources/example/avro/EventType.java:8:
>  error: type GenericEnumSymbol does not take parameters
> [ERROR] public enum EventType implements 
> org.apache.avro.generic.GenericEnumSymbol {
> [ERROR]    {code}
>  
> For our flink 1.17 ERROR
>  
> {code:java}
> [ERROR] 
> /{flinkProjectDir}/flink-connectors/flink-sql-connector-hive-3.1.3/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java:21:
>  error: cannot find symbol
> [ERROR] import static 
> org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_NAME;
> [ERROR] ^
> [ERROR]   symbol:   static DEFAULT_DATABASE_NAME
> [ERROR]   location: class{code}
>  
> Need to increase version of javadoc plugin from 2.9.1 to 2.10.4 (In current 
> version exists bug same with 
> [this|https://github.com/checkstyle/checkstyle/issues/291])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-35002) GitHub action request timeout to ArtifactService

2024-04-18 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838541#comment-17838541
 ] 

Ryan Skraba edited comment on FLINK-35002 at 4/18/24 8:55 AM:
--

I changed the title; requests to the artifact service can timeout during other 
stages of the build than just uploading, and it seems like the same network 
issue.

1.19 Java 17 / E2E (group 2) 
https://github.com/apache/flink/commit/a2c3d27f5dced2ba73307e8230cd07a11b26c401/checks/23956874905/logs
 

During **Download build artifacts from compile job**:

{code:java}
2024-04-18T02:20:57.1951531Z ##[group]Run actions/download-artifact@v4
2024-04-18T02:20:57.1952046Z with:
2024-04-18T02:20:57.1952529Z   name: build-artifacts-nightly-beta-java17-229
2024-04-18T02:20:57.1953033Z   path: /home/runner/work/flink/flink
2024-04-18T02:20:57.1953552Z   merge-multiple: false
2024-04-18T02:20:57.1953902Z   repository: apache/flink
2024-04-18T02:20:57.1954306Z   run-id: 8731358696
2024-04-18T02:20:57.1954704Z env:
2024-04-18T02:20:57.1955038Z   MOUNTED_WORKING_DIR: /__w/flink/flink
2024-04-18T02:20:57.1955517Z   CONTAINER_LOCAL_WORKING_DIR: /root/flink
2024-04-18T02:20:57.1956053Z   FLINK_ARTIFACT_DIR: /home/runner/work/flink/flink
2024-04-18T02:20:57.1956590Z   FLINK_ARTIFACT_FILENAME: flink_artifacts.tar.gz
2024-04-18T02:20:57.1957184Z   MAVEN_REPO_FOLDER: 
/home/runner/work/flink/flink/.m2/repository
2024-04-18T02:20:57.1957943Z   MAVEN_ARGS: 
-Dmaven.repo.local=/home/runner/work/flink/flink/.m2/repository
2024-04-18T02:20:57.1958748Z   DOCKER_IMAGES_CACHE_FOLDER: 
/home/runner/work/flink/flink/.docker-cache
2024-04-18T02:20:57.1959387Z   GHA_JOB_TIMEOUT: 310
2024-04-18T02:20:57.1959867Z   E2E_CACHE_FOLDER: 
/home/runner/work/flink/flink/.e2e-cache
2024-04-18T02:20:57.1960480Z   E2E_TARBALL_CACHE: 
/home/runner/work/flink/flink/.e2e-tar-cache
2024-04-18T02:20:57.1961116Z   GHA_PIPELINE_START_TIME: 2024-04-18 
02:19:06+00:00
2024-04-18T02:20:57.1961649Z   JAVA_HOME: /usr/lib/jvm/temurin-17-jdk-amd64
2024-04-18T02:20:57.1963231Z   PATH: 
/usr/lib/jvm/temurin-17-jdk-amd64/bin:/snap/bin:/home/runner/.local/bin:/opt/pipx_bin:/home/runner/.cargo/bin:/home/runner/.config/composer/vendor/bin:/usr/local/.ghcup/bin:/home/runner/.dotnet/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2024-04-18T02:20:57.1964849Z ##[endgroup]
2024-04-18T02:20:57.3842499Z Downloading single artifact
2024-04-18T02:21:02.4187408Z Attempt 1 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts. Retrying 
request in 3000 ms...
2024-04-18T02:21:10.4281352Z Attempt 2 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts. Retrying 
request in 4605 ms...
2024-04-18T02:21:20.0388024Z Attempt 3 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts. Retrying 
request in 8717 ms...
2024-04-18T02:21:33.7715121Z Attempt 4 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts. Retrying 
request in 12219 ms...
2024-04-18T02:21:51.0125881Z ##[error]Unable to download artifact(s): Failed to 
ListArtifacts: Failed to make request after 5 attempts: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts
{code}
 


was (Author: ryanskraba):
I changed the title; requests to the artifact service can time out during 
stages of the build other than just uploading, and it seems to be the same 
network issue.

During **Download build artifacts from compile job**:

{code:java}
2024-04-18T02:20:57.1951531Z ##[group]Run actions/download-artifact@v4
2024-04-18T02:20:57.1952046Z with:
2024-04-18T02:20:57.1952529Z   name: build-artifacts-nightly-beta-java17-229
2024-04-18T02:20:57.1953033Z   path: /home/runner/work/flink/flink
2024-04-18T02:20:57.1953552Z   merge-multiple: false
2024-04-18T02:20:57.1953902Z   repository: apache/flink
2024-04-18T02:20:57.1954306Z   run-id: 8731358696
2024-04-18T02:20:57.1954704Z env:
2024-04-18T02:20:57.1955038Z   MOUNTED_WORKING_DIR: /__w/flink/flink
2024-04-18T02:20:57.1955517Z   CONTAINER_LOCAL_WORKING_DIR: /root/flink
2024-04-18T02:20:57.1956053Z   FLINK_ARTIFACT_DIR: /home/runner/work/flink/flink
2024-04-18T02:20:57.1956590Z   FLINK_ARTIFACT_FILENAME: flink_artifacts.tar.gz
2024-04-18T02:20:57.1957184Z   MAVEN_REPO_FOLDER: 
/home/runner/work/flink/flink/.m2/repository
2024-04-18T02:20:57.1957943Z   MAVEN_ARGS: 
-Dmaven.repo.local=/home/runner/work/flink/flink/.m2/repository
2024-04-18T02:20:57.1958748Z   DOCKER_IMAGES_CACHE_FOLDER: 
/home/runner/work/flink/flink/.docker-cache
2024-04-18T02:20:57.1959387Z   GHA_JOB_TIMEOUT: 310
2024-04-18T02:20:57.1959867Z   E2E_CACHE_FOLDER: 
/home/runner/work/flink/flink/.e2e-cache
2024-04-18T02:20:57.1960480Z  

[jira] [Commented] (FLINK-35002) GitHub action request timeout to ArtifactService

2024-04-18 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838541#comment-17838541
 ] 

Ryan Skraba commented on FLINK-35002:
-

I changed the title; requests to the artifact service can time out during 
stages of the build other than just uploading, and it seems to be the same 
network issue.

During **Download build artifacts from compile job**:

{code:java}
2024-04-18T02:20:57.1951531Z ##[group]Run actions/download-artifact@v4
2024-04-18T02:20:57.1952046Z with:
2024-04-18T02:20:57.1952529Z   name: build-artifacts-nightly-beta-java17-229
2024-04-18T02:20:57.1953033Z   path: /home/runner/work/flink/flink
2024-04-18T02:20:57.1953552Z   merge-multiple: false
2024-04-18T02:20:57.1953902Z   repository: apache/flink
2024-04-18T02:20:57.1954306Z   run-id: 8731358696
2024-04-18T02:20:57.1954704Z env:
2024-04-18T02:20:57.1955038Z   MOUNTED_WORKING_DIR: /__w/flink/flink
2024-04-18T02:20:57.1955517Z   CONTAINER_LOCAL_WORKING_DIR: /root/flink
2024-04-18T02:20:57.1956053Z   FLINK_ARTIFACT_DIR: /home/runner/work/flink/flink
2024-04-18T02:20:57.1956590Z   FLINK_ARTIFACT_FILENAME: flink_artifacts.tar.gz
2024-04-18T02:20:57.1957184Z   MAVEN_REPO_FOLDER: 
/home/runner/work/flink/flink/.m2/repository
2024-04-18T02:20:57.1957943Z   MAVEN_ARGS: 
-Dmaven.repo.local=/home/runner/work/flink/flink/.m2/repository
2024-04-18T02:20:57.1958748Z   DOCKER_IMAGES_CACHE_FOLDER: 
/home/runner/work/flink/flink/.docker-cache
2024-04-18T02:20:57.1959387Z   GHA_JOB_TIMEOUT: 310
2024-04-18T02:20:57.1959867Z   E2E_CACHE_FOLDER: 
/home/runner/work/flink/flink/.e2e-cache
2024-04-18T02:20:57.1960480Z   E2E_TARBALL_CACHE: 
/home/runner/work/flink/flink/.e2e-tar-cache
2024-04-18T02:20:57.1961116Z   GHA_PIPELINE_START_TIME: 2024-04-18 
02:19:06+00:00
2024-04-18T02:20:57.1961649Z   JAVA_HOME: /usr/lib/jvm/temurin-17-jdk-amd64
2024-04-18T02:20:57.1963231Z   PATH: 
/usr/lib/jvm/temurin-17-jdk-amd64/bin:/snap/bin:/home/runner/.local/bin:/opt/pipx_bin:/home/runner/.cargo/bin:/home/runner/.config/composer/vendor/bin:/usr/local/.ghcup/bin:/home/runner/.dotnet/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2024-04-18T02:20:57.1964849Z ##[endgroup]
2024-04-18T02:20:57.3842499Z Downloading single artifact
2024-04-18T02:21:02.4187408Z Attempt 1 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts. Retrying 
request in 3000 ms...
2024-04-18T02:21:10.4281352Z Attempt 2 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts. Retrying 
request in 4605 ms...
2024-04-18T02:21:20.0388024Z Attempt 3 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts. Retrying 
request in 8717 ms...
2024-04-18T02:21:33.7715121Z Attempt 4 of 5 failed with error: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts. Retrying 
request in 12219 ms...
2024-04-18T02:21:51.0125881Z ##[error]Unable to download artifact(s): Failed to 
ListArtifacts: Failed to make request after 5 attempts: Request timeout: 
/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts
{code}
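
The log shows the action's own retry policy: five attempts with a growing, 
jittered delay between them (3000 ms, ~4600 ms, ~8700 ms, ~12200 ms). For 
anyone who wants to wrap their own flaky network calls the same way, here is a 
minimal Java sketch of that pattern; the helper class, attempt count and base 
delay are illustrative assumptions and are not part of the GitHub action 
itself.

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

/** Illustrative retry helper mirroring the pattern visible in the log above. */
final class RetryWithBackoff {

    /** Runs the action (maxAttempts >= 1), retrying with a growing, jittered delay. */
    static <T> T call(Callable<T> action, int maxAttempts, long baseDelayMs) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt == maxAttempts) {
                    break;
                }
                // Exponential backoff with up to +25% jitter,
                // e.g. ~3000 ms, ~6000 ms, ~12000 ms with baseDelayMs = 3000.
                long delay = baseDelayMs << (attempt - 1);
                long jitter = (long) (delay * 0.25 * ThreadLocalRandom.current().nextDouble());
                Thread.sleep(delay + jitter);
            }
        }
        throw last;
    }
}
{code}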
 

> GitHub action request timeout  to ArtifactService
> -
>
> Key: FLINK-35002
> URL: https://issues.apache.org/jira/browse/FLINK-35002
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Ryan Skraba
>Priority: Major
>  Labels: github-actions, test-stability
>
> A timeout can occur when uploading a successfully built artifact:
>  * [https://github.com/apache/flink/actions/runs/8516411871/job/23325392650]
> {code:java}
> 2024-04-02T02:20:15.6355368Z With the provided path, there will be 1 file 
> uploaded
> 2024-04-02T02:20:15.6360133Z Artifact name is valid!
> 2024-04-02T02:20:15.6362872Z Root directory input is valid!
> 2024-04-02T02:20:20.6975036Z Attempt 1 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 3000 ms...
> 2024-04-02T02:20:28.7084937Z Attempt 2 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 4785 ms...
> 2024-04-02T02:20:38.5015936Z Attempt 3 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 7375 ms...
> 2024-04-02T02:20:50.8901508Z Attempt 4 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 14988 ms...
> 2024-04-02T02:21:10.9028438Z ##[error]Failed to CreateArtifact: Failed to 
> make request after 5 attempts: Request timeout

[jira] [Commented] (FLINK-35088) watermark alignment maxAllowedWatermarkDrift and updateInterval param need check

2024-04-18 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838543#comment-17838543
 ] 

Martijn Visser commented on FLINK-35088:


[~fanrui] What are your thoughts on this?

> watermark alignment maxAllowedWatermarkDrift and updateInterval param need 
> check
> 
>
> Key: FLINK-35088
> URL: https://issues.apache.org/jira/browse/FLINK-35088
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core, Runtime / Coordination
>Affects Versions: 1.16.1
>Reporter: elon_X
>Priority: Major
> Attachments: image-2024-04-11-20-12-29-951.png
>
>
> When I use watermark alignment,
> 1. I initially believed that setting maxAllowedWatermarkDrift to a negative 
> number could be used to delay consumption of the source, so I tried it. The 
> upstream data flow then hung indefinitely.
> Root cause:
> {code:java}
> long maxAllowedWatermark = globalCombinedWatermark.getTimestamp()             
>     + watermarkAlignmentParams.getMaxAllowedWatermarkDrift();  {code}
> If maxAllowedWatermarkDrift is negative, then in the SourceOperator 
> maxAllowedWatermark < lastEmittedWatermark, so the SourceReader will be 
> blocked indefinitely and cannot recover.
> I'm not sure if this is a supported feature of watermark alignment. If it's 
> not, I think an additional parameter validation should be implemented to 
> throw an exception on the client side if the value is negative.
> 2. The updateInterval parameter also lacks validation. If I set it to 0, an 
> exception is thrown when the job manager starts the job; the JDK class 
> java.util.concurrent.ScheduledThreadPoolExecutor performs the validation and 
> throws the exception.
> {code:java}
> java.lang.IllegalArgumentException: null
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleAtFixedRate(ScheduledThreadPoolExecutor.java:565)
>  ~[?:1.8.0_351]
>   at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinator.(SourceCoordinator.java:191)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProvider.getCoordinator(SourceCoordinatorProvider.java:92)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$DeferrableCoordinator.createNewInternalCoordinator(RecreateOnResetOperatorCoordinator.java:333)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.(RecreateOnResetOperatorCoordinator.java:59)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.(RecreateOnResetOperatorCoordinator.java:42)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$Provider.create(RecreateOnResetOperatorCoordinator.java:201)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$Provider.create(RecreateOnResetOperatorCoordinator.java:195)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.create(OperatorCoordinatorHolder.java:529)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.create(OperatorCoordinatorHolder.java:494)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.createOperatorCoordinatorHolder(ExecutionJobVertex.java:286)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.initialize(ExecutionJobVertex.java:223)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertex(DefaultExecutionGraph.java:901)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertices(DefaultExecutionGraph.java:891)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:848)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:830)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:203)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
>   at 
> org.apache
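
Both points above amount to validating the two parameters before the job is 
submitted. A minimal sketch of such a client-side check is shown below; the 
class and method names are made up for illustration and are not the actual 
Flink API.

{code:java}
import java.time.Duration;

/** Illustrative client-side validation for the two parameters discussed above. */
final class WatermarkAlignmentParamsValidator {

    static void validate(Duration maxAllowedWatermarkDrift, Duration updateInterval) {
        if (maxAllowedWatermarkDrift.isNegative()) {
            throw new IllegalArgumentException(
                    "maxAllowedWatermarkDrift must be >= 0 but was "
                            + maxAllowedWatermarkDrift
                            + "; a negative drift leaves the SourceReader blocked forever.");
        }
        if (updateInterval.isZero() || updateInterval.isNegative()) {
            throw new IllegalArgumentException(
                    "updateInterval must be > 0 but was "
                            + updateInterval
                            + "; ScheduledThreadPoolExecutor rejects a non-positive period.");
        }
    }
}
{code}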

[jira] [Commented] (FLINK-32523) NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted fails with timeout on AZP

2024-04-18 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838550#comment-17838550
 ] 

Ryan Skraba commented on FLINK-32523:
-

1.19 AdaptiveScheduler / Test (module: tests) 
https://github.com/apache/flink/actions/runs/8731358221/job/23956908384#step:10:8332

> NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted fails with timeout 
> on AZP
> ---
>
> Key: FLINK-32523
> URL: https://issues.apache.org/jira/browse/FLINK-32523
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.2, 1.18.0, 1.17.1, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Hangxiang Yu
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
> Attachments: failure.log
>
>
> This build
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50795&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=0c010d0c-3dec-5bf1-d408-7b18988b1b2b&l=8638
>  fails with timeout
> {noformat}
> Jul 03 01:26:35 org.junit.runners.model.TestTimedOutException: test timed out 
> after 10 milliseconds
> Jul 03 01:26:35   at java.lang.Object.wait(Native Method)
> Jul 03 01:26:35   at java.lang.Object.wait(Object.java:502)
> Jul 03 01:26:35   at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:61)
> Jul 03 01:26:35   at 
> org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.verifyAllOperatorsNotifyAborted(NotifyCheckpointAbortedITCase.java:198)
> Jul 03 01:26:35   at 
> org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted(NotifyCheckpointAbortedITCase.java:189)
> Jul 03 01:26:35   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 03 01:26:35   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 03 01:26:35   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 03 01:26:35   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 03 01:26:35   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 03 01:26:35   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Jul 03 01:26:35   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jul 03 01:26:35   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34224) ChangelogStorageMetricsTest.testAttemptsPerUpload(ChangelogStorageMetricsTest timed out

2024-04-18 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838552#comment-17838552
 ] 

Ryan Skraba commented on FLINK-34224:
-

1.20 Hadoop 3.1.3 / Test (module: core) 
https://github.com/apache/flink/actions/runs/8731358306/job/23956935029#step:10:12643

> ChangelogStorageMetricsTest.testAttemptsPerUpload(ChangelogStorageMetricsTest 
> timed out
> ---
>
> Key: FLINK-34224
> URL: https://issues.apache.org/jira/browse/FLINK-34224
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: github-actions, test-stability
>
> The timeout appeared in the GitHub Actions workflow (currently in test phase; 
> [FLIP-396|https://cwiki.apache.org/confluence/display/FLINK/FLIP-396%3A+Trial+to+test+GitHub+Actions+as+an+alternative+for+Flink%27s+current+Azure+CI+infrastructure]):
> https://github.com/XComp/flink/actions/runs/7632434859/job/20793613726#step:10:11040
> {code}
> Jan 24 01:38:36 "ForkJoinPool-1-worker-1" #16 daemon prio=5 os_prio=0 
> tid=0x7f3b200ae800 nid=0x406e3 waiting on condition [0x7f3b1ba0e000]
> Jan 24 01:38:36java.lang.Thread.State: WAITING (parking)
> Jan 24 01:38:36   at sun.misc.Unsafe.park(Native Method)
> Jan 24 01:38:36   - parking to wait for  <0xdfbbb358> (a 
> java.util.concurrent.CompletableFuture$Signaller)
> Jan 24 01:38:36   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Jan 24 01:38:36   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Jan 24 01:38:36   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313)
> Jan 24 01:38:36   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Jan 24 01:38:36   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Jan 24 01:38:36   at 
> org.apache.flink.changelog.fs.ChangelogStorageMetricsTest.testAttemptsPerUpload(ChangelogStorageMetricsTest.java:251)
> Jan 24 01:38:36   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35041) IncrementalRemoteKeyedStateHandleTest.testSharedStateReRegistration failed

2024-04-18 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838551#comment-17838551
 ] 

Ryan Skraba commented on FLINK-35041:
-

* 1.20 Java 8 / Test (module: core) 
[https://github.com/apache/flink/actions/runs/8731358306/job/23956957736#step:10:8376]
 * 1.20 Java 17 / Test (module: core) 
[https://github.com/apache/flink/actions/runs/8731358306/job/23956870590#step:10:8521]

 

> IncrementalRemoteKeyedStateHandleTest.testSharedStateReRegistration failed
> --
>
> Key: FLINK-35041
> URL: https://issues.apache.org/jira/browse/FLINK-35041
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Blocker
>
> {code:java}
> Apr 08 03:22:45 03:22:45.450 [ERROR] 
> org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandleTest.testSharedStateReRegistration
>  -- Time elapsed: 0.034 s <<< FAILURE!
> Apr 08 03:22:45 org.opentest4j.AssertionFailedError: 
> Apr 08 03:22:45 
> Apr 08 03:22:45 expected: false
> Apr 08 03:22:45  but was: true
> Apr 08 03:22:45   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 08 03:22:45   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 08 03:22:45   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 08 03:22:45   at 
> org.apache.flink.runtime.state.DiscardRecordedStateObject.verifyDiscard(DiscardRecordedStateObject.java:34)
> Apr 08 03:22:45   at 
> org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandleTest.testSharedStateReRegistration(IncrementalRemoteKeyedStateHandleTest.java:211)
> Apr 08 03:22:45   at java.lang.reflect.Method.invoke(Method.java:498)
> Apr 08 03:22:45   at 
> java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> Apr 08 03:22:45   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> Apr 08 03:22:45   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> Apr 08 03:22:45   at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> Apr 08 03:22:45   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> {code}
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58782&view=logs&j=77a9d8e1-d610-59b3-fc2a-4766541e0e33&t=125e07e7-8de0-5c6c-a541-a567415af3ef&l=9238]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35115][Connectors/Kinesis] Allow kinesis consumer to snapshotState after operator had been cancelled [flink-connector-aws]

2024-04-18 Thread via GitHub


hlteoh37 commented on code in PR #138:
URL: 
https://github.com/apache/flink-connector-aws/pull/138#discussion_r1570320954


##
flink-connector-aws/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisConsumer.java:
##
@@ -149,6 +149,7 @@ public class FlinkKinesisConsumer extends 
RichParallelSourceFunction
 sequenceNumsToRestore;
 
 private volatile boolean running = true;
+private volatile boolean closed = false;

Review Comment:
   Can we add a doc to explain difference between `running` and `closed`?
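   One possible wording, based only on what the diff and the PR title suggest 
(`running` is cleared by `cancel()`, `closed` is set by `close()`), so treat it 
as a hedged suggestion rather than final text:

   ```java
   /**
    * False once the source has been cancelled; the run loop checks this flag
    * and stops emitting records.
    */
   private volatile boolean running = true;

   /**
    * True once close() has released the fetcher's resources; snapshotState()
    * may still be invoked after cancellation, but must not touch the fetcher
    * when this flag is set.
    */
   private volatile boolean closed = false;
   ```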



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34227) Job doesn't disconnect from ResourceManager

2024-04-18 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838548#comment-17838548
 ] 

Ryan Skraba commented on FLINK-34227:
-

1.19 AdaptiveScheduler / Test (module: table) 
https://github.com/apache/flink/actions/runs/8731358221/job/23956907827#step:10:12482

> Job doesn't disconnect from ResourceManager
> ---
>
> Key: FLINK-34227
> URL: https://issues.apache.org/jira/browse/FLINK-34227
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, pull-request-available, test-stability
> Attachments: FLINK-34227.7e7d69daebb438b8d03b7392c9c55115.log, 
> FLINK-34227.log
>
>
> https://github.com/XComp/flink/actions/runs/7634987973/job/20800205972#step:10:14557
> {code}
> [...]
> "main" #1 prio=5 os_prio=0 tid=0x7f4b7000 nid=0x24ec0 waiting on 
> condition [0x7fccce1eb000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xbdd52618> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2131)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2099)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2077)
>   at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:876)
>   at 
> org.apache.flink.table.planner.runtime.stream.sql.WindowDistinctAggregateITCase.testHopWindow_Cube(WindowDistinctAggregateITCase.scala:550)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28440) EventTimeWindowCheckpointingITCase failed with restore

2024-04-18 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838553#comment-17838553
 ] 

Ryan Skraba commented on FLINK-28440:
-

1.19 Java 8 / Test (module: tests) 
[https://github.com/apache/flink/actions/runs/8731358696/job/23956855275#step:10:8099]
1.19 Java 11 / Test (module: tests) 
[https://github.com/apache/flink/actions/runs/8731358696/job/23956873835#step:10:7968]

> EventTimeWindowCheckpointingITCase failed with restore
> --
>
> Key: FLINK-28440
> URL: https://issues.apache.org/jira/browse/FLINK-28440
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.16.0, 1.17.0, 1.18.0, 1.19.0
>Reporter: Huang Xingbo
>Assignee: Yanfei Lei
>Priority: Critical
>  Labels: auto-deprioritized-critical, pull-request-available, 
> stale-assigned, test-stability
> Fix For: 1.20.0
>
> Attachments: image-2023-02-01-00-51-54-506.png, 
> image-2023-02-01-01-10-01-521.png, image-2023-02-01-01-19-12-182.png, 
> image-2023-02-01-16-47-23-756.png, image-2023-02-01-16-57-43-889.png, 
> image-2023-02-02-10-52-56-599.png, image-2023-02-03-10-09-07-586.png, 
> image-2023-02-03-12-03-16-155.png, image-2023-02-03-12-03-56-614.png
>
>
> {code:java}
> Caused by: java.lang.Exception: Exception while creating 
> StreamOperatorStateContext.
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:256)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:268)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:722)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:698)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:665)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkException: Could not restore keyed 
> state backend for WindowOperator_0a448493b4782967b150582570326227_(2/4) from 
> any of the 1 provided restore options.
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:353)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:165)
>   ... 11 more
> Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: 
> /tmp/junit1835099326935900400/junit1113650082510421526/52ee65b7-033f-4429-8ddd-adbe85e27ced
>  (No such file or directory)
>   at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
>   at 
> org.apache.flink.runtime.state.changelog.StateChangelogHandleStreamHandleReader$1.advance(StateChangelogHandleStreamHandleReader.java:87)
>   at 
> org.apache.flink.runtime.state.changelog.StateChangelogHandleStreamHandleReader$1.hasNext(StateChangelogHandleStreamHandleReader.java:69)
>   at 
> org.apache.flink.state.changelog.restore.ChangelogBackendRestoreOperation.readBackendHandle(ChangelogBackendRestoreOperation.java:96)
>   at 
> org.apache.flink.state.changelog.restore.ChangelogBackendRestoreOperation.restore(ChangelogBackendRestoreOperation.java:75)
>   at 
> org.apache.flink.state.changelog.ChangelogStateBackend.restore(ChangelogStateBackend.java:92)
>   at 
> org.apache.flink.state.changelog.AbstractChangelogStateBackend.createKeyedStateBackend(AbstractChangelogStateBackend.java:136)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:336)
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRes

Re: [PR] [FLINK-35135][Connectors/Google Cloud PubSub] Drop support for Flink 1.17 [flink-connector-gcp-pubsub]

2024-04-18 Thread via GitHub


boring-cyborg[bot] commented on PR #25:
URL: 
https://github.com/apache/flink-connector-gcp-pubsub/pull/25#issuecomment-2063392255

   Awesome work, congrats on your first merged pull request!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35135][Connectors/Google Cloud PubSub] Drop support for Flink 1.17 [flink-connector-gcp-pubsub]

2024-04-18 Thread via GitHub


dannycranmer commented on code in PR #25:
URL: 
https://github.com/apache/flink-connector-gcp-pubsub/pull/25#discussion_r1570324100


##
.github/workflows/push_pr.yml:
##
@@ -28,11 +28,9 @@ jobs:
   compile_and_test:
 strategy:
   matrix:
-  flink: [ 1.17-SNAPSHOT ]
-  jdk: [ '8, 11' ]
+  flink: [ 1.18-SNAPSHOT ]

Review Comment:
   I am not sure; I agree that makes sense, though.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35135][Connectors/Google Cloud PubSub] Drop support for Flink 1.17 [flink-connector-gcp-pubsub]

2024-04-18 Thread via GitHub


dannycranmer merged PR #25:
URL: https://github.com/apache/flink-connector-gcp-pubsub/pull/25


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35124) Connector Release Fails to run Checkstyle

2024-04-18 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838558#comment-17838558
 ] 

Danny Cranmer commented on FLINK-35124:
---

>  In the case of regular connectors it was containing ci and maven subdirs


I think this is ok, since it is _source code_ (configuration) within the git 
repository after all

> Connector Release Fails to run Checkstyle
> -
>
> Key: FLINK-35124
> URL: https://issues.apache.org/jira/browse/FLINK-35124
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> During a release of the AWS connectors the build was failing at the 
> \{{./tools/releasing/shared/stage_jars.sh}} step due to a checkstyle error.
>  
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:3.1.2:check (validate) on 
> project flink-connector-aws: Failed during checkstyle execution: Unable to 
> find suppressions file at location: /tools/maven/suppressions.xml: Could not 
> find resource '/tools/maven/suppressions.xml'. -> [Help 1] {code}
>  
> Looks like it is caused by this 
> [https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35136] Bump connector version to 4.0, adapt CI workflows [flink-connector-hbase]

2024-04-18 Thread via GitHub


ferenc-csaky commented on PR #46:
URL: 
https://github.com/apache/flink-connector-hbase/pull/46#issuecomment-2063424524

   @dannycranmer if you have some time can you trigger a CI run? Thanks in 
advance!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34915][table] Complete `DESCRIBE CATALOG` syntax [flink]

2024-04-18 Thread via GitHub


LadyForest commented on code in PR #24630:
URL: https://github.com/apache/flink/pull/24630#discussion_r1570354008


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/DescribeCatalogOperation.java:
##
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.operations;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.api.internal.TableResultInternal;
+import org.apache.flink.table.catalog.CatalogDescriptor;
+import org.apache.flink.table.catalog.CommonCatalogOptions;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.utils.EncodingUtils;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.flink.table.api.internal.TableResultUtils.buildTableResult;
+
+/** Operation to describe a DESCRIBE CATALOG catalog_name statement. */
+@Internal
+public class DescribeCatalogOperation implements Operation, 
ExecutableOperation {
+
+private final String catalogName;
+private final boolean isExtended;
+
+public DescribeCatalogOperation(String catalogName, boolean isExtended) {
+this.catalogName = catalogName;
+this.isExtended = isExtended;
+}
+
+public String getCatalogName() {
+return catalogName;
+}
+
+public boolean isExtended() {
+return isExtended;
+}
+
+@Override
+public String asSummaryString() {
+Map<String, Object> params = new LinkedHashMap<>();
+params.put("identifier", catalogName);
+params.put("isExtended", isExtended);
+return OperationUtils.formatWithChildren(
+"DESCRIBE CATALOG", params, Collections.emptyList(), 
Operation::asSummaryString);
+}
+
+@Override
+public TableResultInternal execute(Context ctx) {
+CatalogDescriptor catalogDescriptor =
+ctx.getCatalogManager()
+.getCatalogDescriptor(catalogName)
+.orElseThrow(
+() ->
+new ValidationException(
+String.format(
+"Cannot obtain 
metadata information from Catalog %s.",
+catalogName)));
+Map<String, String> properties = catalogDescriptor.getConfiguration().toMap();
+List<List<Object>> rows =
+new ArrayList<>(
+Arrays.asList(
+Arrays.asList("Name", catalogName),
+Arrays.asList(
+"Type",
+properties.getOrDefault(
+
CommonCatalogOptions.CATALOG_TYPE.key(), "")),
+Arrays.asList("Comment", "") // TODO: retain 
for future needs
+));
+if (isExtended) {
+rows.add(Arrays.asList("Properties", 
convertPropertiesToString(properties)));
+}
+
+return buildTableResult(
+Arrays.asList("catalog_description_item", 
"catalog_description_value")
+.toArray(new String[0]),
+Arrays.asList(DataTypes.STRING(), 
DataTypes.STRING()).toArray(new DataType[0]),
+rows.stream().map(List::toArray).toArray(Object[][]::new));
+}
+
+private String convertPropertiesToString(Map<String, String> map) {
+StringBuilder stringBuilder = new StringBuilder();
+for (Map.Entry<String, String> entry : map.entrySet()) {
+stringBuilder.append(
+String.format(
+"('%s','%s'), ",
+EncodingUtils.escapeSingleQuotes(entry.getKey()),
+
EncodingUtils.escapeSingleQuotes(entry.getValue(;
+}
+// remove the last unnecessary comma and space
+ 
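
For what it's worth, the trailing ", " that the quoted helper has to strip 
afterwards can be avoided by joining the entries instead. A small sketch of 
that alternative (same escaping via EncodingUtils, just built with 
Collectors.joining), purely as an illustration and not the PR's final code:

```java
import java.util.Map;
import java.util.stream.Collectors;

import org.apache.flink.table.utils.EncodingUtils;

// Builds "('k1','v1'), ('k2','v2')" without a trailing separator to remove.
static String convertPropertiesToString(Map<String, String> map) {
    return map.entrySet().stream()
            .map(
                    entry ->
                            String.format(
                                    "('%s','%s')",
                                    EncodingUtils.escapeSingleQuotes(entry.getKey()),
                                    EncodingUtils.escapeSingleQuotes(entry.getValue())))
            .collect(Collectors.joining(", "));
}
```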

Re: [PR] [FLINK-34915][table] Complete `DESCRIBE CATALOG` syntax [flink]

2024-04-18 Thread via GitHub


LadyForest commented on code in PR #24630:
URL: https://github.com/apache/flink/pull/24630#discussion_r1570355268


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/DescribeCatalogOperation.java:
##
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.operations;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.api.internal.TableResultInternal;
+import org.apache.flink.table.catalog.CatalogDescriptor;
+import org.apache.flink.table.catalog.CommonCatalogOptions;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.utils.EncodingUtils;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.flink.table.api.internal.TableResultUtils.buildTableResult;
+
+/** Operation to describe a DESCRIBE CATALOG catalog_name statement. */
+@Internal
+public class DescribeCatalogOperation implements Operation, 
ExecutableOperation {
+
+private final String catalogName;
+private final boolean isExtended;
+
+public DescribeCatalogOperation(String catalogName, boolean isExtended) {
+this.catalogName = catalogName;
+this.isExtended = isExtended;
+}
+
+public String getCatalogName() {
+return catalogName;
+}
+
+public boolean isExtended() {
+return isExtended;
+}
+
+@Override
+public String asSummaryString() {
+Map<String, Object> params = new LinkedHashMap<>();
+params.put("identifier", catalogName);
+params.put("isExtended", isExtended);
+return OperationUtils.formatWithChildren(
+"DESCRIBE CATALOG", params, Collections.emptyList(), 
Operation::asSummaryString);
+}
+
+@Override
+public TableResultInternal execute(Context ctx) {
+CatalogDescriptor catalogDescriptor =
+ctx.getCatalogManager()
+.getCatalogDescriptor(catalogName)
+.orElseThrow(
+() ->
+new ValidationException(
+String.format(
+"Cannot obtain 
metadata information from Catalog %s.",
+catalogName)));
+Map<String, String> properties = catalogDescriptor.getConfiguration().toMap();
+List<List<Object>> rows =
+new ArrayList<>(
+Arrays.asList(
+Arrays.asList("Name", catalogName),
+Arrays.asList(
+"Type",
+properties.getOrDefault(
+
CommonCatalogOptions.CATALOG_TYPE.key(), "")),
+Arrays.asList("Comment", "") // TODO: retain 
for future needs
+));
+if (isExtended) {
+rows.add(Arrays.asList("Properties", 
convertPropertiesToString(properties)));
+}
+
+return buildTableResult(
+Arrays.asList("catalog_description_item", 
"catalog_description_value")
+.toArray(new String[0]),
+Arrays.asList(DataTypes.STRING(), 
DataTypes.STRING()).toArray(new DataType[0]),
+rows.stream().map(List::toArray).toArray(Object[][]::new));
+}
+
+private String convertPropertiesToString(Map<String, String> map) {
+StringBuilder stringBuilder = new StringBuilder();
+for (Map.Entry<String, String> entry : map.entrySet()) {
+stringBuilder.append(
+String.format(
+"('%s','%s'), ",
+EncodingUtils.escapeSingleQuotes(entry.getKey()),
+
EncodingUtils.escapeSingleQuotes(entry.getValue(;
+}
+// remove the last unnecessary comma and space
+ 

[jira] [Commented] (FLINK-35124) Connector Release Fails to run Checkstyle

2024-04-18 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838572#comment-17838572
 ] 

Etienne Chauchot commented on FLINK-35124:
--

Ok fair enough to put back {_}ci{_},  _maven_ and _releasing_ dirs in the 
pristine source. But did you find the reason why the suppressions.xml path ends 
up being /tools/maven/suppressions.xml and not tools/maven/suppressions.xml ?

 

 

> Connector Release Fails to run Checkstyle
> -
>
> Key: FLINK-35124
> URL: https://issues.apache.org/jira/browse/FLINK-35124
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> During a release of the AWS connectors the build was failing at the 
> \{{./tools/releasing/shared/stage_jars.sh}} step due to a checkstyle error.
>  
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:3.1.2:check (validate) on 
> project flink-connector-aws: Failed during checkstyle execution: Unable to 
> find suppressions file at location: /tools/maven/suppressions.xml: Could not 
> find resource '/tools/maven/suppressions.xml'. -> [Help 1] {code}
>  
> Looks like it is caused by this 
> [https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-35124) Connector Release Fails to run Checkstyle

2024-04-18 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838572#comment-17838572
 ] 

Etienne Chauchot edited comment on FLINK-35124 at 4/18/24 9:45 AM:
---

Ok fair enough to put back {_}ci{_},  _maven_ and _releasing_ dirs in the 
pristine source and exclude only shared (because it refers to an external 
repo). But did you find the reason why the suppressions.xml path ends up being 
/tools/maven/suppressions.xml and not tools/maven/suppressions.xml ?

 

 


was (Author: echauchot):
Ok fair enough to put back {_}ci{_},  _maven_ and _releasing_ dirs in the 
pristine source. But did you find the reason why the suppressions.xml path ends 
up being /tools/maven/suppressions.xml and not tools/maven/suppressions.xml ?

 

 

> Connector Release Fails to run Checkstyle
> -
>
> Key: FLINK-35124
> URL: https://issues.apache.org/jira/browse/FLINK-35124
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> During a release of the AWS connectors the build was failing at the 
> \{{./tools/releasing/shared/stage_jars.sh}} step due to a checkstyle error.
>  
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:3.1.2:check (validate) on 
> project flink-connector-aws: Failed during checkstyle execution: Unable to 
> find suppressions file at location: /tools/maven/suppressions.xml: Could not 
> find resource '/tools/maven/suppressions.xml'. -> [Help 1] {code}
>  
> Looks like it is caused by this 
> [https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35124] Include Maven build configuration in the pristine source clone [flink-connector-shared-utils]

2024-04-18 Thread via GitHub


dannycranmer commented on PR #40:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/40#issuecomment-2063479840

   >  But did you find the reason why the suppressions.xml path ends up being 
/tools/maven/suppressions.xml and not tools/maven/suppressions.xml ?
   
   No, but I think this is a red herring. It resolves the correct file even 
though there is a `/` at the start. The Flink config is the same 
https://github.com/apache/flink/blob/master/pom.xml#L2196. If it were looking 
there as an absolute path, it would not find it on my machine!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35124] Include Maven build configuration in the pristine source clone [flink-connector-shared-utils]

2024-04-18 Thread via GitHub


dannycranmer merged PR #40:
URL: https://github.com/apache/flink-connector-shared-utils/pull/40


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35127) CDC ValuesDataSourceITCase crashed due to OutOfMemoryError

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren updated FLINK-35127:
--
Priority: Blocker  (was: Major)

> CDC ValuesDataSourceITCase crashed due to OutOfMemoryError
> --
>
> Key: FLINK-35127
> URL: https://issues.apache.org/jira/browse/FLINK-35127
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Jiabao Sun
>Assignee: LvYanquan
>Priority: Blocker
>  Labels: test-stability
> Fix For: cdc-3.1.0
>
>
> {code}
> [INFO] Running 
> org.apache.flink.cdc.connectors.values.source.ValuesDataSourceITCase
> Error: Exception in thread "surefire-forkedjvm-command-thread" 
> java.lang.OutOfMemoryError: Java heap space
> Error:  
> Error:  Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "taskmanager_4-main-scheduler-thread-2"
> Error:  
> Error:  Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "System Time Trigger for Source: values 
> (1/4)#0"
> {code}
> https://github.com/apache/flink-cdc/actions/runs/8698450229/job/23858750352?pr=3221#step:6:1949



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-35124) Connector Release Fails to run Checkstyle

2024-04-18 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838575#comment-17838575
 ] 

Danny Cranmer edited comment on FLINK-35124 at 4/18/24 9:54 AM:


Merged commit 
[{{c411561}}|https://github.com/apache/flink-connector-shared-utils/commit/c4115618085ac046033368e8e3a7eee59874608f]
 into apache:release_utils 


was (Author: dannycranmer):
Merged commit c411561 into apache:release_utils 

> Connector Release Fails to run Checkstyle
> -
>
> Key: FLINK-35124
> URL: https://issues.apache.org/jira/browse/FLINK-35124
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> During a release of the AWS connectors the build was failing at the 
> \{{./tools/releasing/shared/stage_jars.sh}} step due to a checkstyle error.
>  
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:3.1.2:check (validate) on 
> project flink-connector-aws: Failed during checkstyle execution: Unable to 
> find suppressions file at location: /tools/maven/suppressions.xml: Could not 
> find resource '/tools/maven/suppressions.xml'. -> [Help 1] {code}
>  
> Looks like it is caused by this 
> [https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35124) Connector Release Fails to run Checkstyle

2024-04-18 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838575#comment-17838575
 ] 

Danny Cranmer commented on FLINK-35124:
---

Merged commit c411561 into apache:release_utils 

> Connector Release Fails to run Checkstyle
> -
>
> Key: FLINK-35124
> URL: https://issues.apache.org/jira/browse/FLINK-35124
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> During a release of the AWS connectors the build was failing at the 
> \{{./tools/releasing/shared/stage_jars.sh}} step due to a checkstyle error.
>  
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:3.1.2:check (validate) on 
> project flink-connector-aws: Failed during checkstyle execution: Unable to 
> find suppressions file at location: /tools/maven/suppressions.xml: Could not 
> find resource '/tools/maven/suppressions.xml'. -> [Help 1] {code}
>  
> Looks like it is caused by this 
> [https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35127) CDC ValuesDataSourceITCase crashed due to OutOfMemoryError

2024-04-18 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838576#comment-17838576
 ] 

Qingsheng Ren commented on FLINK-35127:
---

I increased the priority to blocker as many PRs are waiting for CI results.

> CDC ValuesDataSourceITCase crashed due to OutOfMemoryError
> --
>
> Key: FLINK-35127
> URL: https://issues.apache.org/jira/browse/FLINK-35127
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Jiabao Sun
>Assignee: LvYanquan
>Priority: Blocker
>  Labels: test-stability
> Fix For: cdc-3.1.0
>
>
> {code}
> [INFO] Running 
> org.apache.flink.cdc.connectors.values.source.ValuesDataSourceITCase
> Error: Exception in thread "surefire-forkedjvm-command-thread" 
> java.lang.OutOfMemoryError: Java heap space
> Error:  
> Error:  Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "taskmanager_4-main-scheduler-thread-2"
> Error:  
> Error:  Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "System Time Trigger for Source: values 
> (1/4)#0"
> {code}
> https://github.com/apache/flink-cdc/actions/runs/8698450229/job/23858750352?pr=3221#step:6:1949



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35129) Postgres source commits the offset after every multiple checkpoint cycles.

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35129:
-

Assignee: Muhammet Orazov

> Postgres source commits the offset after every multiple checkpoint cycles.
> --
>
> Key: FLINK-35129
> URL: https://issues.apache.org/jira/browse/FLINK-35129
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Hongshun Wang
>Assignee: Muhammet Orazov
>Priority: Major
>
> After entering the stream phase, the offset consumed by the global slot is 
> committed upon the completion of each checkpoint, so that log files can be 
> recycled and do not accumulate until disk space runs out.
> However, the job can then only restart from the latest checkpoint or 
> savepoint; if it is restored from an earlier state, the WAL may already have 
> been recycled.
>  
> The proposed solution is to commit the offset only after every N checkpoint 
> cycles. The number of cycles is determined by a connector option, with a 
> default value of 3.
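
A minimal sketch of the proposed behaviour, assuming a hypothetical commit-interval option and a commitOffset() callback; all names below are illustrative rather than the Postgres CDC connector's actual API:

{code:java}
// Illustrative sketch only: the option, field names and commitOffset()
// are hypothetical, not the Postgres CDC connector's actual API.
public class DeferredOffsetCommitter {

    private final int commitEveryNCheckpoints; // proposed connector option, default 3
    private int completedCheckpoints = 0;
    private long pendingOffset = -1L;

    public DeferredOffsetCommitter(int commitEveryNCheckpoints) {
        this.commitEveryNCheckpoints = commitEveryNCheckpoints;
    }

    /** Called when a checkpoint completes, with the offset that checkpoint covers. */
    public void onCheckpointComplete(long checkpointedOffset) {
        pendingOffset = checkpointedOffset;
        completedCheckpoints++;
        if (completedCheckpoints >= commitEveryNCheckpoints) {
            commitOffset(pendingOffset);
            completedCheckpoints = 0;
        }
    }

    private void commitOffset(long offset) {
        // In the real connector this would confirm the flushed LSN on the
        // replication slot so that older WAL segments can be recycled.
    }
}
{code}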



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35128) Re-calculate the starting change log offset after the new table added

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35128:
-

Assignee: Hongshun Wang

> Re-calculate the starting change log offset after the new table added
> -
>
> Key: FLINK-35128
> URL: https://issues.apache.org/jira/browse/FLINK-35128
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.0
>
>
> In MySQL CDC, the starting binlog offset is re-calculated after a new table is 
> added in MySqlBinlogSplit#appendFinishedSplitInfos, but the same action is 
> missing in StreamSplit#appendFinishedSplitInfos. This can cause data loss if 
> any newly added table's snapshot split has a smaller high watermark.
>  
> Some unstable tests occur because of this.
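
A rough sketch of the missing re-calculation, assuming each finished snapshot split exposes its high watermark as a comparable offset; the class and method names are illustrative, not the CDC internals:

{code:java}
// Illustrative only: high watermarks are simplified to plain longs;
// the point is that the stream split must start from the minimum high watermark.
import java.util.List;

final class StartingOffsetCalculator {

    /**
     * If a newly added table's snapshot split finished with a high watermark
     * smaller than the current starting offset, starting from the current
     * offset would skip that table's changes, so take the minimum.
     */
    static long recalculateStartingOffset(
            long currentStartingOffset, List<Long> finishedSplitHighWatermarks) {
        long minHighWatermark =
                finishedSplitHighWatermarks.stream()
                        .mapToLong(Long::longValue)
                        .min()
                        .orElse(currentStartingOffset);
        return Math.min(currentStartingOffset, minHighWatermark);
    }
}
{code}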



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35127) CDC ValuesDataSourceITCase crashed due to OutOfMemoryError

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35127:
-

Assignee: LvYanquan

> CDC ValuesDataSourceITCase crashed due to OutOfMemoryError
> --
>
> Key: FLINK-35127
> URL: https://issues.apache.org/jira/browse/FLINK-35127
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Jiabao Sun
>Assignee: LvYanquan
>Priority: Major
>  Labels: test-stability
> Fix For: cdc-3.1.0
>
>
> {code}
> [INFO] Running 
> org.apache.flink.cdc.connectors.values.source.ValuesDataSourceITCase
> Error: Exception in thread "surefire-forkedjvm-command-thread" 
> java.lang.OutOfMemoryError: Java heap space
> Error:  
> Error:  Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "taskmanager_4-main-scheduler-thread-2"
> Error:  
> Error:  Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "System Time Trigger for Source: values 
> (1/4)#0"
> {code}
> https://github.com/apache/flink-cdc/actions/runs/8698450229/job/23858750352?pr=3221#step:6:1949



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35143) Expose newly added tables capture in mysql pipeline connector

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35143:
-

Assignee: Hongshun Wang

> Expose newly added tables capture in mysql pipeline connector
> -
>
> Key: FLINK-35143
> URL: https://issues.apache.org/jira/browse/FLINK-35143
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Major
>
> Currently, the MySQL pipeline connector does not yet allow capturing newly 
> added tables.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35120) Add Doris Pipeline connector integration test cases

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35120:
-

Assignee: Xiqian YU

> Add Doris Pipeline connector integration test cases
> ---
>
> Key: FLINK-35120
> URL: https://issues.apache.org/jira/browse/FLINK-35120
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xiqian YU
>Assignee: Xiqian YU
>Priority: Minor
>  Labels: pull-request-available
>
> Currently, the Flink CDC Doris pipeline connector has very limited test cases 
> (they only cover row conversion). Adding an ITCase that tests its data 
> pipeline and metadata applier should help improve the connector's reliability.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35155) Introduce TableRuntimeException

2024-04-18 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-35155:


 Summary: Introduce TableRuntimeException
 Key: FLINK-35155
 URL: https://issues.apache.org/jira/browse/FLINK-35155
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Runtime
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


The `throwException` internal function throws a {{RuntimeException}}. It would 
be nice to have a specific kind of exception thrown from there, so that it's 
easier to classify those.
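
A minimal sketch of what such an exception could look like; the package, class name and constructors are assumptions rather than the final API:

{code:java}
// Sketch only: package and constructors are illustrative, not the final API.
package org.apache.flink.table.api;

/**
 * A dedicated runtime exception for failures raised from generated code,
 * so callers can classify table-runtime errors instead of catching a bare
 * RuntimeException.
 */
public class TableRuntimeException extends RuntimeException {

    public TableRuntimeException(String message) {
        super(message);
    }

    public TableRuntimeException(String message, Throwable cause) {
        super(message, cause);
    }
}
{code}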



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35102) Incorrect Type mapping for Flink CDC Doris connector

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35102:
-

Assignee: Xiqian YU

> Incorrect Type mapping for Flink CDC Doris connector
> ---
>
> Key: FLINK-35102
> URL: https://issues.apache.org/jira/browse/FLINK-35102
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Xiqian YU
>Assignee: Xiqian YU
>Priority: Major
>  Labels: pull-request-available
>
> According to the Flink CDC Doris connector docs, CHAR and VARCHAR lengths are 
> multiplied by 3, since Doris stores strings in variable-length UTF-8 encoding.
> |CHAR(n)|CHAR(n*3)|In Doris, strings are stored in UTF-8 encoding, so English 
> characters occupy 1 byte and Chinese characters occupy 3 bytes. The length 
> here is multiplied by 3. The maximum length of CHAR is 255. Once exceeded, it 
> will automatically be converted to VARCHAR type.|
> |VARCHAR(n)|VARCHAR(n*3)|Same as above. The length here is multiplied by 3. 
> The maximum length of VARCHAR is 65533. Once exceeded, it will automatically 
> be converted to STRING type.|
> However, the Doris connector currently maps `CHAR(n)` to `CHAR(n)` and 
> `VARCHAR(n)` to `VARCHAR(n * 4)`, which is inconsistent with the specification 
> in the docs.
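
A small sketch of the mapping described in the quoted docs (length multiplied by 3, falling back to VARCHAR/STRING once the Doris limits are exceeded); class and method names are illustrative, not the connector's actual code:

{code:java}
// Illustrative sketch of the documented length mapping; not the connector's code.
final class DorisCharTypeMapping {

    private static final int MAX_CHAR_LENGTH = 255;       // Doris CHAR limit
    private static final int MAX_VARCHAR_LENGTH = 65533;  // Doris VARCHAR limit

    static String mapChar(int n) {
        int dorisLength = n * 3; // UTF-8: a Chinese character can take 3 bytes
        return dorisLength <= MAX_CHAR_LENGTH ? "CHAR(" + dorisLength + ")" : mapVarchar(n);
    }

    static String mapVarchar(int n) {
        int dorisLength = n * 3;
        return dorisLength <= MAX_VARCHAR_LENGTH ? "VARCHAR(" + dorisLength + ")" : "STRING";
    }
}
{code}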



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35077) Add package license check for Flink CDC modules.

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35077:
-

Assignee: Xiqian YU

> Add package license check for Flink CDC modules.
> 
>
> Key: FLINK-35077
> URL: https://issues.apache.org/jira/browse/FLINK-35077
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xiqian YU
>Assignee: Xiqian YU
>Priority: Minor
>  Labels: pull-request-available
>
> Currently, the Flink project has CI scripts that check whether dependencies 
> with incompatible licenses are introduced.
> The Flink CDC module relies heavily on external libraries (especially in its 
> connectors), so running similar checks in every CI run would help prevent 
> developers from introducing questionable dependencies by accident.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35144) Support various sources sync for FlinkCDC in one pipeline

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren closed FLINK-35144.
-
Resolution: Duplicate

> Support various sources sync for FlinkCDC in one pipeline
> -
>
> Key: FLINK-35144
> URL: https://issues.apache.org/jira/browse/FLINK-35144
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Congxian Qiu
>Priority: Major
>
> Currently, a Flink CDC pipeline can only support a single source, so we need 
> to start multiple pipelines when there are several sources. For upstreams 
> that use sharding, we need to sync multiple sources in one pipeline, which 
> the current design cannot do.
> This issue proposes supporting the sync of multiple sources in one pipeline.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35072) Doris pipeline sink does not support applying AlterColumnTypeEvent

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35072:
-

Assignee: Xiqian YU

> Doris pipeline sink does not support applying AlterColumnTypeEvent
> --
>
> Key: FLINK-35072
> URL: https://issues.apache.org/jira/browse/FLINK-35072
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xiqian YU
>Assignee: Xiqian YU
>Priority: Minor
>  Labels: pull-request-available
>
> According to [Doris 
> documentation|https://doris.apache.org/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN/],
>  altering column types dynamically is supported (via ALTER TABLE ... MODIFY 
> COLUMN statement) when lossless conversion is available. However, the Doris 
> pipeline connector currently has no support for AlterColumnTypeEvent and 
> always raises a RuntimeException.
> It would be convenient for users to be able to sync compatible type 
> conversions, and this could be implemented by extending Doris' 
> SchemaChangeManager helper class.
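
For illustration, the Doris DDL such a change would eventually issue can be sketched as below; the helper is hypothetical and only shows the statement shape from the linked docs, not the real SchemaChangeManager API:

{code:java}
// Illustrative only: the statement shape follows the Doris
// "ALTER TABLE ... MODIFY COLUMN" documentation linked above; this helper is
// hypothetical and not the connector's real SchemaChangeManager API.
final class AlterColumnTypeDdl {

    /**
     * e.g. buildModifyColumnDdl("ods", "orders", "amount", "DECIMAL(18,4)")
     * returns: ALTER TABLE `ods`.`orders` MODIFY COLUMN `amount` DECIMAL(18,4)
     */
    static String buildModifyColumnDdl(
            String database, String table, String column, String newDorisType) {
        return String.format(
                "ALTER TABLE `%s`.`%s` MODIFY COLUMN `%s` %s",
                database, table, column, newDorisType);
    }
}
{code}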



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35092) Add integrated test for Doris / Starrocks sink pipeline connector

2024-04-18 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35092:
-

Assignee: Xiqian YU

> Add integrated test for Doris / Starrocks sink pipeline connector
> -
>
> Key: FLINK-35092
> URL: https://issues.apache.org/jira/browse/FLINK-35092
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xiqian YU
>Assignee: Xiqian YU
>Priority: Minor
>  Labels: pull-request-available
>
> Currently, no integration tests are applied to the Doris pipeline connector 
> (there is only one DorisRowConverterTest case for now). Adding ITCases would 
> improve the Doris connector's code quality and reliability.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35127] remove HybridSource to avoid CI failure. [flink-cdc]

2024-04-18 Thread via GitHub


lvyanquan opened a new pull request, #3237:
URL: https://github.com/apache/flink-cdc/pull/3237

   At present, CI testing often fails with OutOfMemoryError in the Values source 
tests, which may be caused by the use of HybridSource. Since this blocks other 
PRs, remove it first to restore CI.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35127] remove HybridSource to avoid CI failure. [flink-cdc]

2024-04-18 Thread via GitHub


lvyanquan commented on PR #3237:
URL: https://github.com/apache/flink-cdc/pull/3237#issuecomment-2063493735

   @PatrickRen @yuxiqian PTAL.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35127) CDC ValuesDataSourceITCase crashed due to OutOfMemoryError

2024-04-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35127:
---
Labels: pull-request-available test-stability  (was: test-stability)

> CDC ValuesDataSourceITCase crashed due to OutOfMemoryError
> --
>
> Key: FLINK-35127
> URL: https://issues.apache.org/jira/browse/FLINK-35127
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Jiabao Sun
>Assignee: LvYanquan
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: cdc-3.1.0
>
>
> {code}
> [INFO] Running 
> org.apache.flink.cdc.connectors.values.source.ValuesDataSourceITCase
> Error: Exception in thread "surefire-forkedjvm-command-thread" 
> java.lang.OutOfMemoryError: Java heap space
> Error:  
> Error:  Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "taskmanager_4-main-scheduler-thread-2"
> Error:  
> Error:  Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "System Time Trigger for Source: values 
> (1/4)#0"
> {code}
> https://github.com/apache/flink-cdc/actions/runs/8698450229/job/23858750352?pr=3221#step:6:1949



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34444] Initial implementation of JM operator metric rest api [flink]

2024-04-18 Thread via GitHub


afedulov commented on code in PR #24564:
URL: https://github.com/apache/flink/pull/24564#discussion_r1568653505


##
docs/static/generated/rest_v1_dispatcher.yml:
##
@@ -1089,6 +1089,37 @@ paths:
 application/json:
   schema:
 $ref: '#/components/schemas/JobVertexBackPressureInfo'
+  /jobs/{jobid}/vertices/{vertexid}/coordinator-metrics:
+get:
+  description: Provides access to job manager operator metrics

Review Comment:
   I see, thanks for the clarification. I believe the main issue with the 
original proposal was that it also implied that the user would need to supply 
operator ID (as reflected in the FLIP's rejected approaches 
`/jobs/<jobid>/vertices/<vertexid>/operators/<operatorid>/metrics`). This would 
necessitate an additional step to identify which operator serves as the 
coordinator.
   
   It seems the challenge of distinguishing between the coordinator's metrics 
and other types of JobManager operators that may emerge in the future remains.  
Suppose we consolidate everything under the `/jm-operator-metrics` endpoint. 
When focusing on the coordinator's metrics for autoscaling purposes, how will 
API users distinguish these from other metrics retrieved from 
`/jm-operator-metrics`? Can we be sure that the metrics of interest are always 
uniquely identified by their names, preventing any overlap with those emitted 
by other operators? 
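
   For reference, a minimal sketch of how an API user (for example the autoscaler) might call the endpoint from the diff above; the REST address, job id and vertex id are placeholders, and the response format is not assumed here:

```java
// Sketch only: REST address, job id and vertex id are placeholders; the
// endpoint path follows the rest_v1_dispatcher.yml diff in this PR.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CoordinatorMetricsQuery {
    public static void main(String[] args) throws Exception {
        String restAddress = "http://localhost:8081";              // placeholder JobManager REST endpoint
        String jobId = "0000000000000000000000000000000a";         // placeholder job id
        String vertexId = "000000000000000000000000000000a0";      // placeholder vertex id

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create(restAddress + "/jobs/" + jobId
                                + "/vertices/" + vertexId + "/coordinator-metrics"))
                .GET()
                .build();

        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        // Print whatever metric payload the endpoint returns.
        System.out.println(response.body());
    }
}
```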
   
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Update joining.md [flink]

2024-04-18 Thread via GitHub


TobaccoProduct opened a new pull request, #24677:
URL: https://github.com/apache/flink/pull/24677

   Document bug fixed:
missing character '>' in line 257, between 'String' and '()'
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no) no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no) no
 - The serializers: (yes / no / don't know) no
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know) no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) 
no
 - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
 - Has this request introduced new features? (yes / no) no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [docs]Problem with Case Document Format in Quickstart [flink-cdc]

2024-04-18 Thread via GitHub


ZmmBigdata commented on PR #3229:
URL: https://github.com/apache/flink-cdc/pull/3229#issuecomment-2063508038

   > @ZmmBigdata @Jiabao-Sun Issue was fixed in pr #3217 and merged.
   
   @GOODBOY008 In PR #3217, only two files were modified, while I modified 
three files.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35156) Wire new operators for async state with DataStream V2

2024-04-18 Thread Zakelly Lan (Jira)
Zakelly Lan created FLINK-35156:
---

 Summary: Wire new operators for async state with DataStream V2
 Key: FLINK-35156
 URL: https://issues.apache.org/jira/browse/FLINK-35156
 Project: Flink
  Issue Type: Sub-task
Reporter: Zakelly Lan






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35156) Wire new operators for async state with DataStream V2

2024-04-18 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan reassigned FLINK-35156:
---

Assignee: Zakelly Lan

> Wire new operators for async state with DataStream V2
> -
>
> Key: FLINK-35156
> URL: https://issues.apache.org/jira/browse/FLINK-35156
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Update joining.md [flink]

2024-04-18 Thread via GitHub


flinkbot commented on PR #24677:
URL: https://github.com/apache/flink/pull/24677#issuecomment-2063525108

   
   ## CI report:
   
   * ef29926a603bc02167158b83b8c705208018adce UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34915][table] Complete `DESCRIBE CATALOG` syntax [flink]

2024-04-18 Thread via GitHub


liyubin117 commented on code in PR #24630:
URL: https://github.com/apache/flink/pull/24630#discussion_r1570448160


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/DescribeCatalogOperation.java:
##
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.operations;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.api.internal.TableResultInternal;
+import org.apache.flink.table.catalog.CatalogDescriptor;
+import org.apache.flink.table.catalog.CommonCatalogOptions;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.utils.EncodingUtils;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.flink.table.api.internal.TableResultUtils.buildTableResult;
+
+/** Operation to describe a DESCRIBE CATALOG catalog_name statement. */
+@Internal
+public class DescribeCatalogOperation implements Operation, 
ExecutableOperation {
+
+private final String catalogName;
+private final boolean isExtended;
+
+public DescribeCatalogOperation(String catalogName, boolean isExtended) {
+this.catalogName = catalogName;
+this.isExtended = isExtended;
+}
+
+public String getCatalogName() {
+return catalogName;
+}
+
+public boolean isExtended() {
+return isExtended;
+}
+
+@Override
+public String asSummaryString() {
+Map params = new LinkedHashMap<>();
+params.put("identifier", catalogName);
+params.put("isExtended", isExtended);
+return OperationUtils.formatWithChildren(
+"DESCRIBE CATALOG", params, Collections.emptyList(), 
Operation::asSummaryString);
+}
+
+@Override
+public TableResultInternal execute(Context ctx) {
+CatalogDescriptor catalogDescriptor =
+ctx.getCatalogManager()
+.getCatalogDescriptor(catalogName)
+.orElseThrow(
+() ->
+new ValidationException(
+String.format(
+"Cannot obtain 
metadata information from Catalog %s.",
+catalogName)));
+Map properties = 
catalogDescriptor.getConfiguration().toMap();
+List> rows =
+new ArrayList<>(
+Arrays.asList(
+Arrays.asList("Name", catalogName),
+Arrays.asList(
+"Type",
+properties.getOrDefault(
+
CommonCatalogOptions.CATALOG_TYPE.key(), "")),
+Arrays.asList("Comment", "") // TODO: retain 
for future needs
+));
+if (isExtended) {
+rows.add(Arrays.asList("Properties", 
convertPropertiesToString(properties)));
+}
+
+return buildTableResult(
+Arrays.asList("catalog_description_item", 
"catalog_description_value")
+.toArray(new String[0]),
+Arrays.asList(DataTypes.STRING(), 
DataTypes.STRING()).toArray(new DataType[0]),
+rows.stream().map(List::toArray).toArray(Object[][]::new));
+}
+
+private String convertPropertiesToString(Map map) {
+StringBuilder stringBuilder = new StringBuilder();
+for (Map.Entry entry : map.entrySet()) {
+stringBuilder.append(
+String.format(
+"('%s','%s'), ",
+EncodingUtils.escapeSingleQuotes(entry.getKey()),
+
EncodingUtils.escapeSingleQuotes(entry.getValue(;
+}
+// remove the last unnecessary comma and space
+ 

[PR] Adding GCP Pub Sub Connector v3.1.0 [flink-web]

2024-04-18 Thread via GitHub


dannycranmer opened a new pull request, #736:
URL: https://github.com/apache/flink-web/pull/736

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35053) TIMESTAMP with TIME ZONE not supported by JDBC connector for Postgres

2024-04-18 Thread Pietro (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pietro updated FLINK-35053:
---
Description: 
The JDBC sink for Postgres does not support {{{}TIMESTAMP WITH TIME ZONE{}}}, 
nor {{TIMESTAMP_LTZ}} types.

Related issues: FLINK-22199, FLINK-20869
h2. Problem Explanation

A Postgres {{target_table}} has a field {{tm_tz}} of type {{timestamptz}} .
{code:sql}
-- Postgres DDL
CREATE TABLE target_table (
tm_tz TIMESTAMP WITH TIME ZONE
)
{code}
In Flink we have a table with a column of type {{{}TIMESTAMP_LTZ(6){}}}, and 
our goal is to sink it to {{{}target_table{}}}.
{code:sql}
-- Flink DDL
CREATE TABLE sink (
tm_tz TIMESTAMP_LTZ(6)
) WITH (
'connector' = 'jdbc',
'table-name' = 'target_table'
...
)
{code}
According to 
[AbstractPostgresCompatibleDialect.supportedTypes()|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/AbstractPostgresCompatibleDialect.java#L109],
 {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} is supported, while 
{{TIMESTAMP_WITH_TIME_ZONE}} is not.

However, when the converter is created via 
[AbstractJdbcRowConverter.externalConverter()|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/converter/AbstractJdbcRowConverter.java#L246],
 it throws an {{UnsupportedOperationException}} since 
{{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} is *not* among the available types, while 
[{{TIMESTAMP_WITH_TIME_ZONE}}|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/converter/AbstractJdbcRowConverter.java#L168]
 is.
{code:java}
Exception in thread "main" java.lang.UnsupportedOperationException: Unsupported 
type:TIMESTAMP_LTZ(6)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.createInternalConverter(AbstractJdbcRowConverter.java:186)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.createPrimitiveConverter(PostgresRowConverter.java:99)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.createInternalConverter(PostgresRowConverter.java:58)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.createNullableInternalConverter(AbstractJdbcRowConverter.java:118)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.<init>(AbstractJdbcRowConverter.java:68)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.<init>(PostgresRowConverter.java:47)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresDialect.getRowConverter(PostgresDialect.java:51)
at 
org.apache.flink.connector.jdbc.table.JdbcDynamicTableSource.getScanRuntimeProvider(JdbcDynamicTableSource.java:184)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.validateScanSource(DynamicSourceUtils.java:478)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.prepareDynamicSource(DynamicSourceUtils.java:161)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.convertSourceToRel(DynamicSourceUtils.java:125)
at 
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:118)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:4002)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2872)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2432)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2346)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2291)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:728)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:714)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3848)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:618)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:229)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:205)
at 
org.apache.flink.table.planner.operations.SqlNodeConvertContext.toRelRoot(SqlNodeConvertContext.java:69)
at 
org.apache.flink.table.planner.operations.converters.SqlQueryConverter.convertSqlNode(SqlQueryConverter.java:48)
at 
org.apache.flink.table.planner.operations.conve

[jira] [Updated] (FLINK-35053) TIMESTAMP with TIME ZONE not supported by JDBC connector for Postgres

2024-04-18 Thread Pietro (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pietro updated FLINK-35053:
---
Description: 
The JDBC sink for Postgres does not support {{{}TIMESTAMP WITH TIME ZONE{}}}, 
nor {{TIMESTAMP_LTZ}} types.

Related issues: FLINK-22199, FLINK-20869
h2. Problem Explanation

A Postgres {{target_table}} has a field {{tm_tz}} of type {{timestamptz}} .
{code:sql}
-- Postgres DDL
CREATE TABLE target_table (
tm_tz TIMESTAMP WITH TIME ZONE
)
{code}
In Flink we have a table with a column of type {{{}TIMESTAMP_LTZ(6){}}}, and 
our goal is to sink it to {{{}target_table{}}}.
{code:sql}
-- Flink DDL
CREATE TABLE sink (
tm_tz TIMESTAMP_LTZ(6)
) WITH (
'connector' = 'jdbc',
'table-name' = 'target_table'
...
)
{code}
According to 
[AbstractPostgresCompatibleDialect.supportedTypes()|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/AbstractPostgresCompatibleDialect.java#L109],
 {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} is supported, while 
{{TIMESTAMP_WITH_TIME_ZONE}} is not.

However, when the converter is created via 
[AbstractJdbcRowConverter.externalConverter()|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/converter/AbstractJdbcRowConverter.java#L246],
 it throws an {{UnsupportedOperationException}} since 
{{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} is *not* among the available types, while 
[{{TIMESTAMP_WITH_TIME_ZONE}}|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/converter/AbstractJdbcRowConverter.java#L168]
 is.
{code:java}
Exception in thread "main" java.lang.UnsupportedOperationException: Unsupported 
type:TIMESTAMP_LTZ(6)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.createInternalConverter(AbstractJdbcRowConverter.java:186)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.createPrimitiveConverter(PostgresRowConverter.java:99)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.createInternalConverter(PostgresRowConverter.java:58)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.createNullableInternalConverter(AbstractJdbcRowConverter.java:118)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.<init>(AbstractJdbcRowConverter.java:68)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.<init>(PostgresRowConverter.java:47)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresDialect.getRowConverter(PostgresDialect.java:51)
at 
org.apache.flink.connector.jdbc.table.JdbcDynamicTableSource.getScanRuntimeProvider(JdbcDynamicTableSource.java:184)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.validateScanSource(DynamicSourceUtils.java:478)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.prepareDynamicSource(DynamicSourceUtils.java:161)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.convertSourceToRel(DynamicSourceUtils.java:125)
at 
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:118)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:4002)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2872)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2432)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2346)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2291)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:728)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:714)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3848)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:618)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:229)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:205)
at 
org.apache.flink.table.planner.operations.SqlNodeConvertContext.toRelRoot(SqlNodeConvertContext.java:69)
at 
org.apache.flink.table.planner.operations.converters.SqlQueryConverter.convertSqlNode(SqlQueryConverter.java:48)
at 
org.apache.flink.table.planner.operations.conve

[jira] [Resolved] (FLINK-35124) Connector Release Fails to run Checkstyle

2024-04-18 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer resolved FLINK-35124.
---
Resolution: Fixed

> Connector Release Fails to run Checkstyle
> -
>
> Key: FLINK-35124
> URL: https://issues.apache.org/jira/browse/FLINK-35124
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> During a release of the AWS connectors the build was failing at the 
> \{{./tools/releasing/shared/stage_jars.sh}} step due to a checkstyle error.
>  
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:3.1.2:check (validate) on 
> project flink-connector-aws: Failed during checkstyle execution: Unable to 
> find suppressions file at location: /tools/maven/suppressions.xml: Could not 
> find resource '/tools/maven/suppressions.xml'. -> [Help 1] {code}
>  
> Looks like it is caused by this 
> [https://github.com/apache/flink-connector-shared-utils/commit/a75b89ee3f8c9a03e97ead2d0bd9d5b7bb02b51a]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35137) Release flink-connector-jdbc v3.2.0 for Flink 1.19

2024-04-18 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838591#comment-17838591
 ] 

Danny Cranmer commented on FLINK-35137:
---

https://lists.apache.org/thread/b7xbjo4crt1527ldksw4nkwo8vs56csy

> Release flink-connector-jdbc v3.2.0 for Flink 1.19
> --
>
> Key: FLINK-35137
> URL: https://issues.apache.org/jira/browse/FLINK-35137
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
> Fix For: jdbc-3.2.0
>
>
> https://github.com/apache/flink-connector-jdbc



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35135) Release flink-connector-gcp-pubsub v3.1.0 for Flink 1.19

2024-04-18 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838590#comment-17838590
 ] 

Danny Cranmer commented on FLINK-35135:
---

https://lists.apache.org/thread/b7l1r0y7nwox2vhf2z3kwjn41clf6w1v

> Release flink-connector-gcp-pubsub v3.1.0 for Flink 1.19
> 
>
> Key: FLINK-35135
> URL: https://issues.apache.org/jira/browse/FLINK-35135
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Google Cloud PubSub
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: gcp-pubsub-3.1.0
>
>
> https://github.com/apache/flink-connector-gcp-pubsub



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35133) Release flink-connector-cassandra v3.x.x for Flink 1.19

2024-04-18 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-35133:
-

Assignee: Danny Cranmer

> Release flink-connector-cassandra v3.x.x for Flink 1.19
> ---
>
> Key: FLINK-35133
> URL: https://issues.apache.org/jira/browse/FLINK-35133
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Cassandra
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>
> https://github.com/apache/flink-connector-cassandra



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35139) Release flink-connector-mongodb v1.2.0 for Flink 1.19

2024-04-18 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838592#comment-17838592
 ] 

Danny Cranmer commented on FLINK-35139:
---

https://lists.apache.org/thread/2982v6n5q0bgldrp919t5t6d19xsl710

> Release flink-connector-mongodb v1.2.0 for Flink 1.19
> -
>
> Key: FLINK-35139
> URL: https://issues.apache.org/jira/browse/FLINK-35139
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / MongoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: mongodb-1.2.0
>
>
> https://github.com/apache/flink-connector-mongodb



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [docs]Problem with Case Document Format in Quickstart [flink-cdc]

2024-04-18 Thread via GitHub


ZmmBigdata commented on PR #3229:
URL: https://github.com/apache/flink-cdc/pull/3229#issuecomment-2063554270

   @GOODBOY008 Do I need to resynchronize with the master repository and resubmit 
the PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34915][table] Complete `DESCRIBE CATALOG` syntax [flink]

2024-04-18 Thread via GitHub


liyubin117 commented on code in PR #24630:
URL: https://github.com/apache/flink/pull/24630#discussion_r1570466913


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/DescribeCatalogOperation.java:
##
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.operations;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.api.internal.TableResultInternal;
+import org.apache.flink.table.catalog.CatalogDescriptor;
+import org.apache.flink.table.catalog.CommonCatalogOptions;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.utils.EncodingUtils;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.flink.table.api.internal.TableResultUtils.buildTableResult;
+
+/** Operation to describe a DESCRIBE CATALOG catalog_name statement. */
+@Internal
+public class DescribeCatalogOperation implements Operation, 
ExecutableOperation {
+
+private final String catalogName;
+private final boolean isExtended;
+
+public DescribeCatalogOperation(String catalogName, boolean isExtended) {
+this.catalogName = catalogName;
+this.isExtended = isExtended;
+}
+
+public String getCatalogName() {
+return catalogName;
+}
+
+public boolean isExtended() {
+return isExtended;
+}
+
+@Override
+public String asSummaryString() {
+Map params = new LinkedHashMap<>();
+params.put("identifier", catalogName);
+params.put("isExtended", isExtended);
+return OperationUtils.formatWithChildren(
+"DESCRIBE CATALOG", params, Collections.emptyList(), 
Operation::asSummaryString);
+}
+
+@Override
+public TableResultInternal execute(Context ctx) {
+CatalogDescriptor catalogDescriptor =
+ctx.getCatalogManager()
+.getCatalogDescriptor(catalogName)
+.orElseThrow(
+() ->
+new ValidationException(
+String.format(
+"Cannot obtain 
metadata information from Catalog %s.",
+catalogName)));
+Map properties = 
catalogDescriptor.getConfiguration().toMap();
+List> rows =
+new ArrayList<>(
+Arrays.asList(
+Arrays.asList("Name", catalogName),
+Arrays.asList(
+"Type",
+properties.getOrDefault(
+
CommonCatalogOptions.CATALOG_TYPE.key(), "")),
+Arrays.asList("Comment", "") // TODO: retain 
for future needs
+));
+if (isExtended) {
+rows.add(Arrays.asList("Properties", 
convertPropertiesToString(properties)));
+}
+
+return buildTableResult(
+Arrays.asList("catalog_description_item", 
"catalog_description_value")

Review Comment:
   reasonable enough, done :)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [docs]Problem with Case Document Format in Quickstart [flink-cdc]

2024-04-18 Thread via GitHub


Jiabao-Sun commented on PR #3229:
URL: https://github.com/apache/flink-cdc/pull/3229#issuecomment-2063581154

   @ZmmBigdata yes, please go ahead.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [docs]Problem with Case Document Format in Quickstart [flink-cdc]

2024-04-18 Thread via GitHub


Jiabao-Sun commented on PR #3229:
URL: https://github.com/apache/flink-cdc/pull/3229#issuecomment-2063583687

   Hi @ZmmBigdata, you don't need to submit a new PR.
   Just rebase the master branch into yours.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35053) TIMESTAMP with TIME ZONE not supported by JDBC connector for Postgres

2024-04-18 Thread Pietro (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pietro updated FLINK-35053:
---
Description: 
The JDBC sink for Postgres does not support {{{}TIMESTAMP WITH TIME ZONE{}}}, 
nor {{TIMESTAMP_LTZ}} types.

Related issues: FLINK-22199, FLINK-20869
h2. Problem Explanation

A Postgres {{target_table}} has a field {{tm_tz}} of type {{timestamptz}} .
{code:sql}
-- Postgres DDL
CREATE TABLE target_table (
tm_tz TIMESTAMP WITH TIME ZONE
)
{code}
In Flink we have a table with a column of type {{{}TIMESTAMP_LTZ(6){}}}, and 
our goal is to sink it to {{{}target_table{}}}.
{code:sql}
-- Flink DDL
CREATE TABLE sink (
tm_tz TIMESTAMP_LTZ(6)
) WITH (
'connector' = 'jdbc',
'table-name' = 'target_table'
...
)
{code}
According to 
[AbstractPostgresCompatibleDialect.supportedTypes()|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/AbstractPostgresCompatibleDialect.java#L109],
 {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} is supported, while 
{{TIMESTAMP_WITH_TIME_ZONE}} is not.

However, when the converter is created via 
[AbstractJdbcRowConverter.externalConverter()|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/converter/AbstractJdbcRowConverter.java#L246],
 it throws an {{UnsupportedOperationException}} since 
{{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} is *not* among the available types, while 
[{{TIMESTAMP_WITH_TIME_ZONE}}|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/converter/AbstractJdbcRowConverter.java#L168]
 is.
{code:java}
Exception in thread "main" java.lang.UnsupportedOperationException: Unsupported 
type:TIMESTAMP_LTZ(6)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.createInternalConverter(AbstractJdbcRowConverter.java:186)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.createPrimitiveConverter(PostgresRowConverter.java:99)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.createInternalConverter(PostgresRowConverter.java:58)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.createNullableInternalConverter(AbstractJdbcRowConverter.java:118)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.<init>(AbstractJdbcRowConverter.java:68)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.<init>(PostgresRowConverter.java:47)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresDialect.getRowConverter(PostgresDialect.java:51)
at 
org.apache.flink.connector.jdbc.table.JdbcDynamicTableSource.getScanRuntimeProvider(JdbcDynamicTableSource.java:184)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.validateScanSource(DynamicSourceUtils.java:478)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.prepareDynamicSource(DynamicSourceUtils.java:161)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.convertSourceToRel(DynamicSourceUtils.java:125)
at 
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:118)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:4002)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2872)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2432)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2346)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2291)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:728)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:714)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3848)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:618)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:229)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:205)
at 
org.apache.flink.table.planner.operations.SqlNodeConvertContext.toRelRoot(SqlNodeConvertContext.java:69)
at 
org.apache.flink.table.planner.operations.converters.SqlQueryConverter.convertSqlNode(SqlQueryConverter.java:48)
at 
org.apache.flink.table.planner.operations.conve

Re: [PR] [docs]Problem with Case Document Format in Quickstart [flink-cdc]

2024-04-18 Thread via GitHub


ZmmBigdata commented on PR #3229:
URL: https://github.com/apache/flink-cdc/pull/3229#issuecomment-2063603076

   > Hi @ZmmBigdata, you don't need to submit a new PR. Just rebase the master 
branch into yours.
   
   @Jiabao-Sun Ok, thank you for your reply. Will this PR still be merged? I'm not 
very familiar with this process.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [docs]Problem with Case Document Format in Quickstart [flink-cdc]

2024-04-18 Thread via GitHub


Jiabao-Sun commented on PR #3229:
URL: https://github.com/apache/flink-cdc/pull/3229#issuecomment-2063604709

   > > Hi @ZmmBigdata, you don't need to submit a new PR. Just rebase the master 
branch into yours.
   > 
   > @Jiabao-Sun Ok, thank you for your reply. Will this PR still be merged? I'm not 
very familiar with this process.
   
   Yes


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35053) TIMESTAMP with TIME ZONE not supported by JDBC connector for Postgres

2024-04-18 Thread Pietro (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pietro updated FLINK-35053:
---
Description: 
The JDBC sink for Postgres does not support {{{}TIMESTAMP WITH TIME ZONE{}}}, 
nor {{TIMESTAMP_LTZ}} types.

Related issues: FLINK-22199, FLINK-20869
h2. Problem Explanation

A Postgres {{target_table}} has a field {{tm_tz}} of type {{timestamptz}} .
{code:sql}
-- Postgres DDL
CREATE TABLE target_table (
tm_tz TIMESTAMP WITH TIME ZONE
)
{code}
In Flink we have a table with a column of type {{{}TIMESTAMP_LTZ(6){}}}, and 
our goal is to sink it to {{{}target_table{}}}.
{code:sql}
-- Flink DDL
CREATE TABLE sink (
tm_tz TIMESTAMP_LTZ(6)
) WITH (
'connector' = 'jdbc',
'table-name' = 'target_table'
...
)
{code}
According to 
[AbstractPostgresCompatibleDialect.supportedTypes()|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/AbstractPostgresCompatibleDialect.java#L109],
 {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} is supported, while 
{{TIMESTAMP_WITH_TIME_ZONE}} is not.

However, when the converter is created via 
[AbstractJdbcRowConverter.externalConverter()|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/converter/AbstractJdbcRowConverter.java#L246],
 it throws an {{UnsupportedOperationException}} since 
{{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} is *not* among the available types, while 
[{{TIMESTAMP_WITH_TIME_ZONE}}|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/converter/AbstractJdbcRowConverter.java#L168]
 is.
{code:java}
Exception in thread "main" java.lang.UnsupportedOperationException: Unsupported 
type:TIMESTAMP_LTZ(6)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.createInternalConverter(AbstractJdbcRowConverter.java:186)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.createPrimitiveConverter(PostgresRowConverter.java:99)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.createInternalConverter(PostgresRowConverter.java:58)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.createNullableInternalConverter(AbstractJdbcRowConverter.java:118)
at 
org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.<init>(AbstractJdbcRowConverter.java:68)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.<init>(PostgresRowConverter.java:47)
at 
org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresDialect.getRowConverter(PostgresDialect.java:51)
at 
org.apache.flink.connector.jdbc.table.JdbcDynamicTableSource.getScanRuntimeProvider(JdbcDynamicTableSource.java:184)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.validateScanSource(DynamicSourceUtils.java:478)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.prepareDynamicSource(DynamicSourceUtils.java:161)
at 
org.apache.flink.table.planner.connectors.DynamicSourceUtils.convertSourceToRel(DynamicSourceUtils.java:125)
at 
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:118)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:4002)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2872)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2432)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2346)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2291)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:728)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:714)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3848)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:618)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:229)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:205)
at 
org.apache.flink.table.planner.operations.SqlNodeConvertContext.toRelRoot(SqlNodeConvertContext.java:69)
at 
org.apache.flink.table.planner.operations.converters.SqlQueryConverter.convertSqlNode(SqlQueryConverter.java:48)
at 
org.apache.flink.table.planner.operations.conve

[jira] [Updated] (FLINK-35053) TIMESTAMP with TIME ZONE not supported by JDBC connector for Postgres

2024-04-18 Thread Pietro (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pietro updated FLINK-35053:
---
Description: 
The JDBC sink for Postgres supports neither the {{TIMESTAMP WITH TIME ZONE}} nor 
the {{TIMESTAMP_LTZ}} type.

Related issues: FLINK-22199, FLINK-20869
h2. Problem Explanation

A Postgres table {{target_table}} has a field {{tm_tz}} of type {{timestamptz}}.
{code:sql}
-- Postgres DDL
CREATE TABLE target_table (
tm_tz TIMESTAMP WITH TIME ZONE
)
{code}
In Flink we have a table with a column of type {{TIMESTAMP_LTZ(6)}}, and our 
goal is to sink it to {{target_table}}.
{code:sql}
-- Flink DDL
CREATE TABLE sink (
tm_tz TIMESTAMP_LTZ(6)
) WITH (
'connector' = 'jdbc',
'table-name' = 'target_table'
...
)
{code}
According to 
[AbstractPostgresCompatibleDialect.supportedTypes()|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/AbstractPostgresCompatibleDialect.java#L109],
 {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} is supported, while 
{{TIMESTAMP_WITH_TIME_ZONE}} is not.

However, when the converter is created via 
[AbstractJdbcRowConverter.externalConverter()|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/converter/AbstractJdbcRowConverter.java#L246],
 it throws an {{UnsupportedOperationException}} since 
{{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} is *not* among the available types, while 
[{{TIMESTAMP_WITH_TIME_ZONE}}|https://github.com/apache/flink-connector-jdbc/blob/7025642d88ff661e486745b23569595e1813a1d0/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/converter/AbstractJdbcRowConverter.java#L168]
 is.
{code:java}
Exception in thread "main" java.lang.UnsupportedOperationException: Unsupported type:TIMESTAMP_LTZ(6)
    at org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.createInternalConverter(AbstractJdbcRowConverter.java:186)
    at org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.createPrimitiveConverter(PostgresRowConverter.java:99)
    at org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.createInternalConverter(PostgresRowConverter.java:58)
    at org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.createNullableInternalConverter(AbstractJdbcRowConverter.java:118)
    at org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.<init>(AbstractJdbcRowConverter.java:68)
    at org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresRowConverter.<init>(PostgresRowConverter.java:47)
    at org.apache.flink.connector.jdbc.databases.postgres.dialect.PostgresDialect.getRowConverter(PostgresDialect.java:51)
    at org.apache.flink.connector.jdbc.table.JdbcDynamicTableSource.getScanRuntimeProvider(JdbcDynamicTableSource.java:184)
    at org.apache.flink.table.planner.connectors.DynamicSourceUtils.validateScanSource(DynamicSourceUtils.java:478)
    at org.apache.flink.table.planner.connectors.DynamicSourceUtils.prepareDynamicSource(DynamicSourceUtils.java:161)
    at org.apache.flink.table.planner.connectors.DynamicSourceUtils.convertSourceToRel(DynamicSourceUtils.java:125)
    at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:118)
    at org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:4002)
    at org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2872)
    at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2432)
    at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2346)
    at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2291)
    at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:728)
    at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:714)
    at org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3848)
    at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:618)
    at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:229)
    at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:205)
    at org.apache.flink.table.planner.operations.SqlNodeConvertContext.toRelRoot(SqlNodeConvertContext.java:69)
    at org.apache.flink.table.planner.operations.converters.SqlQueryConverter.convertSqlNode(SqlQueryConverter.java:48)
    at org.apache.flink.table.planner.operations.conve

[jira] [Created] (FLINK-35157) Sources with watermark alignment get stuck once some subtasks finish

2024-04-18 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-35157:
--

 Summary: Sources with watermark alignment get stuck once some 
subtasks finish
 Key: FLINK-35157
 URL: https://issues.apache.org/jira/browse/FLINK-35157
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.18.1, 1.19.0, 1.17.2
Reporter: Gyula Fora


The current watermark alignment logic can easily get stuck if some subtasks 
finish while others are still running.

The reason is that once a source subtask finishes, it is not excluded from 
alignment, effectively blocking the rest of the job from making progress beyond 
the last watermark + alignment time of the finished sources.

This can be easily reproduced by the following simple pipeline:
{noformat}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
DataStream<Long> s = env.fromSource(
        new NumberSequenceSource(0, 100),
        WatermarkStrategy.<Long>forMonotonousTimestamps()
                .withTimestampAssigner((SerializableTimestampAssigner<Long>) (aLong, l) -> aLong)
                .withWatermarkAlignment("g1", Duration.ofMillis(10), Duration.ofSeconds(2)),
        "Sequence Source")
    .filter((FilterFunction<Long>) aLong -> {
        Thread.sleep(200);
        return true;
    });

s.print();
env.execute();{noformat}
The solution could be to send out a max watermark event once the sources finish, 
or to exclude the finished subtasks from alignment in the source coordinator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [hotfix][values] Temporary fix for ValuesDataSource stuck in infinite loop [flink-cdc]

2024-04-18 Thread via GitHub


yuxiqian closed pull request #3235: [hotfix][values] Temporary fix for 
ValuesDataSource stuck in infinite loop
URL: https://github.com/apache/flink-cdc/pull/3235


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][values] Temporary fix for ValuesDataSource stuck in infinite loop [flink-cdc]

2024-04-18 Thread via GitHub


yuxiqian commented on PR #3235:
URL: https://github.com/apache/flink-cdc/pull/3235#issuecomment-2063673721

   Fixed in #3237.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34953][ci] Add github ci for flink-web to auto commit build files [flink-web]

2024-04-18 Thread via GitHub


GOODBOY008 commented on code in PR #732:
URL: https://github.com/apache/flink-web/pull/732#discussion_r1568758762


##
.github/workflows/docs.yml:
##
@@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+name: "Flink Web CI"
+on:
+  pull_request:
+    branches:
+      - asf-site
+  push:
+    branches:
+      - asf-site
+  workflow_dispatch:
+
+jobs:
+  build-documentation:
+    if: github.repository == 'apache/flink-web'
+    runs-on: ubuntu-latest
+    permissions:
+      # Give the default GITHUB_TOKEN write permission to commit and push the changed files back to the repository.
+      contents: write
+    steps:
+    - name: Checkout repository
+      uses: actions/checkout@v4
+      with:
+        submodules: true
+        fetch-depth: 0
+
+    - name: Setup Hugo
+      uses: peaceiris/actions-hugo@v3
+      with:
+        hugo-version: '0.119.0'
+        extended: true
+
+    - name: Build website
+      run: |
+        # Remove old content folder and create new one
+        rm -r -f content && mkdir content
+
+        # Build the website
+        hugo --source docs --destination target
+
+        # Move newly generated static HTML to the content serving folder
+        mv docs/target/* content
+
+        # Copy quickstarts, rewrite rules and Google Search Console identifier
+        cp -r _include/. content
+
+        # Get the current commit author
+        echo "author=$(git log -1 --pretty=\"%an <%ae>\")" >> $GITHUB_OUTPUT
+
+    - name: Commit and push website build
+      if: ${{ github.event_name == 'push' || github.event_name == 'workflow_dispatch' }}

Review Comment:
   @MartijnVisser The `push` event covers PRs merged into the branch, and 
`workflow_dispatch` allows manually triggering a rebuild of the website.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35158) Error handling in StateFuture's callback

2024-04-18 Thread Yanfei Lei (Jira)
Yanfei Lei created FLINK-35158:
--

 Summary: Error handling in StateFuture's callback
 Key: FLINK-35158
 URL: https://issues.apache.org/jira/browse/FLINK-35158
 Project: Flink
  Issue Type: Sub-task
Reporter: Yanfei Lei






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35156][Runtime] Make operators of DataStream V2 integrate with async state processing framework [flink]

2024-04-18 Thread via GitHub


Zakelly opened a new pull request, #24678:
URL: https://github.com/apache/flink/pull/24678

   ## What is the purpose of the change
   
   This is a simple PR that wires the newly introduced operators of DataStream V2 
with `AbstractAsyncStateStreamOperator`.
   
   ## Brief change log
   
- Introduce `AbstractAsyncStateUdfStreamOperator`, which is nearly identical to 
`AbstractUdfStreamOperator` but extends `AbstractAsyncStateStreamOperator` (a 
rough sketch of its shape follows below).
- Change the base class of `ProcessOperator` (v2) from 
`AbstractUdfStreamOperator` to `AbstractAsyncStateUdfStreamOperator`.
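
   A rough sketch of the shape of the new base class, for orientation only (the 
package, imports, and wiring below are simplified assumptions, not the actual 
code in this PR):

   ```java
   import org.apache.flink.api.common.functions.Function;
   // Assumed import path for the async base operator; may differ in the actual PR.
   import org.apache.flink.runtime.asyncprocessing.operators.AbstractAsyncStateStreamOperator;

   // Sketch only (assumed names/packages): holds the user function exactly like
   // AbstractUdfStreamOperator does, but inherits the async state processing
   // behavior by extending AbstractAsyncStateStreamOperator.
   public abstract class AbstractAsyncStateUdfStreamOperator<OUT, F extends Function>
           extends AbstractAsyncStateStreamOperator<OUT> {

       /** The user-defined function held by this operator. */
       protected final F userFunction;

       protected AbstractAsyncStateUdfStreamOperator(F userFunction) {
           this.userFunction = userFunction;
       }

       /** Returns the user function, mirroring AbstractUdfStreamOperator#getUserFunction. */
       public F getUserFunction() {
           return userFunction;
       }
   }
   ```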
   
   ## Verifying this change
   
   This change is a trivial rework without any test coverage. More tests will be 
added once the whole state processing path works.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35156) Wire new operators for async state with DataStream V2

2024-04-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35156:
---
Labels: pull-request-available  (was: )

> Wire new operators for async state with DataStream V2
> -
>
> Key: FLINK-35156
> URL: https://issues.apache.org/jira/browse/FLINK-35156
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35155] Introduce TableRuntimeException [flink]

2024-04-18 Thread via GitHub


dawidwys opened a new pull request, #24679:
URL: https://github.com/apache/flink/pull/24679

   ## What is the purpose of the change
   
   Introduce `TableRuntimeException` to better classify intentional exceptions 
thrown from SQL runtime.
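
   For illustration only, a minimal sketch of what such an exception class could 
look like (the name matches the PR title, but the package and constructors are 
assumptions, not necessarily what this PR adds):

   ```java
   // Illustrative sketch; the actual class introduced by the PR may differ.
   package org.apache.flink.table.api; // assumed package

   /** Unchecked exception for errors raised intentionally by SQL runtime code. */
   public class TableRuntimeException extends RuntimeException {

       public TableRuntimeException(String message) {
           super(message);
       }

       public TableRuntimeException(String message, Throwable cause) {
           super(message, cause);
       }
   }
   ```

   A dedicated type lets users and tests catch `TableRuntimeException` directly 
instead of matching on generic `RuntimeException` messages.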
   
   
   ## Verifying this change
   
   Added tests
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**yes** / no)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35155) Introduce TableRuntimeException

2024-04-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35155:
---
Labels: pull-request-available  (was: )

> Introduce TableRuntimeException
> ---
>
> Key: FLINK-35155
> URL: https://issues.apache.org/jira/browse/FLINK-35155
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The `throwException` internal function throws a {{RuntimeException}}. It 
> would be nice to have a specific kind of exception thrown from there, so that 
> it's easier to classify those.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35007) Update Flink Kafka connector to support 1.19

2024-04-18 Thread yazgoo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838630#comment-17838630
 ] 

yazgoo commented on FLINK-35007:


Hi,

Do you plan on publishing flink-connector-kafka:3.1.0-1.19?

Thanks !

> Update Flink Kafka connector to support 1.19
> 
>
> Key: FLINK-35007
> URL: https://issues.apache.org/jira/browse/FLINK-35007
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
> Fix For: kafka-4.0.0, kafka-3.1.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

