Re: [PR] [cdc-cli][cdc-composer] Add 'flink-config' for pipeline yaml [flink-cdc]

2024-03-21 Thread via GitHub


joyCurry30 closed pull request #2837: [cdc-cli][cdc-composer] Add 
'flink-config' for pipeline yaml
URL: https://github.com/apache/flink-cdc/pull/2837





Re: [PR] [FLINK-34537] Autoscaler JDBC Support HikariPool [flink-kubernetes-operator]

2024-03-21 Thread via GitHub


1996fanrui commented on code in PR #785:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/785#discussion_r1535117714


##
flink-autoscaler-standalone/src/test/java/org/apache/flink/autoscaler/standalone/AutoscalerStateStoreFactoryTest.java:
##
@@ -64,19 +63,20 @@ void testCreateJdbcStateStoreWithoutURL() {
 @Test
 void testCreateJdbcStateStore() throws Exception {
 final var jdbcUrl = "jdbc:derby:memory:test";
-DriverManager.getConnection(String.format("%s;create=true", 
jdbcUrl)).close();
-
-// Test for create JDBC State store.
+// Test for create JDBC Event Handler.

Review Comment:
   This test class is related to the state store, not the event handler.
   
   Please check the comment, `STATE_STORE_TYPE` and 
`testCreateJdbcEventHandler`.



##
flink-autoscaler-standalone/src/main/java/org/apache/flink/autoscaler/standalone/utils/HikariJDBCUtil.java:
##
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.autoscaler.standalone.utils;
+
+import org.apache.flink.configuration.Configuration;
+
+import com.zaxxer.hikari.HikariConfig;
+import com.zaxxer.hikari.HikariDataSource;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+
+import static 
org.apache.flink.autoscaler.standalone.config.AutoscalerStandaloneOptions.JDBC_PASSWORD_ENV_VARIABLE;
+import static 
org.apache.flink.autoscaler.standalone.config.AutoscalerStandaloneOptions.JDBC_URL;
+import static 
org.apache.flink.autoscaler.standalone.config.AutoscalerStandaloneOptions.JDBC_USERNAME;
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+public class HikariJDBCUtil {

Review Comment:
   This class doesn't have any class-level comment, which will break the CI.
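
   A minimal sketch of the kind of class-level Javadoc the CI check expects
   (wording is illustrative, not the PR's actual comment):

```java
/** Utility for creating JDBC connections through a HikariCP connection pool. */
public class HikariJDBCUtil {}
```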






[jira] [Resolved] (FLINK-34692) Update Flink website to point to the new Flink CDC “Get Started” page

2024-03-21 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-34692.

Resolution: Implemented

flink-cdc master: b92bed50448ee2b106d6a04076f009ed7a0bbe0e

> Update Flink website to point to the new Flink CDC “Get Started” page 
> --
>
> Key: FLINK-34692
> URL: https://issues.apache.org/jira/browse/FLINK-34692
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC, Project Website
>Reporter: Qingsheng Ren
>Assignee: Hang Ruan
>Priority: Major
>  Labels: pull-request-available
>






Re: [PR] [FLINK-34692] Update 'With Flink CDC' to point to the new Flink CDC 'Get Started' page [flink-web]

2024-03-21 Thread via GitHub


leonardBang merged PR #727:
URL: https://github.com/apache/flink-web/pull/727





Re: [PR] [FLINK-34906] Only scale when all tasks are running [flink-kubernetes-operator]

2024-03-21 Thread via GitHub


1996fanrui commented on code in PR #801:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/801#discussion_r1535104524


##
flink-autoscaler-standalone/src/test/java/org/apache/flink/autoscaler/standalone/flinkcluster/FlinkClusterJobListFetcherTest.java:
##
@@ -231,6 +214,22 @@ CompletableFuture sendRequest(M h, U p, R r) {
 return (CompletableFuture)
 CompletableFuture.completedFuture(
 
configurationsOrException.left().get(jobID));
+} else if (h instanceof JobsOverviewHeaders) {

Review Comment:
   This file changes because this PR starts requesting `JobsOverviewHeaders` 
instead of calling `listJobs`.
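
   For context, a self-contained sketch of what such a stub branch does. The
   helper name `handle` and the `jobDetails` parameter are illustrative;
   `JobsOverviewHeaders`, `JobDetails`, and `MultipleJobsDetails` are the real
   REST message types:

```java
import org.apache.flink.runtime.messages.webmonitor.JobDetails;
import org.apache.flink.runtime.messages.webmonitor.MultipleJobsDetails;
import org.apache.flink.runtime.rest.messages.JobsOverviewHeaders;

import java.util.Collection;
import java.util.concurrent.CompletableFuture;

final class JobsOverviewStub {
    // Complete the request with a jobs overview, mirroring what the fetcher
    // now asks for via JobsOverviewHeaders instead of listJobs().
    static CompletableFuture<MultipleJobsDetails> handle(
            Object headers, Collection<JobDetails> jobDetails) {
        if (headers instanceof JobsOverviewHeaders) {
            return CompletableFuture.completedFuture(new MultipleJobsDetails(jobDetails));
        }
        return CompletableFuture.failedFuture(
                new IllegalArgumentException("unexpected request headers"));
    }
}
```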






Re: [PR] [FLINK-34615]Split `ExternalizedCheckpointCleanup` out of `Checkpoint… [flink]

2024-03-21 Thread via GitHub


masteryhx commented on code in PR #24461:
URL: https://github.com/apache/flink/pull/24461#discussion_r1535027271


##
flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceRescaleITCase.java:
##
@@ -122,8 +121,9 @@ private StreamExecutionEnvironment createEnv(
 env.setParallelism(p);
 env.enableCheckpointing(100);
 env.getCheckpointConfig()
-.setExternalizedCheckpointCleanup(
-
CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
+.setExternalizedCheckpointCleanupRetention(

Review Comment:
   +1, this is consistent with the real config name.
   BTW, I'd also suggest renaming the new class to 
`ExternalizedCheckpointRetention`, or at least adding a note such as 'Also 
called ExternalizedCheckpointRetention' to the Javadoc and docs to link the 
two names, for the convenience of users.
   WDYT?
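
   A sketch of the cross-linking Javadoc being suggested. The class name
   assumes the proposed rename is adopted; the constants mirror the existing
   cleanup enum:

```java
/**
 * Retention behaviour for externalized checkpoints.
 *
 * <p>Previously exposed as {@code CheckpointConfig.ExternalizedCheckpointCleanup};
 * mentioning both names in the Javadoc helps users find the new class.
 */
public enum ExternalizedCheckpointRetention {
    DELETE_ON_CANCELLATION,
    RETAIN_ON_CANCELLATION,
    NO_EXTERNALIZED_CHECKPOINTS
}
```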






[jira] [Updated] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2024-03-21 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-33121:
---
Description: 
We make the assumption that Global Failures (with a null Task name) may only be 
RootExceptions, and Local/Task exceptions may be part of the concurrent 
exceptions List (see {{{}JobExceptionsHandler#createRootExceptionInfo{}}}).
However, when the Adaptive scheduler is in a Restarting phase due to an 
existing failure (that is now the new Root), we can still, on rare occasions, 
capture new Global failures, violating this condition (an assertion is 
thrown as part of {{{}assertLocalExceptionInfo{}}}), seeing something like:
{code:java}
The taskName must not be null for a non-global failure.  {code}

A solution to this could be to ignore Global failures while being in a 
Restarting phase on the Adaptive scheduler.

This PR also fixes a smaller bug where we don't pass the 
[taskName|https://github.com/apache/flink/pull/23440/files#diff-0c8b850bbd267631fbe04bb44d8bb3c7e87c3c6aabae904fabdb758026f7fa76R104]
 properly.

Note: DefaultScheduler does not suffer from this issue as it treats failures 
directly as HistoryEntries (no conversion step)

  was:
We make the assumption that Global Failures (with a null Task name) may only be 
RootExceptions, and Local/Task exceptions may be part of the concurrent 
exceptions List (see {{{}JobExceptionsHandler#createRootExceptionInfo{}}}) -- 
However, when the Adaptive scheduler is in a Restarting phase due to an 
existing failure (that is now the new Root), we can still, on rare occasions, 
capture new Global failures, violating this condition (an assertion is 
thrown as part of {{{}assertLocalExceptionInfo{}}}).

A solution to this could be to ignore Global failures while being in a 
Restarting phase on the Adaptive scheduler.

This PR also fixes a smaller bug where we don't pass the 
[taskName|https://github.com/apache/flink/pull/23440/files#diff-0c8b850bbd267631fbe04bb44d8bb3c7e87c3c6aabae904fabdb758026f7fa76R104]
 properly.

Note: DefaultScheduler does not suffer from this issue as it treats failures 
directly as HistoryEntries (no conversion step)


> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>
> We make the assumption that Global Failures (with a null Task name) may only be 
> RootExceptions, and Local/Task exceptions may be part of the concurrent 
> exceptions List (see {{{}JobExceptionsHandler#createRootExceptionInfo{}}}).
> However, when the Adaptive scheduler is in a Restarting phase due to an 
> existing failure (that is now the new Root), we can still, on rare occasions, 
> capture new Global failures, violating this condition (an assertion is 
> thrown as part of {{{}assertLocalExceptionInfo{}}}), seeing something like:
> {code:java}
> The taskName must not be null for a non-global failure.  {code}
> A solution to this could be to ignore Global failures while being in a 
> Restarting phase on the Adaptive scheduler.
> This PR also fixes a smaller bug where we don't pass the 
> [taskName|https://github.com/apache/flink/pull/23440/files#diff-0c8b850bbd267631fbe04bb44d8bb3c7e87c3c6aabae904fabdb758026f7fa76R104]
>  properly.
> Note: DefaultScheduler does not suffer from this issue as it treats failures 
> directly as HistoryEntries (no conversion step)





[jira] [Updated] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2024-03-21 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-33121:
---
Description: 
We make the assumption that Global Failures (with a null Task name) may only be 
RootExceptions, and Local/Task exceptions may be part of the concurrent 
exceptions List (see {{{}JobExceptionsHandler#createRootExceptionInfo{}}}).
However, when the Adaptive scheduler is in a Restarting phase due to an 
existing failure (that is now the new Root), we can still, on rare occasions, 
capture new Global failures, violating this condition (an assertion is 
thrown as part of {{{}assertLocalExceptionInfo{}}}), seeing something like:
{code:java}
The taskName must not be null for a non-global failure.  {code}
A solution to this could be to ignore Global failures while being in a 
Restarting phase on the Adaptive scheduler.

Note: DefaultScheduler does not suffer from this issue as it treats failures 
directly as HistoryEntries (no conversion step)

  was:
We make the assumption that Global Failures (with a null Task name) may only be 
RootExceptions, and Local/Task exceptions may be part of the concurrent 
exceptions List (see {{{}JobExceptionsHandler#createRootExceptionInfo{}}}).
However, when the Adaptive scheduler is in a Restarting phase due to an 
existing failure (that is now the new Root), we can still, on rare occasions, 
capture new Global failures, violating this condition (an assertion is 
thrown as part of {{{}assertLocalExceptionInfo{}}}), seeing something like:
{code:java}
The taskName must not be null for a non-global failure.  {code}

A solution to this could be to ignore Global failures while being in a 
Restarting phase on the Adaptive scheduler.

This PR also fixes a smaller bug where we don't pass the 
[taskName|https://github.com/apache/flink/pull/23440/files#diff-0c8b850bbd267631fbe04bb44d8bb3c7e87c3c6aabae904fabdb758026f7fa76R104]
 properly.

Note: DefaultScheduler does not suffer from this issue as it treats failures 
directly as HistoryEntries (no conversion step)


> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>
> We make the assumption that Global Failures (with a null Task name) may only be 
> RootExceptions, and Local/Task exceptions may be part of the concurrent 
> exceptions List (see {{{}JobExceptionsHandler#createRootExceptionInfo{}}}).
> However, when the Adaptive scheduler is in a Restarting phase due to an 
> existing failure (that is now the new Root), we can still, on rare occasions, 
> capture new Global failures, violating this condition (an assertion is 
> thrown as part of {{{}assertLocalExceptionInfo{}}}), seeing something like:
> {code:java}
> The taskName must not be null for a non-global failure.  {code}
> A solution to this could be to ignore Global failures while being in a 
> Restarting phase on the Adaptive scheduler.
> Note: DefaultScheduler does not suffer from this issue as it treats failures 
> directly as HistoryEntries (no conversion step)





[jira] [Updated] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2024-03-21 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-33121:
---
Description: 
We make the assumption that Global Failures (with a null Task name) may only be 
RootExceptions, and Local/Task exceptions may be part of the concurrent 
exceptions List (see {{{}JobExceptionsHandler#createRootExceptionInfo{}}}) -- 
However, when the Adaptive scheduler is in a Restarting phase due to an 
existing failure (that is now the new Root), we can still, on rare occasions, 
capture new Global failures, violating this condition (an assertion is 
thrown as part of {{{}assertLocalExceptionInfo{}}}).

A solution to this could be to ignore Global failures while being in a 
Restarting phase on the Adaptive scheduler.

This PR also fixes a smaller bug where we don't pass the 
[taskName|https://github.com/apache/flink/pull/23440/files#diff-0c8b850bbd267631fbe04bb44d8bb3c7e87c3c6aabae904fabdb758026f7fa76R104]
 properly.

Note: DefaultScheduler does not suffer from this issue as it treats failures 
directly as HistoryEntries (no conversion step)

  was:
{{JobExceptionsHandler#createRootExceptionInfo}} makes the assumption that 
*Global* Failures (with a null Task name) may *only* be RootExceptions (jobs are 
considered to be in FAILED state when this happens and no further exceptions are 
captured) and *Local/Task* failures may be part of the concurrent exceptions 
List *--* if this precondition is violated, an assertion is thrown as part of 
{{{}assertLocalExceptionInfo{}}}.

The issue lies within the 
[convertFailures|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L422]
 logic, where we take the failureCollection pointer and convert it to a 
HistoryEntry.
In more detail, we pass the first Failure and a pointer to the remaining 
failures collection as part of HistoryEntry creation, and then add the entry 
to the exception History.
In our specific scenario, a Local Failure comes in first; we call 
convertFailures, which creates a HistoryEntry and removes the LocalFailure from 
the collection while also passing a pointer to the now-empty failureCollection. 
Then a Global failure comes in and (before conversion) is added to the 
failureCollection just before serving the requestJob call that returns the 
List of History Entries.
This messes things up, as the LocalFailure now has a 
ConcurrentExceptionsCollection containing a Global Failure, which should never 
happen (causing the assertion).
A solution is to create a copy of the failureCollection in the conversion 
instead of passing the pointer around (as done in the updated PR).

This PR also fixes a smaller bug where we don't pass the 
[taskName|https://github.com/apache/flink/pull/23440/files#diff-0c8b850bbd267631fbe04bb44d8bb3c7e87c3c6aabae904fabdb758026f7fa76R104]
 properly.

Note: DefaultScheduler does not suffer from this issue as it treats failures 
directly as HistoryEntries (no conversion step)


> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>
> We make the assumption that Global Failures (with a null Task name) may only be 
> RootExceptions, and Local/Task exceptions may be part of the concurrent 
> exceptions List (see {{{}JobExceptionsHandler#createRootExceptionInfo{}}}) -- 
> However, when the Adaptive scheduler is in a Restarting phase due to an 
> existing failure (that is now the new Root), we can still, on rare occasions, 
> capture new Global failures, violating this condition (an assertion is 
> thrown as part of {{{}assertLocalExceptionInfo{}}}).
> A solution to this could be to ignore Global failures while being in a 
> Restarting phase on the Adaptive scheduler.
> This PR also fixes a smaller bug where we don't pass the 
> [taskName|https://github.com/apache/flink/pull/23440/files#diff-0c8b850bbd267631fbe04bb44d8bb3c7e87c3c6aabae904fabdb758026f7fa76R104]
>  properly.
> Note: DefaultScheduler does not suffer from this issue as it treats failures 
> directly as HistoryEntries (no conversion step)





Re: [PR] [FLINK-33985][runtime] Support obtain all partitions existing in cluster through ShuffleMaster. [flink]

2024-03-21 Thread via GitHub


flinkbot commented on PR #24553:
URL: https://github.com/apache/flink/pull/24553#issuecomment-2014352795

   
   ## CI report:
   
   * aba805d48cecbc21eeb32da963e62581255bc7a8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-33985) Support obtain all partitions existing in cluster through ShuffleMaster.

2024-03-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33985:
---
Labels: pull-request-available  (was: )

> Support obtain all partitions existing in cluster through ShuffleMaster.
> 
>
> Key: FLINK-33985
> URL: https://issues.apache.org/jira/browse/FLINK-33985
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Junrui Li
>Priority: Major
>  Labels: pull-request-available
>
> Support obtain all partitions existing in cluster through ShuffleMaster.





[PR] [FLINK-33985][runtime] Support obtain all partitions existing in cluster through ShuffleMaster. [flink]

2024-03-21 Thread via GitHub


JunRuiLee opened a new pull request, #24553:
URL: https://github.com/apache/flink/pull/24553

   
   
   ## What is the purpose of the change
   
   [FLINK-33985][runtime] Support obtain all partitions existing in cluster 
through ShuffleMaster.
   
   
   ## Brief change log
   
   
 - Support obtaining all partitions existing in the cluster through the 
ShuffleMaster (see the sketch below).
 - The TaskExecutor will not release partitions immediately when JM failover 
is enabled.
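
   Purely for illustration, the described capability amounts to a lookup hook
   of roughly this shape. The interface and method names here are hypothetical,
   not the PR's actual API:

```java
import org.apache.flink.runtime.shuffle.ShuffleDescriptor;

import java.util.Collection;

// Hypothetical sketch: a ShuffleMaster-side hook for listing every result
// partition currently existing in the cluster, so the scheduler can reuse
// them after a JobManager failover instead of recomputing them.
interface ClusterPartitionLookup {
    Collection<ShuffleDescriptor> listAllPartitionsInCluster();
}
```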
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





Re: [PR] [FLINK-30088][runtime] Fix excessive state updates for TtlMapState and TtlListState [flink]

2024-03-21 Thread via GitHub


rovboyko commented on PR #21406:
URL: https://github.com/apache/flink/pull/21406#issuecomment-2014338903

   @Myasuka after the implementation of 
[FLINK-30535](https://issues.apache.org/jira/browse/FLINK-30535) and 
[FLINK-33881](https://issues.apache.org/jira/browse/FLINK-33881), I changed the 
current commit to optimize only TtlMapState.
   
   After the optimization, the benchmark results look decent (about +17% performance):
   
   Benchmark | backendType | expiredOption | stateVisibility | updateType | Non-optimized | Optimized | Units
   -- | -- | -- | -- | -- | -- | -- | --
   TtlMapStateBenchmark.mapAdd | HEAP | Expire3PercentPerIteration | NeverReturnExpired | OnCreateAndWrite | 361.671 | 521.839 | ops/ms
   TtlMapStateBenchmark.mapAdd | HEAP | Expire3PercentPerIteration | NeverReturnExpired | OnReadAndWrite | 366.513 | 523.892 | ops/ms
   TtlMapStateBenchmark.mapAdd | HEAP | Expire3PercentPerIteration | ReturnExpiredIfNotCleanedUp | OnCreateAndWrite | 364.311 | 461.037 | ops/ms
   TtlMapStateBenchmark.mapAdd | HEAP | Expire3PercentPerIteration | ReturnExpiredIfNotCleanedUp | OnReadAndWrite | 362.902 | 510.513 | ops/ms
   TtlMapStateBenchmark.mapAdd | HEAP | NeverExpired | NeverReturnExpired | OnCreateAndWrite | 361.08 | 520.319 | ops/ms
   TtlMapStateBenchmark.mapAdd | HEAP | NeverExpired | NeverReturnExpired | OnReadAndWrite | 364.026 | 524.573 | ops/ms
   TtlMapStateBenchmark.mapAdd | HEAP | NeverExpired | ReturnExpiredIfNotCleanedUp | OnCreateAndWrite | 362.199 | 515.707 | ops/ms
   TtlMapStateBenchmark.mapAdd | HEAP | NeverExpired | ReturnExpiredIfNotCleanedUp | OnReadAndWrite | 357.642 | 433.35 | ops/ms
   TtlMapStateBenchmark.mapGet | HEAP | Expire3PercentPerIteration | NeverReturnExpired | OnCreateAndWrite | 304.937 | 386.748 | ops/ms
   TtlMapStateBenchmark.mapGet | HEAP | Expire3PercentPerIteration | NeverReturnExpired | OnReadAndWrite | 260.486 | 321.793 | ops/ms
   TtlMapStateBenchmark.mapGet | HEAP | Expire3PercentPerIteration | ReturnExpiredIfNotCleanedUp | OnCreateAndWrite | 324.409 | 397.574 | ops/ms
   TtlMapStateBenchmark.mapGet | HEAP | Expire3PercentPerIteration | ReturnExpiredIfNotCleanedUp | OnReadAndWrite | 265.357 | 353.769 | ops/ms
   TtlMapStateBenchmark.mapGet | HEAP | NeverExpired | NeverReturnExpired | OnCreateAndWrite | 328.767 | 405.722 | ops/ms
   TtlMapStateBenchmark.mapGet | HEAP | NeverExpired | NeverReturnExpired | OnReadAndWrite | 255.596 | 338.208 | ops/ms
   TtlMapStateBenchmark.mapGet | HEAP | NeverExpired | ReturnExpiredIfNotCleanedUp | OnCreateAndWrite | 317.633 | 400.027 | ops/ms
   TtlMapStateBenchmark.mapGet | HEAP | NeverExpired | ReturnExpiredIfNotCleanedUp | OnReadAndWrite | 254.176 | 343.937 | ops/ms
   TtlMapStateBenchmark.mapIsEmpty | HEAP | Expire3PercentPerIteration | NeverReturnExpired | OnCreateAndWrite | 316.203 | 412.834 | ops/ms
   TtlMapStateBenchmark.mapIsEmpty | HEAP | Expire3PercentPerIteration | NeverReturnExpired | OnReadAndWrite | 327.426 | 414.538 | ops/ms
   TtlMapStateBenchmark.mapIsEmpty | HEAP | Expire3PercentPerIteration | ReturnExpiredIfNotCleanedUp | OnCreateAndWrite | 323.163 | 406.269 | ops/ms
   TtlMapStateBenchmark.mapIsEmpty | HEAP | Expire3PercentPerIteration | ReturnExpiredIfNotCleanedUp | OnReadAndWrite | 334.041 | 420.389 | ops/ms
   TtlMapStateBenchmark.mapIsEmpty | HEAP | NeverExpired | NeverReturnExpired | OnCreateAndWrite | 346.116 | 406.019 | ops/ms
   TtlMapStateBenchmark.mapIsEmpty | HEAP | NeverExpired | NeverReturnExpired | OnReadAndWrite | 320.814 | 405.261 | ops/ms
   TtlMapStateBenchmark.mapIsEmpty | HEAP | NeverExpired | ReturnExpiredIfNotCleanedUp | OnCreateAndWrite | 340.874 | 409.718 | ops/ms
   TtlMapStateBenchmark.mapIsEmpty | HEAP | NeverExpired | ReturnExpiredIfNotCleanedUp | OnReadAndWrite | 333.791 | 408.396 | ops/ms
   TtlMapStateBenchmark.mapIterator | HEAP | Expire3PercentPerIteration | NeverReturnExpired | OnCreateAndWrite | 2760.188 | 3413.68 | ops/ms
   TtlMapStateBenchmark.mapIterator | HEAP | Expire3PercentPerIteration | NeverReturnExpired | OnReadAndWrite | 2322.375 | 2419.871 | ops/ms
   TtlMapStateBenchmark.mapIterator | HEAP | Expire3PercentPerIteration | ReturnExpiredIfNotCleanedUp | OnCreateAndWrite | 2694.602 | 3372.643 | ops/ms
   TtlMapStateBenchmark.mapIterator | HEAP | Expire3PercentPerIteration | ReturnExpiredIfNotCleanedUp | OnReadAndWrite | 2275.37 | 2156.808 | ops/ms
   TtlMapStateBenchmark.mapIterator | HEAP | NeverExpired | NeverReturnExpired | OnCreateAndWrite | 2723.703 | 3369.097 | ops/ms
   TtlMapStateBenchmark.mapIterator | HEAP | NeverExpired | NeverReturnExpired | OnReadAndWrite | 2324.465 | 2453.601 | ops/ms
   TtlMapStateBenchmark.mapIterator | HEAP | NeverExpired | ReturnExpiredIfNotCleanedUp | OnCreateAndWrite | 2694.324 | 3364.168 | ops/ms
   TtlMapStateBenchmark.mapIterator | HEAP | NeverExpired | ReturnExpiredIfNotCleanedUp | OnReadAndWrite | 2339.08 | 2377.672 | ops/ms
   TtlMapStateBenchmark.mapPutAll | HEAP | Expire3PercentPerIter

Re: [PR] [FLINK-34615]Split `ExternalizedCheckpointCleanup` out of `Checkpoint… [flink]

2024-03-21 Thread via GitHub


masteryhx commented on code in PR #24461:
URL: https://github.com/apache/flink/pull/24461#discussion_r1535027271


##
flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceRescaleITCase.java:
##
@@ -122,8 +121,9 @@ private StreamExecutionEnvironment createEnv(
 env.setParallelism(p);
 env.enableCheckpointing(100);
 env.getCheckpointConfig()
-.setExternalizedCheckpointCleanup(
-
CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
+.setExternalizedCheckpointCleanupRetention(

Review Comment:
   +1, this is consistent with the real config name.
   BTW, I'd also suggest renaming the new class 
`ExternalizedCheckpointCleanup`, or at least adding a note such as 'Also 
called ExternalizedCheckpointRetention' to the Javadoc and docs to link the 
two names, for the convenience of users.
   WDYT?






[jira] [Resolved] (FLINK-34516) Move CheckpointingMode to flink-core

2024-03-21 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-34516.
--
Fix Version/s: 1.20.0
   Resolution: Fixed

merged 8fac8046...77945916 into master

> Move CheckpointingMode to flink-core
> 
>
> Key: FLINK-34516
> URL: https://issues.apache.org/jira/browse/FLINK-34516
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






Re: [PR] [FLINK-34516] Move CheckpointingMode to flink-core [flink]

2024-03-21 Thread via GitHub


masteryhx commented on PR #24381:
URL: https://github.com/apache/flink/pull/24381#issuecomment-2014287639

   merged 8fac8046...77945916 into master





Re: [PR] [FLINK-34516] Move CheckpointingMode to flink-core [flink]

2024-03-21 Thread via GitHub


masteryhx closed pull request #24381: [FLINK-34516] Move CheckpointingMode to 
flink-core
URL: https://github.com/apache/flink/pull/24381





Re: [PR] [FLINK-34707] Update japicmp configuration for 1.19 [flink]

2024-03-21 Thread via GitHub


lincoln-lil merged PR #24514:
URL: https://github.com/apache/flink/pull/24514





Re: [PR] [FLINK-34707][tests] Update base version for japicmp check [flink]

2024-03-21 Thread via GitHub


lincoln-lil merged PR #24515:
URL: https://github.com/apache/flink/pull/24515





[jira] [Closed] (FLINK-34731) Remove SpeculativeScheduler and incorporate its features into AdaptiveBatchScheduler

2024-03-21 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-34731.
---
Resolution: Done

master: cf0d75c4bb324825a057dc72243bb6a2046f8479

> Remove SpeculativeScheduler and incorporate its features into 
> AdaptiveBatchScheduler
> 
>
> Key: FLINK-34731
> URL: https://issues.apache.org/jira/browse/FLINK-34731
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Presently, speculative execution is exposed to users as a feature of the 
> AdaptiveBatchScheduler.
> To streamline our codebase and reduce maintenance overhead, this ticket will 
> consolidate the SpeculativeScheduler into the AdaptiveBatchScheduler, 
> eliminating the need for a separate SpeculativeScheduler class.





Re: [PR] [FLINK-34731][runtime] Remove SpeculativeScheduler and incorporate its features into AdaptiveBatchScheduler. [flink]

2024-03-21 Thread via GitHub


zhuzhurk closed pull request #24524: [FLINK-34731][runtime] Remove 
SpeculativeScheduler and incorporate its features into AdaptiveBatchScheduler.
URL: https://github.com/apache/flink/pull/24524





Re: [PR] [FLINK-34615]Split `ExternalizedCheckpointCleanup` out of `Checkpoint… [flink]

2024-03-21 Thread via GitHub


Zakelly commented on code in PR #24461:
URL: https://github.com/apache/flink/pull/24461#discussion_r1534971391


##
flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceRescaleITCase.java:
##
@@ -122,8 +121,9 @@ private StreamExecutionEnvironment createEnv(
 env.setParallelism(p);
 env.enableCheckpointing(100);
 env.getCheckpointConfig()
-.setExternalizedCheckpointCleanup(
-
CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
+.setExternalizedCheckpointCleanupRetention(

Review Comment:
   Actually I'd prefer `setExternalizedCheckpointRetention`. @masteryhx WDYT?



##
flink-end-to-end-tests/flink-datastream-allround-test/src/main/java/org/apache/flink/streaming/tests/DataStreamAllroundTestJobFactory.java:
##
@@ -303,22 +302,24 @@ private static void setupCheckpointing(
 ENVIRONMENT_EXTERNALIZE_CHECKPOINT_CLEANUP.key(),
 
ENVIRONMENT_EXTERNALIZE_CHECKPOINT_CLEANUP.defaultValue());
 
-CheckpointConfig.ExternalizedCheckpointCleanup cleanupMode;
+org.apache.flink.configuration.ExternalizedCheckpointCleanup 
cleanupMode;

Review Comment:
   Can we import this on top?
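
   What the suggestion amounts to, as a sketch (the fully qualified class name
   is the one visible in the diff; the wrapper class `Example` is illustrative):

```java
import org.apache.flink.configuration.ExternalizedCheckpointCleanup;

class Example {
    // With the import at the top, the field no longer needs the
    // fully qualified org.apache.flink.configuration name inline.
    ExternalizedCheckpointCleanup cleanupMode;
}
```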






[jira] [Updated] (FLINK-34909) OceanBase transaction ID by Flink CDC

2024-03-21 Thread xiaotouming (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaotouming updated FLINK-34909:

Summary: OceanBase transaction ID by Flink CDC  (was: OceanBase transaction ID requirement)

> OceanBase transaction ID by Flink CDC
> ---
>
> Key: FLINK-34909
> URL: https://issues.apache.org/jira/browse/FLINK-34909
> Project: Flink
>  Issue Type: New Feature
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: xiaotouming
>Priority: Major
> Fix For: cdc-3.1.0
>
>
> The OceanBase transaction ID can be obtained via the Flink DataStream API.





[jira] [Updated] (FLINK-34909) OceanBase transaction ID by Flink CDC

2024-03-21 Thread xiaotouming (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaotouming updated FLINK-34909:

Description: Want to get the OceanBase transaction ID via Flink CDC  (was: 
The OceanBase transaction ID can be obtained via the Flink DataStream API.)

> OceanBase transaction ID by Flink CDC
> ---
>
> Key: FLINK-34909
> URL: https://issues.apache.org/jira/browse/FLINK-34909
> Project: Flink
>  Issue Type: New Feature
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: xiaotouming
>Priority: Major
> Fix For: cdc-3.1.0
>
>
> Want to get the OceanBase transaction ID via Flink CDC





[jira] [Commented] (FLINK-34903) Add mysql-pipeline-connector with table.exclude.list option to exclude unnecessary tables

2024-03-21 Thread Thorne (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829718#comment-17829718
 ] 

Thorne commented on FLINK-34903:


OK, thanks, I have done it.

> Add mysql-pipeline-connector with  table.exclude.list option to exclude 
> unnecessary tables 
> ---
>
> Key: FLINK-34903
> URL: https://issues.apache.org/jira/browse/FLINK-34903
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Thorne
>Priority: Major
>  Labels: cdc
> Fix For: cdc-3.1.0
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
>     When using the MySQL Pipeline connector for whole-database 
> synchronization, users currently cannot exclude unnecessary tables. Taking 
> Debezium's parameters as a reference, specifically 
> *table.exclude.list*: if *table.include.list* is declared, then the 
> *table.exclude.list* parameter will not take effect. However, the tables 
> specified in the tables parameter of the MySQL Pipeline connector are 
> effectively added to the *table.include.list* in Debezium's context.
> !screenshot-1.png!
> !screenshot-2.png|width=834,height=86!
> Debezium option description:
> !screenshot-3.png|width=831,height=217!
>     In summary, it is necessary to introduce an externally exposed 
> *table.exclude.list* parameter within the MySQL Pipeline connector to 
> facilitate the exclusion of tables. This is because the current setup does 
> not allow excluding unnecessary tables while including others through the 
> tables parameter.





[jira] [Closed] (FLINK-34909) OceanBase transaction ID requirement

2024-03-21 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu closed FLINK-34909.
--
Resolution: Invalid

Please use English to describe your ticket; feel free to reopen once 
changed.

> OceanBase transaction ID requirement
> ---
>
> Key: FLINK-34909
> URL: https://issues.apache.org/jira/browse/FLINK-34909
> Project: Flink
>  Issue Type: New Feature
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: xiaotouming
>Priority: Major
> Fix For: cdc-3.1.0
>
>
> The OceanBase transaction ID can be obtained via the Flink DataStream API.





[jira] [Commented] (FLINK-34903) Add mysql-pipeline-connector with table.exclude.list option to exclude unnecessary tables

2024-03-21 Thread Hongshun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829715#comment-17829715
 ] 

Hongshun Wang commented on FLINK-34903:
---

[~fffy], please change the PR title to [FLINK-34903][] Add 
mysql-pipeline-connector with table.exclude.list option to exclude unnecessary 
tables.

> Add mysql-pipeline-connector with  table.exclude.list option to exclude 
> unnecessary tables 
> ---
>
> Key: FLINK-34903
> URL: https://issues.apache.org/jira/browse/FLINK-34903
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Thorne
>Priority: Major
>  Labels: cdc
> Fix For: cdc-3.1.0
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
>     When using the MySQL Pipeline connector for whole-database 
> synchronization, users currently cannot exclude unnecessary tables. Taking 
> Debezium's parameters as a reference, specifically 
> *table.exclude.list*: if *table.include.list* is declared, then the 
> *table.exclude.list* parameter will not take effect. However, the tables 
> specified in the tables parameter of the MySQL Pipeline connector are 
> effectively added to the *table.include.list* in Debezium's context.
> !screenshot-1.png!
> !screenshot-2.png|width=834,height=86!
> Debezium option description:
> !screenshot-3.png|width=831,height=217!
>     In summary, it is necessary to introduce an externally exposed 
> *table.exclude.list* parameter within the MySQL Pipeline connector to 
> facilitate the exclusion of tables. This is because the current setup does 
> not allow excluding unnecessary tables while including others through the 
> tables parameter.





[jira] [Assigned] (FLINK-32513) Job in BATCH mode with a significant number of transformations freezes on method StreamGraphGenerator.existsUnboundedSource()

2024-03-21 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-32513:
---

Assignee: Jeyhun Karimov

> Job in BATCH mode with a significant number of transformations freezes on 
> method StreamGraphGenerator.existsUnboundedSource()
> -
>
> Key: FLINK-32513
> URL: https://issues.apache.org/jira/browse/FLINK-32513
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.15.3, 1.16.1, 1.17.1
> Environment: All modes (local, k8s session, k8s application, ...)
> Flink 1.15.3
> Flink 1.16.1
> Flink 1.17.1
>Reporter: Vladislav Keda
>Assignee: Jeyhun Karimov
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2023-07-10-17-26-46-544.png
>
>
> A Flink job executed in BATCH mode with a significant number of transformations 
> (more than 30 in my case) takes a very long time to start due to the method 
> StreamGraphGenerator.existsUnboundedSource(). Also, during the execution of 
> the method, a lot of memory is consumed, which causes the GC to fire 
> frequently.
> Thread Dump:
> {code:java}
> "main@1" prio=5 tid=0x1 nid=NA runnable
>   java.lang.Thread.State: RUNNABLE
>       at java.util.ArrayList.addAll(ArrayList.java:702)
>       at 
> org.apache.flink.streaming.api.transformations.TwoInputTransformation.getTransitivePredecessors(TwoInputTransformation.java:224)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.PartitionTransformation.getTransitivePredecessors(PartitionTransformation.java:95)
>       at 
> org.apache.flink.streaming.api.transformations.TwoInputTransformation.getTransitivePredecessors(TwoInputTransformation.java:223)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.PartitionTransformation.getTransitivePredecessors(PartitionTransformation.java:95)
>       at 
> org.apache.flink.streaming.api.transformations.TwoInputTransformation.getTransitivePredecessors(TwoInputTransformation.java:223)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api

Re: [PR] [FLINK-34643] Fix concurrency issue in LoggerAuditingExtension [flink]

2024-03-21 Thread via GitHub


rkhachatryan commented on PR #24550:
URL: https://github.com/apache/flink/pull/24550#issuecomment-2013992333

   Sure, my theory is that although the order is correct, the write to this 
field (`loggingEvents`) might not be visible to other threads, including 
threads started by Flink and logging threads (if any).
   So some threads might see `null` there, catch NPE and log it to 
ConsoleLogger.
   I'm not 100% sure that this is the issue, but it seems more likely than 
async logging or buffering of log records.
   
   The fix is to use `volatile` or to avoid using the field (use a closure). I'm using 
both (just in case some new usage is added later).
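
   A minimal sketch of the `volatile` half of the fix, assuming a JUnit 5
   extension; the real class's body and event type differ, only the field and
   its lifecycle are shown:

```java
import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

import java.util.concurrent.ConcurrentLinkedQueue;

public class LoggerAuditingExtension implements BeforeEachCallback, AfterEachCallback {

    // volatile: the write in beforeEach() must become visible to logging
    // threads that append events concurrently; without it they may see null.
    private volatile ConcurrentLinkedQueue<String> loggingEvents;

    @Override
    public void beforeEach(ExtensionContext context) {
        loggingEvents = new ConcurrentLinkedQueue<>();
    }

    @Override
    public void afterEach(ExtensionContext context) {
        loggingEvents = null;
    }
}
```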





Re: [PR] Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 [flink-connector-elasticsearch]

2024-03-21 Thread via GitHub


snuyanzin merged PR #92:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/92





Re: [PR] [FLINK-34902][table] Fix column mismatch IndexOutOfBoundsException [flink]

2024-03-21 Thread via GitHub


flinkbot commented on PR #24552:
URL: https://github.com/apache/flink/pull/24552#issuecomment-2013078895

   
   ## CI report:
   
   * d19f36951321edaed8d17c5a8bfe310ef4b2521d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-34902) INSERT INTO column mismatch leads to IndexOutOfBoundsException

2024-03-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34902:
---
Labels: pull-request-available  (was: )

> INSERT INTO column mismatch leads to IndexOutOfBoundsException
> --
>
> Key: FLINK-34902
> URL: https://issues.apache.org/jira/browse/FLINK-34902
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Timo Walther
>Priority: Major
>  Labels: pull-request-available
>
> SQL:
> {code}
> INSERT INTO t (a, b) SELECT 1;
> {code}
>  
> Stack trace:
> {code}
> org.apache.flink.table.api.ValidationException: SQL validation failed. Index 
> 1 out of bounds for length 1
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:200)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:117)
>     at
> Caused by: java.lang.IndexOutOfBoundsException: Index 1 out of bounds for 
> length 1
>     at 
> java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
>     at 
> java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
>     at 
> java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
>     at java.base/java.util.Objects.checkIndex(Objects.java:374)
>     at java.base/java.util.ArrayList.get(ArrayList.java:459)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter$.$anonfun$reorder$1(PreValidateReWriter.scala:355)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter$.$anonfun$reorder$1$adapted(PreValidateReWriter.scala:355)
> {code}





[PR] [FLINK-34902][table] Fix column mismatch IndexOutOfBoundsException exception [flink]

2024-03-21 Thread via GitHub


jeyhunkarimov opened a new pull request, #24552:
URL: https://github.com/apache/flink/pull/24552

   ## What is the purpose of the change
   
   Column mismatch validation error should not throw IndexOutOfBoundsException
   
   
   ## Brief change log
   
 - Check column mismatch for SELECT and VALUES clauses (see the sketch below)
 - Add tests
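
   A hypothetical helper illustrating the idea; the real fix lives in
   `PreValidateReWriter` and reports a proper `ValidationException`:

```java
import java.util.List;

final class InsertColumnCountCheck {
    // Validate the explicit target column list against the number of fields
    // the source query produces, before any per-index access happens.
    static void check(List<String> targetColumns, int queryFieldCount) {
        if (targetColumns.size() != queryFieldCount) {
            throw new IllegalArgumentException(
                    String.format(
                            "INSERT INTO specifies %d columns but the query returns %d fields.",
                            targetColumns.size(), queryFieldCount));
        }
    }
}
```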
   
   
   ## Verifying this change
   
   `org.apache.flink.table.planner.calcite.FlinkCalciteSqlValidatorTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   





[jira] [Commented] (FLINK-34898) Cannot create ARRAY of named STRUCTs

2024-03-21 Thread Chloe He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829636#comment-17829636
 ] 

Chloe He commented on FLINK-34898:
--

[~hackergin] I have updated the title and the content of this ticket. Thank you!

> Cannot create ARRAY of named STRUCTs
> 
>
> Key: FLINK-34898
> URL: https://issues.apache.org/jira/browse/FLINK-34898
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0
>Reporter: Chloe He
>Priority: Major
> Attachments: image-2024-03-21-12-00-00-183.png
>
>
> I want to construct data that consists of arrays of named STRUCT. For 
> example, one field may look like `[\{"a": 1}]`. I am able to construct this 
> named STRUCT as
> {code:java}
> SELECT CAST(ROW(1) as ROW<a INT>) AS row1;  {code}
> but when I try to wrap this in an ARRAY, it fails:
> {code:java}
> SELECT ARRAY[CAST(ROW(1) as ROW<a INT>)] AS row1;  
> // error
> Caused by: java.lang.UnsupportedOperationException: class 
> org.apache.calcite.sql.SqlBasicCall: ROW(1)
> {code}
> These are the workarounds that I found:
> {code:java}
> SELECT ROW(ROW(CAST(ROW(1) as ROW<a INT>))) AS row1; 
> // or
> SELECT cast(ARRAY[ROW(1)] as ARRAY<ROW<a INT>>); {code}
> but I think this is a bug that we need to follow up and fix.





[jira] [Updated] (FLINK-34898) Cannot create ARRAY of named STRUCTs

2024-03-21 Thread Chloe He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chloe He updated FLINK-34898:
-
Affects Version/s: 1.19.0
  Description: 
I want to construct data that consists of arrays of named STRUCT. For example, 
one field may look like `[\{"a": 1}]`. I am able to construct this named STRUCT 
as
{code:java}
SELECT CAST(ROW(1) as ROW<a INT>) AS row1;  {code}
but when I try to wrap this in an ARRAY, it fails:
{code:java}
SELECT ARRAY[CAST(ROW(1) as ROW<a INT>)] AS row1;  

// error
Caused by: java.lang.UnsupportedOperationException: class 
org.apache.calcite.sql.SqlBasicCall: ROW(1)
{code}
These are the workarounds that I found:
{code:java}
SELECT ROW(ROW(CAST(ROW(1) as ROW<a INT>))) AS row1; 
// or
SELECT cast(ARRAY[ROW(1)] as ARRAY<ROW<a INT>>); {code}
but I think this is a bug that we need to follow up and fix.

  was:
I'm trying to create named structs using Flink SQL and I found a previous 
ticket https://issues.apache.org/jira/browse/FLINK-9161 that mentions the use 
of the following syntax:
{code:java}
SELECT CAST(('a', 1) as ROW) AS row1;
{code}
However, my named struct has a single field and effectively it should look 
something like `\{"a": 1}`. I can't seem to be able to find a way to construct 
this. I have experimented with a few different syntax and it either throws 
parsing error or casting error:
{code:java}
Cast function cannot convert value of type INTEGER to type 
RecordType(VARCHAR(2147483647) a) {code}

  Summary: Cannot create ARRAY of named STRUCTs  (was: Cannot 
create named STRUCT with a single field)

> Cannot create ARRAY of named STRUCTs
> 
>
> Key: FLINK-34898
> URL: https://issues.apache.org/jira/browse/FLINK-34898
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0
>Reporter: Chloe He
>Priority: Major
> Attachments: image-2024-03-21-12-00-00-183.png
>
>
> I want to construct data that consists of arrays of named STRUCT. For 
> example, one field may look like `[\{"a": 1}]`. I am able to construct this 
> named STRUCT as
> {code:java}
> SELECT CAST(ROW(1) as ROW<a INT>) AS row1;  {code}
> but when I try to wrap this in an ARRAY, it fails:
> {code:java}
> SELECT ARRAY[CAST(ROW(1) as ROW<a INT>)] AS row1;  
> // error
> Caused by: java.lang.UnsupportedOperationException: class 
> org.apache.calcite.sql.SqlBasicCall: ROW(1)
> {code}
> These are the workarounds that I found:
> {code:java}
> SELECT ROW(ROW(CAST(ROW(1) as ROW<a INT>))) AS row1; 
> // or
> SELECT cast(ARRAY[ROW(1)] as ARRAY<ROW<a INT>>); {code}
> but I think this is a bug that we need to follow up and fix.





[jira] [Commented] (FLINK-34239) Introduce a deep copy method of SerializerConfig for merging with Table configs in org.apache.flink.table.catalog.DataTypeFactoryImpl

2024-03-21 Thread Kumar Mallikarjuna (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829635#comment-17829635
 ] 

Kumar Mallikarjuna commented on FLINK-34239:


Hello [~Zhanghao Chen], [~zjureel], I've raised a PR for the change. Could you 
please take a look? Thanks! :)

> Introduce a deep copy method of SerializerConfig for merging with Table 
> configs in org.apache.flink.table.catalog.DataTypeFactoryImpl 
> --
>
> Key: FLINK-34239
> URL: https://issues.apache.org/jira/browse/FLINK-34239
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Assignee: Kumar Mallikarjuna
>Priority: Major
>  Labels: pull-request-available
>
> *Problem*
> Currently, 
> org.apache.flink.table.catalog.DataTypeFactoryImpl#createSerializerExecutionConfig
>  will create a deep-copy of the SerializerConfig and merge Table config into 
> it. However, the deep copy is done by manually calling the getter and setter 
> methods of SerializerConfig, and is prone to human error, e.g. missing a 
> newly added field of SerializerConfig when copying.
> *Proposal*
> Introduce a deep copy method for SerializerConfig and replace the curr impl 
> in 
> org.apache.flink.table.catalog.DataTypeFactoryImpl#createSerializerExecutionConfig.
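
For illustration, a minimal sketch of the proposed pattern, using a hypothetical config class — the field set of the real SerializerConfig will differ:
{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for SerializerConfig; the fields are illustrative only.
public final class ExampleSerializerConfig {
    private final Map<Class<?>, Class<?>> registeredTypes = new HashMap<>();
    private boolean forceKryo;

    // Keeping the deep copy inside the config class means a newly added field
    // only needs to be handled here, not at every call site that copies.
    public ExampleSerializerConfig copy() {
        ExampleSerializerConfig copied = new ExampleSerializerConfig();
        copied.registeredTypes.putAll(this.registeredTypes);
        copied.forceKryo = this.forceKryo;
        return copied;
    }
}
{code}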



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34409][ci] Enables (most) tests that were disabled for the AdaptiveScheduler due to missing features [flink]

2024-03-21 Thread via GitHub


snuyanzin commented on code in PR #24285:
URL: https://github.com/apache/flink/pull/24285#discussion_r1534233653


##
flink-end-to-end-tests/run-nightly-tests.sh:
##
@@ -125,30 +125,28 @@ function run_group_1 {
 # Docker / Container / Kubernetes tests
 

 
-if [[ ${PROFILE} != *"enable-adaptive-scheduler"* ]]; then
-run_test "Wordcount on Docker test (custom fs plugin)" 
"$END_TO_END_DIR/test-scripts/test_docker_embedded_job.sh dummy-fs"
-
-run_test "Run Kubernetes test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_embedded_job.sh"
-run_test "Run kubernetes session test (default input)" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_session.sh"
-run_test "Run kubernetes session test (custom fs plugin)" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_session.sh dummy-fs"
-run_test "Run kubernetes application test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_application.sh"
-run_test "Run kubernetes application HA test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_application_ha.sh"
-run_test "Run Kubernetes IT test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_itcases.sh"
-
-run_test "Running Flink over NAT end-to-end test" 
"$END_TO_END_DIR/test-scripts/test_nat.sh" "skip_check_exceptions"
-
-if [[ `uname -i` != 'aarch64' ]]; then
-# Skip PyFlink e2e test, because MiniConda and Pyarrow which 
Pyflink depends doesn't support aarch64 currently.
-run_test "Run kubernetes pyflink application test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_pyflink_application.sh"
-
-# Hadoop YARN deosn't support aarch64 at this moment. See: 
https://issues.apache.org/jira/browse/HADOOP-16723
-# These tests are known to fail on JDK11. See FLINK-13719
-if [[ ${PROFILE} != *"jdk11"* ]]; then
-run_test "Running Kerberized YARN per-job on Docker test 
(default input)" "$END_TO_END_DIR/test-scripts/test_yarn_job_kerberos_docker.sh"
-run_test "Running Kerberized YARN per-job on Docker test 
(custom fs plugin)" 
"$END_TO_END_DIR/test-scripts/test_yarn_job_kerberos_docker.sh dummy-fs"
-run_test "Running Kerberized YARN application on Docker test 
(default input)" 
"$END_TO_END_DIR/test-scripts/test_yarn_application_kerberos_docker.sh"
-run_test "Running Kerberized YARN application on Docker test 
(custom fs plugin)" 
"$END_TO_END_DIR/test-scripts/test_yarn_application_kerberos_docker.sh dummy-fs"
-fi
+run_test "Wordcount on Docker test (custom fs plugin)" 
"$END_TO_END_DIR/test-scripts/test_docker_embedded_job.sh dummy-fs"
+
+run_test "Run Kubernetes test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_embedded_job.sh"
+run_test "Run kubernetes session test (default input)" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_session.sh"
+run_test "Run kubernetes session test (custom fs plugin)" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_session.sh dummy-fs"
+run_test "Run kubernetes application test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_application.sh"
+run_test "Run kubernetes application HA test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_application_ha.sh"
+run_test "Run Kubernetes IT test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_itcases.sh"
+
+run_test "Running Flink over NAT end-to-end test" 
"$END_TO_END_DIR/test-scripts/test_nat.sh" "skip_check_exceptions"
+
+if [[ `uname -i` != 'aarch64' ]]; then
+# Skip PyFlink e2e test, because MiniConda and Pyarrow which Pyflink 
depends doesn't support aarch64 currently.
+run_test "Run kubernetes pyflink application test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_pyflink_application.sh"
+
+# Hadoop YARN deosn't support aarch64 at this moment. See: 
https://issues.apache.org/jira/browse/HADOOP-16723

Review Comment:
   ```suggestion
   # Hadoop YARN doesn't support aarch64 at this moment. See: 
https://issues.apache.org/jira/browse/HADOOP-16723
   ```
   nit



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34910) Can not plan window join without projections

2024-03-21 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-34910.

Resolution: Fixed

Fixed in 709bf93534fcdfd2b4452667af450f1748bf1ccc

> Can not plan window join without projections
> 
>
> Key: FLINK-34910
> URL: https://issues.apache.org/jira/browse/FLINK-34910
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> When running:
> {code}
>   @Test
>   def testWindowJoinWithoutProjections(): Unit = {
> val sql =
>   """
> |SELECT *
> |FROM
> |  TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
> MINUTE)) AS L
> |JOIN
> |  TABLE(TUMBLE(TABLE MyTable2, DESCRIPTOR(rowtime), INTERVAL '15' 
> MINUTE)) AS R
> |ON L.window_start = R.window_start AND L.window_end = R.window_end 
> AND L.a = R.a
>   """.stripMargin
> util.verifyRelPlan(sql)
>   }
> {code}
> It fails with:
> {code}
> FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME_MATERIALIZE(proctime) AS 
> proctime, window_start, window_end, window_time, a0, b0, c0, rowtime0, 
> PROCTIME_MATERIALIZE(proctime0) AS proctime0, window_start0, window_end0, 
> window_time0])
> +- FlinkLogicalCorrelate(correlation=[$cor0], joinType=[inner], 
> requiredColumns=[{}])
>:- FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR($3), 
> 90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) 
> b, BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* 
> proctime, TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) 
> *ROWTIME* window_time)])
>:  +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
> 1000:INTERVAL SECOND)])
>: +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS 
> proctime])
>:+- FlinkLogicalTableSourceScan(table=[[default_catalog, 
> default_database, MyTable]], fields=[a, b, c, rowtime])
>+- 
> FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR(CAST($3):TIMESTAMP(3)),
>  90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) 
> b, BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* 
> proctime, TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) 
> *ROWTIME* window_time)])
>   +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
> 1000:INTERVAL SECOND)])
>  +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS 
> proctime])
> +- FlinkLogicalTableSourceScan(table=[[default_catalog, 
> default_database, MyTable2]], fields=[a, b, c, rowtime])
> Failed to get time attribute index from DESCRIPTOR(CAST($3):TIMESTAMP(3)). 
> This is a bug, please file a JIRA issue.
> Please check the documentation for the set of currently supported SQL 
> features.
> {code}
> In prior versions this had another problem of ambiguous {{rowtime}} column, 
> but this has been fixed by [FLINK-32648]. In versions < 1.19 
> WindowTableFunctions were incorrectly scoped, because they were not extending 
> from Calcite's SqlWindowTableFunction and the scoping implemented in 
> SqlValidatorImpl#convertFrom was incorrect. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34910] Fix optimizing window join [flink]

2024-03-21 Thread via GitHub


dawidwys merged PR #24549:
URL: https://github.com/apache/flink/pull/24549


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34643] Fix concurrency issue in LoggerAuditingExtension [flink]

2024-03-21 Thread via GitHub


XComp commented on PR #24550:
URL: https://github.com/apache/flink/pull/24550#issuecomment-2012773585

   Can you elaborate a bit on why this would solve the issue? Isn't `beforeEach` 
called only once per test method? :thinking: 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34409][ci] Enables (most) tests that were disabled for the AdaptiveScheduler due to missing features [flink]

2024-03-21 Thread via GitHub


XComp commented on PR #24285:
URL: https://github.com/apache/flink/pull/24285#issuecomment-2012732613

   I rebased the branch onto the most recent `master`, but it won't make much of a 
difference: the failed tests in CI are unrelated to the changes of this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34409][ci] Enables (most) tests that were disabled for the AdaptiveScheduler due to missing features [flink]

2024-03-21 Thread via GitHub


XComp commented on PR #24285:
URL: https://github.com/apache/flink/pull/24285#issuecomment-2012724874

   > this thing from PR description is not clear to me
   > if we temporarily enable it, what is the long term plan here?
   
   I was referring to commit 
https://github.com/apache/flink/pull/24285/commits/d0ea2d70bffaef5a03669240564e148ab6ea14d3
 that enables the AdaptiveScheduler profile to verify the test execution, but it 
won't be merged into `master`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34409][ci] Enables (most) tests that were disabled for the AdaptiveScheduler due to missing features [flink]

2024-03-21 Thread via GitHub


snuyanzin commented on PR #24285:
URL: https://github.com/apache/flink/pull/24285#issuecomment-2012704935

   Should it be rebased because of FLINK-34718,
   to be sure that CI is green?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34911) ChangelogRecoveryRescaleITCase failed fatally with 127 exit code

2024-03-21 Thread Ryan Skraba (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Skraba updated FLINK-34911:

Description: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58455&view=logs&j=a657ddbf-d986-5381-9649-342d9c92e7fb&t=dc085d4a-05c8-580e-06ab-21f5624dab16&l=9029]

 
{code:java}
Mar 21 01:50:42 01:50:42.553 [ERROR] Command was /bin/sh -c cd 
'/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-21.0.1+12/bin/java' '-XX:+UseG1GC' 
'-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/1/s/flink-tests/target/surefire/surefirebooter-20240321010847189_810.jar' 
'/__w/1/s/flink-tests/target/surefire' '2024-03-21T01-08-44_720-jvmRun3' 
'surefire-20240321010847189_808tmp' 'surefire_207-20240321010847189_809tmp'
Mar 21 01:50:42 01:50:42.553 [ERROR] Error occurred in starting fork, check 
output in log
Mar 21 01:50:42 01:50:42.553 [ERROR] Process Exit Code: 127
Mar 21 01:50:42 01:50:42.553 [ERROR] Crashed tests:
Mar 21 01:50:42 01:50:42.553 [ERROR] 
org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:418)
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
Mar 21 01:50:42 01:50:42.554 [ERROR]at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1240)
{code}
From the watchdog, only {{ChangelogRecoveryRescaleITCase}} didn't complete, 
specifically parameterized with an {{EmbeddedRocksDBStateBackend}} with 
incremental checkpointing enabled.

The base class ({{{}ChangelogRecoveryITCaseBase{}}}) starts a 
{{MiniClusterWithClientResource}}
{code:java}
~/Downloads/CI/logs-cron_jdk21-test_cron_jdk21_tests-1710982836$ cat watchdog| 
grep "Tests run\|Running org.apache.flink" | grep -o "org.apache.flink[^ ]*$" | 
sort | uniq -c | sort -n | head
      1 org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
      2 org.apache.flink.api.connector.source.lib.NumberSequenceSourceITCase
{code}
 

  was:
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58455&view=logs&j=a657ddbf-d986-5381-9649-342d9c92e7fb&t=dc085d4a-05c8-580e-06ab-21f5624dab16&l=9029]

 
{code:java}
 Mar 21 01:50:42 01:50:42.553 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.2.2:test (integration-tests) 
on project flink-tests: 
Mar 21 01:50:42 01:50:42.553 [ERROR] 
Mar 21 01:50:42 01:50:42.553 [ERROR] Please refer to 
/__w/1/s/flink-tests/target/surefire-reports for the individual test results.
Mar 21 01:50:42 01:50:42.553 [ERROR] Please refer to dump files (if any exist) 
[date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
Mar 21 01:50:42 01:50:42.553 [ERROR] ExecutionException The forked VM 
terminated without properly saying goodbye. VM crash or System.exit called?
Mar 21 01:50:42 01:50:42.553 [ERROR] Command was /bin/sh -c cd 
'/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-21.0.1+12/bin/java' '-XX:+UseG1GC' 
'-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/1/s/flink-tests/target/surefire/surefirebooter-20240321010847189_810.jar' 
'/__w/1/s/flink-tests/target/surefire' '2024-03-21T01-08-44_720-jvmRun3' 
'surefire-20240321010847189_808tmp' 'surefire_207-20240321010847189_809tmp'
Mar 21 01:50:42 01:50:42.553 [ERROR] Error occurred in starting fork, check 
output in log
Mar 21 01:50:42 01:50:42.553 [ERROR] Process Exit Code: 127
Mar 21 01:50:42 01:50:42.553 [ERROR] Crashed tests:
Mar 21 01:50:42 01:50:42.553 [ERROR] 
org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
Mar 21 01:50:42 01:50:42.553 [ERROR] 
org.apache.maven.surefire.booter.SurefireBooterForkException: 
ExecutionException The forked VM terminated without properly saying goodbye. VM 
crash or System.exit called?
Mar 21 01:50:42 01:50:42.553 [ERROR] Command was /bin/sh -c cd 
'/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-21.0.1+12/bin/java' '-XX:+UseG1GC' 
'-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/1/s/flink-tests/target/surefire/surefirebooter-20240321010847189_810.jar' 
'/__w/1/s/flink-tests/target/surefire' '2024-03-21T01-08-44_720-jvmRun3' 
'surefire-20240321010847189_808tmp' 'surefire_207-2024

[jira] [Updated] (FLINK-34911) ChangelogRecoveryRescaleITCase failed fatally with 127 exit code

2024-03-21 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34911:
--
Component/s: Runtime / State Backends

> ChangelogRecoveryRescaleITCase failed fatally with 127 exit code
> 
>
> Key: FLINK-34911
> URL: https://issues.apache.org/jira/browse/FLINK-34911
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58455&view=logs&j=a657ddbf-d986-5381-9649-342d9c92e7fb&t=dc085d4a-05c8-580e-06ab-21f5624dab16&l=9029]
>  
> {code:java}
> Mar 21 01:50:42 01:50:42.553 [ERROR] Command was /bin/sh -c cd 
> '/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-21.0.1+12/bin/java' 
> '-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
> '--add-opens=java.base/java.util=ALL-UNNAMED' 
> '--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
> '/__w/1/s/flink-tests/target/surefire/surefirebooter-20240321010847189_810.jar'
>  '/__w/1/s/flink-tests/target/surefire' '2024-03-21T01-08-44_720-jvmRun3' 
> 'surefire-20240321010847189_808tmp' 'surefire_207-20240321010847189_809tmp'
> Mar 21 01:50:42 01:50:42.553 [ERROR] Error occurred in starting fork, check 
> output in log
> Mar 21 01:50:42 01:50:42.553 [ERROR] Process Exit Code: 127
> Mar 21 01:50:42 01:50:42.553 [ERROR] Crashed tests:
> Mar 21 01:50:42 01:50:42.553 [ERROR] 
> org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:418)
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
> Mar 21 01:50:42 01:50:42.554 [ERROR]  at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1240)
> {code}
> From the watchdog, only {{ChangelogRecoveryRescaleITCase}} didn't complete, 
> specifically parameterized with an {{EmbeddedRocksDBStateBackend}} with 
> incremental checkpointing enabled.
> The base class ({{{}ChangelogRecoveryITCaseBase{}}}) starts a 
> {{MiniClusterWithClientResource}}
> {code:java}
> ~/Downloads/CI/logs-cron_jdk21-test_cron_jdk21_tests-1710982836$ cat 
> watchdog| grep "Tests run\|Running org.apache.flink" | grep -o 
> "org.apache.flink[^ ]*$" | sort | uniq -c | sort -n | head
>       1 org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
>       2 org.apache.flink.api.connector.source.lib.NumberSequenceSourceITCase
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34911) ChangelogRecoveryRescaleITCase failed fatally with 127 exit code

2024-03-21 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34911:
--
Priority: Critical  (was: Major)

> ChangelogRecoveryRescaleITCase failed fatally with 127 exit code
> 
>
> Key: FLINK-34911
> URL: https://issues.apache.org/jira/browse/FLINK-34911
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58455&view=logs&j=a657ddbf-d986-5381-9649-342d9c92e7fb&t=dc085d4a-05c8-580e-06ab-21f5624dab16&l=9029]
>  
> {code:java}
> Mar 21 01:50:42 01:50:42.553 [ERROR] Command was /bin/sh -c cd 
> '/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-21.0.1+12/bin/java' 
> '-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
> '--add-opens=java.base/java.util=ALL-UNNAMED' 
> '--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
> '/__w/1/s/flink-tests/target/surefire/surefirebooter-20240321010847189_810.jar'
>  '/__w/1/s/flink-tests/target/surefire' '2024-03-21T01-08-44_720-jvmRun3' 
> 'surefire-20240321010847189_808tmp' 'surefire_207-20240321010847189_809tmp'
> Mar 21 01:50:42 01:50:42.553 [ERROR] Error occurred in starting fork, check 
> output in log
> Mar 21 01:50:42 01:50:42.553 [ERROR] Process Exit Code: 127
> Mar 21 01:50:42 01:50:42.553 [ERROR] Crashed tests:
> Mar 21 01:50:42 01:50:42.553 [ERROR] 
> org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:418)
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
> Mar 21 01:50:42 01:50:42.554 [ERROR]  at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1240)
> {code}
> From the watchdog, only {{ChangelogRecoveryRescaleITCase}} didn't complete, 
> specifically parameterized with an {{EmbeddedRocksDBStateBackend}} with 
> incremental checkpointing enabled.
> The base class ({{{}ChangelogRecoveryITCaseBase{}}}) starts a 
> {{MiniClusterWithClientResource}}
> {code:java}
> ~/Downloads/CI/logs-cron_jdk21-test_cron_jdk21_tests-1710982836$ cat 
> watchdog| grep "Tests run\|Running org.apache.flink" | grep -o 
> "org.apache.flink[^ ]*$" | sort | uniq -c | sort -n | head
>       1 org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
>       2 org.apache.flink.api.connector.source.lib.NumberSequenceSourceITCase
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34911) ChangelogRecoveryRescaleITCase failed fatally with 127 exit code

2024-03-21 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-34911:
---

 Summary: ChangelogRecoveryRescaleITCase failed fatally with 127 
exit code
 Key: FLINK-34911
 URL: https://issues.apache.org/jira/browse/FLINK-34911
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Ryan Skraba


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58455&view=logs&j=a657ddbf-d986-5381-9649-342d9c92e7fb&t=dc085d4a-05c8-580e-06ab-21f5624dab16&l=9029]

 
{code:java}
 Mar 21 01:50:42 01:50:42.553 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.2.2:test (integration-tests) 
on project flink-tests: 
Mar 21 01:50:42 01:50:42.553 [ERROR] 
Mar 21 01:50:42 01:50:42.553 [ERROR] Please refer to 
/__w/1/s/flink-tests/target/surefire-reports for the individual test results.
Mar 21 01:50:42 01:50:42.553 [ERROR] Please refer to dump files (if any exist) 
[date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
Mar 21 01:50:42 01:50:42.553 [ERROR] ExecutionException The forked VM 
terminated without properly saying goodbye. VM crash or System.exit called?
Mar 21 01:50:42 01:50:42.553 [ERROR] Command was /bin/sh -c cd 
'/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-21.0.1+12/bin/java' '-XX:+UseG1GC' 
'-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/1/s/flink-tests/target/surefire/surefirebooter-20240321010847189_810.jar' 
'/__w/1/s/flink-tests/target/surefire' '2024-03-21T01-08-44_720-jvmRun3' 
'surefire-20240321010847189_808tmp' 'surefire_207-20240321010847189_809tmp'
Mar 21 01:50:42 01:50:42.553 [ERROR] Error occurred in starting fork, check 
output in log
Mar 21 01:50:42 01:50:42.553 [ERROR] Process Exit Code: 127
Mar 21 01:50:42 01:50:42.553 [ERROR] Crashed tests:
Mar 21 01:50:42 01:50:42.553 [ERROR] 
org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
Mar 21 01:50:42 01:50:42.553 [ERROR] 
org.apache.maven.surefire.booter.SurefireBooterForkException: 
ExecutionException The forked VM terminated without properly saying goodbye. VM 
crash or System.exit called?
Mar 21 01:50:42 01:50:42.553 [ERROR] Command was /bin/sh -c cd 
'/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-21.0.1+12/bin/java' '-XX:+UseG1GC' 
'-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
'--add-opens=java.base/java.util=ALL-UNNAMED' 
'--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
'/__w/1/s/flink-tests/target/surefire/surefirebooter-20240321010847189_810.jar' 
'/__w/1/s/flink-tests/target/surefire' '2024-03-21T01-08-44_720-jvmRun3' 
'surefire-20240321010847189_808tmp' 'surefire_207-20240321010847189_809tmp'
Mar 21 01:50:42 01:50:42.553 [ERROR] Error occurred in starting fork, check 
output in log
Mar 21 01:50:42 01:50:42.553 [ERROR] Process Exit Code: 127
Mar 21 01:50:42 01:50:42.553 [ERROR] Crashed tests:
Mar 21 01:50:42 01:50:42.553 [ERROR] 
org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:418)
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
Mar 21 01:50:42 01:50:42.553 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
Mar 21 01:50:42 01:50:42.554 [ERROR]at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1240)
{code}
From the watchdog, only {{ChangelogRecoveryRescaleITCase}} didn't complete, 
specifically parameterized with an {{EmbeddedRocksDBStateBackend}} with 
incremental checkpointing enabled.

The base class ({{{}ChangelogRecoveryITCaseBase{}}}) starts a 
{{MiniClusterWithClientResource}}
{code:java}
~/Downloads/CI/logs-cron_jdk21-test_cron_jdk21_tests-1710982836$ cat watchdog| 
grep "Tests run\|Running org.apache.flink" | grep -o "org.apache.flink[^ ]*$" | 
sort | uniq -c | sort -n | head
      1 org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
      2 org.apache.flink.api.connector.source.lib.NumberSequenceSourceITCase
      2 org.apache.flink.api.connector.source.lib.util.GatedRateLimiterTest
      2 
org.apache.flink.api.connector.source.lib.util.RateLimitedSourceReaderITCase
      2 org.apache.flink.api.datastream.DataStreamBatchExecutionITCase
      2 org.apache.flink.api.datastream.DataStreamCollectTestITCase{code}
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34713) Updates the docs stable version

2024-03-21 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829596#comment-17829596
 ] 

lincoln lee edited comment on FLINK-34713 at 3/21/24 2:59 PM:
--

master also needed: 
[8ee552a326e9fbcad1df5cfc1abb23ac2cdd56af|https://github.com/apache/flink/commit/8ee552a326e9fbcad1df5cfc1abb23ac2cdd56af]


was (Author: lincoln.86xy):
master also needed: https://github.com/apache/flink/pull/24518/files

> Updates the docs stable version
> ---
>
> Key: FLINK-34713
> URL: https://issues.apache.org/jira/browse/FLINK-34713
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Lincoln Lee
>Assignee: lincoln lee
>Priority: Major
>
> Update docs to "stable" in {{docs/config.toml}} in the branch of the 
> _just-released_ version:
>  * Change {{Version}} from {{x.y-SNAPSHOT}} to {{x.y.z}}, i.e. 
> {{1.6-SNAPSHOT}} to {{1.6.0}}
>  * Change {{VersionTitle}} from {{x.y-SNAPSHOT}} to {{x.y}}, i.e. 
> {{1.6-SNAPSHOT}} to {{1.6}}
>  * Change {{Branch}} from {{master}} to {{release-x.y}}, i.e. {{master}} to 
> {{release-1.6}}
>  * Change {{baseURL}} from {{//ci.apache.org/projects/flink/flink-docs-master}} 
> to {{//ci.apache.org/projects/flink/flink-docs-release-x.y}}
>  * Change {{javadocs_baseurl}} from {{//ci.apache.org/projects/flink/flink-docs-master}} 
> to {{//ci.apache.org/projects/flink/flink-docs-release-x.y}}
>  * Change {{IsStable}} to {{true}}
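
Put together, the affected keys in {{docs/config.toml}} for a 1.19 release branch would look roughly like this — a sketch only, following the steps above; the real file contains more settings and key placement may differ:
{code}
Version = "1.19.0"
VersionTitle = "1.19"
Branch = "release-1.19"
baseURL = "//ci.apache.org/projects/flink/flink-docs-release-1.19"
javadocs_baseurl = "//ci.apache.org/projects/flink/flink-docs-release-1.19"
IsStable = true
{code}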



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34706) Promote release 1.19

2024-03-21 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829244#comment-17829244
 ] 

lincoln lee edited comment on FLINK-34706 at 3/21/24 2:59 PM:
--

# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [[announcement 
link|https://lists.apache.org/thread/sofmxytbh6y20nwot1gywqqc2lqxn4hm]]
 # (/) Blog post published, if applicable:[ blog 
post|https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/]
 # (/) Release recorded in [reporter.apache.org: 
https://reporter.apache.org/addrelease.html?flink|https://reporter.apache.org/addrelease.html?flink]
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (/) Completion declared on the dev@ [mailing list 
|https://lists.apache.org/thread/z8sfwlppsodcyng62c584n76b69b16fc]
 # (/) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically for both minor and major releases): 
[https://formulae.brew.sh/formula/apache-flink#default]
 # (/) No need to update quickstart scripts in {{flink-web}}, under the 
{{q/}} directory (already use global version variables) 
 #  Updated the japicmp configuration: Done in 
https://issues.apache.org/jira/browse/FLINK-34707
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in [https://github.com/apache/flink/pull/24548]
 # (/) Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the 
branch of the _previous_ Flink version:  (for 1.17) 
[https://github.com/apache/flink/pull/24547]
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in 
[a6a4667|https://github.com/apache/flink/commit/a6a4667202a0f89fe63ff4f2e476c0200ec66e63]
   
[8ee552a|https://github.com/apache/flink/commit/8ee552a326e9fbcad1df5cfc1abb23ac2cdd56af]


was (Author: lincoln.86xy):
# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [[announcement 
link|https://lists.apache.org/thread/sofmxytbh6y20nwot1gywqqc2lqxn4hm]]
 # (/) Blog post published, if applicable:[ blog 
post|https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/]
 # (/) Release recorded in [reporter.apache.org: 
https://reporter.apache.org/addrelease.html?flink|https://reporter.apache.org/addrelease.html?flink]
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (/) Completion declared on the dev@ [mailing list 
|https://lists.apache.org/thread/z8sfwlppsodcyng62c584n76b69b16fc]
 # (/) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically for both minor and major releases): 
[https://formulae.brew.sh/formula/apache-flink#default]
 # (/) No need to update quickstart scripts in {{flink-web}}, under the 
{{q/}} directory (already use global version variables) 
 #  Updated the japicmp configuration: Done in 
https://issues.apache.org/jira/browse/FLINK-34707
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in [https://github.com/apache/flink/pull/24548]
 # (/) Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the 
branch of the _previous_ Flink version:  (for 1.17) 
[https://github.com/apache/flink/pull/24547]
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in 
[a6a4667|https://github.com/apache/flink/commit/a6a4667202a0f89fe63ff4f2e476c0200ec66e63]
   [8ee552a|https://github.com/apache/flink/pull/24518/files]

> Promote release 1.19
> 
>
> Key: FLINK-34706
> URL: https://issues.apache.org/jira/browse/FLINK-34706
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.18.0
>Reporter: Lincoln Lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK

[jira] [Commented] (FLINK-34713) Updates the docs stable version

2024-03-21 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829596#comment-17829596
 ] 

lincoln lee commented on FLINK-34713:
-

master also needed: https://github.com/apache/flink/pull/24518/files

> Updates the docs stable version
> ---
>
> Key: FLINK-34713
> URL: https://issues.apache.org/jira/browse/FLINK-34713
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Lincoln Lee
>Assignee: lincoln lee
>Priority: Major
>
> Update docs to "stable" in {{docs/config.toml}} in the branch of the 
> _just-released_ version:
>  * Change {{Version}} from {{x.y-SNAPSHOT}} to {{x.y.z}}, i.e. 
> {{1.6-SNAPSHOT}} to {{1.6.0}}
>  * Change {{VersionTitle}} from {{x.y-SNAPSHOT}} to {{x.y}}, i.e. 
> {{1.6-SNAPSHOT}} to {{1.6}}
>  * Change {{Branch}} from {{master}} to {{release-x.y}}, i.e. {{master}} to 
> {{release-1.6}}
>  * Change {{baseURL}} from {{//ci.apache.org/projects/flink/flink-docs-master}} 
> to {{//ci.apache.org/projects/flink/flink-docs-release-x.y}}
>  * Change {{javadocs_baseurl}} from {{//ci.apache.org/projects/flink/flink-docs-master}} 
> to {{//ci.apache.org/projects/flink/flink-docs-release-x.y}}
>  * Change {{IsStable}} to {{true}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34706) Promote release 1.19

2024-03-21 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829244#comment-17829244
 ] 

lincoln lee edited comment on FLINK-34706 at 3/21/24 2:57 PM:
--

# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [[announcement 
link|https://lists.apache.org/thread/sofmxytbh6y20nwot1gywqqc2lqxn4hm]]
 # (/) Blog post published, if applicable:[ blog 
post|https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/]
 # (/) Release recorded in [reporter.apache.org: 
https://reporter.apache.org/addrelease.html?flink|https://reporter.apache.org/addrelease.html?flink]
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (/) Completion declared on the dev@ [mailing list 
|https://lists.apache.org/thread/z8sfwlppsodcyng62c584n76b69b16fc]
 # (/) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically for both minor and major releases): 
[https://formulae.brew.sh/formula/apache-flink#default]
 # (/) No need to update quickstart scripts in {{flink-web}}, under the 
{{q/}} directory (already use global version variables) 
 #  Updated the japicmp configuration: Done in 
https://issues.apache.org/jira/browse/FLINK-34707
 # (/) Update the list of previous version in {{docs/config.toml}} on the 
master branch: Done in [https://github.com/apache/flink/pull/24548]
 # (/) Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the 
branch of the _previous_ Flink version:  (for 1.17) 
[https://github.com/apache/flink/pull/24547]
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in 
[a6a4667|https://github.com/apache/flink/commit/a6a4667202a0f89fe63ff4f2e476c0200ec66e63]
   [8ee552a|https://github.com/apache/flink/pull/24518/files]


was (Author: lincoln.86xy):
# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [[announcement 
link|https://lists.apache.org/thread/sofmxytbh6y20nwot1gywqqc2lqxn4hm]]
 # (/) Blog post published, if applicable:[ blog 
post|https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/]
 # (/) Release recorded in [reporter.apache.org: 
https://reporter.apache.org/addrelease.html?flink|https://reporter.apache.org/addrelease.html?flink]
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (/) Completion declared on the dev@ [mailing list 
|https://lists.apache.org/thread/z8sfwlppsodcyng62c584n76b69b16fc]
 # (/) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically for both minor and major releases): 
[https://formulae.brew.sh/formula/apache-flink#default]
 # (/) No need to update quickstart scripts in {{flink-web}}, under the 
{{q/}} directory (already use global version variables) 
 #  Updated the japicmp configuration: Done in 
https://issues.apache.org/jira/browse/FLINK-34707
 #  Update the list of previous version in {{docs/config.toml}} on the master 
branch: Done in https://github.com/apache/flink/pull/24548
 #  Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch 
of the _previous_ Flink version:  (for 1.17) 
https://github.com/apache/flink/pull/24547
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in 
[a6a4667|https://github.com/apache/flink/commit/a6a4667202a0f89fe63ff4f2e476c0200ec66e63]

> Promote release 1.19
> 
>
> Key: FLINK-34706
> URL: https://issues.apache.org/jira/browse/FLINK-34706
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.18.0
>Reporter: Lincoln Lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-32920), the last step of the 
> process is to promote the release within the project and beyond. Please

Re: [PR] [FLINK-34716][release] Build 1.19 docs in GitHub Action and mark 1.19 as stable in docs [flink]

2024-03-21 Thread via GitHub


lincoln-lil merged PR #24551:
URL: https://github.com/apache/flink/pull/24551


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34716][release] Build 1.19 docs in GitHub Action and mark 1.19 as stable in docs [flink]

2024-03-21 Thread via GitHub


lincoln-lil commented on PR #24551:
URL: https://github.com/apache/flink/pull/24551#issuecomment-2012514631

   Skipping CI, since this only fixes the master doc build so that the web link 
`https://nightlies.apache.org/flink/flink-docs-stable/` points to 1.19.0 
instead of 1.18.1.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-34716][release] Build 1.19 docs in GitHub Action and mark 1.19 as stable in docs [flink]

2024-03-21 Thread via GitHub


lincoln-lil opened a new pull request, #24551:
URL: https://github.com/apache/flink/pull/24551

   Sync the docs build changes with 1.19  
https://github.com/apache/flink/pull/24518


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [hotfix] Add support for Flink 1.20 and drop 1.16 [flink-connector-elasticsearch]

2024-03-21 Thread via GitHub


snuyanzin opened a new pull request, #94:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/94

   The PR drops support for Flink 1.16 and adds the 1.20 snapshot.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [hotfix] Update dependencies [flink-connector-elasticsearch]

2024-03-21 Thread via GitHub


snuyanzin opened a new pull request, #93:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/93

   jackson from 2.13.4.20221013 to 2.15.3
   junit5 from 5.9.1 to 5.10.2
   assertj from 3.23.1 to 3.25.3
   testcontainers from 1.17.2 to 1.19.7
   mockito from 3.4.6 to 3.12.4


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-34435) Bump org.yaml:snakeyaml from 1.31 to 2.2 for flink-connector-elasticsearch

2024-03-21 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34435.
-
Resolution: Fixed

> Bump org.yaml:snakeyaml from 1.31 to 2.2 for flink-connector-elasticsearch
> --
>
> Key: FLINK-34435
> URL: https://issues.apache.org/jira/browse/FLINK-34435
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / ElasticSearch
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> https://github.com/apache/flink-connector-elasticsearch/pull/90



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34435) Bump org.yaml:snakeyaml from 1.31 to 2.2 for flink-connector-elasticsearch

2024-03-21 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34435.
---

> Bump org.yaml:snakeyaml from 1.31 to 2.2 for flink-connector-elasticsearch
> --
>
> Key: FLINK-34435
> URL: https://issues.apache.org/jira/browse/FLINK-34435
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / ElasticSearch
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> https://github.com/apache/flink-connector-elasticsearch/pull/90



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34435) Bump org.yaml:snakeyaml from 1.31 to 2.2 for flink-connector-elasticsearch

2024-03-21 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829575#comment-17829575
 ] 

Sergey Nuyanzin commented on FLINK-34435:
-

Merged as 
[55c5982bba88a3f1806b4939be788d8f77bead72|https://github.com/apache/flink-connector-elasticsearch/commit/55c5982bba88a3f1806b4939be788d8f77bead72]

> Bump org.yaml:snakeyaml from 1.31 to 2.2 for flink-connector-elasticsearch
> --
>
> Key: FLINK-34435
> URL: https://issues.apache.org/jira/browse/FLINK-34435
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / ElasticSearch
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> https://github.com/apache/flink-connector-elasticsearch/pull/90



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34435) Bump org.yaml:snakeyaml from 1.31 to 2.2 for flink-connector-elasticsearch

2024-03-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34435:
---
Labels: pull-request-available  (was: )

> Bump org.yaml:snakeyaml from 1.31 to 2.2 for flink-connector-elasticsearch
> --
>
> Key: FLINK-34435
> URL: https://issues.apache.org/jira/browse/FLINK-34435
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / ElasticSearch
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> https://github.com/apache/flink-connector-elasticsearch/pull/90



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34435] Bump org.yaml:snakeyaml from 1.31 to 2.2 [flink-connector-elasticsearch]

2024-03-21 Thread via GitHub


boring-cyborg[bot] commented on PR #90:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/90#issuecomment-2012397218

   Awesome work, congrats on your first merged pull request!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34435] Bump org.yaml:snakeyaml from 1.31 to 2.2 [flink-connector-elasticsearch]

2024-03-21 Thread via GitHub


snuyanzin merged PR #90:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/90


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Update copyright year to 2024 [flink-connector-elasticsearch]

2024-03-21 Thread via GitHub


snuyanzin merged PR #89:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/89


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34893) Bump Checkstyle to 9+

2024-03-21 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34893.
---

> Bump Checkstyle to 9+
> -
>
> Key: FLINK-34893
> URL: https://issues.apache.org/jira/browse/FLINK-34893
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The issue with the current Checkstyle version is that the Checkstyle 
> IntelliJ IDEA plugin recently dropped support for Checkstyle 8 [1].
> At the same time, we cannot move to Checkstyle 10, since 10.x requires Java 11+.
> [1] https://github.com/jshiell/checkstyle-idea/blob/main/CHANGELOG.md



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34893) Bump Checkstyle to 9+

2024-03-21 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34893.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Bump Checkstyle to 9+
> -
>
> Key: FLINK-34893
> URL: https://issues.apache.org/jira/browse/FLINK-34893
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The issue with the current Checkstyle version is that the Checkstyle 
> IntelliJ IDEA plugin recently dropped support for Checkstyle 8 [1].
> At the same time, we cannot move to Checkstyle 10, since 10.x requires Java 11+.
> [1] https://github.com/jshiell/checkstyle-idea/blob/main/CHANGELOG.md



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34893) Bump Checkstyle to 9+

2024-03-21 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829567#comment-17829567
 ] 

Sergey Nuyanzin commented on FLINK-34893:
-

Merged as 
[23c2fd0a32de93c31f3afd1422575d1d459eb90d|https://github.com/apache/flink/commit/23c2fd0a32de93c31f3afd1422575d1d459eb90d]

> Bump Checkstyle to 9+
> -
>
> Key: FLINK-34893
> URL: https://issues.apache.org/jira/browse/FLINK-34893
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> The issue with the current Checkstyle version is that the Checkstyle 
> IntelliJ IDEA plugin recently dropped support for Checkstyle 8 [1].
> At the same time, we cannot move to Checkstyle 10, since 10.x requires Java 11+.
> [1] https://github.com/jshiell/checkstyle-idea/blob/main/CHANGELOG.md



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34893] Bump checkstyle to 9.3 [flink]

2024-03-21 Thread via GitHub


snuyanzin merged PR #24540:
URL: https://github.com/apache/flink/pull/24540


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34893] Bump checkstyle to 9.3 [flink]

2024-03-21 Thread via GitHub


snuyanzin commented on PR #24540:
URL: https://github.com/apache/flink/pull/24540#issuecomment-2012349819

   Thanks for taking a look @RyanSkraba 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34643] Fix concurrency issue in LoggerAuditingExtension [flink]

2024-03-21 Thread via GitHub


flinkbot commented on PR #24550:
URL: https://github.com/apache/flink/pull/24550#issuecomment-2012310006

   
   ## CI report:
   
   * 040e396ae5a8c84a62bc1b305dc222bb313a2940 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34643) JobIDLoggingITCase failed

2024-03-21 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829560#comment-17829560
 ] 

Roman Khachatryan commented on FLINK-34643:
---

Or maybe it's actually simple: [https://github.com/apache/flink/pull/24550] :)

> JobIDLoggingITCase failed
> -
>
> Key: FLINK-34643
> URL: https://issues.apache.org/jira/browse/FLINK-34643
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.20.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=ea7cf968-e585-52cb-e0fc-f48de023a7ca&l=7897
> {code}
> Mar 09 01:24:23 01:24:23.498 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 4.209 s <<< FAILURE! -- in 
> org.apache.flink.test.misc.JobIDLoggingITCase
> Mar 09 01:24:23 01:24:23.498 [ERROR] 
> org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(ClusterClient) 
> -- Time elapsed: 1.459 s <<< ERROR!
> Mar 09 01:24:23 java.lang.IllegalStateException: Too few log events recorded 
> for org.apache.flink.runtime.jobmaster.JobMaster (12) - this must be a bug in 
> the test code
> Mar 09 01:24:23   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
> Mar 09 01:24:23   at 
> org.apache.flink.test.misc.JobIDLoggingITCase.assertJobIDPresent(JobIDLoggingITCase.java:148)
> Mar 09 01:24:23   at 
> org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(JobIDLoggingITCase.java:132)
> Mar 09 01:24:23   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 09 01:24:23   at 
> java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> Mar 09 01:24:23 
> {code}
> The other test failures of this build were also caused by the same test:
> * 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187&view=logs&j=2c3cbe13-dee0-5837-cf47-3053da9a8a78&t=b78d9d30-509a-5cea-1fef-db7abaa325ae&l=8349
> * 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187&view=logs&j=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3&t=712ade8c-ca16-5b76-3acd-14df33bc1cb1&l=8209



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34643] Fix concurrency issue in LoggerAuditingExtension [flink]

2024-03-21 Thread via GitHub


rkhachatryan opened a new pull request, #24550:
URL: https://github.com/apache/flink/pull/24550

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34369) Elasticsearch connector supports SSL context

2024-03-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34369:
---
Labels: pull-request-available  (was: )

> Elasticsearch connector supports SSL context
> 
>
> Key: FLINK-34369
> URL: https://issues.apache.org/jira/browse/FLINK-34369
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.17.1
>Reporter: Mingliang Liu
>Priority: Major
>  Labels: pull-request-available
>
> The current Flink ElasticSearch connector does not support SSL option, 
> causing issues connecting to secure ES clusters.
> As SSLContext is not serializable and possibly environment aware, we can add 
> a (serializable) provider of SSL context to the {{NetworkClientConfig}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34369][connectors/elasticsearch] Elasticsearch connector supports SSL context [flink-connector-elasticsearch]

2024-03-21 Thread via GitHub


reta commented on PR #91:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/91#issuecomment-2012289683

   @liuml07 could you please use the same configuration/API model as [1] does 
for SSL support in OpenSearch? Besides providing familiar configuration, the 
API is friendly to the SQL connector (where supplying a hostname verifier could 
be challenging), thank you.
   
   The basic idea is that `NetworkClientConfig` gets a setting:
- allowInsecure: boolean (uses a trust-all model in the case of self-signed certs)
- you could certainly also keep a more elaborate configuration with 
SSLContext / SSLEngine / ... in case it is needed
   
   Thank you.
   
   [1] 
https://github.com/apache/flink-connector-opensearch/tree/main/flink-connector-opensearch
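   
   A minimal sketch of what such a serializable provider could look like (the 
`SslContextProvider` name and the way it hangs off `NetworkClientConfig` are 
illustrative assumptions, not the connector's actual API):
   
   ```java
   import java.io.Serializable;
   
   import javax.net.ssl.SSLContext;
   
   // SSLContext itself is not serializable, so the config would carry a
   // serializable factory and build the context on the task manager side.
   @FunctionalInterface
   interface SslContextProvider extends Serializable {
       SSLContext createSslContext() throws Exception;
   }
   
   class NetworkClientConfigSketch implements Serializable {
       // Trust-all mode for self-signed certs, as suggested above.
       private boolean allowInsecure;
       // Optional, for more elaborate setups (custom trust store, SSLEngine, ...).
       private SslContextProvider sslContextProvider;
   }
   ```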


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34643) JobIDLoggingITCase failed

2024-03-21 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829557#comment-17829557
 ] 

Roman Khachatryan commented on FLINK-34643:
---

Thanks for reporting.

My first suspicion was that the assertion happens too early for some reason, 
but that's not the case, because later log messages are present.

The only reason I can think of is async logging or buffering in log4j - will 
try to verify that.

Btw, some runs are from an old version ("Too few log events recorded" was 
removed in master), but others are valid.
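
For reference, a minimal sketch of a race-free capturing appender (assuming 
log4j2; the names are illustrative, not the actual LoggerAuditingExtension 
code):

{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;

class CapturingAppender extends AbstractAppender {
    // Events may arrive from several threads, so use a concurrent collection.
    private final Queue<String> messages = new ConcurrentLinkedQueue<>();

    CapturingAppender(String name) {
        super(name, null, null, true, null);
    }

    @Override
    public void append(LogEvent event) {
        // toImmutable() detaches the event from log4j's reused mutable
        // instance, another classic source of flaky assertions.
        messages.add(event.toImmutable().getMessage().getFormattedMessage());
    }
}
{code}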

 

> JobIDLoggingITCase failed
> -
>
> Key: FLINK-34643
> URL: https://issues.apache.org/jira/browse/FLINK-34643
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.20.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=ea7cf968-e585-52cb-e0fc-f48de023a7ca&l=7897
> {code}
> Mar 09 01:24:23 01:24:23.498 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 4.209 s <<< FAILURE! -- in 
> org.apache.flink.test.misc.JobIDLoggingITCase
> Mar 09 01:24:23 01:24:23.498 [ERROR] 
> org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(ClusterClient) 
> -- Time elapsed: 1.459 s <<< ERROR!
> Mar 09 01:24:23 java.lang.IllegalStateException: Too few log events recorded 
> for org.apache.flink.runtime.jobmaster.JobMaster (12) - this must be a bug in 
> the test code
> Mar 09 01:24:23   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
> Mar 09 01:24:23   at 
> org.apache.flink.test.misc.JobIDLoggingITCase.assertJobIDPresent(JobIDLoggingITCase.java:148)
> Mar 09 01:24:23   at 
> org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(JobIDLoggingITCase.java:132)
> Mar 09 01:24:23   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 09 01:24:23   at 
> java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> Mar 09 01:24:23 
> {code}
> The other test failures of this build were also caused by the same test:
> * 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187&view=logs&j=2c3cbe13-dee0-5837-cf47-3053da9a8a78&t=b78d9d30-509a-5cea-1fef-db7abaa325ae&l=8349
> * 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187&view=logs&j=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3&t=712ade8c-ca16-5b76-3acd-14df33bc1cb1&l=8209



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34663) flink-opensearch connector Unable to parse response body for Response

2024-03-21 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829555#comment-17829555
 ] 

Andriy Redko commented on FLINK-34663:
--

[~wgendy] Apologies for the delay. It seems the only path forward is to have 
dedicated support for OSv1 and OSv2 (as for Elasticsearch); that should be 
fixed by FLINK-33859 (expected to be merged soon). Thank you.

> flink-opensearch connector Unable to parse response body for Response
> -
>
> Key: FLINK-34663
> URL: https://issues.apache.org/jira/browse/FLINK-34663
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 1.18.1
> Environment: Docker-Compose:
> Flink 1.18.1 - Java11
> OpenSearch 2.12.0
> Flink-Sql-Opensearch-connector (flink 1.18.1 → Os 1.3)
>Reporter: wael shehata
>Priority: Major
> Attachments: image-2024-03-14-00-10-40-982.png
>
>
> I'm trying to use the flink-sql-opensearch connector to sink stream data to 
> OpenSearch via Flink …
> After submitting the job to the Flink cluster successfully, the job runs 
> normally for 30 seconds and creates the index with data … then it fails with 
> the following message:
> _*org.apache.flink.util.FlinkRuntimeException: Complete bulk has failed… 
> Caused by: java.io.IOException: Unable to parse response body for Response*_
> _*{requestLine=POST /_bulk?timeout=1m HTTP/1.1, 
> host=[http://172.20.0.6:9200|http://172.20.0.6:9200/], response=HTTP/1.1 200 
> OK}*_
> at 
> org.opensearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1942)
> at 
> org.opensearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:662)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:396)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:390)
> at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
> at 
> org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:182)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
> at 
> org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:87)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:40)
> at 
> org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
> at 
> org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
> … 1 more
> *Caused by: java.lang.NullPointerException*
> *at java.base/java.util.Objects.requireNonNull(Unknown Source)*
> *at org.opensearch.action.DocWriteResponse.<init>(DocWriteResponse.java:140)*
> *at org.opensearch.action.index.IndexResponse.<init>(IndexResponse.java:67) …*
> It seems that this error is common but without any solution …
> Although the Flink connector was built for OpenSearch 1.3, it still works 
> for sending data and creating the index on OpenSearch 2.12.0 … but this error 
> persists with all OpenSearch versions greater than 1.3 …
> *OpenSearch support's reply was:*
> *"this is unexpected, could you please create an issue here [1], the issue is 
> caused by _type property that has been removed in 2.x"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34910] Fix optimizing window join [flink]

2024-03-21 Thread via GitHub


flinkbot commented on PR #24549:
URL: https://github.com/apache/flink/pull/24549#issuecomment-2012256673

   
   ## CI report:
   
   * d16fe63ecacbe579ca6c7eda4135b597c4eb6832 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34706][docs] Deprecates 1.17 docs. [flink]

2024-03-21 Thread via GitHub


lincoln-lil merged PR #24547:
URL: https://github.com/apache/flink/pull/24547


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34706][docs] Add 1.19 to PreviousDocs list. [flink]

2024-03-21 Thread via GitHub


lincoln-lil merged PR #24548:
URL: https://github.com/apache/flink/pull/24548


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34910) Can not plan window join without projections

2024-03-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34910:
---
Labels: pull-request-available  (was: )

> Can not plan window join without projections
> 
>
> Key: FLINK-34910
> URL: https://issues.apache.org/jira/browse/FLINK-34910
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> When running:
> {code}
>   @Test
>   def testWindowJoinWithoutProjections(): Unit = {
> val sql =
>   """
> |SELECT *
> |FROM
> |  TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
> MINUTE)) AS L
> |JOIN
> |  TABLE(TUMBLE(TABLE MyTable2, DESCRIPTOR(rowtime), INTERVAL '15' 
> MINUTE)) AS R
> |ON L.window_start = R.window_start AND L.window_end = R.window_end 
> AND L.a = R.a
>   """.stripMargin
> util.verifyRelPlan(sql)
>   }
> {code}
> It fails with:
> {code}
> FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME_MATERIALIZE(proctime) AS 
> proctime, window_start, window_end, window_time, a0, b0, c0, rowtime0, 
> PROCTIME_MATERIALIZE(proctime0) AS proctime0, window_start0, window_end0, 
> window_time0])
> +- FlinkLogicalCorrelate(correlation=[$cor0], joinType=[inner], 
> requiredColumns=[{}])
>:- FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR($3), 
> 90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) 
> b, BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* 
> proctime, TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) 
> *ROWTIME* window_time)])
>:  +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
> 1000:INTERVAL SECOND)])
>: +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS 
> proctime])
>:+- FlinkLogicalTableSourceScan(table=[[default_catalog, 
> default_database, MyTable]], fields=[a, b, c, rowtime])
>+- 
> FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR(CAST($3):TIMESTAMP(3)),
>  90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) 
> b, BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* 
> proctime, TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) 
> *ROWTIME* window_time)])
>   +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
> 1000:INTERVAL SECOND)])
>  +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS 
> proctime])
> +- FlinkLogicalTableSourceScan(table=[[default_catalog, 
> default_database, MyTable2]], fields=[a, b, c, rowtime])
> Failed to get time attribute index from DESCRIPTOR(CAST($3):TIMESTAMP(3)). 
> This is a bug, please file a JIRA issue.
> Please check the documentation for the set of currently supported SQL 
> features.
> {code}
> In prior versions this had another problem of ambiguous {{rowtime}} column, 
> but this has been fixed by [FLINK-32648]. In versions < 1.19 
> WindowTableFunctions were incorrectly scoped, because they were not extending 
> from Calcite's SqlWindowTableFunction and the scoping implemented in 
> SqlValidatorImpl#convertFrom was incorrect. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34707][tests] Update base version for japicmp check [flink]

2024-03-21 Thread via GitHub


lincoln-lil commented on PR #24515:
URL: https://github.com/apache/flink/pull/24515#issuecomment-2012246995

   @Myasuka Thanks for reviewing! Please help review the cherry-pick one: 
https://github.com/apache/flink/pull/24514


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-34910] Fix optimizing window join [flink]

2024-03-21 Thread via GitHub


dawidwys opened a new pull request, #24549:
URL: https://github.com/apache/flink/pull/24549

   ## What is the purpose of the change
   
   Fix support for 
   
   ```
   SELECT * FROM
  TABLE(TUMBLE)
   JOIN
  TABLE(TUMBLE...) 
   ```
   
   without enclosing the tables with a `SELECT * FROM`
   
   ## Brief change log
   
   Make the `JoinTableFunctionScanToCorrelateRule` stricter so that it converts 
Joins that have a chance to pass the corresponding 
`StreamPhysicalConstantTableFunctionScanRule` rule.
   
   ## Verifying this change
   
   Added a test
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34706) Promote release 1.19

2024-03-21 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829244#comment-17829244
 ] 

lincoln lee edited comment on FLINK-34706 at 3/21/24 1:00 PM:
--

# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [[announcement 
link|https://lists.apache.org/thread/sofmxytbh6y20nwot1gywqqc2lqxn4hm]]
 # (/) Blog post published, if applicable:[ blog 
post|https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/]
 # (/) Release recorded in [reporter.apache.org: 
https://reporter.apache.org/addrelease.html?flink|https://reporter.apache.org/addrelease.html?flink]
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (/) Completion declared on the dev@ [mailing list 
|https://lists.apache.org/thread/z8sfwlppsodcyng62c584n76b69b16fc]
 # (/) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically, for both minor and major 
releases): [https://formulae.brew.sh/formula/apache-flink#default]
 # (/) No need to update quickstart scripts in {{flink-web}}, under the 
{{q/}} directory (already use global version variables) 
 #  Updated the japicmp configuration: Done in 
https://issues.apache.org/jira/browse/FLINK-34707
 #  Update the list of previous version in {{docs/config.toml}} on the master 
branch: Done in https://github.com/apache/flink/pull/24548
 #  Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch 
of the _previous_ Flink version:  (for 1.17) 
https://github.com/apache/flink/pull/24547
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in 
[a6a4667|https://github.com/apache/flink/commit/a6a4667202a0f89fe63ff4f2e476c0200ec66e63]


was (Author: lincoln.86xy):
# (/) Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # (/) Release announced on the user@ mailing list: [[announcement 
link|https://lists.apache.org/thread/sofmxytbh6y20nwot1gywqqc2lqxn4hm]|https://lists.apache.org/thread/72nmfwsgs7sqkw7mykz4h36hgb7wo04d]
 # (/) Blog post published, if applicable:[ blog 
post|https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/]
 # (/) Release recorded in [reporter.apache.org: 
https://reporter.apache.org/addrelease.html?flink|https://reporter.apache.org/addrelease.html?flink]
 # (/) Release announced on social media: 
[Twitter|https://twitter.com/ApacheFlink/status/1638839542403981312?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1638839542403981312%7Ctwgr%5E7f3046f67668cf3ebbd929ef126a32473db2a1b5%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FApacheFlink2Fstatus2F1638839542403981312widget%3DTweet]
 # (/) Completion declared on the dev@ [mailing list 
|https://lists.apache.org/thread/z8sfwlppsodcyng62c584n76b69b16fc]
 # (/) Update Homebrew: 
[https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
automatically, for both minor and major 
releases): [https://formulae.brew.sh/formula/apache-flink#default]
 # (/) No need to update quickstart scripts in {{flink-web}}, under the 
{{q/}} directory (already use global version variables) 
 #  Updated the japicmp configuration: Done in 
https://issues.apache.org/jira/browse/FLINK-34707
 #  Update the list of previous version in {{docs/config.toml}} on the master 
branch: Done in 
 #  Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch 
of the _previous_ Flink version:  (for 1.17)
 # (/) Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml] Done 
in 
[a6a4667|https://github.com/apache/flink/commit/a6a4667202a0f89fe63ff4f2e476c0200ec66e63]

> Promote release 1.19
> 
>
> Key: FLINK-34706
> URL: https://issues.apache.org/jira/browse/FLINK-34706
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.18.0
>Reporter: Lincoln Lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-32920), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|h

[jira] [Commented] (FLINK-34737) "Deployment - Kubernetes" Page for Flink CDC Chinese Documentation

2024-03-21 Thread ZhengYu Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829544#comment-17829544
 ] 

ZhengYu Chen commented on FLINK-34737:
--

Could you assign https://issues.apache.org/jira/browse/FLINK-34737 to me? cc 
[~kunni] 

> "Deployment - Kubernetes" Page for Flink CDC Chinese Documentation
> --
>
> Key: FLINK-34737
> URL: https://issues.apache.org/jira/browse/FLINK-34737
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Flink CDC
>Affects Versions: 3.1.0
>Reporter: LvYanquan
>Priority: Major
> Fix For: 3.1.0
>
>
> Translate 
> [https://github.com/apache/flink-cdc/blob/master/docs/content/docs/deployment/kubernetes.md]
>  into Chinese.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34910) Can not plan window join without projections

2024-03-21 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34910:


 Summary: Can not plan window join without projections
 Key: FLINK-34910
 URL: https://issues.apache.org/jira/browse/FLINK-34910
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


When running:
{code}
  @Test
  def testWindowJoinWithoutProjections(): Unit = {
val sql =
  """
|SELECT *
|FROM
|  TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
MINUTE)) AS L
|JOIN
|  TABLE(TUMBLE(TABLE MyTable2, DESCRIPTOR(rowtime), INTERVAL '15' 
MINUTE)) AS R
|ON L.window_start = R.window_start AND L.window_end = R.window_end AND 
L.a = R.a
  """.stripMargin
util.verifyRelPlan(sql)
  }
{code}

It fails with:
{code}
FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME_MATERIALIZE(proctime) AS 
proctime, window_start, window_end, window_time, a0, b0, c0, rowtime0, 
PROCTIME_MATERIALIZE(proctime0) AS proctime0, window_start0, window_end0, 
window_time0])
+- FlinkLogicalCorrelate(correlation=[$cor0], joinType=[inner], 
requiredColumns=[{}])
   :- FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR($3), 
90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) b, 
BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* proctime, 
TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) *ROWTIME* 
window_time)])
   :  +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
1000:INTERVAL SECOND)])
   : +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS proctime])
   :+- FlinkLogicalTableSourceScan(table=[[default_catalog, 
default_database, MyTable]], fields=[a, b, c, rowtime])
   +- 
FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR(CAST($3):TIMESTAMP(3)),
 90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) 
b, BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* 
proctime, TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) 
*ROWTIME* window_time)])
  +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
1000:INTERVAL SECOND)])
 +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS proctime])
+- FlinkLogicalTableSourceScan(table=[[default_catalog, 
default_database, MyTable2]], fields=[a, b, c, rowtime])

Failed to get time attribute index from DESCRIPTOR(CAST($3):TIMESTAMP(3)). This 
is a bug, please file a JIRA issue.
Please check the documentation for the set of currently supported SQL features.
{code}

In prior versions this had another problem of ambiguous {{rowtime}} column, but 
this has been fixed by [FLINK-32648]. In versions < 1.19 WindowTableFunctions 
were incorrectly scoped, because they were not extending from Calcite's 
SqlWindowTableFunction and the scoping implemented in 
SqlValidatorImpl#convertFrom was incorrect. 
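
As a side note, enclosing each windowed table in its own subquery (i.e. the 
{{SELECT * FROM}} wrapping mentioned above) should sidestep the failure on 
current versions; a sketch using the same table names as the test:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

class WindowJoinWorkaround {
    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // MyTable and MyTable2 are assumed to be registered with a rowtime
        // attribute, as in the planner test above.
        env.executeSql(
                "SELECT * FROM "
                        + "(SELECT * FROM TABLE(TUMBLE(TABLE MyTable, "
                        + "DESCRIPTOR(rowtime), INTERVAL '15' MINUTE))) L "
                        + "JOIN "
                        + "(SELECT * FROM TABLE(TUMBLE(TABLE MyTable2, "
                        + "DESCRIPTOR(rowtime), INTERVAL '15' MINUTE))) R "
                        + "ON L.window_start = R.window_start "
                        + "AND L.window_end = R.window_end AND L.a = R.a");
    }
}
{code}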



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34707][tests] Update base version for japicmp check [flink]

2024-03-21 Thread via GitHub


lincoln-lil commented on code in PR #24515:
URL: https://github.com/apache/flink/pull/24515#discussion_r1533840206


##
pom.xml:
##
@@ -2358,12 +2358,6 @@ under the License.

@org.apache.flink.annotation.PublicEvolving

@org.apache.flink.annotation.Internal


Review Comment:
   This marker is used by the script to locate the section; see: 
https://github.com/apache/flink/blob/release-1.19/tools/releasing/update_japicmp_configuration.sh#L63



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34909) OceanBase transaction ID requirement

2024-03-21 Thread xiaotouming (Jira)
xiaotouming created FLINK-34909:
---

 Summary: OceanBase transaction ID requirement
 Key: FLINK-34909
 URL: https://issues.apache.org/jira/browse/FLINK-34909
 Project: Flink
  Issue Type: New Feature
  Components: Flink CDC
Affects Versions: cdc-3.1.0
Reporter: xiaotouming
 Fix For: cdc-3.1.0


It should be possible to obtain OceanBase's transaction ID when parsing via the 
Flink DataStream approach.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34044] Copy dynamic table options before mapping deprecated configs [flink-connector-aws]

2024-03-21 Thread via GitHub


vahmed-hamdy commented on code in PR #132:
URL: 
https://github.com/apache/flink-connector-aws/pull/132#discussion_r1533768503


##
flink-connector-aws/flink-connector-aws-kinesis-streams/src/test/java/org/apache/flink/connector/kinesis/table/util/KinesisProducerOptionsMapperTest.java:
##
@@ -76,4 +77,19 @@ void testProducerEndpointAndPortExtraction() {
 
 
Assertions.assertThat(actualMappedProperties).isEqualTo(expectedOptions);
 }
+
+@Test
+void testProducerOptionsMapperDoesNotModifyOptionsInstance() {
+Map deprecatedOptions = new HashMap<>();
+deprecatedOptions.put("sink.producer.kinesis-endpoint", 
"some-end-point.kinesis");
+deprecatedOptions.put("sink.producer.kinesis-port", "1234");
+
+Map deprecatedOptionsImmutable =
+Collections.unmodifiableMap(deprecatedOptions);
+Assertions.assertThatNoException()
+.isThrownBy(
+() ->
+new 
KinesisProducerOptionsMapper(deprecatedOptionsImmutable)
+.mapDeprecatedClientOptions());
+}

Review Comment:
   Good point, Thanks



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34743][autoscaler] Memory tuning takes effect even if the parallelism isn't changed [flink-kubernetes-operator]

2024-03-21 Thread via GitHub


mxm commented on PR #799:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/799#issuecomment-2012067487

   > > Autoscaling wouldn't have a chance to realize its SLOs.
   > 
   > You are right. The autoscaler supports scaling parallelism and memory for now. 
As I understand it, the downtime cannot be guaranteed even if users only use 
parallelism scaling. For example, if Flink jobs don't use the Adaptive Scheduler 
and the input rate keeps changing, then the jobs will be rescaled frequently.
   
   I agree that there are edge cases where the autoscaler cannot fulfill its 
service objectives. However, that doesn't mean we need to give up on them 
entirely. With restarts due to autotuning at any point in time, the autoscaling 
algorithm is inherently broken because downtime is never factored into the 
autoscaling decision. 
   
   You mentioned the adaptive scheduler. Frankly, the use of the adaptive 
scheduler with autoscaling isn't fully developed. I would discourage users from 
using it with autoscaling at its current state.
   
   > Fortunately, parallelism scaling, unlike memory scaling, considers the 
restart time and then increases the parallelism accordingly.
   
   +1
   
   > 
   > > For this feature to be mergable, it will either have to be disabled by 
default (opt-in via config)
   > 
   > IIUC, `job.autoscaler.memory.tuning.enabled` is disabled by default. It 
means the memory tuning is turned off by default even if this PR is merged, 
right?
   
   Autoscaling is also disabled by default. I think we want to make sure 
autoscaling and autotuning work together collaboratively.
   
   > 
   > > or be integrated with autoscaling, i.e. figure out a way to balance 
tuning / autoscaling decisions and feed back tuning decisions to the 
autoscaling algorithm to scale up whenever we redeploy for memory changes to 
avoid falling behind and preventing autoscaling to scale up after downtime due 
to memory reconfigurations.
   > 
   > The restartTime has been considered during `computeScalingSummary`, but we 
may ignore it because the new parallelism is `WithinUtilizationTarget`. Do you 
mean we should force-adjust to the new parallelism when memory scaling happens, 
even if the new parallelism is `WithinUtilizationTarget`?
   
   True, the rescale time has been considered for the downscale / upscale 
processing capacity, but the current processing capacity doesn't factor in 
downtime. Unplanned restarts would reduce the processing capacity. If we know 
we are going to restart, the autoscaling algorithm should factor this in, e.g. 
by reducing the calculated processing capacity accordingly.
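   
   As a rough sketch of that idea (illustrative only, not actual operator 
code): if a planned restart is expected to cost `restartSeconds` out of a 
metrics window, the capacity fed into the scaling decision could be discounted 
accordingly.
   
   ```java
   final class CapacityDiscount {
       // Discount the measured capacity by the planned downtime in the window.
       static double effectiveCapacity(
               double measuredRecordsPerSecond, double windowSeconds, double restartSeconds) {
           double uptimeRatio = Math.max(0.0, (windowSeconds - restartSeconds) / windowSeconds);
           return measuredRecordsPerSecond * uptimeRatio;
       }
   }
   ```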
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34898) Cannot create named STRUCT with a single field

2024-03-21 Thread Feng Jin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829513#comment-17829513
 ] 

Feng Jin edited comment on FLINK-34898 at 3/21/24 11:51 AM:


I tried this case and indeed encountered an error, but I managed to pass the 
test by trying a different approach.

 

 
{code:java}
// code placeholder


-- The query is normal and results can be obtained.
Flink SQL> SELECT cast(ARRAY[ROW(1)] as ARRAY<ROW<a INT>>);
[INFO] Result retrieval cancelled.


-- Got the exception

Flink SQL> SELECT ARRAY[cast(ROW(1) as ROW<a INT>)];
[ERROR] Could not execute SQL statement. Reason:
java.lang.UnsupportedOperationException: class 
org.apache.calcite.sql.SqlBasicCall: ROW(1)
 {code}
 

 

I think this might indeed be a bug; we need to follow up and fix it.

 

[~chloehe]  Could you help update the title and content of this Jira? Please 
provide the specific query and the corresponding error message.

 

 


was (Author: hackergin):
I tried this case and indeed encountered an error, but I managed to pass the 
test by trying a different approach.

 

 
{code:java}
// code placeholder


-- The query is normal and results can be obtained.
Flink SQL> SELECT cast(ARRAY[ROW(1)] as ARRAY<ROW<a INT>>);
[INFO] Result retrieval cancelled.


-- Got the exception

Flink SQL> SELECT ARRAY[cast(ROW(1) as ROW<a INT>)];
[ERROR] Could not execute SQL statement. Reason:
java.lang.UnsupportedOperationException: class 
org.apache.calcite.sql.SqlBasicCall: ROW(1)
 {code}
 

 

I think this might indeed be a bug, we need to follow up and fix it.

 

[~chloehe]  Can you help me modify the title and content of Jira?  Please 
provide the specific query and corresponding error message.

 

 

> Cannot create named STRUCT with a single field
> --
>
> Key: FLINK-34898
> URL: https://issues.apache.org/jira/browse/FLINK-34898
> Project: Flink
>  Issue Type: Bug
>Reporter: Chloe He
>Priority: Major
> Attachments: image-2024-03-21-12-00-00-183.png
>
>
> I'm trying to create named structs using Flink SQL and I found a previous 
> ticket https://issues.apache.org/jira/browse/FLINK-9161 that mentions the use 
> of the following syntax:
> {code:java}
> SELECT CAST(('a', 1) as ROW<a VARCHAR, b INT>) AS row1;
> {code}
> However, my named struct has a single field, and effectively it should look 
> something like `{"a": 1}`. I can't seem to find a way to 
> construct this. I have experimented with a few different syntaxes, and each 
> either throws a parsing error or a casting error:
> {code:java}
> Cast function cannot convert value of type INTEGER to type 
> RecordType(VARCHAR(2147483647) a) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34903) Add mysql-pipeline-connector with table.exclude.list option to exclude unnecessary tables

2024-03-21 Thread Thorne (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thorne updated FLINK-34903:
---
Description: 
    When using the MySQL Pipeline connector for whole-database synchronization, 
users currently cannot exclude unnecessary tables. Taking reference from 
Debezium's parameters, specifically the {*}table.exclude.list{*}, if the 
*table.include.list* is declared, then the *table.exclude.list* parameter will 
not take effect. However, the tables specified in the tables parameter of the 
MySQL Pipeline connector are effectively added to the *table.include.list* in 
Debezium's context.

!screenshot-1.png!

!screenshot-2.png|width=834,height=86!

Debezium option description

!screenshot-3.png|width=831,height=217!

    In summary, it is necessary to introduce an externally-exposed 
*table.exclude.list* parameter within the MySQL Pipeline connector to 
facilitate the exclusion of tables. This is because the current setup does not 
allow for excluding unnecessary tables when including others through the tables 
parameter.

  was:
    When using the MySQL Pipeline connector for whole-database synchronization, 
users currently cannot exclude unnecessary tables. Taking reference from 
Debezium's parameters, specifically the {*}table.exclude.list{*}, if the 
*table.include.list* is declared, then the *table.exclude.list* parameter will 
not take effect. However, the tables specified in the tables parameter of the 
MySQL Pipeline connector are effectively added to the *table.include.list* in 
Debezium's context.

    In summary, it is necessary to introduce an externally-exposed 
*table.exclude.list* parameter within the MySQL Pipeline connector to 
facilitate the exclusion of tables. This is because the current setup does not 
allow for excluding unnecessary tables when including others through the tables 
parameter.


> Add mysql-pipeline-connector with  table.exclude.list option to exclude 
> unnecessary tables 
> ---
>
> Key: FLINK-34903
> URL: https://issues.apache.org/jira/browse/FLINK-34903
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Thorne
>Priority: Major
>  Labels: cdc
> Fix For: cdc-3.1.0
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
>     When using the MySQL Pipeline connector for whole-database 
> synchronization, users currently cannot exclude unnecessary tables. Taking 
> reference from Debezium's parameters, specifically the 
> {*}table.exclude.list{*}, if the *table.include.list* is declared, then the 
> *table.exclude.list* parameter will not take effect. However, the tables 
> specified in the tables parameter of the MySQL Pipeline connector are 
> effectively added to the *table.include.list* in Debezium's context.
> !screenshot-1.png!
> !screenshot-2.png|width=834,height=86!
> Debezium option description
> !screenshot-3.png|width=831,height=217!
>     In summary, it is necessary to introduce an externally-exposed 
> *table.exclude.list* parameter within the MySQL Pipeline connector to 
> facilitate the exclusion of tables. This is because the current setup does 
> not allow for excluding unnecessary tables when including others through the 
> tables parameter.
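
To make the precedence rule concrete, a small illustration of the Debezium 
properties involved (the table names are made up for the example):

{code:java}
import java.util.Properties;

class ExcludeListIllustration {
    public static void main(String[] args) {
        Properties props = new Properties();
        // What the connector's `tables` option effectively sets today:
        props.setProperty("table.include.list", "app_db.orders,app_db.users");
        // Per Debezium's precedence, this entry is ignored once the include
        // list above is set - hence the need for a dedicated exclude option.
        props.setProperty("table.exclude.list", "app_db.tmp_orders");
    }
}
{code}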



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34898) Cannot create named STRUCT with a single field

2024-03-21 Thread Feng Jin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829513#comment-17829513
 ] 

Feng Jin commented on FLINK-34898:
--

I tried this case and indeed encountered an error, but I managed to pass the 
test by trying a different approach.

 

 
{code:java}
// code placeholder


-- The query is normal and results can be obtained.
Flink SQL> SELECT cast(ARRAY[ROW(1)] as ARRAY<ROW<a INT>>);
[INFO] Result retrieval cancelled.


-- Got the exception

Flink SQL> SELECT ARRAY[cast(ROW(1) as ROW<a INT>)];
[ERROR] Could not execute SQL statement. Reason:
java.lang.UnsupportedOperationException: class 
org.apache.calcite.sql.SqlBasicCall: ROW(1)
 {code}
 

 

I think this might indeed be a bug; we need to follow up and fix it.

 

[~chloehe]  Could you help me update the title and content of this Jira? Please 
provide the specific query and the corresponding error message.

 

 

> Cannot create named STRUCT with a single field
> --
>
> Key: FLINK-34898
> URL: https://issues.apache.org/jira/browse/FLINK-34898
> Project: Flink
>  Issue Type: Bug
>Reporter: Chloe He
>Priority: Major
> Attachments: image-2024-03-21-12-00-00-183.png
>
>
> I'm trying to create named structs using Flink SQL and I found a previous 
> ticket https://issues.apache.org/jira/browse/FLINK-9161 that mentions the use 
> of the following syntax:
> {code:java}
> SELECT CAST(('a', 1) as ROW<a VARCHAR, b INT>) AS row1;
> {code}
> However, my named struct has a single field, and effectively it should look 
> something like `{"a": 1}`. I can't seem to find a way to 
> construct this. I have experimented with a few different syntaxes, and each 
> either throws a parsing error or a casting error:
> {code:java}
> Cast function cannot convert value of type INTEGER to type 
> RecordType(VARCHAR(2147483647) a) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34903) Add mysql-pipeline-connector with table.exclude.list option to exclude unnecessary tables

2024-03-21 Thread Thorne (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829512#comment-17829512
 ] 

Thorne commented on FLINK-34903:


I opened a PR: https://github.com/apache/flink-cdc/pull/3186

> Add mysql-pipeline-connector with  table.exclude.list option to exclude 
> unnecessary tables 
> ---
>
> Key: FLINK-34903
> URL: https://issues.apache.org/jira/browse/FLINK-34903
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Thorne
>Priority: Major
>  Labels: cdc
> Fix For: cdc-3.1.0
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
>     When using the MySQL Pipeline connector for whole-database 
> synchronization, users currently cannot exclude unnecessary tables. Taking 
> reference from Debezium's parameters, specifically the 
> {*}table.exclude.list{*}, if the *table.include.list* is declared, then the 
> *table.exclude.list* parameter will not take effect. However, the tables 
> specified in the tables parameter of the MySQL Pipeline connector are 
> effectively added to the *table.include.list* in Debezium's context.
>     In summary, it is necessary to introduce an externally-exposed 
> *table.exclude.list* parameter within the MySQL Pipeline connector to 
> facilitate the exclusion of tables. This is because the current setup does 
> not allow for excluding unnecessary tables when including others through the 
> tables parameter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34731][runtime] Remove SpeculativeScheduler and incorporate its features into AdaptiveBatchScheduler. [flink]

2024-03-21 Thread via GitHub


zhuzhurk commented on code in PR #24524:
URL: https://github.com/apache/flink/pull/24524#discussion_r1533719880


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/AdaptiveBatchScheduler.java:
##
@@ -232,8 +230,8 @@ protected void startSchedulingInternal() {
 }
 });
 
-speculativeExecutionHandler.startSlowTaskDetector(
-getExecutionGraph(), getMainThreadExecutor());
+speculativeExecutionHandler.init(

Review Comment:
   I prefer this to happen at the beginning of `startSchedulingInternal`, doing 
initialization before triggering scheduling.



##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/AdaptiveBatchSchedulerFactory.java:
##
@@ -162,6 +147,74 @@ public SchedulerNG createInstance(
 jobGraph.getName(),
 jobGraph.getJobID());
 
+return createScheduler(
+log,
+jobGraph,
+ioExecutor,
+jobMasterConfiguration,
+futureExecutor,
+userCodeLoader,
+checkpointRecoveryFactory,
+rpcTimeout,
+blobWriter,
+jobManagerJobMetricGroup,
+shuffleMaster,
+partitionTracker,
+executionDeploymentTracker,
+initializationTimestamp,
+mainThreadExecutor,
+jobStatusListener,
+failureEnrichers,
+blocklistOperations,
+new DefaultExecutionOperations(),
+allocatorFactory,
+restartBackoffTimeStrategy,
+new ScheduledExecutorServiceAdapter(futureExecutor),
+DefaultVertexParallelismAndInputInfosDecider.from(
+getDefaultMaxParallelism(jobMasterConfiguration, 
executionConfig),
+jobMasterConfiguration));
+}
+
+public AdaptiveBatchScheduler createScheduler(

Review Comment:
   Better to be static and `@VisibleForTesting`.



##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/AdaptiveBatchSchedulerFactory.java:
##
@@ -162,6 +147,74 @@ public SchedulerNG createInstance(
 jobGraph.getName(),
 jobGraph.getJobID());
 
+return createScheduler(
+log,
+jobGraph,
+ioExecutor,
+jobMasterConfiguration,
+futureExecutor,
+userCodeLoader,
+checkpointRecoveryFactory,
+rpcTimeout,
+blobWriter,
+jobManagerJobMetricGroup,
+shuffleMaster,
+partitionTracker,
+executionDeploymentTracker,
+initializationTimestamp,
+mainThreadExecutor,
+jobStatusListener,
+failureEnrichers,
+blocklistOperations,
+new DefaultExecutionOperations(),
+allocatorFactory,
+restartBackoffTimeStrategy,
+new ScheduledExecutorServiceAdapter(futureExecutor),
+DefaultVertexParallelismAndInputInfosDecider.from(
+getDefaultMaxParallelism(jobMasterConfiguration, 
executionConfig),
+jobMasterConfiguration));
+}
+
+public AdaptiveBatchScheduler createScheduler(
+Logger log,
+JobGraph jobGraph,
+Executor ioExecutor,
+Configuration jobMasterConfiguration,
+ScheduledExecutorService futureExecutor,
+ClassLoader userCodeLoader,
+CheckpointRecoveryFactory checkpointRecoveryFactory,
+Time rpcTimeout,
+BlobWriter blobWriter,
+JobManagerJobMetricGroup jobManagerJobMetricGroup,
+ShuffleMaster shuffleMaster,
+JobMasterPartitionTracker partitionTracker,
+ExecutionDeploymentTracker executionDeploymentTracker,
+long initializationTimestamp,
+ComponentMainThreadExecutor mainThreadExecutor,
+JobStatusListener jobStatusListener,
+Collection failureEnrichers,
+BlocklistOperations blocklistOperations,
+ExecutionOperations executionOperations,
+ExecutionSlotAllocatorFactory allocatorFactory,
+RestartBackoffTimeStrategy restartBackoffTimeStrategy,
+ScheduledExecutor delayExecutor,
+VertexParallelismAndInputInfosDecider 
vertexParallelismAndInputInfosDecider)
+throws Exception {
+
+checkState(
+jobGraph.getJobType() == JobType.BATCH,
+"Adaptive batch scheduler only supports batch jobs");
+checkAllExchangesAreSupported(jobGraph);
+
+final boolean enableSpeculativeExecuti

[jira] [Commented] (FLINK-34908) [Feature][Pipeline] Mysql pipeline to doris and starrocks will lose precision for timestamp

2024-03-21 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829507#comment-17829507
 ] 

Leonard Xu commented on FLINK-34908:


Thanks [~pacinogong], assigned this ticket to you.

> [Feature][Pipeline] Mysql pipeline to doris and starrocks will lose precision 
> for timestamp
> ---
>
> Key: FLINK-34908
> URL: https://issues.apache.org/jira/browse/FLINK-34908
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Assignee: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> The Flink CDC pipeline decides the timestamp zone by the pipeline config. I 
> found that mysql2doris and mysql2starrocks use the fixed datetime format
> yyyy-MM-dd HH:mm:ss, which will cause a loss of datetime precision. I think we 
> shouldn't set a fixed datetime format and should just return the 
> LocalDateTime object.
>  
>  
>  
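
A tiny demonstration of the precision loss described above (plain java.time, 
illustrative only):

{code:java}
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

class TimestampPrecisionDemo {
    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.parse("2024-03-21T12:34:56.789123");
        // A fixed second-level pattern drops the fractional part...
        DateTimeFormatter fixed = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        System.out.println(fixed.format(ts)); // 2024-03-21 12:34:56
        // ...while returning the LocalDateTime itself keeps it.
        System.out.println(ts); // 2024-03-21T12:34:56.789123
    }
}
{code}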



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Add mysql-pipeline-connector with table.exclude.list option to exclud… [flink-cdc]

2024-03-21 Thread via GitHub


shiyiky opened a new pull request, #3186:
URL: https://github.com/apache/flink-cdc/pull/3186

   Description: [FLINK-34903](https://issues.apache.org/jira/browse/FLINK-34903)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-34908) [Feature][Pipeline] Mysql pipeline to doris and starrocks will lose precision for timestamp

2024-03-21 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-34908:
--

Assignee: Xin Gong

> [Feature][Pipeline] Mysql pipeline to doris and starrocks will lose precision 
> for timestamp
> ---
>
> Key: FLINK-34908
> URL: https://issues.apache.org/jira/browse/FLINK-34908
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Assignee: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> The Flink CDC pipeline decides the timestamp zone by the pipeline config. I 
> found that mysql2doris and mysql2starrocks use the fixed datetime format
> yyyy-MM-dd HH:mm:ss, which will cause a loss of datetime precision. I think we 
> shouldn't set a fixed datetime format and should just return the 
> LocalDateTime object.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34731][runtime] Remove SpeculativeScheduler and incorporate its features into AdaptiveBatchScheduler. [flink]

2024-03-21 Thread via GitHub


JunRuiLee commented on PR #24524:
URL: https://github.com/apache/flink/pull/24524#issuecomment-2011980990

   Thanks @zhuzhurk for the review, I've updated this pr, PTAL.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34731][runtime] Remove SpeculativeScheduler and incorporate its features into AdaptiveBatchScheduler. [flink]

2024-03-21 Thread via GitHub


JunRuiLee commented on code in PR #24524:
URL: https://github.com/apache/flink/pull/24524#discussion_r1533688677


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/DefaultSpeculativeExecutionHandler.java:
##
@@ -7,62 +7,37 @@
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
  *
- *   http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.

Review Comment:
   Fixed. 
   What happened was that I removed the original SpeculativeScheduler.class 
file and added a new DefaultSpeculativeExecutionHandler.class file. However, 
during the commit, these two actions were somehow combined into a single modify 
operation... 🤦‍♂️🤦‍♂️



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34643) JobIDLoggingITCase failed

2024-03-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829485#comment-17829485
 ] 

Matthias Pohl edited comment on FLINK-34643 at 3/21/24 11:13 AM:
-

* 
https://github.com/apache/flink/actions/runs/8290287716/job/22688325865#step:10:9328
* 
https://github.com/apache/flink/actions/runs/8304571223/job/22730531076#step:10:9194
* 
https://github.com/apache/flink/actions/runs/8312246651/job/22747312383#step:10:8539
* 
https://github.com/apache/flink/actions/runs/8320242443/job/22764925776#step:10:8913
* 
https://github.com/apache/flink/actions/runs/8320242443/job/22764920830#step:10:8727
* 
https://github.com/apache/flink/actions/runs/8320242443/job/22764903331#step:10:9336
* 
https://github.com/apache/flink/actions/runs/8336454518/job/22813901357#step:10:8952
* 
https://github.com/apache/flink/actions/runs/8336454518/job/22813876201#step:10:9327
* 
https://github.com/apache/flink/actions/runs/8352823788/job/22863786799#step:10:8952
* 
https://github.com/apache/flink/actions/runs/8352823788/job/22863772571#step:10:9337
* 
https://github.com/apache/flink/actions/runs/8368626493/job/22913270846#step:10:8418



> JobIDLoggingITCase failed
> -
>
> Key: FLINK-34643
> URL: https://issues.apache.org/jira/browse/FLINK-34643
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.20.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=ea7cf968-e585-52cb-e0fc-f48de023a7ca&l=7897
> {code}
> Mar 09 01:24:23 01:24:23.498 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 4.209 s <<< FAILURE! -- in 
> org.apache.flink.test.misc.JobIDLoggingITCase
> Mar 09 01:24:23 01:24:23.498 [ERROR] 
> org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(ClusterClient) 
> -- Time elapsed: 1.459 s <<< ERROR!
> Mar 09 01:24:23 java.lang.IllegalStateException: Too few log events recorded 
> for org.apache.flink.runtime.jobmaster.JobMaster (12) - this must be a bug in 
> the test code
> Mar 09 01:24:23   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
> Mar 09 01:24:23   at 
> org.apache.flink.test.misc.JobIDLoggingITCase.assertJobIDPresent(JobIDLoggingITCase.java:148)
> Mar 09 01:24:23   at 
> org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(JobIDLoggingITCase.java:132)
> Mar 09 01:24:23   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 09 01:24:23   at 
> java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> Mar 09 01:24:23 
> {code}
> The other test failures of this build were also caused by the same test:
> * 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187&view=logs&j=2c3cbe13-dee0-5837-cf47-3053da9a8a78&t=b78d9d30-509a-5cea-1fef-db7abaa325ae&l=8349
> * 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187&view=logs&j=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3&t=712ade8c-ca16-5b76-3acd-14df33bc1cb1&l=8209



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33186) CheckpointAfterAllTasksFinishedITCase.testRestoreAfterSomeTasksFinished fails on AZP

2024-03-21 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829501#comment-17829501
 ] 

Matthias Pohl commented on FLINK-33186:
---

https://github.com/apache/flink/actions/runs/8369823390/job/22916375709#step:10:7894

>  CheckpointAfterAllTasksFinishedITCase.testRestoreAfterSomeTasksFinished 
> fails on AZP
> -
>
> Key: FLINK-33186
> URL: https://issues.apache.org/jira/browse/FLINK-33186
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Sergey Nuyanzin
>Assignee: Jiang Xin
>Priority: Critical
>  Labels: test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53509&view=logs&j=baf26b34-3c6a-54e8-f93f-cf269b32f802&t=8c9d126d-57d2-5a9e-a8c8-ff53f7b35cd9&l=8762
> fails as
> {noformat}
> Sep 28 01:23:43 Caused by: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Task local 
> checkpoint failure.
> Sep 28 01:23:43   at 
> org.apache.flink.runtime.checkpoint.PendingCheckpoint.abort(PendingCheckpoint.java:550)
> Sep 28 01:23:43   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoint(CheckpointCoordinator.java:2248)
> Sep 28 01:23:43   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoint(CheckpointCoordinator.java:2235)
> Sep 28 01:23:43   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.lambda$null$9(CheckpointCoordinator.java:817)
> Sep 28 01:23:43   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Sep 28 01:23:43   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Sep 28 01:23:43   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> Sep 28 01:23:43   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> Sep 28 01:23:43   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Sep 28 01:23:43   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Sep 28 01:23:43   at java.lang.Thread.run(Thread.java:748)
> {noformat}
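
A note on the trace: "Task local checkpoint failure." is not free-form text 
but the message attached to a `CheckpointFailureReason` enum constant, which 
`abortPendingCheckpoint` wraps into the `CheckpointException`. A minimal 
sketch of that mapping follows; it assumes the 
`CheckpointException(CheckpointFailureReason)` constructor and the 
`TASK_CHECKPOINT_FAILURE` constant as found in recent Flink versions — both 
are internal APIs and may differ across releases:

{code}
import org.apache.flink.runtime.checkpoint.CheckpointException;
import org.apache.flink.runtime.checkpoint.CheckpointFailureReason;

public class CheckpointAbortSketch {

    public static void main(String[] args) {
        // The coordinator aborts a pending checkpoint by constructing a
        // CheckpointException from a failure reason; the reason's message is
        // what surfaces as "Task local checkpoint failure." in the log above.
        CheckpointException cause =
                new CheckpointException(CheckpointFailureReason.TASK_CHECKPOINT_FAILURE);

        System.out.println(cause.getMessage());
        System.out.println(cause.getCheckpointFailureReason());
    }
}
{code}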



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

