(flink-docker) branch master updated: [FLINK-33842][docs] Update minor version for 1.18.1 release

2024-01-16 Thread jingge
This is an automated email from the ASF dual-hosted git repository.

jingge pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/master by this push:
 new 6279879  [FLINK-33842][docs] Update minor version for 1.18.1 release
6279879 is described below

commit 627987997ca7ec86bcc3d80b26df58aa595b91af
Author: jingge 
AuthorDate: Tue Jan 16 22:48:12 2024 +0100

[FLINK-33842][docs] Update minor version for 1.18.1 release
---
 1.18/scala_2.12-java11-ubuntu/Dockerfile   | 4 ++--
 1.18/scala_2.12-java11-ubuntu/release.metadata | 2 +-
 1.18/scala_2.12-java17-ubuntu/Dockerfile   | 4 ++--
 1.18/scala_2.12-java17-ubuntu/release.metadata | 2 +-
 1.18/scala_2.12-java8-ubuntu/Dockerfile| 4 ++--
 1.18/scala_2.12-java8-ubuntu/release.metadata  | 2 +-
 6 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/1.18/scala_2.12-java11-ubuntu/Dockerfile b/1.18/scala_2.12-java11-ubuntu/Dockerfile
index a53d960..4871b08 100644
--- a/1.18/scala_2.12-java11-ubuntu/Dockerfile
+++ b/1.18/scala_2.12-java11-ubuntu/Dockerfile
@@ -44,8 +44,8 @@ RUN set -ex; \
   gosu nobody true
 
 # Configure Flink version
-ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.18.0/flink-1.18.0-bin-scala_2.12.tgz \
-    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.18.0/flink-1.18.0-bin-scala_2.12.tgz.asc \
+ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.18.1/flink-1.18.1-bin-scala_2.12.tgz \
+    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.18.1/flink-1.18.1-bin-scala_2.12.tgz.asc \
 GPG_KEY=96AE0E32CBE6E0753CE6DF6CB078D1D3253A8D82 \
 CHECK_GPG=true
 
diff --git a/1.18/scala_2.12-java11-ubuntu/release.metadata b/1.18/scala_2.12-java11-ubuntu/release.metadata
index cd4f723..3ef7a4f 100644
--- a/1.18/scala_2.12-java11-ubuntu/release.metadata
+++ b/1.18/scala_2.12-java11-ubuntu/release.metadata
@@ -1,2 +1,2 @@
-Tags: 1.18.0-scala_2.12-java11, 1.18-scala_2.12-java11, scala_2.12-java11, 1.18.0-scala_2.12, 1.18-scala_2.12, scala_2.12, 1.18.0-java11, 1.18-java11, java11, 1.18.0, 1.18, latest
+Tags: 1.18.1-scala_2.12-java11, 1.18-scala_2.12-java11, scala_2.12-java11, 1.18.1-scala_2.12, 1.18-scala_2.12, scala_2.12, 1.18.1-java11, 1.18-java11, java11, 1.18.1, 1.18, latest
 Architectures: amd64,arm64v8
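The retagging above follows a fixed pattern: each tag group carries the patch version, the minor version, and a bare variant, with the empty variant mapping to `latest`. A quick shell sketch of how the java11 tag list expands for a patch release (illustrative only, not the actual flink-docker tooling):

```shell
# Derive the docker tag list for the java11 variant of a patch release,
# mirroring the Tags: line in release.metadata above.
patch="1.18.1"
minor="${patch%.*}"    # 1.18.1 -> 1.18
tags=""
for suffix in "scala_2.12-java11" "scala_2.12" "java11" ""; do
    for v in "$patch" "$minor"; do
        tags="$tags ${v}${suffix:+-$suffix}"
    done
    tags="$tags ${suffix:-latest}"    # bare variant tag; "" maps to latest
done
tags="${tags# }"
echo "$tags"
```

The output reproduces the twelve tags of the java11 metadata line, in the same grouped order.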
diff --git a/1.18/scala_2.12-java17-ubuntu/Dockerfile b/1.18/scala_2.12-java17-ubuntu/Dockerfile
index 2a41dae..11229e4 100644
--- a/1.18/scala_2.12-java17-ubuntu/Dockerfile
+++ b/1.18/scala_2.12-java17-ubuntu/Dockerfile
@@ -44,8 +44,8 @@ RUN set -ex; \
   gosu nobody true
 
 # Configure Flink version
-ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.18.0/flink-1.18.0-bin-scala_2.12.tgz \
-    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.18.0/flink-1.18.0-bin-scala_2.12.tgz.asc \
+ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.18.1/flink-1.18.1-bin-scala_2.12.tgz \
+    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.18.1/flink-1.18.1-bin-scala_2.12.tgz.asc \
 GPG_KEY=96AE0E32CBE6E0753CE6DF6CB078D1D3253A8D82 \
 CHECK_GPG=true
 
diff --git a/1.18/scala_2.12-java17-ubuntu/release.metadata b/1.18/scala_2.12-java17-ubuntu/release.metadata
index 2a19f9f..7f254e7 100644
--- a/1.18/scala_2.12-java17-ubuntu/release.metadata
+++ b/1.18/scala_2.12-java17-ubuntu/release.metadata
@@ -1,2 +1,2 @@
-Tags: 1.18.0-scala_2.12-java17, 1.18-scala_2.12-java17, scala_2.12-java17, 1.18.0-java17, 1.18-java17, java17
+Tags: 1.18.1-scala_2.12-java17, 1.18-scala_2.12-java17, scala_2.12-java17, 1.18.1-java17, 1.18-java17, java17
 Architectures: amd64,arm64v8
diff --git a/1.18/scala_2.12-java8-ubuntu/Dockerfile b/1.18/scala_2.12-java8-ubuntu/Dockerfile
index 72f7282..d210dcf 100644
--- a/1.18/scala_2.12-java8-ubuntu/Dockerfile
+++ b/1.18/scala_2.12-java8-ubuntu/Dockerfile
@@ -44,8 +44,8 @@ RUN set -ex; \
   gosu nobody true
 
 # Configure Flink version
-ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.18.0/flink-1.18.0-bin-scala_2.12.tgz \
-    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.18.0/flink-1.18.0-bin-scala_2.12.tgz.asc \
+ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.18.1/flink-1.18.1-bin-scala_2.12.tgz \
+    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.18.1/flink-1.18.1-bin-scala_2.12.tgz.asc \
 GPG_KEY=96AE0E32CBE6E0753CE6DF6CB078D1D3253A8D82 \
 CHECK_GPG=true
 
diff --git a/1.18/scala_2.12-java8-ubuntu/release.metadata b/1.18/scala_2.12-java8-ubuntu/release.metadata
index fccb4b0..c7d4bf8 100644
--- a/1.18/scala_2.12-java8-ubuntu/release.metadata
+++ b/1.18/scala_2.12-java8-ubuntu/release.metadata
@@ -1,2 +1,2 @@
-Tags: 1.18.0-scala_2.12-java8, 1.18-scala_2.12-java8, scala_2.12-java8, 1.18.0-java8, 1.18-java8, 

(flink-docker) branch dev-1.18 updated: [FLINK-33842][docker] Add GPG key for 1.18.1 release

2024-01-16 Thread jingge
This is an automated email from the ASF dual-hosted git repository.

jingge pushed a commit to branch dev-1.18
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/dev-1.18 by this push:
 new df317cb  [FLINK-33842][docker] Add GPG key for 1.18.1 release
df317cb is described below

commit df317cb99bdc28b9f0aafe8616aa075a07499fbf
Author: jingge 
AuthorDate: Tue Jan 16 19:00:45 2024 +0100

[FLINK-33842][docker] Add GPG key for 1.18.1 release
---
 add-version.sh | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/add-version.sh b/add-version.sh
index 43b5f82..0ae0965 100755
--- a/add-version.sh
+++ b/add-version.sh
@@ -102,6 +102,8 @@ elif [ "$flink_version" = "1.17.0" ]; then
     gpg_key="A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B"
 elif [ "$flink_version" = "1.18.0" ]; then
     gpg_key="96AE0E32CBE6E0753CE6DF6CB078D1D3253A8D82"
+elif [ "$flink_version" = "1.18.1" ]; then
+    gpg_key="96AE0E32CBE6E0753CE6DF6CB078D1D3253A8D82"
 else
     error "Missing GPG key ID for this release"
 fi
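Condensed into a lookup function, the elif chain above behaves like this (keys copied from the diff; the function name is illustrative, not part of add-version.sh):

```shell
# Map a Flink version to its release-signing GPG key, mirroring the elif
# chain in add-version.sh. The same key is registered for 1.18.0 and
# 1.18.1, exactly as in the diff above.
gpg_key_for() {
    case "$1" in
        1.18.0|1.18.1) echo "96AE0E32CBE6E0753CE6DF6CB078D1D3253A8D82" ;;
        1.17.0)        echo "A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B" ;;
        *) echo "Missing GPG key ID for this release" >&2; return 1 ;;
    esac
}
```

Unknown versions fail loudly, matching the script's `error` branch.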



(flink) branch master updated (cdf314d30b5 -> a886339dbb3)

2024-01-16 Thread guoweijie
This is an automated email from the ASF dual-hosted git repository.

guoweijie pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from cdf314d30b5 [FLINK-34116][test] Enhance GlobalConfigurationTest.testInvalidStandardYamlFile for JDK compatibility.
 add 2839d06559c [FLINK-33743][runtime] Distinguish channel and subpartition
 add c65d5f18ad5 [FLINK-33743][runtime] Replace consumer-side SubpartitionId with InputChannelId
 add 0e8b9808839 [FLINK-33743][runtime] Modify parameter subpartitionIndex to subpartitionIndexSet
 add 6b44f29a1ce [FLINK-33743][runtime] Identify subpartitionId in notifyRequiredSegment
 add f0436e53313 [FLINK-33743][runtime] Disable tier shuffle during recovery
 add 439d1091daa [FLINK-33743][runtime] Acquire subpartition id for input channel dynamically in tiered shuffle
 add 4e6796dd147 [FLINK-33743][runtime] Change numCreditsAvailable to isCreditAvailable in ResultSubpartitionView#getAvailabilityAndBacklog
 add 8651f734b16 [FLINK-33743][runtime] Add ResultSubpartitionView parameter to notifyDataAvailable() method
 add 1199c7106a4 [FLINK-33743][runtime] Support consuming multiple subpartition in one inputchannel
 add f2417a74bd7 [FLINK-33743][runtime] Rename InputChannelStatus to SubpartitionStatus
 add 32de7521ac7 [FLINK-33743][runtime] Align watermark at subpartition granularity
 add 8358e3aa2ee [FLINK-33743][runtime] Ignore RecordAttributes when adaptive parallelism is used
 add be127310958 [FLINK-33743][runtime] Flush one accumulated buffer at a time
 add a886339dbb3 [FLINK-33743][runtime] Optimize partial record split logic

No new revisions were added by this update.

Summary of changes:
 .../ResultPartitionDeploymentDescriptor.java   |   4 +
 .../flink/runtime/executiongraph/IndexRange.java   |   4 +-
 .../IntermediateResultPartition.java   |   8 +
 .../io/network/NetworkSequenceViewReader.java  |  15 +-
 .../runtime/io/network/PartitionRequestClient.java |  11 +-
 .../runtime/io/network/api/RecoveryMetadata.java   |  72 
 .../network/api/serialization/EventSerializer.java |  13 +
 .../api/writer/ChannelSelectorRecordWriter.java|  10 +-
 .../io/network/api/writer/RecordWriter.java|  30 +-
 .../network/api/writer/ResultPartitionWriter.java  |   8 +-
 .../flink/runtime/io/network/buffer/Buffer.java|  50 ++-
 .../runtime/io/network/buffer/BufferBuilder.java   |  10 +
 .../metrics/ExclusiveBuffersUsageGauge.java|   4 +-
 .../network/metrics/FloatingBuffersUsageGauge.java |   2 +-
 .../io/network/metrics/InputGateMetrics.java   |  12 +-
 .../CreditBasedPartitionRequestClientHandler.java  |   5 +-
 .../CreditBasedSequenceNumberingViewReader.java|  56 ++-
 .../runtime/io/network/netty/NettyMessage.java |  58 ++-
 .../network/netty/NettyPartitionRequestClient.java |  21 +-
 .../netty/NettyPartitionRequestListener.java   |  13 +-
 .../io/network/netty/PartitionRequestQueue.java|   8 +-
 .../netty/PartitionRequestServerHandler.java   |   5 +-
 ...edBlockingSubpartitionDirectTransferReader.java |  11 +-
 .../BoundedBlockingSubpartitionReader.java |  11 +-
 .../partition/BufferAvailabilityListener.java  |   8 +-
 .../network/partition/BufferReaderWriterUtil.java  |  18 +
 ...ithChannel.java => BufferWithSubpartition.java} |  14 +-
 .../partition/BufferWritingResultPartition.java|   9 +-
 .../runtime/io/network/partition/DataBuffer.java   |  19 +-
 .../io/network/partition/DeduplicatedQueue.java|  70 
 .../io/network/partition/HashBasedDataBuffer.java  |  60 ++--
 .../partition/NoOpResultSubpartitionView.java  |   7 +-
 .../network/partition/PartitionedFileWriter.java   |  39 ++-
 .../network/partition/PipelinedSubpartition.java   |   4 +-
 .../partition/PipelinedSubpartitionView.java   |  11 +-
 .../io/network/partition/ResultPartition.java  |  38 ++
 .../network/partition/ResultPartitionFactory.java  |  11 +-
 .../network/partition/ResultPartitionManager.java  |  13 +-
 .../network/partition/ResultPartitionProvider.java |   6 +-
 .../partition/ResultSubpartitionIndexSet.java  |  77 
 .../network/partition/ResultSubpartitionView.java  |  15 +-
 .../partition/RoundRobinSubpartitionSelector.java  |  73 
 .../io/network/partition/SortBasedDataBuffer.java  |  12 +-
 .../runtime/io/network/partition/SortBuffer.java   |  34 +-
 .../partition/SortMergeResultPartition.java|  44 +--
 .../partition/SortMergeSubpartitionReader.java |  11 +-
 .../io/network/partition/SubpartitionSelector.java |  56 +++
 .../partition/UnionResultSubpartitionView.java | 258 ++
 .../network/partition/consumer/InputChannel.java   |  60 +++-
 .../partition/consumer/LocalInputChannel.java  |  57 ++-
 .../consumer/LocalRecoveredInputChannel.java   |   7 +-
 .../partition/consumer/RecoveredInputChannel.java  |  13 +-
 

(flink) branch master updated (a20d57b18bb -> cdf314d30b5)

2024-01-16 Thread zhuzh
This is an automated email from the ASF dual-hosted git repository.

zhuzh pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from a20d57b18bb [FLINK-34102][configuration] Fix the invalid configuration when using 'env.log.max' on yarn deployment mode
 add cdf314d30b5 [FLINK-34116][test] Enhance GlobalConfigurationTest.testInvalidStandardYamlFile for JDK compatibility.

No new revisions were added by this update.

Summary of changes:
 .../flink/configuration/GlobalConfigurationTest.java   | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)



(flink) 02/02: [FLINK-34102][configuration] Fix the invalid configuration when using 'env.log.max' on yarn deployment mode

2024-01-16 Thread fanrui
This is an automated email from the ASF dual-hosted git repository.

fanrui pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a20d57b18bbab93e943d0b64ffb993382697d242
Author: Roc Marshal 
AuthorDate: Tue Jan 16 13:22:37 2024 +0800

[FLINK-34102][configuration] Fix the invalid configuration when using 'env.log.max' on yarn deployment mode
---
 .../src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java   | 1 +
 1 file changed, 1 insertion(+)

diff --git a/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java b/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
index 97e49b8046a..795567bc9f8 100644
--- a/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
+++ b/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
@@ -207,6 +207,7 @@ public class YarnClusterDescriptor implements ClusterDescriptor<ApplicationId> {
         this.userJarInclusion = getUserJarInclusionMode(flinkConfiguration);
 
         adaptEnvSetting(flinkConfiguration, CoreOptions.FLINK_LOG_LEVEL, "ROOT_LOG_LEVEL");
+        adaptEnvSetting(flinkConfiguration, CoreOptions.FLINK_LOG_MAX, "MAX_LOG_FILE_NUMBER");
 
         getLocalFlinkDistPath(flinkConfiguration).ifPresent(this::setLocalJarPath);
         decodeFilesToShipToCluster(flinkConfiguration, YarnConfigOptions.SHIP_FILES)



(flink) 01/02: [FLINK-33988][configuration] Fix the invalid configuration when using initialized root logger level on yarn deployment mode

2024-01-16 Thread fanrui
This is an automated email from the ASF dual-hosted git repository.

fanrui pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit eb6d6ff56747e3b34e941a4cd301cd851c0b0b25
Author: Roc Marshal 
AuthorDate: Mon Jan 8 13:26:34 2024 +0800

[FLINK-33988][configuration] Fix the invalid configuration when using initialized root logger level on yarn deployment mode
---
 .../org/apache/flink/yarn/YarnClusterDescriptor.java  | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java b/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
index a0732992e8b..97e49b8046a 100644
--- a/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
+++ b/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
@@ -136,6 +136,8 @@ import static org.apache.flink.client.deployment.application.ApplicationConfigur
 import static org.apache.flink.configuration.ConfigConstants.DEFAULT_FLINK_USR_LIB_DIR;
 import static org.apache.flink.configuration.ConfigConstants.ENV_FLINK_LIB_DIR;
 import static org.apache.flink.configuration.ConfigConstants.ENV_FLINK_OPT_DIR;
+import static org.apache.flink.configuration.ResourceManagerOptions.CONTAINERIZED_MASTER_ENV_PREFIX;
+import static org.apache.flink.configuration.ResourceManagerOptions.CONTAINERIZED_TASK_MANAGER_ENV_PREFIX;
 import static org.apache.flink.runtime.entrypoint.component.FileJobGraphRetriever.JOB_GRAPH_FILE_PATH;
 import static org.apache.flink.util.Preconditions.checkArgument;
 import static org.apache.flink.util.Preconditions.checkNotNull;
@@ -204,6 +206,8 @@ public class YarnClusterDescriptor implements ClusterDescriptor<ApplicationId> {
         this.flinkConfiguration = Preconditions.checkNotNull(flinkConfiguration);
         this.userJarInclusion = getUserJarInclusionMode(flinkConfiguration);
 
+        adaptEnvSetting(flinkConfiguration, CoreOptions.FLINK_LOG_LEVEL, "ROOT_LOG_LEVEL");
+
         getLocalFlinkDistPath(flinkConfiguration).ifPresent(this::setLocalJarPath);
         decodeFilesToShipToCluster(flinkConfiguration, YarnConfigOptions.SHIP_FILES)
                 .ifPresent(this::addShipFiles);
@@ -216,6 +220,21 @@ public class YarnClusterDescriptor implements ClusterDescriptor<ApplicationId> {
         this.nodeLabel = flinkConfiguration.getString(YarnConfigOptions.NODE_LABEL);
     }
 
+    /** Adapt flink env setting. */
+    private static <T> void adaptEnvSetting(
+            Configuration config, ConfigOption<T> configOption, String envKey) {
+        config.getOptional(configOption)
+                .ifPresent(
+                        value -> {
+                            config.setString(
+                                    CONTAINERIZED_MASTER_ENV_PREFIX + envKey,
+                                    String.valueOf(value));
+                            config.setString(
+                                    CONTAINERIZED_TASK_MANAGER_ENV_PREFIX + envKey,
+                                    String.valueOf(value));
+                        });
+    }
+
     private Optional<List<Path>> decodeFilesToShipToCluster(
             final Configuration configuration, final ConfigOption<List<String>> configOption) {
         checkNotNull(configuration);
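The helper introduced by this commit fans one Flink option out to both container environments. Its effect on the configuration, sketched in shell (the `containerized.master.env.` / `containerized.taskmanager.env.` prefix strings are assumed to be the values of the two imported constants; the function name below is illustrative):

```shell
# Sketch of adaptEnvSetting's effect: when an option such as env.log.max
# is set, its value is mirrored under both containerized env prefixes so
# the JobManager and TaskManager containers see it as an env variable.
adapt_env_setting() {
    env_key="$1" value="$2"
    [ -n "$value" ] || return 0    # option not set: nothing to adapt
    echo "containerized.master.env.${env_key}: ${value}"
    echo "containerized.taskmanager.env.${env_key}: ${value}"
}

adapt_env_setting MAX_LOG_FILE_NUMBER 10
```

With the fix, `env.log.max` now reaches the YARN containers the same way `env.log.level` does.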



(flink) branch master updated (fb7324b05bb -> a20d57b18bb)

2024-01-16 Thread fanrui
This is an automated email from the ASF dual-hosted git repository.

fanrui pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from fb7324b05bb [FLINK-33679][checkpointing] Fix the misleading description about RestoreMode#LEGACY(#24101)
 new eb6d6ff5674 [FLINK-33988][configuration] Fix the invalid configuration when using initialized root logger level on yarn deployment mode
 new a20d57b18bb [FLINK-34102][configuration] Fix the invalid configuration when using 'env.log.max' on yarn deployment mode

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../org/apache/flink/yarn/YarnClusterDescriptor.java | 20 
 1 file changed, 20 insertions(+)



(flink) branch master updated: [FLINK-33679][checkpointing] Fix the misleading description about RestoreMode#LEGACY(#24101)

2024-01-16 Thread hangxiang
This is an automated email from the ASF dual-hosted git repository.

hangxiang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new fb7324b05bb [FLINK-33679][checkpointing] Fix the misleading description about RestoreMode#LEGACY(#24101)
fb7324b05bb is described below

commit fb7324b05bbc11a92a686e8d7c3cfe5ef39be148
Author: Zakelly 
AuthorDate: Tue Jan 16 18:36:40 2024 +0800

[FLINK-33679][checkpointing] Fix the misleading description about RestoreMode#LEGACY(#24101)
---
 docs/layouts/shortcodes/generated/savepoint_config_configuration.html   | 2 +-
 .../src/main/java/org/apache/flink/runtime/jobgraph/RestoreMode.java| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/layouts/shortcodes/generated/savepoint_config_configuration.html b/docs/layouts/shortcodes/generated/savepoint_config_configuration.html
index 2a40bd78b33..13baca7c989 100644
--- a/docs/layouts/shortcodes/generated/savepoint_config_configuration.html
+++ b/docs/layouts/shortcodes/generated/savepoint_config_configuration.html
@@ -12,7 +12,7 @@
 execution.savepoint-restore-mode
 NO_CLAIM
 Enum
-Describes the mode how Flink should restore from the given savepoint or retained checkpoint.Possible values:"CLAIM": Flink will take ownership of the given snapshot. It will clean the snapshot once it is subsumed by newer ones."NO_CLAIM": Flink will not claim ownership of the snapshot files. However it will make sure it does not depend on any artefacts from the restored snapshot. In order to do that, Flink will take the first checkpoint as a f [...]
+Describes the mode how Flink should restore from the given savepoint or retained checkpoint.Possible values:"CLAIM": Flink will take ownership of the given snapshot. It will clean the snapshot once it is subsumed by newer ones."NO_CLAIM": Flink will not claim ownership of the snapshot files. However it will make sure it does not depend on any artefacts from the restored snapshot. In order to do that, Flink will take the first checkpoint as a f [...]
 
 
 execution.savepoint.ignore-unclaimed-state
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/RestoreMode.java b/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/RestoreMode.java
index da6c325265a..14bac3c083b 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/RestoreMode.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/RestoreMode.java
@@ -37,7 +37,7 @@ public enum RestoreMode implements DescribedEnum {
                     + " Flink will take the first checkpoint as a full one, which means it might"
                     + " reupload/duplicate files that are part of the restored checkpoint."),
     LEGACY(
-            "This is the mode in which Flink worked so far. It will not claim ownership of the"
+            "This is the mode in which Flink worked until 1.15. It will not claim ownership of the"
                     + " snapshot and will not delete the files. However, it can directly depend on"
                     + " the existence of the files of the restored checkpoint. It might not be safe"
                     + " to delete checkpoints that were restored in legacy mode ");
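The restore mode this enum describes is selected through the configuration option shown in the generated HTML table above. A minimal flink-conf.yaml fragment (option name, default, and values as they appear in the diff):

```yaml
# How Flink treats the snapshot it restores from (see descriptions above).
# CLAIM    - Flink takes ownership and may clean up the snapshot.
# NO_CLAIM - default; the first checkpoint after restore is a full one.
# LEGACY   - pre-1.15 behaviour; restored files may still be depended on.
execution.savepoint-restore-mode: NO_CLAIM
```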



(flink) branch master updated (488d60a1d39 -> 534df6490e0)

2024-01-16 Thread dwysakowicz
This is an automated email from the ASF dual-hosted git repository.

dwysakowicz pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 488d60a1d39 [FLINK-34066][table-planner] Fix LagFunction throw NPE when input argument are not null (#24075)
 add 36886541522 [FLINK-33958] Implement restore tests for IntervalJoin node
 add 534df6490e0 [FLINK-33958] Remove IntervalJoin Json Plan & IT tests

No new revisions were added by this update.

Summary of changes:
 .../exec/stream/IntervalJoinJsonPlanTest.java  |  99 ---
 .../nodes/exec/stream/IntervalJoinRestoreTest.java |  41 ++
 .../exec/stream/IntervalJoinTestPrograms.java  | 194 +
 .../jsonplan/IntervalJoinJsonPlanITCase.java   | 117 ---
 .../testProcessingTimeInnerJoinWithOnClause.out| 784 -
 .../plan/interval-join-event-time.json}| 257 +++
 .../interval-join-event-time/savepoint/_metadata   | Bin 0 -> 19515 bytes
 .../plan/interval-join-negative-interval.json} | 321 -
 .../savepoint/_metadata| Bin 0 -> 9505 bytes
 .../plan/interval-join-proc-time.json  | 445 
 .../interval-join-proc-time/savepoint/_metadata| Bin 0 -> 19511 bytes
 11 files changed, 920 insertions(+), 1338 deletions(-)
 delete mode 100644 flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/IntervalJoinJsonPlanTest.java
 create mode 100644 flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/IntervalJoinRestoreTest.java
 create mode 100644 flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/IntervalJoinTestPrograms.java
 delete mode 100644 flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/runtime/stream/jsonplan/IntervalJoinJsonPlanITCase.java
 delete mode 100644 flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/nodes/exec/stream/IntervalJoinJsonPlanTest_jsonplan/testProcessingTimeInnerJoinWithOnClause.out
 copy flink-table/flink-table-planner/src/test/resources/{org/apache/flink/table/planner/plan/nodes/exec/stream/IntervalJoinJsonPlanTest_jsonplan/testRowTimeInnerJoinWithOnClause.out => restore-tests/stream-exec-interval-join_1/interval-join-event-time/plan/interval-join-event-time.json} (66%)
 create mode 100644 flink-table/flink-table-planner/src/test/resources/restore-tests/stream-exec-interval-join_1/interval-join-event-time/savepoint/_metadata
 rename flink-table/flink-table-planner/src/test/resources/{org/apache/flink/table/planner/plan/nodes/exec/stream/IntervalJoinJsonPlanTest_jsonplan/testRowTimeInnerJoinWithOnClause.out => restore-tests/stream-exec-interval-join_1/interval-join-negative-interval/plan/interval-join-negative-interval.json} (62%)
 create mode 100644 flink-table/flink-table-planner/src/test/resources/restore-tests/stream-exec-interval-join_1/interval-join-negative-interval/savepoint/_metadata
 create mode 100644 flink-table/flink-table-planner/src/test/resources/restore-tests/stream-exec-interval-join_1/interval-join-proc-time/plan/interval-join-proc-time.json
 create mode 100644 flink-table/flink-table-planner/src/test/resources/restore-tests/stream-exec-interval-join_1/interval-join-proc-time/savepoint/_metadata



(flink) annotated tag release-1.18.1 updated (a8c8b1c0e2c -> 1a713e5ac88)

2024-01-16 Thread jingge
This is an automated email from the ASF dual-hosted git repository.

jingge pushed a change to annotated tag release-1.18.1
in repository https://gitbox.apache.org/repos/asf/flink.git


*** WARNING: tag release-1.18.1 was modified! ***

from a8c8b1c0e2c (commit)
  to 1a713e5ac88 (tag)
 tagging a8c8b1c0e2c5e2e468f6de62f31a69e90af5c96e (commit)
 replaces pre-apache-rename
  by jingge
  on Tue Jan 16 18:41:13 2024 +0100

- Log -
Release Flink 1.18.1
-BEGIN PGP SIGNATURE-

iQIzBAABCgAdFiEElq4OMsvm4HU85t9ssHjR0yU6jYIFAmWmv7kACgkQsHjR0yU6
jYKbDQ//eMAxtINCWip1zrVzWoIKiBXEkMCXq84KxeQ9e3QRAntPgZX1dSUyLpM6
jFNkfYTEezXl/8/29JNgEppALOD/g3DjImxW9nRFtYUymL2PnpDYhHj97HWpmnGt
ew6pOzT3cX+ff5f/0Tpd5fwyCw1ZpG3mgiaxlhVcQwBfY2PdaRGXyLjPM58ZWQc4
ivJHc8G9Y5Ja2vr35ghBKoA1IlL5swTX+qRZsRO3Xg8xooEAqcJ6P0rhthMv2Gw1
QrFK2RNEjdU0swtks5rcWNYFR1/XSaPXEHbB8gmWAsZXu2KINLpBRKDLv5gTv5Qs
vaP1dHiKU49Wwv2hYQrJGO6XaWBjDs7PnM8MJUQK1UmcjlhTzdkt9VUYjxO6WaMk
81dbj9agvAOoFxzY1urchylXvKnlxO4JleMj2pK2QuEId7VNRk84tICuo4S8Z9uj
Gs+uLHFsrSywtCWR8XaOR0PRrYa33gKBX2nWXkatJYqwxbANjFSmxc3I/+EuqWk0
BomjByKmgM7x+zH2VCoMlSd8CoB4puxHWcxG5Rx87OKXtiwl0kCqz9WpvKLbb/Vt
UAp1uQpj8Pwzi6ilAV/uuyUxyu0y+Gu6yycesOPi0ArLCUvhmoG+XXKGlCp9n2+z
NJ1Ajs7lgz55l8qNl9ZY9fNDx74HB+esMtwYQLl9Jc9w6cQGA0Q=
=MDs6
-END PGP SIGNATURE-
---


No new revisions were added by this update.

Summary of changes:



svn commit: r66638 - /dev/flink/flink-1.18.1-rc1/

2024-01-16 Thread jingge
Author: jingge
Date: Tue Jan 16 13:06:26 2024
New Revision: 66638

Log:
deleting flink-1.18.1-rc1

Removed:
dev/flink/flink-1.18.1-rc1/



(flink) branch master updated (d92ab390ff0 -> 488d60a1d39)

2024-01-16 Thread shengkai
This is an automated email from the ASF dual-hosted git repository.

shengkai pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from d92ab390ff0 [FLINK-27756] Refactor Async Sink send gauge test
 add 488d60a1d39 [FLINK-34066][table-planner] Fix LagFunction throw NPE when input argument are not null (#24075)

No new revisions were added by this update.

Summary of changes:
 .../flink/table/planner/runtime/stream/sql/AggregateITCase.scala | 5 +++--
 .../flink/table/runtime/functions/aggregate/LagAggFunction.java  | 3 ++-
 2 files changed, 5 insertions(+), 3 deletions(-)



(flink-connector-elasticsearch) branch v3.0 updated: [hotfix] Set maximum supported Flink version to 1.17 for `v3.0`

2024-01-16 Thread martijnvisser
This is an automated email from the ASF dual-hosted git repository.

martijnvisser pushed a commit to branch v3.0
in repository https://gitbox.apache.org/repos/asf/flink-connector-elasticsearch.git


The following commit(s) were added to refs/heads/v3.0 by this push:
 new f94649a  [hotfix] Set maximum supported Flink version to 1.17 for `v3.0`
f94649a is described below

commit f94649a60af367109f38c651d4dc819e0a430523
Author: Martijn Visser <2989614+martijnvis...@users.noreply.github.com>
AuthorDate: Tue Jan 16 10:42:10 2024 +0100

[hotfix] Set maximum supported Flink version to 1.17 for `v3.0`
---
 .github/workflows/push_pr.yml | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/.github/workflows/push_pr.yml b/.github/workflows/push_pr.yml
index 89ea3fb..4f22f02 100644
--- a/.github/workflows/push_pr.yml
+++ b/.github/workflows/push_pr.yml
@@ -16,13 +16,18 @@
 # limitations under the License.
 

 
-name: Build flink-connector-elasticsearch
+name: CI
 on: [push, pull_request]
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
 jobs:
   compile_and_test:
+    strategy:
+      matrix:
+        flink: [ 1.16-SNAPSHOT, 1.17-SNAPSHOT ]
+        jdk: [ '8, 11' ]
     uses: apache/flink-connector-shared-utils/.github/workflows/ci.yml@ci_utils
     with:
-      flink_version: 1.16.2
+      flink_version: ${{ matrix.flink }}
+      jdk_version: ${{ matrix.jdk }}



(flink-connector-elasticsearch) branch main updated: [hotfix] Set maximum supported Flink version to 1.17 for `v3.0`

2024-01-16 Thread martijnvisser
This is an automated email from the ASF dual-hosted git repository.

martijnvisser pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-connector-elasticsearch.git


The following commit(s) were added to refs/heads/main by this push:
 new 153b8fc  [hotfix] Set maximum supported Flink version to 1.17 for `v3.0`
153b8fc is described below

commit 153b8fc23e14c03c4bacf2c24fbe0fee286ec6e2
Author: Martijn Visser <2989614+martijnvis...@users.noreply.github.com>
AuthorDate: Tue Jan 16 10:39:25 2024 +0100

[hotfix] Set maximum supported Flink version to 1.17 for `v3.0`
---
 .github/workflows/weekly.yml | 4 
 1 file changed, 4 deletions(-)

diff --git a/.github/workflows/weekly.yml b/.github/workflows/weekly.yml
index 19904b2..109058c 100644
--- a/.github/workflows/weekly.yml
+++ b/.github/workflows/weekly.yml
@@ -46,10 +46,6 @@ jobs:
 }, {
   flink: 1.17.1,
   branch: v3.0
-}, {
-  flink: 1.18.0,
-  jdk: '8, 11, 17',
-  branch: v3.0
 }]
 uses: apache/flink-connector-shared-utils/.github/workflows/ci.yml@ci_utils
 with:



(flink) branch release-1.17 updated: [FLINK-27756] Refactor Async Sink send gauge test

2024-01-16 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.17 by this push:
 new ca62b0070cf [FLINK-27756] Refactor Async Sink send gauge test
ca62b0070cf is described below

commit ca62b0070cf30c3154b180b8ffddcc422c066cc7
Author: Ahmed Hamdy 
AuthorDate: Mon Jan 1 15:41:27 2024 +

[FLINK-27756] Refactor Async Sink send gauge test
---
 .../base/sink/writer/AsyncSinkWriterTest.java  | 18 +++---
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriterTest.java b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriterTest.java
index ac4d9447e15..774d2436143 100644
--- a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriterTest.java
+++ b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriterTest.java
@@ -112,16 +112,20 @@ public class AsyncSinkWriterTest {
 AsyncSinkWriterImpl sink =
 new AsyncSinkWriterImplBuilder()
 .context(sinkInitContext)
-.maxBatchSize(4)
-.delay(100)
+.maxBatchSize(2)
+.delay(50)
 .build();
-for (int i = 0; i < 4; i++) {
-sink.write(String.valueOf(i));
-}
+sink.write(String.valueOf(1));
+// introduce artificial delay, shouldn't be calculated in send time
+Thread.sleep(50);
+long sendStartTimestamp = System.currentTimeMillis();
 sink.flush(true);
+long sendCompleteTimestamp = System.currentTimeMillis();
+
+assertThat(sinkInitContext.getCurrentSendTimeGauge().get().getValue())
+.isGreaterThanOrEqualTo(50);
 assertThat(sinkInitContext.getCurrentSendTimeGauge().get().getValue())
-                .isGreaterThanOrEqualTo(99);
-        assertThat(sinkInitContext.getCurrentSendTimeGauge().get().getValue()).isLessThan(120);
+                .isLessThanOrEqualTo(sendCompleteTimestamp - sendStartTimestamp);
 }
 
 @Test



(flink) branch release-1.18 updated: [FLINK-27756] Refactor Async Sink send gauge test

2024-01-16 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch release-1.18
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.18 by this push:
 new 2b6c656ff31 [FLINK-27756] Refactor Async Sink send gauge test
2b6c656ff31 is described below

commit 2b6c656ff31b6d84ca9b9cea4e10e96a36ceda20
Author: Ahmed Hamdy 
AuthorDate: Mon Jan 1 15:41:27 2024 +

[FLINK-27756] Refactor Async Sink send gauge test
---
 .../base/sink/writer/AsyncSinkWriterTest.java  | 18 +++---
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriterTest.java b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriterTest.java
index ac4d9447e15..774d2436143 100644
--- a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriterTest.java
+++ b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriterTest.java
@@ -112,16 +112,20 @@ public class AsyncSinkWriterTest {
 AsyncSinkWriterImpl sink =
 new AsyncSinkWriterImplBuilder()
 .context(sinkInitContext)
-.maxBatchSize(4)
-.delay(100)
+.maxBatchSize(2)
+.delay(50)
 .build();
-for (int i = 0; i < 4; i++) {
-sink.write(String.valueOf(i));
-}
+sink.write(String.valueOf(1));
+// introduce artificial delay, shouldn't be calculated in send time
+Thread.sleep(50);
+long sendStartTimestamp = System.currentTimeMillis();
 sink.flush(true);
+long sendCompleteTimestamp = System.currentTimeMillis();
+
+assertThat(sinkInitContext.getCurrentSendTimeGauge().get().getValue())
+.isGreaterThanOrEqualTo(50);
 assertThat(sinkInitContext.getCurrentSendTimeGauge().get().getValue())
-                .isGreaterThanOrEqualTo(99);
-        assertThat(sinkInitContext.getCurrentSendTimeGauge().get().getValue()).isLessThan(120);
+                .isLessThanOrEqualTo(sendCompleteTimestamp - sendStartTimestamp);
 }
 
 @Test



(flink) branch master updated (eb8af0c589c -> d92ab390ff0)

2024-01-16 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from eb8af0c589c [FLINK-33980][core] Support generate StateBackend and CheckpointStorage from job configuration.
 add d92ab390ff0 [FLINK-27756] Refactor Async Sink send gauge test

No new revisions were added by this update.

Summary of changes:
 .../base/sink/writer/AsyncSinkWriterTest.java  | 18 +++---
 1 file changed, 11 insertions(+), 7 deletions(-)