(flink) branch master updated: [hotfix] Mention mTLS in SSL documentation page (#24755)

2024-05-16 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 48e5a39c955 [hotfix] Mention mTLS in SSL documentation page (#24755)
48e5a39c955 is described below

commit 48e5a39c9558083afa7589d2d8b054b625f61ee9
Author: Robert Metzger <89049+rmetz...@users.noreply.github.com>
AuthorDate: Thu May 16 15:48:45 2024 +0200

[hotfix] Mention mTLS in SSL documentation page (#24755)
---
 docs/content.zh/docs/deployment/security/security-ssl.md | 2 +-
 docs/content/docs/deployment/security/security-ssl.md    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/content.zh/docs/deployment/security/security-ssl.md b/docs/content.zh/docs/deployment/security/security-ssl.md
index 4b18ad983dc..baa3fc2091e 100644
--- a/docs/content.zh/docs/deployment/security/security-ssl.md
+++ b/docs/content.zh/docs/deployment/security/security-ssl.md
@@ -49,7 +49,7 @@ Internal connectivity includes:
  - The data plane: The connections between TaskManagers to exchange data during shuffles, broadcasts, redistribution, etc.
   - The Blob Service (distribution of libraries and other artifacts).
 
-All internal connections are SSL authenticated and encrypted. The connections use **mutual authentication**, meaning both server
+All internal connections are SSL authenticated and encrypted. The connections use **mutual authentication** (mTLS), meaning both server
 and client side of each connection need to present the certificate to each other. The certificate acts effectively as a shared
 secret when a dedicated CA is used to exclusively sign an internal cert.
 The certificate for internal communication is not needed by any other party to interact with Flink, and can be simply
diff --git a/docs/content/docs/deployment/security/security-ssl.md b/docs/content/docs/deployment/security/security-ssl.md
index 90f4db70c3b..a13d663c2ac 100644
--- a/docs/content/docs/deployment/security/security-ssl.md
+++ b/docs/content/docs/deployment/security/security-ssl.md
@@ -49,7 +49,7 @@ Internal connectivity includes:
  - The data plane: The connections between TaskManagers to exchange data during shuffles, broadcasts, redistribution, etc.
   - The Blob Service (distribution of libraries and other artifacts).
 
-All internal connections are SSL authenticated and encrypted. The connections use **mutual authentication**, meaning both server
+All internal connections are SSL authenticated and encrypted. The connections use **mutual authentication** (mTLS), meaning both server
 and client side of each connection need to present the certificate to each other. The certificate acts effectively as a shared
 secret when a dedicated CA is used to exclusively sign an internal cert.
 The certificate for internal communication is not needed by any other party to interact with Flink, and can be simply
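To make the "mutual authentication" wording concrete: at the TLS layer, mTLS simply means the server side *requires* the peer to present a certificate (and verifies it against the trusted CA), rather than only the client verifying the server. A minimal Python sketch of that property (illustrative only; Flink itself configures internal mTLS through its `security.ssl.internal.*` options, not Python, and the certificate paths in the comments are hypothetical):

```python
import ssl

def mutual_auth_server_context():
    """Build a server-side TLS context that requires the client to present
    a certificate, which is the defining property of mutual TLS (mTLS)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # In a real deployment the shared internal certificate and the dedicated
    # CA would be loaded here (paths are hypothetical):
    # ctx.load_cert_chain("internal.crt", "internal.key")
    # ctx.load_verify_locations("internal-ca.crt")
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a certificate
    return ctx
```

A plain TLS server context defaults to `CERT_NONE` (clients are anonymous); flipping `verify_mode` to `CERT_REQUIRED` is what turns one-way TLS into the mutual authentication the documentation describes.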



(flink) branch release-1.19 updated: [hotfix][table] Fix typo in maven shade configuration (#24620)

2024-04-04 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.19
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.19 by this push:
 new 300889629bb [hotfix][table] Fix typo in maven shade configuration (#24620)
300889629bb is described below

commit 300889629bb3d53420f3238cc1968c3b2e0ea085
Author: Robert Metzger <89049+rmetz...@users.noreply.github.com>
AuthorDate: Thu Apr 4 21:11:26 2024 +0200

[hotfix][table] Fix typo in maven shade configuration (#24620)

Co-authored-by: anupamaggarwal 
---
 flink-table/flink-sql-jdbc-driver-bundle/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flink-table/flink-sql-jdbc-driver-bundle/pom.xml b/flink-table/flink-sql-jdbc-driver-bundle/pom.xml
index d310182adff..2bb417458be 100644
--- a/flink-table/flink-sql-jdbc-driver-bundle/pom.xml
+++ b/flink-table/flink-sql-jdbc-driver-bundle/pom.xml
@@ -111,7 +111,7 @@

org.apache.flink:flink-table-common

org.apache.flink:flink-annotations

org.apache.flink:flink-core
-   
org.apache.flink:flink-runtime
+   
org.apache.flink:flink-runtime

org.apache.flink:flink-clients

org.apache.flink:flink-table-api-java

org.apache.flink:flink-json



(flink) branch release-1.18 updated: [hotfix][table] Fix typo in maven shade configuration (#24621)

2024-04-04 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.18
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.18 by this push:
 new 4c580cbe87d [hotfix][table] Fix typo in maven shade configuration (#24621)
4c580cbe87d is described below

commit 4c580cbe87d3b76d0906a1d264176f88480a890d
Author: Robert Metzger <89049+rmetz...@users.noreply.github.com>
AuthorDate: Thu Apr 4 21:11:02 2024 +0200

[hotfix][table] Fix typo in maven shade configuration (#24621)

Co-authored-by: anupamaggarwal 
---
 flink-table/flink-sql-jdbc-driver-bundle/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flink-table/flink-sql-jdbc-driver-bundle/pom.xml b/flink-table/flink-sql-jdbc-driver-bundle/pom.xml
index 2f8fcba3288..b918e1482fa 100644
--- a/flink-table/flink-sql-jdbc-driver-bundle/pom.xml
+++ b/flink-table/flink-sql-jdbc-driver-bundle/pom.xml
@@ -111,7 +111,7 @@

org.apache.flink:flink-table-common

org.apache.flink:flink-annotations

org.apache.flink:flink-core
-   
org.apache.flink:flink-runtime
+   
org.apache.flink:flink-runtime

org.apache.flink:flink-clients

org.apache.flink:flink-table-api-java

org.apache.flink:flink-json



(flink) branch master updated: [FLINK-29122][core] Improve robustness of FileUtils.expandDirectory() (#24307)

2024-03-13 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 0da60ca1a47 [FLINK-29122][core] Improve robustness of FileUtils.expandDirectory() (#24307)
0da60ca1a47 is described below

commit 0da60ca1a4754f858cf7c52dd4f0c97ae0e1b0cb
Author: Anupam Aggarwal 
AuthorDate: Wed Mar 13 20:31:09 2024 +0530

[FLINK-29122][core] Improve robustness of FileUtils.expandDirectory() (#24307)
---
 .../main/java/org/apache/flink/util/FileUtils.java | 14 +++
 .../java/org/apache/flink/util/FileUtilsTest.java  | 49 ++
 2 files changed, 63 insertions(+)

diff --git a/flink-core/src/main/java/org/apache/flink/util/FileUtils.java b/flink-core/src/main/java/org/apache/flink/util/FileUtils.java
index e09ec67f987..1571906a006 100644
--- a/flink-core/src/main/java/org/apache/flink/util/FileUtils.java
+++ b/flink-core/src/main/java/org/apache/flink/util/FileUtils.java
@@ -515,12 +515,22 @@ public final class FileUtils {
 }
 }
 
+/**
+ * Un-archives files inside the target directory. Illegal fs access outside target directory is
+ * not permitted.
+ *
+ * @param file path to zipped/archived file
+ * @param targetDirectory directory path where file needs to be unarchived
+ * @return path to folder with unarchived contents
+ * @throws IOException if file open fails or in case of unsafe access outside target directory
+ */
 public static Path expandDirectory(Path file, Path targetDirectory) throws IOException {
 FileSystem sourceFs = file.getFileSystem();
 FileSystem targetFs = targetDirectory.getFileSystem();
 Path rootDir = null;
 try (ZipInputStream zis = new ZipInputStream(sourceFs.open(file))) {
 ZipEntry entry;
+String targetDirStr = targetDirectory.toString();
 while ((entry = zis.getNextEntry()) != null) {
 Path relativePath = new Path(entry.getName());
 if (rootDir == null) {
@@ -529,6 +539,10 @@ public final class FileUtils {
 }
 
 Path newFile = new Path(targetDirectory, relativePath);
+if (!newFile.toString().startsWith(targetDirStr)) {
+throw new IOException("Illegal escape from target directory");
+}
+
 if (entry.isDirectory()) {
 targetFs.mkdirs(newFile);
 } else {
diff --git a/flink-core/src/test/java/org/apache/flink/util/FileUtilsTest.java b/flink-core/src/test/java/org/apache/flink/util/FileUtilsTest.java
index 0a9953c060b..1576305d0aa 100644
--- a/flink-core/src/test/java/org/apache/flink/util/FileUtilsTest.java
+++ b/flink-core/src/test/java/org/apache/flink/util/FileUtilsTest.java
@@ -51,6 +51,8 @@ import java.util.List;
 import java.util.Random;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipOutputStream;
 
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.assertj.core.api.Assertions.assertThatThrownBy;
@@ -517,6 +519,53 @@ public class FileUtilsTest {
  assertDirEquals(compressDir.resolve(originalDir), extractDir.resolve(originalDir));
 }
 
+/**
+ * Generates zip archive, containing path to file/dir passed in, un-archives the generated zip
+ * using {@link org.apache.flink.util.FileUtils#expandDirectory(org.apache.flink.core.fs.Path,
+ * org.apache.flink.core.fs.Path)} and returns path to expanded folder.
+ *
+ * @param prefix prefix to use for creating source and destination folders
+ * @param path path to contents of zip
+ * @return Path to folder containing unzipped contents
+ * @throws IOException if I/O error in file creation, un-archiving detects unsafe access outside
+ * target folder
+ */
+private Path writeZipAndFetchExpandedPath(String prefix, String path) throws IOException {
+// random source folder
+String sourcePath = prefix + "src";
+String dstPath = prefix + "dst";
+java.nio.file.Path srcFolder = TempDirUtils.newFolder(temporaryFolder, sourcePath).toPath();
+java.nio.file.Path zippedFile = srcFolder.resolve("file.zip");
+ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(zippedFile));
+ZipEntry e1 = new ZipEntry(path);
+out.putNextEntry(e1);
+out.close();
+java.nio.file.Path dstFolder = TempDirUtils.newFolder(temporaryFolder, dstPath).toPath();
+return FileUtils.expandDirectory(
+new Path(zippedFile.toString()), new Path(dstFolder.toString()));
+}
+
+@Test
+public void testExpandDirWithValidPaths() {
+Assertions.assertDoesNotThrow(() -> w

(flink) branch master updated (f88f7506834 -> f99e2e4fd6f)

2024-03-01 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from f88f7506834 [hotfix][runtime] Remove unneeded requestTaskManagerFileUploadByName (#24386)
 add f99e2e4fd6f [hotfix][table] Fix typo in maven shade configuration

No new revisions were added by this update.

Summary of changes:
 flink-table/flink-sql-jdbc-driver-bundle/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[flink] branch master updated: [FLINK-30636] [docs]: Typo fix; 'to to' -> 'to'

2023-08-03 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new b78cd3f5363 [FLINK-30636] [docs]: Typo fix; 'to to' -> 'to'
b78cd3f5363 is described below

commit b78cd3f536331d02eff9af4702904f331d90bc07
Author: Gunnar Morling 
AuthorDate: Wed Jan 11 11:22:26 2023 +0100

[FLINK-30636] [docs]: Typo fix; 'to to' -> 'to'
---
 .../docs/dev/table/hive-compatibility/hive-dialect/insert.md  | 2 +-
 docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md | 2 +-
 .../src/main/java/org/apache/flink/api/common/state/ListState.java| 2 +-
 flink-python/pyflink/datastream/state.py  | 2 +-
 .../org/apache/flink/runtime/webmonitor/testutils/HttpTestClient.java | 2 +-
 .../src/main/java/org/apache/flink/runtime/execution/Environment.java | 2 +-
 .../org/apache/flink/runtime/io/network/api/TaskEventHandler.java | 3 +--
 .../org/apache/flink/runtime/iterative/task/IterationHeadTask.java| 2 +-
 .../apache/flink/runtime/jobgraph/tasks/TaskOperatorEventGateway.java | 4 ++--
 .../org/apache/flink/runtime/state/internal/InternalListState.java| 2 +-
 .../runtime/io/network/partition/consumer/LocalInputChannelTest.java  | 4 ++--
 .../streaming/state/snapshot/RocksDBFullSnapshotResources.java| 2 +-
 .../streaming/api/functions/source/ContinuousFileReaderOperator.java  | 2 +-
 .../table/planner/plan/nodes/exec/batch/BatchExecLegacySink.java  | 2 +-
 .../flink/table/planner/plan/nodes/exec/batch/BatchExecSink.java  | 3 +--
 .../table/planner/plan/nodes/exec/common/CommonExecLegacySink.java| 2 +-
 .../table/planner/plan/nodes/exec/stream/StreamExecLegacySink.java| 2 +-
 .../flink/table/planner/plan/nodes/exec/stream/StreamExecSink.java| 2 +-
 .../planner/plan/nodes/physical/batch/BatchPhysicalLegacySink.scala   | 2 +-
 .../table/planner/plan/nodes/physical/batch/BatchPhysicalSink.scala   | 4 +---
 .../planner/plan/nodes/physical/stream/StreamPhysicalLegacySink.scala | 2 +-
 .../table/planner/plan/nodes/physical/stream/StreamPhysicalSink.scala | 2 +-
 .../scala/org/apache/flink/table/planner/plan/utils/RelShuttles.scala | 2 +-
 .../test/checkpointing/ProcessingTimeWindowCheckpointingITCase.java   | 2 +-
 24 files changed, 26 insertions(+), 30 deletions(-)

diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
index c4d862ef628..938ed6002e7 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
@@ -74,7 +74,7 @@ The dynamic partition columns must be specified last among the columns in the `S
 **Note:**
 
 In Hive, by default, users must specify at least one static partition in case of accidentally overwriting all partitions, and users can
-set the configuration `hive.exec.dynamic.partition.mode` to `nonstrict` to to allow all partitions to be dynamic.
+set the configuration `hive.exec.dynamic.partition.mode` to `nonstrict` to allow all partitions to be dynamic.
 
 But in Flink's Hive dialect, it'll always be `nonstrict` mode which means all partitions are allowed to be dynamic.
 {{< /hint >}}
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
index c4d862ef628..938ed6002e7 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
@@ -74,7 +74,7 @@ The dynamic partition columns must be specified last among the columns in the `S
 **Note:**
 
 In Hive, by default, users must specify at least one static partition in case of accidentally overwriting all partitions, and users can
-set the configuration `hive.exec.dynamic.partition.mode` to `nonstrict` to to allow all partitions to be dynamic.
+set the configuration `hive.exec.dynamic.partition.mode` to `nonstrict` to allow all partitions to be dynamic.
 
 But in Flink's Hive dialect, it'll always be `nonstrict` mode which means all partitions are allowed to be dynamic.
 {{< /hint >}}
diff --git a/flink-core/src/main/java/org/apache/flink/api/common/state/ListState.java b/flink-core/src/main/java/org/apache/flink/api/common/state/ListState.java
index fb6492834d7..7508054f5c0 100644
--- a/flink-core/src/main/java/org/apache/flink/api/common/state/ListState.java
+++ b/flink-core/src/main/java/org/apache/flink/api/common/state/ListState.java
@@ -44,7 +44,7 @@ import java.util.List;
 public interface ListState extends MergingState> {
 
 /**
- * Updates the operator state accessible by {@link #get()} by updating existing values to to the
+ * Updat

[flink] branch release-1.17 updated: [FLINK-31834][Azure] Free up disk space before caching

2023-05-05 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.17 by this push:
 new 91dfb22e0bc [FLINK-31834][Azure] Free up disk space before caching
91dfb22e0bc is described below

commit 91dfb22e0bc7ac10a9a9f59cd9da6d62a723dadd
Author: Robert Metzger 
AuthorDate: Tue Apr 18 16:45:46 2023 +0200

[FLINK-31834][Azure] Free up disk space before caching
---
 tools/azure-pipelines/e2e-template.yml | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/azure-pipelines/e2e-template.yml b/tools/azure-pipelines/e2e-template.yml
index d9e90045947..c0cb74674d0 100644
--- a/tools/azure-pipelines/e2e-template.yml
+++ b/tools/azure-pipelines/e2e-template.yml
@@ -44,6 +44,10 @@ jobs:
   echo "##vso[task.setvariable variable=skip;]0"
 fi
   displayName: Check if Docs only PR
+# free up disk space before running anything caching related. Caching has proven to fail in the past, due to lacking disk space.
+- script: ./tools/azure-pipelines/free_disk_space.sh
+  target: host
+  displayName: Free up disk space
 # the cache task does not create directories on a cache miss, and can later fail when trying to tar the directory if the test haven't created it
 # this may for example happen if a given directory is only used by a subset of tests, which are run in a different 'group'
 - bash: |
@@ -96,8 +100,6 @@ jobs:
 source ./tools/ci/maven-utils.sh
 setup_maven
 
-echo "Free up disk space"
-./tools/azure-pipelines/free_disk_space.sh
 
 # the APT mirrors access is based on a proposal from https://github.com/actions/runner-images/issues/7048#issuecomment-1419426054
 echo "Configure APT mirrors"
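The reordering in this patch runs the cleanup script before the cache task, because saving a cache can fail mid-tar on a full disk. A tiny Python sketch of the kind of pre-flight check such a pipeline step could perform (illustrative; the contents of `free_disk_space.sh` are not shown in this diff, and `enough_disk_for_cache` is a hypothetical helper):

```python
import shutil

def enough_disk_for_cache(path=".", required_gib=10):
    """Return True if the filesystem holding `path` has at least
    `required_gib` GiB free, so a cache-save step can bail out early
    instead of failing halfway through writing the archive."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= required_gib * 1024**3
```

Running a check (or cleanup) like this *before* the cache task, as the patch does, turns an opaque mid-pipeline tar failure into an early, explainable skip.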



[flink] branch master updated (921267bcd06 -> 15c4d88eb78)

2023-04-19 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 921267bcd06 [FLINK-31398][sql-gateway] Don't wrap context classloader for OperationExecutor when executing statement. (#22294)
 add 15c4d88eb78 [FLINK-31834][Azure] Free up disk space before caching

No new revisions were added by this update.

Summary of changes:
 tools/azure-pipelines/e2e-template.yml | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)



[flink] branch master updated: [hotfix] Update copyright NOTICE year to 2023

2023-03-21 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 4b10c7ccc40 [hotfix] Update copyright NOTICE year to 2023
4b10c7ccc40 is described below

commit 4b10c7ccc40b4466ffa057205cf4225c7c19b399
Author: Robert Metzger 
AuthorDate: Tue Mar 21 12:46:07 2023 +0100

[hotfix] Update copyright NOTICE year to 2023
---
 NOTICE  | 2 +-
 .../flink-connector-hive/src/main/resources/META-INF/NOTICE | 2 +-
 .../flink-sql-connector-hbase-1.4/src/main/resources/META-INF/NOTICE| 2 +-
 .../flink-sql-connector-hbase-2.2/src/main/resources/META-INF/NOTICE| 2 +-
 .../flink-sql-connector-hive-2.3.9/src/main/resources/META-INF/NOTICE   | 2 +-
 .../flink-sql-connector-hive-3.1.3/src/main/resources/META-INF/NOTICE   | 2 +-
 .../flink-sql-connector-kafka/src/main/resources/META-INF/NOTICE| 2 +-
 flink-dist-scala/src/main/resources/META-INF/NOTICE | 2 +-
 flink-dist/src/main/resources/META-INF/NOTICE   | 2 +-
 .../src/main/resources/META-INF/NOTICE  | 2 +-
 .../flink-azure-fs-hadoop/src/main/resources/META-INF/NOTICE| 2 +-
 .../flink-fs-hadoop-shaded/src/main/resources/META-INF/NOTICE   | 2 +-
 flink-filesystems/flink-gs-fs-hadoop/src/main/resources/META-INF/NOTICE | 2 +-
 .../flink-oss-fs-hadoop/src/main/resources/META-INF/NOTICE  | 2 +-
 flink-filesystems/flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE | 2 +-
 flink-filesystems/flink-s3-fs-presto/src/main/resources/META-INF/NOTICE | 2 +-
 .../src/main/resources/META-INF/NOTICE  | 2 +-
 flink-formats/flink-sql-avro/src/main/resources/META-INF/NOTICE | 2 +-
 flink-formats/flink-sql-orc/src/main/resources/META-INF/NOTICE  | 2 +-
 flink-formats/flink-sql-parquet/src/main/resources/META-INF/NOTICE  | 2 +-
 flink-formats/flink-sql-protobuf/src/main/resources/META-INF/NOTICE | 2 +-
 flink-kubernetes/src/main/resources/META-INF/NOTICE | 2 +-
 flink-metrics/flink-metrics-datadog/src/main/resources/META-INF/NOTICE  | 2 +-
 flink-metrics/flink-metrics-graphite/src/main/resources/META-INF/NOTICE | 2 +-
 flink-metrics/flink-metrics-influxdb/src/main/resources/META-INF/NOTICE | 2 +-
 .../flink-metrics-prometheus/src/main/resources/META-INF/NOTICE | 2 +-
 flink-python/src/main/resources/META-INF/NOTICE | 2 +-
 flink-rpc/flink-rpc-akka/src/main/resources/META-INF/NOTICE | 2 +-
 flink-runtime-web/src/main/resources/META-INF/NOTICE| 2 +-
 flink-runtime/src/main/resources/META-INF/NOTICE| 2 +-
 flink-table/flink-sql-client/src/main/resources/META-INF/NOTICE | 2 +-
 .../flink-table-api-java-uber/src/main/resources/META-INF/NOTICE| 2 +-
 .../flink-table-code-splitter/src/main/resources/META-INF/NOTICE| 2 +-
 .../src/main/resources/META-INF/NOTICE  | 2 +-
 flink-table/flink-table-planner/src/main/resources/META-INF/NOTICE  | 2 +-
 flink-table/flink-table-runtime/src/main/resources/META-INF/NOTICE  | 2 +-
 tools/releasing/NOTICE-binary_PREAMBLE.txt  | 2 +-
 37 files changed, 37 insertions(+), 37 deletions(-)

diff --git a/NOTICE b/NOTICE
index 2246bd3d158..04de5217e2b 100644
--- a/NOTICE
+++ b/NOTICE
@@ -1,5 +1,5 @@
 Apache Flink
-Copyright 2014-2022 The Apache Software Foundation
+Copyright 2014-2023 The Apache Software Foundation
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
diff --git a/flink-connectors/flink-connector-hive/src/main/resources/META-INF/NOTICE b/flink-connectors/flink-connector-hive/src/main/resources/META-INF/NOTICE
index 41cb58d028b..53111485d87 100644
--- a/flink-connectors/flink-connector-hive/src/main/resources/META-INF/NOTICE
+++ b/flink-connectors/flink-connector-hive/src/main/resources/META-INF/NOTICE
@@ -1,5 +1,5 @@
 flink-connector-hive
-Copyright 2014-2022 The Apache Software Foundation
+Copyright 2014-2023 The Apache Software Foundation
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
diff --git a/flink-connectors/flink-sql-connector-hbase-1.4/src/main/resources/META-INF/NOTICE b/flink-connectors/flink-sql-connector-hbase-1.4/src/main/resources/META-INF/NOTICE
index a8d707f4bb5..f1842fa572a 100644
--- a/flink-connectors/flink-sql-connector-hbase-1.4/src/main/resources/META-INF/NOTICE
+++ b/flink-connectors/flink-sql-connector-hbase-1.4/src/main/resources/META-INF/NOTICE
@@ -1,5 +1,5 @@
 flink-sql-connector-hbase-1.4
-Copyright 2014-2022 The Apache Software Foundation
+Copyright 2014-2023 The Apache

[flink-kubernetes-operator] 01/03: wip

2023-02-01 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch DE-3569
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git

commit 2f4e1580ff1a98d0ae30a93042ca9f06dcd34bcb
Author: Robert Metzger 
AuthorDate: Wed Feb 1 09:55:50 2023 +0100

wip
---
 .../kubernetes/operator/api/status/JobStatus.java  |   4 +
 .../{JobStatus.java => StatusExceptionInfo.java}   |  26 +
 .../operator/observer/JobStatusObserver.java   |  33 +-
 .../operator/service/AbstractFlinkService.java | 112 +++--
 .../kubernetes/operator/service/FlinkService.java  |   4 +
 .../crds/flinkdeployments.flink.apache.org-v1.yml  |  11 ++
 .../crds/flinksessionjobs.flink.apache.org-v1.yml  |  11 ++
 7 files changed, 102 insertions(+), 99 deletions(-)

diff --git a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/JobStatus.java b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/JobStatus.java
index 9b9e15e9..68f813bb 100644
--- a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/JobStatus.java
+++ b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/JobStatus.java
@@ -25,6 +25,8 @@ import lombok.Builder;
 import lombok.Data;
 import lombok.NoArgsConstructor;
 
+import java.util.List;
+
 /** Last observed status of the Flink job within an application deployment. */
 @Experimental
 @Data
@@ -50,4 +52,6 @@ public class JobStatus {
 
 /** Information about pending and last savepoint for the job. */
 private SavepointInfo savepointInfo = new SavepointInfo();
+
+private List exceptionHistory;
 }
diff --git a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/JobStatus.java b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/StatusExceptionInfo.java
similarity index 62%
copy from flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/JobStatus.java
copy to flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/StatusExceptionInfo.java
index 9b9e15e9..40a02324 100644
--- a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/JobStatus.java
+++ b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/StatusExceptionInfo.java
@@ -19,35 +19,19 @@ package org.apache.flink.kubernetes.operator.api.status;
 
 import org.apache.flink.annotation.Experimental;
 
-import io.fabric8.kubernetes.model.annotation.PrinterColumn;
 import lombok.AllArgsConstructor;
 import lombok.Builder;
 import lombok.Data;
 import lombok.NoArgsConstructor;
 
-/** Last observed status of the Flink job within an application deployment. */
+/** Exception history item for the JobStatus. */
 @Experimental
 @Data
 @NoArgsConstructor
 @AllArgsConstructor
 @Builder(toBuilder = true)
-public class JobStatus {
-/** Name of the job. */
-private String jobName;
-
-/** Flink JobId of the Job. */
-private String jobId;
-
-/** Last observed state of the job. */
-@PrinterColumn(name = "Job Status")
-private String state;
-
-/** Start time of the job. */
-private String startTime;
-
-/** Update time of the job. */
-private String updateTime;
-
-/** Information about pending and last savepoint for the job. */
-private SavepointInfo savepointInfo = new SavepointInfo();
+public class StatusExceptionInfo {
+private String stacktrace;
+private String taskName;
+private long timestamp;
 }
diff --git a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/JobStatusObserver.java b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/JobStatusObserver.java
index 1062796f..49c4f55a 100644
--- a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/JobStatusObserver.java
+++ b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/JobStatusObserver.java
@@ -20,11 +20,13 @@ package org.apache.flink.kubernetes.operator.observer;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.kubernetes.operator.api.AbstractFlinkResource;
 import org.apache.flink.kubernetes.operator.api.status.JobStatus;
+import org.apache.flink.kubernetes.operator.api.status.StatusExceptionInfo;
 import org.apache.flink.kubernetes.operator.config.FlinkConfigManager;
 import org.apache.flink.kubernetes.operator.controller.FlinkResourceContext;
 import org.apache.flink.kubernetes.operator.reconciler.ReconciliationUtils;
 import org.apache.flink.kubernetes.operator.utils.EventRecorder;
 import org.apache.flink.runtime.client.JobStatusMes

[flink-kubernetes-operator] branch DE-3569 created (now 286d3ae6)

2023-02-01 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch DE-3569
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


  at 286d3ae6 Setup deployment to codeartifact

This branch includes the following new commits:

 new 2f4e1580 wip
 new d0f4fd54 checkstyle
 new 286d3ae6 Setup deployment to codeartifact

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[flink-kubernetes-operator] 03/03: Setup deployment to codeartifact

2023-02-01 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch DE-3569
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git

commit 286d3ae6c309c8c945023631879278684979ee6d
Author: Robert Metzger 
AuthorDate: Wed Feb 1 15:32:42 2023 +0100

Setup deployment to codeartifact
---
 .github/workflows/codeartifact_push.yml | 48 +++
 .github/workflows/publish_snapshot.yml  | 59 -
 pom.xml | 14 
 3 files changed, 62 insertions(+), 59 deletions(-)

diff --git a/.github/workflows/codeartifact_push.yml b/.github/workflows/codeartifact_push.yml
new file mode 100644
index ..6dea0750
--- /dev/null
+++ b/.github/workflows/codeartifact_push.yml
@@ -0,0 +1,48 @@
+
+name: "Push to decodable AWS codeartifact"
+on:
+  workflow_dispatch:
+  push:
+branches:
+  - main
+  - 'release-*'
+tags:
+  - 'release-*-rc*'
+jobs:
+  build_n_push:
+runs-on: ubuntu-latest
+steps:
+  - name: Check out the repo
+uses: actions/checkout@v2
+  - name: Set up JDK 11
+uses: actions/setup-java@v2
+with:
+  java-version: '11'
+  distribution: 'adopt'
+  cache: maven
+  - name: Configure AWS Credentials
+uses: aws-actions/configure-aws-credentials@v1
+with:
+  aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+  aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+  aws-region: us-west-2
+  - name: Fetch AWS CodeArtifact token
+run: echo CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token --region us-west-2 --domain decodable --domain-owner 671293015970 --query authorizationToken --output text ) >> $GITHUB_ENV
+  - name: maven-settings-xml-action
+uses: whelk-io/maven-settings-xml-action@v20
+with:
+  servers: >
+[
+  {
+"id": "decodable-mvn-releases-local",
+"username": "aws",
+"password": "${env.CODEARTIFACT_AUTH_TOKEN}"
+  },
+{
+  "id": "decodable-mvn-snapshots-local",
+  "username": "aws",
+  "password": "${env.CODEARTIFACT_AUTH_TOKEN}"
+}
+]
+  - name: Build
+run: mvn clean deploy -B -DskipTests -Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
diff --git a/.github/workflows/publish_snapshot.yml b/.github/workflows/publish_snapshot.yml
deleted file mode 100644
index 5fce9fd7..
--- a/.github/workflows/publish_snapshot.yml
+++ /dev/null
@@ -1,59 +0,0 @@
-
-#  Licensed to the Apache Software Foundation (ASF) under one
-#  or more contributor license agreements.  See the NOTICE file
-#  distributed with this work for additional information
-#  regarding copyright ownership.  The ASF licenses this file
-#  to you under the Apache License, Version 2.0 (the
-#  "License"); you may not use this file except in compliance
-#  with the License.  You may obtain a copy of the License at
-#
-#  http://www.apache.org/licenses/LICENSE-2.0
-#
-#  Unless required by applicable law or agreed to in writing, software
-#  distributed under the License is distributed on an "AS IS" BASIS,
-#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#  See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-name: Publish Snapshot
-
-on:
-  schedule:
-    # At the end of every day
-    - cron: '0 0 * * *'
-  workflow_dispatch:
-jobs:
-  publish-snapshot:
-    if: github.repository == 'apache/flink-kubernetes-operator'
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v3
-        with:
-          ref: main
-      - name: Set up JDK 11
-        uses: actions/setup-java@v2
-        with:
-          java-version: '11'
-          distribution: 'adopt'
-      - name: Cache local Maven repository
-        uses: actions/cache@v3
-        with:
-          path: ~/.m2/repository
-          key: snapshot-maven-${{ hashFiles('**/pom.xml') }}
-          restore-keys: |
-            snapshot-maven-
-      - name: Publish snapshot
-        env:
-          ASF_USERNAME: ${{ secrets.NEXUS_USER }}
-          ASF_PASSWORD: ${{ secrets.NEXUS_PW }}
-        run: |
-          tmp_settings="tmp-settings.xml"
-          echo "<settings><servers><server>" > $tmp_settings
-          echo "<id>apache.snapshots.https</id><username>$ASF_USERNAME</username>" >> $tmp_settings
-          echo "<password>$ASF_PASSWORD</password>" >> $tmp_settings
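For context, the `echo` lines in this deleted step assemble a temporary Maven `settings.xml` on the fly (the archive truncates the script at this point). Assuming they emit the conventional ASF snapshot-deploy credentials block, the generated `tmp-settings.xml` would look roughly like:

```xml
<!-- Hypothetical expansion of the echo pipeline above;
     apache.snapshots.https is the standard ASF snapshot repository id. -->
<settings>
  <servers>
    <server>
      <id>apache.snapshots.https</id>
      <username>$ASF_USERNAME</username>
      <password>$ASF_PASSWORD</password>
    </server>
  </servers>
</settings>
```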

[flink-kubernetes-operator] 02/03: checkstyle

2023-02-01 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch DE-3569
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git

commit d0f4fd545b3e8c7658553c45da253e08133e6a85
Author: Robert Metzger 
AuthorDate: Wed Feb 1 10:00:43 2023 +0100

checkstyle
---
 .../flink/kubernetes/operator/service/AbstractFlinkService.java   | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
index a6f2b7cd..eee07da0 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
@@ -57,7 +57,13 @@ import 
org.apache.flink.runtime.messages.webmonitor.MultipleJobsDetails;
 import org.apache.flink.runtime.rest.FileUpload;
 import org.apache.flink.runtime.rest.RestClient;
 import org.apache.flink.runtime.rest.handler.async.AsynchronousOperationResult;
-import org.apache.flink.runtime.rest.messages.*;
+import org.apache.flink.runtime.rest.messages.DashboardConfiguration;
+import org.apache.flink.runtime.rest.messages.EmptyMessageParameters;
+import org.apache.flink.runtime.rest.messages.EmptyRequestBody;
+import org.apache.flink.runtime.rest.messages.JobExceptionsHeaders;
+import org.apache.flink.runtime.rest.messages.JobExceptionsInfoWithHistory;
+import org.apache.flink.runtime.rest.messages.JobsOverviewHeaders;
+import org.apache.flink.runtime.rest.messages.TriggerId;
 import org.apache.flink.runtime.rest.messages.job.JobExceptionsMessageParameters;
 import org.apache.flink.runtime.rest.messages.job.metrics.JobMetricsHeaders;
 import org.apache.flink.runtime.rest.messages.job.savepoints.SavepointDisposalRequest;



[flink] branch master updated: [FLINK-29363] allow fully redirection in web dashboard

2023-01-02 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new ded2df542fd [FLINK-29363] allow fully redirection in web dashboard
ded2df542fd is described below

commit ded2df542fd5d585842e77d021fb84a92a5bea76
Author: Zhenqiu Huang 
AuthorDate: Wed Sep 21 08:06:38 2022 -0700

[FLINK-29363] allow fully redirection in web dashboard

This closes #20875
---
 .../web-dashboard/src/app/app.interceptor.ts  | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/flink-runtime-web/web-dashboard/src/app/app.interceptor.ts 
b/flink-runtime-web/web-dashboard/src/app/app.interceptor.ts
index 92c86f94b46..567d0cf1981 100644
--- a/flink-runtime-web/web-dashboard/src/app/app.interceptor.ts
+++ b/flink-runtime-web/web-dashboard/src/app/app.interceptor.ts
@@ -16,7 +16,14 @@
  * limitations under the License.
  */
 
-import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest } from 
'@angular/common/http';
+import {
+  HttpEvent,
+  HttpHandler,
+  HttpInterceptor,
+  HttpRequest,
+  HttpResponseBase,
+  HttpStatusCode
+} from '@angular/common/http';
 import { Injectable, Injector } from '@angular/core';
 import { Observable, throwError } from 'rxjs';
 import { catchError } from 'rxjs/operators';
@@ -39,6 +46,16 @@ export class AppInterceptor implements HttpInterceptor {
 
 return next.handle(req.clone({ withCredentials: true })).pipe(
   catchError(res => {
+        if (
+          res instanceof HttpResponseBase &&
+          (res.status == HttpStatusCode.MovedPermanently ||
+            res.status == HttpStatusCode.TemporaryRedirect ||
+            res.status == HttpStatusCode.SeeOther) &&
+          res.headers.has('Location')
+        ) {
+          window.location.href = String(res.headers.get('Location'));
+        }
+
 const errorMessage = res && res.error && res.error.errors && 
res.error.errors[0];
 if (
   errorMessage &&



[flink] branch master updated: [FLINK-30443] [core] Ensuring more sensitive keys are masked in log output

2022-12-22 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new d7b63aee3b0 [FLINK-30443] [core] Ensuring more sensitive keys are 
masked in log output
d7b63aee3b0 is described below

commit d7b63aee3b02e71f837e1f0b18f1b93790765d9f
Author: Gunnar Morling 
AuthorDate: Thu Dec 22 14:32:06 2022 +0100

[FLINK-30443] [core] Ensuring more sensitive keys are masked in log output
---
 .../org/apache/flink/configuration/GlobalConfiguration.java  | 12 +++-
 .../apache/flink/configuration/GlobalConfigurationTest.java  | 12 
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git 
a/flink-core/src/main/java/org/apache/flink/configuration/GlobalConfiguration.java
 
b/flink-core/src/main/java/org/apache/flink/configuration/GlobalConfiguration.java
index b3122ed146b..f12b648fac0 100644
--- 
a/flink-core/src/main/java/org/apache/flink/configuration/GlobalConfiguration.java
+++ 
b/flink-core/src/main/java/org/apache/flink/configuration/GlobalConfiguration.java
@@ -45,7 +45,17 @@ public final class GlobalConfiguration {
 
     // the keys whose values should be hidden
     private static final String[] SENSITIVE_KEYS =
-            new String[] {"password", "secret", "fs.azure.account.key", "apikey"};
+            new String[] {
+                "password",
+                "secret",
+                "fs.azure.account.key",
+                "apikey",
+                "auth-params",
+                "service-key",
+                "token",
+                "basic-auth",
+                "jaas.config"
+            };
 
     // the hidden content to be displayed
     public static final String HIDDEN_CONTENT = "******";
diff --git 
a/flink-core/src/test/java/org/apache/flink/configuration/GlobalConfigurationTest.java
 
b/flink-core/src/test/java/org/apache/flink/configuration/GlobalConfigurationTest.java
index 9a321e2840d..fab76caa175 100644
--- 
a/flink-core/src/test/java/org/apache/flink/configuration/GlobalConfigurationTest.java
+++ 
b/flink-core/src/test/java/org/apache/flink/configuration/GlobalConfigurationTest.java
@@ -129,6 +129,18 @@ public class GlobalConfigurationTest extends TestLogger {
         assertTrue(GlobalConfiguration.isSensitive("123pasSword"));
         assertTrue(GlobalConfiguration.isSensitive("PasSword"));
         assertTrue(GlobalConfiguration.isSensitive("Secret"));
+        assertTrue(GlobalConfiguration.isSensitive("polaris.client-secret"));
+        assertTrue(GlobalConfiguration.isSensitive("client-secret"));
+        assertTrue(GlobalConfiguration.isSensitive("service-key-json"));
+        assertTrue(GlobalConfiguration.isSensitive("auth.basic.password"));
+        assertTrue(GlobalConfiguration.isSensitive("auth.basic.token"));
+        assertTrue(GlobalConfiguration.isSensitive("avro-confluent.basic-auth.user-info"));
+        assertTrue(GlobalConfiguration.isSensitive("key.avro-confluent.basic-auth.user-info"));
+        assertTrue(GlobalConfiguration.isSensitive("value.avro-confluent.basic-auth.user-info"));
+        assertTrue(GlobalConfiguration.isSensitive("kafka.jaas.config"));
+        assertTrue(GlobalConfiguration.isSensitive("properties.ssl.truststore.password"));
+        assertTrue(GlobalConfiguration.isSensitive("properties.ssl.keystore.password"));
+
         assertTrue(
                 GlobalConfiguration.isSensitive(
                         "fs.azure.account.key.storageaccount123456.core.windows.net"));
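The behaviour these assertions pin down is a case-insensitive substring match of the key against the sensitive-key list. The following is a minimal standalone sketch of that check (an illustration, not Flink's actual `GlobalConfiguration` class):

```java
import java.util.Locale;

// Standalone sketch: masks configuration values whose key contains any
// sensitive marker, matching case-insensitively (so "123pasSword" is caught).
class SensitiveKeys {
    private static final String[] SENSITIVE_KEYS = {
        "password", "secret", "fs.azure.account.key", "apikey",
        "auth-params", "service-key", "token", "basic-auth", "jaas.config"
    };

    static final String HIDDEN_CONTENT = "******";

    static boolean isSensitive(String key) {
        final String keyInLower = key.toLowerCase(Locale.ROOT);
        for (String hideKey : SENSITIVE_KEYS) {
            if (keyInLower.contains(hideKey)) {
                return true;
            }
        }
        return false;
    }

    // What would be printed to the log for a given key/value pair.
    static String valueForLogging(String key, String value) {
        return isSensitive(key) ? HIDDEN_CONTENT : value;
    }
}
```

Because the match is a plain substring test, any key embedding a marker (e.g. `properties.ssl.truststore.password`) is masked without needing an explicit entry.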



[flink] branch master updated: [FLINK-30454] Fixing visibility issue for SizeGauge.SizeSupplier;

2022-12-22 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new f82be845f3f [FLINK-30454] Fixing visibility issue for 
SizeGauge.SizeSupplier;
f82be845f3f is described below

commit f82be845f3f673264a13eaf29e11b19e00f37222
Author: Gunnar Morling 
AuthorDate: Mon Dec 19 16:18:51 2022 +0100

[FLINK-30454] Fixing visibility issue for SizeGauge.SizeSupplier;

Also changing visibility JsonJobResultEntry.FIELD_NAME_VERSION as per 
JLS§6.6.1 requirements.

This closes #21531
---
 .../runtime/highavailability/FileSystemJobResultStore.java  |  2 +-
 .../flink/runtime/metrics/groups/TaskIOMetricGroup.java | 13 +++--
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/FileSystemJobResultStore.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/FileSystemJobResultStore.java
index 986b7a03343..c3fc225596d 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/FileSystemJobResultStore.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/FileSystemJobResultStore.java
@@ -207,7 +207,7 @@ public class FileSystemJobResultStore extends 
AbstractThreadsafeJobResultStore {
 @VisibleForTesting
 static class JsonJobResultEntry extends JobResultEntry {
 private static final String FIELD_NAME_RESULT = "result";
-private static final String FIELD_NAME_VERSION = "version";
+static final String FIELD_NAME_VERSION = "version";
 
 private JsonJobResultEntry(JobResultEntry entry) {
 this(entry.getJobResult());
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/TaskIOMetricGroup.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/TaskIOMetricGroup.java
index 6903f666bda..02269406512 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/TaskIOMetricGroup.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/TaskIOMetricGroup.java
@@ -245,7 +245,7 @@ public class TaskIOMetricGroup extends 
ProxyMetricGroup {
 this.resultPartitionBytes.put(resultPartitionId, 
resultPartitionBytesCounter);
 }
 
-    public void registerMailboxSizeSupplier(SizeGauge.SizeSupplier<Integer> supplier) {
+    public void registerMailboxSizeSupplier(SizeSupplier<Integer> supplier) {
 this.mailboxSize.registerSupplier(supplier);
 }
 
@@ -275,11 +275,6 @@ public class TaskIOMetricGroup extends 
ProxyMetricGroup {
     private static class SizeGauge implements Gauge<Integer> {
         private SizeSupplier<Integer> supplier;
 
-        @FunctionalInterface
-        public interface SizeSupplier<R> {
-            R get();
-        }
-
 public void registerSupplier(SizeSupplier supplier) {
 this.supplier = supplier;
 }
@@ -293,4 +288,10 @@ public class TaskIOMetricGroup extends 
ProxyMetricGroup {
 }
 }
 }
+
+    /** Supplier for sizes. */
+    @FunctionalInterface
+    public interface SizeSupplier<R> {
+        R get();
+    }
 }
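The commit message cites JLS §6.6.1: a nested member declared `public` is only as accessible as its enclosing type allows, which is why `SizeSupplier` is hoisted out of the package-private-scoped nesting to a top-level public member of the metric group. A minimal standalone illustration (hypothetical classes, not Flink's):

```java
import java.lang.reflect.Modifier;

class Outer { // package-private enclosing class
    // Declared public, but code outside this package cannot even name
    // Outer.Inner: access to a member is bounded by the accessibility of
    // its enclosing type (JLS §6.6.1). Hoisting the interface to a public
    // top-level position, as the commit does with SizeSupplier, lifts that cap.
    public interface Inner {
        int get();
    }
}

class VisibilityDemo {
    public static void main(String[] args) {
        System.out.println(Modifier.isPublic(Outer.class.getModifiers()));       // false
        System.out.println(Modifier.isPublic(Outer.Inner.class.getModifiers())); // true
    }
}
```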



[flink] branch master updated (1064b13a2f2 -> 80dc61ecf18)

2022-12-19 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 1064b13a2f2 [FLINK-29997] Link to the taskmanager page in the expired 
sub-task of checkpoint tab
 add 80dc61ecf18 [hotfix] Avoiding unsecure HTTP reference to JBoss repo

No new revisions were added by this update.

Summary of changes:
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[flink-web] branch asf-site updated: Rebuild website

2022-11-25 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 8950f518e Rebuild website
8950f518e is described below

commit 8950f518ec585256934ea04bb224280cdc26f038
Author: Hong Liang Teoh 
AuthorDate: Fri Nov 25 09:10:07 2022 +

Rebuild website
---
 .../11/25/async-sink-rate-limiting-strategy.html   | 470 +
 content/blog/feed.xml  | 320 --
 content/blog/index.html|  36 +-
 content/blog/page10/index.html |  40 +-
 content/blog/page11/index.html |  38 +-
 content/blog/page12/index.html |  36 +-
 content/blog/page13/index.html |  38 +-
 content/blog/page14/index.html |  40 +-
 content/blog/page15/index.html |  38 +-
 content/blog/page16/index.html |  36 +-
 content/blog/page17/index.html |  37 +-
 content/blog/page18/index.html |  38 +-
 content/blog/page19/index.html |  44 +-
 content/blog/page2/index.html  |  36 +-
 content/blog/page20/index.html |  45 +-
 content/blog/page21/index.html |  25 ++
 content/blog/page3/index.html  |  36 +-
 content/blog/page4/index.html  |  38 +-
 content/blog/page5/index.html  |  40 +-
 content/blog/page6/index.html  |  40 +-
 content/blog/page7/index.html  |  38 +-
 content/blog/page8/index.html  |  36 +-
 content/blog/page9/index.html  |  38 +-
 content/index.html |   8 +-
 content/zh/index.html  |   8 +-
 25 files changed, 1178 insertions(+), 421 deletions(-)

diff --git a/content/2022/11/25/async-sink-rate-limiting-strategy.html 
b/content/2022/11/25/async-sink-rate-limiting-strategy.html
new file mode 100644
index 0..833918676
--- /dev/null
+++ b/content/2022/11/25/async-sink-rate-limiting-strategy.html
@@ -0,0 +1,470 @@
+
+
+  
+
+
+
+
+Apache Flink: Optimising the throughput of async sinks using a 
custom RateLimitingStrategy
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+  var _paq = window._paq = window._paq || [];
+  /* tracker methods like "setCustomDimension" should be called before 
"trackPageView" */
+  /* We explicitly disable cookie tracking to avoid privacy issues */
+  _paq.push(['disableCookies']);
+  /* Measure a visit to flink.apache.org and nightlies.apache.org/flink as 
the same visit */
+  _paq.push(["setDomains", 
["*.flink.apache.org","*.nightlies.apache.org/flink"]]);
+  _paq.push(['trackPageView']);
+  _paq.push(['enableLinkTracking']);
+  (function() {
+var u="//matomo.privacy.apache.org/";
+_paq.push(['setTrackerUrl', u+'matomo.php']);
+_paq.push(['setSiteId', '1']);
+var d=document, g=d.createElement('script'), 
s=d.getElementsByTagName('script')[0];
+g.async=true; g.src=u+'matomo.js'; s.parentNode.insertBefore(g,s);
+  })();
+
+[... remaining page markup (head, header, and navigation) truncated in the archive ...]

[flink-web] branch asf-site updated: [FLINK-30156] Add blogpost for the Async sink custom RateLimitingStrategy

2022-11-25 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 0e4b8a20a [FLINK-30156] Add blogpost for the Async sink custom 
RateLimitingStrategy
0e4b8a20a is described below

commit 0e4b8a20ac88202c8ff5c671014d4c464c86ec8d
Author: Hong Liang Teoh 
AuthorDate: Tue Nov 22 17:53:13 2022 +

[FLINK-30156] Add blogpost for the Async sink custom RateLimitingStrategy
---
 ...2022-11-25-async-sink-rate-limiting-strategy.md | 200 +
 1 file changed, 200 insertions(+)

diff --git a/_posts/2022-11-25-async-sink-rate-limiting-strategy.md 
b/_posts/2022-11-25-async-sink-rate-limiting-strategy.md
new file mode 100644
index 0..133ea7132
--- /dev/null
+++ b/_posts/2022-11-25-async-sink-rate-limiting-strategy.md
@@ -0,0 +1,200 @@
+---
+layout: post
+title: "Optimising the throughput of async sinks using a custom 
RateLimitingStrategy"
+date: 2022-11-25T12:00:00.000Z
+authors:
+- liangtl:
+  name: "Hong Liang Teoh"
+excerpt: An overview of how to optimise the throughput of async sinks using a 
custom RateLimitingStrategy
+---
+
+## Introduction
+
+When designing a Flink data processing job, one of the key concerns is 
maximising job throughput. Sink throughput is a crucial factor because it can 
determine the entire job’s throughput. We generally want the highest possible 
write rate in the sink without overloading the destination. However, since the 
factors impacting a destination’s performance are variable over the job’s 
lifetime, the sink needs to adjust its write rate dynamically. Depending on the 
sink’s destination, it helps to  [...]
+
+**This post explains how you can optimise sink throughput by configuring a 
custom RateLimitingStrategy on a connector that builds on the** 
[**AsyncSinkBase 
(FLIP-171)**](https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink)**.**
 In the sections below, we cover the design logic behind the AsyncSinkBase and 
the RateLimitingStrategy, then we take you through two example implementations 
of rate limiting strategies, specifically the 
CongestionControlRateLimitingStrategy an [...]
+
+### Background of the AsyncSinkBase
+
+When implementing the AsyncSinkBase, our goal was to simplify building new 
async sinks to custom destinations by providing common async sink functionality 
used with at least once processing. This has allowed users to more easily write 
sinks to custom destinations, such as Amazon Kinesis Data Streams and Amazon 
Kinesis Firehose. An additional async sink to Amazon DynamoDB 
([FLIP-252](https://cwiki.apache.org/confluence/display/FLINK/FLIP-252%3A+Amazon+DynamoDB+Sink+Connector))
 is also bei [...]
+
+The AsyncSinkBase provides the core implementation which handles the mechanics 
of async requests and responses. This includes retrying failed messages, 
deciding when to flush records to the destination, and persisting un-flushed 
records to state during checkpointing. In order to increase throughput, the 
async sink also dynamically adjusts the request rate depending on the 
destination’s responses. Read more about this in our [previous 1.15 release 
blog post](https://flink.apache.org/2022/ [...]
+
+### Configuring the AsyncSinkBase
+
+When designing the AsyncSinkBase, we wanted users to be able to tune their 
custom connector implementations based on their use case and needs, without 
having to understand the low-level workings of the base sink itself.
+
+So, as part of our initial implementation in Flink 1.15, we exposed 
configurations such as `maxBatchSize`, `maxInFlightRequests`, 
`maxBufferedRequests`, `maxBatchSizeInBytes`, `maxTimeInBufferMS` and 
`maxRecordSizeInBytes` so that users can adapt the flushing and writing 
behaviour of the sink.
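To make the roles of these knobs concrete, here is a small standalone sketch (an illustration under stated assumptions, not Flink's implementation or API) of how a buffering sink can combine three of them into a flush decision:

```java
// Standalone sketch: a batch-flush policy driven by three of the knobs named
// above. The sink flushes when the batch is full by count, full by bytes, or
// when the oldest buffered record has waited too long.
class FlushPolicy {
    private final int maxBatchSize;          // flush when enough records are buffered
    private final long maxBatchSizeInBytes;  // ...or when enough bytes are buffered
    private final long maxTimeInBufferMS;    // ...or when the oldest record is too stale

    FlushPolicy(int maxBatchSize, long maxBatchSizeInBytes, long maxTimeInBufferMS) {
        this.maxBatchSize = maxBatchSize;
        this.maxBatchSizeInBytes = maxBatchSizeInBytes;
        this.maxTimeInBufferMS = maxTimeInBufferMS;
    }

    boolean shouldFlush(int bufferedRecords, long bufferedBytes, long msSinceOldestRecord) {
        return bufferedRecords >= maxBatchSize
                || bufferedBytes >= maxBatchSizeInBytes
                || msSinceOldestRecord >= maxTimeInBufferMS;
    }
}
```

Any one of the three thresholds being reached is enough to trigger a flush, which is why tuning them jointly changes both latency and request size.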
+
+In Flink 1.16, we have further extended this configurability to the 
RateLimitingStrategy used by the AsyncSinkBase 
([FLIP-242](https://cwiki.apache.org/confluence/display/FLINK/FLIP-242%3A+Introduce+configurable+RateLimitingStrategy+for+Async+Sink)).
 With this change, users can now customise how the AsyncSinkBase dynamically 
adjusts the request rate in real-time to optimise throughput whilst mitigating 
back pressure. Example customisations include changing the mathematical 
function used  [...]
+
+## Rationale behind the RateLimitingStrategy interface
+
+```java
+public interface RateLimitingStrategy {
+
+// Information provided to the RateLimitingStrategy
+void registerInFlightRequest(RequestInfo requestInfo);
+void registerCompletedRequest(ResultInfo resultInfo);
+
+// Controls offered to the RateLimitingStrategy
+boolean shouldBlock(RequestInfo requestInfo);
+int getMaxBatchSize();
+
+}
+```
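As a concrete instance of this interface, an AIMD-style strategy in the spirit of the CongestionControlRateLimitingStrategy mentioned above could look like the following. This is a hedged standalone sketch: `RequestInfo`/`ResultInfo` are reduced to the single field each that the strategy needs, and the halving/step constants are illustrative choices, not Flink's actual code.

```java
// Assumed minimal info carriers (Flink's real interfaces carry more).
class RequestInfo {
    final int batchSize;
    RequestInfo(int batchSize) { this.batchSize = batchSize; }
}

class ResultInfo {
    final int batchSize;
    final int failedMessages;
    ResultInfo(int batchSize, int failedMessages) {
        this.batchSize = batchSize;
        this.failedMessages = failedMessages;
    }
}

// AIMD ("congestion control") sketch: additive increase of the in-flight
// message budget on success, multiplicative decrease on failure.
class CongestionControlRateLimitingStrategy {
    private final int increaseRate;   // additive increase step
    private final int maxBatchSize;   // hard cap reported to the sink
    private int maxInFlightMessages;  // current dynamic rate limit
    private int inFlightMessages;

    CongestionControlRateLimitingStrategy(
            int initialMaxInFlightMessages, int increaseRate, int maxBatchSize) {
        this.maxInFlightMessages = initialMaxInFlightMessages;
        this.increaseRate = increaseRate;
        this.maxBatchSize = maxBatchSize;
    }

    // Information provided to the strategy
    void registerInFlightRequest(RequestInfo requestInfo) {
        inFlightMessages += requestInfo.batchSize;
    }

    void registerCompletedRequest(ResultInfo resultInfo) {
        inFlightMessages -= resultInfo.batchSize;
        if (resultInfo.failedMessages > 0) {
            // multiplicative decrease: back off when the destination pushes back
            maxInFlightMessages = Math.max(1, maxInFlightMessages / 2);
        } else {
            // additive increase: probe for more throughput after a clean success
            maxInFlightMessages += increaseRate;
        }
    }

    // Controls offered to the strategy
    boolean shouldBlock(RequestInfo requestInfo) {
        return inFlightMessages + requestInfo.batchSize > maxInFlightMessages;
    }

    int getMaxBatchSize() {
        return Math.min(maxBatchSize, Math.max(1, maxInFlightMessages));
    }
}
```

The sink calls `shouldBlock` before sending, so the strategy throttles purely by deciding when a request must wait; it never touches the requests themselves.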
+
+There are 2 core ideas behind the RateLimitingStrategy interface:
+
+* **Information methods:** We nee

[flink] branch master updated: [FLINK-29779] Pass PluginManager into MiniCluster

2022-11-24 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 9984b09cb9a [FLINK-29779] Pass PluginManager into MiniCluster
9984b09cb9a is described below

commit 9984b09cb9a2af9f2bde0e973cb8ce375942bd8c
Author: Robert Metzger 
AuthorDate: Wed Oct 26 15:34:17 2022 +0200

[FLINK-29779] Pass PluginManager into MiniCluster
---
 .../flink/runtime/minicluster/MiniCluster.java  |  3 ++-
 .../minicluster/MiniClusterConfiguration.java   | 21 +++--
 .../TestingMiniClusterConfiguration.java|  3 ++-
 3 files changed, 23 insertions(+), 4 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
index bc21f6108f6..bab16dd21f8 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
@@ -1103,7 +1103,8 @@ public class MiniCluster implements AutoCloseableAsync {
             Configuration config, long maximumMessageSizeInBytes) {
         return new MetricRegistryImpl(
                 MetricRegistryConfiguration.fromConfiguration(config, maximumMessageSizeInBytes),
-                ReporterSetup.fromConfiguration(config, null));
+                ReporterSetup.fromConfiguration(
+                        config, miniClusterConfiguration.getPluginManager()));
 }
 
 /**
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniClusterConfiguration.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniClusterConfiguration.java
index 51be3944f3c..9b2b2bddc48 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniClusterConfiguration.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniClusterConfiguration.java
@@ -26,6 +26,7 @@ import 
org.apache.flink.configuration.NettyShuffleEnvironmentOptions;
 import org.apache.flink.configuration.RestOptions;
 import org.apache.flink.configuration.TaskManagerOptions;
 import org.apache.flink.configuration.UnmodifiableConfiguration;
+import org.apache.flink.core.plugin.PluginManager;
 import org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils;
 import org.apache.flink.util.Preconditions;
 
@@ -50,6 +51,8 @@ public class MiniClusterConfiguration {
 
 private final MiniCluster.HaServices haServices;
 
+@Nullable private final PluginManager pluginManager;
+
 // 
 //  Construction
 // 
@@ -59,12 +62,14 @@ public class MiniClusterConfiguration {
 int numTaskManagers,
 RpcServiceSharing rpcServiceSharing,
 @Nullable String commonBindAddress,
-MiniCluster.HaServices haServices) {
+MiniCluster.HaServices haServices,
+@Nullable PluginManager pluginManager) {
 this.numTaskManagers = numTaskManagers;
 this.configuration = 
generateConfiguration(Preconditions.checkNotNull(configuration));
 this.rpcServiceSharing = Preconditions.checkNotNull(rpcServiceSharing);
 this.commonBindAddress = commonBindAddress;
 this.haServices = haServices;
+this.pluginManager = pluginManager;
 }
 
 private UnmodifiableConfiguration generateConfiguration(final 
Configuration configuration) {
@@ -100,6 +105,11 @@ public class MiniClusterConfiguration {
 return rpcServiceSharing;
 }
 
+@Nullable
+public PluginManager getPluginManager() {
+return pluginManager;
+}
+
 public int getNumTaskManagers() {
 return numTaskManagers;
 }
@@ -176,6 +186,7 @@ public class MiniClusterConfiguration {
 @Nullable private String commonBindAddress = null;
 private MiniCluster.HaServices haServices = 
MiniCluster.HaServices.CONFIGURED;
 private boolean useRandomPorts = false;
+@Nullable private PluginManager pluginManager;
 
 public Builder setConfiguration(Configuration configuration1) {
 this.configuration = Preconditions.checkNotNull(configuration1);
@@ -212,6 +223,11 @@ public class MiniClusterConfiguration {
 return this;
 }
 
+public Builder setPluginManager(PluginManager pluginManager) {
+this.pluginManager = Preconditions.checkNotNull(pluginManager);
+return this;
+}
+
 public MiniClusterConfiguration build() {
 final Configuration modifiedConfiguration = new 
Configuration(configuration);
 modifiedConfiguration.setInteger(
@@ -234,7 +250,8 @@ public

[flink-web] 02/02: rebuild website

2022-10-18 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit a3561b2cde93aa25f76c90e356f8585c7349f382
Author: Robert Metzger 
AuthorDate: Tue Oct 18 15:29:07 2022 +0200

rebuild website
---
 content/community.html| 2 ++
 content/zh/community.html | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/content/community.html b/content/community.html
index a2f089e30..6c4ab6932 100644
--- a/content/community.html
+++ b/content/community.html
@@ -424,6 +424,8 @@ Any existing Slack member can also invite anyone else to 
join.
   Please do not direct message people for 
troubleshooting, Jira assigning and PR review. These should be picked-up 
voluntarily.
 
 
+<p><strong>Note</strong>: All messages from public channels in our Slack are <strong>permanently stored and published</strong> in the <a href="https://www.linen.dev/s/apache-flink">Apache Flink Slack archive on linen.dev</a>. The purpose of this archive is to allow search engines to find past discussions in the Flink Slack.</p>
+
 Stack Overflow
 
 Committers are watching http://stackoverflow.com/questions/tagged/apache-flink;>Stack 
Overflow for the http://stackoverflow.com/questions/tagged/apache-flink;>apache-flink 
tag.
diff --git a/content/zh/community.html b/content/zh/community.html
index 77d8ad67a..c1cd96708 100644
--- a/content/zh/community.html
+++ b/content/zh/community.html
@@ -421,6 +421,8 @@ Slack 规定每个邀请链接最多可邀请 100 人,如果遇到上述链接
   不要通过私信(Direct Message)要求他人答疑、指派 Jira、审查 
PR。这些事务应遵从自愿原则。
 
 
+注意:来自我们 Slack 中公共渠道的所有消息都永久存储并发布在 
[linen.dev 上的 Apache Flink Slack 存档](https://www.linen.dev/s/apache-flink)。 
这个存档的目的是让搜索引擎在 Flink Slack 中找到过去的讨论。
+
 Stack Overflow
 
 Committer 们会关注 http://stackoverflow.com/questions/tagged/apache-flink;>Stack 
Overflow 上 http://stackoverflow.com/questions/tagged/apache-flink;>apache-flink 
相关标签的问题。



[flink-web] 01/02: [FLINK-27721] Add Slack archive link

2022-10-18 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit a28a09a11abed97e7408df37b5ae633b770cd3ac
Author: Robert Metzger 
AuthorDate: Tue Oct 18 15:25:44 2022 +0200

[FLINK-27721] Add Slack archive link
---
 community.md| 2 ++
 community.zh.md | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/community.md b/community.md
index 9b81285bc..575b73e31 100644
--- a/community.md
+++ b/community.md
@@ -159,6 +159,8 @@ There are a couple of community rules:
 * Use **Slack threads** to keep parallel conversations from overwhelming a 
channel.
 * Please **do not direct message** people for troubleshooting, Jira assigning 
and PR review. These should be picked-up voluntarily.
 
+**Note**: All messages from public channels in our Slack are **permanently 
stored and published** in the [Apache Flink Slack archive on 
linen.dev](https://www.linen.dev/s/apache-flink). The purpose of this archive 
is to allow search engines to find past discussions in the Flink Slack.
+
 ## Stack Overflow
 
 Committers are watching [Stack 
Overflow](http://stackoverflow.com/questions/tagged/apache-flink) for the 
[apache-flink](http://stackoverflow.com/questions/tagged/apache-flink) tag.
diff --git a/community.zh.md b/community.zh.md
index bbff3bc1d..ed1240355 100644
--- a/community.zh.md
+++ b/community.zh.md
@@ -156,6 +156,8 @@ Slack 规定每个邀请链接最多可邀请 100 人,如果遇到上述链接
 * 使用 **Slack 消息列(Thread)**使频道(Channel)中的多组同时进行的对话保持有序。
 * **不要通过私信(Direct Message)**要求他人答疑、指派 Jira、审查 PR。这些事务应遵从自愿原则。
 
+**注意**:来自我们 Slack 中公共渠道的所有消息都**永久存储并发布**在 [linen.dev 上的 Apache Flink Slack 
存档](https://www.linen.dev/s/apache-flink)。 这个存档的目的是让搜索引擎在 Flink Slack 中找到过去的讨论。
+
 ## Stack Overflow
 
 Committer 们会关注 [Stack 
Overflow](http://stackoverflow.com/questions/tagged/apache-flink) 上 
[apache-flink](http://stackoverflow.com/questions/tagged/apache-flink) 相关标签的问题。



[flink-web] branch asf-site updated (1e25277e2 -> a3561b2cd)

2022-10-18 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


from 1e25277e2 Revert changes on docker-build.sh
 new a28a09a11 [FLINK-27721] Add Slack archive link
 new a3561b2cd rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 community.md  | 2 ++
 community.zh.md   | 2 ++
 content/community.html| 2 ++
 content/zh/community.html | 2 ++
 4 files changed, 8 insertions(+)



[flink] branch release-1.14 updated: [FLINK-28680] Revisit 'free_disk_space.sh' to free up more space

2022-07-26 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.14 by this push:
 new 274040dafad [FLINK-28680] Revisit 'free_disk_space.sh' to free up more 
space
274040dafad is described below

commit 274040dafad472cfe2a88e1e3038258545f1a2ef
Author: Robert Metzger 
AuthorDate: Tue Jul 26 09:35:35 2022 +0200

[FLINK-28680] Revisit 'free_disk_space.sh' to free up more space
---
 tools/azure-pipelines/free_disk_space.sh | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/tools/azure-pipelines/free_disk_space.sh 
b/tools/azure-pipelines/free_disk_space.sh
index 0c495d78587..a4b64ebd3b6 100755
--- a/tools/azure-pipelines/free_disk_space.sh
+++ b/tools/azure-pipelines/free_disk_space.sh
@@ -43,6 +43,12 @@ sudo apt-get autoremove -y
 sudo apt-get clean
 df -h
 echo "Removing large directories"
-# deleting 15GB
-rm -rf /usr/share/dotnet/
+
+sudo rm -rf /usr/share/dotnet/
+sudo rm -rf /usr/local/graalvm/
+sudo rm -rf /usr/local/.ghcup/
+sudo rm -rf /usr/local/share/powershell
+sudo rm -rf /usr/local/share/chromium
+sudo rm -rf /usr/local/lib/android
+sudo rm -rf /usr/local/lib/node_modules
 df -h



[flink] branch release-1.15 updated: [FLINK-28680] Revisit 'free_disk_space.sh' to free up more space

2022-07-26 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.15
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.15 by this push:
 new 28bb487c99b [FLINK-28680] Revisit 'free_disk_space.sh' to free up more space
28bb487c99b is described below

commit 28bb487c99b4551a4dea63441c71dc4fd68898b1
Author: Robert Metzger 
AuthorDate: Tue Jul 26 09:35:35 2022 +0200

[FLINK-28680] Revisit 'free_disk_space.sh' to free up more space
---
 tools/azure-pipelines/free_disk_space.sh | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/tools/azure-pipelines/free_disk_space.sh 
b/tools/azure-pipelines/free_disk_space.sh
index 0c495d78587..a4b64ebd3b6 100755
--- a/tools/azure-pipelines/free_disk_space.sh
+++ b/tools/azure-pipelines/free_disk_space.sh
@@ -43,6 +43,12 @@ sudo apt-get autoremove -y
 sudo apt-get clean
 df -h
 echo "Removing large directories"
-# deleting 15GB
-rm -rf /usr/share/dotnet/
+
+sudo rm -rf /usr/share/dotnet/
+sudo rm -rf /usr/local/graalvm/
+sudo rm -rf /usr/local/.ghcup/
+sudo rm -rf /usr/local/share/powershell
+sudo rm -rf /usr/local/share/chromium
+sudo rm -rf /usr/local/lib/android
+sudo rm -rf /usr/local/lib/node_modules
 df -h



[flink] branch master updated: [FLINK-28680] Revisit 'free_disk_space.sh' to free up more space

2022-07-26 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 6dd75e1d395 [FLINK-28680] Revisit 'free_disk_space.sh' to free up more space
6dd75e1d395 is described below

commit 6dd75e1d39597d6482eb920a610b1ebc55f39458
Author: Robert Metzger 
AuthorDate: Tue Jul 26 09:35:35 2022 +0200

[FLINK-28680] Revisit 'free_disk_space.sh' to free up more space
---
 tools/azure-pipelines/free_disk_space.sh | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/tools/azure-pipelines/free_disk_space.sh 
b/tools/azure-pipelines/free_disk_space.sh
index 0c495d78587..a4b64ebd3b6 100755
--- a/tools/azure-pipelines/free_disk_space.sh
+++ b/tools/azure-pipelines/free_disk_space.sh
@@ -43,6 +43,12 @@ sudo apt-get autoremove -y
 sudo apt-get clean
 df -h
 echo "Removing large directories"
-# deleting 15GB
-rm -rf /usr/share/dotnet/
+
+sudo rm -rf /usr/share/dotnet/
+sudo rm -rf /usr/local/graalvm/
+sudo rm -rf /usr/local/.ghcup/
+sudo rm -rf /usr/local/share/powershell
+sudo rm -rf /usr/local/share/chromium
+sudo rm -rf /usr/local/lib/android
+sudo rm -rf /usr/local/lib/node_modules
 df -h



[flink] branch master updated (a4540fea6bc -> c71762686b1)

2022-07-06 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from a4540fea6bc [FLINK-28313][rest] Add history server flag to DashboardConfiguration
 add c71762686b1 [FLINK-28274][runtime] set the max parallelism of ContinuousFileMonitoringFunction to 1

No new revisions were added by this update.

Summary of changes:
 .../environment/StreamExecutionEnvironment.java|  4 
 .../flink/test/scheduling/ReactiveModeITCase.java  | 22 ++
 2 files changed, 26 insertions(+)



[flink-web] branch asf-site updated: [hotfix] Remove flinkbot mentions

2022-03-04 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new d4bca48  [hotfix] Remove flinkbot mentions
d4bca48 is described below

commit d4bca486750ae684b4af15ab1988c5421d1d8002
Author: Robert Metzger 
AuthorDate: Wed Mar 2 20:03:37 2022 +0100

[hotfix] Remove flinkbot mentions
---
 contributing/reviewing-prs.md| 26 --
 contributing/reviewing-prs.zh.md | 25 -
 2 files changed, 51 deletions(-)

diff --git a/contributing/reviewing-prs.md b/contributing/reviewing-prs.md
index 6ddd2c5..4fc6f09 100644
--- a/contributing/reviewing-prs.md
+++ b/contributing/reviewing-prs.md
@@ -92,30 +92,4 @@ If the pull request introduces a new feature, the feature 
should be documented.
 
 See more about how to [contribute documentation]({{ site.baseurl 
}}/contributing/contribute-documentation.html).
 
-## Review with the @flinkbot
-
-The Flink community is using a service called 
[@flinkbot](https://github.com/flinkbot) to help with the review of the pull 
requests.
-
-The bot automatically posts a comment tracking the review progress for each 
new pull request:
-
-```
-### Review Progress
-
-* [ ] 1. The description looks good.
-* [ ] 2. There is consensus that the contribution should go into to Flink.
-* [ ] 3. [Does not need specific attention | Needs specific attention for X | 
Has attention for X by Y]
-* [ ] 4. The architectural approach is sound.
-* [ ] 5. Overall code quality is good.
-
-Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) if you have 
questions about the review process.
-```
-
-Reviewers can instruct the bot to tick off the boxes (in order) to indicate 
the progress of the review.
-
-For approving the description of the contribution, mention the bot with 
`@flinkbot approve description`. This works similarly with `consensus`, 
`architecture` and `quality`.
-
-For approving all aspects, put a new comment with `@flinkbot approve all` into 
the pull request.
-
-The syntax for requiring attention is `@flinkbot attention @username1 
[@username2 ..]`.
-
 
diff --git a/contributing/reviewing-prs.zh.md b/contributing/reviewing-prs.zh.md
index 44f2a72..9aa0fd4 100644
--- a/contributing/reviewing-prs.zh.md
+++ b/contributing/reviewing-prs.zh.md
@@ -89,30 +89,5 @@ title:  "如何审核 Pull Request"
 
 阅读[如何贡献文档]({{ site.baseurl 
}}/zh/contributing/contribute-documentation.html)了解更多。
 
-## 使用 @flinkbot 进行审核
-
-Flink 社区正在使用名为 [@flinkbot](https://github.com/flinkbot) 的服务来帮助审核 pull request。
-
-针对每个新的 pull request,机器人都会自动发表评论并跟踪审核进度:
-
-```
-### Review Progress
-
-* [ ] 1. The description looks good.
-* [ ] 2. There is consensus that the contribution should go into to Flink.
-* [ ] 3. [Does not need specific attention | Needs specific attention for X | 
Has attention for X by Y]
-* [ ] 4. The architectural approach is sound.
-* [ ] 5. Overall code quality is good.
-
-Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) if you have 
questions about the review process.
-```
-
-审核人可以指示机器人(按顺序)勾选方框以指示审核的进度。
-
-用于批准贡献的描述,请使用 `@flinkbot approve description` @机器人。`consensus`、`architecture` 
、 `quality` 情况的操作与之类似。
-
-要批准全部方面,请在 pull request 中添加一条带有 `@flinkbot approve all` 的新评论。
-
-提醒他人关注的语法是 `@flinkbot attention @username1 [@username2 ..]`。
 
 


[flink] branch master updated: [hotfix] Log architecture on startup

2022-02-28 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new f95d69b  [hotfix] Log architecture on startup
f95d69b is described below

commit f95d69b83c0649383f0a2bbaca0675a604fc7218
Author: Robert Metzger 
AuthorDate: Wed Feb 23 11:45:51 2022 +0100

[hotfix] Log architecture on startup
---
 .../java/org/apache/flink/runtime/util/EnvironmentInformation.java | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/util/EnvironmentInformation.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/util/EnvironmentInformation.java
index b4452fd..ea3a841 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/util/EnvironmentInformation.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/util/EnvironmentInformation.java
@@ -403,6 +403,8 @@ public class EnvironmentInformation {
 
         String inheritedLogs = System.getenv("FLINK_INHERITED_LOGS");
 
+        String arch = System.getProperty("os.arch");
+
         long maxHeapMegabytes = getMaxJvmHeapMemory() >>> 20;
 
         if (inheritedLogs != null) {
@@ -431,6 +433,7 @@ public class EnvironmentInformation {
         log.info(" OS current user: " + System.getProperty("user.name"));
         log.info(" Current Hadoop/Kerberos user: " + getHadoopUser());
         log.info(" JVM: " + jvmVersion);
+        log.info(" Arch: " + arch);
         log.info(" Maximum heap size: " + maxHeapMegabytes + " MiBytes");
         log.info(" JAVA_HOME: " + (javaHome == null ? "(not set)" : javaHome));
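The Java patch above adds one more field (the OS architecture) to the environment report Flink logs at startup. The same report style can be sketched in shell; the field names mirror the `log.info` lines in `EnvironmentInformation`, but the exact values and formatting here are illustrative, not Flink's actual output.

```shell
#!/usr/bin/env bash
# Sketch of a startup environment report in the style of
# EnvironmentInformation's log lines. Field names follow the patch
# above; values/formatting are illustrative, not Flink's output.
log_env_info() {
  echo " OS current user: $(id -un)"
  echo " Arch: $(uname -m)"
  echo " JAVA_HOME: ${JAVA_HOME:-(not set)}"
}

log_env_info
```

Logging `os.arch` (here `uname -m`) is cheap and makes mixed amd64/arm64 fleets easy to diagnose from logs alone.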
 


[flink-docker] branch master updated: [FLINK-25679] Add support for arm64v8

2022-02-07 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/master by this push:
 new d232f51  [FLINK-25679] Add support for arm64v8
d232f51 is described below

commit d232f51ff2f8c96c1f6eb074751d04977168a44a
Author: Robert Metzger 
AuthorDate: Fri Jan 21 20:14:43 2022 +0100

[FLINK-25679] Add support for arm64v8
---
 1.14/scala_2.11-java11-debian/release.metadata | 2 +-
 1.14/scala_2.11-java8-debian/release.metadata  | 2 +-
 1.14/scala_2.12-java11-debian/release.metadata | 2 +-
 1.14/scala_2.12-java8-debian/release.metadata  | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/1.14/scala_2.11-java11-debian/release.metadata 
b/1.14/scala_2.11-java11-debian/release.metadata
index 84a6d6d..2874668 100644
--- a/1.14/scala_2.11-java11-debian/release.metadata
+++ b/1.14/scala_2.11-java11-debian/release.metadata
@@ -1,2 +1,2 @@
 Tags: 1.14.3-scala_2.11-java11, 1.14-scala_2.11-java11, scala_2.11-java11
-Architectures: amd64
+Architectures: amd64,arm64v8
diff --git a/1.14/scala_2.11-java8-debian/release.metadata 
b/1.14/scala_2.11-java8-debian/release.metadata
index f405f57..6b529bf 100644
--- a/1.14/scala_2.11-java8-debian/release.metadata
+++ b/1.14/scala_2.11-java8-debian/release.metadata
@@ -1,2 +1,2 @@
 Tags: 1.14.3-scala_2.11-java8, 1.14-scala_2.11-java8, scala_2.11-java8, 
1.14.3-scala_2.11, 1.14-scala_2.11, scala_2.11
-Architectures: amd64
+Architectures: amd64,arm64v8
diff --git a/1.14/scala_2.12-java11-debian/release.metadata 
b/1.14/scala_2.12-java11-debian/release.metadata
index 23f2f57..ac3e5bc 100644
--- a/1.14/scala_2.12-java11-debian/release.metadata
+++ b/1.14/scala_2.12-java11-debian/release.metadata
@@ -1,2 +1,2 @@
 Tags: 1.14.3-scala_2.12-java11, 1.14-scala_2.12-java11, scala_2.12-java11, 
1.14.3-java11, 1.14-java11, java11
-Architectures: amd64
+Architectures: amd64,arm64v8
diff --git a/1.14/scala_2.12-java8-debian/release.metadata 
b/1.14/scala_2.12-java8-debian/release.metadata
index b6f8fe7..eb1d045 100644
--- a/1.14/scala_2.12-java8-debian/release.metadata
+++ b/1.14/scala_2.12-java8-debian/release.metadata
@@ -1,2 +1,2 @@
 Tags: 1.14.3-scala_2.12-java8, 1.14-scala_2.12-java8, scala_2.12-java8, 
1.14.3-scala_2.12, 1.14-scala_2.12, scala_2.12, 1.14.3-java8, 1.14-java8, 
java8, 1.14.3, 1.14, latest
-Architectures: amd64
+Architectures: amd64,arm64v8


[flink-docker] branch dev-1.14 updated: [FLINK-25679] Add support for arm64v8

2022-02-07 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch dev-1.14
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/dev-1.14 by this push:
 new f17b89c  [FLINK-25679] Add support for arm64v8
f17b89c is described below

commit f17b89c3f083453fb1ba3d246db065b04a6dd963
Author: Robert Metzger 
AuthorDate: Fri Jan 21 20:10:28 2022 +0100

[FLINK-25679] Add support for arm64v8
---
 generator.sh | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/generator.sh b/generator.sh
index 3eb7b2b..94d7550 100644
--- a/generator.sh
+++ b/generator.sh
@@ -85,6 +85,5 @@ function generateReleaseMetadata {
 
 echo "Tags: $tags" >> $dir/release.metadata
 
-# We currently only support amd64 with Flink.
-echo "Architectures: amd64" >> $dir/release.metadata
+echo "Architectures: amd64,arm64v8" >> $dir/release.metadata
 }
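The `generateReleaseMetadata` change above appends one `Tags:` line and one `Architectures:` line to each image directory's `release.metadata`. A minimal self-contained sketch of that pattern follows; the `write_release_metadata` helper name, the `/tmp` path, and the sample tag value are illustrative, not part of `generator.sh`.

```shell
#!/usr/bin/env bash
# Sketch of the release.metadata generation pattern changed above:
# one "Tags:" line plus one "Architectures:" line per image directory.
# Helper name, output path, and sample tags are illustrative.
ARCHITECTURES="amd64,arm64v8"

write_release_metadata() {
  local dir="$1" tags="$2"
  mkdir -p "$dir"
  : > "$dir/release.metadata"                      # start from an empty file
  echo "Tags: $tags" >> "$dir/release.metadata"
  echo "Architectures: $ARCHITECTURES" >> "$dir/release.metadata"
}

write_release_metadata /tmp/flink-docker-demo "1.14-scala_2.12-java11"
cat /tmp/flink-docker-demo/release.metadata
```

Hard-coding the architecture list in one variable is what makes the arm64v8 rollout a one-line change, as the diff shows.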


[flink-docker] branch dev-master updated: [FLINK-25679] Add support for arm64v8

2022-02-07 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch dev-master
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/dev-master by this push:
 new dcf1c9a  [FLINK-25679] Add support for arm64v8
dcf1c9a is described below

commit dcf1c9a3fe6444d5500a75a26e0d209b2fb4564d
Author: Robert Metzger 
AuthorDate: Fri Jan 21 20:10:28 2022 +0100

[FLINK-25679] Add support for arm64v8
---
 generator.sh | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/generator.sh b/generator.sh
index e2a02f7..e093835 100644
--- a/generator.sh
+++ b/generator.sh
@@ -85,6 +85,5 @@ function generateReleaseMetadata {
 
 echo "Tags: $tags" >> $dir/release.metadata
 
-# We currently only support amd64 with Flink.
-echo "Architectures: amd64" >> $dir/release.metadata
+echo "Architectures: amd64,arm64v8" >> $dir/release.metadata
 }


[flink] branch master updated (1b02f02 -> e548f21)

2022-01-03 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 1b02f02  [FLINK-24857][test][FileSource][Kafka] Upgrade SourceReaderTestBase to JUnit 5
 add e548f21  [FLINK-23230] Fix protoc dependency for Apple Silicon / m1 support

No new revisions were added by this update.

Summary of changes:
 flink-connectors/flink-connector-pulsar/pom.xml | 3 +--
 flink-formats/flink-parquet/pom.xml | 8 +++-
 2 files changed, 8 insertions(+), 3 deletions(-)


[flink] branch release-1.14 updated: [FLINK-24129][connectors-pulsar] Harden TopicRangeTest.rangeCreationHaveALimitedScope.

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.14 by this push:
 new 64902ea  [FLINK-24129][connectors-pulsar] Harden TopicRangeTest.rangeCreationHaveALimitedScope.
64902ea is described below

commit 64902ea420a7785383258b0d7fa7922f7cec2c85
Author: David Moravek 
AuthorDate: Mon Sep 6 11:39:24 2021 +0200

[FLINK-24129][connectors-pulsar] Harden TopicRangeTest.rangeCreationHaveALimitedScope.
---
 .../source/enumerator/topic/TopicRangeTest.java| 37 +++---
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git 
a/flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/enumerator/topic/TopicRangeTest.java
 
b/flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/enumerator/topic/TopicRangeTest.java
index 93b3621..f5665e7 100644
--- 
a/flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/enumerator/topic/TopicRangeTest.java
+++ 
b/flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/enumerator/topic/TopicRangeTest.java
@@ -20,11 +20,8 @@ package 
org.apache.flink.connector.pulsar.source.enumerator.topic;
 
 import org.apache.flink.util.InstantiationUtil;
 
-import org.junit.jupiter.api.RepeatedTest;
 import org.junit.jupiter.api.Test;
 
-import java.util.Random;
-
 import static 
org.apache.flink.connector.pulsar.source.enumerator.topic.TopicRange.MAX_RANGE;
 import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
 import static org.junit.jupiter.api.Assertions.assertEquals;
@@ -33,26 +30,30 @@ import static org.junit.jupiter.api.Assertions.assertThrows;
 /** Unit tests for {@link TopicRange}. */
 class TopicRangeTest {
 
-private final Random random = new Random(System.currentTimeMillis());
+@Test
+void topicRangeIsSerializable() throws Exception {
+final TopicRange range = new TopicRange(1, 5);
+final TopicRange cloneRange = InstantiationUtil.clone(range);
+assertEquals(range, cloneRange);
+}
 
-@RepeatedTest(10)
-@SuppressWarnings("java:S5778")
-void rangeCreationHaveALimitedScope() {
-assertThrows(
-IllegalArgumentException.class,
-() -> new TopicRange(-1, random.nextInt(MAX_RANGE)));
-assertThrows(
-IllegalArgumentException.class,
-() -> new TopicRange(1, MAX_RANGE + random.nextInt(1)));
+@Test
+void negativeStart() {
+assertThrows(IllegalArgumentException.class, () -> new TopicRange(-1, 
1));
+}
 
-assertDoesNotThrow(() -> new TopicRange(1, random.nextInt(MAX_RANGE)));
+@Test
+void endBelowTheMaximum() {
+assertDoesNotThrow(() -> new TopicRange(1, MAX_RANGE - 1));
 }
 
 @Test
-void topicRangeIsSerializable() throws Exception {
-TopicRange range = new TopicRange(10, random.nextInt(MAX_RANGE));
-TopicRange cloneRange = InstantiationUtil.clone(range);
+void endOnTheMaximum() {
+assertDoesNotThrow(() -> new TopicRange(1, MAX_RANGE));
+}
 
-assertEquals(range, cloneRange);
+@Test
+void endAboveTheMaximum() {
+assertThrows(IllegalArgumentException.class, () -> new TopicRange(1, 
MAX_RANGE + 1));
 }
 }


[flink] branch master updated: [FLINK-24129][connectors-pulsar] Harden TopicRangeTest.rangeCreationHaveALimitedScope.

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new e18d273  [FLINK-24129][connectors-pulsar] Harden TopicRangeTest.rangeCreationHaveALimitedScope.
e18d273 is described below

commit e18d2731a637b6f6d7f984221e95f02fb68b4e20
Author: David Moravek 
AuthorDate: Mon Sep 6 11:39:24 2021 +0200

[FLINK-24129][connectors-pulsar] Harden TopicRangeTest.rangeCreationHaveALimitedScope.
---
 .../source/enumerator/topic/TopicRangeTest.java| 37 +++---
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git 
a/flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/enumerator/topic/TopicRangeTest.java
 
b/flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/enumerator/topic/TopicRangeTest.java
index 93b3621..f5665e7 100644
--- 
a/flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/enumerator/topic/TopicRangeTest.java
+++ 
b/flink-connectors/flink-connector-pulsar/src/test/java/org/apache/flink/connector/pulsar/source/enumerator/topic/TopicRangeTest.java
@@ -20,11 +20,8 @@ package 
org.apache.flink.connector.pulsar.source.enumerator.topic;
 
 import org.apache.flink.util.InstantiationUtil;
 
-import org.junit.jupiter.api.RepeatedTest;
 import org.junit.jupiter.api.Test;
 
-import java.util.Random;
-
 import static 
org.apache.flink.connector.pulsar.source.enumerator.topic.TopicRange.MAX_RANGE;
 import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
 import static org.junit.jupiter.api.Assertions.assertEquals;
@@ -33,26 +30,30 @@ import static org.junit.jupiter.api.Assertions.assertThrows;
 /** Unit tests for {@link TopicRange}. */
 class TopicRangeTest {
 
-private final Random random = new Random(System.currentTimeMillis());
+@Test
+void topicRangeIsSerializable() throws Exception {
+final TopicRange range = new TopicRange(1, 5);
+final TopicRange cloneRange = InstantiationUtil.clone(range);
+assertEquals(range, cloneRange);
+}
 
-@RepeatedTest(10)
-@SuppressWarnings("java:S5778")
-void rangeCreationHaveALimitedScope() {
-assertThrows(
-IllegalArgumentException.class,
-() -> new TopicRange(-1, random.nextInt(MAX_RANGE)));
-assertThrows(
-IllegalArgumentException.class,
-() -> new TopicRange(1, MAX_RANGE + random.nextInt(1)));
+@Test
+void negativeStart() {
+assertThrows(IllegalArgumentException.class, () -> new TopicRange(-1, 
1));
+}
 
-assertDoesNotThrow(() -> new TopicRange(1, random.nextInt(MAX_RANGE)));
+@Test
+void endBelowTheMaximum() {
+assertDoesNotThrow(() -> new TopicRange(1, MAX_RANGE - 1));
 }
 
 @Test
-void topicRangeIsSerializable() throws Exception {
-TopicRange range = new TopicRange(10, random.nextInt(MAX_RANGE));
-TopicRange cloneRange = InstantiationUtil.clone(range);
+void endOnTheMaximum() {
+assertDoesNotThrow(() -> new TopicRange(1, MAX_RANGE));
+}
 
-assertEquals(range, cloneRange);
+@Test
+void endAboveTheMaximum() {
+assertThrows(IllegalArgumentException.class, () -> new TopicRange(1, 
MAX_RANGE + 1));
 }
 }


[flink] branch release-1.14 updated: [FLINK-23969][connector/pulsar] Create e2e tests for pulsar connector.

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.14 by this push:
 new a2b612a  [FLINK-23969][connector/pulsar] Create e2e tests for pulsar connector.
a2b612a is described below

commit a2b612af84ac358592db6e52cf14bcd718a16fbb
Author: syhily 
AuthorDate: Fri Sep 17 01:13:52 2021 +0800

[FLINK-23969][connector/pulsar] Create e2e tests for pulsar connector.

(cherry picked from commit b6086203d5fc0a08a330dd0069fbe1359ceac97a)
---
 flink-connectors/flink-connector-pulsar/pom.xml|  47 
 .../pulsar/source/enumerator/topic/TopicRange.java |   2 +
 .../topic/range/FixedRangeGenerator.java}  |  29 ++---
 .../pulsar/source/PulsarSourceITCase.java  |   2 +-
 .../testutils/PulsarPartitionDataWriter.java   |  28 ++---
 .../pulsar/testutils/PulsarTestContext.java|   9 +-
 .../pulsar/testutils/PulsarTestEnvironment.java|  19 ++--
 .../pulsar/testutils/PulsarTestSuiteBase.java  |   2 +-
 .../cases/MultipleTopicConsumingContext.java   |  83 ++
 ...text.java => MultipleTopicTemplateContext.java} |  46 
 .../cases/SingleTopicConsumingContext.java |  16 ++-
 .../pulsar/testutils/runtime/PulsarRuntime.java|  39 ---
 .../testutils/runtime/PulsarRuntimeOperator.java   |  21 +++-
 ...erProvider.java => PulsarContainerRuntime.java} |  28 -
 ...sarMockProvider.java => PulsarMockRuntime.java} |  41 +--
 .../util/flink/FlinkContainerTestEnvironment.java  |  14 ++-
 .../flink-end-to-end-tests-pulsar/pom.xml  | 121 +
 .../util/pulsar/PulsarSourceOrderedE2ECase.java|  40 ---
 .../util/pulsar/PulsarSourceUnorderedE2ECase.java  |  55 ++
 .../pulsar/cases/ExclusiveSubscriptionContext.java |  61 +++
 .../pulsar/cases/FailoverSubscriptionContext.java  |  61 +++
 .../pulsar/cases/KeySharedSubscriptionContext.java |  93 ++--
 .../pulsar/cases/SharedSubscriptionContext.java|  67 +---
 .../FlinkContainerWithPulsarEnvironment.java   |  54 +
 .../common/KeyedPulsarPartitionDataWriter.java |  62 +++
 .../common/UnorderedSourceTestSuiteBase.java   |  72 
 .../src/test/resources/log4j2-test.properties  |  31 ++
 flink-end-to-end-tests/pom.xml |   1 +
 .../modules-skipping-deployment.modulelist |   1 +
 29 files changed, 846 insertions(+), 299 deletions(-)

diff --git a/flink-connectors/flink-connector-pulsar/pom.xml 
b/flink-connectors/flink-connector-pulsar/pom.xml
index fe13f3a..f4c1407 100644
--- a/flink-connectors/flink-connector-pulsar/pom.xml
+++ b/flink-connectors/flink-connector-pulsar/pom.xml
@@ -169,6 +169,12 @@ under the License.
 			<groupId>org.apache.pulsar</groupId>
 			<artifactId>pulsar-client-all</artifactId>
 			<version>${pulsar.version}</version>
+			<exclusions>
+				<exclusion>
+					<groupId>org.apache.pulsar</groupId>
+					<artifactId>pulsar-package-core</artifactId>
+				</exclusion>
+			</exclusions>


 
@@ -258,6 +264,47 @@ under the License.
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-jar-plugin</artifactId>
+				<executions>
+					<execution>
+						<goals>
+							<goal>test-jar</goal>
+						</goals>
+						<configuration>
+							<includes>
+								<include>**/testutils/**</include>
+								<include>META-INF/LICENSE</include>
+								<include>META-INF/NOTICE</include>
+							</includes>
+						</configuration>
+					</execution>
+				</executions>
+			</plugin>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-source-plugin</artifactId>
+				<executions>
+					<execution>
+						<id>attach-test-sources</id>
+						<goals>
+							<goal>test-ja

[flink] branch master updated (8664b36 -> 0bbc91a)

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 8664b36  [hotfix][connectors/kafka] Rename EventDeSerializer to EventDeSerializationSchema in examples
 add 0bbc91a  [FLINK-24248][docs]update Gradle dependency

No new revisions were added by this update.

Summary of changes:
 docs/content/docs/dev/datastream/project-configuration.md | 1 +
 1 file changed, 1 insertion(+)


[flink] 02/02: [hotfix][connectors/kafka] Rename EventDeSerializer to EventDeSerializationSchema in examples

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git

commit d818ccb2f6317e910764908af55937762a33c377
Author: Fabian Paul 
AuthorDate: Wed Sep 15 16:49:16 2021 +0200

[hotfix][connectors/kafka] Rename EventDeSerializer to EventDeSerializationSchema in examples
---
 .../streaming/examples/statemachine/KafkaEventsGeneratorJob.java  | 4 ++--
 .../flink/streaming/examples/statemachine/StateMachineExample.java| 4 ++--
 .../kafka/{EventDeSerializer.java => EventDeSerializationSchema.java} | 3 ++-
 .../examples/statemachine/kafka/KafkaStandaloneGenerator.java | 4 ++--
 4 files changed, 8 insertions(+), 7 deletions(-)

diff --git 
a/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/KafkaEventsGeneratorJob.java
 
b/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/KafkaEventsGeneratorJob.java
index 822d17c..a065f84 100644
--- 
a/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/KafkaEventsGeneratorJob.java
+++ 
b/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/KafkaEventsGeneratorJob.java
@@ -24,7 +24,7 @@ import org.apache.flink.connector.kafka.sink.KafkaSink;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.streaming.examples.statemachine.event.Event;
 import 
org.apache.flink.streaming.examples.statemachine.generator.EventsGeneratorSource;
-import 
org.apache.flink.streaming.examples.statemachine.kafka.EventDeSerializer;
+import 
org.apache.flink.streaming.examples.statemachine.kafka.EventDeSerializationSchema;
 
 /**
  * Job to generate input events that are written to Kafka, for the {@link 
StateMachineExample} job.
@@ -55,7 +55,7 @@ public class KafkaEventsGeneratorJob {
 .setRecordSerializer(
 
KafkaRecordSerializationSchema.builder()
 .setValueSerializationSchema(
-new 
EventDeSerializer())
+new 
EventDeSerializationSchema())
 .setTopic(kafkaTopic)
 .build())
 .build());
diff --git 
a/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/StateMachineExample.java
 
b/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/StateMachineExample.java
index 93d3326..72a1587 100644
--- 
a/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/StateMachineExample.java
+++ 
b/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/StateMachineExample.java
@@ -36,7 +36,7 @@ import 
org.apache.flink.streaming.examples.statemachine.dfa.State;
 import org.apache.flink.streaming.examples.statemachine.event.Alert;
 import org.apache.flink.streaming.examples.statemachine.event.Event;
 import 
org.apache.flink.streaming.examples.statemachine.generator.EventsGeneratorSource;
-import 
org.apache.flink.streaming.examples.statemachine.kafka.EventDeSerializer;
+import 
org.apache.flink.streaming.examples.statemachine.kafka.EventDeSerializationSchema;
 import org.apache.flink.util.Collector;
 
 /**
@@ -102,7 +102,7 @@ public class StateMachineExample {
 .setTopics(kafkaTopic)
 .setDeserializer(
 KafkaRecordDeserializationSchema.valueOnly(
-new EventDeSerializer()))
+new EventDeSerializationSchema()))
 .setStartingOffsets(OffsetsInitializer.latest())
 .build();
 events =
diff --git 
a/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/kafka/EventDeSerializer.java
 
b/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/kafka/EventDeSerializationSchema.java
similarity index 95%
rename from 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/kafka/EventDeSerializer.java
rename to 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/kafka/EventDeSerializationSchema.java
index a0f4099..42bd675 100644
--- 
a/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/kafka/EventDeSerializer.java
+++

[flink] 01/02: [FLINK-24292][connectors/kafka] Use KafkaSink in examples instead of FlinkKafkaProducer

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 06d4828423e2d4e29fe5ddf5710ca651805e5d7a
Author: Fabian Paul 
AuthorDate: Wed Sep 15 12:46:41 2021 +0200

[FLINK-24292][connectors/kafka] Use KafkaSink in examples instead of FlinkKafkaProducer
---
 .../flink/streaming/kafka/test/KafkaExample.java   | 24 ++
 .../statemachine/KafkaEventsGeneratorJob.java  | 15 --
 2 files changed, 29 insertions(+), 10 deletions(-)

diff --git 
a/flink-end-to-end-tests/flink-streaming-kafka-test/src/main/java/org/apache/flink/streaming/kafka/test/KafkaExample.java
 
b/flink-end-to-end-tests/flink-streaming-kafka-test/src/main/java/org/apache/flink/streaming/kafka/test/KafkaExample.java
index 36988b7..01e4819 100644
--- 
a/flink-end-to-end-tests/flink-streaming-kafka-test/src/main/java/org/apache/flink/streaming/kafka/test/KafkaExample.java
+++ 
b/flink-end-to-end-tests/flink-streaming-kafka-test/src/main/java/org/apache/flink/streaming/kafka/test/KafkaExample.java
@@ -18,17 +18,19 @@
 package org.apache.flink.streaming.kafka.test;
 
 import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
 import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
-import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
-import 
org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper;
 import org.apache.flink.streaming.kafka.test.base.CustomWatermarkExtractor;
 import org.apache.flink.streaming.kafka.test.base.KafkaEvent;
 import org.apache.flink.streaming.kafka.test.base.KafkaEventSchema;
 import org.apache.flink.streaming.kafka.test.base.KafkaExampleUtil;
 import org.apache.flink.streaming.kafka.test.base.RollingAdditionMapper;
 
+import org.apache.kafka.clients.producer.ProducerConfig;
+
 /**
  * A simple example that shows how to read from and write to modern Kafka. 
This will read String
  * messages from the input topic, parse them into a POJO type {@link 
KafkaEvent}, group by some key,
@@ -60,12 +62,18 @@ public class KafkaExample extends KafkaExampleUtil {
 .keyBy("word")
 .map(new RollingAdditionMapper());
 
-input.addSink(
-new FlinkKafkaProducer<>(
-parameterTool.getRequired("output-topic"),
-new KeyedSerializationSchemaWrapper<>(new 
KafkaEventSchema()),
-parameterTool.getProperties(),
-FlinkKafkaProducer.Semantic.EXACTLY_ONCE));
+input.sinkTo(
+KafkaSink.builder()
+.setBootstrapServers(
+parameterTool
+.getProperties()
+
.getProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))
+.setRecordSerializer(
+KafkaRecordSerializationSchema.builder()
+.setTopic(parameterTool.getRequired("output-topic"))
+.setValueSerializationSchema(new KafkaEventSchema())
+.build())
+.build());
 
 env.execute("Modern Kafka Example");
 }
diff --git a/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/KafkaEventsGeneratorJob.java b/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/KafkaEventsGeneratorJob.java
index 0d3d476..822d17c 100644
--- a/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/KafkaEventsGeneratorJob.java
+++ b/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/KafkaEventsGeneratorJob.java
@@ -19,8 +19,10 @@
 package org.apache.flink.streaming.examples.statemachine;
 
 import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
-import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
+import org.apache.flink.streaming.examples.statemachine.event.Event;
import org.apache.flink.streaming.examples.statemachine.generator.EventsGeneratorSource;
import org.apache.flink.streaming.examples.statemachine.kafka.EventDeSerializer;
 
@@ 

[flink] branch release-1.14 updated (5dd99ed -> d818ccb)

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 5dd99ed  [FLINK-24281][connectors/kafka] Migrate all format tests from 
FlinkKafkaProducer to KafkaSink
 new 06d4828  [FLINK-24292][connectors/kafka] Use KafkaSink in examples 
instead of FlinkKafkaProducer
 new d818ccb  [hotfix][connectors/kafka] Rename EventDeSerializer to 
EventDeSerializationSchema in examples

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../flink/streaming/kafka/test/KafkaExample.java   | 24 ++
 .../statemachine/KafkaEventsGeneratorJob.java  | 17 ---
 .../examples/statemachine/StateMachineExample.java |  4 ++--
 ...alizer.java => EventDeSerializationSchema.java} |  3 ++-
 .../kafka/KafkaStandaloneGenerator.java|  4 ++--
 5 files changed, 36 insertions(+), 16 deletions(-)
 rename flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/kafka/{EventDeSerializer.java => EventDeSerializationSchema.java} (95%)


[flink] branch master updated (4a89552 -> 8664b36)

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 4a89552  [FLINK-24281][connectors/kafka] Migrate all format tests from 
FlinkKafkaProducer to KafkaSink
 add a83f8f4  [FLINK-24292][connectors/kafka] Use KafkaSink in examples 
instead of FlinkKafkaProducer
 add 8664b36  [hotfix][connectors/kafka] Rename EventDeSerializer to 
EventDeSerializationSchema in examples

No new revisions were added by this update.

Summary of changes:
 .../flink/streaming/kafka/test/KafkaExample.java   | 24 ++
 .../statemachine/KafkaEventsGeneratorJob.java  | 17 ---
 .../examples/statemachine/StateMachineExample.java |  4 ++--
 ...alizer.java => EventDeSerializationSchema.java} |  3 ++-
 .../kafka/KafkaStandaloneGenerator.java|  4 ++--
 5 files changed, 36 insertions(+), 16 deletions(-)
 rename flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/kafka/{EventDeSerializer.java => EventDeSerializationSchema.java} (95%)


[flink] 03/03: [FLINK-24281][connectors/kafka] Migrate all format tests from FlinkKafkaProducer to KafkaSink

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 5dd99eddef34e0f90ed9a1bc8648735bd464c4b4
Author: Fabian Paul 
AuthorDate: Tue Sep 14 16:04:25 2021 +0200

[FLINK-24281][connectors/kafka] Migrate all format tests from 
FlinkKafkaProducer to KafkaSink
---
 .../kafka/table/KafkaChangelogTableITCase.java | 82 --
 .../connectors/kafka/table/KafkaTableTestBase.java |  3 +
 .../registry/test/TestAvroConsumerConfluent.java   | 48 -
 3 files changed, 66 insertions(+), 67 deletions(-)

diff --git a/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaChangelogTableITCase.java b/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaChangelogTableITCase.java
index 70d5060..501d470 100644
--- a/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaChangelogTableITCase.java
+++ b/flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaChangelogTableITCase.java
@@ -21,12 +21,15 @@ package org.apache.flink.streaming.connectors.kafka.table;
 import org.apache.flink.api.common.serialization.SerializationSchema;
 import org.apache.flink.api.common.serialization.SimpleStringSchema;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
 import org.apache.flink.streaming.api.datastream.DataStreamSource;
-import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;
 import org.apache.flink.table.api.TableResult;
 
+import org.apache.kafka.clients.producer.ProducerConfig;
 import org.junit.Before;
 import org.junit.Test;
 
@@ -36,8 +39,6 @@ import java.util.Arrays;
 import java.util.List;
 import java.util.Properties;
 
-import static org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.DEFAULT_KAFKA_PRODUCERS_POOL_SIZE;
-import static org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic.EXACTLY_ONCE;
import static org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestUtils.readLines;
import static org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestUtils.waitingExpectedResults;
 
@@ -65,23 +66,8 @@ public class KafkaChangelogTableITCase extends KafkaTableTestBase {
 
 // -- Write the Debezium json into Kafka ---
 List lines = readLines("debezium-data-schema-exclude.txt");
-DataStreamSource<String> stream = env.fromCollection(lines);
-SerializationSchema<String> serSchema = new SimpleStringSchema();
-FlinkKafkaPartitioner<String> partitioner = new FlinkFixedPartitioner<>();
-
-// the producer must not produce duplicates
-Properties producerProperties = getStandardProps();
-producerProperties.setProperty("retries", "0");
 try {
-stream.addSink(
-new FlinkKafkaProducer<>(
-topic,
-serSchema,
-producerProperties,
-partitioner,
-EXACTLY_ONCE,
-DEFAULT_KAFKA_PRODUCERS_POOL_SIZE));
-env.execute("Write sequence");
+writeRecordsToKafka(topic, lines);
 } catch (Exception e) {
 throw new Exception("Failed to write debezium data to Kafka.", e);
 }
@@ -208,23 +194,8 @@ public class KafkaChangelogTableITCase extends KafkaTableTestBase {
 
 // -- Write the Canal json into Kafka ---
 List lines = readLines("canal-data.txt");
-DataStreamSource<String> stream = env.fromCollection(lines);
-SerializationSchema<String> serSchema = new SimpleStringSchema();
-FlinkKafkaPartitioner<String> partitioner = new FlinkFixedPartitioner<>();
-
-// the producer must not produce duplicates
-Properties producerProperties = getStandardProps();
-producerProperties.setProperty("retries", "0");
 try {
-stream.addSink(
-new FlinkKafkaProducer<>(
-topic,
-serSchema,
-producerProperties,
-partitioner,
-EXACTLY_ONCE,
-DEFAULT_KAFKA_PRODUCERS_POOL_SIZE));
-   

[flink] 01/03: [hotfix][connectors/kafka] Remove unused code from KafkaDynamicSink

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 587e6770f19109cd749da48dc91ccf56fc24799c
Author: Fabian Paul 
AuthorDate: Tue Sep 14 14:29:07 2021 +0200

[hotfix][connectors/kafka] Remove unused code from KafkaDynamicSink
---
 .../connectors/kafka/table/KafkaDynamicSink.java  | 15 ---
 1 file changed, 15 deletions(-)

diff --git a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaDynamicSink.java b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaDynamicSink.java
index 1fd9caf..a67472b 100644
--- a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaDynamicSink.java
+++ b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaDynamicSink.java
@@ -27,7 +27,6 @@ import org.apache.flink.connector.base.DeliveryGuarantee;
 import org.apache.flink.connector.kafka.sink.KafkaSink;
 import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
 import org.apache.flink.streaming.api.datastream.DataStreamSink;
-import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;
 import org.apache.flink.table.api.DataTypes;
 import org.apache.flink.table.connector.ChangelogMode;
@@ -359,20 +358,6 @@ public class KafkaDynamicSink implements DynamicTableSink, SupportsWritingMetada
 return metadataKeys.size() > 0;
 }
 
-private static FlinkKafkaProducer.Semantic getSemantic(DeliveryGuarantee deliveryGuarantee) {
-switch (deliveryGuarantee) {
-case NONE:
-return FlinkKafkaProducer.Semantic.NONE;
-case AT_LEAST_ONCE:
-return FlinkKafkaProducer.Semantic.AT_LEAST_ONCE;
-case EXACTLY_ONCE:
-return FlinkKafkaProducer.Semantic.EXACTLY_ONCE;
-default:
-throw new IllegalStateException(
-"Unsupported delivery guarantee " + deliveryGuarantee);
-}
-}
-
 private RowData.FieldGetter[] getFieldGetters(
 List physicalChildren, int[] keyProjection) {
 return Arrays.stream(keyProjection)


[flink] 02/03: [FLINK-24281][connectors/kafka] Only allow KafkaSinkBuilder creation with KafkaSink.builder()

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 83fd46fbdbb864984e2d0134fa7151e1e5c13f77
Author: Fabian Paul 
AuthorDate: Wed Sep 15 14:56:56 2021 +0200

[FLINK-24281][connectors/kafka] Only allow KafkaSinkBuilder creation with 
KafkaSink.builder()
---
 .../java/org/apache/flink/connector/kafka/sink/KafkaSinkBuilder.java| 2 ++
 1 file changed, 2 insertions(+)

diff --git a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaSinkBuilder.java b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaSinkBuilder.java
index 678a48e..e87d4dd 100644
--- a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaSinkBuilder.java
+++ b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaSinkBuilder.java
@@ -70,6 +70,8 @@ public class KafkaSinkBuilder {
 private KafkaRecordSerializationSchema recordSerializer;
 private String bootstrapServers;
 
+KafkaSinkBuilder() {}
+
 /**
 * Sets the wanted the {@link DeliveryGuarantee}. The default delivery guarantee is {@link
  * #deliveryGuarantee}.


[flink] branch release-1.14 updated (c05dd396 -> 5dd99ed)

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git.


from c05dd396 [FLINK-24317][python][tests] Optimize the implementation of 
Top2 in test_flat_aggregate
 new 587e677  [hotfix][connectors/kafka] Remove unused code from 
KafkaDynamicSink
 new 83fd46f  [FLINK-24281][connectors/kafka] Only allow KafkaSinkBuilder 
creation with KafkaSink.builder()
 new 5dd99ed  [FLINK-24281][connectors/kafka] Migrate all format tests from 
FlinkKafkaProducer to KafkaSink

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../connector/kafka/sink/KafkaSinkBuilder.java |  2 +
 .../connectors/kafka/table/KafkaDynamicSink.java   | 15 
 .../kafka/table/KafkaChangelogTableITCase.java | 82 --
 .../connectors/kafka/table/KafkaTableTestBase.java |  3 +
 .../registry/test/TestAvroConsumerConfluent.java   | 48 -
 5 files changed, 68 insertions(+), 82 deletions(-)


[flink] branch master updated (8debdd0 -> 4a89552)

2021-09-17 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 8debdd0  [FLINK-24317][python][tests] Optimize the implementation of 
Top2 in test_flat_aggregate
 add f763706  [hotfix][connectors/kafka] Remove unused code from 
KafkaDynamicSink
 add f82b2a9  [FLINK-24281][connectors/kafka] Only allow KafkaSinkBuilder 
creation with KafkaSink.builder()
 add 4a89552  [FLINK-24281][connectors/kafka] Migrate all format tests from 
FlinkKafkaProducer to KafkaSink

No new revisions were added by this update.

Summary of changes:
 .../connector/kafka/sink/KafkaSinkBuilder.java |  2 +
 .../connectors/kafka/table/KafkaDynamicSink.java   | 15 
 .../kafka/table/KafkaChangelogTableITCase.java | 82 --
 .../connectors/kafka/table/KafkaTableTestBase.java |  3 +
 .../registry/test/TestAvroConsumerConfluent.java   | 48 -
 5 files changed, 68 insertions(+), 82 deletions(-)


[flink] branch master updated (92d857a -> 43fe4be)

2021-08-25 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 92d857a  [FLINK-23923][python] Log the environment variables and 
command when starting Python process
 add 43fe4be  [hotfix][docs] Fix typo in CLI docs

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/deployment/cli.md | 2 +-
 docs/content/docs/deployment/cli.md| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)


[flink] branch master updated: [FLINK-23562][CI] Update CI JDK to 1.8.0_292

2021-08-03 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 4d19a9f  [FLINK-23562][CI] Update CI JDK to 1.8.0_292
4d19a9f is described below

commit 4d19a9f09e58ae5726901b1c6c473b655d908440
Author: Robert Metzger 
AuthorDate: Fri Jul 30 13:21:03 2021 +0200

[FLINK-23562][CI] Update CI JDK to 1.8.0_292
---
 azure-pipelines.yml | 2 +-
 tools/azure-pipelines/build-apache-repo.yml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 8c7ed6a..d1e8b2b 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -38,7 +38,7 @@ resources:
   containers:
   # Container with Maven 3.2.5, SSL to have the same environment everywhere.
   - container: flink-build-container
-image: rmetzger/flink-ci:ubuntu-amd64-7ac4e28
+image: rmetzger/flink-ci:ubuntu-amd64-02878c6
 # On AZP provided machines, set this flag to allow writing coredumps in 
docker
 options: --privileged
 
diff --git a/tools/azure-pipelines/build-apache-repo.yml b/tools/azure-pipelines/build-apache-repo.yml
index 37679d8..cd0030e 100644
--- a/tools/azure-pipelines/build-apache-repo.yml
+++ b/tools/azure-pipelines/build-apache-repo.yml
@@ -38,7 +38,7 @@ resources:
   containers:
   # Container with Maven 3.2.5, SSL to have the same environment everywhere.
   - container: flink-build-container
-image: rmetzger/flink-ci:ubuntu-amd64-7ac4e28
+image: rmetzger/flink-ci:ubuntu-amd64-02878c6
 
 variables:
   MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository


[flink] branch release-1.13 updated: [FLINK-23546][dist] Supress error messages on macOS

2021-07-30 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.13 by this push:
 new d5bf264  [FLINK-23546][dist] Supress error messages on macOS
d5bf264 is described below

commit d5bf26448780d2bfc3ec4db28c8f8c91b1435487
Author: Robert Metzger 
AuthorDate: Thu Jul 29 17:14:01 2021 +0200

[FLINK-23546][dist] Supress error messages on macOS
---
 flink-dist/src/main/flink-bin/bin/flink-daemon.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flink-dist/src/main/flink-bin/bin/flink-daemon.sh b/flink-dist/src/main/flink-bin/bin/flink-daemon.sh
index 67fe698..c6d5442 100644
--- a/flink-dist/src/main/flink-bin/bin/flink-daemon.sh
+++ b/flink-dist/src/main/flink-bin/bin/flink-daemon.sh
@@ -97,7 +97,7 @@ function guaranteed_kill {
   # if timeout exists, use it
   if command -v timeout &> /dev/null ; then
# wait 10 seconds for process to stop. By default, Flink kills the JVM 5 seconds after sigterm.
-timeout 10 tail --pid=$to_stop_pid -f /dev/null
+timeout 10 tail --pid=$to_stop_pid -f /dev/null &> /dev/null
 if [ "$?" -eq 124 ]; then
   echo "Daemon $daemon didn't stop within 10 seconds. Killing it."
   # send sigkill
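
The `guaranteed_kill` change above hinges on one shell trick: GNU `tail --pid=PID -f /dev/null` blocks until the watched PID exits, and `timeout` caps that wait, returning exit code 124 when the deadline fires first. A minimal, self-contained sketch of that waiting logic (assumes GNU coreutils `timeout` and `tail --pid`; the helper name `wait_for_exit` is mine, not Flink's):

```shell
#!/usr/bin/env bash
# Wait up to $2 seconds for PID $1 to exit, mimicking guaranteed_kill's wait.
wait_for_exit() {
  local pid=$1 deadline=$2
  # tail -f on /dev/null exits once the watched PID dies; timeout caps the wait.
  timeout "$deadline" tail --pid="$pid" -f /dev/null > /dev/null 2>&1
  if [ "$?" -eq 124 ]; then
    echo "still running"   # timeout's exit code 124: deadline hit first
  else
    echo "exited"
  fi
}

sleep 1 &
wait_for_exit $! 5          # prints "exited": sleep finishes within 5s

sleep 30 &
slow=$!
wait_for_exit "$slow" 2     # prints "still running": deadline hits first
kill "$slow" 2>/dev/null
```

On macOS the stock BSD `tail` has no `--pid` flag and prints an error instead, which appears to be the noise this commit's `&> /dev/null` redirection suppresses.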


[flink] branch master updated: [FLINK-23546][dist] Supress error messages on macOS

2021-07-30 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 3b11554  [FLINK-23546][dist] Supress error messages on macOS
3b11554 is described below

commit 3b115544b04572831e162288097105c63ca5e048
Author: Robert Metzger 
AuthorDate: Thu Jul 29 17:14:01 2021 +0200

[FLINK-23546][dist] Supress error messages on macOS
---
 flink-dist/src/main/flink-bin/bin/flink-daemon.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flink-dist/src/main/flink-bin/bin/flink-daemon.sh b/flink-dist/src/main/flink-bin/bin/flink-daemon.sh
index 67fe698..c6d5442 100644
--- a/flink-dist/src/main/flink-bin/bin/flink-daemon.sh
+++ b/flink-dist/src/main/flink-bin/bin/flink-daemon.sh
@@ -97,7 +97,7 @@ function guaranteed_kill {
   # if timeout exists, use it
   if command -v timeout &> /dev/null ; then
# wait 10 seconds for process to stop. By default, Flink kills the JVM 5 seconds after sigterm.
-timeout 10 tail --pid=$to_stop_pid -f /dev/null
+timeout 10 tail --pid=$to_stop_pid -f /dev/null &> /dev/null
 if [ "$?" -eq 124 ]; then
   echo "Daemon $daemon didn't stop within 10 seconds. Killing it."
   # send sigkill


[flink] branch release-1.13 updated (045e68d -> fb42772)

2021-06-28 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 045e68d  [FLINK-22966][runtime] StateAssignmentOperation returns only 
not null state handles
 new b389e3f  [hotfix][docs] Mention -D argument for CLI
 new fb42772  [FLINK-23052][ci] Improve stability of maven snapshot 
deployment

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/content.zh/docs/deployment/cli.md   | 8 
 docs/content/docs/deployment/cli.md  | 8 
 tools/azure-pipelines/build-nightly-dist.yml | 4 ++--
 3 files changed, 18 insertions(+), 2 deletions(-)


[flink] 01/02: [hotfix][docs] Mention -D argument for CLI

2021-06-28 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b389e3f446e020dcc59294204c83d88adaba26c7
Author: Robert Metzger 
AuthorDate: Thu Jun 17 08:08:25 2021 +0200

[hotfix][docs] Mention -D argument for CLI
---
 docs/content.zh/docs/deployment/cli.md | 8 
 docs/content/docs/deployment/cli.md| 8 
 2 files changed, 16 insertions(+)

diff --git a/docs/content.zh/docs/deployment/cli.md b/docs/content.zh/docs/deployment/cli.md
index 13d8bb60..cd43d00 100644
--- a/docs/content.zh/docs/deployment/cli.md
+++ b/docs/content.zh/docs/deployment/cli.md
@@ -80,6 +80,14 @@ There is another action called `run-application` available 
to run the job in
 [Application Mode]({{< ref "docs/deployment/overview" >}}#application-mode). 
This documentation does not address
 this action individually as it works similarly to the `run` action in terms of 
the CLI frontend.
 
+The `run` and `run-application` commands support passing additional 
configuration parameters via the
+`-D` argument. For example setting the [maximum parallelism]({{< ref 
"docs/deployment/config#pipeline-max-parallelism" >}}#application-mode) 
+for a job can be done by setting `-Dpipeline.max-parallelism=120`. This 
argument is very useful for
+configuring per-job or application mode clusters, because you can pass any 
configuration parameter 
+to the cluster, without changing the configuration file.
+
+When submitting a job to an existing session cluster, only [execution 
configuration parameters]({{< ref "docs/deployment/config#execution" >}}) are 
supported.
+
 ### Job Monitoring
 
 You can monitor any running jobs using the `list` action:
diff --git a/docs/content/docs/deployment/cli.md b/docs/content/docs/deployment/cli.md
index 9761288..f229367 100644
--- a/docs/content/docs/deployment/cli.md
+++ b/docs/content/docs/deployment/cli.md
@@ -78,6 +78,14 @@ There is another action called `run-application` available 
to run the job in
 [Application Mode]({{< ref "docs/deployment/overview" >}}#application-mode). 
This documentation does not address
 this action individually as it works similarly to the `run` action in terms of 
the CLI frontend.
 
+The `run` and `run-application` commands support passing additional 
configuration parameters via the
+`-D` argument. For example setting the [maximum parallelism]({{< ref 
"docs/deployment/config#pipeline-max-parallelism" >}}#application-mode) 
+for a job can be done by setting `-Dpipeline.max-parallelism=120`. This 
argument is very useful for
+configuring per-job or application mode clusters, because you can pass any 
configuration parameter 
+to the cluster, without changing the configuration file.
+
+When submitting a job to an existing session cluster, only [execution 
configuration parameters]({{< ref "docs/deployment/config#execution" >}}) are 
supported.
+
 ### Job Monitoring
 
 You can monitor any running jobs using the `list` action:
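
For illustration, the `-D` syntax the new docs text describes looks like this on the command line. This sketch only echoes the assembled command rather than invoking a real `flink` binary; `MyJob.jar` is a hypothetical placeholder, and `parallelism.default` is simply a second example option not taken from the commit:

```shell
# Sketch only: print a `flink run` invocation that passes config via -D.
# `flink` and MyJob.jar are hypothetical placeholders here.
FLINK_ARGS=(-Dpipeline.max-parallelism=120 -Dparallelism.default=8)
echo flink run "${FLINK_ARGS[@]}" ./MyJob.jar
# prints: flink run -Dpipeline.max-parallelism=120 -Dparallelism.default=8 ./MyJob.jar
```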


[flink] 02/02: [FLINK-23052][ci] Improve stability of maven snapshot deployment

2021-06-28 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git

commit fb42772920366dceb26d593d1c1da75afd3309c9
Author: Robert Metzger 
AuthorDate: Fri Jun 25 09:22:50 2021 +0200

[FLINK-23052][ci] Improve stability of maven snapshot deployment
---
 tools/azure-pipelines/build-nightly-dist.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/azure-pipelines/build-nightly-dist.yml b/tools/azure-pipelines/build-nightly-dist.yml
index 5e170bc..0d5d5a3 100644
--- a/tools/azure-pipelines/build-nightly-dist.yml
+++ b/tools/azure-pipelines/build-nightly-dist.yml
@@ -65,7 +65,7 @@ jobs:
 pool:
   vmImage: 'ubuntu-20.04'
 container: flink-build-container
-timeoutInMinutes: 100 # 40 minutes per scala version + 20 buffer
+timeoutInMinutes: 240
 workspace:
   clean: all
 steps:
@@ -109,7 +109,7 @@ jobs:
 
 EOF
 
-export CUSTOM_OPTIONS="-Dgpg.skip -Drat.skip -Dcheckstyle.skip --settings $(pwd)/deploy-settings.xml"
+export CUSTOM_OPTIONS="-Dgpg.skip -Drat.skip -Dcheckstyle.skip -Dmaven.wagon.http.pool=false --settings $(pwd)/deploy-settings.xml"
 export MVN_RUN_VERBOSE=true
 ./releasing/deploy_staging_jars.sh
 env:


[flink] branch master updated (d6ee4c4 -> 09ac6be7)

2021-06-28 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from d6ee4c4  [hotfix][docs] Mention -D argument for CLI
 add 09ac6be7 [FLINK-23052][ci] Improve stability of maven snapshot 
deployment

No new revisions were added by this update.

Summary of changes:
 tools/azure-pipelines/build-nightly-dist.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


[flink] branch master updated (649b5c9 -> d6ee4c4)

2021-06-28 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 649b5c9  [FLINK-22966][runtime] StateAssignmentOperation returns only 
not null state handles
 add d6ee4c4  [hotfix][docs] Mention -D argument for CLI

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/deployment/cli.md | 8 
 docs/content/docs/deployment/cli.md| 8 
 2 files changed, 16 insertions(+)


[flink] 02/02: [FLINK-22464][tests] Fix OperatorCoordinator test which is stalling or slow with AdaptiveScheduler

2021-06-25 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git

commit cd54b584371b67e67015d104d34c2f19a446e2b3
Author: Robert Metzger 
AuthorDate: Mon Jun 21 22:21:21 2021 +0200

[FLINK-22464][tests] Fix OperatorCoordinator test which is stalling or slow 
with AdaptiveScheduler

This closes #16229
---
 .../OperatorEventSendingCheckpointITCase.java  | 70 --
 1 file changed, 64 insertions(+), 6 deletions(-)

diff --git a/flink-tests/src/test/java/org/apache/flink/runtime/operators/coordination/OperatorEventSendingCheckpointITCase.java b/flink-tests/src/test/java/org/apache/flink/runtime/operators/coordination/OperatorEventSendingCheckpointITCase.java
index 8d460f2..f6bf272 100644
--- a/flink-tests/src/test/java/org/apache/flink/runtime/operators/coordination/OperatorEventSendingCheckpointITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/runtime/operators/coordination/OperatorEventSendingCheckpointITCase.java
@@ -20,12 +20,17 @@ package org.apache.flink.runtime.operators.coordination;
 
 import org.apache.flink.api.common.eventtime.WatermarkStrategy;
 import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
 import org.apache.flink.api.connector.source.SplitEnumerator;
 import org.apache.flink.api.connector.source.SplitEnumeratorContext;
 import org.apache.flink.api.connector.source.lib.NumberSequenceSource;
 import org.apache.flink.api.connector.source.lib.util.IteratorSourceEnumerator;
+import org.apache.flink.api.connector.source.lib.util.IteratorSourceReader;
 import org.apache.flink.api.connector.source.lib.util.IteratorSourceSplit;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.io.InputStatus;
 import org.apache.flink.runtime.akka.AkkaUtils;
 import org.apache.flink.runtime.concurrent.FutureUtils;
 import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
@@ -46,7 +51,6 @@ import org.apache.flink.runtime.taskexecutor.TaskExecutorGatewayDecoratorBase;
 import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.streaming.util.TestStreamEnvironment;
-import org.apache.flink.testutils.junit.FailsWithAdaptiveScheduler;
 import org.apache.flink.util.SerializedValue;
 import org.apache.flink.util.TestLogger;
 import org.apache.flink.util.function.TriFunction;
@@ -55,7 +59,6 @@ import akka.actor.ActorSystem;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
-import org.junit.experimental.categories.Category;
 
 import javax.annotation.Nullable;
 
@@ -85,7 +88,10 @@ public class OperatorEventSendingCheckpointITCase extends TestLogger {
 
 @BeforeClass
 public static void setupMiniClusterAndEnv() throws Exception {
-flinkCluster = new MiniClusterWithRpcIntercepting(PARALLELISM);
+Configuration config = new Configuration();
+// uncomment to run test with adaptive scheduler
+// config.set(JobManagerOptions.SCHEDULER, JobManagerOptions.SchedulerType.Adaptive);
+flinkCluster = new MiniClusterWithRpcIntercepting(PARALLELISM, config);
 flinkCluster.start();
 TestStreamEnvironment.setAsContext(flinkCluster, PARALLELISM);
 }
@@ -126,7 +132,6 @@ public class OperatorEventSendingCheckpointITCase extends TestLogger {
  * additionally a failure on the reader that triggers recovery.
  */
 @Test
-@Category(FailsWithAdaptiveScheduler.class) // FLINK-22464
 public void testOperatorEventLostWithReaderFailure() throws Exception {
 final int[] eventsToLose = new int[] {1, 3};
 
@@ -205,6 +210,22 @@ public class OperatorEventSendingCheckpointITCase extends TestLogger {
 env.setParallelism(1);
 env.enableCheckpointing(50);
 
+// This test depends on checkpoints persisting progress from the source before the
+// artificial exception gets triggered. Otherwise, the job will run for a long time (or
+// forever) because the exception will be thrown before any checkpoint successfully
+// completes.
+//
+// Checkpoints are triggered once the checkpoint scheduler gets started + a random initial
+// delay. For DefaultScheduler, this mechanism is fine, because DS starts the checkpoint
+// coordinator, then requests the required slots and then deploys the tasks. These
+// operations take enough time to have a checkpoint triggered by the time the task starts
+// running. AdaptiveScheduler starts the CheckpointCoordinator right before deploying tasks
+// (when slots are available already), hence tasks

[flink] 01/02: [hotfix] Fix some typos in comments

2021-06-25 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git

commit fa52d7d53c8852f12dba912089c759689fd92668
Author: Robert Metzger 
AuthorDate: Mon Jun 21 22:19:49 2021 +0200

[hotfix] Fix some typos in comments
---
 .../api/connector/source/lib/util/IteratorSourceEnumerator.java   | 8 +++-
 .../flink/runtime/checkpoint/DefaultCheckpointPlanCalculator.java | 2 +-
 .../flink/runtime/operators/coordination/OperatorCoordinator.java | 6 +++---
 3 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/flink-core/src/main/java/org/apache/flink/api/connector/source/lib/util/IteratorSourceEnumerator.java b/flink-core/src/main/java/org/apache/flink/api/connector/source/lib/util/IteratorSourceEnumerator.java
index 9c37cca..ff805c5 100644
--- 
a/flink-core/src/main/java/org/apache/flink/api/connector/source/lib/util/IteratorSourceEnumerator.java
+++ 
b/flink-core/src/main/java/org/apache/flink/api/connector/source/lib/util/IteratorSourceEnumerator.java
@@ -78,10 +78,8 @@ public class IteratorSourceEnumerator>
 
 @Override
 public void addReader(int subtaskId) {
-// we don't assign any splits here, because this registration happens after fist startup
-// and after each reader restart/recovery
-// we only want to assign splits once, initially, which we get by reacting to the readers
-// explicit
-// split request
+// we don't assign any splits here, because this registration happens after fist startup and
+// after each reader restart/recovery we only want to assign splits once, initially, which
+// we get by reacting to the readers explicit split request
 }
 }
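The enumerator comment above describes a subtle protocol: reader registration happens repeatedly (first startup and every recovery), so split assignment must be driven by explicit split requests instead. A minimal sketch of that idea, with hypothetical names that are not Flink's API:

```python
# Minimal sketch (hypothetical names, not Flink's API): registration is a
# no-op because readers re-register after every restart, while splits must
# only be assigned once, in response to an explicit split request.
class TinySplitEnumerator:
    def __init__(self, splits):
        self._unassigned = list(splits)

    def add_reader(self, subtask_id):
        # Deliberately empty: called on first startup AND after each
        # reader recovery, so handing out splits here would double-assign.
        pass

    def handle_split_request(self, subtask_id):
        # Splits leave the pool exactly once, driven by the reader.
        return self._unassigned.pop(0) if self._unassigned else None

enumerator = TinySplitEnumerator(["split-1", "split-2"])
enumerator.add_reader(0)                   # no assignment happens here
print(enumerator.handle_split_request(0))  # split-1
```

Re-registering the reader any number of times changes nothing; only requests consume splits.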
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/DefaultCheckpointPlanCalculator.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/DefaultCheckpointPlanCalculator.java
index c81a5dd..d9d392a 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/DefaultCheckpointPlanCalculator.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/DefaultCheckpointPlanCalculator.java
@@ -150,7 +150,7 @@ public class DefaultCheckpointPlanCalculator implements CheckpointPlanCalculator
 if (execution.getState() != ExecutionState.RUNNING) {
 throw new CheckpointException(
 String.format(
-"Checkpoint triggering task %s of job %s has not being executed at the moment. "
+"Checkpoint triggering task %s of job %s is not being executed at the moment. "
 + "Aborting checkpoint.",
 execution.getVertex().getTaskNameWithSubtaskIndex(), jobId),
 CheckpointFailureReason.NOT_ALL_REQUIRED_TASKS_RUNNING);
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/operators/coordination/OperatorCoordinator.java b/flink-runtime/src/main/java/org/apache/flink/runtime/operators/coordination/OperatorCoordinator.java
index 24a546c..2e2869f 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/operators/coordination/OperatorCoordinator.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/operators/coordination/OperatorCoordinator.java
@@ -30,7 +30,7 @@ import java.util.concurrent.CompletableFuture;
 
 /**
 * A coordinator for runtime operators. The OperatorCoordinator runs on the master, associated with
- * the job vertex of the operator. It communicated with operators via sending operator events.
+ * the job vertex of the operator. It communicates with operators via sending operator events.
  *
 * Operator coordinators are for example source and sink coordinators that discover and assign
 * work, or aggregate and commit metadata.
@@ -62,7 +62,7 @@ import java.util.concurrent.CompletableFuture;
 *   scheduler determined which checkpoint to restore, these methods notify the coordinator of
 *   that. The former method is called in case of a regional failure/recovery (affecting
 *   possible a subset of subtasks), the later method in case of a global failure/recovery. This
- *   method should be used to determine which actions to recover, because it tells you with
+ *   method should be used to determine which actions to recover, because it tells you which
 *   checkpoint to fall back to. The coordinator implementation needs to recover the
 *   interactions with the relevant tasks since the checkpoint that is restored.
 *   {@link #subtaskReady(int, SubtaskGateway)}: Called again, once the recovered tasks are
@@ -133,7 +133,7 @@ public interface OperatorCoordinator extends CheckpointListener, AutoCloseable {
  *   checkpoint.
  * 
  *
- * @thro

[flink] branch release-1.13 updated (6634b3f -> cd54b58)

2021-06-25 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 6634b3f  [FLINK-23129][docs] Document ApplicationMode limitations
 new fa52d7d  [hotfix] Fix some typos in comments
 new cd54b58  [FLINK-22464][tests] Fix OperatorCoordinator test which is stalling or slow with AdaptiveScheduler

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../source/lib/util/IteratorSourceEnumerator.java  |  8 +--
 .../DefaultCheckpointPlanCalculator.java   |  2 +-
 .../coordination/OperatorCoordinator.java  |  6 +-
 .../OperatorEventSendingCheckpointITCase.java  | 70 --
 4 files changed, 71 insertions(+), 15 deletions(-)


[flink] branch master updated (88152c0 -> be15eb0)

2021-06-25 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 88152c0  [hotfix][docs] Merge the two Flink POJO definitions to make the definition uniform.
 add 7475027  [hotfix] Fix some typos in comments
 add be15eb0  [FLINK-22464][tests] Fix OperatorCoordinator test which is stalling or slow with AdaptiveScheduler

No new revisions were added by this update.

Summary of changes:
 .../source/lib/util/IteratorSourceEnumerator.java  |  8 +--
 .../DefaultCheckpointPlanCalculator.java   |  2 +-
 .../coordination/OperatorCoordinator.java  |  6 +-
 .../OperatorEventSendingCheckpointITCase.java  | 70 --
 4 files changed, 71 insertions(+), 15 deletions(-)


[flink] branch release-1.13 updated: [FLINK-23129][docs] Document ApplicationMode limitations

2021-06-25 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.13 by this push:
 new 6634b3f  [FLINK-23129][docs] Document ApplicationMode limitations
6634b3f is described below

commit 6634b3fc4ef7305513cef067b15edc3d4fa61e7f
Author: Robert Metzger 
AuthorDate: Thu Jun 24 15:12:55 2021 +0200

[FLINK-23129][docs] Document ApplicationMode limitations

This closes #16281
---
 docs/content.zh/docs/deployment/overview.md | 4 
 docs/content/docs/deployment/overview.md| 4 
 2 files changed, 8 insertions(+)

diff --git a/docs/content.zh/docs/deployment/overview.md b/docs/content.zh/docs/deployment/overview.md
index 960db9b..649d07d 100644
--- a/docs/content.zh/docs/deployment/overview.md
+++ b/docs/content.zh/docs/deployment/overview.md
@@ -205,6 +205,10 @@ non-blocking, will lead to the "next" job starting before "this" job finishes.
 The Application Mode allows for multi-`execute()` applications but 
 High-Availability is not supported in these cases. High-Availability in Application Mode is only
 supported for single-`execute()` applications.
+
+Additionally, when any of multiple running jobs in Application Mode (submitted for example using 
+`executeAsync()`) gets cancelled, all jobs will be stopped and the JobManager will shut down. 
+Regular job completions (by the sources shutting down) are supported.
 {{< /hint >}}
 
  Per-Job Mode
diff --git a/docs/content/docs/deployment/overview.md b/docs/content/docs/deployment/overview.md
index 85452b8..32d7348 100644
--- a/docs/content/docs/deployment/overview.md
+++ b/docs/content/docs/deployment/overview.md
@@ -205,6 +205,10 @@ non-blocking, will lead to the "next" job starting before "this" job finishes.
 The Application Mode allows for multi-`execute()` applications but 
 High-Availability is not supported in these cases. High-Availability in Application Mode is only
 supported for single-`execute()` applications.
+
+Additionally, when any of multiple running jobs in Application Mode (submitted for example using 
+`executeAsync()`) gets cancelled, all jobs will be stopped and the JobManager will shut down. 
+Regular job completions (by the sources shutting down) are supported.
 {{< /hint >}}
 
  Per-Job Mode


[flink] branch master updated (8188199 -> 18a95ca)

2021-06-25 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 8188199  [FLINK-22818][tests] Remembering important values only on certain checkpoint
 add 18a95ca  [FLINK-23129][docs] Document ApplicationMode limitations

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/deployment/overview.md | 4 
 docs/content/docs/deployment/overview.md| 4 
 2 files changed, 8 insertions(+)


[flink] branch master updated (d1b45eb -> 4b62f21)

2021-06-24 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from d1b45eb  [FLINK-23078] Add SchedulerBenchmarkUtils#shutdownTestingUtilDefaultExecutor
 add 4b62f21  [FLINK-23057] add variable expansion for FLINK_ENV_JAVA_OPTS in flink-console.sh

No new revisions were added by this update.

Summary of changes:
 flink-dist/src/main/flink-bin/bin/flink-console.sh | 3 +++
 1 file changed, 3 insertions(+)


[flink-web] branch asf-site updated: Replace Nabble archives by official apache archive

2021-06-23 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 9194dda  Replace Nabble archives by official apache archive
9194dda is described below

commit 9194dda862da00d93f627fd315056471657655d1
Author: Robert Metzger 
AuthorDate: Wed Jun 23 12:24:34 2021 +0200

Replace Nabble archives by official apache archive
---
 community.md| 19 ---
 community.zh.md | 19 ---
 content/community.html  | 19 ---
 content/gettinghelp.html|  3 +--
 content/zh/community.html   | 19 ---
 content/zh/gettinghelp.html |  1 -
 gettinghelp.md  |  3 +--
 gettinghelp.zh.md   |  1 -
 8 files changed, 34 insertions(+), 50 deletions(-)

diff --git a/community.md b/community.md
index 088719e..fbf9fa9 100644
--- a/community.md
+++ b/community.md
@@ -32,7 +32,7 @@ There are many ways to get help from the Apache Flink community. The [mailing li
  mailto:news-unsubscr...@flink.apache.org;>Unsubscribe
  Read only 
list
 
-  http://mail-archives.apache.org/mod_mbox/flink-news/;>Archives 
+  https://lists.apache.org/list.html?n...@flink.apache.org;>Archives 

 
   
   
@@ -45,7 +45,7 @@ There are many ways to get help from the Apache Flink community. The [mailing li
  mailto:community-unsubscr...@flink.apache.org;>Unsubscribe
  mailto:commun...@flink.apache.org;>Post
 
-  http://mail-archives.apache.org/mod_mbox/flink-community/;>Archives 

+  https://lists.apache.org/list.html?commun...@flink.apache.org;>Archives
 
 
   
   
@@ -58,8 +58,7 @@ There are many ways to get help from the Apache Flink community. The [mailing li
  mailto:user-unsubscr...@flink.apache.org;>Unsubscribe
  mailto:u...@flink.apache.org;>Post
 
-  http://mail-archives.apache.org/mod_mbox/flink-user/;>Archives 
-  http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/;>Nabble Archive
+  https://lists.apache.org/list.html?u...@flink.apache.org;>Archives
 
   
 
@@ -72,8 +71,7 @@ There are many ways to get help from the Apache Flink community. The [mailing li
  mailto:user-zh-unsubscr...@flink.apache.org;>Unsubscribe
  mailto:user...@flink.apache.org;>Post
 
-  http://mail-archives.apache.org/mod_mbox/flink-user-zh/;>Archives 
-  http://apache-flink.147419.n8.nabble.com/;>Nabble Archive
+  https://lists.apache.org/list.html?user...@flink.apache.org;>Archives
 
   
   
@@ -86,8 +84,7 @@ There are many ways to get help from the Apache Flink community. The [mailing li
  mailto:dev-unsubscr...@flink.apache.org;>Unsubscribe
  mailto:d...@flink.apache.org;>Post
 
-  http://mail-archives.apache.org/mod_mbox/flink-dev/;>Archives 
-  http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/;>Nabble Archive
+  https://lists.apache.org/list.html?d...@flink.apache.org;>Archives
 
   
   
@@ -100,7 +97,7 @@ There are many ways to get help from the Apache Flink community. The [mailing li
mailto:builds-unsubscr...@flink.apache.org;>Unsubscribe
   Read 
only list
   
-http://mail-archives.apache.org/mod_mbox/flink-builds/;>Archives 
+https://lists.apache.org/list.html?bui...@flink.apache.org;>Archives
   
 
   
@@ -113,7 +110,7 @@ There are many ways to get help from the Apache Flink community. The [mailing li
  mailto:issues-digest-subscr...@flink.apache.org;>Subscribe
  mailto:issues-unsubscr...@flink.apache.org;>Unsubscribe
 Read only 
list
-http://mail-archives.apache.org/mod_mbox/flink-issues/;>Archives
+https://lists.apache.org/list.html?iss...@flink.apache.org;>Archives
   
   
 
@@ -125,7 +122,7 @@ There are many ways to get help from the Apache Flink community. The [mailing li
  mailto:commits-digest-subscr...@flink.apache.org;>Subscribe
  mailto:commits-unsubscr...@flink.apache.org;>Unsubscribe
  Read only 
list
-http://mail-archives.apache.org/mod_mbox/flink-commits/;>Archives
+https://lists.apache.org/list.html?commits@flink.apache.org;>Archives
   
 
 
diff --git a/community.zh.md b/community.zh.md
index 8c2fc0a..020cd9b 100644
--- a/community.zh.md
+++ b/community.zh.md
@@ -31,7 +31,7 @@ title: "社区 & 项目信息"
  mailto:news-unsubscr...@flink.apache.org;>退订
  
只读邮件列表
 
-  http://mail-archives.apache.org/mod_mbox/flink-news/;>归档 

+  https://lists.apache.org/list.html?n...@flink.apache.org;>归档 
 
   
   
@@ -44,7 +44,7 @@ title: "社区 & 项目信息"
  mailto:community-unsubscr...@flink.apache.org;>退订
  mailto:commun...@flink.apache.org;>发送
 

[flink] branch release-1.13 updated: [FLINK-22980][tests] Set parallelism on job vertex required by adaptive scheduler

2021-06-18 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.13 by this push:
 new 1183fb5  [FLINK-22980][tests] Set parallelism on job vertex required by adaptive scheduler
1183fb5 is described below

commit 1183fb5f7d258bb24d829efa229e3eb598050faa
Author: Fabian Paul 
AuthorDate: Thu Jun 17 10:39:05 2021 +0200

[FLINK-22980][tests] Set parallelism on job vertex required by adaptive scheduler
---
 .../flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java   | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java
index 43f69bd..b1a01c6 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java
@@ -349,6 +349,8 @@ public class FileExecutionGraphInfoStoreTest extends TestLogger {
 new PersistingMiniCluster(new MiniClusterConfiguration.Builder().build())) {
 miniCluster.start();
 final JobVertex vertex = new JobVertex("blockingVertex");
+// The adaptive scheduler expects that every vertex has a configured parallelism
+vertex.setParallelism(1);
 vertex.setInvokableClass(SignallingBlockingNoOpInvokable.class);
 final JobGraph jobGraph = JobGraphTestUtils.streamingJobGraph(vertex);
 miniCluster.submitJob(jobGraph);


[flink] branch master updated: [FLINK-22980][tests] Set parallelism on job vertex required by adaptive scheduler

2021-06-18 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 465fc66  [FLINK-22980][tests] Set parallelism on job vertex required by adaptive scheduler
465fc66 is described below

commit 465fc66949c010af740bb242125ce15116ac0aeb
Author: Fabian Paul 
AuthorDate: Thu Jun 17 10:39:05 2021 +0200

[FLINK-22980][tests] Set parallelism on job vertex required by adaptive scheduler
---
 .../flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java   | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java
index 43f69bd..b1a01c6 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java
@@ -349,6 +349,8 @@ public class FileExecutionGraphInfoStoreTest extends TestLogger {
 new PersistingMiniCluster(new MiniClusterConfiguration.Builder().build())) {
 miniCluster.start();
 final JobVertex vertex = new JobVertex("blockingVertex");
+// The adaptive scheduler expects that every vertex has a configured parallelism
+vertex.setParallelism(1);
 vertex.setInvokableClass(SignallingBlockingNoOpInvokable.class);
 final JobGraph jobGraph = JobGraphTestUtils.streamingJobGraph(vertex);
 miniCluster.submitJob(jobGraph);


[flink] branch release-1.12 updated: [FLINK-22856][Azure] Upgrade to ubuntu-20.04

2021-06-08 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.12 by this push:
 new 9831b46  [FLINK-22856][Azure] Upgrade to ubuntu-20.04
9831b46 is described below

commit 9831b46be02bc3819acd02f893cb88d3de8cb46d
Author: Robert Metzger 
AuthorDate: Wed Jun 2 15:10:22 2021 +0200

[FLINK-22856][Azure] Upgrade to ubuntu-20.04
---
 azure-pipelines.yml|  4 +--
 .../test-scripts/common_docker.sh  |  8 ++---
 .../test-scripts/common_kubernetes.sh  |  7 ++--
 tools/azure-pipelines/build-apache-repo.yml| 16 -
 tools/azure-pipelines/build-nightly-dist.yml   |  4 +--
 tools/azure-pipelines/build-python-wheels.yml  |  2 +-
 tools/azure-pipelines/jobs-template.yml| 41 --
 7 files changed, 43 insertions(+), 39 deletions(-)

diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 8f093e4..2df17a7 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -68,9 +68,9 @@ stages:
 parameters: # see template file for a definition of the parameters.
   stage_name: ci_build
   test_pool_definition:
-vmImage: 'ubuntu-16.04'
+vmImage: 'ubuntu-20.04'
   e2e_pool_definition:
-vmImage: 'ubuntu-16.04'
+vmImage: 'ubuntu-20.04'
   environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11"
   run_end_to_end: false
   container: flink-build-container
diff --git a/flink-end-to-end-tests/test-scripts/common_docker.sh b/flink-end-to-end-tests/test-scripts/common_docker.sh
index c2aa887..dc2447e 100644
--- a/flink-end-to-end-tests/test-scripts/common_docker.sh
+++ b/flink-end-to-end-tests/test-scripts/common_docker.sh
@@ -58,15 +58,15 @@ function build_image() {
 }
 
 function start_file_server() {
-command -v python >/dev/null 2>&1
+command -v python3 >/dev/null 2>&1
 if [[ $? -eq 0 ]]; then
-  python ${TEST_INFRA_DIR}/python2_fileserver.py &
+  python3 ${TEST_INFRA_DIR}/python3_fileserver.py &
   return
 fi
 
-command -v python3 >/dev/null 2>&1
+command -v python >/dev/null 2>&1
 if [[ $? -eq 0 ]]; then
-  python3 ${TEST_INFRA_DIR}/python3_fileserver.py &
+  python ${TEST_INFRA_DIR}/python2_fileserver.py &
   return
 fi
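The hunk above flips the probe order so that `python3` is preferred and legacy `python` is only a fallback. The same ordering can be sketched in plain Python, with `shutil.which` standing in for the script's `command -v` checks (an assumption for illustration; the real script is Bash):

```python
# Sketch of the interpreter-probing order after the fix: try python3 first,
# fall back to python only when python3 is absent. shutil.which plays the
# role of `command -v` in the shell script.
import shutil

def pick_fileserver_interpreter():
    for candidate in ("python3", "python"):  # fixed order: python3 wins
        if shutil.which(candidate):
            return candidate
    return None  # mirrors the script falling through with no interpreter

print(pick_fileserver_interpreter())
```

With both interpreters on the PATH the result is always `python3`, which is exactly the behavioral change the diff makes.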
 
diff --git a/flink-end-to-end-tests/test-scripts/common_kubernetes.sh b/flink-end-to-end-tests/test-scripts/common_kubernetes.sh
index 57e8a1b..dd03fdb 100755
--- a/flink-end-to-end-tests/test-scripts/common_kubernetes.sh
+++ b/flink-end-to-end-tests/test-scripts/common_kubernetes.sh
@@ -50,6 +50,8 @@ function setup_kubernetes_for_linux {
 fi
 # conntrack is required for minikube 1.9 and later
 sudo apt-get install conntrack
+# required to resolve HOST_JUJU_LOCK_PERMISSION error of "minikube start --vm-driver=none"
+sudo sysctl fs.protected_regular=0
 }
 
 function check_kubernetes_status {
@@ -76,7 +78,7 @@ function start_kubernetes_if_not_running {
 # here.
 # Similarly, the kubelets are marking themself as "low disk space",
 # causing Flink to avoid this node (again, failing the test)
-sudo CHANGE_MINIKUBE_NONE_USER=true minikube start --vm-driver=none \
+CHANGE_MINIKUBE_NONE_USER=true sudo -E minikube start --vm-driver=none \
 --extra-config=kubelet.image-gc-high-threshold=99 \
 --extra-config=kubelet.image-gc-low-threshold=98 \
 --extra-config=kubelet.minimum-container-ttl-duration=120m \
@@ -108,7 +110,6 @@ function start_kubernetes {
 exit 1
 fi
 fi
-eval $(minikube docker-env)
 }
 
 function stop_kubernetes {
@@ -118,7 +119,7 @@ function stop_kubernetes {
 kill $minikube_mount_pid 2> /dev/null
 else
 echo "Stopping minikube ..."
-stop_command="sudo minikube stop"
+stop_command="minikube stop"
 if ! retry_times ${MINIKUBE_START_RETRIES} ${MINIKUBE_START_BACKOFF} "${stop_command}"; then
 echo "Could not stop minikube. Aborting..."
 exit 1
diff --git a/tools/azure-pipelines/build-apache-repo.yml b/tools/azure-pipelines/build-apache-repo.yml
index cb1ee68..c15623a 100644
--- a/tools/azure-pipelines/build-apache-repo.yml
+++ b/tools/azure-pipelines/build-apache-repo.yml
@@ -63,7 +63,7 @@ stages:
   test_pool_definition:
 name: Default
   e2e_pool_definition:
-vmImage: 'ubuntu-16.04'
+vmImage: 'ubuntu-20.04'
   environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11"
   run_end_to_end: false
   

[flink] branch release-1.13 updated: [FLINK-22856][Azure] Upgrade to ubuntu-20.04

2021-06-08 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.13 by this push:
 new ed02b5d  [FLINK-22856][Azure] Upgrade to ubuntu-20.04
ed02b5d is described below

commit ed02b5deca5881ac5572228ddd1cd3e85eadfd32
Author: Robert Metzger 
AuthorDate: Wed Jun 2 15:10:22 2021 +0200

[FLINK-22856][Azure] Upgrade to ubuntu-20.04
---
 azure-pipelines.yml|  6 ++---
 flink-end-to-end-tests/run-nightly-tests.sh|  2 +-
 .../test-scripts/common_docker.sh  |  8 +++---
 .../test-scripts/common_kubernetes.sh  |  7 +++---
 tools/azure-pipelines/build-apache-repo.yml| 20 +++
 tools/azure-pipelines/build-nightly-dist.yml   |  4 +--
 tools/azure-pipelines/build-python-wheels.yml  |  2 +-
 tools/azure-pipelines/jobs-template.yml| 29 --
 8 files changed, 41 insertions(+), 37 deletions(-)

diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index cbe3b11..23d0f49 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -71,16 +71,16 @@ stages:
 parameters: # see template file for a definition of the parameters.
   stage_name: ci_build
   test_pool_definition:
-vmImage: 'ubuntu-16.04'
+vmImage: 'ubuntu-20.04'
   e2e_pool_definition:
-vmImage: 'ubuntu-16.04'
+vmImage: 'ubuntu-20.04'
   environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11"
   run_end_to_end: false
   container: flink-build-container
   jdk: jdk8
   - job: docs_404_check # run on a MSFT provided machine
 pool:
-  vmImage: 'ubuntu-16.04'
+  vmImage: 'ubuntu-20.04'
 steps:
   - script: ./tools/ci/docs.sh
   # CI / Special stage for release, e.g. building PyFlink wheel packages, etc:
diff --git a/flink-end-to-end-tests/run-nightly-tests.sh b/flink-end-to-end-tests/run-nightly-tests.sh
index ef8b0a8..03f0b33 100755
--- a/flink-end-to-end-tests/run-nightly-tests.sh
+++ b/flink-end-to-end-tests/run-nightly-tests.sh
@@ -37,7 +37,7 @@ if [ -z "$FLINK_LOG_DIR" ] ; then
 fi
 
 # On Azure CI, use artifacts dir
-if [ -z "$DEBUG_FILES_OUTPUT_DIR"] ; then
+if [ -z "$DEBUG_FILES_OUTPUT_DIR" ] ; then
 export DEBUG_FILES_OUTPUT_DIR="$FLINK_LOG_DIR"
 fi
 
diff --git a/flink-end-to-end-tests/test-scripts/common_docker.sh b/flink-end-to-end-tests/test-scripts/common_docker.sh
index af37db8..3127662 100644
--- a/flink-end-to-end-tests/test-scripts/common_docker.sh
+++ b/flink-end-to-end-tests/test-scripts/common_docker.sh
@@ -58,15 +58,15 @@ function build_image() {
 }
 
 function start_file_server() {
-command -v python >/dev/null 2>&1
+command -v python3 >/dev/null 2>&1
 if [[ $? -eq 0 ]]; then
-  python ${TEST_INFRA_DIR}/python2_fileserver.py &
+  python3 ${TEST_INFRA_DIR}/python3_fileserver.py &
   return
 fi
 
-command -v python3 >/dev/null 2>&1
+command -v python >/dev/null 2>&1
 if [[ $? -eq 0 ]]; then
-  python3 ${TEST_INFRA_DIR}/python3_fileserver.py &
+  python ${TEST_INFRA_DIR}/python2_fileserver.py &
   return
 fi
 
diff --git a/flink-end-to-end-tests/test-scripts/common_kubernetes.sh b/flink-end-to-end-tests/test-scripts/common_kubernetes.sh
index 7a393d9..e4fa575 100755
--- a/flink-end-to-end-tests/test-scripts/common_kubernetes.sh
+++ b/flink-end-to-end-tests/test-scripts/common_kubernetes.sh
@@ -50,6 +50,8 @@ function setup_kubernetes_for_linux {
 fi
 # conntrack is required for minikube 1.9 and later
 sudo apt-get install conntrack
+# required to resolve HOST_JUJU_LOCK_PERMISSION error of "minikube start --vm-driver=none"
+sudo sysctl fs.protected_regular=0
 }
 
 function check_kubernetes_status {
@@ -76,7 +78,7 @@ function start_kubernetes_if_not_running {
 # here.
 # Similarly, the kubelets are marking themself as "low disk space",
 # causing Flink to avoid this node (again, failing the test)
-sudo CHANGE_MINIKUBE_NONE_USER=true minikube start --vm-driver=none \
+CHANGE_MINIKUBE_NONE_USER=true sudo -E minikube start --vm-driver=none \
 --extra-config=kubelet.image-gc-high-threshold=99 \
 --extra-config=kubelet.image-gc-low-threshold=98 \
 --extra-config=kubelet.minimum-container-ttl-duration=120m \
@@ -108,7 +110,6 @@ function start_kubernetes {
 exit 1
 fi
 fi
-eval $(minikube docker-env)
 }
 
 function stop_kubernetes {
@@ -118,7 +119,7 @@ function stop_kubernetes {
 kill $minikube_mount_pid 2> /dev/null
 else
 echo &

[flink] branch master updated: [FLINK-22765][runtime][tests] Improve logging for debugging rare test failure

2021-06-08 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new cf6d097  [FLINK-22765][runtime][tests] Improve logging for debugging rare test failure
cf6d097 is described below

commit cf6d097082546394e353754b70344ded9cee51da
Author: Robert Metzger 
AuthorDate: Tue May 25 14:21:13 2021 +0200

[FLINK-22765][runtime][tests] Improve logging for debugging rare test failure

This closes #15998
---
 .../apache/flink/test/util/TestProcessBuilder.java |  6 +++
 .../flink/runtime/util/ExceptionUtilsITCase.java   | 43 +-
 2 files changed, 39 insertions(+), 10 deletions(-)

diff --git a/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestProcessBuilder.java b/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestProcessBuilder.java
index 5771e6f..d03beb9 100644
--- a/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestProcessBuilder.java
+++ b/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/TestProcessBuilder.java
@@ -23,6 +23,9 @@ import org.apache.flink.configuration.MemorySize;
 import org.apache.flink.runtime.testutils.CommonTestUtils;
 import org.apache.flink.runtime.testutils.CommonTestUtils.PipeForwarder;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import java.io.File;
 import java.io.IOException;
 import java.io.StringWriter;
@@ -35,6 +38,8 @@ import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /** Utility class wrapping {@link ProcessBuilder} and pre-configuring it with common options. */
 public class TestProcessBuilder {
+private static final Logger LOG = LoggerFactory.getLogger(TestProcessBuilder.class);
+
 private final String javaCommand = checkNotNull(getJavaCommandPath());
 
 private final ArrayList jvmArgs = new ArrayList<>();
@@ -72,6 +77,7 @@ public class TestProcessBuilder {
 
 StringWriter processOutput = new StringWriter();
 StringWriter errorOutput = new StringWriter();
+LOG.info("Starting process with commands {}", commands);
 final ProcessBuilder processBuilder = new ProcessBuilder(commands);
 if (withCleanEnvironment) {
 processBuilder.environment().clear();
diff --git a/flink-tests/src/test/java/org/apache/flink/runtime/util/ExceptionUtilsITCase.java b/flink-tests/src/test/java/org/apache/flink/runtime/util/ExceptionUtilsITCase.java
index 611e4bc..ef151d1 100644
--- a/flink-tests/src/test/java/org/apache/flink/runtime/util/ExceptionUtilsITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/runtime/util/ExceptionUtilsITCase.java
@@ -55,23 +55,25 @@ public class ExceptionUtilsITCase extends TestLogger {
 @Test
 public void testIsDirectOutOfMemoryError() throws IOException, InterruptedException {
 String className = DummyDirectAllocatingProgram.class.getName();
-String out = run(className, Collections.emptyList(), DIRECT_MEMORY_SIZE, -1);
-assertThat(out, is(""));
+RunResult result = run(className, Collections.emptyList(), DIRECT_MEMORY_SIZE, -1);
+assertThat(result.getErrorOut() + "|" + result.getStandardOut(), is("|"));
 }
 
 @Test
 public void testIsMetaspaceOutOfMemoryError() throws IOException, InterruptedException {
 String className = DummyClassLoadingProgram.class.getName();
 // load only one class and record required Metaspace
-String normalOut =
+RunResult normalOut =
 run(className, getDummyClassLoadingProgramArgs(1), -1, INITIAL_BIG_METASPACE_SIZE);
-long okMetaspace = Long.parseLong(normalOut);
+long okMetaspace = Long.parseLong(normalOut.getStandardOut());
 // load more classes to cause 'OutOfMemoryError: Metaspace'
-String oomOut = run(className, getDummyClassLoadingProgramArgs(1000), -1, okMetaspace);
-assertThat(oomOut, is(""));
+RunResult oomOut = run(className, getDummyClassLoadingProgramArgs(1000), -1, okMetaspace);
+// 'OutOfMemoryError: Metaspace' errors are caught, hence no output means we produced the
+// expected exception.
+assertThat(oomOut.getErrorOut() + "|" + oomOut.getStandardOut(), is("|"));
 }
 
-private static String run(
+private static RunResult run(
 String className, Iterable args, long directMemorySize, long metaspaceSize)
 throws InterruptedException, IOException {
 TestProcessBuilder taskManagerProcessBuilder = new TestProcessBuilder(className);
@@ -91,8 +93,26 @@ public class ExceptionUtilsITCase extends TestLogger {
 taskManagerProcessBuilder.with
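The assertion pattern this commit introduces, joining stderr and stdout with a `|` so a silent process must produce exactly `"|"` and a noisy one reveals which stream spoke, can be reproduced outside Flink's test harness with a plain subprocess (the child program here is only a stand-in for the test's dummy programs):

```python
# Sketch: run a silent child process and assert on stderr and stdout
# together, analogous to RunResult.getErrorOut() + "|" + getStandardOut().
import subprocess
import sys

def run_child(code):
    """Run a tiny Python program; return (stderr, stdout) as text."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
    )
    return proc.stderr, proc.stdout

err, out = run_child("pass")   # silent, like the OOM-catching dummy programs
combined = err + "|" + out
print(repr(combined))          # '|' when both streams are empty
```

If the child had printed to either stream, the combined string would no longer equal `"|"`, and the failure message would show on which side of the separator the unexpected output appeared.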

[flink] branch master updated (16452fb -> bab81cb)

2021-06-08 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 16452fb  [FLINK-13538][formats] Figure out wrong field name when serializer/deserializer throw exceptions while doing serializing/deserializing for json and csv format.
 add bab81cb  [FLINK-22856][Azure] Upgrade to ubuntu-20.04

No new revisions were added by this update.

Summary of changes:
 azure-pipelines.yml|  6 ++---
 flink-end-to-end-tests/run-nightly-tests.sh|  2 +-
 .../test-scripts/common_docker.sh  |  8 +++---
 .../test-scripts/common_kubernetes.sh  |  7 +++---
 tools/azure-pipelines/build-apache-repo.yml| 20 +++
 tools/azure-pipelines/build-nightly-dist.yml   |  4 +--
 tools/azure-pipelines/build-python-wheels.yml  |  2 +-
 tools/azure-pipelines/jobs-template.yml| 29 --
 8 files changed, 41 insertions(+), 37 deletions(-)


[flink] branch master updated (362aadc -> 214ade6)

2021-06-03 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 362aadc  [FLINK-22038][table-planner-blink] Update TopN to be without rowNumber if rowNumber field is never used by the successor Calc
 new 3763321  Revert "[FLINK-22680][table-planner-blink] Fix IndexOutOfBoundsException when apply WatermarkAssignerChangelogNormalizeTransposeRule"
 new 214ade6  Revert "[FLINK-22038][table-planner-blink] Update TopN to be without rowNumber if rowNumber field is never used by the successor Calc"

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../RedundantRankNumberColumnRemoveRule.java   | 101 ---
 ...arkAssignerChangelogNormalizeTransposeRule.java | 301 ++---
 .../planner/plan/rules/FlinkBatchRuleSets.scala|   6 +-
 .../planner/plan/rules/FlinkStreamRuleSets.scala   |   4 +-
 ...Rule.scala => RankNumberColumnRemoveRule.scala} |   8 +-
 .../table/planner/plan/batch/sql/RankTest.xml  |  76 +-
 .../planner/plan/batch/sql/RemoveShuffleTest.xml   |   2 +-
 .../FlinkLogicalRankRuleForConstantRangeTest.xml   |   2 +-
 .../batch/RemoveRedundantLocalHashAggRuleTest.xml  |   2 +-
 .../batch/RemoveRedundantLocalRankRuleTest.xml |   4 +-
 ...AssignerChangelogNormalizeTransposeRuleTest.xml | 191 -
 .../table/planner/plan/stream/sql/RankTest.xml |  68 -
 .../table/planner/plan/batch/sql/RankTest.scala|  51 
 .../TemporalJoinRewriteWithUniqueKeyRuleTest.scala |   2 +-
 ...signerChangelogNormalizeTransposeRuleTest.scala | 175 
 .../table/planner/plan/stream/sql/RankTest.scala   |  51 
 .../runtime/stream/sql/GroupWindowITCase.scala |  51 ++--
 17 files changed, 62 insertions(+), 1033 deletions(-)
 delete mode 100644 flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/RedundantRankNumberColumnRemoveRule.java
 rename flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/{ConstantRankNumberColumnRemoveRule.scala => RankNumberColumnRemoveRule.scala} (94%)
 delete mode 100644 flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/rules/physical/stream/WatermarkAssignerChangelogNormalizeTransposeRuleTest.xml
 delete mode 100644 flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/rules/physical/stream/WatermarkAssignerChangelogNormalizeTransposeRuleTest.scala


[flink] 02/02: Revert "[FLINK-22038][table-planner-blink] Update TopN to be without rowNumber if rowNumber field is never used by the successor Calc"

2021-06-03 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 214ade6d0d73b21e8a541501ecb64b4d28bdc74d
Author: Robert Metzger 
AuthorDate: Thu Jun 3 20:07:10 2021 +0200

Revert "[FLINK-22038][table-planner-blink] Update TopN to be without rowNumber if rowNumber field is never used by the successor Calc"

This reverts commit 362aadc335af2b6887798d22f97bf72a74229e22.
---
 .../RedundantRankNumberColumnRemoveRule.java   | 101 -
 .../planner/plan/rules/FlinkBatchRuleSets.scala|   6 +-
 .../planner/plan/rules/FlinkStreamRuleSets.scala   |   4 +-
 ...Rule.scala => RankNumberColumnRemoveRule.scala} |   8 +-
 .../table/planner/plan/batch/sql/RankTest.xml  |  76 +---
 .../planner/plan/batch/sql/RemoveShuffleTest.xml   |   2 +-
 .../FlinkLogicalRankRuleForConstantRangeTest.xml   |   2 +-
 .../batch/RemoveRedundantLocalHashAggRuleTest.xml  |   2 +-
 .../batch/RemoveRedundantLocalRankRuleTest.xml |   4 +-
 .../table/planner/plan/stream/sql/RankTest.xml |  68 --
 .../table/planner/plan/batch/sql/RankTest.scala|  51 ---
 .../TemporalJoinRewriteWithUniqueKeyRuleTest.scala |   2 +-
 .../table/planner/plan/stream/sql/RankTest.scala   |  51 ---
 13 files changed, 14 insertions(+), 363 deletions(-)

diff --git a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/RedundantRankNumberColumnRemoveRule.java b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/RedundantRankNumberColumnRemoveRule.java
deleted file mode 100644
index a0fb5b2..000
--- a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/RedundantRankNumberColumnRemoveRule.java
+++ /dev/null
@@ -1,101 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.table.planner.plan.rules.logical;
-
-import org.apache.flink.table.planner.plan.nodes.logical.FlinkLogicalCalc;
-import org.apache.flink.table.planner.plan.nodes.logical.FlinkLogicalRank;
-
-import org.apache.calcite.plan.RelOptRule;
-import org.apache.calcite.plan.RelOptRuleCall;
-import org.apache.calcite.plan.RelOptUtil;
-import org.apache.calcite.rex.RexNode;
-import org.apache.calcite.rex.RexProgram;
-import org.apache.calcite.util.ImmutableBitSet;
-import org.apache.calcite.util.Pair;
-
-import java.util.List;
-import java.util.stream.Collectors;
-
-/**
- * Planner rule that removes the output column of rank number iff the rank number column is not used
- * by successor calc.
- */
-public class RedundantRankNumberColumnRemoveRule extends RelOptRule {
-public static final RedundantRankNumberColumnRemoveRule INSTANCE =
-new RedundantRankNumberColumnRemoveRule();
-
-public RedundantRankNumberColumnRemoveRule() {
-super(
-operand(FlinkLogicalCalc.class, operand(FlinkLogicalRank.class, any())),
-"RedundantRankNumberColumnRemoveRule");
-}
-
-@Override
-public boolean matches(RelOptRuleCall call) {
-FlinkLogicalCalc calc = call.rel(0);
-ImmutableBitSet usedFields = getUsedFields(calc.getProgram());
-FlinkLogicalRank rank = call.rel(1);
-return rank.outputRankNumber() && !usedFields.get(rank.getRowType().getFieldCount() - 1);
-}
-
-@Override
-public void onMatch(RelOptRuleCall call) {
-FlinkLogicalCalc calc = call.rel(0);
-FlinkLogicalRank rank = call.rel(1);
-FlinkLogicalRank newRank =
-new FlinkLogicalRank(
-rank.getCluster(),
-rank.getTraitSet(),
-rank.getInput(),
-rank.partitionKey(),
-rank.orderKey(),
-rank.rankType(),
-rank.rankRange(),
-rank.rankNumberType(),
-false);
-RexProgram oldProgram = calc.getProgram();
-Pair, RexNo

[flink] 01/02: Revert "[FLINK-22680][table-planner-blink] Fix IndexOutOfBoundsException when apply WatermarkAssignerChangelogNormalizeTransposeRule"

2021-06-03 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 37633211cff21050e7bf0dafa9006ad1700ceb7a
Author: Robert Metzger 
AuthorDate: Thu Jun 3 20:07:07 2021 +0200

Revert "[FLINK-22680][table-planner-blink] Fix IndexOutOfBoundsException when apply WatermarkAssignerChangelogNormalizeTransposeRule"

This reverts commit a364daa37202dc4d7a60c613f547cdcd6893ecd2.
---
 ...arkAssignerChangelogNormalizeTransposeRule.java | 301 ++---
 ...AssignerChangelogNormalizeTransposeRuleTest.xml | 191 -
 ...signerChangelogNormalizeTransposeRuleTest.scala | 175 
 .../runtime/stream/sql/GroupWindowITCase.scala |  51 ++--
 4 files changed, 48 insertions(+), 670 deletions(-)

diff --git a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/physical/stream/WatermarkAssignerChangelogNormalizeTransposeRule.java b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/physical/stream/WatermarkAssignerChangelogNormalizeTransposeRule.java
index fb0cb49..aed80bd 100644
--- a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/physical/stream/WatermarkAssignerChangelogNormalizeTransposeRule.java
+++ b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/physical/stream/WatermarkAssignerChangelogNormalizeTransposeRule.java
@@ -18,40 +18,18 @@
 
 package org.apache.flink.table.planner.plan.rules.physical.stream;
 
-import org.apache.flink.api.java.tuple.Tuple2;
 import org.apache.flink.table.planner.plan.nodes.physical.stream.StreamPhysicalCalc;
 import org.apache.flink.table.planner.plan.nodes.physical.stream.StreamPhysicalChangelogNormalize;
 import org.apache.flink.table.planner.plan.nodes.physical.stream.StreamPhysicalExchange;
 import org.apache.flink.table.planner.plan.nodes.physical.stream.StreamPhysicalWatermarkAssigner;
-import org.apache.flink.table.planner.plan.trait.FlinkRelDistribution;
-import org.apache.flink.table.planner.plan.trait.FlinkRelDistributionTraitDef;
-import org.apache.flink.table.planner.typeutils.RowTypeUtils;
 
 import org.apache.calcite.plan.RelOptRule;
 import org.apache.calcite.plan.RelOptRuleCall;
 import org.apache.calcite.plan.RelOptUtil;
 import org.apache.calcite.plan.RelRule;
-import org.apache.calcite.plan.RelTraitSet;
-import org.apache.calcite.rel.RelDistribution;
 import org.apache.calcite.rel.RelNode;
-import org.apache.calcite.rel.type.RelDataType;
-import org.apache.calcite.rel.type.RelDataTypeField;
-import org.apache.calcite.rex.RexBuilder;
-import org.apache.calcite.rex.RexInputRef;
-import org.apache.calcite.rex.RexLocalRef;
-import org.apache.calcite.rex.RexNode;
-import org.apache.calcite.rex.RexProgram;
-import org.apache.calcite.rex.RexProgramBuilder;
-import org.apache.calcite.rex.RexShuttle;
-import org.apache.calcite.rex.RexUtil;
-import org.apache.calcite.util.mapping.Mappings;
 
-import java.util.ArrayList;
 import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.function.Function;
 
 import static org.apache.flink.util.Preconditions.checkArgument;
 
@@ -84,288 +62,43 @@ public class WatermarkAssignerChangelogNormalizeTransposeRule
 public void onMatch(RelOptRuleCall call) {
 final StreamPhysicalWatermarkAssigner watermark = call.rel(0);
 final RelNode node = call.rel(1);
-RelNode newTree;
 if (node instanceof StreamPhysicalCalc) {
 // with calc
 final StreamPhysicalCalc calc = call.rel(1);
 final StreamPhysicalChangelogNormalize changelogNormalize = call.rel(2);
 final StreamPhysicalExchange exchange = call.rel(3);
-final Mappings.TargetMapping calcMapping = buildMapping(calc.getProgram());
-final RelDistribution exchangeDistribution = exchange.getDistribution();
-final RelDistribution newExchangeDistribution = exchangeDistribution.apply(calcMapping);
-final boolean shuffleKeysAreKeptByCalc =
-newExchangeDistribution.getType() == exchangeDistribution.getType()
-&& newExchangeDistribution.getKeys().size() == exchangeDistribution.getKeys().size();
-if (shuffleKeysAreKeptByCalc) {
-// Pushes down WatermarkAssigner/Calc as a whole if shuffle keys of
-// Exchange are all kept by Calc
-newTree =
-pushDownOriginalWatermarkAndCalc(
-watermark,
-calc,
-changelogNormalize,
-exchange,
-newEx

[flink] branch release-1.13 updated (5332c98 -> feac87e)

2021-06-01 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 5332c98  [FLINK-22796][doc] Update mem_setup_tm documentation
 add feac87e  [FLINK-22464][runtime][tests] Disable a test failing with AdaptiveScheduler tracked in FLINK-22464

No new revisions were added by this update.

Summary of changes:
 .../operators/coordination/OperatorEventSendingCheckpointITCase.java   | 3 +++
 1 file changed, 3 insertions(+)


[flink] branch master updated: [FLINK-22464][runtime][tests] Disable a test failing with AdaptiveScheduler tracked in FLINK-22464

2021-05-25 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 80ad5b3  [FLINK-22464][runtime][tests] Disable a test failing with AdaptiveScheduler tracked in FLINK-22464
80ad5b3 is described below

commit 80ad5b3b511a68cce19a53291000c9936e10db17
Author: Robert Metzger 
AuthorDate: Tue May 25 13:10:55 2021 +0200

[FLINK-22464][runtime][tests] Disable a test failing with AdaptiveScheduler tracked in FLINK-22464
---
 .../operators/coordination/OperatorEventSendingCheckpointITCase.java   | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/flink-tests/src/test/java/org/apache/flink/runtime/operators/coordination/OperatorEventSendingCheckpointITCase.java b/flink-tests/src/test/java/org/apache/flink/runtime/operators/coordination/OperatorEventSendingCheckpointITCase.java
index 9c91386..8d460f2 100644
--- a/flink-tests/src/test/java/org/apache/flink/runtime/operators/coordination/OperatorEventSendingCheckpointITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/runtime/operators/coordination/OperatorEventSendingCheckpointITCase.java
@@ -46,6 +46,7 @@ import org.apache.flink.runtime.taskexecutor.TaskExecutorGatewayDecoratorBase;
 import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.streaming.util.TestStreamEnvironment;
+import org.apache.flink.testutils.junit.FailsWithAdaptiveScheduler;
 import org.apache.flink.util.SerializedValue;
 import org.apache.flink.util.TestLogger;
 import org.apache.flink.util.function.TriFunction;
@@ -54,6 +55,7 @@ import akka.actor.ActorSystem;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 import javax.annotation.Nullable;
 
@@ -124,6 +126,7 @@ public class OperatorEventSendingCheckpointITCase extends TestLogger {
  * additionally a failure on the reader that triggers recovery.
  */
 @Test
+@Category(FailsWithAdaptiveScheduler.class) // FLINK-22464
 public void testOperatorEventLostWithReaderFailure() throws Exception {
 final int[] eventsToLose = new int[] {1, 3};
 


svn commit: r47857 - /release/flink/flink-1.12.3/

2021-05-21 Thread rmetzger
Author: rmetzger
Date: Fri May 21 15:21:56 2021
New Revision: 47857

Log:
[flink] Delete flink 1.12.3 release

Removed:
release/flink/flink-1.12.3/



[flink] branch release-1.13 updated (d1dd346 -> 040bd81)

2021-05-20 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git.


from d1dd346  [FLINK-22706][legal] Update NOTICE regarding docs/ contents
 add 040bd81  [FLINK-22266] Fix stop-with-savepoint operation in AdaptiveScheduler

No new revisions were added by this update.

Summary of changes:
 .../runtime/scheduler/adaptive/Executing.java  |   7 +-
 .../runtime/scheduler/adaptive/CancelingTest.java  |   4 +-
 .../runtime/scheduler/adaptive/ExecutingTest.java  | 153 -
 .../runtime/scheduler/adaptive/FailingTest.java|  13 +-
 .../runtime/scheduler/adaptive/RestartingTest.java |   9 +-
 .../scheduler/adaptive/StopWithSavepointTest.java  |  38 +++--
 6 files changed, 169 insertions(+), 55 deletions(-)


[flink] branch master updated: [FLINK-22266] Fix stop-with-savepoint operation in AdaptiveScheduler

2021-05-20 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 1106542  [FLINK-22266] Fix stop-with-savepoint operation in AdaptiveScheduler
1106542 is described below

commit 11065420e03a95fc53930e981474b98abcabc1f7
Author: Robert Metzger 
AuthorDate: Thu May 6 13:32:12 2021 +0200

[FLINK-22266] Fix stop-with-savepoint operation in AdaptiveScheduler

This closes #15884
---
 .../runtime/scheduler/adaptive/Executing.java  |   7 +-
 .../runtime/scheduler/adaptive/CancelingTest.java  |   4 +-
 .../runtime/scheduler/adaptive/ExecutingTest.java  | 153 -
 .../runtime/scheduler/adaptive/FailingTest.java|  13 +-
 .../runtime/scheduler/adaptive/RestartingTest.java |   9 +-
 .../scheduler/adaptive/StopWithSavepointTest.java  |  38 +++--
 6 files changed, 169 insertions(+), 55 deletions(-)

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java
index 6b44c51..29e3cad 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java
@@ -59,6 +59,8 @@ class Executing extends StateWithExecutionGraph implements ResourceConsumer {
 super(context, executionGraph, executionGraphHandler, operatorCoordinatorHandler, logger);
 this.context = context;
 this.userCodeClassLoader = userCodeClassLoader;
+Preconditions.checkState(
+executionGraph.getState() == JobStatus.RUNNING, "Assuming running execution graph");
 
 deploy();
 
@@ -129,7 +131,10 @@ class Executing extends StateWithExecutionGraph implements ResourceConsumer {
 for (ExecutionJobVertex executionJobVertex :
 getExecutionGraph().getVerticesTopologically()) {
 for (ExecutionVertex executionVertex : executionJobVertex.getTaskVertices()) {
-deploySafely(executionVertex);
+if (executionVertex.getExecutionState() == ExecutionState.CREATED
+|| executionVertex.getExecutionState() == ExecutionState.SCHEDULED) {
+deploySafely(executionVertex);
+}
 }
 }
 }
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/CancelingTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/CancelingTest.java
index 0d74a10..1d7258c 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/CancelingTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/CancelingTest.java
@@ -96,7 +96,9 @@ public class CancelingTest extends TestLogger {
 StateTrackingMockExecutionGraph meg = new StateTrackingMockExecutionGraph();
 Canceling canceling = createCancelingState(ctx, meg);
 // register execution at EG
-ExecutingTest.MockExecutionJobVertex ejv = new ExecutingTest.MockExecutionJobVertex();
+ExecutingTest.MockExecutionJobVertex ejv =
+new ExecutingTest.MockExecutionJobVertex(
+ExecutingTest.MockExecutionVertex::new);
 TaskExecutionStateTransition update =
 new TaskExecutionStateTransition(
 new TaskExecutionState(
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/ExecutingTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/ExecutingTest.java
index 2852bda..ca0b041 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/ExecutingTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/ExecutingTest.java
@@ -81,6 +81,7 @@ import java.util.function.Supplier;
 import static org.apache.flink.runtime.scheduler.adaptive.WaitingForResourcesTest.assertNonNull;
 import static org.hamcrest.CoreMatchers.instanceOf;
 import static org.hamcrest.CoreMatchers.is;
+import static org.hamcrest.CoreMatchers.not;
 import static org.hamcrest.CoreMatchers.notNullValue;
 import static org.junit.Assert.assertThat;
 
@@ -90,18 +91,61 @@ public class ExecutingTest extends TestLogger {
 @Test
 public void testExecutionGraphDeploymentOnEnter() throws Exception {
 try (MockExecutingContext ctx = new MockExecutingContext()) {
-MockExecutionJobVertex mockExecutionJobVertex = new MockExecutionJobVertex();
+MockExecutionJobVertex mockExecutionJobVertex =
+new MockExecutionJobVertex(MockExecutionVertex::new);
+MockExecutionVertex mockExecu

svn commit: r47834 - /dev/flink/flink-1.12.4-rc1/ /release/flink/flink-1.12.4/

2021-05-20 Thread rmetzger
Author: rmetzger
Date: Thu May 20 09:25:23 2021
New Revision: 47834

Log:
Release Flink 1.12.4

Added:
release/flink/flink-1.12.4/
  - copied from r47833, dev/flink/flink-1.12.4-rc1/
Removed:
dev/flink/flink-1.12.4-rc1/



[flink] branch master updated (ce3631a -> aa9f474)

2021-05-18 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ce3631a  [FLINK-22622][parquet] Drop BatchTableSource ParquetTableSource and related classes
 add aa9f474  [hotfix] Ignore failing KinesisITCase tracked in FLINK-22613

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java   | 2 ++
 1 file changed, 2 insertions(+)


[flink] branch release-1.13 updated (cfdf001 -> 25489aa)

2021-05-13 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git.


from cfdf001  [FLINK-22618][runtime] Fix incorrect free resource metrics of task managers
 new 51680fa  [hotfix] Add missing TestLogger to Kinesis tests
 new 25489aa  [FLINK-22574] Adaptive Scheduler: Fix cancellation while in Restarting state.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../connectors/kinesis/FlinkKinesisITCase.java |  3 +-
 .../kinesis/FlinkKinesisProducerTest.java  |  3 +-
 .../connectors/kinesis/KinesisConsumerTest.java|  3 +-
 .../adaptive/StateWithExecutionGraph.java  |  2 -
 .../adaptive/AdaptiveSchedulerSimpleITCase.java| 44 ++
 5 files changed, 50 insertions(+), 5 deletions(-)


[flink] 01/02: [hotfix] Add missing TestLogger to Kinesis tests

2021-05-13 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 51680fadfd3fe6cfe63e6c89000a9b3fdaae0353
Author: Robert Metzger 
AuthorDate: Mon May 10 15:46:21 2021 +0200

[hotfix] Add missing TestLogger to Kinesis tests
---
 .../apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java  | 3 ++-
 .../flink/streaming/connectors/kinesis/FlinkKinesisProducerTest.java   | 3 ++-
 .../apache/flink/streaming/connectors/kinesis/KinesisConsumerTest.java | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java
index 0b224c3..3db1dd3 100644
--- a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java
+++ b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java
@@ -26,6 +26,7 @@ import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConsta
 import org.apache.flink.streaming.connectors.kinesis.testutils.KinesaliteContainer;
 import org.apache.flink.streaming.connectors.kinesis.testutils.KinesisPubsubClient;
 import org.apache.flink.test.util.MiniClusterWithClientResource;
+import org.apache.flink.util.TestLogger;
 
 import org.junit.Before;
 import org.junit.ClassRule;
@@ -49,7 +50,7 @@ import static org.hamcrest.Matchers.lessThan;
 import static org.junit.Assert.assertThat;
 
 /** IT cases for using Kinesis consumer/producer based on Kinesalite. */
-public class FlinkKinesisITCase {
+public class FlinkKinesisITCase extends TestLogger {
 public static final String TEST_STREAM = "test_stream";
 
 @ClassRule
diff --git a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducerTest.java b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducerTest.java
index d5643ec..387d2d3 100644
--- a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducerTest.java
+++ b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducerTest.java
@@ -33,6 +33,7 @@ import org.apache.flink.streaming.util.MockSerializationSchema;
 import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;
 import org.apache.flink.util.ExceptionUtils;
 import org.apache.flink.util.InstantiationUtil;
+import org.apache.flink.util.TestLogger;
 
 import com.amazonaws.services.kinesis.producer.KinesisProducer;
 import com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration;
@@ -63,7 +64,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 /** Suite of {@link FlinkKinesisProducer} tests. */
-public class FlinkKinesisProducerTest {
+public class FlinkKinesisProducerTest extends TestLogger {
 
 @Rule public ExpectedException exception = ExpectedException.none();
 
diff --git a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/KinesisConsumerTest.java b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/KinesisConsumerTest.java
index 2d695ab..6ebe4a4 100644
--- a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/KinesisConsumerTest.java
+++ b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/KinesisConsumerTest.java
@@ -21,6 +21,7 @@ package org.apache.flink.streaming.connectors.kinesis;
 import org.apache.flink.api.common.serialization.DeserializationSchema;
 import org.apache.flink.api.common.typeinfo.TypeInformation;
 import org.apache.flink.util.Collector;
+import org.apache.flink.util.TestLogger;
 
 import org.junit.Rule;
 import org.junit.Test;
@@ -33,7 +34,7 @@ import java.util.Properties;
 * Tests for {@link FlinkKinesisConsumer}. In contrast to tests in {@link FlinkKinesisConsumerTest}
 * it does not use power mock, which makes it possible to use e.g. the {@link ExpectedException}.
  */
-public class KinesisConsumerTest {
+public class KinesisConsumerTest extends TestLogger {
 
 @Rule public ExpectedException thrown = ExpectedException.none();
 


[flink] 02/02: [FLINK-22574] Adaptive Scheduler: Fix cancellation while in Restarting state.

2021-05-13 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 25489aa0cc536f1551a0a887bdfc690824a24bd9
Author: Robert Metzger 
AuthorDate: Fri May 7 08:35:04 2021 +0200

[FLINK-22574] Adaptive Scheduler: Fix cancellation while in Restarting state.

The Canceling state of Adaptive Scheduler was expecting the ExecutionGraph to be in state RUNNING when entering the state.
However, the Restarting state is cancelling the ExecutionGraph already, thus the ExecutionGraph can be in state CANCELING or CANCELED when entering the Canceling state.

Calling the ExecutionGraph.cancel() method in the Canceling state while being in ExecutionGraph.state = CANCELED || CANCELLED is not a problem.

The change is guarded by a new ITCase, as this issue affects the interplay between different AS states.
---
 .../adaptive/StateWithExecutionGraph.java  |  2 -
 .../adaptive/AdaptiveSchedulerSimpleITCase.java| 44 ++
 2 files changed, 44 insertions(+), 2 deletions(-)
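The reasoning in the commit message boils down to making entry into the Canceling state tolerant of an ExecutionGraph that is no longer RUNNING, because cancellation is effectively idempotent. A toy model of that invariant (hypothetical names, not Flink's actual API):

```java
// Toy model (hypothetical names, not Flink's API) of why dropping the
// "graph must be RUNNING" precondition is safe: cancel() is idempotent,
// so entering a Canceling state works whether the graph is RUNNING,
// already CANCELING, or already CANCELED.
public class CancelIdempotenceSketch {

    enum GraphStatus { RUNNING, CANCELING, CANCELED }

    static class GraphModel {
        GraphStatus status = GraphStatus.RUNNING;

        // Safe to call from any state; only a RUNNING graph transitions.
        void cancel() {
            if (status == GraphStatus.RUNNING) {
                status = GraphStatus.CANCELING;
            }
        }

        void finishCancellation() {
            status = GraphStatus.CANCELED;
        }
    }

    public static void main(String[] args) {
        GraphModel graph = new GraphModel();
        graph.cancel(); // RUNNING -> CANCELING (e.g. triggered by Restarting)
        graph.cancel(); // entering Canceling afterwards: a harmless no-op
        System.out.println(graph.status); // prints "CANCELING"
    }
}
```

Under this model, the Restarting state may already have moved the graph to CANCELING before the scheduler enters Canceling; calling cancel() again is a no-op rather than a precondition violation, which is why the checkState was removed.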

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java
index 9962c78..91d688f 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java
@@ -91,8 +91,6 @@ abstract class StateWithExecutionGraph implements State {
 this.operatorCoordinatorHandler = operatorCoordinatorHandler;
 this.kvStateHandler = new KvStateHandler(executionGraph);
 this.logger = logger;
-Preconditions.checkState(
-executionGraph.getState() == JobStatus.RUNNING, "Assuming running execution graph");
 
 FutureUtils.assertNoException(
 executionGraph
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveSchedulerSimpleITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveSchedulerSimpleITCase.java
index 5c14764..3155ab7 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveSchedulerSimpleITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveSchedulerSimpleITCase.java
@@ -20,7 +20,9 @@ package org.apache.flink.runtime.scheduler.adaptive;
 
 import org.apache.flink.api.common.ExecutionConfig;
 import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.common.JobStatus;
 import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
 import org.apache.flink.configuration.ClusterOptions;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.JobManagerOptions;
@@ -34,6 +36,7 @@ import org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable;
 import org.apache.flink.runtime.jobmaster.JobResult;
 import org.apache.flink.runtime.minicluster.MiniCluster;
 import org.apache.flink.runtime.testtasks.NoOpInvokable;
+import org.apache.flink.runtime.testutils.CommonTestUtils;
 import org.apache.flink.runtime.testutils.MiniClusterResource;
 import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
 import org.apache.flink.util.FlinkRuntimeException;
@@ -44,6 +47,7 @@ import org.junit.Test;
 
 import java.io.IOException;
 import java.time.Duration;
+import java.time.temporal.ChronoUnit;
 
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assume.assumeTrue;
@@ -110,6 +114,34 @@ public class AdaptiveSchedulerSimpleITCase extends TestLogger {
 }
 
 @Test
+public void testJobCancellationWhileRestartingSucceeds() throws Exception {
+    final long timeInRestartingState = 1L;
+
+    final MiniCluster miniCluster = MINI_CLUSTER_RESOURCE.getMiniCluster();
+    final JobVertex alwaysFailingOperator = new JobVertex("Always failing operator");
+    alwaysFailingOperator.setInvokableClass(AlwaysFailingInvokable.class);
+    alwaysFailingOperator.setParallelism(1);
+
+    final JobGraph jobGraph = JobGraphTestUtils.streamingJobGraph(alwaysFailingOperator);
+    ExecutionConfig executionConfig = new ExecutionConfig();
+    // configure a high delay between attempts: We'll stay in RESTARTING for 10 seconds.
+    executionConfig.setRestartStrategy(
+            RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, timeInRestartingState));
+    jobGraph.setExecutionConfig(executionConfig);
+
+    miniCluster.submitJob(jobGraph).join();
+
+    // wait until we are in RESTARTING state
+    CommonTestUtil

[flink] 02/02: [FLINK-22574] Adaptive Scheduler: Fix cancellation while in Restarting state.

2021-05-12 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 02d30ace69dc18555a5085eccf70ee884e73a16e
Author: Robert Metzger 
AuthorDate: Fri May 7 08:35:04 2021 +0200

[FLINK-22574] Adaptive Scheduler: Fix cancellation while in Restarting 
state.

The Canceling state of the Adaptive Scheduler was expecting the ExecutionGraph to be in state RUNNING when entering the state.
However, the Restarting state is already cancelling the ExecutionGraph, thus the ExecutionGraph can be in state CANCELLING or CANCELED when entering the Canceling state.

Calling the ExecutionGraph.cancel() method in the Canceling state while the ExecutionGraph is already in state CANCELLING or CANCELED is not a problem.

The change is guarded by a new ITCase, as this issue affects the interplay between different Adaptive Scheduler states.

This closes #15882
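The commit message above describes relaxing a precondition: the Canceling state must tolerate an ExecutionGraph that is already CANCELLING or CANCELED, and calling cancel() again must be harmless. The following is a simplified, self-contained sketch of that idea — JobStatus and the state-entry logic here are stand-ins, not Flink's actual classes:

```java
import java.util.EnumSet;
import java.util.Set;

public class CancelingStateSketch {

    enum JobStatus { CREATED, RUNNING, RESTARTING, CANCELLING, CANCELED }

    // After the fix, entry into "Canceling" accepts more than just RUNNING.
    static final Set<JobStatus> ACCEPTED_ON_ENTRY =
            EnumSet.of(JobStatus.RUNNING, JobStatus.CANCELLING, JobStatus.CANCELED);

    static JobStatus status = JobStatus.RUNNING;

    // Idempotent cancel: invoking it on an already-cancelled graph is a no-op.
    static void cancel() {
        if (status == JobStatus.CANCELED) {
            return; // already terminal, nothing to do
        }
        status = JobStatus.CANCELED;
    }

    static void enterCancelingState() {
        if (!ACCEPTED_ON_ENTRY.contains(status)) {
            throw new IllegalStateException("Unexpected state on entry: " + status);
        }
        cancel();
    }

    public static void main(String[] args) {
        // Restarting already started cancelling the graph before we enter Canceling:
        status = JobStatus.CANCELLING;
        enterCancelingState(); // must not throw, unlike the old RUNNING-only check
        System.out.println(status); // CANCELED
    }
}
```

This only models the guard condition; the real StateWithExecutionGraph coordinates many more concerns.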
---
 .../adaptive/StateWithExecutionGraph.java  |  2 -
 .../adaptive/AdaptiveSchedulerSimpleITCase.java| 44 ++
 2 files changed, 44 insertions(+), 2 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java
index 9962c78..91d688f 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java
@@ -91,8 +91,6 @@ abstract class StateWithExecutionGraph implements State {
 this.operatorCoordinatorHandler = operatorCoordinatorHandler;
 this.kvStateHandler = new KvStateHandler(executionGraph);
 this.logger = logger;
-Preconditions.checkState(
-executionGraph.getState() == JobStatus.RUNNING, "Assuming 
running execution graph");
 
 FutureUtils.assertNoException(
 executionGraph
diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveSchedulerSimpleITCase.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveSchedulerSimpleITCase.java
index 9280dbc..992d2c4 100644
--- 
a/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveSchedulerSimpleITCase.java
+++ 
b/flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveSchedulerSimpleITCase.java
@@ -20,7 +20,9 @@ package org.apache.flink.runtime.scheduler.adaptive;
 
 import org.apache.flink.api.common.ExecutionConfig;
 import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.common.JobStatus;
 import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.JobManagerOptions;
 import org.apache.flink.runtime.execution.Environment;
@@ -33,6 +35,7 @@ import 
org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable;
 import org.apache.flink.runtime.jobmaster.JobResult;
 import org.apache.flink.runtime.minicluster.MiniCluster;
 import org.apache.flink.runtime.testtasks.NoOpInvokable;
+import org.apache.flink.runtime.testutils.CommonTestUtils;
 import org.apache.flink.runtime.testutils.MiniClusterResource;
 import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
 import org.apache.flink.util.FlinkRuntimeException;
@@ -43,6 +46,7 @@ import org.junit.Test;
 
 import java.io.IOException;
 import java.time.Duration;
+import java.time.temporal.ChronoUnit;
 
 import static org.junit.Assert.assertTrue;
 
@@ -105,6 +109,34 @@ public class AdaptiveSchedulerSimpleITCase extends 
TestLogger {
 }
 
 @Test
+public void testJobCancellationWhileRestartingSucceeds() throws Exception {
+    final long timeInRestartingState = 1L;
+
+    final MiniCluster miniCluster = MINI_CLUSTER_RESOURCE.getMiniCluster();
+    final JobVertex alwaysFailingOperator = new JobVertex("Always failing operator");
+    alwaysFailingOperator.setInvokableClass(AlwaysFailingInvokable.class);
+    alwaysFailingOperator.setParallelism(1);
+
+    final JobGraph jobGraph = JobGraphTestUtils.streamingJobGraph(alwaysFailingOperator);
+    ExecutionConfig executionConfig = new ExecutionConfig();
+    // configure a high delay between attempts: We'll stay in RESTARTING for 10 seconds.
+    executionConfig.setRestartStrategy(
+            RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, timeInRestartingState));
+    jobGraph.setExecutionConfig(executionConfig);
+
+    miniCluster.submitJob(jobGraph).join();
+
+    // wait until we are in RESTARTING state
+    CommonTestUtils.waitUntilCondition(
+            () -> m
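The test diff above is cut off mid-call to CommonTestUtils.waitUntilCondition. As a rough illustration of what such a polling helper does, here is a simplified stand-in (not Flink's actual CommonTestUtils implementation) that polls a condition until it holds or a deadline passes:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class WaitUtilSketch {

    // Polls the condition until it returns true, or throws once the deadline passes.
    public static void waitUntilCondition(
            Supplier<Boolean> condition, Duration timeout, long pollIntervalMillis)
            throws InterruptedException, TimeoutException {
        Instant deadline = Instant.now().plus(timeout);
        while (!condition.get()) {
            if (Instant.now().isAfter(deadline)) {
                throw new TimeoutException("Condition not fulfilled within " + timeout);
            }
            Thread.sleep(pollIntervalMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition becomes true after roughly 50 ms; well within the 5 s deadline.
        waitUntilCondition(
                () -> System.currentTimeMillis() - start >= 50,
                Duration.ofSeconds(5),
                10L);
        System.out.println("condition met");
    }
}
```

In the real test, the condition would query the MiniCluster for the job's JobStatus and return true once it reports RESTARTING.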

[flink] 01/02: [hotfix] Add missing TestLogger to Kinesis tests

2021-05-12 Thread rmetzger

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 728cddc3e01bb1368c7e0c0c3781e2c0c4a6a50e
Author: Robert Metzger 
AuthorDate: Mon May 10 15:46:21 2021 +0200

[hotfix] Add missing TestLogger to Kinesis tests
---
 .../apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java  | 3 ++-
 .../flink/streaming/connectors/kinesis/FlinkKinesisProducerTest.java   | 3 ++-
 .../apache/flink/streaming/connectors/kinesis/KinesisConsumerTest.java | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git 
a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java
 
b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java
index 0b224c3..3db1dd3 100644
--- 
a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java
+++ 
b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java
@@ -26,6 +26,7 @@ import 
org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConsta
 import 
org.apache.flink.streaming.connectors.kinesis.testutils.KinesaliteContainer;
 import 
org.apache.flink.streaming.connectors.kinesis.testutils.KinesisPubsubClient;
 import org.apache.flink.test.util.MiniClusterWithClientResource;
+import org.apache.flink.util.TestLogger;
 
 import org.junit.Before;
 import org.junit.ClassRule;
@@ -49,7 +50,7 @@ import static org.hamcrest.Matchers.lessThan;
 import static org.junit.Assert.assertThat;
 
 /** IT cases for using Kinesis consumer/producer based on Kinesalite. */
-public class FlinkKinesisITCase {
+public class FlinkKinesisITCase extends TestLogger {
 public static final String TEST_STREAM = "test_stream";
 
 @ClassRule
diff --git 
a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducerTest.java
 
b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducerTest.java
index d5643ec..387d2d3 100644
--- 
a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducerTest.java
+++ 
b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducerTest.java
@@ -33,6 +33,7 @@ import 
org.apache.flink.streaming.util.MockSerializationSchema;
 import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;
 import org.apache.flink.util.ExceptionUtils;
 import org.apache.flink.util.InstantiationUtil;
+import org.apache.flink.util.TestLogger;
 
 import com.amazonaws.services.kinesis.producer.KinesisProducer;
 import com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration;
@@ -63,7 +64,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 /** Suite of {@link FlinkKinesisProducer} tests. */
-public class FlinkKinesisProducerTest {
+public class FlinkKinesisProducerTest extends TestLogger {
 
 @Rule public ExpectedException exception = ExpectedException.none();
 
diff --git 
a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/KinesisConsumerTest.java
 
b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/KinesisConsumerTest.java
index 2d695ab..6ebe4a4 100644
--- 
a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/KinesisConsumerTest.java
+++ 
b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/KinesisConsumerTest.java
@@ -21,6 +21,7 @@ package org.apache.flink.streaming.connectors.kinesis;
 import org.apache.flink.api.common.serialization.DeserializationSchema;
 import org.apache.flink.api.common.typeinfo.TypeInformation;
 import org.apache.flink.util.Collector;
+import org.apache.flink.util.TestLogger;
 
 import org.junit.Rule;
 import org.junit.Test;
@@ -33,7 +34,7 @@ import java.util.Properties;
  * Tests for {@link FlinkKinesisConsumer}. In contrast to tests in {@link 
FlinkKinesisConsumerTest}
  * it does not use power mock, which makes it possible to use e.g. the {@link 
ExpectedException}.
  */
-public class KinesisConsumerTest {
+public class KinesisConsumerTest extends TestLogger {
 
 @Rule public ExpectedException thrown = ExpectedException.none();
 


[flink] branch master updated (ad86adf -> 02d30ac)

2021-05-12 Thread rmetzger

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ad86adf  [FLINK-22628][docs] Update state_processor_api.md
 new 728cddc  [hotfix] Add missing TestLogger to Kinesis tests
 new 02d30ac  [FLINK-22574] Adaptive Scheduler: Fix cancellation while in 
Restarting state.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../connectors/kinesis/FlinkKinesisITCase.java |  3 +-
 .../kinesis/FlinkKinesisProducerTest.java  |  3 +-
 .../connectors/kinesis/KinesisConsumerTest.java|  3 +-
 .../adaptive/StateWithExecutionGraph.java  |  2 -
 .../adaptive/AdaptiveSchedulerSimpleITCase.java| 44 ++
 5 files changed, 50 insertions(+), 5 deletions(-)


[flink-web] branch asf-site updated: [hotfix] Fix typo in reactive mode blog post

2021-05-11 Thread rmetzger

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 8265ac3  [hotfix] Fix typo in reactive mode blog post
8265ac3 is described below

commit 8265ac3c9d6763a61ed6216dc2211407853df5da
Author: Robert Metzger 
AuthorDate: Tue May 11 16:23:46 2021 +0200

[hotfix] Fix typo in reactive mode blog post
---
 _posts/2021-05-06-reactive-mode.md| 2 +-
 content/2021/05/06/reactive-mode.html | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/_posts/2021-05-06-reactive-mode.md 
b/_posts/2021-05-06-reactive-mode.md
index 64dfc88..6597953 100644
--- a/_posts/2021-05-06-reactive-mode.md
+++ b/_posts/2021-05-06-reactive-mode.md
@@ -39,7 +39,7 @@ If you want to try out Reactive Mode yourself locally, follow 
these steps using
 
 ```bash
# These instructions assume you are in the root directory of a Flink distribution.
-# Put Job into lib/ directory
+# Put Job into usrlib/ directory
 mkdir usrlib
 cp ./examples/streaming/TopSpeedWindowing.jar usrlib/
 # Submit Job in Reactive Mode
diff --git a/content/2021/05/06/reactive-mode.html 
b/content/2021/05/06/reactive-mode.html
index b5c46a2..024a15e 100644
--- a/content/2021/05/06/reactive-mode.html
+++ b/content/2021/05/06/reactive-mode.html
@@ -242,7 +242,7 @@ Similarly, Kubernetes provides https://kubernetes.io/docs/tasks/run-app
 If you want to try out Reactive Mode yourself locally, follow these steps 
using a Flink 1.13.0 distribution:
 
# These instructions assume you are in the root directory of a Flink distribution.
-# Put Job into lib/ directory
+# Put Job into usrlib/ directory
 mkdir usrlib
 cp ./examples/streaming/TopSpeedWindowing.jar usrlib/
 # Submit Job in Reactive Mode


[flink] branch master updated: [hotfix] Disable broken savepoint tests tracked in FLINK-22067

2021-05-10 Thread rmetzger

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 464e0d0  [hotfix] Disable broken savepoint tests tracked in FLINK-22067
464e0d0 is described below

commit 464e0d0997762e69be606f8afd7c796d97d24072
Author: Robert Metzger 
AuthorDate: Mon May 10 10:54:07 2021 +0200

[hotfix] Disable broken savepoint tests tracked in FLINK-22067
---
 .../test/java/org/apache/flink/state/api/utils/SavepointTestBase.java  | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/utils/SavepointTestBase.java
 
b/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/utils/SavepointTestBase.java
index 30291a2..64808ec 100644
--- 
a/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/utils/SavepointTestBase.java
+++ 
b/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/utils/SavepointTestBase.java
@@ -30,6 +30,8 @@ import 
org.apache.flink.streaming.api.functions.source.SourceFunction;
 import org.apache.flink.test.util.AbstractTestBase;
 import org.apache.flink.util.AbstractID;
 
+import org.junit.Ignore;
+
 import java.io.IOException;
 import java.util.Arrays;
 import java.util.Collection;
@@ -38,6 +40,7 @@ import java.util.concurrent.TimeUnit;
 import java.util.function.Function;
 
 /** A test base that includes utilities for taking a savepoint. */
+@Ignore
 public abstract class SavepointTestBase extends AbstractTestBase {
 
 public  String takeSavepoint(


[flink-web] 02/03: [hotfix] Extend readme

2021-05-07 Thread rmetzger

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit b1fb1889a570502befab3b40aaa10b7f777e3f35
Author: Robert Metzger 
AuthorDate: Fri May 7 08:21:24 2021 +0200

[hotfix] Extend readme
---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index f9b7fa9..a991180 100644
--- a/README.md
+++ b/README.md
@@ -18,6 +18,8 @@ bash docker-build.sh -f
 
 Both commands will start a webserver providing the website via 
`http://0.0.0.0:4000`.
 
+If a newly added blog post is not showing up on the index / blog overview 
page, delete the "content" directory before building the page locally. The 
"content" directory will be regenerated completely, including the newly added 
blog post.
+
 ## Building website
 
 The site needs to be rebuild before merging into the branch asf-site.


[flink-web] 03/03: Rebuild website

2021-05-07 Thread rmetzger

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 90ca52fd38beddaba6b92a3a0a90751e323da3ee
Author: Robert Metzger 
AuthorDate: Fri May 7 12:00:14 2021 +0200

Rebuild website

This closes #427
---
 content/2021/05/06/reactive-mode.html  | 408 +
 content/blog/feed.xml  | 296 ---
 content/blog/index.html|  38 +-
 content/blog/page10/index.html |  45 ++-
 content/blog/page11/index.html |  43 ++-
 content/blog/page12/index.html |  40 +-
 content/blog/page13/index.html |  42 ++-
 content/blog/page14/index.html |  40 +-
 content/blog/page15/index.html |  45 ++-
 content/blog/{page9 => page16}/index.html  | 164 +
 content/blog/page2/index.html  |  40 +-
 content/blog/page3/index.html  |  40 +-
 content/blog/page4/index.html  |  38 +-
 content/blog/page5/index.html  |  38 +-
 content/blog/page6/index.html  |  40 +-
 content/blog/page7/index.html  |  40 +-
 content/blog/page8/index.html  |  40 +-
 content/blog/page9/index.html  |  42 ++-
 content/img/blog/2021-04-reactive-mode/arch.png| Bin 0 -> 54735 bytes
 .../blog/2021-04-reactive-mode/high-timeout.png| Bin 0 -> 715791 bytes
 content/img/blog/2021-04-reactive-mode/intro.svg   |   1 +
 content/img/blog/2021-04-reactive-mode/result.png  | Bin 0 -> 898175 bytes
 content/index.html |   8 +-
 content/zh/index.html  |   8 +-
 24 files changed, 974 insertions(+), 522 deletions(-)

diff --git a/content/2021/05/06/reactive-mode.html 
b/content/2021/05/06/reactive-mode.html
new file mode 100644
index 000..b5c46a2
--- /dev/null
+++ b/content/2021/05/06/reactive-mode.html
@@ -0,0 +1,408 @@
+[~400 lines of generated HTML omitted: the mail archive stripped the markup from the new reactive-mode.html page, leaving only page-head and navigation boilerplate — the page title "Apache Flink: Scaling Flink automatically with Reactive Mode", the site menu and documentation links, and style rules]

[flink-web] 01/03: Add reactive mode blog post

2021-05-07 Thread rmetzger

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 266b417936f0ab463b58088639ba95cea91b7dad
Author: Robert Metzger 
AuthorDate: Sat Mar 27 13:46:41 2021 +0100

Add reactive mode blog post
---
 _posts/2021-05-06-reactive-mode.md  | 149 
 img/blog/2021-04-reactive-mode/arch.png | Bin 0 -> 54735 bytes
 img/blog/2021-04-reactive-mode/high-timeout.png | Bin 0 -> 715791 bytes
 img/blog/2021-04-reactive-mode/intro.svg|   1 +
 img/blog/2021-04-reactive-mode/result.png   | Bin 0 -> 898175 bytes
 5 files changed, 150 insertions(+)

diff --git a/_posts/2021-05-06-reactive-mode.md 
b/_posts/2021-05-06-reactive-mode.md
new file mode 100644
index 000..64dfc88
--- /dev/null
+++ b/_posts/2021-05-06-reactive-mode.md
@@ -0,0 +1,149 @@
+---
+layout: post
+title:  "Scaling Flink automatically with Reactive Mode"
+date: 2021-05-06T00:00:00.000Z
+authors:
+- rob:
+  name: "Robert Metzger"
+  twitter: "rmetzger_"
+excerpt: Apache Flink 1.13 introduced Reactive Mode, a big step forward in 
Flink's ability to dynamically adjust to changing workloads, reducing resource 
utilization and overall costs. This blog post showcases how to use this new 
feature on Kubernetes, including some lessons learned.
+
+---
+
+{% toc %}
+
+## Introduction
+
+Streaming jobs which run for several days or longer usually experience 
variations in workload during their lifetime. These variations can originate 
from seasonal spikes, such as day vs. night, weekdays vs. weekend or holidays 
vs. non-holidays, sudden events or simply the growing popularity of your 
product. Although some of these variations are more predictable than others, in 
all cases there is a change in job resource demand that needs to be addressed 
if you want to ensure the same qual [...]
+
+A simple way of quantifying the mismatch between the required resources and 
the available resources is to measure the space between the actual load and the 
number of available workers. As pictured below, in the case of static resource 
allocation, you can see that there's a big gap between the actual load and the 
available workers — hence, we are wasting resources. For elastic resource 
allocation, the gap between the red and black line is consistently small.
+
+
+  
+
+
+**Manually rescaling** a Flink job has been possible since Flink 1.2 
introduced [rescalable 
state](https://flink.apache.org/features/2017/07/04/flink-rescalable-state.html),
 which allows you to stop-and-restore a job with a different parallelism. For 
example, if your job is running with a parallelism of p=100 and your load 
increases, you can restart it with p=200 to cope with the additional data. 
+
+The problem with this approach is that you have to orchestrate a rescale 
operation with custom tools by yourself, including error handling and similar 
tasks.
+
+[Reactive Mode](https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/elastic_scaling/) introduces a new option in Flink 1.13: you monitor your Flink cluster and add or remove resources depending on some metrics, and Flink will do the rest. Reactive Mode is a mode in which the JobManager will try to use all available TaskManager resources.
+
+The big benefit of Reactive Mode is that you don't need any specific knowledge 
to scale Flink anymore. Flink basically behaves like a fleet of servers (e.g. 
webservers, caches, batch processing) that you can expand or shrink as you 
wish. Since this is such a common pattern, there is a lot of infrastructure 
available for handling such cases: all major cloud providers offer utilities to 
monitor specific metrics and automatically scale a set of machines accordingly. 
For example, this would  [...]
+Similarly, Kubernetes provides [Horizontal Pod 
Autoscalers](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+
+What is interesting, as a side note, is that unlike most auto scalable "fleets 
of servers", Flink is a stateful system, often processing valuable data 
requiring strong correctness guarantees (comparable to a database). But, unlike 
many traditional databases, Flink is resilient enough (through checkpointing 
and state backups) to adjust to changing workloads by just adding or removing 
resources, with very little requirements (i.e. simple blob store for state 
backups).
+
+## Getting Started
+
+If you want to try out Reactive Mode yourself locally, follow these steps 
using a Flink 1.13.0 distribution:
+
+```bash
+# These instructions assume you are in the root directory of a Flink distribution.
+# Put Job into lib/ directory
+mkdir usrlib
+cp ./examples/streaming/TopSpeedWindowing.jar usrlib/
+# Submit Job in Reactive Mode
+./bin/standalone-job.sh start -Dscheduler-mode=reac
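The blog post above quantifies the resource mismatch as the gap between the actual load and the number of available workers. That idea can be made concrete with a small illustrative sketch (the numbers and method below are hypothetical, not from the post):

```java
public class ResourceGapSketch {

    /**
     * Given sampled load and provisioned-worker curves on the same time grid,
     * returns the average over-provisioning (workers minus load, floored at 0).
     */
    public static double averageOverProvisioning(double[] load, double[] workers) {
        double sum = 0;
        for (int i = 0; i < load.length; i++) {
            sum += Math.max(0, workers[i] - load[i]);
        }
        return sum / load.length;
    }

    public static void main(String[] args) {
        double[] load = {2, 4, 8, 4, 2};
        // Static allocation: always provision for the peak load.
        double[] staticWorkers = {8, 8, 8, 8, 8};
        // Elastic allocation: track the load with one spare worker.
        double[] elasticWorkers = {3, 5, 9, 5, 3};

        System.out.println(averageOverProvisioning(load, staticWorkers));  // 4.0
        System.out.println(averageOverProvisioning(load, elasticWorkers)); // 1.0
    }
}
```

The smaller number for the elastic curve is exactly the "consistently small gap" the post's figure illustrates.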

[flink-web] branch asf-site updated (2d9a40d -> 90ca52f)

2021-05-07 Thread rmetzger

rmetzger pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 2d9a40d  rebuild site
 new 266b417  Add reactive mode blog post
 new b1fb188  [hotfix] Extend readme
 new 90ca52f  Rebuild website



Summary of changes:
 README.md  |   2 +
 _posts/2021-05-06-reactive-mode.md | 149 
 content/2021/05/06/reactive-mode.html  | 408 +
 content/blog/feed.xml  | 296 ---
 content/blog/index.html|  38 +-
 content/blog/page10/index.html |  45 ++-
 content/blog/page11/index.html |  43 ++-
 content/blog/page12/index.html |  40 +-
 content/blog/page13/index.html |  42 ++-
 content/blog/page14/index.html |  40 +-
 content/blog/page15/index.html |  45 ++-
 content/blog/{page9 => page16}/index.html  | 164 +
 content/blog/page2/index.html  |  40 +-
 content/blog/page3/index.html  |  40 +-
 content/blog/page4/index.html  |  38 +-
 content/blog/page5/index.html  |  38 +-
 content/blog/page6/index.html  |  40 +-
 content/blog/page7/index.html  |  40 +-
 content/blog/page8/index.html  |  40 +-
 content/blog/page9/index.html  |  42 ++-
 content/img/blog/2021-04-reactive-mode/arch.png| Bin 0 -> 54735 bytes
 .../blog/2021-04-reactive-mode/high-timeout.png| Bin 0 -> 715791 bytes
 content/img/blog/2021-04-reactive-mode/intro.svg   |   1 +
 content/img/blog/2021-04-reactive-mode/result.png  | Bin 0 -> 898175 bytes
 content/index.html |   8 +-
 content/zh/index.html  |   8 +-
 img/blog/2021-04-reactive-mode/arch.png| Bin 0 -> 54735 bytes
 img/blog/2021-04-reactive-mode/high-timeout.png| Bin 0 -> 715791 bytes
 img/blog/2021-04-reactive-mode/intro.svg   |   1 +
 img/blog/2021-04-reactive-mode/result.png  | Bin 0 -> 898175 bytes
 30 files changed, 1126 insertions(+), 522 deletions(-)
 create mode 100644 _posts/2021-05-06-reactive-mode.md
 create mode 100644 content/2021/05/06/reactive-mode.html
 copy content/blog/{page9 => page16}/index.html (88%)
 create mode 100644 content/img/blog/2021-04-reactive-mode/arch.png
 create mode 100644 content/img/blog/2021-04-reactive-mode/high-timeout.png
 create mode 100644 content/img/blog/2021-04-reactive-mode/intro.svg
 create mode 100644 content/img/blog/2021-04-reactive-mode/result.png
 create mode 100644 img/blog/2021-04-reactive-mode/arch.png
 create mode 100644 img/blog/2021-04-reactive-mode/high-timeout.png
 create mode 100644 img/blog/2021-04-reactive-mode/intro.svg
 create mode 100644 img/blog/2021-04-reactive-mode/result.png


[flink] branch master updated (8f3d483 -> aca4328)

2021-05-07 Thread rmetzger

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 8f3d483  [FLINK-22555][build][python] Exclude leftover jboss files
 add aca4328  [hotfix] Ignore failing test reported in FLINK-22559

No new revisions were added by this update.

Summary of changes:
 .../flink/streaming/connectors/kafka/table/UpsertKafkaTableITCase.java  | 2 ++
 1 file changed, 2 insertions(+)


[flink] branch release-1.13 updated (c8b3160 -> da9cb97)

2021-05-07 Thread rmetzger

rmetzger pushed a change to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git.


from c8b3160  [hotfix][docs][python] Add an overview page for Python UDFs
 new 8094c0f  [hotfix][docs] Re-introduce note about FLINK_CONF_DIR
 new da9cb97  [hotfix] Make reactive warning less strong, clarifications



Summary of changes:
 docs/content.zh/docs/deployment/config.md  | 4 
 docs/content.zh/docs/deployment/elastic_scaling.md | 6 +++---
 docs/content/docs/deployment/config.md | 4 
 docs/content/docs/deployment/elastic_scaling.md| 6 +++---
 4 files changed, 14 insertions(+), 6 deletions(-)


[flink] 02/02: [hotfix] Make reactive warning less strong, clarifications

2021-05-07 Thread rmetzger

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git

commit da9cb972bea65b47e2bfb20cebb1dd859ae338f9
Author: Robert Metzger 
AuthorDate: Thu May 6 10:44:15 2021 +0200

[hotfix] Make reactive warning less strong, clarifications
---
 docs/content.zh/docs/deployment/elastic_scaling.md | 6 +++---
 docs/content/docs/deployment/elastic_scaling.md| 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/content.zh/docs/deployment/elastic_scaling.md 
b/docs/content.zh/docs/deployment/elastic_scaling.md
index 8a06749..e4046ce 100644
--- a/docs/content.zh/docs/deployment/elastic_scaling.md
+++ b/docs/content.zh/docs/deployment/elastic_scaling.md
@@ -31,7 +31,7 @@ This page describes options where Flink automatically adjusts 
the parallelism in
 
 ## Reactive Mode
 
-{{< hint danger >}}
+{{< hint info >}}
 Reactive mode is an MVP ("minimum viable product") feature. The Flink 
community is actively looking for feedback by users through our mailing lists. 
Please check the limitations listed on this page.
 {{< /hint >}}
 
@@ -122,8 +122,8 @@ The [limitations of Adaptive Scheduler](#limitations-1) 
also apply to Reactive M
 
 ## Adaptive Scheduler
 
-{{< hint danger >}}
-Using Adaptive Scheduler directly (not through Reactive Mode) is only advised 
for advanced users.
+{{< hint warning >}}
+Using Adaptive Scheduler directly (not through Reactive Mode) is only advised 
for advanced users because slot allocation on a session cluster with multiple 
jobs is not defined.
 {{< /hint >}}
 
 The Adaptive Scheduler can adjust the parallelism of a job based on available 
slots. It will automatically reduce the parallelism if not enough slots are 
available to run the job with the originally configured parallelism; be it due 
to not enough resources being available at the time of submission, or 
TaskManager outages during the job execution. If new slots become available the 
job will be scaled up again, up to the configured parallelism.
diff --git a/docs/content/docs/deployment/elastic_scaling.md b/docs/content/docs/deployment/elastic_scaling.md
index 8a06749..e4046ce 100644
--- a/docs/content/docs/deployment/elastic_scaling.md
+++ b/docs/content/docs/deployment/elastic_scaling.md
@@ -31,7 +31,7 @@ This page describes options where Flink automatically adjusts the parallelism in
 
 ## Reactive Mode
 
-{{< hint danger >}}
+{{< hint info >}}
 Reactive mode is an MVP ("minimum viable product") feature. The Flink community is actively looking for feedback by users through our mailing lists. Please check the limitations listed on this page.
 {{< /hint >}}
 
@@ -122,8 +122,8 @@ The [limitations of Adaptive Scheduler](#limitations-1) also apply to Reactive M
 
 ## Adaptive Scheduler
 
-{{< hint danger >}}
-Using Adaptive Scheduler directly (not through Reactive Mode) is only advised for advanced users.
+{{< hint warning >}}
+Using Adaptive Scheduler directly (not through Reactive Mode) is only advised for advanced users because slot allocation on a session cluster with multiple jobs is not defined.
 {{< /hint >}}
 
 The Adaptive Scheduler can adjust the parallelism of a job based on available slots. It will automatically reduce the parallelism if not enough slots are available to run the job with the originally configured parallelism; be it due to not enough resources being available at the time of submission, or TaskManager outages during the job execution. If new slots become available the job will be scaled up again, up to the configured parallelism.
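As context for the hunks above: a minimal, hedged sketch of how the two schedulers are switched on. The key names are taken from the Flink 1.13 documentation this commit edits, not from the diff itself:

```yaml
# flink-conf.yaml sketch (assumption: Flink 1.13 configuration keys).
# Use the Adaptive Scheduler directly -- advised for advanced users only:
jobmanager.scheduler: adaptive

# Or enable Reactive Mode, which builds on the Adaptive Scheduler
# (the `scheduler-mode: reactive` setting referenced later in these docs):
# scheduler-mode: reactive
```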


[flink] 01/02: [hotfix][docs] Re-introduce note about FLINK_CONF_DIR

2021-05-07 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 8094c0f85bd3e3382bcc19d5097ef51c93ad3f82
Author: Robert Metzger 
AuthorDate: Thu May 6 10:51:21 2021 +0200

[hotfix][docs] Re-introduce note about FLINK_CONF_DIR

This closes #15845
---
 docs/content.zh/docs/deployment/config.md | 4 
 docs/content/docs/deployment/config.md| 4 
 2 files changed, 8 insertions(+)

diff --git a/docs/content.zh/docs/deployment/config.md b/docs/content.zh/docs/deployment/config.md
index 92b5903..14806f9 100644
--- a/docs/content.zh/docs/deployment/config.md
+++ b/docs/content.zh/docs/deployment/config.md
@@ -34,6 +34,10 @@ The configuration is parsed and evaluated when the Flink processes are started.
 
 The out of the box configuration will use your default Java installation. You can manually set the environment variable `JAVA_HOME` or the configuration key `env.java.home` in `conf/flink-conf.yaml` if you want to manually override the Java runtime to use.
 
+You can specify a different configuration directory location by defining the `FLINK_CONF_DIR` environment variable. For resource providers which provide non-session deployments, you can specify per-job configurations this way. Make a copy of the `conf` directory from the Flink distribution and modify the settings on a per-job basis. Note that this is not supported in Docker or standalone Kubernetes deployments. On Docker-based deployments, you can use the `FLINK_PROPERTIES` environment v [...]
+
+On session clusters, the provided configuration will only be used for configuring [execution](#execution) parameters, e.g. configuration parameters affecting the job, not the underlying cluster.
+
 # Basic Setup
 
 The default configuration supports starting a single-node Flink session cluster without any changes.
diff --git a/docs/content/docs/deployment/config.md b/docs/content/docs/deployment/config.md
index 3310a15..935e791 100644
--- a/docs/content/docs/deployment/config.md
+++ b/docs/content/docs/deployment/config.md
@@ -34,6 +34,10 @@ The configuration is parsed and evaluated when the Flink processes are started.
 
 The out of the box configuration will use your default Java installation. You can manually set the environment variable `JAVA_HOME` or the configuration key `env.java.home` in `conf/flink-conf.yaml` if you want to manually override the Java runtime to use.
 
+You can specify a different configuration directory location by defining the `FLINK_CONF_DIR` environment variable. For resource providers which provide non-session deployments, you can specify per-job configurations this way. Make a copy of the `conf` directory from the Flink distribution and modify the settings on a per-job basis. Note that this is not supported in Docker or standalone Kubernetes deployments. On Docker-based deployments, you can use the `FLINK_PROPERTIES` environment v [...]
+
+On session clusters, the provided configuration will only be used for configuring [execution](#execution) parameters, e.g. configuration parameters affecting the job, not the underlying cluster.
+
 # Basic Setup
 
 The default configuration supports starting a single-node Flink session cluster without any changes.
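The FLINK_CONF_DIR note added above can be illustrated with a small, self-contained shell sketch. The directory layout is simulated with temporary directories, and the actual `flink run` invocation is only hinted at in a comment; no real Flink distribution is assumed:

```shell
# Simulated Flink distribution: a temp dir stands in for e.g. /opt/flink.
FLINK_HOME="$(mktemp -d)"
mkdir -p "$FLINK_HOME/conf"
printf 'parallelism.default: 2\n' > "$FLINK_HOME/conf/flink-conf.yaml"

# Per-job configuration: copy the conf directory, then override a setting
# only in the copy, leaving the distribution's defaults untouched.
JOB_CONF_DIR="$(mktemp -d)"
cp -r "$FLINK_HOME/conf/." "$JOB_CONF_DIR"
printf 'parallelism.default: 8\n' > "$JOB_CONF_DIR/flink-conf.yaml"

# Point Flink at the per-job copy for this submission.
export FLINK_CONF_DIR="$JOB_CONF_DIR"
# "$FLINK_HOME/bin/flink" run ...   # would now pick up the per-job settings
echo "per-job conf: $(cat "$FLINK_CONF_DIR/flink-conf.yaml")"
```

As the diff notes, this pattern does not apply to Docker or standalone Kubernetes deployments, where `FLINK_PROPERTIES` or ConfigMaps are used instead.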


[flink] branch master updated (9554321 -> bb586ed)

2021-05-07 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 9554321  [hotfix][docs][python] Add an overview page for Python UDFs
 add 9283897  [hotfix] Make reactive warning less strong, clarifications
 add bb586ed  [hotfix][docs] Re-introduce note about FLINK_CONF_DIR

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/deployment/config.md  | 4 
 docs/content.zh/docs/deployment/elastic_scaling.md | 6 +++---
 docs/content/docs/deployment/config.md | 4 
 docs/content/docs/deployment/elastic_scaling.md| 6 +++---
 4 files changed, 14 insertions(+), 6 deletions(-)


[flink] branch release-1.13 updated: [FLINK-22493] Increase test stability in AdaptiveSchedulerITCase.

2021-04-29 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.13 by this push:
 new 229669c  [FLINK-22493] Increase test stability in 
AdaptiveSchedulerITCase.
229669c is described below

commit 229669cf16883b80c00c2b3aad75effc962d16d6
Author: Robert Metzger 
AuthorDate: Thu Apr 29 12:28:59 2021 +0200

[FLINK-22493] Increase test stability in AdaptiveSchedulerITCase.

This addresses the following problem in the 
testStopWithSavepointFailOnFirstSavepointSucceedOnSecond() test.

Once all tasks are running, the test triggers a savepoint, which 
intentionally fails, because of a test exception in a Task's checkpointing 
method. The test then waits for the savepoint future to fail, and the scheduler 
to restart the tasks. Once they are running again, it performs a sanity check 
whether the savepoint directory has been properly removed. In the reported run, 
there was still the savepoint directory around.

The savepoint directory is removed via the PendingCheckpoint.discard() 
method. This method is executed using the i/o executor pool of the 
CheckpointCoordinator. There is no guarantee that this discard method has been 
executed when the job is running again (and the executor shuts down with the 
dispatcher, hence it is not bound to job restarts).
---
 .../test/scheduling/AdaptiveSchedulerITCase.java   | 29 ++
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/flink-tests/src/test/java/org/apache/flink/test/scheduling/AdaptiveSchedulerITCase.java b/flink-tests/src/test/java/org/apache/flink/test/scheduling/AdaptiveSchedulerITCase.java
index b17dded..a41b9b4 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/scheduling/AdaptiveSchedulerITCase.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/scheduling/AdaptiveSchedulerITCase.java
@@ -219,16 +219,12 @@ public class AdaptiveSchedulerITCase extends TestLogger {
 
         DummySource.awaitRunning();
 
-        // ensure failed savepoint files have been removed by now (this check is intentionally after
-        // the wait for the sources to be running again, due to instabilities observed)
-        File[] files = savepointDirectory.listFiles();
-        if (files.length > 0) {
-            fail(
-                    "Found unexpected files: "
-                            + Arrays.stream(files)
-                                    .map(File::getAbsolutePath)
-                                    .collect(Collectors.joining(", ")));
-        }
+        // ensure failed savepoint files have been removed from the directory.
+        // We execute this in a retry loop with a timeout, because the savepoint deletion happens
+        // asynchronously and is not bound to the job lifecycle. See FLINK-22493 for more details.
+        CommonTestUtils.waitUntilCondition(
+                () -> isDirectoryEmpty(savepointDirectory),
+                Deadline.fromNow(Duration.ofSeconds(10)));
 
         // trigger second savepoint
         final String savepoint =
@@ -236,6 +232,19 @@ public class AdaptiveSchedulerITCase extends TestLogger {
         assertThat(savepoint, containsString(savepointDirectory.getAbsolutePath()));
     }
 
+    private boolean isDirectoryEmpty(File directory) {
+        File[] files = directory.listFiles();
+        if (files.length > 0) {
+            log.warn(
+                    "There are still unexpected files: {}",
+                    Arrays.stream(files)
+                            .map(File::getAbsolutePath)
+                            .collect(Collectors.joining(", ")));
+            return false;
+        }
+        return true;
+    }
+
     private static StreamExecutionEnvironment getEnvWithSource(
             StopWithSavepointTestBehavior behavior) {
         final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
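The fix above replaces a one-shot directory check with a deadline-bounded retry loop. A minimal, self-contained sketch of that pattern is below; the class and method names here are illustrative, while the real test uses Flink's `CommonTestUtils.waitUntilCondition` with a `Deadline`:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

/**
 * Minimal sketch of the deadline-bounded retry loop the commit message describes.
 * Names (WaitUntil, the 10 ms poll interval) are illustrative, not Flink API.
 */
public class WaitUntil {

    /** Polls the condition until it holds or the timeout elapses. */
    static void waitUntilCondition(BooleanSupplier condition, Duration timeout)
            throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (!condition.getAsBoolean()) {
            if (Instant.now().isAfter(deadline)) {
                throw new IllegalStateException("condition not met before deadline");
            }
            Thread.sleep(10); // poll again shortly, instead of failing on the first check
        }
    }

    /** Waits for a condition that flips to true after ~50 ms, mimicking async cleanup. */
    static String demo() throws InterruptedException {
        long start = System.nanoTime();
        waitUntilCondition(
                () -> System.nanoTime() - start > 50_000_000L, Duration.ofSeconds(10));
        return "condition met";
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```

The key point from the commit message: because the cleanup runs on an executor that is not bound to the job lifecycle, only a retry with a timeout (not a single assertion) makes the test deterministic.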


[flink] branch master updated (16c2e46 -> 00584d3)

2021-04-29 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 16c2e46  [FLINK-22426][table] Fix several shortcomings that prevent 
schema expressions
 add 00584d3  [FLINK-22493] Increase test stability in 
AdaptiveSchedulerITCase.

No new revisions were added by this update.

Summary of changes:
 .../test/scheduling/AdaptiveSchedulerITCase.java   | 29 ++
 1 file changed, 19 insertions(+), 10 deletions(-)


[flink] branch release-1.13 updated: [FLINK-22495][docs] Add Reactive Mode section to K8s

2021-04-29 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.13 by this push:
 new be7a590  [FLINK-22495][docs] Add Reactive Mode section to K8s
be7a590 is described below

commit be7a590e05bd091caefbe2bb0a51b20107b79493
Author: Robert Metzger 
AuthorDate: Wed Apr 28 11:45:49 2021 +0200

[FLINK-22495][docs] Add Reactive Mode section to K8s
---
 .../resource-providers/standalone/kubernetes.md| 81 +-
 .../resource-providers/standalone/kubernetes.md| 81 +-
 2 files changed, 160 insertions(+), 2 deletions(-)

diff --git a/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md b/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
index 84c53ac..b9f51ce 100644
--- a/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
+++ b/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
@@ -95,7 +95,7 @@ You can tear down the cluster using the following commands:
 
 ### Deploy Application Cluster
 
-A *Flink Application cluster* is a dedicated cluster which runs a single application.
+A *Flink Application cluster* is a dedicated cluster which runs a single application, which needs to be available at deployment time.
 
 A basic *Flink Application cluster* deployment in Kubernetes has three components:
 
@@ -233,6 +233,15 @@ You can access the queryable state of TaskManager if you create a `NodePort` ser
   1. Run `kubectl create -f taskmanager-query-state-service.yaml` to create the `NodePort` service for the `taskmanager` pod. The example of `taskmanager-query-state-service.yaml` can be found in [appendix](#common-cluster-resource-definitions).
   2. Run `kubectl get svc flink-taskmanager-query-state` to get the `` of this service. Then you can create the [QueryableStateClient(public-node-ip, node-port]({{< ref "docs/dev/datastream/fault-tolerance/queryable_state" >}}#querying-state) to submit state queries.
 
+### Using Standalone Kubernetes with Reactive Mode
+
+[Reactive Mode]({{< ref "docs/deployment/elastic_scaling" >}}#reactive-mode) allows to run Flink in a mode, where the *Application Cluster* is always adjusting the job parallelism to the available resources. In combination with Kubernetes, the replica count of the TaskManager deployment determines the available resources. Increasing the replica count will scale up the job, reducing it will trigger a scale down. This can also be done automatically by using a [Horizontal Pod Autoscaler](ht [...]
+
+To use Reactive Mode on Kubernetes, follow the same steps as for [deploying a job using an Application Cluster](#deploy-application-cluster). But instead of `flink-configuration-configmap.yaml` use this config map: `flink-reactive-mode-configuration-configmap.yaml`. It contains the `scheduler-mode: reactive` setting for Flink.
+
+Once you have deployed the *Application Cluster*, you can scale your job up or down by changing the replica count in the `flink-taskmanager` deployment.
+
+
 {{< top >}}
 
 ## Appendix
@@ -305,6 +314,76 @@ data:
 logger.netty.level = OFF
 ```
 
+
+`flink-reactive-mode-configuration-configmap.yaml`
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: flink-config
+  labels:
+    app: flink
+data:
+  flink-conf.yaml: |+
+    jobmanager.rpc.address: flink-jobmanager
+    taskmanager.numberOfTaskSlots: 2
+    blob.server.port: 6124
+    jobmanager.rpc.port: 6123
+    taskmanager.rpc.port: 6122
+    queryable-state.proxy.ports: 6125
+    jobmanager.memory.process.size: 1600m
+    taskmanager.memory.process.size: 1728m
+    parallelism.default: 2
+    scheduler-mode: reactive
+    execution.checkpointing.interval: 10s
+  log4j-console.properties: |+
+    # This affects logging for both user code and Flink
+    rootLogger.level = INFO
+    rootLogger.appenderRef.console.ref = ConsoleAppender
+    rootLogger.appenderRef.rolling.ref = RollingFileAppender
+
+    # Uncomment this if you want to _only_ change Flink's logging
+    #logger.flink.name = org.apache.flink
+    #logger.flink.level = INFO
+
+    # The following lines keep the log level of common libraries/connectors on
+    # log level INFO. The root logger does not override this. You have to manually
+    # change the log levels here.
+    logger.akka.name = akka
+    logger.akka.level = INFO
+    logger.kafka.name= org.apache.kafka
+    logger.kafka.level = INFO
+    logger.hadoop.name = org.apache.hadoop
+    logger.hadoop.level = INFO
+    logger.zookeeper.name = org.apache.zookeeper
+    logger.zookeeper.level = INFO
+
+    # Log all infos to the console
+    appender.console.name = ConsoleAppender
+    appender.console.type = CONSOLE
+    appender.console.layout.type = PatternLayout
+append
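The new docs section says scaling happens by changing the TaskManager replica count, manually (`kubectl scale deployment/flink-taskmanager --replicas=<n>`) or via a Horizontal Pod Autoscaler. A hypothetical HPA manifest for that pattern is sketched below; the `flink-taskmanager` Deployment name comes from the appendix of the page being edited, while the API version, replica bounds, and CPU threshold are illustrative assumptions, not part of this commit:

```yaml
# Sketch (assumptions: autoscaling/v2beta2 available, CPU metrics collected).
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: flink-taskmanager
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flink-taskmanager   # the TaskManager Deployment from the appendix
  minReplicas: 1
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

With Reactive Mode enabled, each replica change is picked up by Flink as a scale-up or scale-down of the job, as described in the added section.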

[flink] branch master updated (26da1bf -> 0ddda1b)

2021-04-29 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 26da1bf  [FLINK-22373][docs] Add Flink 1.13 release notes
 add 0ddda1b  [FLINK-22495][docs] Add Reactive Mode section to K8s

No new revisions were added by this update.

Summary of changes:
 .../resource-providers/standalone/kubernetes.md| 81 +-
 .../resource-providers/standalone/kubernetes.md| 81 +-
 2 files changed, 160 insertions(+), 2 deletions(-)

