Build failed in Jenkins: flink-snapshot-deployment-1.7 #319

2019-09-23 Thread Apache Jenkins Server
See 


--
[...truncated 424.24 KB...]
[INFO] Excluding org.objenesis:objenesis:jar:2.1 from the shaded jar.
[INFO] No artifact matching filter io.netty:netty
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing 

 with 

[INFO] Replacing original test artifact with shaded test artifact.
[INFO] Replacing 

 with 

[INFO] Dependency-reduced POM written at: 

[INFO] 
[INFO] >>> maven-source-plugin:2.2.1:jar (attach-sources) > generate-sources @ 
flink-scala_2.11 >>>
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (validate) @ flink-scala_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven-version) @ 
flink-scala_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven) @ 
flink-scala_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-versions) @ 
flink-scala_2.11 ---
[INFO] 
[INFO] --- directory-maven-plugin:0.1:highest-basedir (directories) @ 
flink-scala_2.11 ---
[INFO] Highest basedir set to: 

[INFO] 
[INFO] --- build-helper-maven-plugin:1.7:add-source (add-source) @ 
flink-scala_2.11 ---
[INFO] Source directory: 

 added.
[INFO] 
[INFO] <<< maven-source-plugin:2.2.1:jar (attach-sources) < generate-sources @ 
flink-scala_2.11 <<<
[INFO] 
[INFO] --- maven-source-plugin:2.2.1:jar (attach-sources) @ flink-scala_2.11 ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-javadoc-plugin:2.9.1:jar (attach-javadocs) @ flink-scala_2.11 
---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (integration-tests) @ 
flink-scala_2.11 ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- scalastyle-maven-plugin:1.0.0:check (default) @ flink-scala_2.11 ---
Saving to 
outputFile=
Processed 103 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 2041 ms
[INFO] 
[INFO] --- japicmp-maven-plugin:0.11.0:cmp (default) @ flink-scala_2.11 ---
[INFO] Written file 
'
[INFO] Written file 
'
[INFO] Written file 
'
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
flink-scala_2.11 ---
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/flink/flink-scala_2.11/1.7-SNAPSHOT/flink-scala_2.11-1.7-SNAPSHOT.jar
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/flink/flink-scala_2.11/1.7-SNAPSHOT/flink-scala_2.11-1.7-SNAPSHOT.pom
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/flink/flink-scala_2.11/1.7-SNAPSHOT/flink-scala_2.11-1.7-SNAPSHOT-tests.jar
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/flink/flink-scala_2.11/1.7-SNAPSHOT/flink-scala_2.11-1.7-SNAPSHOT-tests.jar
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/flink/flink-scala_2.11/1.7-SNAPSHOT/flink-scala_2.11-1.7-SNAPSHOT-sources.jar
[INFO] Installing 

Build failed in Jenkins: flink-snapshot-deployment-1.6 #413

2019-09-23 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H40 (ubuntu xenial) in workspace 

No credentials specified
Wiping out workspace first.
Cloning the remote Git repository
Using shallow clone
Cloning repository https://gitbox.apache.org/repos/asf/flink.git
 > git init  # timeout=10
Fetching upstream changes from https://gitbox.apache.org/repos/asf/flink.git
 > git --version # timeout=10
 > git fetch --tags --progress https://gitbox.apache.org/repos/asf/flink.git +refs/heads/*:refs/remotes/origin/* --depth=1
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress 
https://gitbox.apache.org/repos/asf/flink.git 
+refs/heads/*:refs/remotes/origin/* --depth=1" returned status code 128:
stdout: 
stderr: remote: Counting objects: 114521, done.
remote: Compressing objects:   0% (1/64628)
[...git clone compression progress truncated; output reached 65% (42009/64628) before the log was cut off...]

[flink] branch master updated: [FLINK-14015][python] Introduces PythonScalarFunctionOperator to execute Python user-defined functions

2019-09-23 Thread hequn
This is an automated email from the ASF dual-hosted git repository.

hequn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 71fa237  [FLINK-14015][python] Introduces PythonScalarFunctionOperator 
to execute Python user-defined functions
71fa237 is described below

commit 71fa23738e7f5f582c20753a92653bc3ce1e29b8
Author: Dian Fu 
AuthorDate: Tue Sep 17 18:09:00 2019 +0800

[FLINK-14015][python] Introduces PythonScalarFunctionOperator to execute 
Python user-defined functions

- Introduces PythonScalarFunctionOperator to execute Python user-defined 
functions for flink planner.
- Introduces BaseRowPythonScalarFunctionOperator to execute Python 
user-defined functions for blink planner.

This closes #9707.
---
 .../src/main/java/org/apache/flink/types/Row.java  |  30 +++
 .../test/java/org/apache/flink/types/RowTest.java  |  24 ++
 flink-python/pom.xml   |   6 +
 .../flink/python/AbstractPythonFunctionRunner.java |  34 +--
 .../org/apache/flink/python/PythonOptions.java |  48 
 .../python/AbstractPythonFunctionOperator.java | 278 +
 .../AbstractPythonScalarFunctionOperator.java  | 200 +++
 .../BaseRowPythonScalarFunctionOperator.java   | 178 +
 .../python/PythonScalarFunctionOperator.java   | 158 
 .../python/AbstractPythonScalarFunctionRunner.java |  11 +-
 .../python/BaseRowPythonScalarFunctionRunner.java  |   9 +-
 .../python/PythonScalarFunctionRunner.java |   9 +-
 .../typeutils}/BeamTypeUtils.java  |   6 +-
 .../typeutils}/coders/BaseRowCoder.java|   2 +-
 .../typeutils}/coders/ReusableDataInputView.java   |   2 +-
 .../typeutils}/coders/ReusableDataOutputView.java  |   2 +-
 .../typeutils}/coders/RowCoder.java|   2 +-
 .../org/apache/flink/python/PythonOptionsTest.java |  59 +
 .../AbstractPythonScalarFunctionRunnerTest.java|   1 +
 .../BaseRowPythonScalarFunctionRunnerTest.java |   6 +-
 .../table/functions/python/BeamTypeUtilsTest.java  |   5 +-
 .../python/PythonScalarFunctionRunnerTest.java |  12 +-
 .../BaseRowPythonScalarFunctionOperatorTest.java   | 103 
 .../python/PassThroughPythonFunctionRunner.java|  75 ++
 .../python/PythonScalarFunctionOperatorTest.java   |  81 ++
 .../PythonScalarFunctionOperatorTestBase.java  | 223 +
 .../typeutils}/coders/BaseRowCoderTest.java|   2 +-
 .../typeutils}/coders/CoderTestBase.java   |   2 +-
 .../typeutils}/coders/RowCoderTest.java|   2 +-
 .../util/AbstractStreamOperatorTestHarness.java|   7 +
 30 files changed, 1531 insertions(+), 46 deletions(-)

diff --git a/flink-core/src/main/java/org/apache/flink/types/Row.java 
b/flink-core/src/main/java/org/apache/flink/types/Row.java
index b8bdbf9..aa15bf9 100644
--- a/flink-core/src/main/java/org/apache/flink/types/Row.java
+++ b/flink-core/src/main/java/org/apache/flink/types/Row.java
@@ -167,4 +167,34 @@ public class Row implements Serializable{
}
return newRow;
}
+
+   /**
+* Creates a new Row whose fields are copied from the other rows.
+* This method does not perform a deep copy.
+*
+* @param first The first row being copied.
+* @param remainings The other rows being copied.
+* @return the joined new Row
+*/
+   public static Row join(Row first, Row... remainings) {
+   int newLength = first.fields.length;
+   for (Row remaining : remainings) {
+   newLength += remaining.fields.length;
+   }
+
+   final Row joinedRow = new Row(newLength);
+   int index = 0;
+
+   // copy the first row
+   System.arraycopy(first.fields, 0, joinedRow.fields, index, 
first.fields.length);
+   index += first.fields.length;
+
+   // copy the remaining rows
+   for (Row remaining : remainings) {
+   System.arraycopy(remaining.fields, 0, joinedRow.fields, 
index, remaining.fields.length);
+   index += remaining.fields.length;
+   }
+
+   return joinedRow;
+   }
 }
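The copy logic of the new `Row.join` above can be sketched in plain Java without the Flink dependency. `JoinSketch.joinFields` below is a hypothetical stand-in operating directly on `Object[]` arrays (the type of `Row.fields`), added here only to illustrate the single-allocation `System.arraycopy` pattern; it is not part of the actual commit:

```java
import java.util.Arrays;

public class JoinSketch {

    // Shallow-joins several field arrays into one, mirroring Row.join:
    // compute the total length, allocate once, then arraycopy each
    // source array into the result in order. No deep copy is performed.
    public static Object[] joinFields(Object[] first, Object[]... remainings) {
        int newLength = first.length;
        for (Object[] remaining : remainings) {
            newLength += remaining.length;
        }

        Object[] joined = new Object[newLength];
        int index = 0;

        // copy the first array
        System.arraycopy(first, 0, joined, index, first.length);
        index += first.length;

        // copy the remaining arrays
        for (Object[] remaining : remainings) {
            System.arraycopy(remaining, 0, joined, index, remaining.length);
            index += remaining.length;
        }
        return joined;
    }

    public static void main(String[] args) {
        Object[] joined = joinFields(new Object[]{1, "hello"}, new Object[]{true});
        System.out.println(Arrays.toString(joined));  // [1, hello, true]
    }
}
```

Because only the references are copied, mutating a field object after the join is visible through both the source and the joined array, which matches the "does not perform a deep copy" note in the Javadoc.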
diff --git a/flink-core/src/test/java/org/apache/flink/types/RowTest.java 
b/flink-core/src/test/java/org/apache/flink/types/RowTest.java
index 067992a..605ac9b 100644
--- a/flink-core/src/test/java/org/apache/flink/types/RowTest.java
+++ b/flink-core/src/test/java/org/apache/flink/types/RowTest.java
@@ -79,4 +79,28 @@ public class RowTest {
expected.setField(2, "hello world");
assertEquals(expected, projected);
}
+
+   @Test
+   public void testRowJoin() {
+   Row row1 = new Row(2);
+   row1.setField(0, 

[flink] branch master updated (a80bbf1 -> 250f23d)

2019-09-23 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from a80bbf1  [FLINK-13766][task] Refactor the implementation of 
StreamInputProcessor based on PushingAsyncDataInput#emitNext
 new 8496de3  [FLINK-14157][e2e] Undo jaxb relocations for java 8 in s3.
 new 250f23d  [FLINK-14157][e2e] Disable Streaming File Sink s3 end-to-end 
test for java 11.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 flink-filesystems/flink-s3-fs-hadoop/pom.xml | 23 ---
 flink-filesystems/flink-s3-fs-presto/pom.xml | 22 --
 tools/travis/splits/split_misc.sh|  4 +++-
 3 files changed, 3 insertions(+), 46 deletions(-)



[flink] 02/02: [FLINK-14157][e2e] Disable Streaming File Sink s3 end-to-end test for java 11.

2019-09-23 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 250f23ddb82ad7ff71c85b6691b426e21e79fff6
Author: Kostas Kloudas 
AuthorDate: Fri Sep 20 14:41:09 2019 +0200

[FLINK-14157][e2e] Disable Streaming File Sink s3 end-to-end test for java 
11.

The test was disabled temporarily until a proper fix is added for 
FLINK-13748.
The problem is that the relocation of jaxb is only relevant for Java 11 and
not for Java 8. For Java 8 it actually makes Flink fail at runtime.
---
 tools/travis/splits/split_misc.sh | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/travis/splits/split_misc.sh 
b/tools/travis/splits/split_misc.sh
index 97f811d..856b351 100755
--- a/tools/travis/splits/split_misc.sh
+++ b/tools/travis/splits/split_misc.sh
@@ -54,7 +54,9 @@ run_test "Streaming SQL end-to-end test (Old planner)" 
"$END_TO_END_DIR/test-scr
 run_test "Streaming SQL end-to-end test (Blink planner)" 
"$END_TO_END_DIR/test-scripts/test_streaming_sql.sh blink" 
"skip_check_exceptions"
 run_test "Streaming bucketing end-to-end test" 
"$END_TO_END_DIR/test-scripts/test_streaming_bucketing.sh" 
"skip_check_exceptions"
 run_test "Streaming File Sink end-to-end test" 
"$END_TO_END_DIR/test-scripts/test_streaming_file_sink.sh" 
"skip_check_exceptions"
-run_test "Streaming File Sink s3 end-to-end test" 
"$END_TO_END_DIR/test-scripts/test_streaming_file_sink.sh s3" 
"skip_check_exceptions"
+if [[ ${PROFILE} != *"jdk11"* ]]; then
+  run_test "Streaming File Sink s3 end-to-end test" 
"$END_TO_END_DIR/test-scripts/test_streaming_file_sink.sh s3" 
"skip_check_exceptions"
+fi
 run_test "Stateful stream job upgrade end-to-end test" 
"$END_TO_END_DIR/test-scripts/test_stateful_stream_job_upgrade.sh 2 4"
 
 run_test "Elasticsearch (v2.3.5) sink end-to-end test" 
"$END_TO_END_DIR/test-scripts/test_streaming_elasticsearch.sh 2 
https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.5/elasticsearch-2.3.5.tar.gz;



[flink] 01/02: [FLINK-14157][e2e] Undo jaxb relocations for java 8 in s3.

2019-09-23 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 8496de3c019874f223faa8ee6baddc0abbf09590
Author: Kostas Kloudas 
AuthorDate: Fri Sep 20 15:03:18 2019 +0200

[FLINK-14157][e2e] Undo jaxb relocations for java 8 in s3.
---
 flink-filesystems/flink-s3-fs-hadoop/pom.xml | 23 ---
 flink-filesystems/flink-s3-fs-presto/pom.xml | 22 --
 2 files changed, 45 deletions(-)

diff --git a/flink-filesystems/flink-s3-fs-hadoop/pom.xml 
b/flink-filesystems/flink-s3-fs-hadoop/pom.xml
index 4bc57a4..e03c885 100644
--- a/flink-filesystems/flink-s3-fs-hadoop/pom.xml
+++ b/flink-filesystems/flink-s3-fs-hadoop/pom.xml
@@ -106,11 +106,6 @@ under the License.

com.amazon

org.apache.flink.fs.s3base.shaded.com.amazon

-   
-   
-   
javax.xml.bind
-   
org.apache.flink.fs.s3hadoop.shaded.javax.xml.bind
-   



org.apache.flink.runtime.util
@@ -131,22 +126,4 @@ under the License.



-
-   
-   
-   java11
-   
-   11
-   
-   
-   
-   
-   javax.xml.bind
-   jaxb-api
-   2.3.0
-   
-   
-   
-   
-
 
diff --git a/flink-filesystems/flink-s3-fs-presto/pom.xml 
b/flink-filesystems/flink-s3-fs-presto/pom.xml
index b0a93c1..f589e04 100644
--- a/flink-filesystems/flink-s3-fs-presto/pom.xml
+++ b/flink-filesystems/flink-s3-fs-presto/pom.xml
@@ -271,10 +271,6 @@ under the License.

org.apache.flink.fs.s3presto.shaded.io.airlift


-   
javax.xml.bind
-   
org.apache.flink.fs.s3presto.shaded.javax.xml.bind
-   
-   

org.HdrHistogram

org.apache.flink.fs.s3presto.shaded.org.HdrHistogram

@@ -327,22 +323,4 @@ under the License.



-
-   
-   
-   java11
-   
-   11
-   
-   
-   
-   
-   javax.xml.bind
-   jaxb-api
-   2.3.0
-   
-   
-   
-   
-
 



[flink] branch FLINK-13748 deleted (was 1bd04a2)

2019-09-23 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch FLINK-13748
in repository https://gitbox.apache.org/repos/asf/flink.git.


 was 1bd04a2  [FLINK-13748][S3][build] Fix jaxb relocation for S3.

This change permanently discards the following revisions:

 discard 1bd04a2  [FLINK-13748][S3][build] Fix jaxb relocation for S3.



[flink] branch master updated (c83c186 -> a80bbf1)

2019-09-23 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from c83c186  [FLINK-14160][docs] Describe --backpressure option for 
Operations Playground.
 add f5b6952  [hotfix][task] Refactor the process of checking input index 
for StreamOneInputProcessor#processInput
 add 7aa0671  [hotfix][task] Remove unnecessary SuppressWarnings from 
StreamOneInputProcessor
 add 0983d1f  [hotfix][task] Remove unused argument from constructor of 
StreamTaskNetworkInput
 add c727190  [hotfix][network] Refactor the class name AsyncDataInput to 
PullingAsyncDataInput
 add 4d42eab  [hotfix][task] Refactor the constructor of 
StreamTwoInputProcessor
 add 68386c4  [hotfix][task] Refactor the constructor of 
StreamOneInputProcessor
 add a80bbf1  [FLINK-13766][task] Refactor the implementation of 
StreamInputProcessor based on PushingAsyncDataInput#emitNext

No new revisions were added by this update.

Summary of changes:
 .../flink/runtime/io/AvailabilityListener.java |   2 +-
 .../flink/runtime/io/NullableAsyncDataInput.java   |   6 +-
 ...ncDataInput.java => PullingAsyncDataInput.java} |   2 +-
 .../io/network/partition/consumer/InputGate.java   |   4 +-
 .../partition/consumer/InputGateTestBase.java  |   6 +-
 .../runtime/io/CheckpointedInputGate.java  |   4 +-
 .../io/{StreamTaskInput.java => InputStatus.java}  |  28 ++-
 .../runtime/io/PushingAsyncDataInput.java  |  60 +
 .../runtime/io/StreamOneInputProcessor.java| 155 +---
 .../streaming/runtime/io/StreamTaskInput.java  |  10 +-
 .../runtime/io/StreamTaskNetworkInput.java |  54 +++--
 .../runtime/io/StreamTwoInputProcessor.java| 260 ++---
 .../runtime/streamstatus/StatusWatermarkValve.java |  38 ++-
 .../runtime/tasks/OneInputStreamTask.java  | 139 +--
 .../runtime/tasks/TwoInputStreamTask.java  |  28 ++-
 .../runtime/io/StreamTaskNetworkInputTest.java |  64 -
 .../streamstatus/StatusWatermarkValveTest.java |  47 ++--
 17 files changed, 504 insertions(+), 403 deletions(-)
 rename 
flink-runtime/src/main/java/org/apache/flink/runtime/io/{AsyncDataInput.java => 
PullingAsyncDataInput.java} (96%)
 copy 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/{StreamTaskInput.java
 => InputStatus.java} (58%)
 create mode 100644 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/PushingAsyncDataInput.java



[flink] branch release-1.9 updated: [FLINK-14160][docs] Describe --backpressure option for Operations Playground.

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 5997acc  [FLINK-14160][docs] Describe --backpressure option for 
Operations Playground.
5997acc is described below

commit 5997accdf9d5a8c5a934b7888f6826ca9fd1acf8
Author: David Anderson 
AuthorDate: Sat Sep 21 10:51:02 2019 +0200

[FLINK-14160][docs] Describe --backpressure option for Operations 
Playground.

This closes #9739.
---
 .../docker-playgrounds/flink-operations-playground.md   | 17 ++---
 .../flink-operations-playground.zh.md   | 17 ++---
 2 files changed, 28 insertions(+), 6 deletions(-)

diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
index bb720b4..e0cd10d 100644
--- a/docs/getting-started/docker-playgrounds/flink-operations-playground.md
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
@@ -132,7 +132,7 @@ will show you how to interact with the Flink Cluster and 
demonstrate some of Fli
 
 ### Flink WebUI
 
-The most natural starting point to observe your Flink Cluster is the Web UI 
exposed under 
+The most natural starting point to observe your Flink Cluster is the WebUI 
exposed under 
 [http://localhost:8081](http://localhost:8081). If everything went well, 
you'll see that the cluster initially consists of 
 one TaskManager and executes a Job called *Click Event Count*.
 
@@ -798,8 +798,8 @@ TaskManager metrics);
 
 ## Variants
 
-You might have noticed that the *Click Event Count* was always started with 
`--checkpointing` and 
-`--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
+You might have noticed that the *Click Event Count* application was always 
started with `--checkpointing` 
+and `--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
 `docker-compose.yaml`, you can change the behavior of the Job.
 
 * `--checkpointing` enables [checkpoint]({{ site.baseurl 
}}/internals/stream_checkpointing.html), 
@@ -811,3 +811,14 @@ lost.
 Job. When disabled, the Job will assign events to windows based on the 
wall-clock time instead of 
 the timestamp of the `ClickEvent`. Consequently, the number of events per 
window will not be exactly
 one thousand anymore. 
+
+The *Click Event Count* application also has another option, turned off by 
default, that you can 
+enable to explore the behavior of this job under backpressure. You can add 
this option in the 
+command of the *client* container in `docker-compose.yaml`.
+
+* `--backpressure` adds an additional operator into the middle of the job that 
causes severe backpressure 
+during even-numbered minutes (e.g., during 10:12, but not during 10:13). This 
can be observed by 
+inspecting various [network metrics]({{ site.baseurl 
}}/monitoring/metrics.html#default-shuffle-service) 
+such as `outputQueueLength` and `outPoolUsage`, and/or by using the 
+[backpressure monitoring]({{ site.baseurl 
}}/monitoring/back_pressure.html#monitoring-back-pressure) 
+available in the WebUI.
\ No newline at end of file
diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
index b3c4f24..65b0ee1 100644
--- a/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
@@ -132,7 +132,7 @@ will show you how to interact with the Flink Cluster and 
demonstrate some of Fli
 
 ### Flink WebUI
 
-The most natural starting point to observe your Flink Cluster is the Web UI 
exposed under 
+The most natural starting point to observe your Flink Cluster is the WebUI 
exposed under 
 [http://localhost:8081](http://localhost:8081). If everything went well, 
you'll see that the cluster initially consists of 
 one TaskManager and executes a Job called *Click Event Count*.
 
@@ -798,8 +798,8 @@ TaskManager metrics);
 
 ## Variants
 
-You might have noticed that the *Click Event Count* was always started with 
`--checkpointing` and 
-`--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
+You might have noticed that the *Click Event Count* application was always 
started with `--checkpointing` 
+and `--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
 `docker-compose.yaml`, you can change the behavior of the Job.
 
 * `--checkpointing` enables [checkpoint]({{ site.baseurl 
}}/internals/stream_checkpointing.html), 
@@ -811,3 +811,14 @@ lost.
 Job. When disabled, the Job will assign events to windows based on the 
wall-clock time instead of 

[flink] branch master updated: [FLINK-14160][docs] Describe --backpressure option for Operations Playground.

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new c83c186  [FLINK-14160][docs] Describe --backpressure option for 
Operations Playground.
c83c186 is described below

commit c83c18671bc0056a341877f312ba293ae5811953
Author: David Anderson 
AuthorDate: Sat Sep 21 10:51:02 2019 +0200

[FLINK-14160][docs] Describe --backpressure option for Operations 
Playground.

This closes #9739.
---
 .../docker-playgrounds/flink-operations-playground.md   | 17 ++---
 .../flink-operations-playground.zh.md   | 17 ++---
 2 files changed, 28 insertions(+), 6 deletions(-)

diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
index 38a0848..c9f7675 100644
--- a/docs/getting-started/docker-playgrounds/flink-operations-playground.md
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
@@ -136,7 +136,7 @@ will show you how to interact with the Flink Cluster and 
demonstrate some of Fli
 
 ### Flink WebUI
 
-The most natural starting point to observe your Flink Cluster is the Web UI 
exposed under 
+The most natural starting point to observe your Flink Cluster is the WebUI 
exposed under 
 [http://localhost:8081](http://localhost:8081). If everything went well, 
you'll see that the cluster initially consists of 
 one TaskManager and executes a Job called *Click Event Count*.
 
@@ -802,8 +802,8 @@ TaskManager metrics);
 
 ## Variants
 
-You might have noticed that the *Click Event Count* was always started with 
`--checkpointing` and 
-`--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
+You might have noticed that the *Click Event Count* application was always 
started with `--checkpointing` 
+and `--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
 `docker-compose.yaml`, you can change the behavior of the Job.
 
 * `--checkpointing` enables [checkpoint]({{ site.baseurl 
}}/internals/stream_checkpointing.html), 
@@ -815,3 +815,14 @@ lost.
 Job. When disabled, the Job will assign events to windows based on the 
wall-clock time instead of 
 the timestamp of the `ClickEvent`. Consequently, the number of events per 
window will not be exactly
 one thousand anymore. 
+
+The *Click Event Count* application also has another option, turned off by 
default, that you can 
+enable to explore the behavior of this job under backpressure. You can add 
this option in the 
+command of the *client* container in `docker-compose.yaml`.
+
+* `--backpressure` adds an additional operator into the middle of the job that 
causes severe backpressure 
+during even-numbered minutes (e.g., during 10:12, but not during 10:13). This 
can be observed by 
+inspecting various [network metrics]({{ site.baseurl 
}}/monitoring/metrics.html#default-shuffle-service) 
+such as `outputQueueLength` and `outPoolUsage`, and/or by using the 
+[backpressure monitoring]({{ site.baseurl 
}}/monitoring/back_pressure.html#monitoring-back-pressure) 
+available in the WebUI.
\ No newline at end of file
diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
index 38a0848..c9f7675 100644
--- a/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
@@ -136,7 +136,7 @@ will show you how to interact with the Flink Cluster and 
demonstrate some of Fli
 
 ### Flink WebUI
 
-The most natural starting point to observe your Flink Cluster is the Web UI 
exposed under 
+The most natural starting point to observe your Flink Cluster is the WebUI 
exposed under 
 [http://localhost:8081](http://localhost:8081). If everything went well, 
you'll see that the cluster initially consists of 
 one TaskManager and executes a Job called *Click Event Count*.
 
@@ -802,8 +802,8 @@ TaskManager metrics);
 
 ## Variants
 
-You might have noticed that the *Click Event Count* was always started with 
`--checkpointing` and 
-`--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
+You might have noticed that the *Click Event Count* application was always 
started with `--checkpointing` 
+and `--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
 `docker-compose.yaml`, you can change the behavior of the Job.
 
 * `--checkpointing` enables [checkpoint]({{ site.baseurl 
}}/internals/stream_checkpointing.html), 
@@ -815,3 +815,14 @@ lost.
 Job. When disabled, the Job will assign events to windows based on the 
wall-clock time instead of 
 the 

[flink-playgrounds] 01/02: [FLINK-14160] Add --backpressure option to the ClickEventCount job in the operations playground

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git

commit 41acc3b90bbf43e6879f2e3d9cdded0cac980524
Author: David Anderson 
AuthorDate: Thu Sep 19 20:08:58 2019 +0200

[FLINK-14160] Add --backpressure option to the ClickEventCount job in the 
operations playground

This closes #4.
---
 .../java/flink-playground-clickcountjob/pom.xml|  2 +-
 .../ops/clickcount/ClickEventCount.java| 25 ++--
 .../ops/clickcount/functions/BackpressureMap.java  | 46 ++
 operations-playground/docker-compose.yaml  |  4 +-
 4 files changed, 71 insertions(+), 6 deletions(-)
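The even-minute gate that this commit's `BackpressureMap` applies (per the docs text quoted earlier: severe backpressure during 10:12 but not 10:13) can be sketched in plain Java. `BackpressureSketch`, `isEvenMinute`, and the sleep-based throttle below are illustrative assumptions, not the actual playground code:

```java
import java.time.LocalTime;

public class BackpressureSketch {

    // Backpressure is inflicted only during even-numbered minutes
    // (e.g. 10:12 but not 10:13), as the playground docs describe.
    public static boolean isEvenMinute(int minuteOfHour) {
        return minuteOfHour % 2 == 0;
    }

    // A map-style function that passes records through unchanged but
    // adds an artificial per-record delay while the gate is open,
    // which is one simple way to create backpressure upstream.
    public static <T> T slowMap(T value) {
        if (isEvenMinute(LocalTime.now().getMinute())) {
            try {
                Thread.sleep(100);  // artificial per-record delay
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return value;  // identity map otherwise
    }

    public static void main(String[] args) {
        System.out.println(isEvenMinute(12));  // true
        System.out.println(isEvenMinute(13));  // false
        System.out.println(slowMap("click"));
    }
}
```

The effect of such a delay shows up in the network metrics the docs point at (`outputQueueLength`, `outPoolUsage`), since upstream operators fill their output buffers while the slow operator drains them at roughly 10 records per second.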

diff --git 
a/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml 
b/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
index 3d17fcd..893c11e 100644
--- a/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
+++ b/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
@@ -22,7 +22,7 @@ under the License.
 
org.apache.flink
flink-playground-clickcountjob
-   1-FLINK-1.9_2.11
+   2-FLINK-1.9_2.11
 
flink-playground-clickcountjob
jar
diff --git 
a/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
 
b/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
index 0316bc6..f3d628c 100644
--- 
a/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
+++ 
b/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
@@ -18,6 +18,7 @@
 package org.apache.flink.playgrounds.ops.clickcount;
 
 import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.playgrounds.ops.clickcount.functions.BackpressureMap;
 import org.apache.flink.playgrounds.ops.clickcount.functions.ClickEventStatisticsCollector;
 import org.apache.flink.playgrounds.ops.clickcount.functions.CountingAggregator;
 import org.apache.flink.playgrounds.ops.clickcount.records.ClickEvent;
@@ -25,6 +26,7 @@ import org.apache.flink.playgrounds.ops.clickcount.records.ClickEventDeserializa
 import org.apache.flink.playgrounds.ops.clickcount.records.ClickEventStatistics;
 import org.apache.flink.playgrounds.ops.clickcount.records.ClickEventStatisticsSerializationSchema;
 import org.apache.flink.streaming.api.TimeCharacteristic;
+import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
 import org.apache.flink.streaming.api.windowing.time.Time;
@@ -47,6 +49,7 @@ import java.util.concurrent.TimeUnit;
  * The Job can be configured via the command line:
  * * "--checkpointing": enables checkpointing
  * * "--event-time": set the StreamTimeCharacteristic to EventTime
+ * * "--backpressure": insert an operator that causes periodic backpressure
  * * "--input-topic": the name of the Kafka Topic to consume {@link ClickEvent}s from
  * * "--output-topic": the name of the Kafka Topic to produce {@link ClickEventStatistics} to
  * * "--bootstrap.servers": comma-separated list of Kafka brokers
@@ -56,6 +59,7 @@ public class ClickEventCount {
 
public static final String CHECKPOINTING_OPTION = "checkpointing";
public static final String EVENT_TIME_OPTION = "event-time";
+   public static final String BACKPRESSURE_OPTION = "backpressure";
 
public static final Time WINDOW_SIZE = Time.of(15, TimeUnit.SECONDS);
 
@@ -66,6 +70,8 @@ public class ClickEventCount {
 
configureEnvironment(params, env);
 
+   boolean inflictBackpressure = params.has(BACKPRESSURE_OPTION);
+
String inputTopic = params.get("input-topic", "input");
String outputTopic = params.get("output-topic", "output");
	String brokers = params.get("bootstrap.servers", "localhost:9092");
@@ -73,19 +79,32 @@ public class ClickEventCount {
	kafkaProps.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
	kafkaProps.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "click-event-count");
 
-	env.addSource(new FlinkKafkaConsumer<>(inputTopic, new ClickEventDeserializationSchema(), kafkaProps))
+	DataStream<ClickEvent> clicks =
+		env.addSource(new FlinkKafkaConsumer<>(inputTopic, new ClickEventDeserializationSchema(), kafkaProps))
.name("ClickEvent Source")
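(The committed `BackpressureMap.java` is truncated out of this excerpt. As a rough illustration of how a map operator can "cause periodic backpressure", the following hypothetical sketch throttles every record during even minutes; the class and its 100 ms delay are assumptions for illustration, not the actual commit contents.)

```java
import java.time.LocalTime;

// Hypothetical sketch: throttle record processing during even minutes to
// inflict periodic backpressure. A Flink MapFunction could call
// Thread.sleep(delayMillis(LocalTime.now())) before forwarding each element.
public class BackpressureSketch {

    // True while throttling should be active (here: every even minute).
    static boolean shouldThrottle(LocalTime now) {
        return now.getMinute() % 2 == 0;
    }

    // Per-record artificial processing delay in milliseconds.
    static long delayMillis(LocalTime now) {
        return shouldThrottle(now) ? 100L : 0L;
    }

    public static void main(String[] args) {
        assert shouldThrottle(LocalTime.of(10, 2));
        assert !shouldThrottle(LocalTime.of(10, 3));
        assert delayMillis(LocalTime.of(10, 3)) == 0L;
        System.out.println("ok");
    }
}
```

Slowing the mapper this way propagates backpressure upstream to the Kafka source, which is what makes the option useful for demonstrating Flink's backpressure monitoring in the playground.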

[flink-playgrounds] 02/02: [hotfix] Improve .gitignore

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git

commit 5b93147d2fc050a5ce9597dcc8c478c1b9ed08c4
Author: Fabian Hueske 
AuthorDate: Mon Sep 23 12:03:20 2019 +0200

[hotfix] Improve .gitignore
---
 .gitignore | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/.gitignore b/.gitignore
index d4e4d76..d04cff5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,3 @@
-*/.idea
-*/target
-*/dependency-reduced-pom.xml
+**/.idea
+**/target
+**/dependency-reduced-pom.xml
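The change above widens each pattern from one directory level to any depth: a single `*` matches exactly one path segment, while `**` crosses directory boundaries. Java's NIO glob syntax is close to (not identical with) gitignore's, so this is only an illustration of the depth difference, using a hypothetical `matches` helper:

```java
import java.nio.file.FileSystems;
import java.nio.file.Paths;

// Demonstrates why */target misses nested build directories while
// **/target catches them at any depth.
public class GlobDepthDemo {

    // Helper: does the given glob pattern match the given relative path?
    static boolean matches(String glob, String path) {
        return FileSystems.getDefault()
                .getPathMatcher("glob:" + glob)
                .matches(Paths.get(path));
    }

    public static void main(String[] args) {
        assert matches("*/target", "operations-playground/target");      // one level
        assert !matches("*/target", "docker/ops-playground-image/target"); // too deep
        assert matches("**/target", "operations-playground/target");
        assert matches("**/target", "docker/ops-playground-image/target"); // any depth
        System.out.println("ok");
    }
}
```

Note one gitignore-specific nuance the Java matcher does not replicate: in gitignore, `**/target` also matches a plain `target` at the repository root.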



[flink-playgrounds] branch release-1.9 updated (b575647 -> 5b93147)

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git.


from b575647  [hotfix] Update URL in ops playground README.md to Flink 1.9 docs.
 new 41acc3b  [FLINK-14160] Add --backpressure option to the ClickEventCount job in the operations playground
 new 5b93147  [hotfix] Improve .gitignore

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .gitignore |  6 +--
 .../java/flink-playground-clickcountjob/pom.xml|  2 +-
 .../ops/clickcount/ClickEventCount.java| 25 ++--
 .../ops/clickcount/functions/BackpressureMap.java  | 46 ++
 operations-playground/docker-compose.yaml  |  4 +-
 5 files changed, 74 insertions(+), 9 deletions(-)
 create mode 100644 
docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/functions/BackpressureMap.java



[flink-playgrounds] 02/02: [hotfix] Improve .gitignore

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git

commit 00db5d0904ca1a023eb9612b12eccd25961f31a9
Author: Fabian Hueske 
AuthorDate: Mon Sep 23 12:03:20 2019 +0200

[hotfix] Improve .gitignore
---
 .gitignore | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/.gitignore b/.gitignore
index d4e4d76..d04cff5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,3 @@
-*/.idea
-*/target
-*/dependency-reduced-pom.xml
+**/.idea
+**/target
+**/dependency-reduced-pom.xml



[flink-playgrounds] 01/02: [FLINK-14160] Add --backpressure option to the ClickEventCount job in the operations playground

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git

commit 1c7c254fc7827e74db7c3c387348e7ca2219788a
Author: David Anderson 
AuthorDate: Thu Sep 19 20:08:58 2019 +0200

[FLINK-14160] Add --backpressure option to the ClickEventCount job in the operations playground

This closes #4.
---
 .../java/flink-playground-clickcountjob/pom.xml|  2 +-
 .../ops/clickcount/ClickEventCount.java| 25 ++--
 .../ops/clickcount/functions/BackpressureMap.java  | 46 ++
 operations-playground/docker-compose.yaml  |  4 +-
 4 files changed, 71 insertions(+), 6 deletions(-)

diff --git a/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml b/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
index 3d17fcd..893c11e 100644
--- a/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
+++ b/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
@@ -22,7 +22,7 @@ under the License.
 
	<groupId>org.apache.flink</groupId>
	<artifactId>flink-playground-clickcountjob</artifactId>
-	<version>1-FLINK-1.9_2.11</version>
+	<version>2-FLINK-1.9_2.11</version>
 
	<name>flink-playground-clickcountjob</name>
	<packaging>jar</packaging>
diff --git a/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java b/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
index 0316bc6..f3d628c 100644
--- a/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
+++ b/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
@@ -18,6 +18,7 @@
 package org.apache.flink.playgrounds.ops.clickcount;
 
 import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.playgrounds.ops.clickcount.functions.BackpressureMap;
 import org.apache.flink.playgrounds.ops.clickcount.functions.ClickEventStatisticsCollector;
 import org.apache.flink.playgrounds.ops.clickcount.functions.CountingAggregator;
 import org.apache.flink.playgrounds.ops.clickcount.records.ClickEvent;
@@ -25,6 +26,7 @@ import org.apache.flink.playgrounds.ops.clickcount.records.ClickEventDeserializa
 import org.apache.flink.playgrounds.ops.clickcount.records.ClickEventStatistics;
 import org.apache.flink.playgrounds.ops.clickcount.records.ClickEventStatisticsSerializationSchema;
 import org.apache.flink.streaming.api.TimeCharacteristic;
+import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
 import org.apache.flink.streaming.api.windowing.time.Time;
@@ -47,6 +49,7 @@ import java.util.concurrent.TimeUnit;
  * The Job can be configured via the command line:
  * * "--checkpointing": enables checkpointing
  * * "--event-time": set the StreamTimeCharacteristic to EventTime
+ * * "--backpressure": insert an operator that causes periodic backpressure
  * * "--input-topic": the name of the Kafka Topic to consume {@link ClickEvent}s from
  * * "--output-topic": the name of the Kafka Topic to produce {@link ClickEventStatistics} to
  * * "--bootstrap.servers": comma-separated list of Kafka brokers
@@ -56,6 +59,7 @@ public class ClickEventCount {
 
public static final String CHECKPOINTING_OPTION = "checkpointing";
public static final String EVENT_TIME_OPTION = "event-time";
+   public static final String BACKPRESSURE_OPTION = "backpressure";
 
public static final Time WINDOW_SIZE = Time.of(15, TimeUnit.SECONDS);
 
@@ -66,6 +70,8 @@ public class ClickEventCount {
 
configureEnvironment(params, env);
 
+   boolean inflictBackpressure = params.has(BACKPRESSURE_OPTION);
+
String inputTopic = params.get("input-topic", "input");
String outputTopic = params.get("output-topic", "output");
	String brokers = params.get("bootstrap.servers", "localhost:9092");
@@ -73,19 +79,32 @@ public class ClickEventCount {
	kafkaProps.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
	kafkaProps.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "click-event-count");
 
-	env.addSource(new FlinkKafkaConsumer<>(inputTopic, new ClickEventDeserializationSchema(), kafkaProps))
+	DataStream<ClickEvent> clicks =
+		env.addSource(new FlinkKafkaConsumer<>(inputTopic, new ClickEventDeserializationSchema(), kafkaProps))
.name("ClickEvent Source")
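The job reads its command line through Flink's `ParameterTool`: `params.has(BACKPRESSURE_OPTION)` tests whether the bare `--backpressure` flag was passed, while `params.get(key, default)` falls back to a default when an option like `--output-topic` is absent. A minimal, hypothetical stand-in for that behavior (not Flink's actual implementation) looks like this:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of --key[ value] argument parsing, mirroring how the
// job uses ParameterTool.has(...) and ParameterTool.get(key, default).
public class ArgsSketch {

    // Parse args of the form: --flag  or  --key value
    static Map<String, String> parse(String[] args) {
        Map<String, String> m = new HashMap<>();
        for (int i = 0; i < args.length; i++) {
            if (args[i].startsWith("--")) {
                String key = args[i].substring(2);
                // A value follows unless the next token is another --option.
                if (i + 1 < args.length && !args[i + 1].startsWith("--")) {
                    m.put(key, args[++i]);
                } else {
                    m.put(key, "");  // bare flag, e.g. --backpressure
                }
            }
        }
        return m;
    }

    public static void main(String[] args) {
        Map<String, String> p = parse(new String[]{"--backpressure", "--input-topic", "clicks"});
        assert p.containsKey("backpressure");                       // like params.has(...)
        assert "clicks".equals(p.get("input-topic"));
        assert "output".equals(p.getOrDefault("output-topic", "output")); // default fallback
        System.out.println("ok");
    }
}
```

This is why the operations playground can toggle backpressure purely through `docker-compose.yaml` command arguments, without rebuilding the job jar.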

[flink-playgrounds] branch master updated (5d636ae -> 00db5d0)

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git.


from 5d636ae  [hotfix] Update URL in ops playground README.md to Flink master docs.
 new 1c7c254  [FLINK-14160] Add --backpressure option to the ClickEventCount job in the operations playground
 new 00db5d0  [hotfix] Improve .gitignore



Summary of changes:
 .gitignore |  6 +--
 .../java/flink-playground-clickcountjob/pom.xml|  2 +-
 .../ops/clickcount/ClickEventCount.java| 25 ++--
 .../ops/clickcount/functions/BackpressureMap.java  | 46 ++
 operations-playground/docker-compose.yaml  |  4 +-
 5 files changed, 74 insertions(+), 9 deletions(-)
 create mode 100644 
docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/functions/BackpressureMap.java



[flink] branch master updated (8155d46 -> adfe011)

2019-09-23 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 8155d46  [FLINK-14150][python] Clean up the __pycache__ directories and other empty directories in flink-python source code folder before packaging.
 add adfe011  [FLINK-13864][fs-connector] Make StreamingFileSink extensible

No new revisions were added by this update.

Summary of changes:
 .../sink/filesystem/StreamingFileSink.java | 180 -
 .../functions/sink/filesystem/BulkWriterTest.java  |  80 -
 .../filesystem/LocalStreamingFileSinkTest.java |  62 +++
 .../api/functions/sink/filesystem/TestUtils.java   | 139 ++--
 4 files changed, 367 insertions(+), 94 deletions(-)



[flink-web] 02/02: Rebuild website

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit b8b507e248b103a1c107bc4e54d6267a4d681bb7
Author: Fabian Hueske 
AuthorDate: Mon Sep 23 11:33:44 2019 +0200

Rebuild website
---
 content/img/poweredby/xiaomi-logo.png | Bin 0 -> 48194 bytes
 content/index.html|   6 ++
 content/poweredby.html|   4 
 content/zh/index.html |   8 +++-
 content/zh/poweredby.html |   4 
 5 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/content/img/poweredby/xiaomi-logo.png 
b/content/img/poweredby/xiaomi-logo.png
new file mode 100644
index 000..004707a
Binary files /dev/null and b/content/img/poweredby/xiaomi-logo.png differ
diff --git a/content/index.html b/content/index.html
index aff2564..f0fc296 100644
--- a/content/index.html
+++ b/content/index.html
@@ -461,6 +461,12 @@
 
 
   
+
+  
+
+
+
+  
 
   
   
diff --git a/content/poweredby.html b/content/poweredby.html
index 81f67eb..acd424a 100644
--- a/content/poweredby.html
+++ b/content/poweredby.html
@@ -297,6 +297,10 @@
   Uber built their internal SQL-based, open-source streaming analytics platform AthenaX on Apache Flink. <a href="https://eng.uber.com/athenax/" target="_blank"> Read more on the Uber engineering blog</a>
   
   
+
+Xiaomi, one of the largest electronics companies in China, built a platform with Flink to improve the efficiency of developing and operating real-time applications and use it in real-time recommendations. <a href="https://files.alicdn.com/tpsservice/d77d3ed3f2709790f0d84f4ec279a486.pdf" target="_blank"> Learn more about how Xiaomi is using Flink.</a>
+  
+  
 
   Yelp utilizes Flink to power its data connectors ecosystem and stream processing infrastructure. <a href="https://ververica.com/flink-forward/resources/powering-yelps-data-pipeline-infrastructure-with-apache-flink" target="_blank"> Find out more watching a Flink Forward talk</a>
   
diff --git a/content/zh/index.html b/content/zh/index.html
index 166d33c..d7cfc4a 100644
--- a/content/zh/index.html
+++ b/content/zh/index.html
@@ -440,7 +440,7 @@
   
 
 
-  
+  
 
   
   
@@ -459,6 +459,12 @@
 
 
   
+
+  
+
+
+
+  
 
   
   
diff --git a/content/zh/poweredby.html b/content/zh/poweredby.html
index dd0a42c..a4fc2b4 100644
--- a/content/zh/poweredby.html
+++ b/content/zh/poweredby.html
@@ -295,6 +295,10 @@
   Uber 在 Apache Flink 上构建了基于 SQL 的开源流媒体分析平台 AthenaX。<a href="https://eng.uber.com/athenax/" target="_blank"> 更多信息请访问Uber工程博客</a>
   
   
+
+小米,作为中国最大的专注于硬件与软件开发的公司之一,利用 Flink 构建了一个内部平台,以提高开发运维实时应用程序的效率,并用于实时推荐等场景。<a href="https://files.alicdn.com/tpsservice/d77d3ed3f2709790f0d84f4ec279a486.pdf" target="_blank"> 详细了解小米如何使用 Flink 的。</a>
+  
+  
 
   Yelp 利用 Flink 为其数据连接器生态系统和流处理基础架构提供支持。<a href="https://ververica.com/flink-forward/resources/powering-yelps-data-pipeline-infrastructure-with-apache-flink" target="_blank"> 请参阅 Flink Forward 上的演讲</a>
   



[flink-web] branch asf-site updated (caf1c41 -> b8b507e)

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from caf1c41  Rebuild website
 new 2cbd74b  Add Xiaomi to the Powered By page
 new b8b507e  Rebuild website



Summary of changes:
 content/img/poweredby/xiaomi-logo.png | Bin 0 -> 48194 bytes
 content/index.html|   6 ++
 content/poweredby.html|   4 
 content/zh/index.html |   8 +++-
 content/zh/poweredby.html |   4 
 img/poweredby/xiaomi-logo.png | Bin 0 -> 48194 bytes
 index.md  |   6 ++
 index.zh.md   |   8 +++-
 poweredby.md  |   4 
 poweredby.zh.md   |   4 
 10 files changed, 42 insertions(+), 2 deletions(-)
 create mode 100644 content/img/poweredby/xiaomi-logo.png
 create mode 100644 img/poweredby/xiaomi-logo.png



[flink-web] 01/02: Add Xiaomi to the Powered By page

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 2cbd74bd8a4048fe48cef8040011da80ad34ce26
Author: Jark Wu 
AuthorDate: Mon Sep 23 16:23:52 2019 +0800

Add Xiaomi to the Powered By page

This closes #270.
---
 img/poweredby/xiaomi-logo.png | Bin 0 -> 48194 bytes
 index.md  |   6 ++
 index.zh.md   |   8 +++-
 poweredby.md  |   4 
 poweredby.zh.md   |   4 
 5 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/img/poweredby/xiaomi-logo.png b/img/poweredby/xiaomi-logo.png
new file mode 100644
index 000..004707a
Binary files /dev/null and b/img/poweredby/xiaomi-logo.png differ
diff --git a/index.md b/index.md
index 73a37ab..9242e71 100644
--- a/index.md
+++ b/index.md
@@ -290,6 +290,12 @@ layout: base
 
 
   
+
+  
+
+
+
+  
 
   
   
diff --git a/index.zh.md b/index.zh.md
index a8a11ce..b798558 100644
--- a/index.zh.md
+++ b/index.zh.md
@@ -271,7 +271,7 @@ layout: base
   
 
 
-  
+  
 
   
   
@@ -290,6 +290,12 @@ layout: base
 
 
   
+
+  
+
+
+
+  
 
   
   
diff --git a/poweredby.md b/poweredby.md
index 7775388..f4f7eb3 100644
--- a/poweredby.md
+++ b/poweredby.md
@@ -122,6 +122,10 @@ If you would you like to be included on this page, please reach out to the [Flin
   Uber built their internal SQL-based, open-source streaming analytics platform AthenaX on Apache Flink. <a href='https://eng.uber.com/athenax/' target='_blank'> Read more on the Uber engineering blog</a>
   
   
+
+Xiaomi, one of the largest electronics companies in China, built a platform with Flink to improve the efficiency of developing and operating real-time applications and use it in real-time recommendations. <a href='https://files.alicdn.com/tpsservice/d77d3ed3f2709790f0d84f4ec279a486.pdf' target='_blank'> Learn more about how Xiaomi is using Flink.</a>
+  
+  
 
   Yelp utilizes Flink to power its data connectors ecosystem and stream processing infrastructure. <a href='https://ververica.com/flink-forward/resources/powering-yelps-data-pipeline-infrastructure-with-apache-flink' target='_blank'> Find out more watching a Flink Forward talk</a>
   
diff --git a/poweredby.zh.md b/poweredby.zh.md
index d8b1e65..52b2f40 100644
--- a/poweredby.zh.md
+++ b/poweredby.zh.md
@@ -122,6 +122,10 @@ Apache Flink 为全球许多公司和企业的关键业务提供支持。在这
   Uber 在 Apache Flink 上构建了基于 SQL 的开源流媒体分析平台 AthenaX。<a href='https://eng.uber.com/athenax/' target='_blank'> 更多信息请访问Uber工程博客</a>
   
   
+
+小米,作为中国最大的专注于硬件与软件开发的公司之一,利用 Flink 构建了一个内部平台,以提高开发运维实时应用程序的效率,并用于实时推荐等场景。<a href='https://files.alicdn.com/tpsservice/d77d3ed3f2709790f0d84f4ec279a486.pdf' target='_blank'> 详细了解小米如何使用 Flink 的。</a>
+  
+  
 
   Yelp 利用 Flink 为其数据连接器生态系统和流处理基础架构提供支持。<a href='https://ververica.com/flink-forward/resources/powering-yelps-data-pipeline-infrastructure-with-apache-flink' target='_blank'> 请参阅 Flink Forward 上的演讲</a>
   



[flink-web] 01/02: Add SK telecom to Chinese Powered By page

2019-09-23 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 2e6224661156652efc90d327a8c7a74a517d283f
Author: AT-Fieldless <15692118...@163.com>
AuthorDate: Tue Aug 20 21:31:32 2019 +0800

Add SK telecom to Chinese Powered By page

This closes #248
---
 index.zh.md | 6 ++
 poweredby.zh.md | 4 
 2 files changed, 10 insertions(+)

diff --git a/index.zh.md b/index.zh.md
index 3b9ad46..a8a11ce 100644
--- a/index.zh.md
+++ b/index.zh.md
@@ -271,6 +271,12 @@ layout: base
   
 
 
+  
+
+  
+  
+
+
   
 
   
diff --git a/poweredby.zh.md b/poweredby.zh.md
index 7b58d20..d8b1e65 100644
--- a/poweredby.zh.md
+++ b/poweredby.zh.md
@@ -106,6 +106,10 @@ Apache Flink 为全球许多公司和企业的关键业务提供支持。在这
   ResearchGate 是科学家的社交网络,它使用 Flink 进行网络分析和近似重复检测。<a href='http://2016.flink-forward.org/kb_sessions/joining-infinity-windowless-stream-processing-with-flink/' target='_blank'> 请参阅 ResearchGate 在 Flink Forward 2016 上的分享</a>
   
   
+  
+  三星(SK telecom)是韩国最大的无线运营商。它在很多应用中使用了 Flink,包括智能工厂和移动应用程序。<a href='https://www.youtube.com/watch?v=wPQWFy5JENw' target='_blank'> 了解其中一个 SK telecom 的使用案例。</a>
+  
+  
 
   Telefónica NEXT 的 TÜV 认证数据匿名平台由 Flink 提供支持。<a href='https://next.telefonica.de/en/solutions/big-data-privacy-services' target='_blank'> 了解更多关于 Telefónica NEXT 的信息</a>
   



[flink-web] 02/02: Rebuild website

2019-09-23 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit caf1c412573cca231b4853719ea2c1988775eae6
Author: Jark Wu 
AuthorDate: Mon Sep 23 15:54:19 2019 +0800

Rebuild website
---
 content/zh/index.html | 6 ++
 content/zh/poweredby.html | 4 
 2 files changed, 10 insertions(+)

diff --git a/content/zh/index.html b/content/zh/index.html
index 0465de0..166d33c 100644
--- a/content/zh/index.html
+++ b/content/zh/index.html
@@ -440,6 +440,12 @@
   
 
 
+  
+
+  
+  
+
+
   
 
   
diff --git a/content/zh/poweredby.html b/content/zh/poweredby.html
index 78f68e7..dd0a42c 100644
--- a/content/zh/poweredby.html
+++ b/content/zh/poweredby.html
@@ -279,6 +279,10 @@
   ResearchGate 是科学家的社交网络,它使用 Flink 进行网络分析和近似重复检测。<a href="http://2016.flink-forward.org/kb_sessions/joining-infinity-windowless-stream-processing-with-flink/" target="_blank"> 请参阅 ResearchGate 在 Flink Forward 2016 上的分享</a>
   
   
+  
+  三星(SK telecom)是韩国最大的无线运营商。它在很多应用中使用了 Flink,包括智能工厂和移动应用程序。<a href="https://www.youtube.com/watch?v=wPQWFy5JENw" target="_blank"> 了解其中一个 SK telecom 的使用案例。</a>
+  
+  
 
   Telefónica NEXT 的 TÜV 认证数据匿名平台由 Flink 提供支持。<a href="https://next.telefonica.de/en/solutions/big-data-privacy-services" target="_blank"> 了解更多关于 Telefónica NEXT 的信息</a>
   



[flink-web] branch asf-site updated (94a96a6 -> caf1c41)

2019-09-23 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 94a96a6  Rebuild website
 new 2e62246  Add SK telecom to Chinese Powered By page
 new caf1c41  Rebuild website



Summary of changes:
 content/zh/index.html | 6 ++
 content/zh/poweredby.html | 4 
 index.zh.md   | 6 ++
 poweredby.zh.md   | 4 
 4 files changed, 20 insertions(+)