[flink] branch master updated (e523732 -> 536ccd7)

2019-09-03 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from e523732  [FLINK-13922][docs] Translate headings of 
task_failure_recovery.zh.md into English
 add 536ccd7  [FLINK-13885] Deprecate HighAvailabilityOptions#HA_JOB_DELAY

No new revisions were added by this update.

Summary of changes:
 .../generated/high_availability_configuration.html |  5 -
 .../flink/configuration/ConfigConstants.java   |  7 +-
 .../configuration/HighAvailabilityOptions.java | 25 ++
 .../runtime/testutils/ZooKeeperTestUtils.java  |  1 -
 4 files changed, 17 insertions(+), 21 deletions(-)



[flink] branch master updated (b5a0b31 -> e523732)

2019-09-03 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from b5a0b31  [FLINK-13775][table-planner-blink] Refactor 
ExpressionConverter
 add e523732  [FLINK-13922][docs] Translate headings of 
task_failure_recovery.zh.md into English

No new revisions were added by this update.

Summary of changes:
 docs/dev/task_failure_recovery.zh.md | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)



[flink] 02/03: [FLINK-13775][table-planner-blink] Rename RexNodeConverter to ExpressionConverter

2019-09-03 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 70f0081264cf01e5c94ddc692e4cb9433c2a6007
Author: JingsongLi 
AuthorDate: Thu Aug 29 17:10:36 2019 +0800

[FLINK-13775][table-planner-blink] Rename RexNodeConverter to 
ExpressionConverter
---
 .../ExpressionConverter.java}  | 10 +---
 .../planner/plan/QueryOperationConverter.java  | 30 +++---
 .../codegen/agg/DeclarativeAggCodeGen.scala|  5 ++--
 .../planner/codegen/agg/DistinctAggCodeGen.scala   |  4 +--
 .../planner/codegen/agg/ImperativeAggCodeGen.scala |  5 ++--
 .../codegen/agg/batch/AggCodeGenHelper.scala   | 18 +++--
 .../codegen/agg/batch/HashAggCodeGenHelper.scala   | 13 +-
 .../codegen/agg/batch/WindowCodeGenerator.scala|  5 ++--
 .../PushFilterIntoTableSourceScanRule.scala|  4 +--
 .../table/planner/sources/TableSourceUtil.scala|  4 +--
 .../batch/table/stringexpr/SetOperatorsTest.scala  |  2 +-
 11 files changed, 54 insertions(+), 46 deletions(-)

diff --git 
a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/RexNodeConverter.java
 
b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/ExpressionConverter.java
similarity index 99%
rename from 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/RexNodeConverter.java
rename to 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/ExpressionConverter.java
index 2e3159d..b1e8e96 100644
--- 
a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/RexNodeConverter.java
+++ 
b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/ExpressionConverter.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.flink.table.planner.expressions;
+package org.apache.flink.table.planner.expressions.converter;
 
 import org.apache.flink.table.api.TableException;
 import org.apache.flink.table.dataformat.Decimal;
@@ -47,6 +47,8 @@ import 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl;
 import org.apache.flink.table.planner.calcite.FlinkRelBuilder;
 import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
 import org.apache.flink.table.planner.calcite.RexFieldVariable;
+import org.apache.flink.table.planner.expressions.RexNodeExpression;
+import org.apache.flink.table.planner.expressions.SqlAggFunctionVisitor;
 import org.apache.flink.table.planner.functions.InternalFunctionDefinitions;
 import org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable;
 import org.apache.flink.table.planner.functions.sql.SqlThrowExceptionFunction;
@@ -117,7 +119,7 @@ import static 
org.apache.flink.table.types.utils.TypeConversions.fromLegacyInfoT
 /**
  * Visit expression to generator {@link RexNode}.
  */
-public class RexNodeConverter implements ExpressionVisitor {
+public class ExpressionConverter implements ExpressionVisitor {
 
private final RelBuilder relBuilder;
private final FlinkTypeFactory typeFactory;
@@ -125,7 +127,7 @@ public class RexNodeConverter implements 
ExpressionVisitor {
// store mapping from BuiltInFunctionDefinition to it's 
RexNodeConversion.
private final Map 
conversionsOfBuiltInFunc = new IdentityHashMap<>();
 
-   public RexNodeConverter(RelBuilder relBuilder) {
+   public ExpressionConverter(RelBuilder relBuilder) {
this.relBuilder = relBuilder;
this.typeFactory = (FlinkTypeFactory) 
relBuilder.getRexBuilder().getTypeFactory();
 
@@ -358,7 +360,7 @@ public class RexNodeConverter implements 
ExpressionVisitor {
 
private List convertCallChildren(List children) {
return children.stream()
-   .map(expression -> 
expression.accept(RexNodeConverter.this))
+   .map(expression -> 
expression.accept(ExpressionConverter.this))
.collect(Collectors.toList());
}
 
diff --git 
a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/QueryOperationConverter.java
 
b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/QueryOperationConverter.java
index 481f125..7a4da1f 100644
--- 
a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/QueryOperationConverter.java
+++ 
b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/QueryOperationConverter.java
@@ -59,9 +59,9 @@ import 
org.apache.flink.table.planner.expressions.PlannerRowtimeAttribute;
 import org.apache.flink.table.planner.expressions.PlannerWindowEnd;
 import 

[flink] branch master updated (0147cf6 -> b5a0b31)

2019-09-03 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 0147cf6  [FLINK-13941][fs-connector] Do not delete partial part files 
from S3 upon restore.
 new 87ee1a1  [FLINK-13775][table-planner-blink] Correct time field index 
of window agg
 new 70f0081  [FLINK-13775][table-planner-blink] Rename RexNodeConverter to 
ExpressionConverter
 new b5a0b31  [FLINK-13775][table-planner-blink] Refactor 
ExpressionConverter

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../planner/expressions/RexNodeConverter.java  | 967 -
 .../converter/CallExpressionConvertRule.java   |  56 ++
 .../converter/CustomizedConvertRule.java   | 381 
 .../expressions/converter/DirectConvertRule.java   | 157 
 .../expressions/converter/ExpressionConverter.java | 328 +++
 .../expressions/converter/OverConvertRule.java | 197 +
 .../converter/ScalarFunctionConvertRule.java   |  54 ++
 .../planner/plan/QueryOperationConverter.java  |  30 +-
 .../codegen/agg/DeclarativeAggCodeGen.scala|   5 +-
 .../planner/codegen/agg/DistinctAggCodeGen.scala   |   4 +-
 .../planner/codegen/agg/ImperativeAggCodeGen.scala |   5 +-
 .../codegen/agg/batch/AggCodeGenHelper.scala   |  18 +-
 .../codegen/agg/batch/HashAggCodeGenHelper.scala   |  13 +-
 .../codegen/agg/batch/WindowCodeGenerator.scala|   5 +-
 .../logical/BatchLogicalWindowAggregateRule.scala  |   2 +-
 .../PushFilterIntoTableSourceScanRule.scala|   4 +-
 .../logical/StreamLogicalWindowAggregateRule.scala |   2 +-
 .../table/planner/sources/TableSourceUtil.scala|   4 +-
 .../batch/table/stringexpr/SetOperatorsTest.scala  |   2 +-
 tools/maven/suppressions.xml   |   4 +-
 20 files changed, 1225 insertions(+), 1013 deletions(-)
 delete mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/RexNodeConverter.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CallExpressionConvertRule.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/DirectConvertRule.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/ExpressionConverter.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/OverConvertRule.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/ScalarFunctionConvertRule.java



[flink] 01/03: [FLINK-13775][table-planner-blink] Correct time field index of window agg

2019-09-03 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 87ee1a1cb3ea6cf0680a88a9a153aa38762ad0e6
Author: JingsongLi 
AuthorDate: Sun Aug 18 17:18:51 2019 +0200

[FLINK-13775][table-planner-blink] Correct time field index of window agg
---
 .../planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala| 2 +-
 .../planner/plan/rules/logical/StreamLogicalWindowAggregateRule.scala   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
 
b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
index e711d8d..68d20d4 100644
--- 
a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
+++ 
b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
@@ -73,7 +73,7 @@ class BatchLogicalWindowAggregateRule
   fieldName,
   fromLogicalTypeToDataType(toLogicalType(fieldType)),
   0, // only one input, should always be 0
-  ref.getIndex)
+  windowExprIdx)
 }
   }
 
diff --git 
a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/StreamLogicalWindowAggregateRule.scala
 
b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/StreamLogicalWindowAggregateRule.scala
index af1a481..c94e4de 100644
--- 
a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/StreamLogicalWindowAggregateRule.scala
+++ 
b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/StreamLogicalWindowAggregateRule.scala
@@ -80,7 +80,7 @@ class StreamLogicalWindowAggregateRule
   rowType.getFieldList.get(v.getIndex).getName,
   fromLogicalTypeToDataType(toLogicalType(v.getType)),
   0, // only one input, should always be 0
-  v.getIndex)
+  windowExprIdx)
   case _ =>
 throw new ValidationException("Window can only be defined over a time 
attribute column.")
 }



[flink] 03/03: [FLINK-13775][table-planner-blink] Refactor ExpressionConverter

2019-09-03 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b5a0b317ad008e031bbfd16e75ebc2a7eb9ae25f
Author: JingsongLi 
AuthorDate: Thu Aug 29 17:14:21 2019 +0800

[FLINK-13775][table-planner-blink] Refactor ExpressionConverter

This closes #9485
---
 .../converter/CallExpressionConvertRule.java   |  56 ++
 .../converter/CustomizedConvertRule.java   | 381 +
 .../expressions/converter/DirectConvertRule.java   | 157 
 .../expressions/converter/ExpressionConverter.java | 907 +++--
 .../expressions/converter/OverConvertRule.java | 197 +
 .../converter/ScalarFunctionConvertRule.java   |  54 ++
 tools/maven/suppressions.xml   |   4 +-
 7 files changed, 980 insertions(+), 776 deletions(-)

diff --git 
a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CallExpressionConvertRule.java
 
b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CallExpressionConvertRule.java
new file mode 100644
index 000..bf6a5c4
--- /dev/null
+++ 
b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CallExpressionConvertRule.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.expressions.converter;
+
+import org.apache.flink.table.expressions.CallExpression;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.Optional;
+
+/**
+ * Rule to convert {@link CallExpression}.
+ */
+public interface CallExpressionConvertRule {
+
+   /**
+* Convert call expression with context to RexNode.
+*
+* @return Success return RexNode of {@link Optional#of}, Fail return 
{@link Optional#empty()}.
+*/
+   Optional convert(CallExpression call, ConvertContext context);
+
+   /**
+* Context of {@link CallExpressionConvertRule}.
+*/
+   interface ConvertContext {
+
+   /**
+* Convert expression to RexNode, used by children conversion.
+*/
+   RexNode toRexNode(Expression expr);
+
+   RelBuilder getRelBuilder();
+
+   FlinkTypeFactory getTypeFactory();
+   }
+}
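A minimal sketch of a rule implementing the interface above; the class name and the single-child conversion logic are hypothetical and only illustrate the Optional-based contract:

{% highlight java %}
import java.util.Optional;

import org.apache.flink.table.expressions.CallExpression;

import org.apache.calcite.rex.RexNode;

// Hypothetical rule: handles calls with exactly one child by converting that child,
// and returns Optional.empty() for everything else so that other rules can be tried.
public class SingleChildPassThroughRule implements CallExpressionConvertRule {

	@Override
	public Optional<RexNode> convert(CallExpression call, ConvertContext context) {
		if (call.getChildren().size() == 1) {
			// delegate the child conversion back to the converter via the context
			return Optional.of(context.toRexNode(call.getChildren().get(0)));
		}
		return Optional.empty();
	}
}
{% endhighlight %}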
diff --git 
a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java
 
b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java
new file mode 100644
index 000..ca8193b
--- /dev/null
+++ 
b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java
@@ -0,0 +1,381 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.expressions.converter;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.expressions.CallExpression;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ExpressionUtils;
+import 

[flink] branch master updated: [FLINK-13941][fs-connector] Do not delete partial part files from S3 upon restore.

2019-09-03 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 0147cf6  [FLINK-13941][fs-connector] Do not delete partial part files 
from S3 upon restore.
0147cf6 is described below

commit 0147cf601701c87dd330898f68b47939f2ef1226
Author: Kostas Kloudas 
AuthorDate: Mon Sep 2 14:35:57 2019 +0200

[FLINK-13941][fs-connector] Do not delete partial part files from S3 upon 
restore.
---
 .../api/functions/sink/filesystem/Bucket.java  | 19 +++
 .../api/functions/sink/filesystem/BucketTest.java  | 22 --
 2 files changed, 11 insertions(+), 30 deletions(-)

diff --git 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
index 5fe535d..4a996e7 100644
--- 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
+++ 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
@@ -150,10 +150,6 @@ public class Bucket {
 

fsWriter.recoverForCommit(resumable).commitAfterRecovery();
}
-
-   if (fsWriter.requiresCleanupOfRecoverableState()) {
-   fsWriter.cleanupRecoverableState(resumable);
-   }
}
 
private void commitRecoveredPendingFiles(final BucketState 
state) throws IOException {
@@ -316,12 +312,19 @@ public class Bucket {
 
while (it.hasNext()) {
final ResumeRecoverable recoverable = 
it.next().getValue();
-   final boolean successfullyDeleted = 
fsWriter.cleanupRecoverableState(recoverable);
-   it.remove();
 
-   if (LOG.isDebugEnabled() && successfullyDeleted) {
-   LOG.debug("Subtask {} successfully deleted 
incomplete part for bucket id={}.", subtaskIndex, bucketId);
+   // this check is redundant, as we only put entries in 
the resumablesPerCheckpoint map
+   // list when the requiresCleanupOfRecoverableState() 
returns true, but having it makes
+   // the code more readable.
+
+   if (fsWriter.requiresCleanupOfRecoverableState()) {
+   final boolean successfullyDeleted = 
fsWriter.cleanupRecoverableState(recoverable);
+
+   if (LOG.isDebugEnabled() && 
successfullyDeleted) {
+   LOG.debug("Subtask {} successfully 
deleted incomplete part for bucket id={}.", subtaskIndex, bucketId);
+   }
}
+   it.remove();
}
}
 
diff --git 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
index 546a08c..583bacf 100644
--- 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
+++ 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
@@ -99,28 +99,6 @@ public class BucketTest {
}
 
@Test
-   public void shouldCleanupResumableAfterRestoring() throws Exception {
-   final File outDir = TEMP_FOLDER.newFolder();
-   final Path path = new Path(outDir.toURI());
-
-   final TestRecoverableWriter recoverableWriter = 
getRecoverableWriter(path);
-   final Bucket bucketUnderTest =
-   createBucket(recoverableWriter, path, 0, 0, new 
PartFileConfig());
-
-   bucketUnderTest.write("test-element", 0L);
-
-   final BucketState state = 
bucketUnderTest.onReceptionOfCheckpoint(0L);
-   assertThat(state, hasActiveInProgressFile());
-
-   bucketUnderTest.onSuccessfulCompletionOfCheckpoint(0L);
-
-   final TestRecoverableWriter newRecoverableWriter = 
getRecoverableWriter(path);
-   restoreBucket(newRecoverableWriter, 0, 1, state, new 
PartFileConfig());
-
-   assertThat(newRecoverableWriter, hasCalledDiscard(1)); // that 
is for checkpoints 0 and 1
-   }
-
-   @Test
public void shouldNotCallCleanupWithoutInProgressPartFiles() throws 
Exception {
final File outDir = TEMP_FOLDER.newFolder();
final Path path = new Path(outDir.toURI());



[flink] branch release-1.9 updated: [FLINK-13941][fs-connector] Do not delete partial part files from S3 upon restore.

2019-09-03 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 5a4c05e  [FLINK-13941][fs-connector] Do not delete partial part files 
from S3 upon restore.
5a4c05e is described below

commit 5a4c05eddc80ac8c6c017fc4ec44bc43e262626b
Author: Kostas Kloudas 
AuthorDate: Mon Sep 2 14:35:57 2019 +0200

[FLINK-13941][fs-connector] Do not delete partial part files from S3 upon 
restore.
---
 .../api/functions/sink/filesystem/Bucket.java  | 19 +++
 .../api/functions/sink/filesystem/BucketTest.java  | 22 --
 2 files changed, 11 insertions(+), 30 deletions(-)

diff --git 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
index 3252d9c..cc6726b 100644
--- 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
+++ 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
@@ -146,10 +146,6 @@ public class Bucket {
 

fsWriter.recoverForCommit(resumable).commitAfterRecovery();
}
-
-   if (fsWriter.requiresCleanupOfRecoverableState()) {
-   fsWriter.cleanupRecoverableState(resumable);
-   }
}
 
private void commitRecoveredPendingFiles(final BucketState 
state) throws IOException {
@@ -312,12 +308,19 @@ public class Bucket {
 
while (it.hasNext()) {
final ResumeRecoverable recoverable = 
it.next().getValue();
-   final boolean successfullyDeleted = 
fsWriter.cleanupRecoverableState(recoverable);
-   it.remove();
 
-   if (LOG.isDebugEnabled() && successfullyDeleted) {
-   LOG.debug("Subtask {} successfully deleted 
incomplete part for bucket id={}.", subtaskIndex, bucketId);
+   // this check is redundant, as we only put entries in 
the resumablesPerCheckpoint map
+   // list when the requiresCleanupOfRecoverableState() 
returns true, but having it makes
+   // the code more readable.
+
+   if (fsWriter.requiresCleanupOfRecoverableState()) {
+   final boolean successfullyDeleted = 
fsWriter.cleanupRecoverableState(recoverable);
+
+   if (LOG.isDebugEnabled() && 
successfullyDeleted) {
+   LOG.debug("Subtask {} successfully 
deleted incomplete part for bucket id={}.", subtaskIndex, bucketId);
+   }
}
+   it.remove();
}
}
 
diff --git 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
index 308bc31..a2e4582 100644
--- 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
+++ 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
@@ -99,28 +99,6 @@ public class BucketTest {
}
 
@Test
-   public void shouldCleanupResumableAfterRestoring() throws Exception {
-   final File outDir = TEMP_FOLDER.newFolder();
-   final Path path = new Path(outDir.toURI());
-
-   final TestRecoverableWriter recoverableWriter = 
getRecoverableWriter(path);
-   final Bucket bucketUnderTest =
-   createBucket(recoverableWriter, path, 0, 0);
-
-   bucketUnderTest.write("test-element", 0L);
-
-   final BucketState state = 
bucketUnderTest.onReceptionOfCheckpoint(0L);
-   assertThat(state, hasActiveInProgressFile());
-
-   bucketUnderTest.onSuccessfulCompletionOfCheckpoint(0L);
-
-   final TestRecoverableWriter newRecoverableWriter = 
getRecoverableWriter(path);
-   restoreBucket(newRecoverableWriter, 0, 1, state);
-
-   assertThat(newRecoverableWriter, hasCalledDiscard(1)); // that 
is for checkpoints 0 and 1
-   }
-
-   @Test
public void shouldNotCallCleanupWithoutInProgressPartFiles() throws 
Exception {
final File outDir = TEMP_FOLDER.newFolder();
final Path path = new Path(outDir.toURI());



[flink] branch release-1.8 updated: [FLINK-13941][fs-connector] Do not delete partial part files from S3 upon restore.

2019-09-03 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new 94ca735  [FLINK-13941][fs-connector] Do not delete partial part files 
from S3 upon restore.
94ca735 is described below

commit 94ca735b9d5a3861ccd7c5deb54c7ca6d67300c3
Author: Kostas Kloudas 
AuthorDate: Mon Sep 2 14:35:57 2019 +0200

[FLINK-13941][fs-connector] Do not delete partial part files from S3 upon 
restore.
---
 .../api/functions/sink/filesystem/Bucket.java  | 19 +++
 .../api/functions/sink/filesystem/BucketTest.java  | 22 --
 2 files changed, 11 insertions(+), 30 deletions(-)

diff --git 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
index 3252d9c..cc6726b 100644
--- 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
+++ 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java
@@ -146,10 +146,6 @@ public class Bucket {
 

fsWriter.recoverForCommit(resumable).commitAfterRecovery();
}
-
-   if (fsWriter.requiresCleanupOfRecoverableState()) {
-   fsWriter.cleanupRecoverableState(resumable);
-   }
}
 
private void commitRecoveredPendingFiles(final BucketState 
state) throws IOException {
@@ -312,12 +308,19 @@ public class Bucket {
 
while (it.hasNext()) {
final ResumeRecoverable recoverable = 
it.next().getValue();
-   final boolean successfullyDeleted = 
fsWriter.cleanupRecoverableState(recoverable);
-   it.remove();
 
-   if (LOG.isDebugEnabled() && successfullyDeleted) {
-   LOG.debug("Subtask {} successfully deleted 
incomplete part for bucket id={}.", subtaskIndex, bucketId);
+   // this check is redundant, as we only put entries in 
the resumablesPerCheckpoint map
+   // list when the requiresCleanupOfRecoverableState() 
returns true, but having it makes
+   // the code more readable.
+
+   if (fsWriter.requiresCleanupOfRecoverableState()) {
+   final boolean successfullyDeleted = 
fsWriter.cleanupRecoverableState(recoverable);
+
+   if (LOG.isDebugEnabled() && 
successfullyDeleted) {
+   LOG.debug("Subtask {} successfully 
deleted incomplete part for bucket id={}.", subtaskIndex, bucketId);
+   }
}
+   it.remove();
}
}
 
diff --git 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
index 308bc31..a2e4582 100644
--- 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
+++ 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketTest.java
@@ -99,28 +99,6 @@ public class BucketTest {
}
 
@Test
-   public void shouldCleanupResumableAfterRestoring() throws Exception {
-   final File outDir = TEMP_FOLDER.newFolder();
-   final Path path = new Path(outDir.toURI());
-
-   final TestRecoverableWriter recoverableWriter = 
getRecoverableWriter(path);
-   final Bucket bucketUnderTest =
-   createBucket(recoverableWriter, path, 0, 0);
-
-   bucketUnderTest.write("test-element", 0L);
-
-   final BucketState state = 
bucketUnderTest.onReceptionOfCheckpoint(0L);
-   assertThat(state, hasActiveInProgressFile());
-
-   bucketUnderTest.onSuccessfulCompletionOfCheckpoint(0L);
-
-   final TestRecoverableWriter newRecoverableWriter = 
getRecoverableWriter(path);
-   restoreBucket(newRecoverableWriter, 0, 1, state);
-
-   assertThat(newRecoverableWriter, hasCalledDiscard(1)); // that 
is for checkpoints 0 and 1
-   }
-
-   @Test
public void shouldNotCallCleanupWithoutInProgressPartFiles() throws 
Exception {
final File outDir = TEMP_FOLDER.newFolder();
final Path path = new Path(outDir.toURI());



[flink] branch release-1.9 updated: [FLINK-13363][docs] Add documentation for streaming aggregate performance tuning

2019-09-03 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 8a4fd5f  [FLINK-13363][docs] Add documentation for streaming aggregate 
performance tuning
8a4fd5f is described below

commit 8a4fd5fd81b5395f5e9a661090ead73a8a7bb684
Author: Jark Wu 
AuthorDate: Sat Aug 24 10:01:17 2019 +0800

[FLINK-13363][docs] Add documentation for streaming aggregate performance 
tuning

This closes #9525
---
 docs/dev/table/tuning/index.md |  25 ++
 docs/dev/table/tuning/index.zh.md  |  25 ++
 .../tuning/streaming_aggregation_optimization.md   | 271 +
 .../streaming_aggregation_optimization.zh.md   | 271 +
 docs/fig/table-streaming/distinct_split.png| Bin 0 -> 360758 bytes
 docs/fig/table-streaming/local_agg.png | Bin 0 -> 464216 bytes
 docs/fig/table-streaming/minibatch_agg.png | Bin 0 -> 81573 bytes
 7 files changed, 592 insertions(+)

diff --git a/docs/dev/table/tuning/index.md b/docs/dev/table/tuning/index.md
new file mode 100644
index 000..7498c1e
--- /dev/null
+++ b/docs/dev/table/tuning/index.md
@@ -0,0 +1,25 @@
+---
+title: "Performance Tuning"
+nav-id: tableapi_performance_tuning
+nav-parent_id: tableapi
+nav-pos: 160
+nav-show_overview: false
+---
+
diff --git a/docs/dev/table/tuning/index.zh.md 
b/docs/dev/table/tuning/index.zh.md
new file mode 100644
index 000..7498c1e
--- /dev/null
+++ b/docs/dev/table/tuning/index.zh.md
@@ -0,0 +1,25 @@
+---
+title: "Performance Tuning"
+nav-id: tableapi_performance_tuning
+nav-parent_id: tableapi
+nav-pos: 160
+nav-show_overview: false
+---
+
diff --git a/docs/dev/table/tuning/streaming_aggregation_optimization.md 
b/docs/dev/table/tuning/streaming_aggregation_optimization.md
new file mode 100644
index 000..35a5eff
--- /dev/null
+++ b/docs/dev/table/tuning/streaming_aggregation_optimization.md
@@ -0,0 +1,271 @@
+---
+title: "Streaming Aggregation"
+nav-parent_id: tableapi_performance_tuning
+nav-pos: 10
+---
+
+
+SQL is the most widely used language for data analytics. Flink's Table API and SQL enable users to define efficient stream analytics applications with less time and effort. Moreover, the Table API and SQL are effectively optimized: they integrate many query optimizations and tuned operator implementations. However, not all optimizations are enabled by default, so for some workloads it is possible to improve performance by turning on certain options.
+
+On this page, we introduce some useful optimization options and the internals of streaming aggregation that can bring great improvements in some cases.
+
+Attention Currently, the optimization options mentioned on this page are only supported in the Blink planner.
+
+Attention Currently, the streaming aggregation optimizations are only supported for [unbounded-aggregations]({{ site.baseurl }}/dev/table/sql.html#aggregations). Optimizations for [window aggregations]({{ site.baseurl }}/dev/table/sql.html#group-windows) will be supported in the future.
+
+* This will be replaced by the TOC
+{:toc}
+
+By default, the unbounded aggregation operator processes input records one by one, i.e., (1) read the accumulator from state, (2) accumulate/retract the record into the accumulator, (3) write the accumulator back to state, (4) repeat from (1) for the next record. This processing pattern can increase the overhead of the StateBackend (especially for the RocksDB StateBackend).
+Besides, data skew, which is very common in production, worsens the problem and makes jobs prone to backpressure.
+
+## MiniBatch Aggregation
+
+The core idea of mini-batch aggregation is to cache a bundle of inputs in a buffer inside the aggregation operator. When the bundle is triggered for processing, only one state access operation per key is needed. This can significantly reduce the state overhead and improve throughput. However, it may add some latency because records are buffered instead of being processed instantly. This is a trade-off between throughput and latency.
+
+The following figure explains how the mini-batch aggregation reduces state 
operations.
+
+
+  
+
+
+MiniBatch optimization is disabled by default. To enable it, set the options `table.exec.mini-batch.enabled`, `table.exec.mini-batch.allow-latency`, and `table.exec.mini-batch.size`. Please see the [configuration]({{ site.baseurl }}/dev/table/config.html#execution-options) page for more details.
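For reference, a minimal sketch of turning the three options on through the table configuration; the class name and the latency/size values are illustrative assumptions:

{% highlight java %}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MiniBatchConfigExample {
	public static void main(String[] args) {
		// create a table environment with the Blink planner in streaming mode
		EnvironmentSettings settings = EnvironmentSettings.newInstance()
			.useBlinkPlanner()
			.inStreamingMode()
			.build();
		TableEnvironment tEnv = TableEnvironment.create(settings);

		// low-level key-value options controlling mini-batch aggregation
		Configuration conf = tEnv.getConfig().getConfiguration();
		conf.setString("table.exec.mini-batch.enabled", "true");       // turn mini-batch buffering on
		conf.setString("table.exec.mini-batch.allow-latency", "5 s");  // flush a bundle at least every 5 seconds
		conf.setString("table.exec.mini-batch.size", "5000");          // or as soon as 5000 records are buffered
	}
}
{% endhighlight %}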
+
+The following examples show how to enable these options.
+
+
+
+{% highlight java %}
+// instantiate table environment
+TableEnvironment tEnv = ...
+
+tEnv.getConfig()// access high-level configuration
+  .getConfiguration()   // set low-level 

[flink] branch master updated: [FLINK-13363][docs] Add documentation for streaming aggregate performance tuning

2019-09-03 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new d82b5be  [FLINK-13363][docs] Add documentation for streaming aggregate 
performance tuning
d82b5be is described below

commit d82b5be2681eb297164700ad25b5017bfd739864
Author: Jark Wu 
AuthorDate: Sat Aug 24 10:01:17 2019 +0800

[FLINK-13363][docs] Add documentation for streaming aggregate performance 
tuning

This closes #9525
---
 docs/dev/table/tuning/index.md |  25 ++
 docs/dev/table/tuning/index.zh.md  |  25 ++
 .../tuning/streaming_aggregation_optimization.md   | 271 +
 .../streaming_aggregation_optimization.zh.md   | 271 +
 docs/fig/table-streaming/distinct_split.png| Bin 0 -> 360758 bytes
 docs/fig/table-streaming/local_agg.png | Bin 0 -> 464216 bytes
 docs/fig/table-streaming/minibatch_agg.png | Bin 0 -> 81573 bytes
 7 files changed, 592 insertions(+)

diff --git a/docs/dev/table/tuning/index.md b/docs/dev/table/tuning/index.md
new file mode 100644
index 000..7498c1e
--- /dev/null
+++ b/docs/dev/table/tuning/index.md
@@ -0,0 +1,25 @@
+---
+title: "Performance Tuning"
+nav-id: tableapi_performance_tuning
+nav-parent_id: tableapi
+nav-pos: 160
+nav-show_overview: false
+---
+
diff --git a/docs/dev/table/tuning/index.zh.md 
b/docs/dev/table/tuning/index.zh.md
new file mode 100644
index 000..7498c1e
--- /dev/null
+++ b/docs/dev/table/tuning/index.zh.md
@@ -0,0 +1,25 @@
+---
+title: "Performance Tuning"
+nav-id: tableapi_performance_tuning
+nav-parent_id: tableapi
+nav-pos: 160
+nav-show_overview: false
+---
+
diff --git a/docs/dev/table/tuning/streaming_aggregation_optimization.md 
b/docs/dev/table/tuning/streaming_aggregation_optimization.md
new file mode 100644
index 000..35a5eff
--- /dev/null
+++ b/docs/dev/table/tuning/streaming_aggregation_optimization.md
@@ -0,0 +1,271 @@
+---
+title: "Streaming Aggregation"
+nav-parent_id: tableapi_performance_tuning
+nav-pos: 10
+---
+
+
+SQL is the most widely used language for data analytics. Flink's Table API and SQL enable users to define efficient stream analytics applications with less time and effort. Moreover, the Table API and SQL are effectively optimized: they integrate many query optimizations and tuned operator implementations. However, not all optimizations are enabled by default, so for some workloads it is possible to improve performance by turning on certain options.
+
+On this page, we introduce some useful optimization options and the internals of streaming aggregation that can bring great improvements in some cases.
+
+Attention Currently, the optimization options mentioned on this page are only supported in the Blink planner.
+
+Attention Currently, the streaming aggregation optimizations are only supported for [unbounded-aggregations]({{ site.baseurl }}/dev/table/sql.html#aggregations). Optimizations for [window aggregations]({{ site.baseurl }}/dev/table/sql.html#group-windows) will be supported in the future.
+
+* This will be replaced by the TOC
+{:toc}
+
+By default, the unbounded aggregation operator processes input records one by one, i.e., (1) read the accumulator from state, (2) accumulate/retract the record into the accumulator, (3) write the accumulator back to state, (4) repeat from (1) for the next record. This processing pattern can increase the overhead of the StateBackend (especially for the RocksDB StateBackend).
+Besides, data skew, which is very common in production, worsens the problem and makes jobs prone to backpressure.
+
+## MiniBatch Aggregation
+
+The core idea of mini-batch aggregation is to cache a bundle of inputs in a buffer inside the aggregation operator. When the bundle is triggered for processing, only one state access operation per key is needed. This can significantly reduce the state overhead and improve throughput. However, it may add some latency because records are buffered instead of being processed instantly. This is a trade-off between throughput and latency.
+
+The following figure explains how the mini-batch aggregation reduces state 
operations.
+
+
+  
+
+
+MiniBatch optimization is disabled by default. To enable it, set the options `table.exec.mini-batch.enabled`, `table.exec.mini-batch.allow-latency`, and `table.exec.mini-batch.size`. Please see the [configuration]({{ site.baseurl }}/dev/table/config.html#execution-options) page for more details.
+
+The following examples show how to enable these options.
+
+
+
+{% highlight java %}
+// instantiate table environment
+TableEnvironment tEnv = ...
+
+tEnv.getConfig()// access high-level configuration
+  .getConfiguration()   // set low-level key-value 

[flink] branch master updated (ec4c0c3 -> 4ce5e0f)

2019-09-03 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ec4c0c3  [FLINK-13356][table][docs] Add documentation for TopN and 
Deduplication in blink planner
 add 4ce5e0f  [FLINK-13912][client] Remove 
ClusterClient#getClusterConnectionInfo

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/client/program/ClusterClient.java | 19 -
 .../apache/flink/client/cli/DefaultCLITest.java| 20 ++
 .../org/apache/flink/api/scala/FlinkShell.scala| 15 +-
 .../yarn/YARNSessionCapacitySchedulerITCase.java   |  8 
 .../apache/flink/yarn/YARNSessionFIFOITCase.java   |  2 +-
 .../flink/yarn/YarnPrioritySchedulingITCase.java   |  2 +-
 .../flink/yarn/AbstractYarnClusterDescriptor.java  | 24 ++
 .../apache/flink/yarn/cli/FlinkYarnSessionCli.java |  5 -
 8 files changed, 33 insertions(+), 62 deletions(-)



[flink] branch release-1.9 updated: [FLINK-13356][table][docs] Add documentation for TopN and Deduplication in blink planner

2019-09-03 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 1bb602b  [FLINK-13356][table][docs] Add documentation for TopN and 
Deduplication in blink planner
1bb602b is described below

commit 1bb602b987908aacdc1e025f8255db9f0479c913
Author: Jark Wu 
AuthorDate: Thu Aug 22 20:00:28 2019 +0800

[FLINK-13356][table][docs] Add documentation for TopN and Deduplication in 
blink planner

This closes #9511
---
 docs/dev/table/sql.md| 223 ++-
 docs/dev/table/sql.zh.md | 221 ++
 2 files changed, 443 insertions(+), 1 deletion(-)

diff --git a/docs/dev/table/sql.md b/docs/dev/table/sql.md
index 0cfbdda..c3d0cd3 100644
--- a/docs/dev/table/sql.md
+++ b/docs/dev/table/sql.md
@@ -24,7 +24,7 @@ under the License.
 
 This is a complete list of Data Definition Language (DDL) and Data 
Manipulation Language (DML) constructs supported in Flink.
 * This will be replaced by the TOC
-{:toc} 
+{:toc}
 
 ## Query
 SQL queries are specified with the `sqlQuery()` method of the 
`TableEnvironment`. The method returns the result of the SQL query as a 
`Table`. A `Table` can be used in [subsequent SQL and Table API 
queries](common.html#mixing-table-api-and-sql), be [converted into a DataSet or 
DataStream](common.html#integration-with-datastream-and-dataset-api), or 
[written to a TableSink](common.html#emit-a-table)). SQL and Table API queries 
can be seamlessly mixed and are holistically optimized and tra [...]
@@ -835,6 +835,227 @@ LIMIT 3
 
 {% top %}
 
+ Top-N
+
+Attention Top-N is only supported in 
Blink planner.
+
+Top-N queries ask for the N smallest or largest values ordered by columns. Both smallest- and largest-value sets are considered Top-N queries. Top-N queries are useful when only the N bottom-most or the N top-most records of a batch/streaming table, selected on a condition, need to be displayed. This result set can be used for further analysis.
+
+Flink uses the combination of an OVER window clause and a filter condition to express a Top-N query. With the power of the OVER window `PARTITION BY` clause, Flink also supports per-group Top-N, for example, the top five products per category that have the maximum sales in real time. Top-N queries are supported for SQL on batch and streaming tables.
+
+The following shows the syntax of the TOP-N statement:
+
+{% highlight sql %}
+SELECT [column_list]
+FROM (
+   SELECT [column_list],
+ ROW_NUMBER() OVER ([PARTITION BY col1[, col2...]]
+   ORDER BY col1 [asc|desc][, col2 [asc|desc]...]) AS rownum
+   FROM table_name)
+WHERE rownum <= N [AND conditions]
+{% endhighlight %}
+
+**Parameter Specification:**
+
+- `ROW_NUMBER()`: Assigns a unique, sequential number to each row, starting with one, according to the ordering of rows within the partition. Currently, we only support `ROW_NUMBER` as the over window function. In the future, we will support `RANK()` and `DENSE_RANK()`.
+- `PARTITION BY col1[, col2...]`: Specifies the partition columns. Each partition will have a Top-N result.
+- `ORDER BY col1 [asc|desc][, col2 [asc|desc]...]`: Specifies the ordering columns. The ordering directions can be different on different columns.
+- `WHERE rownum <= N`: The `rownum <= N` condition is required for Flink to recognize that the query is a Top-N query. N represents the number of smallest or largest records that will be retained.
+- `[AND conditions]`: Other conditions can be added freely in the WHERE clause, but they can only be combined with `rownum <= N` using the `AND` conjunction.
+
+Attention in Streaming Mode The TopN query is Result Updating. Flink SQL will sort the input data stream according to the order key, so if the top N records change, the changed records will be sent downstream as retraction/update records.
+It is recommended to use a storage system that supports updating as the sink of a Top-N query. In addition, if the top N records need to be stored in external storage, the result table should have the same unique key as the Top-N query.
+
+The unique key of a Top-N query is the combination of the partition columns and the rownum column. A Top-N query can also derive the unique key of its upstream. Take the following job as an example: say `product_id` is the unique key of `ShopSales`; then the unique keys of the Top-N query are [`category`, `rownum`] and [`product_id`].
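As a compact sketch of that job, assuming `ShopSales` has columns `product_id`, `category`, and `sales`, and with its registration as a table omitted:

{% highlight java %}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
// ... register ShopSales (product_id, category, sales) as a table here ...

// top 5 products per category, ordered by sales
Table topFive = tableEnv.sqlQuery(
	"SELECT product_id, category, sales, row_num " +
	"FROM (" +
	"  SELECT *, " +
	"    ROW_NUMBER() OVER (PARTITION BY category ORDER BY sales DESC) AS row_num " +
	"  FROM ShopSales) " +
	"WHERE row_num <= 5");
{% endhighlight %}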
+
+The following examples show how to specify SQL queries with Top-N on streaming tables. This is an example that retrieves "the top five products per category that have the maximum sales in real time", as mentioned above.
+
+
+
+{% highlight java %}
+StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+StreamTableEnvironment tableEnv = 

[flink] branch master updated: [FLINK-13356][table][docs] Add documentation for TopN and Deduplication in blink planner

2019-09-03 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new ec4c0c3  [FLINK-13356][table][docs] Add documentation for TopN and 
Deduplication in blink planner
ec4c0c3 is described below

commit ec4c0c35a55253ee4776c9359fc682d2feae
Author: Jark Wu 
AuthorDate: Thu Aug 22 20:00:28 2019 +0800

[FLINK-13356][table][docs] Add documentation for TopN and Deduplication in 
blink planner

This closes #9511
---
 docs/dev/table/sql.md| 223 ++-
 docs/dev/table/sql.zh.md | 221 ++
 2 files changed, 443 insertions(+), 1 deletion(-)

diff --git a/docs/dev/table/sql.md b/docs/dev/table/sql.md
index 0cfbdda..c3d0cd3 100644
--- a/docs/dev/table/sql.md
+++ b/docs/dev/table/sql.md
@@ -24,7 +24,7 @@ under the License.
 
 This is a complete list of Data Definition Language (DDL) and Data 
Manipulation Language (DML) constructs supported in Flink.
 * This will be replaced by the TOC
-{:toc} 
+{:toc}
 
 ## Query
 SQL queries are specified with the `sqlQuery()` method of the 
`TableEnvironment`. The method returns the result of the SQL query as a 
`Table`. A `Table` can be used in [subsequent SQL and Table API 
queries](common.html#mixing-table-api-and-sql), be [converted into a DataSet or 
DataStream](common.html#integration-with-datastream-and-dataset-api), or 
[written to a TableSink](common.html#emit-a-table)). SQL and Table API queries 
can be seamlessly mixed and are holistically optimized and tra [...]
@@ -835,6 +835,227 @@ LIMIT 3
 
 {% top %}
 
+ Top-N
+
+Attention Top-N is only supported in 
Blink planner.
+
+Top-N queries ask for the N smallest or largest values ordered by columns. Both smallest- and largest-value sets are considered Top-N queries. Top-N queries are useful when only the N bottom-most or the N top-most records of a batch/streaming table, selected on a condition, need to be displayed. This result set can be used for further analysis.
+
+Flink uses the combination of an OVER window clause and a filter condition to express a Top-N query. With the power of the OVER window `PARTITION BY` clause, Flink also supports per-group Top-N, for example, the top five products per category that have the maximum sales in real time. Top-N queries are supported for SQL on batch and streaming tables.
+
+The following shows the syntax of the TOP-N statement:
+
+{% highlight sql %}
+SELECT [column_list]
+FROM (
+   SELECT [column_list],
+ ROW_NUMBER() OVER ([PARTITION BY col1[, col2...]]
+   ORDER BY col1 [asc|desc][, col2 [asc|desc]...]) AS rownum
+   FROM table_name)
+WHERE rownum <= N [AND conditions]
+{% endhighlight %}
+
+**Parameter Specification:**
+
+- `ROW_NUMBER()`: Assigns a unique, sequential number to each row, starting with one, according to the ordering of rows within the partition. Currently, we only support `ROW_NUMBER` as the over window function. In the future, we will support `RANK()` and `DENSE_RANK()`.
+- `PARTITION BY col1[, col2...]`: Specifies the partition columns. Each partition will have a Top-N result.
+- `ORDER BY col1 [asc|desc][, col2 [asc|desc]...]`: Specifies the ordering columns. The ordering directions can be different on different columns.
+- `WHERE rownum <= N`: The `rownum <= N` condition is required for Flink to recognize that the query is a Top-N query. N represents the number of smallest or largest records that will be retained.
+- `[AND conditions]`: Other conditions can be added freely in the WHERE clause, but they can only be combined with `rownum <= N` using the `AND` conjunction.
+
+Attention in Streaming Mode The TopN query is Result Updating. Flink SQL will sort the input data stream according to the order key, so if the top N records change, the changed records will be sent downstream as retraction/update records.
+It is recommended to use a storage system that supports updating as the sink of a Top-N query. In addition, if the top N records need to be stored in external storage, the result table should have the same unique key as the Top-N query.
+
+The unique key of a Top-N query is the combination of the partition columns and the rownum column. A Top-N query can also derive the unique key of its upstream. Take the following job as an example: say `product_id` is the unique key of `ShopSales`; then the unique keys of the Top-N query are [`category`, `rownum`] and [`product_id`].
+
+The following examples show how to specify SQL queries with Top-N on streaming tables. This is an example that retrieves "the top five products per category that have the maximum sales in real time", as mentioned above.
+
+
+
+{% highlight java %}
+StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+StreamTableEnvironment tableEnv = 

[flink] branch release-1.9 updated: [FLINK-13355][docs] Add documentation for Temporal Table Join in blink planner

2019-09-03 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 830742a  [FLINK-13355][docs] Add documentation for Temporal Table Join 
in blink planner
830742a is described below

commit 830742a4b1a88c6ea569689c8dd2bd1f0f97a6d9
Author: Jark Wu 
AuthorDate: Wed Aug 28 00:11:09 2019 +0800

[FLINK-13355][docs] Add documentation for Temporal Table Join in blink 
planner

This closes #9545
---
 docs/dev/table/sourceSinks.md  |  42 
 docs/dev/table/sourceSinks.zh.md   |  42 
 docs/dev/table/sql.md  |  27 -
 docs/dev/table/sql.zh.md   |  27 -
 docs/dev/table/streaming/joins.md  | 133 -
 docs/dev/table/streaming/joins.zh.md   | 133 -
 docs/dev/table/streaming/temporal_tables.md| 113 -
 docs/dev/table/streaming/temporal_tables.zh.md | 113 -
 docs/dev/table/tableApi.md |  12 +--
 docs/dev/table/tableApi.zh.md  |  12 +--
 10 files changed, 626 insertions(+), 28 deletions(-)

diff --git a/docs/dev/table/sourceSinks.md b/docs/dev/table/sourceSinks.md
index b27d42d..12b4ca7 100644
--- a/docs/dev/table/sourceSinks.md
+++ b/docs/dev/table/sourceSinks.md
@@ -324,6 +324,48 @@ FilterableTableSource[T] {
 
 {% top %}
 
+### Defining a TableSource for Lookups
+
+Attention This is an experimental feature. The interface may change in future versions. It is only supported in the Blink planner.
+
+The `LookupableTableSource` interface adds support for accessing the table via key column(s) in a lookup fashion. This is very useful when joining with a dimension table to enrich information. If you want to use the `TableSource` in lookup mode, you should use the source in the [temporal table join syntax](streaming/joins.html).
+
+The interface looks as follows:
+
+
+
+{% highlight java %}
+LookupableTableSource implements TableSource {
+
+  public TableFunction getLookupFunction(String[] lookupkeys);
+
+  public AsyncTableFunction getAsyncLookupFunction(String[] lookupkeys);
+
+  public boolean isAsyncEnabled();
+}
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+LookupableTableSource[T] extends TableSource[T] {
+
+  def getLookupFunction(lookupKeys: Array[String]): TableFunction[T]
+
+  def getAsyncLookupFunction(lookupKeys: Array[String]): AsyncTableFunction[T]
+
+  def isAsyncEnabled: Boolean
+}
+{% endhighlight %}
+
+
+
+* `getLookupFunction(lookupkeys)`: Returns a `TableFunction` that is used to look up the matched row(s) via the lookup keys. The lookupkeys are the field names of the `LookupableTableSource` in the join equality conditions. The eval method parameters of the returned `TableFunction` should be in the order in which the `lookupkeys` are defined. It is recommended to define the parameters as varargs (e.g. `eval(Object... lookupkeys)` to match all the cases). The return type of the `TableFunction` must be identical [...]
+* `getAsyncLookupFunction(lookupkeys)`: Optional. Similar to `getLookupFunction`, but the `AsyncLookupFunction` looks up the matched row(s) asynchronously. Under the hood, the `AsyncLookupFunction` will be called via [Async I/O](/dev/stream/operators/asyncio.html). The first argument of the eval method of the returned `AsyncTableFunction` should be defined as `java.util.concurrent.CompletableFuture` to collect results asynchronously (e.g. `eval(CompletableFuture> result,  [...]
+* `isAsyncEnabled()`: Returns true if async lookup is enabled. `getAsyncLookupFunction(lookupkeys)` is required to be implemented if `isAsyncEnabled` returns true.
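As a rough sketch of the synchronous side, a lookup function that a `getLookupFunction` implementation could return; the class name is hypothetical and the in-memory map stands in for a real dimension store:

{% highlight java %}
import java.util.Map;

import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

// Emits the dimension row matching the single lookup key, if any.
public class InMemoryLookupFunction extends TableFunction<Row> {

	private final Map<Object, Row> dimension;

	public InMemoryLookupFunction(Map<Object, Row> dimension) {
		this.dimension = dimension;
	}

	// the eval parameters follow the order of the lookup keys, as described above
	public void eval(Object lookupKey) {
		Row match = dimension.get(lookupKey);
		if (match != null) {
			collect(match);
		}
	}
}
{% endhighlight %}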
+
+{% top %}
+
 Define a TableSink
 --
 
diff --git a/docs/dev/table/sourceSinks.zh.md b/docs/dev/table/sourceSinks.zh.md
index b34967f..e37e7be 100644
--- a/docs/dev/table/sourceSinks.zh.md
+++ b/docs/dev/table/sourceSinks.zh.md
@@ -324,6 +324,48 @@ FilterableTableSource[T] {
 
 {% top %}
 
+### Defining a TableSource for Lookups
+
+Attention This is an experimental feature. The interface may change in future versions. It is only supported in the Blink planner.
+
+The `LookupableTableSource` interface adds support for accessing the table via key column(s) in a lookup fashion. This is very useful when joining with a dimension table to enrich information. If you want to use the `TableSource` in lookup mode, you should use the source in the [temporal table join syntax](streaming/joins.html).
+
+The interface looks as follows:
+
+
+
+{% highlight java %}
+LookupableTableSource implements TableSource {
+
+  public TableFunction getLookupFunction(String[] lookupkeys);
+
+  public AsyncTableFunction getAsyncLookupFunction(String[] lookupkeys);
+
+  public boolean isAsyncEnabled();
+}
+{% 

[flink] branch master updated: [FLINK-13355][docs] Add documentation for Temporal Table Join in blink planner

2019-09-03 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new ca2e3e3  [FLINK-13355][docs] Add documentation for Temporal Table Join 
in blink planner
ca2e3e3 is described below

commit ca2e3e39e99e392514936cd2998dac22e719f19e
Author: Jark Wu 
AuthorDate: Wed Aug 28 00:11:09 2019 +0800

[FLINK-13355][docs] Add documentation for Temporal Table Join in blink 
planner

This closes #9545
---
 docs/dev/table/sourceSinks.md  |  42 
 docs/dev/table/sourceSinks.zh.md   |  42 
 docs/dev/table/sql.md  |  27 -
 docs/dev/table/sql.zh.md   |  29 +-
 docs/dev/table/streaming/joins.md  | 133 -
 docs/dev/table/streaming/joins.zh.md   | 133 -
 docs/dev/table/streaming/temporal_tables.md| 113 -
 docs/dev/table/streaming/temporal_tables.zh.md | 113 -
 docs/dev/table/tableApi.md |  12 +--
 docs/dev/table/tableApi.zh.md  |  12 +--
 10 files changed, 627 insertions(+), 29 deletions(-)

diff --git a/docs/dev/table/sourceSinks.md b/docs/dev/table/sourceSinks.md
index b27d42d..12b4ca7 100644
--- a/docs/dev/table/sourceSinks.md
+++ b/docs/dev/table/sourceSinks.md
@@ -324,6 +324,48 @@ FilterableTableSource[T] {
 
 {% top %}
 
+### Defining a TableSource for Lookups
+
+Attention This is an experimental feature. The interface may change in future versions. It is only supported in the Blink planner.
+
+The `LookupableTableSource` interface adds support for accessing the table via key column(s) in a lookup fashion. This is very useful when joining with a dimension table to enrich information. If you want to use the `TableSource` in lookup mode, you should use the source in the [temporal table join syntax](streaming/joins.html).
+
+The interface looks as follows:
+
+
+
+{% highlight java %}
+LookupableTableSource<T> implements TableSource<T> {
+
+  public TableFunction<T> getLookupFunction(String[] lookupkeys);
+
+  public AsyncTableFunction<T> getAsyncLookupFunction(String[] lookupkeys);
+
+  public boolean isAsyncEnabled();
+}
+{% endhighlight %}
+
+
+
+{% highlight scala %}
+LookupableTableSource[T] extends TableSource[T] {
+
+  def getLookupFunction(lookupKeys: Array[String]): TableFunction[T]
+
+  def getAsyncLookupFunction(lookupKeys: Array[String]): AsyncTableFunction[T]
+
+  def isAsyncEnabled: Boolean
+}
+{% endhighlight %}
+
+
+
+* `getLookupFunction(lookupkeys)`: Returns a `TableFunction` which is used to 
look up the matched row(s) via the lookup keys. The `lookupkeys` are the field 
names of the `LookupableTableSource` that appear in the join equality 
conditions. The parameters of the returned `TableFunction`'s eval method should 
be in the order in which the `lookupkeys` are defined. It is recommended to 
define the parameters as varargs (e.g. `eval(Object... lookupkeys)`) to match 
all cases. The return type of 
the `TableFunction` must be identical [...]
+* `getAsyncLookupFunction(lookupkeys)`: Optional. Similar to 
`getLookupFunction`, but the returned `AsyncTableFunction` looks up the matched 
row(s) asynchronously. Under the hood, the `AsyncTableFunction` is invoked via 
[Async I/O](/dev/stream/operators/asyncio.html). The first argument of the eval 
method of the returned `AsyncTableFunction` should be defined as a 
`java.util.concurrent.CompletableFuture` to collect results asynchronously 
(e.g. `eval(CompletableFuture> result,  [...]
+* `isAsyncEnabled()`: Returns true if async lookup is enabled. If 
`isAsyncEnabled` returns true, `getAsyncLookupFunction(lookupkeys)` must be 
implemented.
+
+{% top %}
+
 Define a TableSink
 --
 
diff --git a/docs/dev/table/sourceSinks.zh.md b/docs/dev/table/sourceSinks.zh.md
index b34967f..e37e7be 100644
--- a/docs/dev/table/sourceSinks.zh.md
+++ b/docs/dev/table/sourceSinks.zh.md
@@ -324,6 +324,48 @@ FilterableTableSource[T] {
 
 {% top %}
 
+### Defining a TableSource for Lookups
+
+Attention This is an experimental 
feature. The interface may be changed in future versions. It is only supported 
in the Blink planner.
+
+The `LookupableTableSource` interface adds support for accessing the table via 
key column(s) in a lookup fashion. This is very useful when joining with a 
dimension table to enrich information. If you want to use the `TableSource` in 
lookup mode, you should use the source in the [temporal table 
join syntax](streaming/joins.html).
+
+The interface looks as follows:
+
+
+
+{% highlight java %}
+LookupableTableSource<T> implements TableSource<T> {
+
+  public TableFunction<T> getLookupFunction(String[] lookupkeys);
+
+  public AsyncTableFunction<T> getAsyncLookupFunction(String[] lookupkeys);
+
+  public boolean isAsyncEnabled();
+}
+{% endhighlight %}
+
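To make the `getLookupFunction` contract described in the diff above a bit more 
concrete, here is a minimal, hypothetical sketch of a lookup `TableFunction` 
that such a source could return. The class name, the in-memory map backing the 
lookup, and the single `Long` key are illustrative assumptions only and are not 
part of the commit above.

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

// Hypothetical lookup function: resolves a user id to an (id, name) row from
// an in-memory map. A real implementation would usually query an external
// system (e.g. a key/value store) instead.
public class InMemoryUserLookupFunction extends TableFunction<Row> {

    private transient Map<Long, String> users;

    @Override
    public void open(FunctionContext context) {
        // A real lookup source would open its connection here.
        users = new HashMap<>();
        users.put(1L, "alice");
        users.put(2L, "bob");
    }

    // The eval parameters must be in the order of the lookup keys that appear
    // in the join's equality conditions.
    public void eval(Long userId) {
        String name = users.get(userId);
        if (name != null) {
            collect(Row.of(userId, name)); // emit the matched row
        }
        // Emitting nothing means "no match" for this key.
    }
}

A `LookupableTableSource` would hand out an instance of such a function from 
`getLookupFunction(lookupkeys)` and, if it offers no asynchronous variant, 
return `false` from `isAsyncEnabled()`.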

[flink-web] 04/04: Rebuild website

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit b9e62de724f5c0cef8053f73d4108dc8199ac89e
Author: Chesnay Schepler 
AuthorDate: Mon Sep 2 11:07:09 2019 +0200

Rebuild website
---
 content/downloads.html| 74 +++
 content/zh/downloads.html | 74 +++
 2 files changed, 74 insertions(+), 74 deletions(-)

diff --git a/content/downloads.html b/content/downloads.html
index 08120bd..edd0747 100644
--- a/content/downloads.html
+++ b/content/downloads.html
@@ -461,7 +461,7 @@ Flink 1.9.0 - 2019-08-22
 https://archive.apache.org/dist/flink/flink-1.9.0;>Binaries, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.9;>Docs, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/java;>Javadocs,
 
-https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/scala/index.html;>ScalaDocs)
+https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/scala/index.html;>Scaladocs)
 
 
 
@@ -472,7 +472,7 @@ Flink 1.8.1 - 2019-07-02
 https://archive.apache.org/dist/flink/flink-1.8.1;>Binaries, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.8;>Docs, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java;>Javadocs,
 
-https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/scala/index.html;>ScalaDocs)
+https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/scala/index.html;>Scaladocs)
 
 
 
@@ -483,7 +483,7 @@ Flink 1.8.0 - 2019-04-09
 https://archive.apache.org/dist/flink/flink-1.8.0;>Binaries, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.8;>Docs, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java;>Javadocs,
 
-https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/scala/index.html;>ScalaDocs)
+https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/scala/index.html;>Scaladocs)
 
 
 
@@ -494,7 +494,7 @@ Flink 1.7.2 - 2019-02-15
 https://archive.apache.org/dist/flink/flink-1.7.2;>Binaries, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.7;>Docs, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/java;>Javadocs,
 
-https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/scala/index.html;>ScalaDocs)
+https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/scala/index.html;>Scaladocs)
 
 
 
@@ -505,7 +505,7 @@ Flink 1.7.1 - 2018-12-21
 https://archive.apache.org/dist/flink/flink-1.7.1;>Binaries, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.7;>Docs, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/java;>Javadocs,
 
-https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/scala/index.html;>ScalaDocs)
+https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/scala/index.html;>Scaladocs)
 
 
 
@@ -516,7 +516,7 @@ Flink 1.7.0 - 2018-11-30
 https://archive.apache.org/dist/flink/flink-1.7.0;>Binaries, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.7;>Docs, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/java;>Javadocs,
 
-https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/scala/index.html;>ScalaDocs)
+https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/scala/index.html;>Scaladocs)
 
 
 
@@ -527,7 +527,7 @@ Flink 1.6.4 - 2019-02-25
 https://archive.apache.org/dist/flink/flink-1.6.4;>Binaries, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.6;>Docs, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java;>Javadocs,
 
-https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/scala/index.html;>ScalaDocs)
+https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/scala/index.html;>Scaladocs)
 
 
 
@@ -538,7 +538,7 @@ Flink 1.6.3 - 2018-12-22
 https://archive.apache.org/dist/flink/flink-1.6.3;>Binaries, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.6;>Docs, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java;>Javadocs,
 
-https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/scala/index.html;>ScalaDocs)
+https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/scala/index.html;>Scaladocs)
 
 
 
@@ -549,7 +549,7 @@ Flink 1.6.2 - 2018-10-29
 https://archive.apache.org/dist/flink/flink-1.6.2;>Binaries, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.6;>Docs, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java;>Javadocs,
 
-https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/scala/index.html;>ScalaDocs)
+https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/scala/index.html;>Scaladocs)
 
 
 
@@ -560,7 +560,7 @@ Flink 1.6.1 - 2018-09-19
 https://archive.apache.org/dist/flink/flink-1.6.1;>Binaries, 
 https://ci.apache.org/projects/flink/flink-docs-release-1.6;>Docs, 
 

[flink-web] 01/04: [FLINK-13920] Move release list into _config.yml

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 1985e613b44fe8a8a51434afedb2fdd2bae86fe6
Author: Chesnay Schepler 
AuthorDate: Fri Aug 30 14:01:23 2019 +0200

[FLINK-13920] Move release list into _config.yml
---
 _config.yml | 209 
 downloads.md|  82 +++---
 downloads.zh.md |  79 +++--
 3 files changed, 259 insertions(+), 111 deletions(-)

diff --git a/_config.yml b/_config.yml
index d7dc5c4..9ebdb00 100644
--- a/_config.yml
+++ b/_config.yml
@@ -342,6 +342,215 @@ component_releases:
   asc_url: 
"https://www.apache.org/dist/flink/flink-shaded-8.0/flink-shaded-8.0-src.tgz.asc;
   sha512_url: 
"https://www.apache.org/dist/flink/flink-shaded-8.0/flink-shaded-8.0-src.tgz.sha512;
 
+release_archive:
+flink:
+  -
+version_short: 1.9
+version_long: 1.9.0
+release_date: 2019-08-22
+  -
+version_short: 1.8
+version_long: 1.8.1
+release_date: 2019-07-02
+  -
+version_short: 1.8
+version_long: 1.8.0
+release_date: 2019-04-09
+  -
+version_short: 1.7
+version_long: 1.7.2
+release_date: 2019-02-15
+  -
+version_short: 1.7
+version_long: 1.7.1
+release_date: 2018-12-21
+  -
+version_short: 1.7
+version_long: 1.7.0
+release_date: 2018-11-30
+  -
+version_short: 1.6
+version_long: 1.6.4
+release_date: 2019-02-25
+  -
+version_short: 1.6
+version_long: 1.6.3
+release_date: 2018-12-22
+  -
+version_short: 1.6
+version_long: 1.6.2
+release_date: 2018-10-29
+  -
+version_short: 1.6
+version_long: 1.6.1
+release_date: 2018-09-19
+  -
+version_short: 1.6
+version_long: 1.6.0
+release_date: 2018-08-08
+  -
+version_short: 1.5
+version_long: 1.5.6
+release_date: 2018-12-21
+  -
+version_short: 1.5
+version_long: 1.5.5
+release_date: 2018-10-29
+  -
+version_short: 1.5
+version_long: 1.5.4
+release_date: 2018-09-19
+  -
+version_short: 1.5
+version_long: 1.5.3
+release_date: 2018-08-21
+  -
+version_short: 1.5
+version_long: 1.5.2
+release_date: 2018-07-31
+  -
+version_short: 1.5
+version_long: 1.5.1
+release_date: 2018-07-12
+  -
+version_short: 1.5
+version_long: 1.5.0
+release_date: 2018-05-25
+  -
+version_short: 1.4
+version_long: 1.4.2
+release_date: 2018-03-08
+  -
+version_short: 1.4
+version_long: 1.4.1
+release_date: 2018-02-15
+  -
+version_short: 1.4
+version_long: 1.4.0
+release_date: 2017-11-29
+  -
+version_short: 1.3
+version_long: 1.3.3
+release_date: 2018-03-15
+  -
+version_short: 1.3
+version_long: 1.3.2
+release_date: 2017-08-05
+  -
+version_short: 1.3
+version_long: 1.3.1
+release_date: 2017-06-23
+  -
+version_short: 1.3
+version_long: 1.3.0
+release_date: 2017-06-01
+  -
+version_short: 1.2
+version_long: 1.2.1
+release_date: 2017-04-26
+  -
+version_short: 1.2
+version_long: 1.2.0
+release_date: 2017-02-06
+  -
+version_short: 1.1
+version_long: 1.1.5
+release_date: 2017-03-22
+  -
+version_short: 1.1
+version_long: 1.1.4
+release_date: 2016-12-21
+  -
+version_short: 1.1
+version_long: 1.1.3
+release_date: 2016-10-13
+  -
+version_short: 1.1
+version_long: 1.1.2
+release_date: 2016-09-05
+  -
+version_short: 1.1
+version_long: 1.1.1
+release_date: 2016-08-11
+  -
+version_short: 1.1
+version_long: 1.1.0
+release_date: 2016-08-08
+  -
+version_short: 1.0
+version_long: 1.0.3
+release_date: 2016-05-12
+  -
+version_short: 1.0
+version_long: 1.0.2
+release_date: 2016-04-23
+  -
+version_short: 1.0
+version_long: 1.0.1
+release_date: 2016-04-06
+  -
+version_short: 1.0
+version_long: 1.0.0
+release_date: 2016-03-08
+  -
+version_long: 0.10.2
+release_date: 2016-02-11
+  -
+version_long: 0.10.1
+release_date: 2015-11-27
+  -
+version_long: 0.10.0
+release_date: 2015-11-16
+  -
+version_long: 0.9.1
+release_date: 2015-09-01
+  -
+version_long: 0.9.0

[flink-web] 02/04: Rebuild website

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 96ae88b597c07b4aa76ce9f9fab3fad4b8aa1ed1
Author: Chesnay Schepler 
AuthorDate: Fri Aug 30 14:01:33 2019 +0200

Rebuild website
---
 content/downloads.html| 571 +-
 content/zh/downloads.html | 569 -
 2 files changed, 1030 insertions(+), 110 deletions(-)

diff --git a/content/downloads.html b/content/downloads.html
index b0cdabd..08120bd 100644
--- a/content/downloads.html
+++ b/content/downloads.html
@@ -451,67 +451,526 @@ main Flink release:
 All Flink releases are available via https://archive.apache.org/dist/flink/;>https://archive.apache.org/dist/flink/
 including checksums and cryptographic signatures. At the time of writing, this 
includes the following versions:
 
 Flink
+
 
-  Flink 1.9.0 - 2019-08-22 (https://archive.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-src.tgz;>Source,
 https://archive.apache.org/dist/flink/flink-1.9.0/;>Binaries, https://ci.apache.org/projects/flink/flink-docs-release-1.9/;>Docs, 
https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/java;>Javadocs,
 https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/scala/index.html;>ScalaDocs)
-  Flink 1.8.1 - 2019-07-02 (https://archive.apache.org/dist/flink/flink-1.8.1/flink-1.8.1-src.tgz;>Source,
 https://archive.apache.org/dist/flink/flink-1.8.1/;>Binaries, https://ci.apache.org/projects/flink/flink-docs-release-1.8/;>Docs, 
https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java;>Javadocs,
 https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/scala/index.html;>ScalaDocs)
-  Flink 1.8.0 - 2019-04-09 (https://archive.apache.org/dist/flink/flink-1.8.0/flink-1.8.0-src.tgz;>Source,
 https://archive.apache.org/dist/flink/flink-1.8.0/;>Binaries, https://ci.apache.org/projects/flink/flink-docs-release-1.8/;>Docs, 
https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java;>Javadocs,
 https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/scala/index.html;>ScalaDocs)
-  Flink 1.7.2 - 2019-02-15 (https://archive.apache.org/dist/flink/flink-1.7.2/flink-1.7.2-src.tgz;>Source,
 https://archive.apache.org/dist/flink/flink-1.7.2/;>Binaries, https://ci.apache.org/projects/flink/flink-docs-release-1.7/;>Docs, 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/java;>Javadocs,
 https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/scala/index.html;>ScalaDocs)
-  Flink 1.7.1 - 2018-12-21 (https://archive.apache.org/dist/flink/flink-1.7.1/flink-1.7.1-src.tgz;>Source,
 https://archive.apache.org/dist/flink/flink-1.7.1/;>Binaries, https://ci.apache.org/projects/flink/flink-docs-release-1.7/;>Docs, 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/java;>Javadocs,
 https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/scala/index.html;>ScalaDocs)
-  Flink 1.7.0 - 2018-11-30 (https://archive.apache.org/dist/flink/flink-1.7.0/flink-1.7.0-src.tgz;>Source,
 https://archive.apache.org/dist/flink/flink-1.7.0/;>Binaries, https://ci.apache.org/projects/flink/flink-docs-release-1.7/;>Docs, 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/java;>Javadocs,
 https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/scala/index.html;>ScalaDocs)
-  Flink 1.6.4 - 2019-02-25 (https://archive.apache.org/dist/flink/flink-1.6.4/flink-1.6.4-src.tgz;>Source,
 https://archive.apache.org/dist/flink/flink-1.6.4/;>Binaries, https://ci.apache.org/projects/flink/flink-docs-release-1.6/;>Docs, 
https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java;>Javadocs,
 https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/scala/index.html;>ScalaDocs)
-  Flink 1.6.3 - 2018-12-22 (https://archive.apache.org/dist/flink/flink-1.6.3/flink-1.6.3-src.tgz;>Source,
 https://archive.apache.org/dist/flink/flink-1.6.3/;>Binaries, https://ci.apache.org/projects/flink/flink-docs-release-1.6/;>Docs, 
https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java;>Javadocs,
 https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/scala/index.html;>ScalaDocs)
-  Flink 1.6.2 - 2018-10-29 (https://archive.apache.org/dist/flink/flink-1.6.2/flink-1.6.2-src.tgz;>Source,
 https://archive.apache.org/dist/flink/flink-1.6.2/;>Binaries, https://ci.apache.org/projects/flink/flink-docs-release-1.6/;>Docs, 
https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java;>Javadocs,
 https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/scala/index.html;>ScalaDocs)
-  Flink 1.6.1 - 2018-09-19 (https://archive.apache.org/dist/flink/flink-1.6.1/flink-1.6.1-src.tgz;>Source,
 https://archive.apache.org/dist/flink/flink-1.6.1/;>Binaries, https://ci.apache.org/projects/flink/flink-docs-release-1.6/;>Docs, 

[flink-web] 03/04: [hotfix] Fix capitalization

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 81cbc2b5c687901d9d98d9f03c53e4104003676c
Author: Chesnay Schepler 
AuthorDate: Mon Sep 2 11:06:55 2019 +0200

[hotfix] Fix capitalization
---
 downloads.md| 2 +-
 downloads.zh.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/downloads.md b/downloads.md
index ff851e7..f9ba7cb 100644
--- a/downloads.md
+++ b/downloads.md
@@ -186,7 +186,7 @@ Flink {{ flink_release.version_long }} - {{ 
flink_release.release_date }}
 https://archive.apache.org/dist/flink/flink-{{ 
flink_release.version_long }}">Binaries, 
 Docs, 
 Javadocs, 
-ScalaDocs)
+Scaladocs)
 {% else %}
 Flink {{ flink_release.version_long }} - {{ flink_release.release_date }} 
 (https://archive.apache.org/dist/flink/flink-{{ 
flink_release.version_long }}/flink-{{ flink_release.version_long 
}}-src.tgz">Source, 
diff --git a/downloads.zh.md b/downloads.zh.md
index 3764014..f9a14e8 100644
--- a/downloads.zh.md
+++ b/downloads.zh.md
@@ -195,7 +195,7 @@ Flink {{ flink_release.version_long }} - {{ 
flink_release.release_date }}
 https://archive.apache.org/dist/flink/flink-{{ 
flink_release.version_long }}">Binaries, 
 Docs, 
 Javadocs, 
-ScalaDocs)
+Scaladocs)
 {% else %}
 Flink {{ flink_release.version_long }} - {{ flink_release.release_date }} 
 (https://archive.apache.org/dist/flink/flink-{{ 
flink_release.version_long }}/flink-{{ flink_release.version_long 
}}-src.tgz">Source, 



[flink-web] branch asf-site updated (e498364 -> b9e62de)

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from e498364  Rebuild website
 new 1985e61  [FLINK-13920] Move release list into _config.yml
 new 96ae88b  Rebuild website
 new 81cbc2b  [hotfix] Fix capitalization
 new b9e62de  Rebuild website

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _config.yml   | 209 +
 content/downloads.html| 571 +-
 content/zh/downloads.html | 569 -
 downloads.md  |  82 ++-
 downloads.zh.md   |  79 ++-
 5 files changed, 1289 insertions(+), 221 deletions(-)



[flink] branch master updated (21a21a6 -> fbb4837)

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 21a21a6  [FLINK-13887][core] Ensure 
ExConfig#defaultInputDependencyConstraint is non-null
 add fbb4837  [FLINK-13059][cassandra] Release semaphore on exception in 
send()

No new revisions were added by this update.

Summary of changes:
 .../connectors/cassandra/CassandraSinkBase.java| 21 +++
 .../cassandra/CassandraSinkBaseTest.java   | 42 --
 2 files changed, 54 insertions(+), 9 deletions(-)
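The fix applied by FLINK-13059 (shown in full in the release-branch commits 
below) is to hand back the permit acquired before `send()` whenever `send()` 
itself throws, so that a failing record cannot leak a permit and eventually 
stall the sink. A minimal, framework-agnostic sketch of that pattern — with 
hypothetical class and method names, not the actual `CassandraSinkBase` code — 
could look like this:

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sink skeleton demonstrating "release the permit on exception":
// a permit is acquired per in-flight request and must be returned if the
// request cannot even be issued.
public abstract class ThrottledSink<IN> {

    private final Semaphore permits;
    private final long timeoutMillis;

    protected ThrottledSink(int maxConcurrentRequests, long timeoutMillis) {
        this.permits = new Semaphore(maxConcurrentRequests);
        this.timeoutMillis = timeoutMillis;
    }

    public void invoke(IN value) throws Exception {
        if (!permits.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS)) {
            throw new TimeoutException("Failed to acquire a permit to send value.");
        }
        try {
            send(value); // may throw before any async callback is registered
        } catch (Exception e) {
            permits.release(); // give the permit back, otherwise it leaks
            throw e;
        }
        // On success the permit is released later, when the async request completes.
    }

    protected abstract void send(IN value) throws Exception;
}

The actual change additionally generalizes `tryAcquire` to take a permit count, 
so that `flush()` can wait for all in-flight requests with the same timeout 
handling instead of acquiring uninterruptibly.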



[flink] branch release-1.8 updated: [FLINK-13059][cassandra] Release semaphore on exception in send()

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new b7ce7b8  [FLINK-13059][cassandra] Release semaphore on exception in 
send()
b7ce7b8 is described below

commit b7ce7b8ff14807e4981591a7e26c99d5051d529f
Author: Mads Chr. Olesen 
AuthorDate: Tue Sep 3 11:49:10 2019 +0200

[FLINK-13059][cassandra] Release semaphore on exception in send()
---
 .../connectors/cassandra/CassandraSinkBase.java| 21 +++
 .../cassandra/CassandraSinkBaseTest.java   | 42 --
 2 files changed, 54 insertions(+), 9 deletions(-)

diff --git 
a/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java
 
b/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java
index ede5586..0e7eb6f 100644
--- 
a/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java
+++ 
b/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java
@@ -128,8 +128,14 @@ public abstract class CassandraSinkBase extends 
RichSinkFunction impl
@Override
public void invoke(IN value) throws Exception {
checkAsyncErrors();
-   tryAcquire();
-   final ListenableFuture result = send(value);
+   tryAcquire(1);
+   final ListenableFuture result;
+   try {
+   result = send(value);
+   } catch (Exception e) {
+   semaphore.release();
+   throw e;
+   }
Futures.addCallback(result, callback);
}
 
@@ -139,11 +145,12 @@ public abstract class CassandraSinkBase extends 
RichSinkFunction impl
 
public abstract ListenableFuture send(IN value);
 
-   private void tryAcquire() throws InterruptedException, TimeoutException 
{
-   if 
(!semaphore.tryAcquire(config.getMaxConcurrentRequestsTimeout().toMillis(), 
TimeUnit.MILLISECONDS)) {
+   private void tryAcquire(int permits) throws InterruptedException, 
TimeoutException {
+   if (!semaphore.tryAcquire(permits, 
config.getMaxConcurrentRequestsTimeout().toMillis(), TimeUnit.MILLISECONDS)) {
throw new TimeoutException(
String.format(
-   "Failed to acquire 1 permit of %d to 
send value in %s.",
+   "Failed to acquire %d out of %d permits 
to send value in %s.",
+   permits,
config.getMaxConcurrentRequests(),
config.getMaxConcurrentRequestsTimeout()
)
@@ -158,8 +165,8 @@ public abstract class CassandraSinkBase extends 
RichSinkFunction impl
}
}
 
-   private void flush() {
-   
semaphore.acquireUninterruptibly(config.getMaxConcurrentRequests());
+   private void flush() throws InterruptedException, TimeoutException {
+   tryAcquire(config.getMaxConcurrentRequests());
semaphore.release(config.getMaxConcurrentRequests());
}
 
diff --git 
a/flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBaseTest.java
 
b/flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBaseTest.java
index 2b705a5..b4406ab 100644
--- 
a/flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBaseTest.java
+++ 
b/flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBaseTest.java
@@ -28,6 +28,7 @@ import org.apache.flink.util.Preconditions;
 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.ResultSet;
 import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.InvalidQueryException;
 import com.datastax.driver.core.exceptions.NoHostAvailableException;
 import com.google.common.util.concurrent.ListenableFuture;
 import org.junit.Assert;
@@ -180,7 +181,7 @@ public class CassandraSinkBaseTest {
}
};
t.start();
-   while (t.getState() != Thread.State.WAITING) {
+   while (t.getState() != Thread.State.TIMED_WAITING) {
Thread.sleep(5);
}
 
@@ -212,7 +213,7 @@ public class CassandraSinkBaseTest {
   

[flink] branch release-1.9 updated: [FLINK-13059][cassandra] Release semaphore on exception in send()

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 22571aa  [FLINK-13059][cassandra] Release semaphore on exception in 
send()
22571aa is described below

commit 22571aab57fc30450de8a850f1a8a6ea80fdba2c
Author: Mads Chr. Olesen 
AuthorDate: Tue Sep 3 11:49:10 2019 +0200

[FLINK-13059][cassandra] Release semaphore on exception in send()
---
 .../connectors/cassandra/CassandraSinkBase.java| 21 +++
 .../cassandra/CassandraSinkBaseTest.java   | 42 --
 2 files changed, 54 insertions(+), 9 deletions(-)

diff --git 
a/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java
 
b/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java
index ede5586..0e7eb6f 100644
--- 
a/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java
+++ 
b/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java
@@ -128,8 +128,14 @@ public abstract class CassandraSinkBase extends 
RichSinkFunction impl
@Override
public void invoke(IN value) throws Exception {
checkAsyncErrors();
-   tryAcquire();
-   final ListenableFuture result = send(value);
+   tryAcquire(1);
+   final ListenableFuture result;
+   try {
+   result = send(value);
+   } catch (Exception e) {
+   semaphore.release();
+   throw e;
+   }
Futures.addCallback(result, callback);
}
 
@@ -139,11 +145,12 @@ public abstract class CassandraSinkBase extends 
RichSinkFunction impl
 
public abstract ListenableFuture send(IN value);
 
-   private void tryAcquire() throws InterruptedException, TimeoutException 
{
-   if 
(!semaphore.tryAcquire(config.getMaxConcurrentRequestsTimeout().toMillis(), 
TimeUnit.MILLISECONDS)) {
+   private void tryAcquire(int permits) throws InterruptedException, 
TimeoutException {
+   if (!semaphore.tryAcquire(permits, 
config.getMaxConcurrentRequestsTimeout().toMillis(), TimeUnit.MILLISECONDS)) {
throw new TimeoutException(
String.format(
-   "Failed to acquire 1 permit of %d to 
send value in %s.",
+   "Failed to acquire %d out of %d permits 
to send value in %s.",
+   permits,
config.getMaxConcurrentRequests(),
config.getMaxConcurrentRequestsTimeout()
)
@@ -158,8 +165,8 @@ public abstract class CassandraSinkBase extends 
RichSinkFunction impl
}
}
 
-   private void flush() {
-   
semaphore.acquireUninterruptibly(config.getMaxConcurrentRequests());
+   private void flush() throws InterruptedException, TimeoutException {
+   tryAcquire(config.getMaxConcurrentRequests());
semaphore.release(config.getMaxConcurrentRequests());
}
 
diff --git 
a/flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBaseTest.java
 
b/flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBaseTest.java
index 2b705a5..b4406ab 100644
--- 
a/flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBaseTest.java
+++ 
b/flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBaseTest.java
@@ -28,6 +28,7 @@ import org.apache.flink.util.Preconditions;
 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.ResultSet;
 import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.InvalidQueryException;
 import com.datastax.driver.core.exceptions.NoHostAvailableException;
 import com.google.common.util.concurrent.ListenableFuture;
 import org.junit.Assert;
@@ -180,7 +181,7 @@ public class CassandraSinkBaseTest {
}
};
t.start();
-   while (t.getState() != Thread.State.WAITING) {
+   while (t.getState() != Thread.State.TIMED_WAITING) {
Thread.sleep(5);
}
 
@@ -212,7 +213,7 @@ public class CassandraSinkBaseTest {
   

[flink] branch release-1.9 updated: [FLINK-13887][core] Ensure ExConfig#defaultInputDependencyConstraint is non-null

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 7687b5b  [FLINK-13887][core] Ensure 
ExConfig#defaultInputDependencyConstraint is non-null
7687b5b is described below

commit 7687b5bdffa01ec71ea176a199c6b559a37ed642
Author: Zhu Zhu 
AuthorDate: Tue Sep 3 17:23:34 2019 +0800

[FLINK-13887][core] Ensure ExConfig#defaultInputDependencyConstraint is 
non-null
---
 .../main/java/org/apache/flink/api/common/ExecutionConfig.java| 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git 
a/flink-core/src/main/java/org/apache/flink/api/common/ExecutionConfig.java 
b/flink-core/src/main/java/org/apache/flink/api/common/ExecutionConfig.java
index fd3b358..e850ec7 100644
--- a/flink-core/src/main/java/org/apache/flink/api/common/ExecutionConfig.java
+++ b/flink-core/src/main/java/org/apache/flink/api/common/ExecutionConfig.java
@@ -549,7 +549,13 @@ public class ExecutionConfig implements Serializable, 
Archiveable

[flink] branch master updated (162a10a -> 21a21a6)

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 162a10a  [FLINK-13897][oss] Move NOTICE file into META-INF directory
 add 21a21a6  [FLINK-13887][core] Ensure 
ExConfig#defaultInputDependencyConstraint is non-null

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/flink/api/common/ExecutionConfig.java| 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)
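The diff body is truncated above, but the intent of FLINK-13887 is to guarantee 
that `defaultInputDependencyConstraint` can never be observed as null. One 
common way to provide such a guarantee — shown here only as a hypothetical 
illustration with made-up class names, not the actual `ExecutionConfig` change — 
is to keep a non-null default and reject null in the setter:

import static java.util.Objects.requireNonNull;

// Hypothetical illustration: the field starts from a non-null default and the
// setter refuses null, so getters can never return null.
public class ExampleConfig {

    public enum InputDependencyConstraint { ANY, ALL }

    private InputDependencyConstraint defaultInputDependencyConstraint =
            InputDependencyConstraint.ANY;

    public void setDefaultInputDependencyConstraint(InputDependencyConstraint constraint) {
        this.defaultInputDependencyConstraint =
                requireNonNull(constraint, "constraint must not be null");
    }

    public InputDependencyConstraint getDefaultInputDependencyConstraint() {
        return defaultInputDependencyConstraint;
    }
}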



[flink] branch release-1.8 updated: [FLINK-13897][oss] Move NOTICE file into META-INF directory

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new 4fcbefb  [FLINK-13897][oss] Move NOTICE file into META-INF directory
4fcbefb is described below

commit 4fcbefba2a55f327daa8f187e40547a926939d7d
Author: Chesnay Schepler 
AuthorDate: Thu Aug 29 14:39:37 2019 +0200

[FLINK-13897][oss] Move NOTICE file into META-INF directory
---
 NOTICE-binary  | 129 +++--
 .../src/main/resources/{ => META-INF}/NOTICE   |   0
 2 files changed, 119 insertions(+), 10 deletions(-)

diff --git a/NOTICE-binary b/NOTICE-binary
index f1dc848..bde3487 100644
--- a/NOTICE-binary
+++ b/NOTICE-binary
@@ -673,6 +673,125 @@ The Apache Software Foundation (http://www.apache.org/).
 flink-oss-fs-hadoop
 Copyright 2014-2019 The Apache Software Foundation
 
+This project includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+This project bundles the following dependencies under the Apache Software 
License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+- com.aliyun.oss:aliyun-sdk-oss:3.4.1
+- com.aliyun:aliyun-java-sdk-core:3.4.0
+- com.aliyun:aliyun-java-sdk-ecs:4.2.0
+- com.aliyun:aliyun-java-sdk-ram:3.0.0
+- com.aliyun:aliyun-java-sdk-sts:3.0.0
+- com.fasterxml.jackson.core:jackson-annotations:2.7.0
+- com.fasterxml.jackson.core:jackson-core:2.7.8
+- com.fasterxml.jackson.core:jackson-databind:2.7.8
+- com.fasterxml.woodstox:woodstox-core:5.0.3
+- com.github.stephenc.jcip:jcip-annotations:1.0-1
+- com.google.code.gson:gson:2.2.4
+- com.google.guava:guava:11.0.2
+- com.nimbusds:nimbus-jose-jwt:4.41.1
+- commons-beanutils:commons-beanutils:1.9.3
+- commons-cli:commons-cli:1.3.1
+- commons-codec:commons-codec:1.10
+- commons-collections:commons-collections:3.2.2
+- commons-io:commons-io:2.4
+- commons-lang:commons-lang:3.3.2
+- commons-logging:commons-logging:1.1.3
+- commons-net:commons-net:3.6
+- net.minidev:accessors-smart:1.2
+- net.minidev:json-smart:2.3
+- org.apache.avro:avro:1.8.2
+- org.apache.commons:commons-compress:1.18
+- org.apache.commons:commons-configuration2:2.1.1
+- org.apache.commons:commons-lang3:3.3.2
+- org.apache.commons:commons-math3:3.5
+- org.apache.curator:curator-client:2.12.0
+- org.apache.curator:curator-framework:2.12.0
+- org.apache.curator:curator-recipes:2.12.0
+- org.apache.hadoop:hadoop-aliyun:3.1.0
+- org.apache.hadoop:hadoop-annotations:3.1.0
+- org.apache.hadoop:hadoop-auth:3.1.0
+- org.apache.hadoop:hadoop-common:3.1.0
+- org.apache.htrace:htrace-core4:4.1.0-incubating
+- org.apache.httpcomponents:httpclient:4.5.3
+- org.apache.httpcomponents:httpcore:4.4.6
+- org.apache.kerby:kerb-admin:1.0.1
+- org.apache.kerby:kerb-client:1.0.1
+- org.apache.kerby:kerb-common:1.0.1
+- org.apache.kerby:kerb-core:1.0.1
+- org.apache.kerby:kerb-crypto:1.0.1
+- org.apache.kerby:kerb-identity:1.0.1
+- org.apache.kerby:kerb-server:1.0.1
+- org.apache.kerby:kerb-simplekdc:1.0.1
+- org.apache.kerby:kerb-util:1.0.1
+- org.apache.kerby:kerby-asn1:1.0.1
+- org.apache.kerby:kerby-config:1.0.1
+- org.apache.kerby:kerby-pkix:1.0.1
+- org.apache.kerby:kerby-util:1.0.1
+- org.apache.kerby:kerby-xdr:1.0.1
+- org.apache.kerby:token-provider:1.0.1
+- org.apache.zookeeper:zookeeper:3.4.10
+- org.codehaus.jackson:jackson-core-asl:1.9.2
+- org.codehaus.jackson:jackson-jaxrs:1.9.2
+- org.codehaus.jackson:jackson-mapper-asl:1.9.2
+- org.codehaus.jackson:jackson-xc:1.9.2
+- org.codehaus.jettison:jettison:1.1
+- org.eclipse.jetty:jetty-http:9.3.19.v20170502
+- org.eclipse.jetty:jetty-io:9.3.19.v20170502
+- org.eclipse.jetty:jetty-security:9.3.19.v20170502
+- org.eclipse.jetty:jetty-server:9.3.19.v20170502
+- org.eclipse.jetty:jetty-servlet:9.3.19.v20170502
+- org.eclipse.jetty:jetty-util:9.3.19.v20170502
+- org.eclipse.jetty:jetty-webapp:9.3.19.v20170502
+- org.eclipse.jetty:jetty-xml:9.3.19.v20170502
+- org.xerial.snappy:snappy-java:1.1.4
+
+This project bundles the following dependencies under the BSD license.
+See bundled license files for details.
+
+- com.google.protobuf:protobuf-java:2.5.0
+- com.jcraft:jsch:0.1.54
+- com.thoughtworks.paranamer:paranamer:2.7
+- org.codehaus.woodstox:stax2-api:3.1.4
+- org.ow2.asm:asm:5.0.4
+
+This project bundles the following dependencies under the Common Development 
and Distribution License (CDDL) 1.0.
+See bundled license files for details.
+
+- javax.activation:activation:1.1
+- javax.ws.rs:jsr311-api:1.1.1
+- javax.xml.stream:stax-api:1.0-2
+- stax:stax-api:1.0.1
+
+This project bundles the following dependencies under the Common Development 
and Distribution License (CDDL) 1.1.
+See bundled license files for details.
+
+- com.sun.jersey:jersey-core:1.19
+- com.sun.jersey:jersey-json:1.9
+- com.sun.jersey:jersey-server:1.19
+- com.sun.jersey:jersey-servlet:1.19
+- 

[flink] branch master updated (2bd7999 -> 162a10a)

2019-09-03 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2bd7999  [hotfix][tests] Fix code style problem of JobMasterTest
 add 162a10a  [FLINK-13897][oss] Move NOTICE file into META-INF directory

No new revisions were added by this update.

Summary of changes:
 NOTICE-binary  | 129 +++--
 .../src/main/resources/{ => META-INF}/NOTICE   |   0
 2 files changed, 119 insertions(+), 10 deletions(-)
 rename flink-filesystems/flink-oss-fs-hadoop/src/main/resources/{ => 
META-INF}/NOTICE (100%)



[flink] branch release-1.8 updated: [hotfix][docs] Minor fixes in operations playground.

2019-09-03 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new b79c199  [hotfix][docs] Minor fixes in operations playground.
b79c199 is described below

commit b79c19948562ad7ca73e4230a8a817429c1a0381
Author: Fabian Hueske 
AuthorDate: Tue Sep 3 10:11:37 2019 +0200

[hotfix][docs] Minor fixes in operations playground.

[ci skip]
---
 docs/tutorials/docker-playgrounds/flink-operations-playground.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/tutorials/docker-playgrounds/flink-operations-playground.md 
b/docs/tutorials/docker-playgrounds/flink-operations-playground.md
index 3c5effc..df16f95 100644
--- a/docs/tutorials/docker-playgrounds/flink-operations-playground.md
+++ b/docs/tutorials/docker-playgrounds/flink-operations-playground.md
@@ -82,7 +82,7 @@ output of the Flink job should show 1000 views per page and 
window.
 The playground environment is set up in just a few steps. We will walk you 
through the necessary 
 commands and show how to validate that everything is running correctly.
 
-We assume that you have that you have [docker](https://docs.docker.com/) 
(1.12+) and
+We assume that you have [Docker](https://docs.docker.com/) (1.12+) and
 [docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
 
 The required configuration files are available in the 
@@ -204,7 +204,7 @@ docker-compose exec kafka kafka-console-consumer.sh \
 
 Now that you learned how to interact with Flink and the Docker containers, 
let's have a look at 
 some common operational tasks that you can try out on our playground.
-All of these tasks are independent of each other, i.e.i you can perform them 
in any order. 
+All of these tasks are independent of each other, i.e. you can perform them in 
any order. 
 Most tasks can be executed via the [CLI](#flink-cli) and the [REST 
API](#flink-rest-api).
 
 ### Listing Running Jobs
@@ -277,7 +277,7 @@ an external resource).
 docker-compose kill taskmanager
 {% endhighlight %}
 
-After a few seconds, Flink will notice the loss of the TaskManager, cancel the 
affected Job, and 
+After a few seconds, the Flink Master will notice the loss of the TaskManager, 
cancel the affected Job, and 
 immediately resubmit it for recovery.
 When the Job gets restarted, its tasks remain in the `SCHEDULED` state, which 
is indicated by the 
 counts in the gray colored square (see screenshot below).



[flink] branch release-1.9 updated: [hotfix][docs] Minor fixes in operations playground.

2019-09-03 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new a2e90ff  [hotfix][docs] Minor fixes in operations playground.
a2e90ff is described below

commit a2e90ff0104875b8fb76030ba7a13877bc55973f
Author: Fabian Hueske 
AuthorDate: Tue Sep 3 10:01:00 2019 +0200

[hotfix][docs] Minor fixes in operations playground.

[ci skip]
---
 .../docker-playgrounds/flink-operations-playground.md   | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
index b3c4f24..bb720b4 100644
--- a/docs/getting-started/docker-playgrounds/flink-operations-playground.md
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
@@ -87,7 +87,7 @@ output of the Flink job should show 1000 views per page and 
window.
 The playground environment is set up in just a few steps. We will walk you 
through the necessary 
 commands and show how to validate that everything is running correctly.
 
-We assume that you have that you have [docker](https://docs.docker.com/) 
(1.12+) and
+We assume that you have [Docker](https://docs.docker.com/) (1.12+) and
 [docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
 
 The required configuration files are available in the 
@@ -209,7 +209,7 @@ docker-compose exec kafka kafka-console-consumer.sh \
 
 Now that you learned how to interact with Flink and the Docker containers, 
let's have a look at 
 some common operational tasks that you can try out on our playground.
-All of these tasks are independent of each other, i.e.i you can perform them 
in any order. 
+All of these tasks are independent of each other, i.e. you can perform them in 
any order. 
 Most tasks can be executed via the [CLI](#flink-cli) and the [REST 
API](#flink-rest-api).
 
 ### Listing Running Jobs
@@ -282,7 +282,7 @@ an external resource).
 docker-compose kill taskmanager
 {% endhighlight %}
 
-After a few seconds, Flink will notice the loss of the TaskManager, cancel the 
affected Job, and 
+After a few seconds, the Flink Master will notice the loss of the TaskManager, 
cancel the affected Job, and 
 immediately resubmit it for recovery.
 When the Job gets restarted, its tasks remain in the `SCHEDULED` state, which 
is indicated by the 
 purple colored squares (see screenshot below).
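The playground documentation touched by the two commits above notes that most 
operational tasks can be performed either via the Flink CLI or via the REST 
API. As a small illustration of the REST route — assuming the JobManager REST 
endpoint is exposed at `localhost:8081`, as in the playground's docker-compose 
setup — listing the jobs from Java could look like the sketch below (the CLI 
equivalent is `flink list`, and `curl localhost:8081/jobs` returns the same 
information).

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Queries the JobManager REST API for the list of jobs and prints the raw
// JSON response. Assumes the REST endpoint is reachable at localhost:8081.
public class ListJobs {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8081/jobs");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            // Example response shape: {"jobs":[{"id":"<job-id>","status":"RUNNING"}]}
            System.out.println(body);
        } finally {
            connection.disconnect();
        }
    }
}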