[flink] branch master updated (d32af52 -> 60cf41a)

2019-09-05 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


 from d32af52  [FLINK-13952][table-planner][hive] PartitionableTableSink can not work with OverwritableTableSink
 add 60cf41a  [FLINK-13591][web] fix job list display when job name is very long

No new revisions were added by this update.

Summary of changes:
 .../src/app/share/customize/job-list/job-list.component.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[flink] branch master updated: [FLINK-13952][table-planner][hive] PartitionableTableSink can not work with OverwritableTableSink

2019-09-05 Thread bli
This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new d32af52  [FLINK-13952][table-planner][hive] PartitionableTableSink can not work with OverwritableTableSink
d32af52 is described below

commit d32af521cbe83f88cd0b822c4d752a1b5102c47c
Author: Rui Li 
AuthorDate: Wed Sep 4 21:27:00 2019 +0800

[FLINK-13952][table-planner][hive] PartitionableTableSink can not work with OverwritableTableSink

This adds support for insert overwrite into partitions.

This closes #9615.
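
As a usage sketch (drawn from the test added below; `getTableEnvWithHiveCatalog()` is that test's helper, not a public API), the statement forms this enables are:

    TableEnvironment tableEnv = getTableEnvWithHiveCatalog();

    // Non-partitioned table: the whole table content is replaced.
    tableEnv.sqlUpdate("insert overwrite db1.dest values (3,'c')");

    // Static partition: only the partition y=1 is replaced.
    tableEnv.sqlUpdate("insert overwrite db1.part partition (y=1) select 100");

    // Dynamic partition: only partitions matched by the inserted rows are
    // replaced; other existing partitions remain intact.
    tableEnv.sqlUpdate("insert overwrite db1.part values (200,2),(3,3)");
    tableEnv.execute("insert overwrite example");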
---
 .../flink/connectors/hive/TableEnvHiveConnectorTest.java | 16 ++++++++++++++++
 .../flink/table/planner/delegation/PlannerBase.scala     |  3 +++
 .../apache/flink/table/api/internal/TableEnvImpl.scala   |  5 ++++-
 .../org/apache/flink/table/planner/StreamPlanner.scala   |  5 ++++-
 4 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/TableEnvHiveConnectorTest.java b/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/TableEnvHiveConnectorTest.java
index 07dd674..e3a 100644
--- a/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/TableEnvHiveConnectorTest.java
+++ b/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/TableEnvHiveConnectorTest.java
@@ -177,6 +177,7 @@ public class TableEnvHiveConnectorTest {
 	public void testInsertOverwrite() throws Exception {
 		hiveShell.execute("create database db1");
 		try {
+			// non-partitioned
 			hiveShell.execute("create table db1.dest (x int, y string)");
 			hiveShell.insertInto("db1", "dest").addRow(1, "a").addRow(2, "b").commit();
 			verifyHiveQueryResult("select * from db1.dest", Arrays.asList("1\ta", "2\tb"));
@@ -184,6 +185,21 @@ public class TableEnvHiveConnectorTest {
 			tableEnv.sqlUpdate("insert overwrite db1.dest values (3,'c')");
 			tableEnv.execute("test insert overwrite");
 			verifyHiveQueryResult("select * from db1.dest", Collections.singletonList("3\tc"));
+
+			// static partition
+			hiveShell.execute("create table db1.part(x int) partitioned by (y int)");
+			hiveShell.insertInto("db1", "part").addRow(1, 1).addRow(2, 2).commit();
+			tableEnv = getTableEnvWithHiveCatalog();
+			tableEnv.sqlUpdate("insert overwrite db1.part partition (y=1) select 100");
+			tableEnv.execute("insert overwrite static partition");
+			verifyHiveQueryResult("select * from db1.part", Arrays.asList("100\t1", "2\t2"));
+
+			// dynamic partition
+			tableEnv = getTableEnvWithHiveCatalog();
+			tableEnv.sqlUpdate("insert overwrite db1.part values (200,2),(3,3)");
+			tableEnv.execute("insert overwrite dynamic partition");
+			// only overwrite dynamically matched partitions, other existing partitions remain intact
+			verifyHiveQueryResult("select * from db1.part", Arrays.asList("100\t1", "200\t2", "3\t3"));
 		} finally {
 			hiveShell.execute("drop database db1 cascade");
 		}
diff --git a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/delegation/PlannerBase.scala b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/delegation/PlannerBase.scala
index f0e18f0..90cdab9 100644
--- a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/delegation/PlannerBase.scala
+++ b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/delegation/PlannerBase.scala
@@ -182,6 +182,9 @@ abstract class PlannerBase(
           if partitionableSink.getPartitionFieldNames != null
             && partitionableSink.getPartitionFieldNames.nonEmpty =>
           partitionableSink.setStaticPartition(catalogSink.getStaticPartitions)
+        case _ =>
+      }
+      sink match {
         case overwritableTableSink: OverwritableTableSink =>
          overwritableTableSink.setOverwrite(catalogSink.isOverwrite)
         case _ =>
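
The fix above splits what used to be a single pattern match: a sink implementing both PartitionableTableSink and OverwritableTableSink (as the Hive sink does) previously hit only the first matching case, so the overwrite flag was never forwarded. A rough Java paraphrase of the two shapes, with `sink`, `staticPartitions` and `isOverwrite` standing in for the surrounding Scala context:

    // Before (conceptually): one dispatch, so only the first matching branch runs.
    if (sink instanceof PartitionableTableSink) {
        ((PartitionableTableSink) sink).setStaticPartition(staticPartitions);
    } else if (sink instanceof OverwritableTableSink) {
        // never reached for a sink that is also partitionable
        ((OverwritableTableSink) sink).setOverwrite(isOverwrite);
    }

    // After (conceptually): two independent dispatches, so both aspects are configured.
    if (sink instanceof PartitionableTableSink) {
        ((PartitionableTableSink) sink).setStaticPartition(staticPartitions);
    }
    if (sink instanceof OverwritableTableSink) {
        ((OverwritableTableSink) sink).setOverwrite(isOverwrite);
    }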
diff --git a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/api/internal/TableEnvImpl.scala b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/api/internal/TableEnvImpl.scala
index 0dece49..0e00268 100644
--- a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/api/internal/TableEnvImpl.scala
[flink] branch release-1.9 updated: [FLINK-13937][docs] Fix the error of the hive connector dependency version

2019-09-05 Thread bli
This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 1e88d42  [FLINK-13937][docs] Fix the error of the hive connector dependency version
1e88d42 is described below

commit 1e88d4261c891d7d201621046e9c712b4fda400a
Author: yangjf2019 
AuthorDate: Mon Sep 2 14:19:04 2019 +0800

[FLINK-13937][docs] Fix the error of the hive connector dependency version

Fix the wrong hive connector dependency versions in the docs.

This closes #9591.
---
 docs/_config.yml                | 4 ++++
 docs/dev/table/hive/index.md    | 8 ++++----
 docs/dev/table/hive/index.zh.md | 8 ++++----
 3 files changed, 12 insertions(+), 8 deletions(-)
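
(The effect: with `shaded_version: "8.0"`, a snippet such as `2.7.5-{{ site.shaded_version }}` now renders as `2.7.5-8.0`, which matches the published flink-shaded artifacts, whereas the previous `2.7.5-{{site.version}}` rendered a Flink version suffix for which no flink-shaded artifact exists.)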

diff --git a/docs/_config.yml b/docs/_config.yml
index 8fd855b..04a43dd 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -39,6 +39,10 @@ scala_version: "2.11"
 # This suffix is appended to the Scala-dependent Maven artifact names
 scala_version_suffix: "_2.11"
 
+# Plain flink-shaded version is needed for e.g. the hive connector.
+# Please update the shaded_version once new flink-shaded is released.
+shaded_version: "8.0"
+
 # Some commonly linked pages (this was more important to have as a variable
 # during incubator; by now it should also be fine to hardcode these.)
 website_url: "http://flink.apache.org;
diff --git a/docs/dev/table/hive/index.md b/docs/dev/table/hive/index.md
index 0d5ab31..62709a1 100644
--- a/docs/dev/table/hive/index.md
+++ b/docs/dev/table/hive/index.md
@@ -69,7 +69,7 @@ To integrate with Hive, users need the following dependencies in their project.
 
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-hadoop-compatibility_{{site.version}}</artifactId>
+  <artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
@@ -79,7 +79,7 @@ To integrate with Hive, users need the following dependencies in their project.
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-shaded-hadoop-2-uber</artifactId>
-  <version>2.7.5-{{site.version}}</version>
+  <version>2.7.5-{{ site.shaded_version }}</version>
   <scope>provided</scope>
 </dependency>
 
@@ -105,7 +105,7 @@ To integrate with Hive, users need the following dependencies in their project.
 
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-hadoop-compatibility_{{site.version}}</artifactId>
+  <artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
@@ -115,7 +115,7 @@ To integrate with Hive, users need the following dependencies in their project.
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-shaded-hadoop-2-uber</artifactId>
-  <version>2.6.5-{{site.version}}</version>
+  <version>2.6.5-{{ site.shaded_version }}</version>
   <scope>provided</scope>
 </dependency>
 
diff --git a/docs/dev/table/hive/index.zh.md b/docs/dev/table/hive/index.zh.md
index 6b2f7e4..14303c0 100644
--- a/docs/dev/table/hive/index.zh.md
+++ b/docs/dev/table/hive/index.zh.md
@@ -69,7 +69,7 @@ To integrate with Hive, users need the following dependencies in their project.
 
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-hadoop-compatibility_{{site.version}}</artifactId>
+  <artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
@@ -79,7 +79,7 @@ To integrate with Hive, users need the following dependencies in their project.
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-shaded-hadoop-2-uber</artifactId>
-  <version>2.7.5-{{site.version}}</version>
+  <version>2.7.5-{{ site.shaded_version }}</version>
   <scope>provided</scope>
 </dependency>
 
@@ -105,7 +105,7 @@ To integrate with Hive, users need the following dependencies in their project.
 
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-hadoop-compatibility_{{site.version}}</artifactId>
+  <artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
@@ -115,7 +115,7 @@ To integrate with Hive, users need the following dependencies in their project.
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-shaded-hadoop-2-uber</artifactId>
-  <version>2.6.5-{{site.version}}</version>
+  <version>2.6.5-{{ site.shaded_version }}</version>
   <scope>provided</scope>
 </dependency>
 
 
 



[flink] branch master updated: [FLINK-13937][docs] Fix the error of the hive connector dependency version

2019-09-05 Thread bli
This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new d6ff17d  [FLINK-13937][docs] Fix the error of the hive connector dependency version
d6ff17d is described below

commit d6ff17d908932d867a6c0a02ac317a4f59e855b5
Author: yangjf2019 
AuthorDate: Mon Sep 2 14:19:04 2019 +0800

[FLINK-13937][docs] Fix the error of the hive connector dependency version

Fix the wrong hive connector dependency versions in the docs.

This closes #9591.
---
 docs/_config.yml                | 4 ++++
 docs/dev/table/hive/index.md    | 8 ++++----
 docs/dev/table/hive/index.zh.md | 8 ++++----
 3 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/docs/_config.yml b/docs/_config.yml
index b80..e6e1f80 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -39,6 +39,10 @@ scala_version: "2.11"
 # This suffix is appended to the Scala-dependent Maven artifact names
 scala_version_suffix: "_2.11"
 
+# Plain flink-shaded version is needed for e.g. the hive connector.
+# Please update the shaded_version once new flink-shaded is released.
+shaded_version: "8.0"
+
 # Some commonly linked pages (this was more important to have as a variable
 # during incubator; by now it should also be fine to hardcode these.)
 website_url: "https://flink.apache.org;
diff --git a/docs/dev/table/hive/index.md b/docs/dev/table/hive/index.md
index 0d5ab31..62709a1 100644
--- a/docs/dev/table/hive/index.md
+++ b/docs/dev/table/hive/index.md
@@ -69,7 +69,7 @@ To integrate with Hive, users need the following dependencies in their project.
 
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-hadoop-compatibility_{{site.version}}</artifactId>
+  <artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
@@ -79,7 +79,7 @@ To integrate with Hive, users need the following dependencies in their project.
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-shaded-hadoop-2-uber</artifactId>
-  <version>2.7.5-{{site.version}}</version>
+  <version>2.7.5-{{ site.shaded_version }}</version>
   <scope>provided</scope>
 </dependency>
 
@@ -105,7 +105,7 @@ To integrate with Hive, users need the following dependencies in their project.
 
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-hadoop-compatibility_{{site.version}}</artifactId>
+  <artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
@@ -115,7 +115,7 @@ To integrate with Hive, users need the following dependencies in their project.
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-shaded-hadoop-2-uber</artifactId>
-  <version>2.6.5-{{site.version}}</version>
+  <version>2.6.5-{{ site.shaded_version }}</version>
   <scope>provided</scope>
 </dependency>
 
diff --git a/docs/dev/table/hive/index.zh.md b/docs/dev/table/hive/index.zh.md
index 6b2f7e4..14303c0 100644
--- a/docs/dev/table/hive/index.zh.md
+++ b/docs/dev/table/hive/index.zh.md
@@ -69,7 +69,7 @@ To integrate with Hive, users need the following dependencies in their project.
 
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-hadoop-compatibility_{{site.version}}</artifactId>
+  <artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
@@ -79,7 +79,7 @@ To integrate with Hive, users need the following dependencies in their project.
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-shaded-hadoop-2-uber</artifactId>
-  <version>2.7.5-{{site.version}}</version>
+  <version>2.7.5-{{ site.shaded_version }}</version>
   <scope>provided</scope>
 </dependency>
 
@@ -105,7 +105,7 @@ To integrate with Hive, users need the following dependencies in their project.
 
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-hadoop-compatibility_{{site.version}}</artifactId>
+  <artifactId>flink-hadoop-compatibility{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
@@ -115,7 +115,7 @@ To integrate with Hive, users need the following dependencies in their project.
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-shaded-hadoop-2-uber</artifactId>
-  <version>2.6.5-{{site.version}}</version>
+  <version>2.6.5-{{ site.shaded_version }}</version>
   <scope>provided</scope>
 </dependency>
 
 
 



[flink] branch master updated: [FLINK-13930][hive] Support Hive version 3.1.x

2019-09-05 Thread bli
This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new e670b92  [FLINK-13930][hive] Support Hive version 3.1.x
e670b92 is described below

commit e670b929ae124e04c6cdec560d4432ae9c561ff9
Author: Xuefu Zhang 
AuthorDate: Fri Aug 30 11:23:57 2019 -0700

[FLINK-13930][hive] Support Hive version 3.1.x

Support Hive 3.1.x versions, including 3.1.0, 3.1.1, and 3.1.2, the latest releases from the Hive community.
This closes #9580.
---
 flink-connectors/flink-connector-hive/pom.xml  |  13 +-
 .../flink/connectors/hive/HiveTableFactory.java|  10 +-
 .../connectors/hive/HiveTableOutputFormat.java |   6 +-
 .../flink/connectors/hive/HiveTableSink.java   |   8 +-
 .../flink/table/catalog/hive/HiveCatalog.java  |  15 ++-
 .../flink/table/catalog/hive/client/HiveShim.java  |  50 
 .../table/catalog/hive/client/HiveShimLoader.java  |  12 ++
 .../table/catalog/hive/client/HiveShimV120.java|  51 
 .../table/catalog/hive/client/HiveShimV310.java| 114 ++
 .../table/catalog/hive/client/HiveShimV311.java|  26 
 .../table/catalog/hive/client/HiveShimV312.java|  26 
 .../catalog/hive/util/HiveReflectionUtils.java | 131 +
 .../table/functions/hive/HiveGenericUDAF.java  |   3 +-
 .../flink/table/functions/hive/HiveGenericUDF.java |  10 +-
 .../table/functions/hive/HiveGenericUDTF.java  |   9 +-
 .../flink/table/functions/hive/HiveSimpleUDF.java  |   7 +-
 .../functions/hive/conversion/HiveInspectors.java  |  22 ++--
 .../connectors/hive/FlinkStandaloneHiveRunner.java |   6 +-
 .../hive/FlinkStandaloneHiveServerContext.java |   4 +-
 .../connectors/hive/HiveRunnerShimLoader.java  |   3 +
 .../table/functions/hive/HiveGenericUDFTest.java   |  18 +--
 .../table/functions/hive/HiveGenericUDTFTest.java  |   7 +-
 .../table/functions/hive/HiveSimpleUDFTest.java|  29 ++---
 23 files changed, 507 insertions(+), 73 deletions(-)
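
The file list shows the approach: per-version HiveShim classes plus a HiveShimLoader that picks one from the configured Hive version, with newer shims inheriting from older ones. A minimal sketch of that dispatch pattern (class and method names simplified and hypothetical, not the actual loader code):

    // Hypothetical, simplified sketch of version-keyed shim loading.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    interface HiveShim { /* version-dependent metastore calls */ }

    class HiveShimV120 implements HiveShim { }
    class HiveShimV310 extends HiveShimV120 { }  // overrides what changed in 3.1.0
    class HiveShimV311 extends HiveShimV310 { }  // 3.1.1 / 3.1.2 inherit almost everything

    final class ShimLoader {
        private static final Map<String, HiveShim> SHIMS = new ConcurrentHashMap<>();

        static HiveShim load(String hiveVersion) {
            // Cache one shim instance per version string.
            return SHIMS.computeIfAbsent(hiveVersion, v -> {
                if (v.startsWith("3.1")) {
                    return v.equals("3.1.0") ? new HiveShimV310() : new HiveShimV311();
                }
                return new HiveShimV120();  // pre-3.1 default in this sketch
            });
        }
    }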

diff --git a/flink-connectors/flink-connector-hive/pom.xml b/flink-connectors/flink-connector-hive/pom.xml
index 66cbc19..471e045 100644
--- a/flink-connectors/flink-connector-hive/pom.xml
+++ b/flink-connectors/flink-connector-hive/pom.xml
@@ -115,10 +115,6 @@ under the License.
 					<artifactId>commons-lang</artifactId>
 				</exclusion>
 				<exclusion>
-					<groupId>com.zaxxer</groupId>
-					<artifactId>HikariCP</artifactId>
-				</exclusion>
-				<exclusion>
 					<groupId>co.cask.tephra</groupId>
 					<artifactId>tephra-api</artifactId>
 				</exclusion>
@@ -616,7 +612,7 @@ under the License.
 			<scope>test</scope>
 		</dependency>
 
-
+
 			<groupId>org.apache.flink</groupId>
 			<artifactId>flink-test-utils_${scala.binary.version}</artifactId>
 			<version>${project.version}</version>
@@ -671,6 +667,13 @@ under the License.
 		</profile>
 
 		<profile>
+			<id>hive-3.1.1</id>
+			<properties>
+				<hive.version>3.1.1</hive.version>
+			</properties>
+		</profile>
+
+		<profile>
 			<id>skip-hive-tests</id>
 			<properties>
diff --git a/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableFactory.java b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableFactory.java
index 73626d0..235919c 100644
--- a/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableFactory.java
+++ b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableFactory.java
@@ -25,6 +25,8 @@ import org.apache.flink.table.catalog.CatalogTable;
 import org.apache.flink.table.catalog.CatalogTableImpl;
 import org.apache.flink.table.catalog.ObjectPath;
 import org.apache.flink.table.catalog.config.CatalogConfig;
+import org.apache.flink.table.catalog.hive.client.HiveShim;
+import org.apache.flink.table.catalog.hive.client.HiveShimLoader;
 import org.apache.flink.table.catalog.hive.descriptors.HiveCatalogValidator;
 import org.apache.flink.table.factories.FunctionDefinitionFactory;
 import org.apache.flink.table.factories.TableFactoryUtil;
@@ -71,6 +73,7 @@ public class HiveTableFactory
 
 	private final HiveConf hiveConf;
 	private final String hiveVersion;
+	private final HiveShim hiveShim;
 
 	public HiveTableFactory(HiveConf hiveConf) {
 		this.hiveConf = checkNotNull(hiveConf, "hiveConf cannot be null");
@@ -78,6 +81,7 @@ public class HiveTableFactory
 		// this has to come from hiveConf, otherwise we may lose what user specifies in the yaml

[flink] branch release-1.8 updated: [FLINK-13966][licensing] Pin locale for deterministic sort order

2019-09-05 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new ba1cd43  [FLINK-13966][licensing] Pin locale for deterministic sort order
ba1cd43 is described below

commit ba1cd43b6ed42070fd9013159a5b65adb18b3975
Author: Chesnay Schepler 
AuthorDate: Thu Sep 5 09:18:37 2019 +0200

[FLINK-13966][licensing] Pin locale for deterministic sort order
---
 tools/releasing/collect_license_files.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/releasing/collect_license_files.sh b/tools/releasing/collect_license_files.sh
index dad75bc..6da1733 100755
--- a/tools/releasing/collect_license_files.sh
+++ b/tools/releasing/collect_license_files.sh
@@ -48,7 +48,7 @@ done
 
 NOTICE="${DST}/NOTICE"
 [ -f "${NOTICE}" ] && rm "${NOTICE}"
-find "${TMP}" -name "NOTICE" | sort | xargs cat >> "${NOTICE}"
+(export LC_ALL=C; find "${TMP}" -name "NOTICE" | sort | xargs cat >> "${NOTICE}")
 
 LICENSES="${DST}/licenses"
 [ -f "${LICENSES}" ] && rm -r "${LICENSES}"



[flink] branch release-1.9 updated: [FLINK-13966][licensing] Pin locale for deterministic sort order

2019-09-05 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 1dd2611  [FLINK-13966][licensing] Pin locale for deterministic sort order
1dd2611 is described below

commit 1dd2611c0b5446254b91c5cbc6c14ab8ab3f1a2d
Author: Chesnay Schepler 
AuthorDate: Thu Sep 5 09:18:37 2019 +0200

[FLINK-13966][licensing] Pin locale for deterministic sort order
---
 tools/releasing/collect_license_files.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/releasing/collect_license_files.sh b/tools/releasing/collect_license_files.sh
index dad75bc..6da1733 100755
--- a/tools/releasing/collect_license_files.sh
+++ b/tools/releasing/collect_license_files.sh
@@ -48,7 +48,7 @@ done
 
 NOTICE="${DST}/NOTICE"
 [ -f "${NOTICE}" ] && rm "${NOTICE}"
-find "${TMP}" -name "NOTICE" | sort | xargs cat >> "${NOTICE}"
+(export LC_ALL=C; find "${TMP}" -name "NOTICE" | sort | xargs cat >> "${NOTICE}")
 
 LICENSES="${DST}/licenses"
 [ -f "${LICENSES}" ] && rm -r "${LICENSES}"



[flink] branch master updated (7bda6f6 -> 6f8b79e)

2019-09-05 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


 from 7bda6f6  [FLINK-13842][docs] Improve Javadocs and web documentation of the StreamingFileSink
 add 6f8b79e  [FLINK-13966][licensing] Pin locale for deterministic sort order

No new revisions were added by this update.

Summary of changes:
 tools/releasing/collect_license_files.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[flink] branch master updated: [FLINK-13842][docs] Improve Javadocs and web documentation of the StreamingFileSink

2019-09-05 Thread gyfora
This is an automated email from the ASF dual-hosted git repository.

gyfora pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 7bda6f6  [FLINK-13842][docs] Improve Javadocs and web documentation of the StreamingFileSink
7bda6f6 is described below

commit 7bda6f61c9ce0db80f2b1d4c8e7fd6cad9b0eaf9
Author: Gyula Fora 
AuthorDate: Thu Sep 5 16:04:25 2019 +0200

[FLINK-13842][docs] Improve Javadocs and web documentation of the StreamingFileSink

Closes #9530
---
 docs/dev/connectors/streamfile_sink.md | 339 +
 docs/fig/streamfilesink_bucketing.png  | Bin 0 -> 100661 bytes
 .../sink/filesystem/StreamingFileSink.java |  25 ++
 .../rollingpolicies/DefaultRollingPolicy.java  |   6 +
 4 files changed, 316 insertions(+), 54 deletions(-)

diff --git a/docs/dev/connectors/streamfile_sink.md b/docs/dev/connectors/streamfile_sink.md
index 9017447..c59e646 100644
--- a/docs/dev/connectors/streamfile_sink.md
+++ b/docs/dev/connectors/streamfile_sink.md
@@ -23,30 +23,140 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+* This will be replaced by the TOC
+{:toc}
+
 This connector provides a Sink that writes partitioned files to filesystems
 supported by the [Flink `FileSystem` abstraction]({{ site.baseurl}}/ops/filesystems/index.html).
 
-Since in streaming the input is potentially infinite, the streaming file sink writes data
-into buckets. The bucketing behaviour is configurable but a useful default is time-based
-bucketing where we start writing a new bucket every hour and thus get
-individual files that each contain a part of the infinite output stream.
+In order to handle unbounded data streams, the streaming file sink writes incoming data
+into buckets. The bucketing behaviour is fully configurable with a default time-based
+bucketing where we start writing a new bucket every hour and thus get files that correspond to
+records received during certain time intervals from the stream.
+
+The bucket directories themselves contain several part files with the actual output data, with at least
+one for each subtask of the sink that has received data for the bucket. Additional part files will be created according to the configurable
+rolling policy. The default policy rolls files based on size, a timeout that specifies the maximum duration for which a file can be open, and a maximum inactivity timeout after which the file is closed.
+
+
+ IMPORTANT: Checkpointing needs to be enabled when using the StreamingFileSink. Part files can only be finalized
+ on successful checkpoints. If checkpointing is disabled part files will forever stay in `in-progress` or `pending` state
+ and cannot be safely read by downstream systems.
+
+
+
+
+### Bucket Assignment
+
+The bucketing logic defines how the data will be structured into subdirectories inside the base output directory.
+
+Both row and bulk formats use the [DateTimeBucketAssigner]({{ site.javadocs_baseurl }}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/DateTimeBucketAssigner.html) as the default assigner.
+By default the DateTimeBucketAssigner creates hourly buckets based on the system default timezone
+with the following format: `yyyy-MM-dd--HH`. Both the date format (i.e. bucket size) and timezone can be
+configured manually.
+
+We can specify a custom [BucketAssigner]({{ site.javadocs_baseurl }}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketAssigner.html) by calling `.withBucketAssigner(assigner)` on the format builders.
+
+Flink comes with two built in BucketAssigners:
+
+ - [DateTimeBucketAssigner]({{ site.javadocs_baseurl }}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/DateTimeBucketAssigner.html) : Default time based assigner
+ - [BasePathBucketAssigner]({{ site.javadocs_baseurl }}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/BasePathBucketAssigner.html) : Assigner that stores all part files in the base path (single global bucket)
+
+### Rolling Policy
+
+The [RollingPolicy]({{ site.javadocs_baseurl }}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/RollingPolicy.html) defines when a given in-progress part file will be closed and moved to the pending and later to finished state.
+In combination with the checkpointing interval (pending files become finished on the next checkpoint) this controls how quickly
+part files become available for downstream readers and also the size and number of these parts.
+
+Flink comes with two built-in RollingPolicies:
+
+ - [DefaultRollingPolicy]({{ site.javadocs_baseurl }}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/rollingpolicies/DefaultRollingPolicy.html)
+ - [OnCheckpointRollingPolicy]({{ 
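
For reference, the builder-style setup this page documents looks roughly like the following (paths and size/time thresholds are illustrative; the classes are the Flink 1.9 StreamingFileSink API the page's examples use):

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
    import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;
    import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

    import java.util.concurrent.TimeUnit;

    // Row-encoded sink with an hourly bucket assigner and the default rolling policy.
    StreamingFileSink<String> sink = StreamingFileSink
        .forRowFormat(new Path("/base/output"), new SimpleStringEncoder<String>("UTF-8"))
        // hourly buckets in the system default timezone; format (i.e. bucket size) is configurable
        .withBucketAssigner(new DateTimeBucketAssigner<>("yyyy-MM-dd--HH"))
        // roll on part size, maximum open duration, and inactivity
        .withRollingPolicy(
            DefaultRollingPolicy.create()
                .withRolloverInterval(TimeUnit.MINUTES.toMillis(15))
                .withInactivityInterval(TimeUnit.MINUTES.toMillis(5))
                .withMaxPartSize(128 * 1024 * 1024)
                .build())
        .build();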

[flink] branch master updated (663bcc4 -> 3e2f5fa)

2019-09-05 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 663bcc4  [FLINK-13457][build][travis] Remove JDK 9 support
 add 3e2f5fa  [FLINK-13936][licensing] Update NOTICE-binary

No new revisions were added by this update.

Summary of changes:
 NOTICE-binary | 115 --
 licenses-binary/LICENSE.dagre |   2 +-
 2 files changed, 79 insertions(+), 38 deletions(-)



[flink] branch release-1.9 updated: [FLINK-13936][licensing] Update NOTICE-binary

2019-09-05 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 70fa917  [FLINK-13936][licensing] Update NOTICE-binary
70fa917 is described below

commit 70fa917b8867a7b39dd5aa34f5e4fb16516e4b07
Author: Chesnay Schepler 
AuthorDate: Thu Sep 5 15:49:06 2019 +0200

[FLINK-13936][licensing] Update NOTICE-binary
---
 NOTICE-binary | 85 ++-
 licenses-binary/LICENSE.dagre |  2 +-
 2 files changed, 53 insertions(+), 34 deletions(-)

diff --git a/NOTICE-binary b/NOTICE-binary
index 2f12cdc..6b77a51 100644
--- a/NOTICE-binary
+++ b/NOTICE-binary
@@ -4698,7 +4698,6 @@ Copyright 2006-2019 The Apache Software Foundation
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
 
-
 flink-oss-fs-hadoop
 Copyright 2014-2019 The Apache Software Foundation
 
@@ -14295,13 +14294,19 @@ See bundled license files for details.
 - org.jline:jline-terminal:3.9.0
 - org.jline:jline-reader:3.9.0
 
-=============================================================================
-= NOTICE file corresponding to section 4d of the Apache License Version 2.0 =
-=============================================================================
-This product includes software developed by
-Joda.org (http://www.joda.org/).
+// --
+// NOTICE file corresponding to the section 4d of The Apache License,
+// Version 2.0, in this case for Apache Flink
+// --
 
-flink-sql-parser
+Apache Flink
+Copyright 2006-2019 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+
+flink-state-processor-api
 Copyright 2014-2019 The Apache Software Foundation
 
 // --
@@ -16404,18 +16409,12 @@ This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
 
 
-flink-table-uber
-Copyright 2014-2019 The Apache Software Foundation
-
 flink-table-uber-blink
 Copyright 2014-2019 The Apache Software Foundation
 
 flink-table-common
 Copyright 2014-2019 The Apache Software Foundation
 
-flink-sql-parser
-Copyright 2014-2019 The Apache Software Foundation
-
 flink-table-api-java
 Copyright 2014-2019 The Apache Software Foundation
 
@@ -16450,6 +16449,9 @@ See bundled license files for details
 - org.codehaus.janino:janino:3.0.9
 - org.codehaus.janino:commons-compiler:3.0.9
 
+flink-sql-parser
+Copyright 2014-2019 The Apache Software Foundation
+
 Apache Calcite Avatica
 Copyright 2012-2019 The Apache Software Foundation
 
@@ -16509,6 +16511,36 @@ Copyright 2014-2019 The Apache Software Foundation
 flink-cep
 Copyright 2014-2019 The Apache Software Foundation
 
+// --
+// NOTICE file corresponding to the section 4d of The Apache License,
+// Version 2.0, in this case for Apache Flink
+// --
+
+Apache Flink
+Copyright 2006-2019 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+
+flink-table-uber
+Copyright 2014-2019 The Apache Software Foundation
+
+flink-table-common
+Copyright 2014-2019 The Apache Software Foundation
+
+flink-table-api-java
+Copyright 2014-2019 The Apache Software Foundation
+
+flink-table-api-scala
+Copyright 2014-2019 The Apache Software Foundation
+
+flink-table-api-java-bridge
+Copyright 2014-2019 The Apache Software Foundation
+
+flink-table-api-scala-bridge
+Copyright 2014-2019 The Apache Software Foundation
+
 flink-table-planner
 Copyright 2014-2019 The Apache Software Foundation
 
@@ -16531,6 +16563,9 @@ See bundled license files for details
 - org.codehaus.janino:janino:3.0.9
 - org.codehaus.janino:commons-compiler:3.0.9
 
+flink-sql-parser
+Copyright 2014-2019 The Apache Software Foundation
+
 Calcite Core
 Copyright 2012-2019 The Apache Software Foundation
 
@@ -16587,27 +16622,11 @@ FasterXML.com (http://fasterxml.com).
 This product includes software developed by
 Joda.org (http://www.joda.org/).
 
+flink-cep
+Copyright 2014-2019 The Apache Software Foundation
+
 Apache log4j
 Copyright 2007 The Apache Software Foundation
 
 This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/).
-
-// --
-// NOTICE file corresponding to the section 4d of The Apache License,
-// Version 2.0, in this case for Apache Flink
-// --
-
-Apache Flink
-Copyright 2006-2019 The Apache Software Foundation

[flink] branch master updated (3c991dc -> 663bcc4)

2019-09-05 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


 from 3c991dc  [FLINK-13927][docs] Add note about adding Hadoop dependencies to run/debug Flink locally in mini cluster
 add d73e25b  [FLINK-13528][kafka] Disable tests failing on Java 11
 add 4016157  [FLINK-13515][tests] Disable test on Java 11
 add a65bd5e  [FLINK-13516][tests] Disable test on Java 11
 add 95881a9  [FLINK-13893][s3] Add jaxb-api dependency on Java 11
 add 9d6f1e6  [FLINK-13457][travis] Setup JDK 11 builds
 add 663bcc4  [FLINK-13457][build][travis] Remove JDK 9 support

No new revisions were added by this update.

Summary of changes:
 .travis.yml| 86 --
 flink-connectors/flink-connector-cassandra/pom.xml | 21 --
 flink-connectors/flink-connector-hive/pom.xml  | 19 -
 .../flink-connector-kafka-0.10/pom.xml | 14 
 .../flink-connector-kafka-0.11/pom.xml | 14 
 .../kafka/KafkaShortRetention08ITCase.java |  4 +
 .../connectors/kafka/Kafka09SecuredRunITCase.java  |  3 +
 flink-end-to-end-tests/run-pre-commit-tests.sh |  8 +-
 flink-filesystems/flink-s3-fs-hadoop/pom.xml   | 22 ++
 flink-filesystems/flink-s3-fs-presto/pom.xml   | 21 ++
 .../flink/test/classloading/ClassLoaderITCase.java |  3 +
 .../flink/yarn/YARNSessionFIFOSecuredITCase.java   |  3 +
 pom.xml| 65 
 tools/travis/stage.sh  | 11 ---
 tools/travis_watchdog.sh   | 40 +-
 15 files changed, 125 insertions(+), 209 deletions(-)



[flink-web] 01/02: [FLINK-13821] Add missing foundation links & add events section

2019-09-05 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit ce569aecbe33f6cf5697d9e8e6214e2011e04875
Author: Robert Metzger 
AuthorDate: Wed Sep 4 19:07:49 2019 +0200

[FLINK-13821] Add missing foundation links & add events section

This closes #261.
---
 _includes/navbar.html |  23 +++
 img/flink-forward.png | Bin 0 -> 19207 bytes
 index.md  |  27 ++-
 index.zh.md   |  24 
 4 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/_includes/navbar.html b/_includes/navbar.html
index 7df4993..6516e2d 100755
--- a/_includes/navbar.html
+++ b/_includes/navbar.html
@@ -147,6 +147,29 @@
 
 Plan Visualizer 
 
+
+
+<a href="https://apache.org" target="_blank">Apache Software Foundation</a>
+
+
+  <style>
+.smalllinks {
+  display: inline !important;
+}
+.smalllinks:hover {
+  background: none !important;
+}
+  </style>
+
+  <a href="https://www.apache.org/licenses/" target="_blank">Licenses</a>
+
+  <a href="https://www.apache.org/security/" target="_blank">Security</a>
+
+  <a href="https://www.apache.org/foundation/sponsorship.html" target="_blank">Donate</a>
+
+  <a href="https://www.apache.org/foundation/thanks.html" target="_blank">Thanks</a>
+
+
   
 
 
diff --git a/img/flink-forward.png b/img/flink-forward.png
new file mode 100644
index 000..9dab0fb
Binary files /dev/null and b/img/flink-forward.png differ
diff --git a/index.md b/index.md
index f7a2fe2..876d494 100644
--- a/index.md
+++ b/index.md
@@ -6,7 +6,7 @@ layout: base
 
   
 
-  **Apache Flink® - Stateful Computations over Data Streams**
+  **Apache Flink® — Stateful Computations over Data Streams**
 
   
 
@@ -310,6 +310,31 @@ layout: base
 
 
 
+
+
+
+
+  
+
+
+
+
+  Upcoming Events
+
+
+
+  
+  <a href="https://flink-forward.org" target="_blank">
+
+  </a>
+  
+  <a href="https://events.apache.org/x/current-event.html" target="_blank">
+    <img src="https://www.apache.org/events/current-event-234x60.png" alt="ApacheCon"/>
+  </a>
+
+
+
+
 
 
 
diff --git a/index.zh.md b/index.zh.md
index 1a66609..8a96b47 100644
--- a/index.zh.md
+++ b/index.zh.md
@@ -304,6 +304,30 @@ layout: base
 
 
 
+
+
+
+
+  
+
+
+
+
+  Upcoming Events
+
+
+
+  
+  <a href="https://flink-forward.org" target="_blank">
+
+  </a>
+  
+  <a href="https://events.apache.org/x/current-event.html" target="_blank">
+    <img src="https://www.apache.org/events/current-event-234x60.png" alt="ApacheCon"/>
+  </a>
+
+
+
 
 
 



[flink-web] branch asf-site updated (6b0ffa7 -> 8b4af0c)

2019-09-05 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 6b0ffa7  Rebuild website
 new ce569ae  [FLINK-13821] Add missing foundation links & add events section
 new 8b4af0c  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _includes/navbar.html  |  23 ++
 content/2019/05/03/pulsar-flink.html   |  23 ++
 content/2019/05/14/temporal-tables.html|  23 ++
 content/2019/05/19/state-ttl.html  |  23 ++
 content/2019/06/05/flink-network-stack.html|  23 ++
 content/2019/06/26/broadcast-state.html|  23 ++
 content/2019/07/23/flink-network-stack-2.html  |  23 ++
 content/blog/index.html|  23 ++
 content/blog/page2/index.html  |  23 ++
 content/blog/page3/index.html  |  23 ++
 content/blog/page4/index.html  |  23 ++
 content/blog/page5/index.html  |  23 ++
 content/blog/page6/index.html  |  23 ++
 content/blog/page7/index.html  |  23 ++
 content/blog/page8/index.html  |  23 ++
 content/blog/page9/index.html  |  23 ++
 .../blog/release_1.0.0-changelog_known_issues.html |  23 ++
 content/blog/release_1.1.0-changelog.html  |  23 ++
 content/blog/release_1.2.0-changelog.html  |  23 ++
 content/blog/release_1.3.0-changelog.html  |  23 ++
 content/community.html |  23 ++
 .../code-style-and-quality-common.html |  23 ++
 .../code-style-and-quality-components.html |  23 ++
 .../code-style-and-quality-formatting.html |  23 ++
 .../contributing/code-style-and-quality-java.html  |  23 ++
 .../code-style-and-quality-preamble.html   |  23 ++
 .../code-style-and-quality-pull-requests.html  |  23 ++
 .../contributing/code-style-and-quality-scala.html |  23 ++
 content/contributing/contribute-code.html  |  23 ++
 content/contributing/contribute-documentation.html |  23 ++
 content/contributing/how-to-contribute.html|  23 ++
 content/contributing/improve-website.html  |  23 ++
 content/contributing/reviewing-prs.html|  23 ++
 content/documentation.html |  23 ++
 content/downloads.html |  23 ++
 content/ecosystem.html |  23 ++
 content/faq.html   |  23 ++
 .../2017/07/04/flink-rescalable-state.html |  23 ++
 .../2018/01/30/incremental-checkpointing.html  |  23 ++
 .../01/end-to-end-exactly-once-apache-flink.html   |  23 ++
 .../features/2019/03/11/prometheus-monitoring.html |  23 ++
 content/flink-applications.html|  23 ++
 content/flink-architecture.html|  23 ++
 content/flink-operations.html  |  23 ++
 content/gettinghelp.html   |  23 ++
 content/img/flink-forward.png  | Bin 0 -> 19207 bytes
 content/index.html |  50 -
 content/material.html  |  23 ++
 content/news/2014/08/26/release-0.6.html   |  23 ++
 content/news/2014/09/26/release-0.6.1.html |  23 ++
 content/news/2014/10/03/upcoming_events.html   |  23 ++
 content/news/2014/11/04/release-0.7.0.html |  23 ++
 content/news/2014/11/18/hadoop-compatibility.html  |  23 ++
 content/news/2015/01/06/december-in-flink.html |  23 ++
 content/news/2015/01/21/release-0.8.html   |  23 ++
 content/news/2015/02/04/january-in-flink.html  |  23 ++
 content/news/2015/02/09/streaming-example.html |  23 ++
 .../news/2015/03/02/february-2015-in-flink.html|  23 ++
 .../13/peeking-into-Apache-Flinks-Engine-Room.html |  23 ++
 content/news/2015/04/07/march-in-flink.html|  23 ++
 .../news/2015/04/13/release-0.9.0-milestone1.html  |  23 ++
 .../2015/05/11/Juggling-with-Bits-and-Bytes.html   |  23 ++
 .../news/2015/05/14/Community-update-April.html|  23 ++
 .../24/announcing-apache-flink-0.9.0-release.html  

[flink] branch master updated (5531b23 -> 3c991dc)

2019-09-05 Thread azagrebin
This is an automated email from the ASF dual-hosted git repository.

azagrebin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 5531b23  [FLINK-13892][hs] Harden HistoryServerTest
 add 3c991dc  [FLINK-13927][docs] Add note about adding Hadoop dependencies to run/debug Flink locally in mini cluster

No new revisions were added by this update.

Summary of changes:
 docs/ops/deployment/hadoop.md| 22 ++
 docs/ops/deployment/hadoop.zh.md | 22 ++
 2 files changed, 44 insertions(+)



[flink-web] 01/02: Update Roadmap after the release of Flink 1.9.

2019-09-05 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit af7bdfc71911b461a0579f94a3a9651249621145
Author: Marta Paes Moreira 
AuthorDate: Wed Sep 4 08:22:22 2019 +0200

Update Roadmap after the release of Flink 1.9.

* Remove finished features.
* Add newly started and planned efforts.

Co-Authored-By: Till Rohrmann 

This closes #260.
---
 roadmap.md | 128 ++---
 1 file changed, 55 insertions(+), 73 deletions(-)

diff --git a/roadmap.md b/roadmap.md
index 2993b0b..46a0933 100644
--- a/roadmap.md
+++ b/roadmap.md
@@ -22,17 +22,17 @@ under the License.
 
 
 
-{% toc %}
+{% toc %} 
 
 **Preamble:** This is not an authoritative roadmap in the sense of a strict plan with a specific
-timeline. Rather, we, the community, share our vision for the future and give an overview of the bigger
+timeline. Rather, we — the community — share our vision for the future and give an overview of the bigger
 initiatives that are going on and are receiving attention. This roadmap shall give users and
 contributors an understanding where the project is going and what they can expect to come.
 
 The roadmap is continuously updated. New features and efforts should be added to the roadmap once
 there is consensus that they will happen and what they will roughly look like for the user.
 
-**Last Update:** 2019-05-08
+**Last Update:** 2019-09-04
 
 # Analytics, Applications, and the roles of DataStream, DataSet, and Table API
 
@@ -41,38 +41,36 @@ Flink views stream processing as a [unifying paradigm for data processing]({{ si
 
   - The **Table API / SQL** is becoming the primary API for analytical use cases, in a unified way
 across batch and streaming. To support analytical use cases in a more streamlined fashion,
-the API is extended with additional functions ([FLIP-29](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739)).
+the API is being extended with more convenient multi-row/column operations ([FLIP-29](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739)).
 
-Like SQL, the Table API is *declarative*, operates on a *logical schema*, and applies *automatic optimization*.
+- Like SQL, the Table API is *declarative*, operates on a *logical schema*, and applies *automatic optimization*.
 Because of these properties, that API does not give direct access to time and state.
 
+- The Table API is also the foundation for the Machine Learning (ML) efforts inititated in ([FLIP-39](https://cwiki.apache.org/confluence/display/FLINK/FLIP-39+Flink+ML+pipeline+and+ML+libs)), that will allow users to easily build, persist and serve ([FLINK-13167](https://issues.apache.org/jira/browse/FLINK-13167)) ML pipelines/workflows through a set of abstract core interfaces.
+
   - The **DataStream API** is the primary API for data-driven applications and data pipelines.
 It uses *physical data types* (Java/Scala classes) and there is no automatic rewriting.
-The applications have explicit control over *time* and *state* (state, triggers, proc. fun.).
-
-In the long run, the DataStream API should fully subsume the DataSet API through *bounded streams*.
+The applications have explicit control over *time* and *state* (state, triggers, proc fun.). 
+In the long run, the DataStream API will fully subsume the DataSet API through *bounded streams*.
 
 # Batch and Streaming Unification
 
-Flink's approach is to cover batch and streaming by the same APIs, on a streaming runtime.
+Flink's approach is to cover batch and streaming by the same APIs on a streaming runtime.
 [This blog post]({{ site.baseurl }}/news/2019/02/13/unified-batch-streaming-blink.html)
-gives an introduction to the unification effort. 
+gives an introduction to the unification effort.
 
 The biggest user-facing parts currently ongoing are:
 
-  - Table API restructuring [FLIP-32](https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions)
-that decouples the Table API from batch/streaming specific environments and dependencies.
+  - Table API restructuring ([FLIP-32](https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions))
+that decouples the Table API from batch/streaming specific environments and dependencies. Some key parts of the FLIP are completed, such as the modular decoupling of expression parsing and the removal of Scala dependencies, and the next step is to unify the function stack ([FLINK-12710](https://issues.apache.org/jira/browse/FLINK-12710)).
+
+  - The new source interfaces generalize across batch and streaming, making every connector usable as a batch and streaming data source 

[flink-web] 02/02: Rebuild website

2019-09-05 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 6b0ffa71a32211eb9ad5f2c9cab5162e88db6fcd
Author: Fabian Hueske 
AuthorDate: Thu Sep 5 11:35:56 2019 +0200

Rebuild website
---
 content/roadmap.html | 147 +--
 1 file changed, 71 insertions(+), 76 deletions(-)

diff --git a/content/roadmap.html b/content/roadmap.html
index 18cbcc8..98b983a 100644
--- a/content/roadmap.html
+++ b/content/roadmap.html
@@ -184,23 +184,25 @@ under the License.
   Batch and Streaming Unification
   Fast Batch (Bounded Streams)
   Stream Processing Use Cases
-  Deployment, Scaling, Security
+  Deployment, Scaling and Security
+  Resource Management and Configuration
   Ecosystem
-  Connectors & Formats
+  Non-JVM Languages (Python)
+  Connectors and Formats
   Miscellaneous
 
 
 
 
 Preamble: This is not an authoritative roadmap in the sense of a strict plan with a specific
-timeline. Rather, we, the community, share our vision for the future and give an overview of the bigger
+timeline. Rather, we — the community — share our vision for the future and give an overview of the bigger
 initiatives that are going on and are receiving attention. This roadmap shall give users and
 contributors an understanding where the project is going and what they can expect to come.
 
 The roadmap is continuously updated. New features and efforts should be added to the roadmap once
 there is consensus that they will happen and what they will roughly look like for the user.
 
-Last Update: 2019-05-08
+Last Update: 2019-09-04
 
 Analytics, Applications, and the roles of DataStream, DataSet, and Table API
 
@@ -211,23 +213,29 @@ there is consensus that they will happen and what they will roughly look like fo
   
 The Table API / SQL is becoming the primary API for analytical use cases, in a unified way
 across batch and streaming. To support analytical use cases in a more streamlined fashion,
-the API is extended with additional functions (<a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739">FLIP-29</a>).
+the API is being extended with more convenient multi-row/column operations (<a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739">FLIP-29</a>).
 
-Like SQL, the Table API is declarative, operates on a logical schema, and applies automatic optimization.
+
+  
+Like SQL, the Table API is declarative, operates on a logical schema, and applies automatic optimization.
 Because of these properties, that API does not give direct access to time and state.
+  
+  
+The Table API is also the foundation for the Machine Learning (ML) efforts inititated in (<a href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-39+Flink+ML+pipeline+and+ML+libs">FLIP-39</a>), that will allow users to easily build, persist and serve (<a href="https://issues.apache.org/jira/browse/FLINK-13167">FLINK-13167</a>) ML pipelines/workflows through a set of abstract core interfaces.
+  
+
   
   
 The DataStream API is the primary API for data-driven applications and data pipelines.
 It uses physical data types (Java/Scala classes) and there is no automatic rewriting.
-The applications have explicit control over time and state (state, triggers, proc. fun.).
-
-In the long run, the DataStream API should fully subsume the DataSet API through bounded streams.
+The applications have explicit control over time and state (state, triggers, proc fun.). 
+In the long run, the DataStream API will fully subsume the DataSet API through bounded streams.
   
 
 
 Batch and Streaming Unification
 
-Flink’s approach is to cover batch and streaming by the same APIs, on a streaming runtime.
+Flink’s approach is to cover batch and streaming by the same APIs on a streaming runtime.
 This blog post
 gives an introduction to the unification effort.
 
@@ -235,23 +243,20 @@ gives an introduction to the unification effort.
 
 
   
-Table API restructuring <a href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions">FLIP-32</a>
-that decouples the Table API from batch/streaming specific environments and dependencies.
+Table API restructuring (<a href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions">FLIP-32</a>)
+that decouples the Table API from batch/streaming specific environments and dependencies. Some key parts of the FLIP are completed, such as the modular decoupling of expression parsing and the removal of Scala dependencies, and the next step is to unify the function stack (<a href="https://issues.apache.org/jira/browse/FLINK-12710">FLINK-12710</a>).
   
   
-The new source interfaces <a href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface">FLIP-27</a>
-generalize across batch and streaming, making 

[flink-web] branch asf-site updated (e63933b -> 6b0ffa7)

2019-09-05 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from e63933b  Rebuild website
 new af7bdfc  Update Roadmap after the release of Flink 1.9.
 new 6b0ffa7  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 content/roadmap.html | 147 +--
 roadmap.md   | 128 +++-
 2 files changed, 126 insertions(+), 149 deletions(-)



[flink] branch release-1.8 updated: [FLINK-13892][hs] Harden HistoryServerTest

2019-09-05 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new d540794  [FLINK-13892][hs] Harden HistoryServerTest

commit d5407947f23f7a6a9e8a8dbc7d2e78ea6257b7a8
Author: Chesnay Schepler 
AuthorDate: Thu Aug 29 13:15:08 2019 +0200

[FLINK-13892][hs] Harden HistoryServerTest
---
 .../flink/runtime/webmonitor/history/HistoryServer.java  |  6 +++---
 .../webmonitor/history/HistoryServerArchiveFetcher.java  | 12 ++--
 .../flink/runtime/webmonitor/history/HistoryServerTest.java  |  7 ---
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java b/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java
index a93fe93..f1a5330 100644
--- a/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java
+++ b/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java
@@ -140,9 +140,9 @@ public class HistoryServer {
 		this(config, new CountDownLatch(0));
 	}
 
-	public HistoryServer(Configuration config, CountDownLatch numFinishedPolls) throws IOException, FlinkException {
+	public HistoryServer(Configuration config, CountDownLatch numArchivedJobs) throws IOException, FlinkException {
 		Preconditions.checkNotNull(config);
-		Preconditions.checkNotNull(numFinishedPolls);
+		Preconditions.checkNotNull(numArchivedJobs);
 
 		this.config = config;
 		if (config.getBoolean(HistoryServerOptions.HISTORY_SERVER_WEB_SSL_ENABLED) && SSLUtils.isRestSSLEnabled(config)) {
@@ -187,7 +187,7 @@ public class HistoryServer {
 		}
 
 		long refreshIntervalMillis = config.getLong(HistoryServerOptions.HISTORY_SERVER_ARCHIVE_REFRESH_INTERVAL);
-		archiveFetcher = new HistoryServerArchiveFetcher(refreshIntervalMillis, refreshDirs, webDir, numFinishedPolls);
+		archiveFetcher = new HistoryServerArchiveFetcher(refreshIntervalMillis, refreshDirs, webDir, numArchivedJobs);
 
 		this.shutdownHook = ShutdownHookUtil.addShutdownHook(
 			HistoryServer.this::stop,
diff --git a/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java b/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java
index f95b14c..47888cd 100644
--- a/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java
+++ b/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java
@@ -79,9 +79,9 @@ class HistoryServerArchiveFetcher {
 	private final JobArchiveFetcherTask fetcherTask;
 	private final long refreshIntervalMillis;
 
-	HistoryServerArchiveFetcher(long refreshIntervalMillis, List<HistoryServer.RefreshLocation> refreshDirs, File webDir, CountDownLatch numFinishedPolls) {
+	HistoryServerArchiveFetcher(long refreshIntervalMillis, List<HistoryServer.RefreshLocation> refreshDirs, File webDir, CountDownLatch numArchivedJobs) {
 		this.refreshIntervalMillis = refreshIntervalMillis;
-		this.fetcherTask = new JobArchiveFetcherTask(refreshDirs, webDir, numFinishedPolls);
+		this.fetcherTask = new JobArchiveFetcherTask(refreshDirs, webDir, numArchivedJobs);
 		if (LOG.isInfoEnabled()) {
 			for (HistoryServer.RefreshLocation refreshDir : refreshDirs) {
 				LOG.info("Monitoring directory {} for archived jobs.", refreshDir.getPath());
@@ -112,7 +112,7 @@ class HistoryServerArchiveFetcher {
 	static class JobArchiveFetcherTask extends TimerTask {
 
 		private final List<HistoryServer.RefreshLocation> refreshDirs;
-		private final CountDownLatch numFinishedPolls;
+		private final CountDownLatch numArchivedJobs;
 
 		/** Cache of all available jobs identified by their id. */
 		private final Set<String> cachedArchives;
@@ -123,9 +123,9 @@ class HistoryServerArchiveFetcher {
 
 		private static final String JSON_FILE_ENDING = ".json";
 
-		JobArchiveFetcherTask(List<HistoryServer.RefreshLocation> refreshDirs, File webDir, CountDownLatch numFinishedPolls) {
+		JobArchiveFetcherTask(List<HistoryServer.RefreshLocation> refreshDirs, File webDir, CountDownLatch numArchivedJobs) {
 			this.refreshDirs = checkNotNull(refreshDirs);
-			this.numFinishedPolls = numFinishedPolls;
+			this.numArchivedJobs = numArchivedJobs;
 			this.cachedArchives = new HashSet<>();
[flink] branch release-1.9 updated: [FLINK-13892][hs] Harden HistoryServerTest

2019-09-05 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 2f26f89  [FLINK-13892][hs] Harden HistoryServerTest

commit 2f26f894be9905efa5cc90e28479ef8d96a4fc8d
Author: Chesnay Schepler 
AuthorDate: Thu Aug 29 13:15:08 2019 +0200

[FLINK-13892][hs] Harden HistoryServerTest
---
 .../flink/runtime/webmonitor/history/HistoryServer.java  |  6 +++---
 .../webmonitor/history/HistoryServerArchiveFetcher.java  | 12 ++--
 .../flink/runtime/webmonitor/history/HistoryServerTest.java  |  7 ---
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java b/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java
index e907836..37407bb 100644
--- a/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java
+++ b/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java
@@ -137,9 +137,9 @@ public class HistoryServer {
 		this(config, new CountDownLatch(0));
 	}
 
-	public HistoryServer(Configuration config, CountDownLatch numFinishedPolls) throws IOException, FlinkException {
+	public HistoryServer(Configuration config, CountDownLatch numArchivedJobs) throws IOException, FlinkException {
 		Preconditions.checkNotNull(config);
-		Preconditions.checkNotNull(numFinishedPolls);
+		Preconditions.checkNotNull(numArchivedJobs);
 
 		this.config = config;
 		if (HistoryServerUtils.isSSLEnabled(config)) {
@@ -184,7 +184,7 @@ public class HistoryServer {
 		}
 
 		long refreshIntervalMillis = config.getLong(HistoryServerOptions.HISTORY_SERVER_ARCHIVE_REFRESH_INTERVAL);
-		archiveFetcher = new HistoryServerArchiveFetcher(refreshIntervalMillis, refreshDirs, webDir, numFinishedPolls);
+		archiveFetcher = new HistoryServerArchiveFetcher(refreshIntervalMillis, refreshDirs, webDir, numArchivedJobs);
 
 		this.shutdownHook = ShutdownHookUtil.addShutdownHook(
 			HistoryServer.this::stop,
diff --git a/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java b/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java
index f95b14c..47888cd 100644
--- a/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java
+++ b/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java
@@ -79,9 +79,9 @@ class HistoryServerArchiveFetcher {
 	private final JobArchiveFetcherTask fetcherTask;
 	private final long refreshIntervalMillis;
 
-	HistoryServerArchiveFetcher(long refreshIntervalMillis, List<HistoryServer.RefreshLocation> refreshDirs, File webDir, CountDownLatch numFinishedPolls) {
+	HistoryServerArchiveFetcher(long refreshIntervalMillis, List<HistoryServer.RefreshLocation> refreshDirs, File webDir, CountDownLatch numArchivedJobs) {
 		this.refreshIntervalMillis = refreshIntervalMillis;
-		this.fetcherTask = new JobArchiveFetcherTask(refreshDirs, webDir, numFinishedPolls);
+		this.fetcherTask = new JobArchiveFetcherTask(refreshDirs, webDir, numArchivedJobs);
 		if (LOG.isInfoEnabled()) {
 			for (HistoryServer.RefreshLocation refreshDir : refreshDirs) {
 				LOG.info("Monitoring directory {} for archived jobs.", refreshDir.getPath());
@@ -112,7 +112,7 @@ class HistoryServerArchiveFetcher {
 	static class JobArchiveFetcherTask extends TimerTask {
 
 		private final List<HistoryServer.RefreshLocation> refreshDirs;
-		private final CountDownLatch numFinishedPolls;
+		private final CountDownLatch numArchivedJobs;
 
 		/** Cache of all available jobs identified by their id. */
 		private final Set<String> cachedArchives;
@@ -123,9 +123,9 @@ class HistoryServerArchiveFetcher {
 
 		private static final String JSON_FILE_ENDING = ".json";
 
-		JobArchiveFetcherTask(List<HistoryServer.RefreshLocation> refreshDirs, File webDir, CountDownLatch numFinishedPolls) {
+		JobArchiveFetcherTask(List<HistoryServer.RefreshLocation> refreshDirs, File webDir, CountDownLatch numArchivedJobs) {
 			this.refreshDirs = checkNotNull(refreshDirs);
-			this.numFinishedPolls = numFinishedPolls;
+			this.numArchivedJobs = numArchivedJobs;
 			this.cachedArchives = new HashSet<>();
 			this.webDir = checkNotNull(webDir);

[flink] branch master updated (e0759c3 -> 5531b23)

2019-09-05 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


 from e0759c3  [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
 add 5531b23  [FLINK-13892][hs] Harden HistoryServerTest

No new revisions were added by this update.

Summary of changes:
 .../flink/runtime/webmonitor/history/HistoryServer.java  |  6 +++---
 .../webmonitor/history/HistoryServerArchiveFetcher.java  | 12 ++--
 .../flink/runtime/webmonitor/history/HistoryServerTest.java  |  7 ---
 3 files changed, 13 insertions(+), 12 deletions(-)
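
The rename reflects what the latch actually counts: the fetcher counts it down once per archived job it has processed, and the test awaits that instead of counting poll iterations. A schematic of the pattern (hypothetical test fragment, not the actual HistoryServerTest):

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    // Expect exactly one job archive to be fetched.
    CountDownLatch numArchivedJobs = new CountDownLatch(1);
    HistoryServer hs = new HistoryServer(config, numArchivedJobs);
    hs.start();
    // The archive fetcher counts the latch down after a job's JSON files are written,
    // so waiting here replaces fragile sleep/poll loops:
    assertTrue(numArchivedJobs.await(10L, TimeUnit.SECONDS));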