buildbot success in on flink-docs-release-1.5

2019-08-22 Thread buildbot
The Buildbot has detected a restored build on builder flink-docs-release-1.5 
while building . Full details are available at:
https://ci.apache.org/builders/flink-docs-release-1.5/builds/467

Buildbot URL: https://ci.apache.org/

Buildslave for this Build: bb_slave2_ubuntu

Build Reason: The Nightly scheduler named 'flink-nightly-docs-release-1.5' 
triggered this build
Build Source Stamp: [branch release-1.5] HEAD
Blamelist: 

Build succeeded!

Sincerely,
 -The Buildbot





[flink] branch master updated: [FLINK-13354][docs] Add documentation for how to use blink planner

2019-08-22 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 57bb00a  [FLINK-13354][docs] Add documentation for how to use blink 
planner
57bb00a is described below

commit 57bb00a19f296312b3c06599901f4ef529d2feaf
Author: godfreyhe 
AuthorDate: Mon Aug 5 21:36:38 2019 +0800

[FLINK-13354][docs] Add documentation for how to use blink planner

This closes #9362
---
 docs/dev/table/common.md| 521 +--
 docs/dev/table/common.zh.md | 529 ++--
 2 files changed, 822 insertions(+), 228 deletions(-)

diff --git a/docs/dev/table/common.md b/docs/dev/table/common.md
index 1f496ec..c8523e4 100644
--- a/docs/dev/table/common.md
+++ b/docs/dev/table/common.md
@@ -27,6 +27,19 @@ The Table API and SQL are integrated in a joint API. The 
central concept of this
 * This will be replaced by the TOC
 {:toc}
 
+Main Differences Between the Two Planners
+-----------------------------------------
+
+1. Blink treats batch jobs as a special case of streaming. As such, the conversion between `Table` and `DataSet` is not supported, and batch jobs are not translated into `DataSet` programs but into `DataStream` programs, the same as streaming jobs.
+2. The Blink planner does not support `BatchTableSource`; use a bounded `StreamTableSource` instead.
+3. The Blink planner only supports the new `Catalog` API and does not support `ExternalCatalog`, which is deprecated.
+4. The implementations of `FilterableTableSource` for the old planner and the Blink planner are incompatible. The old planner pushes down `PlannerExpression`s into `FilterableTableSource`, while the Blink planner pushes down `Expression`s.
+5. String-based key-value configuration options (see the documentation on [Configuration]({{ site.baseurl }}/dev/table/config.html) for details) are used only by the Blink planner.
+6. The implementations (`CalciteConfig`) of `PlannerConfig` differ between the two planners.
+7. The Blink planner optimizes multiple sinks into a single DAG (supported only on `TableEnvironment`, not on `StreamTableEnvironment`), while the old planner always optimizes each sink into a separate DAG, all independent of each other.
+8. The old planner does not support catalog statistics yet, while the Blink planner does.
+
+
 Structure of Table API and SQL Programs
 ---
 
@@ -35,12 +48,9 @@ All Table API and SQL programs for batch and streaming 
follow the same pattern.
 
 
 {% highlight java %}
-// for batch programs use ExecutionEnvironment instead of 
StreamExecutionEnvironment
-StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
 
-// create a TableEnvironment
-// for batch programs use BatchTableEnvironment instead of 
StreamTableEnvironment
-StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
+// create a TableEnvironment for a specific planner, batch or streaming
+TableEnvironment tableEnv = ...; // see "Create a TableEnvironment" section
 
 // register a Table
 tableEnv.registerTable("table1", ...)// or
@@ -57,18 +67,16 @@ Table sqlResult  = tableEnv.sqlQuery("SELECT ... FROM 
table2 ... ");
 tapiResult.insertInto("outputTable");
 
 // execute
-env.execute();
+tableEnv.execute("java_job");
 
 {% endhighlight %}
 
 
 
 {% highlight scala %}
-// for batch programs use ExecutionEnvironment instead of 
StreamExecutionEnvironment
-val env = StreamExecutionEnvironment.getExecutionEnvironment
 
-// create a TableEnvironment
-val tableEnv = StreamTableEnvironment.create(env)
+// create a TableEnvironment for a specific planner, batch or streaming
+val tableEnv = ... // see "Create a TableEnvironment" section
 
 // register a Table
 tableEnv.registerTable("table1", ...)   // or
@@ -85,18 +93,16 @@ val sqlResult  = tableEnv.sqlQuery("SELECT ... FROM table2 
...")
 tapiResult.insertInto("outputTable")
 
 // execute
-env.execute()
+tableEnv.execute("scala_job")
 
 {% endhighlight %}
 
 
 
 {% highlight python %}
-# for batch programs use ExecutionEnvironment instead of 
StreamExecutionEnvironment
-env = StreamExecutionEnvironment.get_execution_environment()
 
-# create a TableEnvironment
-table_env = StreamTableEnvironment.create(env)
+# create a TableEnvironment for a specific planner, batch or streaming
+table_env = ... # see "Create a TableEnvironment" section
 
 # register a Table
 table_env.register_table("table1", ...)   # or
@@ -140,82 +146,153 @@ A `Table` is always bound to a specific 
`TableEnvironment`. It is not possible t
 
 A `TableEnvironment` is created by calling the static 
`BatchTableEnvironment.create()` or `StreamTableEnvironment.create()` method 
with a `StreamExecutionEnvironment` or an 

[flink] branch release-1.9 updated: [FLINK-13354] [docs] Add documentation for how to use blink planner

2019-08-22 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 1dce494  [FLINK-13354] [docs] Add documentation for how to use blink 
planner
1dce494 is described below

commit 1dce49445263cca3733c3923c129c7c3c1f9b581
Author: godfreyhe 
AuthorDate: Fri Aug 16 15:28:49 2019 +0800

[FLINK-13354] [docs] Add documentation for how to use blink planner
---
 docs/dev/table/common.md| 527 +--
 docs/dev/table/common.zh.md | 535 ++--
 2 files changed, 830 insertions(+), 232 deletions(-)

diff --git a/docs/dev/table/common.md b/docs/dev/table/common.md
index 1e786d1..41acfc6 100644
--- a/docs/dev/table/common.md
+++ b/docs/dev/table/common.md
@@ -27,6 +27,19 @@ The Table API and SQL are integrated in a joint API. The 
central concept of this
 * This will be replaced by the TOC
 {:toc}
 
+Main Differences Between the Two Planners
+-----------------------------------------
+
+1. Blink treats batch jobs as a special case of streaming. As such, the conversion between `Table` and `DataSet` is not supported, and batch jobs are not translated into `DataSet` programs but into `DataStream` programs, the same as streaming jobs.
+2. The Blink planner does not support `BatchTableSource`; use a bounded `StreamTableSource` instead.
+3. The Blink planner only supports the new `Catalog` API and does not support `ExternalCatalog`, which is deprecated.
+4. The implementations of `FilterableTableSource` for the old planner and the Blink planner are incompatible. The old planner pushes down `PlannerExpression`s into `FilterableTableSource`, while the Blink planner pushes down `Expression`s.
+5. String-based key-value configuration options (see the documentation on [Configuration]({{ site.baseurl }}/dev/table/config.html) for details) are used only by the Blink planner.
+6. The implementations (`CalciteConfig`) of `PlannerConfig` differ between the two planners.
+7. The Blink planner optimizes multiple sinks into a single DAG (supported only on `TableEnvironment`, not on `StreamTableEnvironment`), while the old planner always optimizes each sink into a separate DAG, all independent of each other.
+8. The old planner does not support catalog statistics yet, while the Blink planner does.
+
+
 Structure of Table API and SQL Programs
 ---
 
@@ -35,12 +48,9 @@ All Table API and SQL programs for batch and streaming 
follow the same pattern.
 
 
 {% highlight java %}
-// for batch programs use ExecutionEnvironment instead of 
StreamExecutionEnvironment
-StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
 
-// create a TableEnvironment
-// for batch programs use BatchTableEnvironment instead of 
StreamTableEnvironment
-StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
+// create a TableEnvironment for a specific planner, batch or streaming
+TableEnvironment tableEnv = ...; // see "Create a TableEnvironment" section
 
 // register a Table
 tableEnv.registerTable("table1", ...)// or
@@ -58,18 +68,16 @@ Table sqlResult  = tableEnv.sqlQuery("SELECT ... FROM 
table2 ... ");
 tapiResult.insertInto("outputTable");
 
 // execute
-env.execute();
+tableEnv.execute("java_job");
 
 {% endhighlight %}
 
 
 
 {% highlight scala %}
-// for batch programs use ExecutionEnvironment instead of 
StreamExecutionEnvironment
-val env = StreamExecutionEnvironment.getExecutionEnvironment
 
-// create a TableEnvironment
-val tableEnv = StreamTableEnvironment.create(env)
+// create a TableEnvironment for a specific planner, batch or streaming
+val tableEnv = ... // see "Create a TableEnvironment" section
 
 // register a Table
 tableEnv.registerTable("table1", ...)   // or
@@ -87,18 +95,16 @@ val sqlResult  = tableEnv.sqlQuery("SELECT ... FROM table2 
...")
 tapiResult.insertInto("outputTable")
 
 // execute
-env.execute()
+tableEnv.execute("scala_job")
 
 {% endhighlight %}
 
 
 
 {% highlight python %}
-# for batch programs use ExecutionEnvironment instead of 
StreamExecutionEnvironment
-env = StreamExecutionEnvironment.get_execution_environment()
 
-# create a TableEnvironment
-table_env = StreamTableEnvironment.create(env)
+# create a TableEnvironment for a specific planner, batch or streaming
+table_env = ... # see "Create a TableEnvironment" section
 
 # register a Table
 table_env.register_table("table1", ...)   # or
@@ -142,82 +148,153 @@ A `Table` is always bound to a specific 
`TableEnvironment`. It is not possible t
 
 A `TableEnvironment` is created by calling the static 
`BatchTableEnvironment.create()` or `StreamTableEnvironment.create()` method 
with a `StreamExecutionEnvironment` or an `ExecutionEnvironment` and an 

[flink] branch master updated: [hotfix][table-blink] Ignore tests in GroupWindowTableAggregateITCase and TableAggregateITCase

2019-08-22 Thread hequn
This is an automated email from the ASF dual-hosted git repository.

hequn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new b6645c0  [hotfix][table-blink] Ignore tests in 
GroupWindowTableAggregateITCase and TableAggregateITCase
b6645c0 is described below

commit b6645c0abcbbe6a94586916772b15bea15818d95
Author: hequn8128 
AuthorDate: Fri Aug 23 10:48:39 2019 +0800

[hotfix][table-blink] Ignore tests in GroupWindowTableAggregateITCase and 
TableAggregateITCase

Ignore the tests for the time being and recover them when the bug is fixed in blink-planner
---
 .../planner/runtime/stream/table/GroupWindowTableAggregateITCase.scala | 3 ++-
 .../table/planner/runtime/stream/table/TableAggregateITCase.scala  | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git 
a/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/table/GroupWindowTableAggregateITCase.scala
 
b/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/table/GroupWindowTableAggregateITCase.scala
index 456fa32..31dd9ae 100644
--- 
a/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/table/GroupWindowTableAggregateITCase.scala
+++ 
b/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/table/GroupWindowTableAggregateITCase.scala
@@ -31,10 +31,11 @@ import org.apache.flink.table.planner.utils.Top3
 import org.apache.flink.types.Row
 import org.apache.flink.table.planner.runtime.utils.TestData._
 import org.junit.Assert._
-import org.junit.Test
+import org.junit.{Ignore, Test}
 import org.junit.runner.RunWith
 import org.junit.runners.Parameterized
 
+@Ignore("Remove this ignore when FLINK-13740 is solved.")
 @RunWith(classOf[Parameterized])
 class GroupWindowTableAggregateITCase(mode: StateBackendMode)
   extends StreamingWithStateTestBase(mode) {
diff --git 
a/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/table/TableAggregateITCase.scala
 
b/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/table/TableAggregateITCase.scala
index 7b18b96..e53e1bc 100644
--- 
a/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/table/TableAggregateITCase.scala
+++ 
b/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/table/TableAggregateITCase.scala
@@ -30,11 +30,12 @@ import org.apache.flink.types.Row
 import org.junit.Assert.assertEquals
 import org.junit.runner.RunWith
 import org.junit.runners.Parameterized
-import org.junit.{Before, Test}
+import org.junit.{Before, Ignore, Test}
 
 /**
   * Tests of groupby (without window) table aggregations
   */
+@Ignore("Remove this ignore when FLINK-13740 is solved.")
 @RunWith(classOf[Parameterized])
 class TableAggregateITCase(mode: StateBackendMode) extends 
StreamingWithStateTestBase(mode) {
 



[flink] branch release-1.9 updated: [hotfix][docs] Mark 1.9.0 as stable

2019-08-22 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 22e46c4  [hotfix][docs] Mark 1.9.0 as stable
22e46c4 is described below

commit 22e46c41cad8591c1cbd6a9b00b6958dcb7b91b0
Author: Chesnay Schepler 
AuthorDate: Thu Aug 22 20:47:50 2019 +0200

[hotfix][docs] Mark 1.9.0 as stable
---
 docs/_config.yml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/_config.yml b/docs/_config.yml
index c3c91b2..9aa4182 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -27,12 +27,12 @@
 # we change the version for the complete docs when forking of a release branch
 # etc.
 # The full version string as referenced in Maven (e.g. 1.2.1)
-version: "1.9-SNAPSHOT"
+version: "1.9.0"
 # For stable releases, leave the bugfix version out (e.g. 1.2). For snapshot
 # release this should be the same as the regular version
-version_title: "1.9-SNAPSHOT"
+version_title: "1.9"
 # Branch on Github for this version
-github_branch: "master"
+github_branch: "release-1.9"
 
 # Plain Scala version is needed for e.g. the Gradle quickstart.
 scala_version: "2.11"



[flink] branch release-1.8 updated: [FLINK-13761][scala] Deprecate Scala SplitStream

2019-08-22 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new c6b07c7  [FLINK-13761][scala] Deprecate Scala SplitStream
c6b07c7 is described below

commit c6b07c7d47036c1e23c2eceaeaf9fd3baca3b311
Author: 张志豪 
AuthorDate: Sun Aug 18 21:16:20 2019 +0800

[FLINK-13761][scala] Deprecate Scala SplitStream

Deprecate Scala SplitStream which has been superseded by side outputs.

This closes #9474.
---
 .../main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala| 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala
 
b/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala
index ca4bcc0..16a5a48 100644
--- 
a/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala
+++ 
b/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala
@@ -28,6 +28,7 @@ import org.apache.flink.streaming.api.datastream.{ 
SplitStream => SplitJavaStrea
  * To apply a transformation on the whole output simply call
  * the appropriate method on this stream.
  */
+@deprecated("Please use side outputs instead of split/select", "deprecated 
since 1.8.2")
 @Public
 class SplitStream[T](javaStream: SplitJavaStream[T]) extends 
DataStream[T](javaStream){
 


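For readers migrating off the now-deprecated `SplitStream`, a minimal sketch of the side-output pattern that supersedes `split`/`select` (DataStream API; the tag name "odd-numbers" and the sample data are illustrative, not from the commit):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SideOutputInsteadOfSplit {
    // OutputTag is created as an anonymous subclass so the element type is captured
    private static final OutputTag<Integer> ODD = new OutputTag<Integer>("odd-numbers") {};

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        SingleOutputStreamOperator<Integer> mainStream = env
                .fromElements(1, 2, 3, 4, 5)
                .process(new ProcessFunction<Integer, Integer>() {
                    @Override
                    public void processElement(Integer value, Context ctx, Collector<Integer> out) {
                        if (value % 2 == 0) {
                            out.collect(value);     // even numbers stay on the main stream
                        } else {
                            ctx.output(ODD, value); // odd numbers go to the side output
                        }
                    }
                });

        // retrieve the side output as a separate stream, replacing split()/select()
        DataStream<Integer> oddStream = mainStream.getSideOutput(ODD);

        mainStream.print();
        oddStream.print();
        env.execute("side-output-example");
    }
}
```

Unlike `split`, side outputs can carry a different element type than the main stream, which is the main reason the API superseded it.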

[flink] branch release-1.9 updated: [FLINK-13761][scala] Deprecate Scala SplitStream

2019-08-22 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 36366ac  [FLINK-13761][scala] Deprecate Scala SplitStream
36366ac is described below

commit 36366ac477342c289fc54c6b613ad40189be9168
Author: 张志豪 
AuthorDate: Sun Aug 18 21:16:20 2019 +0800

[FLINK-13761][scala] Deprecate Scala SplitStream

Deprecate Scala SplitStream which has been superseded by side outputs.

This closes #9474.
---
 .../main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala| 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala
 
b/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala
index ca4bcc0..16a5a48 100644
--- 
a/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala
+++ 
b/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala
@@ -28,6 +28,7 @@ import org.apache.flink.streaming.api.datastream.{ 
SplitStream => SplitJavaStrea
  * To apply a transformation on the whole output simply call
  * the appropriate method on this stream.
  */
+@deprecated("Please use side outputs instead of split/select", "deprecated 
since 1.8.2")
 @Public
 class SplitStream[T](javaStream: SplitJavaStream[T]) extends 
DataStream[T](javaStream){
 



[flink] branch master updated (84cbe31 -> 8be6ad1)

2019-08-22 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 84cbe31  [FLINK-13716][docs] Remove Program-related chinese 
documentation
 add 8be6ad1  [FLINK-13761][scala] Deprecate Scala SplitStream

No new revisions were added by this update.

Summary of changes:
 .../main/scala/org/apache/flink/streaming/api/scala/SplitStream.scala| 1 +
 1 file changed, 1 insertion(+)



[flink-web] 01/02: Add a new committer with the ASF id azagrebin

2019-08-22 Thread azagrebin
This is an automated email from the ASF dual-hosted git repository.

azagrebin pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit d61151bbecea7aa9e4c44511bbe84e4d6d76caf2
Author: Andrey Zagrebin 
AuthorDate: Thu Aug 22 15:17:10 2019 +0200

Add a new committer with the ASF id azagrebin
---
 community.md | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/community.md b/community.md
index 6c70099..2f4e689 100644
--- a/community.md
+++ b/community.md
@@ -450,6 +450,12 @@ Flink Forward is a conference happening yearly in 
different locations around the
 Committer
 kurt
   
+  
+<img src="https://avatars0.githubusercontent.com/u/10573485?s=50" class="committer-avatar">
+Andrey Zagrebin
+Committer
+azagrebin
+  
 
 
 



[flink-web] branch asf-site updated (b2439e9 -> 36dd318)

2019-08-22 Thread azagrebin
This is an automated email from the ASF dual-hosted git repository.

azagrebin pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from b2439e9  Minor fix in 1.9 announcement
 new d61151b  Add a new committer with the ASF id azagrebin
 new 36dd318  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 community.md   | 6 ++
 content/community.html | 6 ++
 2 files changed, 12 insertions(+)



[flink-web] 02/02: Rebuild website

2019-08-22 Thread azagrebin
This is an automated email from the ASF dual-hosted git repository.

azagrebin pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 36dd3180b98f55dd6d8a758ca1c1fe56752f029f
Author: Andrey Zagrebin 
AuthorDate: Thu Aug 22 15:36:21 2019 +0200

Rebuild website
---
 content/community.html | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/content/community.html b/content/community.html
index 1d18937..1ed68ab 100644
--- a/content/community.html
+++ b/content/community.html
@@ -638,6 +638,12 @@
 Committer
 kurt
   
+  
+<img src="https://avatars0.githubusercontent.com/u/10573485?s=50" class="committer-avatar" />
+Andrey Zagrebin
+Committer
+azagrebin
+  
 
 
 



[flink] branch master updated (6618cef -> 84cbe31)

2019-08-22 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 6618cef  [FLINK-13806][metrics] Log all errors on DEBUG
 add 0ef2de5  [FLINK-13714] Remove Program-related code
 add 5a5a14e  [FLINK-13715][docs] Remove Program-related english 
documentation
 add 84cbe31  [FLINK-13716][docs] Remove Program-related chinese 
documentation

No new revisions were added by this update.

Summary of changes:
 docs/dev/packaging.md  |  24 +--
 docs/dev/packaging.zh.md   |  24 +--
 .../org/apache/flink/client/LocalExecutor.java |  15 --
 .../apache/flink/client/program/ClusterClient.java |  72 +++-
 .../flink/client/program/PackagedProgram.java  | 205 -
 .../flink/client/program/PackagedProgramUtils.java |  24 +--
 .../apache/flink/client/program/ClientTest.java|  13 +-
 .../java/org/apache/flink/api/common/Program.java  |  42 -
 8 files changed, 71 insertions(+), 348 deletions(-)
 delete mode 100644 
flink-core/src/main/java/org/apache/flink/api/common/Program.java



[flink-web] branch asf-site updated: Minor fix in 1.9 announcement

2019-08-22 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new b2439e9  Minor fix in 1.9 announcement
b2439e9 is described below

commit b2439e9c62847ededecd32020dbde7f9903722ef
Author: Robert Metzger 
AuthorDate: Thu Aug 22 14:22:06 2019 +0200

Minor fix in 1.9 announcement
---
 _posts/2019-08-22-release-1.9.0.md |  5 -
 content/news/2019/08/22/release-1.9.0.html | 13 +
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/_posts/2019-08-22-release-1.9.0.md 
b/_posts/2019-08-22-release-1.9.0.md
index 954b1e1..8ac4678 100644
--- a/_posts/2019-08-22-release-1.9.0.md
+++ b/_posts/2019-08-22-release-1.9.0.md
@@ -61,8 +61,11 @@ batch-shuffle connections of a job define the boundaries of 
its failover
 regions. More details are available in
 
[FLIP-1](https://cwiki.apache.org/confluence/display/FLINK/FLIP-1+%3A+Fine+Grained+Recovery+from+Task+Failures).
 ![alt_text]({{site.baseurl}}/img/blog/release-19-flip1.png "Fine-grained Batch
-Recovery") To use this new failover strategy, you need to do the following
+Recovery") 
+
+To use this new failover strategy, you need to do the following
 settings:
+
  * Make sure you have the entry `jobmanager.execution.failover-strategy:
region` in your `flink-conf.yaml`.
 
diff --git a/content/news/2019/08/22/release-1.9.0.html 
b/content/news/2019/08/22/release-1.9.0.html
index cd85ff9..d1a07aa 100644
--- a/content/news/2019/08/22/release-1.9.0.html
+++ b/content/news/2019/08/22/release-1.9.0.html
@@ -231,10 +231,15 @@ batch-shuffle connections of a job define the boundaries 
of its failover
 regions. More details are available in
 https://cwiki.apache.org/confluence/display/FLINK/FLIP-1+%3A+Fine+Grained+Recovery+from+Task+Failures;>FLIP-1.
  To use this new failover strategy, you need to do the following
-settings:
- * Make sure you have the entry jobmanager.execution.failover-strategy:
-   region in your flink-conf.yaml.
+Recovery" />
+
+To use this new failover strategy, you need to do the following
+settings:
+
+
+  Make sure you have the entry 
jobmanager.execution.failover-strategy:
+region in your flink-conf.yaml.
+
 
 Note: The configuration of the 1.9 distribution has that 
entry by default,
   but when reusing a configuration file from previous setups, you have to add



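The failover-strategy setting referenced in the corrected announcement text above is a single entry in `flink-conf.yaml` (a sketch; the rest of the file is whatever your deployment already uses):

```yaml
# Restart only the tasks in the affected failover region instead of the whole job
jobmanager.execution.failover-strategy: region
```

As the announcement notes, the 1.9 distribution ships with this entry by default; it only needs to be added when reusing a configuration file from an earlier setup.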
[flink-shaded] branch master updated: [FLINK-13770][netty] Bump version to 4.1.39.Final

2019-08-22 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-shaded.git


The following commit(s) were added to refs/heads/master by this push:
 new 4fe8ee1  [FLINK-13770][netty] Bump version to 4.1.39.Final
4fe8ee1 is described below

commit 4fe8ee133a56b2a18bb56f13b1ed1ecc92251516
Author: Nico Kruber 
AuthorDate: Thu Aug 22 14:04:24 2019 +0200

[FLINK-13770][netty] Bump version to 4.1.39.Final
---
 flink-shaded-netty-4/pom.xml| 7 ++-
 flink-shaded-netty-4/src/main/resources/META-INF/NOTICE | 2 +-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/flink-shaded-netty-4/pom.xml b/flink-shaded-netty-4/pom.xml
index 75ae8d3..78eadd5 100644
--- a/flink-shaded-netty-4/pom.xml
+++ b/flink-shaded-netty-4/pom.xml
@@ -34,7 +34,7 @@ under the License.
 ${netty.version}-8.0
 
 
-4.1.32.Final
+4.1.39.Final
 
 
 
@@ -92,6 +92,11 @@ under the License.
 
 
 
+
+
+
+
+
 
 
 
diff --git a/flink-shaded-netty-4/src/main/resources/META-INF/NOTICE 
b/flink-shaded-netty-4/src/main/resources/META-INF/NOTICE
index f2224d0..e3a6841 100644
--- a/flink-shaded-netty-4/src/main/resources/META-INF/NOTICE
+++ b/flink-shaded-netty-4/src/main/resources/META-INF/NOTICE
@@ -6,4 +6,4 @@ The Apache Software Foundation (http://www.apache.org/).
 
 This project bundles the following dependencies under the Apache Software 
License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt)
 
-- io.netty:netty-all:4.1.32.Final
+- io.netty:netty-all:4.1.39.Final



[flink-web] branch asf-site updated: Fix date in 1.9 ann & rebuild

2019-08-22 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 425a5c2  Fix date in 1.9 ann & rebuild
425a5c2 is described below

commit 425a5c2f1b40a66e6c13210eb51f627499b002cd
Author: Robert Metzger 
AuthorDate: Thu Aug 22 13:50:07 2019 +0200

Fix date in 1.9 ann & rebuild

This closes #244
---
 _posts/2019-08-22-release-1.9.0.md |   8 +-
 content/blog/feed.xml  | 369 ++
 content/blog/index.html|  39 +-
 content/blog/page2/index.html  |  38 +-
 content/blog/page3/index.html  |  40 +-
 content/blog/page4/index.html  |  40 +-
 content/blog/page5/index.html  |  40 +-
 content/blog/page6/index.html  |  40 +-
 content/blog/page7/index.html  |  43 ++-
 content/blog/page8/index.html  |  43 ++-
 content/blog/page9/index.html  |  25 ++
 content/index.html |   9 +-
 content/news/2019/08/22/release-1.9.0.html | 577 +
 content/zh/index.html  |   9 +-
 14 files changed, 1187 insertions(+), 133 deletions(-)

diff --git a/_posts/2019-08-22-release-1.9.0.md 
b/_posts/2019-08-22-release-1.9.0.md
index dfb45c3..954b1e1 100644
--- a/_posts/2019-08-22-release-1.9.0.md
+++ b/_posts/2019-08-22-release-1.9.0.md
@@ -1,12 +1,8 @@
 ---
 layout: post 
 title:  "Apache Flink 1.9.0 Release Announcement" 
-date:   2019-08-22 12:40:00
+date: 2019-08-22T02:30:00.000Z
 categories: news
-authors:
-- marta:
-  name: "Marta Moreira"
-  twitter: "morsapaes"
 ---
 
 
@@ -120,7 +116,7 @@ stop a job with a savepoint that is consistent with the 
emitted data.
 You can suspend a job with Flink’s CLI client as follows:
 
 ```
-bin/flink stop -s [:targetDirectory] :jobId
+bin/flink stop -p [:targetDirectory] :jobId
 ```
 
 The final job state is set to `FINISHED` on success, allowing
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 07c2528..918f49b 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,375 @@
 https://flink.apache.org/blog/feed.xml; rel="self" 
type="application/rss+xml" />
 
 
+<title>Apache Flink 1.9.0 Release Announcement</title>
+<p>The Apache Flink community is proud to announce the release of Apache Flink 1.9.0.</p>
+
+<p>The Apache Flink project’s goal is to develop a stream processing system to unify and power many forms of real-time and offline data processing applications as well as event-driven applications. In this release, we have made a huge step forward in that effort, by integrating Flink’s stream and batch processing capabilities under a single, unified runtime.</p>
+
+<p>Significant features on this path are batch-style recovery for batch jobs and a preview of the new Blink-based query engine for Table API and SQL queries. We are also excited to announce the availability of the State Processor API, which is one of the most frequently requested features and enables users to read and write savepoints with Flink DataSet jobs. Finally, Flink 1.9 includes a reworked WebUI and previews of Flink’s new Python Table API and its integration with the Apache Hive ecosystem.</p>
+
+<p>This blog post describes all major new features and improvements, important changes to be aware of and what to expect moving forward. For more details, check the <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&amp;version=12344601">complete release changelog</a>.</p>
+
+<p>The binary distribution and source artifacts for this release are now available via the <a href="https://flink.apache.org/downloads.html">Downloads</a> page of the Flink project, along with the updated <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.9/">documentation</a>. Flink 1.9 is API-compatible with previous 1.x releases for APIs annotated with the <code>@Public</code> annotation.</p>
+
+<p>Please feel encouraged to download the release and share your thoughts with the community through the Flink <a href="https://flink.apache.org/community.html#mailing-lists">mailing lists</a> or <a href="https://issues.apache.org/jira/projects/FLINK/summary">JIRA</a>. As always, feedback is very much appreciated!</p>
+
+<div class="page-toc">
+<ul id="markdown-toc">
+  <li><a href="#new-features-and-improvements" id="markdown-toc-new-features-and-improvements">New Features and Improvements</a><ul>
+  <li><a href="#fine-grained-batch-recovery-flip-1" id="markdown-toc-fine-grained-batch-recovery-flip-1">Fine-grained Batch Recovery (FLIP-1)</a></li>
+  <li><a href="#state-processor-api-flip-43" id="markdown-toc-state-processor-api-flip-43">State Processor API (FLIP-43)</a></li>
+  <li><a href="#stop-with-savepoint-flip-34" id="markdown-toc-stop-with-savepoint-flip-34">Stop-with-Savepoint (FLIP-34)</a></li>
+  lia 

[flink-web] branch asf-site updated (7ebe3e2 -> 2c3c0af)

2019-08-22 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 7ebe3e2  [hotfix] Update resource link of Comcast PoweredBy entry.
 add e88b9f3  Add 1.9.0 release announcement
 add 982e6cd  Add missing 1.9.0 download link entry
 add 2c3c0af  Rebuild website

No new revisions were added by this update.

Summary of changes:
 _posts/2019-08-22-release-1.9.0.md | 367 +
 content/contributing/reviewing-prs.html|   2 +-
 content/downloads.html |   1 +
 content/img/blog/release-19-flip1.png  | Bin 0 -> 32435 bytes
 content/img/blog/release-19-stack.png  | Bin 0 -> 66386 bytes
 content/img/blog/release-19-web1.png   | Bin 0 -> 121643 bytes
 content/img/blog/release-19-web2.png   | Bin 0 -> 133292 bytes
 content/poweredby.html |   2 +-
 content/zh/contributing/reviewing-prs.html |   2 +-
 content/zh/poweredby.html  |   2 +-
 downloads.md   |   1 +
 img/blog/release-19-flip1.png  | Bin 0 -> 32435 bytes
 img/blog/release-19-stack.png  | Bin 0 -> 66386 bytes
 img/blog/release-19-web1.png   | Bin 0 -> 121643 bytes
 img/blog/release-19-web2.png   | Bin 0 -> 133292 bytes
 15 files changed, 373 insertions(+), 4 deletions(-)
 create mode 100644 _posts/2019-08-22-release-1.9.0.md
 create mode 100755 content/img/blog/release-19-flip1.png
 create mode 100755 content/img/blog/release-19-stack.png
 create mode 100755 content/img/blog/release-19-web1.png
 create mode 100755 content/img/blog/release-19-web2.png
 create mode 100755 img/blog/release-19-flip1.png
 create mode 100755 img/blog/release-19-stack.png
 create mode 100755 img/blog/release-19-web1.png
 create mode 100755 img/blog/release-19-web2.png



[flink-web] branch asf-site updated (ecd2365 -> 7ebe3e2)

2019-08-22 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from ecd2365  [hotfix] Fix links in reviewing-prs page.
 add 7ebe3e2  [hotfix] Update resource link of Comcast PoweredBy entry.

No new revisions were added by this update.

Summary of changes:
 poweredby.md| 2 +-
 poweredby.zh.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)



[flink-web] branch asf-site updated (dbf99f3 -> ecd2365)

2019-08-22 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from dbf99f3  Rebuild website
 add ecd2365  [hotfix] Fix links in reviewing-prs page.

No new revisions were added by this update.

Summary of changes:
 contributing/reviewing-prs.md| 2 +-
 contributing/reviewing-prs.zh.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)



[flink] branch release-1.8 updated (481332e -> b837e1c)

2019-08-22 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 481332e  [FLINK-13488][tests] Harden ConnectedComponents E2E test
 add b837e1c  [FLINK-13806][metrics] Log all errors on DEBUG

No new revisions were added by this update.

Summary of changes:
 .../runtime/rest/handler/legacy/metrics/MetricFetcherImpl.java  | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)



[flink] branch release-1.9 updated: [FLINK-13806][metrics] Log all errors on DEBUG

2019-08-22 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new d708ab2  [FLINK-13806][metrics] Log all errors on DEBUG
d708ab2 is described below

commit d708ab222846735b4ec4562fe8609389cb616da2
Author: Chesnay Schepler 
AuthorDate: Thu Aug 22 09:42:28 2019 +0200

[FLINK-13806][metrics] Log all errors on DEBUG
---
 .../runtime/rest/handler/legacy/metrics/MetricFetcherImpl.java  | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/metrics/MetricFetcherImpl.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/metrics/MetricFetcherImpl.java
index b9250d6..f157006 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/metrics/MetricFetcherImpl.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/metrics/MetricFetcherImpl.java
@@ -139,7 +139,7 @@ public class MetricFetcherImpl<T extends RestfulGateway> implements MetricFetcher
queryServiceAddressesFuture.whenCompleteAsync(
(Collection<String>
queryServiceAddresses, Throwable throwable) -> {
if (throwable != null) {
-   LOG.warn("Requesting 
paths for query services failed.", throwable);
+   LOG.debug("Requesting 
paths for query services failed.", throwable);
} else {
for (String 
queryServiceAddress : queryServiceAddresses) {

retrieveAndQueryMetrics(queryServiceAddress);
@@ -157,7 +157,7 @@ public class MetricFetcherImpl<T extends RestfulGateway> implements MetricFetcher

taskManagerQueryServiceGatewaysFuture.whenCompleteAsync(
(Collection> 
queryServiceGateways, Throwable throwable) -> {
if (throwable != null) {
-   LOG.warn("Requesting 
TaskManager's path for query services failed.", throwable);
+   LOG.debug("Requesting 
TaskManager's path for query services failed.", throwable);
} else {
List 
taskManagersToRetain = queryServiceGateways
.stream()
@@ -175,7 +175,7 @@ public class MetricFetcherImpl<T extends RestfulGateway> implements MetricFetcher
executor);
}
} catch (Exception e) {
-   LOG.warn("Exception while fetching metrics.", e);
+   LOG.debug("Exception while fetching metrics.", e);
}
}
 



[flink] branch master updated (948ab73 -> 6618cef)

2019-08-22 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 948ab73  [hotfix][python] Add System.exit() at the end of 
PythonGatewayServer to ensure the JVM will exit if its parent process dies.
 add 6618cef  [FLINK-13806][metrics] Log all errors on DEBUG

No new revisions were added by this update.

Summary of changes:
 .../runtime/rest/handler/legacy/metrics/MetricFetcherImpl.java  | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)



[flink-web] 02/03: Add 1.9.0 release announcement

2019-08-22 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 040c74ca45142d39b0aa206cc40937645053b89f
Author: Robert Metzger 
AuthorDate: Tue Aug 13 11:58:27 2019 +0200

Add 1.9.0 release announcement
---
 _posts/2019-08-22-release-1.9.0.md | 367 +
 img/blog/release-19-flip1.png  | Bin 0 -> 32435 bytes
 img/blog/release-19-stack.png  | Bin 0 -> 66386 bytes
 img/blog/release-19-web1.png   | Bin 0 -> 121643 bytes
 img/blog/release-19-web2.png   | Bin 0 -> 133292 bytes
 5 files changed, 367 insertions(+)

diff --git a/_posts/2019-08-22-release-1.9.0.md 
b/_posts/2019-08-22-release-1.9.0.md
new file mode 100644
index 000..9bab119
--- /dev/null
+++ b/_posts/2019-08-22-release-1.9.0.md
@@ -0,0 +1,367 @@
+---
+layout: post 
+title:  "Apache Flink 1.9.0 Release Announcement" 
+date:   2019-8-22 12:10:00
+categories: news
+authors:
+- till:
+  name: "Marta Moreira"
+  twitter: "morsapaes"
+---
+
+
+The Apache Flink community is proud to announce the release of Apache Flink
+1.9.0.
+
+The Apache Flink project's goal is to develop a stream processing system to
+unify and power many forms of real-time and offline data processing
+applications as well as event-driven applications. In this release, we have
+made a huge step forward in that effort, by integrating Flink’s stream and
+batch processing capabilities under a single, unified runtime.
+
+Significant features on this path are batch-style recovery for batch jobs and
+a preview of the new Blink-based query engine for Table API and SQL queries.
+We are also excited to announce the availability of the State Processor API,
+which is one of the most frequently requested features and enables users to
+read and write savepoints with Flink DataSet jobs. Finally, Flink 1.9 includes
+a reworked WebUI and previews of Flink’s new Python Table API and its
+integration with the Apache Hive ecosystem.
+
+This blog post describes all major new features and improvements, important
+changes to be aware of and what to expect moving forward. For more details,
+check the [complete release
+changelog](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601).
+
+The binary distribution and source artifacts for this release are now
+available via the [Downloads](https://flink.apache.org/downloads.html) page of
+the Flink project, along with the updated
+[documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.9/).
+Flink 1.9 is API-compatible with previous 1.x releases for APIs annotated with
+the `@Public` annotation.
+
+Please feel encouraged to download the release and share your thoughts with
+the community through the Flink [mailing
+lists](https://flink.apache.org/community.html#mailing-lists) or
+[JIRA](https://issues.apache.org/jira/projects/FLINK/summary). As always,
+feedback is very much appreciated!
+
+
+{% toc %}
+
+
+## New Features and Improvements
+
+
+### Fine-grained Batch Recovery (FLIP-1)
+
+The time to recover a batch (DataSet, Table API and SQL) job from a task
+failure was significantly reduced. Until Flink 1.9, task failures in batch
+jobs were recovered by canceling all tasks and restarting the whole job, i.e.,
+the job was started from scratch and all progress was voided. With this
+release, Flink can be configured to limit the recovery to only those tasks
+that are in the same **failover region**. A failover region is the set of
+tasks that are connected via pipelined data exchanges. Hence, the
+batch-shuffle connections of a job define the boundaries of its failover
+regions. More details are available in
+[FLIP-1](https://cwiki.apache.org/confluence/display/FLINK/FLIP-1+%3A+Fine+Grained+Recovery+from+Task+Failures).
+![alt_text]({{site.baseurl}}/img/blog/release-19-flip1.png "Fine-grained Batch
+Recovery"). To use this new failover strategy, you need the following
+settings:
+ * Make sure you have the entry `jobmanager.execution.failover-strategy:
+   region` in your `flink-conf.yaml`.
+
+**Note:** The configuration of the 1.9 distribution has that entry by default,
+  but when reusing a configuration file from previous setups, you have to add
+  it manually.
+
+Moreover, you need to set the `ExecutionMode` of batch jobs in the
+`ExecutionConfig` to `BATCH` to configure that data shuffles are not pipelined
+and jobs have more than one failover region.
+
+The "Region" failover strategy also improves the recovery of “embarrassingly
+parallel” streaming jobs, i.e., jobs without any shuffle like keyBy() or
+rebalance. When such a job is recovered, only the tasks of the affected
+pipeline (failover region) are restarted. For all other streaming jobs, the
+recovery behavior is the same as in prior Flink versions.
+
+
+### State Processor API (FLIP-43)
+
+Up to Flink 1.9, accessing the state of a job from the outside was limited to
+the 

[flink-web] 01/03: Add missing 1.9.0 download link entry

2019-08-22 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 032d019f7fa3c541d55b1d22eb865a23d645823a
Author: Tzu-Li (Gordon) Tai 
AuthorDate: Thu Aug 22 08:57:55 2019 +0200

Add missing 1.9.0 download link entry
---
 downloads.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/downloads.md b/downloads.md
index bcf19f6..4d73c27 100644
--- a/downloads.md
+++ b/downloads.md
@@ -176,6 +176,7 @@ Note that the community is always open to discussing bugfix 
releases for even ol
 All Flink releases are available via 
[https://archive.apache.org/dist/flink/](https://archive.apache.org/dist/flink/)
 including checksums and cryptographic signatures. At the time of writing, this 
includes the following versions:
 
 ### Flink
+- Flink 1.9.0 - 2019-08-22 
([Source](https://archive.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-src.tgz),
 [Binaries](https://archive.apache.org/dist/flink/flink-1.9.0/), 
[Docs]({{site.DOCS_BASE_URL}}flink-docs-release-1.9/), 
[Javadocs]({{site.DOCS_BASE_URL}}flink-docs-release-1.9/api/java), 
[ScalaDocs]({{site.DOCS_BASE_URL}}flink-docs-release-1.9/api/scala/index.html))
 - Flink 1.8.1 - 2019-07-02 
([Source](https://archive.apache.org/dist/flink/flink-1.8.1/flink-1.8.1-src.tgz),
 [Binaries](https://archive.apache.org/dist/flink/flink-1.8.1/), 
[Docs]({{site.DOCS_BASE_URL}}flink-docs-release-1.8/), 
[Javadocs]({{site.DOCS_BASE_URL}}flink-docs-release-1.8/api/java), 
[ScalaDocs]({{site.DOCS_BASE_URL}}flink-docs-release-1.8/api/scala/index.html))
 - Flink 1.8.0 - 2019-04-09 
([Source](https://archive.apache.org/dist/flink/flink-1.8.0/flink-1.8.0-src.tgz),
 [Binaries](https://archive.apache.org/dist/flink/flink-1.8.0/), 
[Docs]({{site.DOCS_BASE_URL}}flink-docs-release-1.8/), 
[Javadocs]({{site.DOCS_BASE_URL}}flink-docs-release-1.8/api/java), 
[ScalaDocs]({{site.DOCS_BASE_URL}}flink-docs-release-1.8/api/scala/index.html))
 - Flink 1.7.2 - 2019-02-15 
([Source](https://archive.apache.org/dist/flink/flink-1.7.2/flink-1.7.2-src.tgz),
 [Binaries](https://archive.apache.org/dist/flink/flink-1.7.2/), 
[Docs]({{site.DOCS_BASE_URL}}flink-docs-release-1.7/), 
[Javadocs]({{site.DOCS_BASE_URL}}flink-docs-release-1.7/api/java), 
[ScalaDocs]({{site.DOCS_BASE_URL}}flink-docs-release-1.7/api/scala/index.html))



[flink-web] 03/03: Rebuild website

2019-08-22 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit b6e0932cb5edcdf99722e13483c4f17631cff4fe
Author: Tzu-Li (Gordon) Tai 
AuthorDate: Thu Aug 22 12:06:27 2019 +0200

Rebuild website
---
 content/downloads.html|   1 +
 content/img/blog/release-19-flip1.png | Bin 0 -> 32435 bytes
 content/img/blog/release-19-stack.png | Bin 0 -> 66386 bytes
 content/img/blog/release-19-web1.png  | Bin 0 -> 121643 bytes
 content/img/blog/release-19-web2.png  | Bin 0 -> 133292 bytes
 5 files changed, 1 insertion(+)

diff --git a/content/downloads.html b/content/downloads.html
index 0616e23..5f68c79 100644
--- a/content/downloads.html
+++ b/content/downloads.html
@@ -452,6 +452,7 @@ main Flink release:
 
 Flink
 
+  <li>Flink 1.9.0 - 2019-08-22 (<a href="https://archive.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-src.tgz">Source</a>, <a href="https://archive.apache.org/dist/flink/flink-1.9.0/">Binaries</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.9/">Docs</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/java">Javadocs</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/scala/index.html">ScalaDocs</a>)</li>
   <li>Flink 1.8.1 - 2019-07-02 (<a href="https://archive.apache.org/dist/flink/flink-1.8.1/flink-1.8.1-src.tgz">Source</a>, <a href="https://archive.apache.org/dist/flink/flink-1.8.1/">Binaries</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.8/">Docs</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java">Javadocs</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/scala/index.html">ScalaDocs</a>)</li>
   <li>Flink 1.8.0 - 2019-04-09 (<a href="https://archive.apache.org/dist/flink/flink-1.8.0/flink-1.8.0-src.tgz">Source</a>, <a href="https://archive.apache.org/dist/flink/flink-1.8.0/">Binaries</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.8/">Docs</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java">Javadocs</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/scala/index.html">ScalaDocs</a>)</li>
   <li>Flink 1.7.2 - 2019-02-15 (<a href="https://archive.apache.org/dist/flink/flink-1.7.2/flink-1.7.2-src.tgz">Source</a>, <a href="https://archive.apache.org/dist/flink/flink-1.7.2/">Binaries</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.7/">Docs</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/java">Javadocs</a>, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.7/api/scala/index.html">ScalaDocs</a>)</li>
diff --git a/content/img/blog/release-19-flip1.png 
b/content/img/blog/release-19-flip1.png
new file mode 100755
index 000..dda2626
Binary files /dev/null and b/content/img/blog/release-19-flip1.png differ
diff --git a/content/img/blog/release-19-stack.png 
b/content/img/blog/release-19-stack.png
new file mode 100755
index 000..877b51f
Binary files /dev/null and b/content/img/blog/release-19-stack.png differ
diff --git a/content/img/blog/release-19-web1.png 
b/content/img/blog/release-19-web1.png
new file mode 100755
index 000..1b8c8cb
Binary files /dev/null and b/content/img/blog/release-19-web1.png differ
diff --git a/content/img/blog/release-19-web2.png 
b/content/img/blog/release-19-web2.png
new file mode 100755
index 000..6c29f44
Binary files /dev/null and b/content/img/blog/release-19-web2.png differ



[flink-web] branch asf-site updated (dbf99f3 -> b6e0932)

2019-08-22 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from dbf99f3  Rebuild website
 new 032d019  Add missing 1.9.0 download link entry
 new 040c74c  Add 1.9.0 release announcement
 new b6e0932  Rebuild website

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _posts/2019-08-22-release-1.9.0.md| 367 ++
 content/downloads.html|   1 +
 content/img/blog/release-19-flip1.png | Bin 0 -> 32435 bytes
 content/img/blog/release-19-stack.png | Bin 0 -> 66386 bytes
 content/img/blog/release-19-web1.png  | Bin 0 -> 121643 bytes
 content/img/blog/release-19-web2.png  | Bin 0 -> 133292 bytes
 downloads.md  |   1 +
 img/blog/release-19-flip1.png | Bin 0 -> 32435 bytes
 img/blog/release-19-stack.png | Bin 0 -> 66386 bytes
 img/blog/release-19-web1.png  | Bin 0 -> 121643 bytes
 img/blog/release-19-web2.png  | Bin 0 -> 133292 bytes
 11 files changed, 369 insertions(+)
 create mode 100644 _posts/2019-08-22-release-1.9.0.md
 create mode 100755 content/img/blog/release-19-flip1.png
 create mode 100755 content/img/blog/release-19-stack.png
 create mode 100755 content/img/blog/release-19-web1.png
 create mode 100755 content/img/blog/release-19-web2.png
 create mode 100755 img/blog/release-19-flip1.png
 create mode 100755 img/blog/release-19-stack.png
 create mode 100755 img/blog/release-19-web1.png
 create mode 100755 img/blog/release-19-web2.png



[flink] branch release-1.9 updated: [hotfix][python] Add System.exit() at the end of PythonGatewayServer to ensure the JVM will exit if its parent process dies.

2019-08-22 Thread hequn
This is an automated email from the ASF dual-hosted git repository.

hequn pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 719c2bb  [hotfix][python] Add System.exit() at the end of 
PythonGatewayServer to ensure the JVM will exit if its parent process dies.
719c2bb is described below

commit 719c2bbcaf9cd68cbf03afd7d9e72b07ec4c6e91
Author: Wei Zhong 
AuthorDate: Tue Aug 20 17:55:31 2019 +0800

[hotfix][python] Add System.exit() at the end of PythonGatewayServer to 
ensure the JVM will exit if its parent process dies.

This closes #9490
---
 .../apache/flink/client/python/PythonGatewayServer.java   | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git 
a/flink-python/src/main/java/org/apache/flink/client/python/PythonGatewayServer.java
 
b/flink-python/src/main/java/org/apache/flink/client/python/PythonGatewayServer.java
index 64f2ef1..8f47ba7 100644
--- 
a/flink-python/src/main/java/org/apache/flink/client/python/PythonGatewayServer.java
+++ 
b/flink-python/src/main/java/org/apache/flink/client/python/PythonGatewayServer.java
@@ -70,11 +70,16 @@ public class PythonGatewayServer {
System.exit(1);
}
 
-   // Exit on EOF or broken pipe.  This ensures that the server 
dies
-   // if its parent program dies.
-   while (System.in.read() != -1) {
-   // Do nothing
+   try {
+   // Exit on EOF or broken pipe.  This ensures that the 
server dies
+   // if its parent program dies.
+   while (System.in.read() != -1) {
+   // Do nothing
+   }
+   gatewayServer.shutdown();
+   System.exit(0);
+   } finally {
+   System.exit(1);
}
-   gatewayServer.shutdown();
}
 }
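The pattern this hotfix hardens — blocking on stdin and exiting on EOF so that the gateway JVM dies together with its parent Python process — can be sketched outside of Flink in a few lines. `ParentWatcher` and `drainUntilEof` are hypothetical names for illustration, not Flink code:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ParentWatcher {

    // Block until EOF or a broken pipe on the given stream. When the
    // parent process dies, its end of the stdin pipe closes and read()
    // returns -1, so the child can shut itself down instead of lingering.
    static int drainUntilEof(InputStream in) throws IOException {
        while (in.read() != -1) {
            // Discard input; we only care about the EOF signal.
        }
        return 0; // EOF reached: parent is gone, shut down cleanly
    }

    public static void main(String[] args) throws IOException {
        // Simulate a parent that wrote some bytes and then closed the pipe.
        InputStream fakeStdin = new ByteArrayInputStream("ping".getBytes());
        System.out.println(drainUntilEof(fakeStdin)); // prints 0
    }
}
```

The try/finally added by the commit wraps exactly this loop: a clean EOF path calls `System.exit(0)` after shutting the gateway down, while any exception escaping the loop falls through to `System.exit(1)` in the `finally` block.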



[flink] branch master updated: [hotfix][python] Add System.exit() at the end of PythonGatewayServer to ensure the JVM will exit if its parent process dies.

2019-08-22 Thread hequn
This is an automated email from the ASF dual-hosted git repository.

hequn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 948ab73  [hotfix][python] Add System.exit() at the end of 
PythonGatewayServer to ensure the JVM will exit if its parent process dies.
948ab73 is described below

commit 948ab73523bed79d1672884207fd3a06f33d9572
Author: Wei Zhong 
AuthorDate: Tue Aug 20 17:55:31 2019 +0800

[hotfix][python] Add System.exit() at the end of PythonGatewayServer to 
ensure the JVM will exit if its parent process dies.

This closes #9490
---
 .../apache/flink/client/python/PythonGatewayServer.java   | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git 
a/flink-python/src/main/java/org/apache/flink/client/python/PythonGatewayServer.java
 
b/flink-python/src/main/java/org/apache/flink/client/python/PythonGatewayServer.java
index 64f2ef1..8f47ba7 100644
--- 
a/flink-python/src/main/java/org/apache/flink/client/python/PythonGatewayServer.java
+++ 
b/flink-python/src/main/java/org/apache/flink/client/python/PythonGatewayServer.java
@@ -70,11 +70,16 @@ public class PythonGatewayServer {
System.exit(1);
}
 
-   // Exit on EOF or broken pipe.  This ensures that the server 
dies
-   // if its parent program dies.
-   while (System.in.read() != -1) {
-   // Do nothing
+   try {
+   // Exit on EOF or broken pipe.  This ensures that the 
server dies
+   // if its parent program dies.
+   while (System.in.read() != -1) {
+   // Do nothing
+   }
+   gatewayServer.shutdown();
+   System.exit(0);
+   } finally {
+   System.exit(1);
}
-   gatewayServer.shutdown();
}
 }



[flink] branch master updated (163cf18 -> 42337ca)

2019-08-22 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 163cf18  [hotfix] [docs] Add 1.9 to list of previous docs
 add 42337ca  [FLINK-13797] Fix log formats

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/flink/runtime/fs/hdfs/HadoopFsFactory.java | 7 ++-
 .../mesos/runtime/clusterframework/LaunchableMesosWorker.java  | 2 +-
 .../api/functions/source/MessageAcknowledgingSourceBase.java   | 2 +-
 3 files changed, 8 insertions(+), 3 deletions(-)



buildbot failure in on flink-docs-release-1.5

2019-08-22 Thread buildbot
The Buildbot has detected a new failure on builder flink-docs-release-1.5 while 
building . Full details are available at:
https://ci.apache.org/builders/flink-docs-release-1.5/builds/466

Buildbot URL: https://ci.apache.org/

Buildslave for this Build: bb_slave2_ubuntu

Build Reason: The Nightly scheduler named 'flink-nightly-docs-release-1.5' 
triggered this build
Build Source Stamp: [branch release-1.5] HEAD
Blamelist: 

BUILD FAILED: failed Build docs

Sincerely,
 -The Buildbot





[flink] branch master updated: [hotfix] [docs] Add 1.9 to list of previous docs

2019-08-22 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 163cf18  [hotfix] [docs] Add 1.9 to list of previous docs
163cf18 is described below

commit 163cf186e238b550ad5a06b5490498fa77738dc2
Author: Tzu-Li (Gordon) Tai 
AuthorDate: Thu Aug 22 08:49:01 2019 +0200

[hotfix] [docs] Add 1.9 to list of previous docs
---
 docs/_config.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/_config.yml b/docs/_config.yml
index e79387c..b80 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -60,6 +60,7 @@ is_stable: false
 show_outdated_warning: false
 
 previous_docs:
+  1.9: http://ci.apache.org/projects/flink/flink-docs-release-1.9
   1.8: http://ci.apache.org/projects/flink/flink-docs-release-1.8
   1.7: http://ci.apache.org/projects/flink/flink-docs-release-1.7
   1.6: http://ci.apache.org/projects/flink/flink-docs-release-1.6



[flink-web] branch asf-site updated (3f46fcf -> dbf99f3)

2019-08-22 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 3f46fcf  Rebuild website
 new 2432e48  Update stable version to 1.9.0 and add download links
 new dbf99f3  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _config.yml|  81 ++-
 content/2019/05/03/pulsar-flink.html   |   2 +-
 content/2019/05/14/temporal-tables.html|   2 +-
 content/2019/05/19/state-ttl.html  |   2 +-
 content/2019/06/05/flink-network-stack.html|   2 +-
 content/2019/06/26/broadcast-state.html|   2 +-
 content/2019/07/23/flink-network-stack-2.html  |   2 +-
 content/blog/index.html|   2 +-
 content/blog/page2/index.html  |   2 +-
 content/blog/page3/index.html  |   2 +-
 content/blog/page4/index.html  |   2 +-
 content/blog/page5/index.html  |   2 +-
 content/blog/page6/index.html  |   2 +-
 content/blog/page7/index.html  |   2 +-
 content/blog/page8/index.html  |   2 +-
 content/blog/page9/index.html  |   2 +-
 .../blog/release_1.0.0-changelog_known_issues.html |   2 +-
 content/blog/release_1.1.0-changelog.html  |   2 +-
 content/blog/release_1.2.0-changelog.html  |   2 +-
 content/blog/release_1.3.0-changelog.html  |   2 +-
 content/community.html |   2 +-
 .../code-style-and-quality-common.html |   2 +-
 .../code-style-and-quality-components.html |   2 +-
 .../code-style-and-quality-formatting.html |   2 +-
 .../contributing/code-style-and-quality-java.html  |   2 +-
 .../code-style-and-quality-preamble.html   |   2 +-
 .../code-style-and-quality-pull-requests.html  |   2 +-
 .../contributing/code-style-and-quality-scala.html |   2 +-
 content/contributing/contribute-code.html  |   2 +-
 content/contributing/contribute-documentation.html |   2 +-
 content/contributing/how-to-contribute.html|   2 +-
 content/contributing/improve-website.html  |   2 +-
 content/contributing/reviewing-prs.html|   2 +-
 content/documentation.html |   2 +-
 content/downloads.html |  72 +++--
 content/ecosystem.html |   2 +-
 content/faq.html   |   2 +-
 .../2017/07/04/flink-rescalable-state.html |   2 +-
 .../2018/01/30/incremental-checkpointing.html  |   2 +-
 .../01/end-to-end-exactly-once-apache-flink.html   |   2 +-
 .../features/2019/03/11/prometheus-monitoring.html |   2 +-
 content/flink-applications.html|   2 +-
 content/flink-architecture.html|   2 +-
 content/flink-operations.html  |   2 +-
 content/gettinghelp.html   |   2 +-
 content/index.html |   2 +-
 content/material.html  |   2 +-
 content/news/2014/08/26/release-0.6.html   |   2 +-
 content/news/2014/09/26/release-0.6.1.html |   2 +-
 content/news/2014/10/03/upcoming_events.html   |   2 +-
 content/news/2014/11/04/release-0.7.0.html |   2 +-
 content/news/2014/11/18/hadoop-compatibility.html  |   2 +-
 content/news/2015/01/06/december-in-flink.html |   2 +-
 content/news/2015/01/21/release-0.8.html   |   2 +-
 content/news/2015/02/04/january-in-flink.html  |   2 +-
 content/news/2015/02/09/streaming-example.html |   2 +-
 .../news/2015/03/02/february-2015-in-flink.html|   2 +-
 .../13/peeking-into-Apache-Flinks-Engine-Room.html |   2 +-
 content/news/2015/04/07/march-in-flink.html|   2 +-
 .../news/2015/04/13/release-0.9.0-milestone1.html  |   2 +-
 .../2015/05/11/Juggling-with-Bits-and-Bytes.html   |   2 +-
 .../news/2015/05/14/Community-update-April.html|   2 +-
 .../24/announcing-apache-flink-0.9.0-release.html  |   2 +-
 .../news/2015/08/24/introducing-flink-gelly.html   |   2 +-
 content/news/2015/09/01/release-0.9.1.html |   2 +-
 content/news/2015/09/03/flink-forward.html |   2 +-
 content/news/2015/09/16/off-heap-memory.html   |   2 +-
 content/news/2015/11/16/release-0.10.0.html|   2 +-
 content/news/2015/11/27/release-0.10.1.html|   2 +-
 content/news/2015/12/04/Introducing-windows.html   |   2 +-
 content/news/2015/12/11/storm-compatibility.html   |   2 +-
 content/news/2015/12/18/a-year-in-review.html  |   2 +-
 

[flink-web] 01/02: Update stable version to 1.9.0 and add download links

2019-08-22 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 2432e48f511e7e3d381400e4f4e73871200e6f22
Author: Tzu-Li (Gordon) Tai 
AuthorDate: Thu Aug 22 08:40:13 2019 +0200

Update stable version to 1.9.0 and add download links
---
 _config.yml | 81 +++--
 1 file changed, 79 insertions(+), 2 deletions(-)

diff --git a/_config.yml b/_config.yml
index 5e72e79..4f4be26 100644
--- a/_config.yml
+++ b/_config.yml
@@ -9,8 +9,8 @@ url: https://flink.apache.org
 
 DOCS_BASE_URL: https://ci.apache.org/projects/flink/
 
-FLINK_VERSION_STABLE: 1.8.1
-FLINK_VERSION_STABLE_SHORT: 1.8
+FLINK_VERSION_STABLE: 1.9.0
+FLINK_VERSION_STABLE_SHORT: 1.9
 
 FLINK_ISSUES_URL: https://issues.apache.org/jira/browse/FLINK
 FLINK_GITHUB_URL: https://github.com/apache/flink
@@ -50,6 +50,83 @@ FLINK_GITHUB_REPO_NAME: flink
 
 flink_releases:
   -
+  version_short: 1.9
+  binary_release:
+  name: "Apache Flink 1.9.0"
+  scala_211:
+  id: "190-download_211"
+  url: "https://www.apache.org/dyn/closer.lua/flink/flink-1.9.0/flink-1.9.0-bin-scala_2.11.tgz"
+  asc_url: "https://www.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-bin-scala_2.11.tgz.asc"
+  sha512_url: "https://www.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-bin-scala_2.11.tgz.sha512"
+  scala_212:
+  id: "190-download_212"
+  url: "https://www.apache.org/dyn/closer.lua/flink/flink-1.9.0/flink-1.9.0-bin-scala_2.12.tgz"
+  asc_url: "https://www.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-bin-scala_2.12.tgz.asc"
+  sha512_url: "https://www.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-bin-scala_2.12.tgz.sha512"
+  source_release:
+  name: "Apache Flink 1.9.0"
+  id: "190-download-source"
+  url: "https://www.apache.org/dyn/closer.lua/flink/flink-1.9.0/flink-1.9.0-src.tgz"
+  asc_url: "https://www.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-src.tgz.asc"
+  sha512_url: "https://www.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-src.tgz.sha512"
+  optional_components:
+-
+  name: "Pre-bundled Hadoop 2.4.1"
+  category: "Pre-bundled Hadoop"
+  scala_dependent: false
+  id: bundled-hadoop-241-70
+  url: 
https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.4.1-7.0/flink-shaded-hadoop-2-uber-2.4.1-7.0.jar
+  asc_url: 
https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.4.1-7.0/flink-shaded-hadoop-2-uber-2.4.1-7.0.jar.asc
+  sha_url: 
https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.4.1-7.0/flink-shaded-hadoop-2-uber-2.4.1-7.0.jar.sha1
+-
+  name: "Pre-bundled Hadoop 2.6.5"
+  category: "Pre-bundled Hadoop"
+  scala_dependent: false
+  id: bundled-hadoop-265-70
+  url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.6.5-7.0/flink-shaded-hadoop-2-uber-2.6.5-7.0.jar
+  asc_url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.6.5-7.0/flink-shaded-hadoop-2-uber-2.6.5-7.0.jar.asc
+  sha_url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.6.5-7.0/flink-shaded-hadoop-2-uber-2.6.5-7.0.jar.sha1
+-
+  name: "Pre-bundled Hadoop 2.7.5"
+  category: "Pre-bundled Hadoop"
+  scala_dependent: false
+  id: bundled-hadoop-275-70
+  url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.7.5-7.0/flink-shaded-hadoop-2-uber-2.7.5-7.0.jar
+  asc_url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.7.5-7.0/flink-shaded-hadoop-2-uber-2.7.5-7.0.jar.asc
+  sha_url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.7.5-7.0/flink-shaded-hadoop-2-uber-2.7.5-7.0.jar.sha1
+-
+  name: "Pre-bundled Hadoop 2.8.3"
+  category: "Pre-bundled Hadoop"
+  scala_dependent: false
+  id: bundled-hadoop-283-70
+  url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-7.0/flink-shaded-hadoop-2-uber-2.8.3-7.0.jar
+  asc_url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-7.0/flink-shaded-hadoop-2-uber-2.8.3-7.0.jar.asc
+  sha_url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-7.0/flink-shaded-hadoop-2-uber-2.8.3-7.0.jar.sha1
+-
+  name: "Avro SQL Format"
+  category: "SQL Formats"
+  scala_dependent: false
+  id: 190-sql-format-avro
+  url: 

[flink] branch release-1.9 updated: [FLINK-13564][table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows

2019-08-22 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new a106738  [FLINK-13564][table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows
a106738 is described below

commit a106738721edd2f7853605ac68f6bb16e1d817b0
Author: godfreyhe 
AuthorDate: Sat Aug 3 17:50:55 2019 +0800

[FLINK-13564][table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows

This is the same fix as FLINK-11017, applied to the blink planner.

This closes #9349
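The guard this commit adds reads a window-descriptor operand as a `Long` only when it is a literal, and fails fast with a `TableException` otherwise. The idiom can be sketched in isolation as follows; `RexNode`, `RexLiteral`, and `RexInputRef` here are simplified stand-ins, not the real Calcite classes.

```scala
// Minimal sketch of the constant-or-throw extraction idiom added by this
// commit. The Rex* types below are simplified stand-ins, not Calcite's.
import java.math.{BigDecimal => JBigDecimal}

sealed trait RexNode
final case class RexLiteral(value: JBigDecimal) extends RexNode // constant operand
final case class RexInputRef(index: Int) extends RexNode        // non-constant operand

class TableException(msg: String) extends RuntimeException(msg)

def getOperandAsLong(operand: RexNode): Long = operand match {
  case RexLiteral(v) => v.longValue() // e.g. a window size in milliseconds
  case _ => throw new TableException("Only constant window descriptors are supported")
}

// A literal 10-minute window size resolves to its millisecond count:
println(getOperandAsLong(RexLiteral(JBigDecimal.valueOf(10L * 60 * 1000)))) // prints 600000
```

Any operand that is not a literal hits the catch-all case, so an unsupported descriptor surfaces as a clear `TableException` during planning rather than an obscure failure later.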
---
 .../logical/BatchLogicalWindowAggregateRule.scala  |  9 ++
 .../logical/LogicalWindowAggregateRuleBase.scala   | 12 
 .../logical/StreamLogicalWindowAggregateRule.scala | 14 -
 .../plan/stream/sql/agg/WindowAggregateTest.xml| 21 +
 .../plan/stream/sql/agg/WindowAggregateTest.scala  | 35 +-
 5 files changed, 82 insertions(+), 9 deletions(-)

diff --git a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
index 86b9098..e711d8d 100644
--- a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
+++ b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
@@ -18,6 +18,7 @@
 
 package org.apache.flink.table.planner.plan.rules.logical
 
+import org.apache.flink.table.api.TableException
 import org.apache.flink.table.expressions.FieldReferenceExpression
 import org.apache.flink.table.planner.calcite.FlinkTypeFactory
 import org.apache.flink.table.planner.calcite.FlinkTypeFactory.toLogicalType
@@ -28,6 +29,8 @@ import org.apache.calcite.rel.`type`.RelDataType
 import org.apache.calcite.rel.logical.{LogicalAggregate, LogicalProject}
 import org.apache.calcite.rex._
 
+import _root_.java.math.{BigDecimal => JBigDecimal}
+
 /**
  * Planner rule that transforms simple [[LogicalAggregate]] on a [[LogicalProject]]
   * with windowing expression to [[LogicalWindowAggregate]] for batch.
@@ -73,6 +76,12 @@ class BatchLogicalWindowAggregateRule
   ref.getIndex)
 }
   }
+
+  def getOperandAsLong(call: RexCall, idx: Int): Long =
+call.getOperands.get(idx) match {
+  case v: RexLiteral => v.getValue.asInstanceOf[JBigDecimal].longValue()
+  case _ => throw new TableException("Only constant window descriptors are supported")
+}
 }
 
 object BatchLogicalWindowAggregateRule {
diff --git a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalWindowAggregateRuleBase.scala b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalWindowAggregateRuleBase.scala
index ee24adb..9f88b8f 100644
--- a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalWindowAggregateRuleBase.scala
+++ b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalWindowAggregateRuleBase.scala
@@ -39,8 +39,6 @@ import org.apache.calcite.rex._
 import org.apache.calcite.sql.`type`.SqlTypeUtil
 import org.apache.calcite.util.ImmutableBitSet
 
-import _root_.java.math.BigDecimal
-
 import _root_.scala.collection.JavaConversions._
 
 /**
@@ -247,11 +245,6 @@ abstract class LogicalWindowAggregateRuleBase(description: String)
   windowExpr: RexCall,
   windowExprIdx: Int,
   rowType: RelDataType): LogicalWindow = {
-def getOperandAsLong(call: RexCall, idx: Int): Long =
-  call.getOperands.get(idx) match {
-case v: RexLiteral => v.getValue.asInstanceOf[BigDecimal].longValue()
-case _ => throw new TableException("Only constant window descriptors are supported")
-  }
 
 val timeField = getTimeFieldReference(windowExpr.getOperands.get(0), windowExprIdx, rowType)
 val resultType = Some(fromDataTypeToLogicalType(timeField.getOutputDataType))
@@ -288,4 +281,9 @@ abstract class LogicalWindowAggregateRuleBase(description: String)
   operand: RexNode,
   windowExprIdx: Int,
   rowType: RelDataType): FieldReferenceExpression
+
+  /**
+* get operand value as Long type
+*/
+  def getOperandAsLong(call: RexCall, idx: Int): Long
 }
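Beyond the guard itself, the diff above moves `getOperandAsLong` from a local function in `LogicalWindowAggregateRuleBase` to an abstract method that each planner-specific rule overrides, a template-method refactor. A minimal sketch under that reading, with simplified stand-in types rather than the actual Flink/Calcite classes:

```scala
// Template-method sketch: the abstract base declares the extraction hook,
// and each concrete rule decides how a window operand becomes a Long.
// WindowAggregateRuleBase and BatchRule are stand-ins, not the real classes.
abstract class WindowAggregateRuleBase {
  /** Hook supplied by subclasses: read operand `idx` as a Long. */
  def getOperandAsLong(operands: Seq[Any], idx: Int): Long

  /** Shared translation logic calls the hook without knowing the planner. */
  def windowSizeMillis(operands: Seq[Any]): Long = getOperandAsLong(operands, 1)
}

class BatchRule extends WindowAggregateRuleBase {
  override def getOperandAsLong(operands: Seq[Any], idx: Int): Long =
    operands(idx) match {
      case n: java.math.BigDecimal => n.longValue()
      case _ => throw new IllegalArgumentException("Only constant window descriptors are supported")
    }
}

// The base class's shared logic picks up the batch-specific extraction:
println(new BatchRule().windowSizeMillis(Seq("rowtime", java.math.BigDecimal.valueOf(5000)))) // prints 5000
```

This shape lets the stream-side rule supply a different override (for instance, one that also rejects unsupported interval resolutions) while the shared window-translation code stays in the base class.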
diff --git a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/StreamLogicalWindowAggregateRule.scala
 

[flink] branch master updated: [FLINK-13564][table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows

2019-08-22 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 8455872  [FLINK-13564][table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows
8455872 is described below

commit 845587232736b01bcad3c7a87f94a570842f011b
Author: godfreyhe 
AuthorDate: Sat Aug 3 17:50:55 2019 +0800

[FLINK-13564][table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows

This is the same fix as FLINK-11017, applied to the blink planner.

This closes #9349
---
 .../logical/BatchLogicalWindowAggregateRule.scala  |  9 ++
 .../logical/LogicalWindowAggregateRuleBase.scala   | 12 
 .../logical/StreamLogicalWindowAggregateRule.scala | 14 -
 .../plan/stream/sql/agg/WindowAggregateTest.xml| 21 +
 .../plan/stream/sql/agg/WindowAggregateTest.scala  | 35 +-
 5 files changed, 82 insertions(+), 9 deletions(-)

diff --git a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
index 86b9098..e711d8d 100644
--- a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
+++ b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
@@ -18,6 +18,7 @@
 
 package org.apache.flink.table.planner.plan.rules.logical
 
+import org.apache.flink.table.api.TableException
 import org.apache.flink.table.expressions.FieldReferenceExpression
 import org.apache.flink.table.planner.calcite.FlinkTypeFactory
 import org.apache.flink.table.planner.calcite.FlinkTypeFactory.toLogicalType
@@ -28,6 +29,8 @@ import org.apache.calcite.rel.`type`.RelDataType
 import org.apache.calcite.rel.logical.{LogicalAggregate, LogicalProject}
 import org.apache.calcite.rex._
 
+import _root_.java.math.{BigDecimal => JBigDecimal}
+
 /**
  * Planner rule that transforms simple [[LogicalAggregate]] on a [[LogicalProject]]
   * with windowing expression to [[LogicalWindowAggregate]] for batch.
@@ -73,6 +76,12 @@ class BatchLogicalWindowAggregateRule
   ref.getIndex)
 }
   }
+
+  def getOperandAsLong(call: RexCall, idx: Int): Long =
+call.getOperands.get(idx) match {
+  case v: RexLiteral => v.getValue.asInstanceOf[JBigDecimal].longValue()
+  case _ => throw new TableException("Only constant window descriptors are supported")
+}
 }
 
 object BatchLogicalWindowAggregateRule {
diff --git a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalWindowAggregateRuleBase.scala b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalWindowAggregateRuleBase.scala
index ee24adb..9f88b8f 100644
--- a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalWindowAggregateRuleBase.scala
+++ b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalWindowAggregateRuleBase.scala
@@ -39,8 +39,6 @@ import org.apache.calcite.rex._
 import org.apache.calcite.sql.`type`.SqlTypeUtil
 import org.apache.calcite.util.ImmutableBitSet
 
-import _root_.java.math.BigDecimal
-
 import _root_.scala.collection.JavaConversions._
 
 /**
@@ -247,11 +245,6 @@ abstract class LogicalWindowAggregateRuleBase(description: String)
   windowExpr: RexCall,
   windowExprIdx: Int,
   rowType: RelDataType): LogicalWindow = {
-def getOperandAsLong(call: RexCall, idx: Int): Long =
-  call.getOperands.get(idx) match {
-case v: RexLiteral => v.getValue.asInstanceOf[BigDecimal].longValue()
-case _ => throw new TableException("Only constant window descriptors are supported")
-  }
 
 val timeField = getTimeFieldReference(windowExpr.getOperands.get(0), windowExprIdx, rowType)
 val resultType = Some(fromDataTypeToLogicalType(timeField.getOutputDataType))
@@ -288,4 +281,9 @@ abstract class LogicalWindowAggregateRuleBase(description: String)
   operand: RexNode,
   windowExprIdx: Int,
   rowType: RelDataType): FieldReferenceExpression
+
+  /**
+* get operand value as Long type
+*/
+  def getOperandAsLong(call: RexCall, idx: Int): Long
 }
diff --git a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/StreamLogicalWindowAggregateRule.scala