[flink] branch release0 created (now cacfd98)

2020-09-30 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch release0
in repository https://gitbox.apache.org/repos/asf/flink.git.


  at cacfd98  [FLINK-19430][docs-zh][python] Translate page 
datastream_tutorial into Chinese (#13498)

No new revisions were added by this update.




[flink-web] 04/04: Rebuild website

2020-06-15 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit ac41805001489cdf211456f131cca4c397188ec2
Author: Fabian Hueske 
AuthorDate: Mon Jun 15 16:19:40 2020 +0200

Rebuild website
---
 content/blog/feed.xml  | 135 +---
 content/blog/index.html|  43 ++-
 content/blog/page10/index.html |  36 ++-
 content/blog/page11/index.html |  38 ++-
 content/blog/page12/index.html |  25 ++
 content/blog/page2/index.html  |  39 ++-
 content/blog/page3/index.html  |  38 ++-
 content/blog/page4/index.html  |  38 ++-
 content/blog/page5/index.html  |  38 ++-
 content/blog/page6/index.html  |  40 ++-
 content/blog/page7/index.html  |  40 ++-
 content/blog/page8/index.html  |  40 ++-
 content/blog/page9/index.html  |  38 ++-
 .../2020-06-15-flink-on-zeppelin/create_sink.png   | Bin 0 -> 138803 bytes
 .../2020-06-15-flink-on-zeppelin/create_source.png | Bin 0 -> 147213 bytes
 .../img/blog/2020-06-15-flink-on-zeppelin/etl.png  | Bin 0 -> 55319 bytes
 .../blog/2020-06-15-flink-on-zeppelin/preview.png  | Bin 0 -> 89756 bytes
 content/index.html |  10 +-
 .../news/2020/06/15/flink-on-zeppelin-part1.html   | 351 +
 content/zh/index.html  |  10 +-
 20 files changed, 756 insertions(+), 203 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 23b09f0..eabf57c 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,102 @@
 <atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 
+<title>Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 1</title>
+<p>The latest release of <a href="https://zeppelin.apache.org/">Apache Zeppelin</a> comes with a redesigned interpreter for Apache Flink (only Flink 1.10+ is supported moving forward)
+that allows developers to use Flink directly on Zeppelin notebooks for interactive data analysis. I have written two posts about how to use Flink in Zeppelin. This is Part 1, where I explain how the Flink interpreter in Zeppelin works,
+and provide a tutorial for running Streaming ETL with Flink on Zeppelin.</p>
+
+<h1 id="the-flink-interpreter-in-zeppelin-09">The Flink Interpreter in Zeppelin 0.9</h1>
+
+<p>The Flink interpreter can be accessed and configured from Zeppelin's interpreter settings page.
+The interpreter has been refactored so that Flink users can now take advantage of Zeppelin to write Flink applications in three languages,
+namely Scala, Python (PyFlink) and SQL (for both batch & streaming executions).
+Zeppelin 0.9 now comes with the Flink interpreter group, consisting of the five interpreters below:</p>
+
+<ul>
+  <li>%flink - Provides a Scala environment</li>
+  <li>%flink.pyflink - Provides a Python environment</li>
+  <li>%flink.ipyflink - Provides an IPython environment</li>
+  <li>%flink.ssql - Provides a stream SQL environment</li>
+  <li>%flink.bsql - Provides a batch SQL environment</li>
+</ul>
+
+<p>Not only has the interpreter been extended to support writing Flink applications in three languages, but it has also extended the available execution modes for Flink, which now include:</p>
+
+<ul>
+  <li>Running Flink in Local Mode</li>
+  <li>Running Flink in Remote Mode</li>
+  <li>Running Flink in Yarn Mode</li>
+</ul>
+
+<p>You can find more information about how to get started with Zeppelin and all the execution modes for Flink applications in the <a href="https://github.com/apache/zeppelin/tree/master/notebook/Flink%20Tutorial">Zeppelin notebooks</a> referenced in this post.</p>
+
+<h1 id="flink-on-zeppelin-for-stream-processing">Flink on Zeppelin for Stream Processing</h1>
+
+<p>Performing stream processing jobs with Apache Flink on Zeppelin allows you to run most major streaming use cases,
+such as streaming ETL and real-time data analytics, with the use of Flink SQL and specific UDFs.
+Below we showcase how you can execute streaming ETL using Flink on Zeppelin:</p>
+
+<p>You can use Flink SQL to perform streaming ETL by following the steps below
+(for the full tutorial, please refer to the <a href="https://github.com/apache/zeppelin/blob/master/notebook/Flink%20Tutorial/4.%20Streaming%20ETL_2EYD56B9B.zpln">Flink Tutorial/Streaming ETL tutorial</a> of the Zeppelin distribution):</p>
+
+<ul>
+  <li>Step 1. Create a source table to represent the source data.</li>
+</ul>
+
+<center>
+<img src="/img/blog/2020-06-15-flink-on-zeppelin/create_source.png" width="80%" alt="Create Source Table" />
+</center>
+
+<ul>
+  <li>Step 2. Create a sink table to represent the processed data.</li>
+</ul>
+
+<center>
+<img src="/img/blog/2020-06-15-flink-on-zeppelin/create_sink.png" width="80%" alt="Create Si

[flink-web] 03/04: update blog for part-1

2020-06-15 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 07a3b5ff22a2c1267a5d6d61a947f95a188007c8
Author: Jeff Zhang 
AuthorDate: Mon Jun 1 22:10:17 2020 +0800

update blog for part-1
---
 ...ppelin.md => 2020-06-15-flink-on-zeppelin-part1.md} |  17 +
 .../create_sink.png| Bin
 .../create_source.png  | Bin
 .../etl.png| Bin
 .../preview.png| Bin
 5 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/_posts/2020-05-25-flink-on-zeppelin.md 
b/_posts/2020-06-15-flink-on-zeppelin-part1.md
similarity index 84%
rename from _posts/2020-05-25-flink-on-zeppelin.md
rename to _posts/2020-06-15-flink-on-zeppelin-part1.md
index b965fe4..d0070d1 100644
--- a/_posts/2020-05-25-flink-on-zeppelin.md
+++ b/_posts/2020-06-15-flink-on-zeppelin-part1.md
@@ -1,7 +1,7 @@
 ---
 layout: post
-title:  "Flink on Zeppelin Notebooks for Interactive Data Analysis"
-date:   2020-05-25T08:00:00.000Z
+title:  "Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 1"
+date:   2020-06-15T08:00:00.000Z
 categories: news
 authors:
 - zjffdu:
@@ -10,7 +10,7 @@ authors:
 ---
 
 The latest release of [Apache Zeppelin](https://zeppelin.apache.org/) comes with a redesigned interpreter for Apache Flink (only Flink 1.10+ is supported moving forward) 
-that allows developers to use Flink directly on Zeppelin notebooks for interactive data analysis. In this post, we explain how the Flink interpreter in Zeppelin works, 
+that allows developers to use Flink directly on Zeppelin notebooks for interactive data analysis. I have written two posts about how to use Flink in Zeppelin. This is Part 1, where I explain how the Flink interpreter in Zeppelin works, 
 and provide a tutorial for running Streaming ETL with Flink on Zeppelin.
 
 # The Flink Interpreter in Zeppelin 0.9
@@ -48,31 +48,32 @@ You can use Flink SQL to perform streaming ETL by following 
the steps below
 * Step 1. Create a source table to represent the source data.
 
 
-[image: img/blog/2020-05-25-flink-on-zeppelin/create_source.png]
+[image: img/blog/2020-06-15-flink-on-zeppelin/create_source.png]
 
 
 * Step 2. Create a sink table to represent the processed data.
 
 
-[image: img/blog/2020-05-25-flink-on-zeppelin/create_sink.png]
+[image: img/blog/2020-06-15-flink-on-zeppelin/create_sink.png]
 
 
 * Step 3. After creating the source and sink table, we can run an INSERT statement to trigger the stream processing job as follows: 
 
 
-[image: img/blog/2020-05-25-flink-on-zeppelin/etl.png]
+[image: img/blog/2020-06-15-flink-on-zeppelin/etl.png]
 
 
 * Step 4. After initiating the streaming job, you can use another SQL 
statement to query the sink table to verify the results of your job. Here you 
can see the top 10 records which will be refreshed every 3 seconds.
 
 
-[image: img/blog/2020-05-25-flink-on-zeppelin/preview.png]
+[image: img/blog/2020-06-15-flink-on-zeppelin/preview.png]
 
 
 # Summary
 
 In this post, we explained how the redesigned Flink interpreter works in 
Zeppelin 0.9.0 and provided some examples for performing streaming ETL jobs 
with 
-Flink and Zeppelin. You can find an additional [tutorial for batch processing 
with Flink on 
Zeppelin](https://medium.com/@zjffdu/flink-on-zeppelin-part-2-batch-711731df5ad9)
 as well as using Flink on Zeppelin for 
+Flink and Zeppelin. In the next post, I will talk about how to do streaming 
data visualization via Flink on Zeppelin.
+Besides that, you can find an additional [tutorial for batch processing with Flink on Zeppelin](https://medium.com/@zjffdu/flink-on-zeppelin-part-2-batch-711731df5ad9), as well as posts on using Flink on Zeppelin for 
 more advanced operations like resource isolation, job concurrency & parallelism, multiple Hadoop & Hive environments, and more in our series of posts on Medium.
 And here's a list of [Flink on Zeppelin tutorial videos](https://www.youtube.com/watch?v=YxPo0Fosjjg&list=PL4oy12nnS7FFtg3KV1iS5vDb0pTz12VcX) for your reference.
 
diff --git a/img/blog/2020-05-25-flink-on-zeppelin/create_sink.png 
b/img/blog/2020-06-15-flink-on-zeppelin/create_sink.png
similarity index 100%
rename from img/blog/2020-05-25-flink-on-zeppelin/create_sink.png
rename to img/blog/2020-06-15-flink-on-zeppelin/create_sink.png
diff --git a/img/blog/2020-05-25-flink-on-zeppelin/create_source.png 
b/img/blog/2020-06-15-flink-on-zeppelin/create_source.png
similarity index 100%
rename from img/blog/2020-05-25-flink-on-zeppelin/create_source.png
rename to img/blog/2020-06-15-flink-on-zeppelin/create_source.png
diff --git a/img/blog/2020-05-25-flink-on-zeppelin/etl.png 
b/img/blog/2020-06-15-flink-on-zeppelin/etl.png
similarity index 100%
rename from img/blog/2020-05-25-flink-on-zeppelin/etl.png
rename to img/blog/2020-06-15-flink-on-zeppelin/etl.png
diff --git a/img/blog/2020-05-25-flink-on-zeppelin/preview.png 
b/img/blog/2020-06-15-flink-on-zeppelin/preview.png
similarity index 100%
rename from img/blog/2020-05-25-flink-on-zeppelin/preview.png
rename to img/blog/2020-06-15-flink-on-zeppelin/preview.png



[flink-web] 01/04: [blog] flink on zeppelin

2020-06-15 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit a31c781cc232c65233d523c21c30824135c4ab1b
Author: Jeff Zhang 
AuthorDate: Mon May 25 22:30:27 2020 +0800

[blog] flink on zeppelin
---
 _posts/2020-05-25-flink-on-zeppelin.md |  83 +
 .../2020-05-25-flink-on-zeppelin/create_sink.png   | Bin 0 -> 138803 bytes
 .../2020-05-25-flink-on-zeppelin/create_source.png | Bin 0 -> 147213 bytes
 img/blog/2020-05-25-flink-on-zeppelin/etl.png  | Bin 0 -> 55319 bytes
 img/blog/2020-05-25-flink-on-zeppelin/preview.png  | Bin 0 -> 89756 bytes
 5 files changed, 83 insertions(+)

diff --git a/_posts/2020-05-25-flink-on-zeppelin.md 
b/_posts/2020-05-25-flink-on-zeppelin.md
new file mode 100644
index 000..080a74b
--- /dev/null
+++ b/_posts/2020-05-25-flink-on-zeppelin.md
@@ -0,0 +1,83 @@
+---
+layout: post
+title:  "Flink on Zeppelin Notebooks for Interactive Data Analysis"
+date:   2020-05-25T08:00:00.000Z
+categories: news
+authors:
+- zjffdu:
+  name: "Jeff Zhang"
+  twitter: "zjffdu"
+---
+
+The latest release of Apache Zeppelin comes with a redesigned interpreter for Apache Flink (only Flink 1.10+ is supported moving forward) 
+that allows developers and data engineers to use Flink directly on Zeppelin 
notebooks for interactive data analysis. In this post, we explain how the Flink 
interpreter in Zeppelin works, 
+and provide a tutorial for running Streaming ETL with Flink on Zeppelin.
+
+# The Flink Interpreter in Zeppelin 0.9
+
+The Flink interpreter can be accessed and configured from Zeppelin’s 
interpreter settings page. 
+The interpreter has been refactored so that Flink users can now take advantage 
of Zeppelin to write Flink applications in three languages, 
+namely Scala, Python (PyFlink) and SQL (for both batch & streaming 
executions). 
+Zeppelin 0.9 now comes with the Flink interpreter group, consisting of the 
below five interpreters: 
+
+* %flink - Provides a Scala environment
+* %flink.pyflink   - Provides a python environment
+* %flink.ipyflink   - Provides an ipython environment
+* %flink.bsql - Provides a stream sql environment
+* %flink.ssql - Provides a batch sql environment
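As an illustration of the interpreter group above, here is a hedged sketch of what a %flink.pyflink paragraph might contain. It assumes the interpreter predefines a StreamTableEnvironment named st_env, as Zeppelin's Flink interpreter docs describe, and a Flink version of 1.11+ where execute_sql is available; the table and query are made up for the example:

```python
# Hypothetical %flink.pyflink paragraph; st_env is assumed to be the
# StreamTableEnvironment predefined by Zeppelin's Flink interpreter.
t = st_env.from_elements([(1, 'flink'), (2, 'zeppelin')], ['id', 'name'])
st_env.create_temporary_view('demo', t)
# Query the registered view and print the result (execute_sql needs 1.11+).
st_env.execute_sql('SELECT name, COUNT(*) AS cnt FROM demo GROUP BY name').print()
```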
+
+Not only has the interpreter been extended to support writing Flink 
applications in three languages, but it has also extended the available 
execution modes for Flink that now include:
+* Running Flink in Local Mode
+* Running Flink in Remote Mode
+* Running Flink in Yarn Mode
+
+
+You can find more information about how to get started with Zeppelin and all 
the execution modes for Flink applications in Zeppelin notebooks in this post. 
+
+
+# Flink on Zeppelin for Stream processing
+
+Performing stream processing jobs with Apache Flink on Zeppelin allows you to 
run most major streaming cases, 
+such as streaming ETL and real time data analytics, with the use of Flink SQL 
and specific UDFs. 
+Below we showcase how you can execute streaming ETL using Flink on Zeppelin: 
+
+You can use Flink SQL to perform streaming ETL by following the steps below 
+(for the full tutorial, please refer to the Flink Tutorial/Streaming ETL 
tutorial of the Zeppelin distribution):
+
+* Step 1. Create a source table to represent the source data.
+
+[image: img/blog/2020-05-25-flink-on-zeppelin/create_source.png]
+
+* Step 2. Create a sink table to represent the processed data.
+
+[image: img/blog/2020-05-25-flink-on-zeppelin/create_sink.png]
+
+* Step 3. After creating the source and sink table, we can use insert them to 
our statement to trigger the streaming processing job as the following: 
+
+[image: img/blog/2020-05-25-flink-on-zeppelin/etl.png]
+
+* Step 4. After initiating the streaming job, you can use another SQL 
statement to query the sink table to verify your streaming job. Here you can 
see the top 10 records which will be refreshed every 3 seconds.
+
+[image: img/blog/2020-05-25-flink-on-zeppelin/preview.png]
+
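For readers who want to follow the four steps outside of a notebook, here is a hedged, stand-alone PyFlink sketch of the same flow. The schemas and the datagen/print connectors are illustrative placeholders rather than the tutorial's table definitions, and execute_sql assumes Flink 1.11+:

```python
# Hedged stand-alone sketch of the four streaming-ETL steps above.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_streaming_mode().build())

# Step 1: create a source table to represent the source data.
t_env.execute_sql("""
    CREATE TABLE source_table (
        user_id STRING,
        amount DOUBLE
    ) WITH ('connector' = 'datagen', 'rows-per-second' = '5')
""")

# Step 2: create a sink table to represent the processed data.
t_env.execute_sql("""
    CREATE TABLE sink_table (
        user_id STRING,
        cnt BIGINT
    ) WITH ('connector' = 'print')
""")

# Step 3: an INSERT INTO statement triggers the continuous job
# (execute_sql submits it asynchronously and returns a TableResult).
t_env.execute_sql("""
    INSERT INTO sink_table
    SELECT user_id, COUNT(*) AS cnt
    FROM source_table
    GROUP BY user_id
""")

# Step 4: with the print connector, the continuously updated results show
# up on stdout; in Zeppelin you would instead run a %flink.ssql SELECT on
# the sink table to watch the top records refresh.
```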
+# Summary
+
+In this post, we explained how the redesigned Flink interpreter works in 
Zeppelin 0.9.0 and provided some examples for performing streaming ETL jobs 
with 
+Flink and Zeppelin. You can find additional tutorial for batch processing with 
Flink on Zeppelin as well as using Flink on Zeppelin for 
+more advance operations like resource isolation, job concurrency & 
parallelism, multiple Hadoop & Hive environments and more on our series of post 
on Medium.
+
+# References
+
+* [Apache Zeppelin official website](http://zeppelin.apache.org)
+* Flink on Zeppelin tutorials - [Part 
1](https://medium.com/@zjffdu/flink-on-zeppelin-part-1-get-started-2591aaa6aa47)
+* Flink on Zeppelin tutorials - [Part 
2](https://medium.com/@zjffdu/flink-on-zeppelin-part-2-batch-711731df5ad9)
+* Flink on Zeppelin tutorials - [Part 
3](https://medium.com/@zjffdu/flink-on-zeppelin-part-3-streaming-5fca1e16754)
+* Flink on Zeppelin tutorials - [Part 
4](https://medium.com/@zjffdu/flink-on-zeppelin-part-4-advanced-usage-998b74908cd9)
diff --git a/img/blog/2020-05-25-flink-on-zeppelin/create_sink.png 
b/img/blog

[flink-web] branch asf-site updated (75a8f23 -> ac41805)

2020-06-15 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 75a8f23  regenerate page
 new a31c781  [blog] flink on zeppelin
 new ff4ccf3  Apply suggestions from code review
 new 07a3b5f  update blog for part-1
 new ac41805  Rebuild website

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _posts/2020-06-15-flink-on-zeppelin-part1.md   |  88 ++
 content/blog/feed.xml  | 135 +++--
 content/blog/index.html|  43 ---
 content/blog/page10/index.html |  36 --
 content/blog/page11/index.html |  38 +++---
 content/blog/page12/index.html |  25 
 content/blog/page2/index.html  |  39 --
 content/blog/page3/index.html  |  38 +++---
 content/blog/page4/index.html  |  38 --
 content/blog/page5/index.html  |  38 +++---
 content/blog/page6/index.html  |  40 +++---
 content/blog/page7/index.html  |  40 +++---
 content/blog/page8/index.html  |  40 +++---
 content/blog/page9/index.html  |  38 --
 .../2020-06-15-flink-on-zeppelin/create_sink.png   | Bin 0 -> 138803 bytes
 .../2020-06-15-flink-on-zeppelin/create_source.png | Bin 0 -> 147213 bytes
 .../img/blog/2020-06-15-flink-on-zeppelin/etl.png  | Bin 0 -> 55319 bytes
 .../blog/2020-06-15-flink-on-zeppelin/preview.png  | Bin 0 -> 89756 bytes
 content/index.html |  10 +-
 .../06/15/flink-on-zeppelin-part1.html}| 118 --
 content/zh/index.html  |  10 +-
 .../2020-06-15-flink-on-zeppelin/create_sink.png   | Bin 0 -> 138803 bytes
 .../2020-06-15-flink-on-zeppelin/create_source.png | Bin 0 -> 147213 bytes
 img/blog/2020-06-15-flink-on-zeppelin/etl.png  | Bin 0 -> 55319 bytes
 img/blog/2020-06-15-flink-on-zeppelin/preview.png  | Bin 0 -> 89756 bytes
 25 files changed, 574 insertions(+), 240 deletions(-)
 create mode 100644 _posts/2020-06-15-flink-on-zeppelin-part1.md
 create mode 100644 
content/img/blog/2020-06-15-flink-on-zeppelin/create_sink.png
 create mode 100644 
content/img/blog/2020-06-15-flink-on-zeppelin/create_source.png
 create mode 100644 content/img/blog/2020-06-15-flink-on-zeppelin/etl.png
 create mode 100644 content/img/blog/2020-06-15-flink-on-zeppelin/preview.png
 copy content/news/{2015/09/03/flink-forward.html => 
2020/06/15/flink-on-zeppelin-part1.html} (66%)
 create mode 100644 img/blog/2020-06-15-flink-on-zeppelin/create_sink.png
 create mode 100644 img/blog/2020-06-15-flink-on-zeppelin/create_source.png
 create mode 100644 img/blog/2020-06-15-flink-on-zeppelin/etl.png
 create mode 100644 img/blog/2020-06-15-flink-on-zeppelin/preview.png



[flink-web] 02/04: Apply suggestions from code review

2020-06-15 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit ff4ccf3054aa4bf07b906740222c67db150dda90
Author: Jeff Zhang 
AuthorDate: Wed May 27 12:08:11 2020 +0800

Apply suggestions from code review

address comment

Co-authored-by: MarkSfik <47176197+marks...@users.noreply.github.com>
Co-authored-by: morsapaes 
---
 _posts/2020-05-25-flink-on-zeppelin.md | 24 ++--
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/_posts/2020-05-25-flink-on-zeppelin.md 
b/_posts/2020-05-25-flink-on-zeppelin.md
index 080a74b..b965fe4 100644
--- a/_posts/2020-05-25-flink-on-zeppelin.md
+++ b/_posts/2020-05-25-flink-on-zeppelin.md
@@ -9,8 +9,8 @@ authors:
   twitter: "zjffdu"
 ---
 
-The latest release of Apache Zeppelin comes with a redesigned interpreter for Apache Flink (only Flink 1.10+ is supported moving forward) 
-that allows developers and data engineers to use Flink directly on Zeppelin notebooks for interactive data analysis. In this post, we explain how the Flink interpreter in Zeppelin works, 
+The latest release of [Apache Zeppelin](https://zeppelin.apache.org/) comes with a redesigned interpreter for Apache Flink (only Flink 1.10+ is supported moving forward) 
+that allows developers to use Flink directly on Zeppelin notebooks for interactive data analysis. In this post, we explain how the Flink interpreter in Zeppelin works, 
 and provide a tutorial for running Streaming ETL with Flink on Zeppelin.
 
 # The Flink Interpreter in Zeppelin 0.9
@@ -23,16 +23,17 @@ Zeppelin 0.9 now comes with the Flink interpreter group, 
consisting of the below
 * %flink - Provides a Scala environment
 * %flink.pyflink   - Provides a python environment
 * %flink.ipyflink   - Provides an ipython environment
-* %flink.bsql - Provides a stream sql environment
-* %flink.ssql - Provides a batch sql environment
+* %flink.ssql - Provides a stream sql environment
+* %flink.bsql - Provides a batch sql environment
 
 Not only has the interpreter been extended to support writing Flink 
applications in three languages, but it has also extended the available 
execution modes for Flink that now include:
+
 * Running Flink in Local Mode
 * Running Flink in Remote Mode
 * Running Flink in Yarn Mode
 
 
-You can find more information about how to get started with Zeppelin and all 
the execution modes for Flink applications in Zeppelin notebooks in this post. 
+You can find more information about how to get started with Zeppelin and all the execution modes for Flink applications in the [Zeppelin notebooks](https://github.com/apache/zeppelin/tree/master/notebook/Flink%20Tutorial) referenced in this post. 
 
 
 # Flink on Zeppelin for Stream processing
@@ -42,7 +43,7 @@ such as streaming ETL and real time data analytics, with the 
use of Flink SQL an
 Below we showcase how you can execute streaming ETL using Flink on Zeppelin: 
 
 You can use Flink SQL to perform streaming ETL by following the steps below 
-(for the full tutorial, please refer to the Flink Tutorial/Streaming ETL 
tutorial of the Zeppelin distribution):
+(for the full tutorial, please refer to the [Flink Tutorial/Streaming ETL 
tutorial](https://github.com/apache/zeppelin/blob/master/notebook/Flink%20Tutorial/4.%20Streaming%20ETL_2EYD56B9B.zpln)
 of the Zeppelin distribution):
 
 * Step 1. Create a source table to represent the source data.
 
@@ -56,13 +57,13 @@ You can use Flink SQL to perform streaming ETL by following 
the steps below
 
 
 
-* Step 3. After creating the source and sink table, we can use insert them to 
our statement to trigger the streaming processing job as the following: 
+* Step 3. After creating the source and sink table, we can run an INSERT statement to trigger the stream processing job as follows: 
 
 
 
 
 
-* Step 4. After initiating the streaming job, you can use another SQL 
statement to query the sink table to verify your streaming job. Here you can 
see the top 10 records which will be refreshed every 3 seconds.
+* Step 4. After initiating the streaming job, you can use another SQL 
statement to query the sink table to verify the results of your job. Here you 
can see the top 10 records which will be refreshed every 3 seconds.
 
 
 
@@ -71,8 +72,10 @@ You can use Flink SQL to perform streaming ETL by following 
the steps below
 # Summary
 
 In this post, we explained how the redesigned Flink interpreter works in 
Zeppelin 0.9.0 and provided some examples for performing streaming ETL jobs 
with 
-Flink and Zeppelin. You can find additional tutorial for batch processing with 
Flink on Zeppelin as well as using Flink on Zeppelin for 
-more advance operations like resource isolation, job concurrency & 
parallelism, multiple Hadoop & Hive environments and more on our series of post 
on Medium.
+Flink an

[flink] branch release-1.10 updated: [hotfix][docs] Fix and improve query configuration docs.

2020-05-13 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.10 by this push:
 new 38a75ca  [hotfix][docs] Fix and improve query configuration docs.
38a75ca is described below

commit 38a75cab9b2aba86a0ed8deaf986a3d9d8d7f1f3
Author: Fabian Hueske 
AuthorDate: Tue May 12 11:03:59 2020 +0200

[hotfix][docs] Fix and improve query configuration docs.

* Fix: TableConfig is *not* passed back when a Table is translated.
---
 docs/dev/table/streaming/query_configuration.md| 4 ++--
 docs/dev/table/streaming/query_configuration.zh.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/dev/table/streaming/query_configuration.md 
b/docs/dev/table/streaming/query_configuration.md
index 3bf0c45..bf84843 100644
--- a/docs/dev/table/streaming/query_configuration.md
+++ b/docs/dev/table/streaming/query_configuration.md
@@ -22,9 +22,9 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Table API and SQL queries have the same semantics regardless whether their 
input is bounded batch input or unbounded stream input. In many cases, 
continuous queries on streaming input are capable of computing accurate results 
that are identical to offline computed results. However, this is not possible 
in general case because continuous queries have to restrict the size of the 
state they are maintaining in order to avoid to run out of storage and to be 
able to process unbounded streaming [...]
+Table API and SQL queries have the same semantics regardless of whether their input is a finite set of rows or an unbounded stream of table changes. In many cases, continuous queries on streaming input are able to compute accurate results that are identical to offline computed results. However, for some continuous queries you have to limit the size of the state they are maintaining in order to avoid running out of storage while ingesting an unbounded stream of input. It depends on the charac [...]
 
-Flink's Table API and SQL interface provide parameters to tune the accuracy 
and resource consumption of continuous queries. The parameters are specified 
via a `TableConfig` object. The `TableConfig` can be obtained from the 
`TableEnvironment` and is passed back when a `Table` is translated, i.e., when 
it is [transformed into a DataStream]({{ site.baseurl 
}}/dev/table/common.html#convert-a-table-into-a-datastream-or-dataset) or 
[emitted via a TableSink](../common.html#emit-a-table).
+Flink's Table API and SQL interface provide parameters to tune the accuracy 
and resource consumption of continuous queries. The parameters are specified 
via a `TableConfig` object, which can be obtained from the `TableEnvironment`.
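For illustration, here is a hedged PyFlink sketch of the corrected workflow: the TableConfig is obtained from the TableEnvironment, and the idle state retention discussed on this page is set on it (the method name set_idle_state_retention_time and its timedelta arguments follow the PyFlink 1.10/1.11-era API, which is an assumption):

```python
# Hedged sketch: TableConfig is obtained from the TableEnvironment and is
# not passed back when a Table is translated (the point of this fix).
from datetime import timedelta

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_streaming_mode().build())

table_config = t_env.get_config()
# Assumed 1.10/1.11-era API: minimum and maximum idle state retention.
table_config.set_idle_state_retention_time(
    timedelta(hours=12), timedelta(hours=24))
```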
 
 
 
diff --git a/docs/dev/table/streaming/query_configuration.zh.md 
b/docs/dev/table/streaming/query_configuration.zh.md
index 3bf0c45..bf84843 100644
--- a/docs/dev/table/streaming/query_configuration.zh.md
+++ b/docs/dev/table/streaming/query_configuration.zh.md
@@ -22,9 +22,9 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Table API and SQL queries have the same semantics regardless whether their 
input is bounded batch input or unbounded stream input. In many cases, 
continuous queries on streaming input are capable of computing accurate results 
that are identical to offline computed results. However, this is not possible 
in general case because continuous queries have to restrict the size of the 
state they are maintaining in order to avoid to run out of storage and to be 
able to process unbounded streaming [...]
+Table API and SQL queries have the same semantics regardless of whether their input is a finite set of rows or an unbounded stream of table changes. In many cases, continuous queries on streaming input are able to compute accurate results that are identical to offline computed results. However, for some continuous queries you have to limit the size of the state they are maintaining in order to avoid running out of storage while ingesting an unbounded stream of input. It depends on the charac [...]
 
-Flink's Table API and SQL interface provide parameters to tune the accuracy 
and resource consumption of continuous queries. The parameters are specified 
via a `TableConfig` object. The `TableConfig` can be obtained from the 
`TableEnvironment` and is passed back when a `Table` is translated, i.e., when 
it is [transformed into a DataStream]({{ site.baseurl 
}}/dev/table/common.html#convert-a-table-into-a-datastream-or-dataset) or 
[emitted via a TableSink](../common.html#emit-a-table).
+Flink's Table API and SQL interface provide parameters to tune the accuracy 
and resource consumption of continuous queries. The parameters are specified 
via a `TableConfig` object, which can be obtained from the `TableEnvironment`.
 
 
 



[flink] branch master updated: [hotfix][docs] Fix and improve query configuration docs.

2020-05-13 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 87c9e4d  [hotfix][docs] Fix and improve query configuration docs.
87c9e4d is described below

commit 87c9e4dd482914a81c5ac69bd85bca5f5674c377
Author: Fabian Hueske 
AuthorDate: Tue May 12 11:03:59 2020 +0200

[hotfix][docs] Fix and improve query configuration docs.

* Fix: TableConfig is *not* passed back when a Table is translated.
---
 docs/dev/table/streaming/query_configuration.md| 4 ++--
 docs/dev/table/streaming/query_configuration.zh.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/dev/table/streaming/query_configuration.md 
b/docs/dev/table/streaming/query_configuration.md
index 3bf0c45..bf84843 100644
--- a/docs/dev/table/streaming/query_configuration.md
+++ b/docs/dev/table/streaming/query_configuration.md
@@ -22,9 +22,9 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Table API and SQL queries have the same semantics regardless whether their 
input is bounded batch input or unbounded stream input. In many cases, 
continuous queries on streaming input are capable of computing accurate results 
that are identical to offline computed results. However, this is not possible 
in general case because continuous queries have to restrict the size of the 
state they are maintaining in order to avoid to run out of storage and to be 
able to process unbounded streaming [...]
+Table API and SQL queries have the same semantics regardless of whether their input is a finite set of rows or an unbounded stream of table changes. In many cases, continuous queries on streaming input are able to compute accurate results that are identical to offline computed results. However, for some continuous queries you have to limit the size of the state they are maintaining in order to avoid running out of storage while ingesting an unbounded stream of input. It depends on the charac [...]
 
-Flink's Table API and SQL interface provide parameters to tune the accuracy 
and resource consumption of continuous queries. The parameters are specified 
via a `TableConfig` object. The `TableConfig` can be obtained from the 
`TableEnvironment` and is passed back when a `Table` is translated, i.e., when 
it is [transformed into a DataStream]({{ site.baseurl 
}}/dev/table/common.html#convert-a-table-into-a-datastream-or-dataset) or 
[emitted via a TableSink](../common.html#emit-a-table).
+Flink's Table API and SQL interface provide parameters to tune the accuracy 
and resource consumption of continuous queries. The parameters are specified 
via a `TableConfig` object, which can be obtained from the `TableEnvironment`.
 
 
 
diff --git a/docs/dev/table/streaming/query_configuration.zh.md 
b/docs/dev/table/streaming/query_configuration.zh.md
index 3bf0c45..bf84843 100644
--- a/docs/dev/table/streaming/query_configuration.zh.md
+++ b/docs/dev/table/streaming/query_configuration.zh.md
@@ -22,9 +22,9 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Table API and SQL queries have the same semantics regardless whether their 
input is bounded batch input or unbounded stream input. In many cases, 
continuous queries on streaming input are capable of computing accurate results 
that are identical to offline computed results. However, this is not possible 
in general case because continuous queries have to restrict the size of the 
state they are maintaining in order to avoid to run out of storage and to be 
able to process unbounded streaming [...]
+Table API and SQL queries have the same semantics regardless of whether their input is a finite set of rows or an unbounded stream of table changes. In many cases, continuous queries on streaming input are able to compute accurate results that are identical to offline computed results. However, for some continuous queries you have to limit the size of the state they are maintaining in order to avoid running out of storage while ingesting an unbounded stream of input. It depends on the charac [...]
 
-Flink's Table API and SQL interface provide parameters to tune the accuracy 
and resource consumption of continuous queries. The parameters are specified 
via a `TableConfig` object. The `TableConfig` can be obtained from the 
`TableEnvironment` and is passed back when a `Table` is translated, i.e., when 
it is [transformed into a DataStream]({{ site.baseurl 
}}/dev/table/common.html#convert-a-table-into-a-datastream-or-dataset) or 
[emitted via a TableSink](../common.html#emit-a-table).
+Flink's Table API and SQL interface provide parameters to tune the accuracy 
and resource consumption of continuous queries. The parameters are specified 
via a `TableConfig` object, which can be obtained from the `TableEnvironment`.
 
 
 



[flink-playgrounds] branch release-1.9 updated: [FLINK-16540] Fully specify bugfix version of Flink images in docker-compose.yaml

2020-03-20 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 95d2fdc  [FLINK-16540] Fully specify bugfix version of Flink images in 
docker-compose.yaml
95d2fdc is described below

commit 95d2fdc0078df96b2ae2b4a40ccde76f83327f8a
Author: Fabian Hueske 
AuthorDate: Wed Mar 11 11:26:17 2020 +0100

[FLINK-16540] Fully specify bugfix version of Flink images in 
docker-compose.yaml

* Update Flink version to 1.9.2

This closes #10.
---
 docker/ops-playground-image/Dockerfile| 2 +-
 operations-playground/docker-compose.yaml | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docker/ops-playground-image/Dockerfile 
b/docker/ops-playground-image/Dockerfile
index 59b40a0..d931804 100644
--- a/docker/ops-playground-image/Dockerfile
+++ b/docker/ops-playground-image/Dockerfile
@@ -32,7 +32,7 @@ RUN mvn clean install
 # Build Operations Playground Image
 ###
 
-FROM flink:1.9.0-scala_2.11
+FROM flink:1.9.2-scala_2.11
 
 WORKDIR /opt/flink/bin
 
diff --git a/operations-playground/docker-compose.yaml 
b/operations-playground/docker-compose.yaml
index 5a88b98..270bb2d 100644
--- a/operations-playground/docker-compose.yaml
+++ b/operations-playground/docker-compose.yaml
@@ -20,7 +20,7 @@ version: "2.1"
 services:
   client:
 build: ../docker/ops-playground-image
-image: apache/flink-ops-playground:2-FLINK-1.9-scala_2.11
+image: apache/flink-ops-playground:3-FLINK-1.9-scala_2.11
 command: "flink run -d -p 2 /opt/ClickCountJob.jar --bootstrap.servers 
kafka:9092 --checkpointing --event-time"
 depends_on:
   - jobmanager
@@ -35,7 +35,7 @@ services:
 depends_on:
   - kafka
   jobmanager:
-image: flink:1.9-scala_2.11
+image: flink:1.9.2-scala_2.11
 command: "jobmanager.sh start-foreground"
 ports:
   - 8081:8081
@@ -46,7 +46,7 @@ services:
 environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
   taskmanager:
-image: flink:1.9-scala_2.11
+image: flink:1.9.2-scala_2.11
 depends_on:
   - jobmanager
 command: "taskmanager.sh start-foreground"



[flink-playgrounds] branch release-1.10 updated: [FLINK-16540] Fully specify bugfix version of Flink images in docker-compose.yaml

2020-03-20 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/release-1.10 by this push:
 new f3261ca  [FLINK-16540] Fully specify bugfix version of Flink images in 
docker-compose.yaml
f3261ca is described below

commit f3261ca2bcfb69439050024cd94f2ceae488b0f1
Author: Fabian Hueske 
AuthorDate: Wed Mar 11 11:26:17 2020 +0100

[FLINK-16540] Fully specify bugfix version of Flink images in 
docker-compose.yaml

This closes #10.
---
 operations-playground/docker-compose.yaml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/operations-playground/docker-compose.yaml 
b/operations-playground/docker-compose.yaml
index 4b25f15..919f648 100644
--- a/operations-playground/docker-compose.yaml
+++ b/operations-playground/docker-compose.yaml
@@ -35,7 +35,7 @@ services:
 depends_on:
   - kafka
   jobmanager:
-image: flink:1.10-scala_2.11
+image: flink:1.10.0-scala_2.11
 command: "jobmanager.sh start-foreground"
 ports:
   - 8081:8081
@@ -46,7 +46,7 @@ services:
 environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
   taskmanager:
-image: flink:1.10-scala_2.11
+image: flink:1.10.0-scala_2.11
 depends_on:
   - jobmanager
 command: "taskmanager.sh start-foreground"



[flink-playgrounds] branch master updated: [FLINK-16540] Fully specify bugfix version of Flink images in docker-compose.yaml

2020-03-20 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/master by this push:
 new a27301e  [FLINK-16540] Fully specify bugfix version of Flink images in 
docker-compose.yaml
a27301e is described below

commit a27301ecaace8bacefb2464ef0a788b81ba11827
Author: Fabian Hueske 
AuthorDate: Wed Mar 11 11:26:17 2020 +0100

[FLINK-16540] Fully specify bugfix version of Flink images in 
docker-compose.yaml

This closes #10.
---
 operations-playground/docker-compose.yaml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/operations-playground/docker-compose.yaml 
b/operations-playground/docker-compose.yaml
index 4b25f15..919f648 100644
--- a/operations-playground/docker-compose.yaml
+++ b/operations-playground/docker-compose.yaml
@@ -35,7 +35,7 @@ services:
 depends_on:
   - kafka
   jobmanager:
-image: flink:1.10-scala_2.11
+image: flink:1.10.0-scala_2.11
 command: "jobmanager.sh start-foreground"
 ports:
   - 8081:8081
@@ -46,7 +46,7 @@ services:
 environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
   taskmanager:
-image: flink:1.10-scala_2.11
+image: flink:1.10.0-scala_2.11
 depends_on:
   - jobmanager
 command: "taskmanager.sh start-foreground"



[flink-playgrounds] branch release-1.10 created (now aca293d)

2020-03-11 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git.


  at aca293d  [FLINK-16148] Update Operations Playground to Flink 1.10.0

No new revisions were added by this update.



[flink-playgrounds] branch master updated: [FLINK-16148] Update Operations Playground to Flink 1.10.0

2020-03-11 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/master by this push:
 new aca293d  [FLINK-16148] Update Operations Playground to Flink 1.10.0
aca293d is described below

commit aca293d8b20874555c9491c593f7c3991f670ad1
Author: David Anderson 
AuthorDate: Tue Mar 10 20:06:33 2020 +0100

[FLINK-16148] Update Operations Playground to Flink 1.10.0
---
 README.md | 2 +-
 docker/ops-playground-image/Dockerfile| 2 +-
 .../java/flink-playground-clickcountjob/pom.xml   | 4 ++--
 operations-playground/README.md   | 2 +-
 operations-playground/conf/flink-conf.yaml| 1 +
 operations-playground/docker-compose.yaml | 8 
 6 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index 9226d5e..cf39303 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ Currently, the following playgrounds are available:
 
 * The **Flink Operations Playground** (in the `operations-playground` folder) lets you explore and play with Flink's features to manage and operate stream processing jobs. You can witness how Flink recovers a job from a failure, upgrade and rescale a job, and query job metrics. The playground consists of a Flink cluster, a Kafka cluster and an example 
 Flink job. The playground is presented in detail in the
-["Getting Started" 
guide](https://ci.apache.org/projects/flink/flink-docs-release-1.9/getting-started/docker-playgrounds/flink-operations-playground.html)
 of Flink's documentation.
+["Getting Started" 
guide](https://ci.apache.org/projects/flink/flink-docs-release-1.10/getting-started/docker-playgrounds/flink-operations-playground.html)
 of Flink's documentation.
 
 * The interactive SQL playground is still under development and will be added 
shortly.
 
diff --git a/docker/ops-playground-image/Dockerfile 
b/docker/ops-playground-image/Dockerfile
index 59b40a0..bc62a5e 100644
--- a/docker/ops-playground-image/Dockerfile
+++ b/docker/ops-playground-image/Dockerfile
@@ -32,7 +32,7 @@ RUN mvn clean install
 # Build Operations Playground Image
 ###
 
-FROM flink:1.9.0-scala_2.11
+FROM flink:1.10.0-scala_2.11
 
 WORKDIR /opt/flink/bin
 
diff --git 
a/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml 
b/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
index 893c11e..bead849 100644
--- a/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
+++ b/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
@@ -22,7 +22,7 @@ under the License.
 
 	<groupId>org.apache.flink</groupId>
 	<artifactId>flink-playground-clickcountjob</artifactId>
-	<version>2-FLINK-1.9_2.11</version>
+	<version>1-FLINK-1.10_2.11</version>
 
 	<name>flink-playground-clickcountjob</name>
 	<packaging>jar</packaging>
@@ -44,7 +44,7 @@ under the License.
 
 	<properties>
 		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-		<flink.version>1.9.0</flink.version>
+		<flink.version>1.10.0</flink.version>
 		<java.version>1.8</java.version>
 		<scala.binary.version>2.11</scala.binary.version>
 		<maven.compiler.source>${java.version}</maven.compiler.source>
diff --git a/operations-playground/README.md b/operations-playground/README.md
index 58bd366..de02fb8 100644
--- a/operations-playground/README.md
+++ b/operations-playground/README.md
@@ -47,4 +47,4 @@ docker-compose down
 ## Further instructions
 
 The playground setup and more detailed instructions are presented in the
-["Getting Started" 
guide](https://ci.apache.org/projects/flink/flink-docs-master/getting-started/docker-playgrounds/flink-operations-playground.html)
 of Flink's documentation.
+["Getting Started" 
guide](https://ci.apache.org/projects/flink/flink-docs-release-1.10/getting-started/docker-playgrounds/flink-operations-playground.html)
 of Flink's documentation.
diff --git a/operations-playground/conf/flink-conf.yaml 
b/operations-playground/conf/flink-conf.yaml
index 5c8d0e6..bfa4384 100644
--- a/operations-playground/conf/flink-conf.yaml
+++ b/operations-playground/conf/flink-conf.yaml
@@ -20,6 +20,7 @@ jobmanager.rpc.address: jobmanager
 blob.server.port: 6124
 query.server.port: 6125
 
+taskmanager.memory.process.size: 1568m
 taskmanager.numberOfTaskSlots: 2
 
 state.backend: filesystem
diff --git a/operations-playground/docker-compose.yaml 
b/operations-playground/docker-compose.yaml
index 5a88b98..4b25f15 100644
--- a/operations-playground/docker-compose.yaml
+++ b/operations-playground/docker-compose.yaml
@@ -20,7 +20,7 @@ version: "2.1"
 services:
   client:
 build: ../docker/ops-playground-image
-image: apache/flink-ops-playground:2-FLINK-1.9-scala_2.11
+image: apache/flink-ops-playground:1-FLINK-1.10-scala_2.11
 command: "flink run -d -p 2 /opt/ClickCountJob.jar --boo

[flink-playgrounds] branch release-1.8 updated: [hotfix] Fixing minor issues with the English in the README for the operations-playground

2020-03-11 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new a552c62  [hotfix] Fixing minor issues with the English in the README 
for the operations-playground
a552c62 is described below

commit a552c624b43c3a7cf1f4b2749f60300ee416efdf
Author: David Anderson 
AuthorDate: Tue Mar 10 18:46:25 2020 +0100

[hotfix] Fixing minor issues with the English in the README for the 
operations-playground

This closes #8.
---
 operations-playground/README.md | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/operations-playground/README.md b/operations-playground/README.md
index fe147d2..acb20c6 100644
--- a/operations-playground/README.md
+++ b/operations-playground/README.md
@@ -1,16 +1,16 @@
 # Flink Operations Playground
 
-The Flink operations playground let's you explore and play with [Apache 
Flink](https://flink.apache.org)'s features to manage and operate stream 
processing jobs, including
+The Flink operations playground lets you explore and play with [Apache 
Flink](https://flink.apache.org)'s features to manage and operate stream 
processing jobs, including
 
 * Observing automatic failure recovery of an application
 * Upgrading and rescaling an application
 * Querying the runtime metrics of an application
 
-It is based on a [docker-compose](https://docs.docker.com/compose/) 
environment and super easy to setup.
+It's based on a [docker-compose](https://docs.docker.com/compose/) environment and is super easy to set up.
 
 ## Setup
 
-The operations playground requires a custom Docker image in addition to public 
images for Flink, Kafka, and ZooKeeper. 
+The operations playground requires a custom Docker image, as well as public 
images for Flink, Kafka, and ZooKeeper. 
 
 The `docker-compose.yaml` file of the operations playground is located in the 
`operations-playground` directory. Assuming you are at the root directory of 
the [`flink-playgrounds`](https://github.com/apache/flink-playgrounds) 
repository, change to the `operations-playground` folder by running
 
@@ -34,7 +34,7 @@ Once you built the Docker image, run the following command to 
start the playgrou
 docker-compose up -d
 ```
 
-You can check if the playground was successfully started, if you can access 
the WebUI of the Flink cluster at 
[http://localhost:8081](http://localhost:8081).
+You can check if the playground was successfully started by accessing the 
WebUI of the Flink cluster at [http://localhost:8081](http://localhost:8081).
 
 ### Stopping the Playground
 



[flink-playgrounds] branch release-1.9 updated: [hotfix] Fixing minor issues with the English in the README for the operations-playground

2020-03-11 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 835b3e3  [hotfix] Fixing minor issues with the English in the README 
for the operations-playground
835b3e3 is described below

commit 835b3e371694a41a7f9fb9fd2328fd8893699f92
Author: David Anderson 
AuthorDate: Tue Mar 10 18:46:25 2020 +0100

[hotfix] Fixing minor issues with the English in the README for the 
operations-playground

This closes #8.
---
 operations-playground/README.md | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/operations-playground/README.md b/operations-playground/README.md
index d17353c..5d64b76 100644
--- a/operations-playground/README.md
+++ b/operations-playground/README.md
@@ -1,16 +1,16 @@
 # Flink Operations Playground
 
-The Flink operations playground let's you explore and play with [Apache 
Flink](https://flink.apache.org)'s features to manage and operate stream 
processing jobs, including
+The Flink operations playground lets you explore and play with [Apache 
Flink](https://flink.apache.org)'s features to manage and operate stream 
processing jobs, including
 
 * Observing automatic failure recovery of an application
 * Upgrading and rescaling an application
 * Querying the runtime metrics of an application
 
-It is based on a [docker-compose](https://docs.docker.com/compose/) 
environment and super easy to setup.
+It's based on a [docker-compose](https://docs.docker.com/compose/) environment and is super easy to set up.
 
 ## Setup
 
-The operations playground requires a custom Docker image in addition to public 
images for Flink, Kafka, and ZooKeeper. 
+The operations playground requires a custom Docker image, as well as public 
images for Flink, Kafka, and ZooKeeper. 
 
 The `docker-compose.yaml` file of the operations playground is located in the 
`operations-playground` directory. Assuming you are at the root directory of 
the [`flink-playgrounds`](https://github.com/apache/flink-playgrounds) 
repository, change to the `operations-playground` folder by running
 
@@ -34,7 +34,7 @@ Once you built the Docker image, run the following command to 
start the playgrou
 docker-compose up -d
 ```
 
-You can check if the playground was successfully started, if you can access 
the WebUI of the Flink cluster at 
[http://localhost:8081](http://localhost:8081).
+You can check if the playground was successfully started by accessing the 
WebUI of the Flink cluster at [http://localhost:8081](http://localhost:8081).
 
 ### Stopping the Playground
 



[flink-playgrounds] branch master updated: [hotfix] Fixing minor issues with the English in the README for the operations-playground

2020-03-11 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/master by this push:
 new c1cb662  [hotfix] Fixing minor issues with the English in the README 
for the operations-playground
c1cb662 is described below

commit c1cb66235e5b842522365996f2812cf21502b644
Author: David Anderson 
AuthorDate: Tue Mar 10 18:46:25 2020 +0100

[hotfix] Fixing minor issues with the English in the README for the 
operations-playground

This closes #8.
---
 operations-playground/README.md | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/operations-playground/README.md b/operations-playground/README.md
index 9eb0387..58bd366 100644
--- a/operations-playground/README.md
+++ b/operations-playground/README.md
@@ -1,16 +1,16 @@
 # Flink Operations Playground
 
-The Flink operations playground let's you explore and play with [Apache 
Flink](https://flink.apache.org)'s features to manage and operate stream 
processing jobs, including
+The Flink operations playground lets you explore and play with [Apache 
Flink](https://flink.apache.org)'s features to manage and operate stream 
processing jobs, including
 
 * Observing automatic failure recovery of an application
 * Upgrading and rescaling an application
 * Querying the runtime metrics of an application
 
-It is based on a [docker-compose](https://docs.docker.com/compose/) 
environment and super easy to setup.
+It's based on a [docker-compose](https://docs.docker.com/compose/) environment and is super easy to set up.
 
 ## Setup
 
-The operations playground requires a custom Docker image in addition to public 
images for Flink, Kafka, and ZooKeeper. 
+The operations playground requires a custom Docker image, as well as public 
images for Flink, Kafka, and ZooKeeper. 
 
 The `docker-compose.yaml` file of the operations playground is located in the 
`operations-playground` directory. Assuming you are at the root directory of 
the [`flink-playgrounds`](https://github.com/apache/flink-playgrounds) 
repository, change to the `operations-playground` folder by running
 
@@ -34,7 +34,7 @@ Once you built the Docker image, run the following command to 
start the playgrou
 docker-compose up -d
 ```
 
-You can check if the playground was successfully started, if you can access 
the WebUI of the Flink cluster at 
[http://localhost:8081](http://localhost:8081).
+You can check if the playground was successfully started by accessing the 
WebUI of the Flink cluster at [http://localhost:8081](http://localhost:8081).
 
 ### Stopping the Playground
 



[flink-web] 01/02: Add blog post: Beam on Flink

2020-02-24 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit b6afc461972cb866539dc8d82f3d2ba83672b4d3
Author: Maximilian Michels 
AuthorDate: Fri Jan 31 12:44:32 2020 +0100

Add blog post: Beam on Flink

* Co-authored by MarkSfik <47176197+marks...@users.noreply.github.com>

This closes #298.
---
 ...22-apache-beam-how-beam-runs-on-top-of-flink.md | 163 +
 .../classic-flink-runner-beam.png  | Bin 0 -> 254000 bytes
 .../flink-runner-beam-beam-vision.png  | Bin 0 -> 314000 bytes
 ...nner-beam-language-portability-architecture.png | Bin 0 -> 852926 bytes
 .../flink-runner-beam-language-portability.png | Bin 0 -> 675989 bytes
 .../flink-runner-beam-runner-translation-paths.png | Bin 0 -> 77258 bytes
 .../flink-runner-beam-serializers-coders.png   | Bin 0 -> 107341 bytes
 7 files changed, 163 insertions(+)

diff --git a/_posts/2020-02-22-apache-beam-how-beam-runs-on-top-of-flink.md 
b/_posts/2020-02-22-apache-beam-how-beam-runs-on-top-of-flink.md
new file mode 100644
index 000..b04c116
--- /dev/null
+++ b/_posts/2020-02-22-apache-beam-how-beam-runs-on-top-of-flink.md
@@ -0,0 +1,163 @@
+---
+layout: post
+title: 'Apache Beam: How Beam Runs on Top of Flink'
+date: 2020-02-22T12:00:00.000Z
+category: ecosystem
+authors:
+- maximilian:
+  name: "Maximilian Michels"
+  twitter: "stadtlegende"
+- markos:
+  name: "Markos Sfikas"
+  twitter: "MarkSfik"
+excerpt: This blog post discusses the reasons to use Flink together with Beam 
for your stream processing needs and takes a closer look at how Flink works 
with Beam under the hood.
+
+---
+
+Note: This blog post is based on the talk ["Beam on Flink: How Does It 
Actually Work?"](https://www.youtube.com/watch?v=hxHGLrshnCY).
+
+[Apache Flink](https://flink.apache.org/) and [Apache 
Beam](https://beam.apache.org/) are open-source frameworks for parallel, 
distributed data processing at scale. Unlike Flink, Beam does not come with a 
full-blown execution engine of its own but plugs into other execution engines, 
such as Apache Flink, Apache Spark, or Google Cloud Dataflow. In this blog post 
we discuss the reasons to use Flink together with Beam for your batch and 
stream processing needs. We also take a closer look at [...]
+
+
+# What is Apache Beam
+
+[Apache Beam](https://beam.apache.org/) is an open-source, unified model for defining batch and streaming data-parallel processing pipelines. It is unified in the sense that you use a single API, in contrast to using separate APIs for batch and streaming as is the case in Flink. Beam was originally developed by Google, which released it in 2014 as the Cloud Dataflow SDK. In 2016, it was donated to [the Apache Software Foundation](https://www.apache.org/) under the name Beam. It ha [...]
+
+The execution model, as well as the API of Apache Beam, are similar to 
Flink's. Both frameworks are inspired by the 
[MapReduce](https://static.googleusercontent.com/media/research.google.com/en//archive/mapreduce-osdi04.pdf),
 
[MillWheel](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41378.pdf),
 and [Dataflow](https://research.google/pubs/pub43864/) papers. Like Flink, 
Beam is designed for parallel, distributed data processing. Both have similar 
transform [...]
+
+One of the most exciting developments in the Beam technology is the 
framework’s support for multiple programming languages including Java, Python, 
Go, Scala and SQL. Essentially, developers can write their applications in a 
programming language of their choice. Beam, with the help of the Runners, 
translates the program to one of the execution engines, as shown in the diagram 
below.
+
+[image: diagram of Beam translating programs to one of the execution engines]
+
+# Reasons to use Beam with Flink
+
+Why would you want to use Beam with Flink instead of directly using Flink? 
Ultimately, Beam and Flink complement each other and provide additional value 
to the user. The main reasons for using Beam with Flink are the following: 
+
+* Beam provides a unified API for both batch and streaming scenarios.
+* Beam comes with native support for different programming languages, like 
Python or Go with all their libraries like Numpy, Pandas, Tensorflow, or TFX.
+* You get the power of Apache Flink like its exactly-once semantics, strong 
memory management and robustness.
+* Beam programs run on your existing Flink infrastructure or infrastructure 
for other supported Runners, like Spark or Google Cloud Dataflow. 
+* You get additional features like side inputs and cross-language pipelines that are not supported natively in Flink but become available when using Beam with Flink. 
+
+
+# The Flink Runner in Beam
+
+The Flink Runner in Beam translates Beam pipelines into Flink jobs. The 
translation can be parameterized using B

[flink-web] branch asf-site updated (8140660 -> 438d5cf)

2020-02-24 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 8140660  Rebuild website
 new b6afc46  Add blog post: Beam on Flink
 new 438d5cf  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ...22-apache-beam-how-beam-runs-on-top-of-flink.md | 163 +
 content/blog/feed.xml  | 239 -
 content/blog/index.html|  40 ++-
 content/blog/page10/index.html |  45 +--
 content/blog/{page4 => page11}/index.html  | 164 ++---
 content/blog/page2/index.html  |  40 ++-
 content/blog/page3/index.html  |  40 ++-
 content/blog/page4/index.html  |  42 ++-
 content/blog/page5/index.html  |  45 ++-
 content/blog/page6/index.html  |  43 ++-
 content/blog/page7/index.html  |  40 ++-
 content/blog/page8/index.html  |  42 ++-
 content/blog/page9/index.html  |  40 ++-
 .../apache-beam-how-beam-runs-on-top-of-flink.html | 388 +
 .../classic-flink-runner-beam.png  | Bin 0 -> 254000 bytes
 .../flink-runner-beam-beam-vision.png  | Bin 0 -> 314000 bytes
 ...nner-beam-language-portability-architecture.png | Bin 0 -> 852926 bytes
 .../flink-runner-beam-language-portability.png | Bin 0 -> 675989 bytes
 .../flink-runner-beam-runner-translation-paths.png | Bin 0 -> 77258 bytes
 .../flink-runner-beam-serializers-coders.png   | Bin 0 -> 107341 bytes
 content/index.html |   6 +-
 content/zh/index.html  |   6 +-
 .../classic-flink-runner-beam.png  | Bin 0 -> 254000 bytes
 .../flink-runner-beam-beam-vision.png  | Bin 0 -> 314000 bytes
 ...nner-beam-language-portability-architecture.png | Bin 0 -> 852926 bytes
 .../flink-runner-beam-language-portability.png | Bin 0 -> 675989 bytes
 .../flink-runner-beam-runner-translation-paths.png | Bin 0 -> 77258 bytes
 .../flink-runner-beam-serializers-coders.png   | Bin 0 -> 107341 bytes
 28 files changed, 989 insertions(+), 394 deletions(-)
 create mode 100644 
_posts/2020-02-22-apache-beam-how-beam-runs-on-top-of-flink.md
 copy content/blog/{page4 => page11}/index.html (84%)
 create mode 100644 
content/ecosystem/2020/02/22/apache-beam-how-beam-runs-on-top-of-flink.html
 create mode 100644 
content/img/blog/2020-02-22-beam-on-flink/classic-flink-runner-beam.png
 create mode 100644 
content/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-beam-vision.png
 create mode 100644 
content/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-language-portability-architecture.png
 create mode 100644 
content/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-language-portability.png
 create mode 100644 
content/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-runner-translation-paths.png
 create mode 100644 
content/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-serializers-coders.png
 create mode 100644 
img/blog/2020-02-22-beam-on-flink/classic-flink-runner-beam.png
 create mode 100644 
img/blog/2020-02-22-beam-on-flink/flink-runner-beam-beam-vision.png
 create mode 100644 
img/blog/2020-02-22-beam-on-flink/flink-runner-beam-language-portability-architecture.png
 create mode 100644 
img/blog/2020-02-22-beam-on-flink/flink-runner-beam-language-portability.png
 create mode 100644 
img/blog/2020-02-22-beam-on-flink/flink-runner-beam-runner-translation-paths.png
 create mode 100644 
img/blog/2020-02-22-beam-on-flink/flink-runner-beam-serializers-coders.png



[flink-web] 02/02: Rebuild website

2020-02-24 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 438d5cf459a73530f8995fa18f2c8792d85d9236
Author: Fabian Hueske 
AuthorDate: Mon Feb 24 12:27:06 2020 +0100

Rebuild website
---
 content/blog/feed.xml  | 239 -
 content/blog/index.html|  40 ++-
 content/blog/page10/index.html |  45 +--
 content/blog/{page4 => page11}/index.html  | 164 ++---
 content/blog/page2/index.html  |  40 ++-
 content/blog/page3/index.html  |  40 ++-
 content/blog/page4/index.html  |  42 ++-
 content/blog/page5/index.html  |  45 ++-
 content/blog/page6/index.html  |  43 ++-
 content/blog/page7/index.html  |  40 ++-
 content/blog/page8/index.html  |  42 ++-
 content/blog/page9/index.html  |  40 ++-
 .../apache-beam-how-beam-runs-on-top-of-flink.html | 388 +
 .../classic-flink-runner-beam.png  | Bin 0 -> 254000 bytes
 .../flink-runner-beam-beam-vision.png  | Bin 0 -> 314000 bytes
 ...nner-beam-language-portability-architecture.png | Bin 0 -> 852926 bytes
 .../flink-runner-beam-language-portability.png | Bin 0 -> 675989 bytes
 .../flink-runner-beam-runner-translation-paths.png | Bin 0 -> 77258 bytes
 .../flink-runner-beam-serializers-coders.png   | Bin 0 -> 107341 bytes
 content/index.html |   6 +-
 content/zh/index.html  |   6 +-
 21 files changed, 826 insertions(+), 394 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 20b395c..2ae389a 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,162 @@
 <atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 
+<title>Apache Beam: How Beam Runs on Top of Flink</title>
+<p>Note: This blog post is based on the talk <a href="https://www.youtube.com/watch?v=hxHGLrshnCY">“Beam on Flink: How Does It Actually Work?”</a>.</p>
+
+<p><a href="https://flink.apache.org/">Apache Flink</a> and <a href="https://beam.apache.org/">Apache Beam</a> are open-source frameworks for parallel, distributed data processing at scale. Unlike Flink, Beam does not come with a full-blown execution engine of its own but plugs into other execution engines, such as Apache Flink, Apache Spark, or Google Cloud Dataflow. In this blog post we discuss the reasons to use Flink together with Bea [...]
+
+<h1 id="what-is-apache-beam">What is Apache Beam</h1>
+
+<p><a href="https://beam.apache.org/">Apache Beam</a> is an open-source, unified model for defining batch and streaming data-parallel processing pipelines. It is unified in the sense that you use a single API, in contrast to using a separate API for batch and streaming as is the case in Flink. Beam was originally developed by Google, which released it in 2014 as the Cloud Dataflow SDK. In 2016, it was donated to <a href="https://www.apache.org/ [...]
+
+<p>The execution model, as well as the API of Apache Beam, are similar to Flink’s. Both frameworks are inspired by the <a href="https://static.googleusercontent.com/media/research.google.com/en//archive/mapreduce-osdi04.pdf">MapReduce</a>, <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41378.pdf">MillWheel</a>, and <a href="https://research.google/pubs/pub43864/">Dataflow</a> [...]
+
+<p>One of the most exciting developments in the Beam technology is the framework’s support for multiple programming languages including Java, Python, Go, Scala and SQL. Essentially, developers can write their applications in a programming language of their choice. Beam, with the help of the Runners, translates the program to one of the execution engines, as shown in the diagram below.</p>
+
+<center>
+<img src="/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-beam-vision.png" width="600px" alt="The vision of Apache Beam" />
+</center>
+
+<h1 id="reasons-to-use-beam-with-flink">Reasons to use Beam with Flink</h1>
+
+<p>Why would you want to use Beam with Flink instead of directly using Flink? Ultimately, Beam and Flink complement each other and provide additional value to the user. The main reasons for using Beam with Flink are the following:</p>
+
+<ul>
+  <li>Beam provides a unified API for both batch and streaming scenarios.</li>
+  <li>Beam comes with native support for different programming languages, like Python or Go with all their libraries like Numpy, Pandas, Tensorflow, or TFX.</li>
+  <li>You get the power of Apache Flink like its exactly-once semantics, strong memory management and robustness.</li>
+  <li>Beam programs run on your existing Flink infrastructure or infrastructure for other supported Runners, like Spark or Google Cloud D

[flink-web] 02/02: Rebuild website

2020-02-20 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 81406600a87c13c6e9d21cc64f069a25929ba01c
Author: Fabian Hueske 
AuthorDate: Thu Feb 20 17:03:08 2020 +0100

Rebuild website
---
 content/blog/feed.xml| 125 ++
 content/blog/index.html  |  36 ++--
 content/blog/page10/index.html   |  28 +++
 content/blog/page2/index.html|  36 ++--
 content/blog/page3/index.html|  38 +++--
 content/blog/page4/index.html|  40 +++--
 content/blog/page5/index.html|  40 +++--
 content/blog/page6/index.html|  40 +++--
 content/blog/page7/index.html|  40 +++--
 content/blog/page8/index.html|  39 +++--
 content/blog/page9/index.html|  42 +++--
 content/index.html   |   6 +-
 content/news/2020/02/20/ddl.html | 358 +++
 content/zh/index.html|   6 +-
 14 files changed, 735 insertions(+), 139 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 838a6ce..20b395c 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,131 @@
 <atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 
+<title>No Java Required: Configuring Sources and Sinks in SQL</title>
+<h1 id="introduction">Introduction</h1>
+
+<p>The recent <a href="https://flink.apache.org/news/2020/02/11/release-1.10.0.html">Apache Flink 1.10 release</a> includes many exciting features.
+In particular, it marks the end of the community’s year-long effort to merge in the <a href="https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html">Blink SQL contribution</a> from Alibaba.
+The reason the community chose to spend so much time on the contribution is that SQL works.
+It allows Flink to offer a truly unified interface over batch and streaming and makes stream processing accessible to a broad audience of developers and analysts.
+Best of all, Flink SQL is ANSI-SQL compliant, which means if you’ve ever used a database in the past, you already know it<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>!</p>
+
+<p>A lot of work focused on improving runtime performance and progressively extending its coverage of the SQL standard.
+Flink now supports the full TPC-DS query set for batch queries, reflecting the readiness of its SQL engine to address the needs of modern data warehouse-like workloads.
+Its streaming SQL supports an almost equal set of features - those that are well defined on a streaming runtime - including <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/streaming/joins.html">complex joins</a> and <a href="https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/match_recognize.html">MATCH_RECOGNIZE</a>.</p>
+
+<p>As important as this work is, the community also strives to make these features generally accessible to the broadest audience possible.
+That is why the Flink community is excited in 1.10 to offer production-ready DDL syntax (e.g., <code>CREATE TABLE</code>, <code>DROP TABLE</code>) and a refactored catalog interface.</p>
+
+<h1 id="accessing-your-data-where-it-lives">Accessing Your Data Where It Lives</h1>
+
+<p>Flink does not store data at rest; it is a compute engine and requires other systems to consume input from and write its output.
+Those that have used Flink’s <code>DataStream</code> API in the past will be familiar with connectors that allow for interacting with external systems.
+Flink has a vast connector ecosystem that includes all major message queues, filesystems, and databases.</p>
+
+<div class="alert alert-info">
+If your favorite system does not have a connector maintained in the central Apache Flink repository, check out the <a href="https://flink-packages.org">flink packages website</a>, which has a growing number of community-maintained components.
+</div>
+
+<p>While these connectors are battle-tested and production-ready, they are written in Java and configured in code, which means they are not amenable to pure SQL or Table applications.
+For a holistic SQL experience, not only do queries need to be written in SQL, but also table definitions.</p>
+
+<h1 id="create-table-statements">CREATE TABLE Statements</h1>
+
+<p>While Flink SQL has long provided table abstractions atop some of Flink’s most popular connectors, configurations were not always so straightforward.
+Beginning in 1.10, Flink supports defining tables through <code>CREATE TABLE</code> statements.
+With this feature, users can now create logical tables, backed by various external systems, in pure SQL.</p>
+
+<p>By defining tables in SQL, developers can write queries against logical schemas that are abstracted away from the underlying physical data store. Coupled with Flink SQL’s unified approach to batch and stream processing, Flink provides a straight line from discovery to production.</p>
+
+<p>Users can define tables over static data sets

[flink-web] 01/02: Add blog post on SQL DDL support.

2020-02-20 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 4a2f7e84415b18ebf0812457619596ea286be2bf
Author: Seth Wiesman 
AuthorDate: Fri Jan 31 15:13:03 2020 -0600

Add blog post on SQL DDL support.

This closes #299.
---
 _posts/2020-02-20-ddl.md | 127 +++
 1 file changed, 127 insertions(+)

diff --git a/_posts/2020-02-20-ddl.md b/_posts/2020-02-20-ddl.md
new file mode 100644
index 000..8a9ce1f
--- /dev/null
+++ b/_posts/2020-02-20-ddl.md
@@ -0,0 +1,127 @@
+---
+layout: post
+title: "No Java Required: Configuring Sources and Sinks in SQL"
+date:  2020-02-20 12:00:00
+categories: news
+authors:
+- seth:
+  name: "Seth Wiesman"
+  twitter: "sjwiesman"
+
+
+excerpt: This post discusses the efforts of the Flink community as they relate 
to end to end applications with SQL in Apache Flink.
+---
+
+# Introduction
+
+The recent [Apache Flink 1.10 
release](https://flink.apache.org/news/2020/02/11/release-1.10.0.html) includes 
many exciting features.
+In particular, it marks the end of the community's year-long effort to merge 
in the [Blink SQL 
contribution](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html)
 from Alibaba.
+The reason the community chose to spend so much time on the contribution is 
that SQL works.
+It allows Flink to offer a truly unified interface over batch and streaming 
and makes stream processing accessible to a broad audience of developers and 
analysts.
+Best of all, Flink SQL is ANSI-SQL compliant, which means if you've ever used 
a database in the past, you already know it[^1]!
+
+A lot of work focused on improving runtime performance and progressively 
extending its coverage of the SQL standard.
+Flink now supports the full TPC-DS query set for batch queries, reflecting the 
readiness of its SQL engine to address the needs of modern data warehouse-like 
workloads.
+Its streaming SQL supports an almost equal set of features - those that are 
well defined on a streaming runtime - including [complex 
joins](https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/streaming/joins.html)
 and 
[MATCH_RECOGNIZE](https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/match_recognize.html).
+
+As important as this work is, the community also strives to make these 
features generally accessible to the broadest audience possible.
+That is why the Flink community is excited in 1.10 to offer production-ready 
DDL syntax (e.g., `CREATE TABLE`, `DROP TABLE`) and a refactored catalog 
interface.
+
+# Accessing Your Data Where It Lives
+
+Flink does not store data at rest; it is a compute engine and requires other 
systems to consume input from and write its output.
+Those that have used Flink's `DataStream` API in the past will be familiar 
with connectors that allow for interacting with external systems. 
+Flink has a vast connector ecosystem that includes all major message queues, 
filesystems, and databases.
+
+<div class="alert alert-info">
+If your favorite system does not have a connector maintained in the central Apache Flink repository, check out the <a href="https://flink-packages.org">flink packages website</a>, which has a growing number of community-maintained components.
+</div>
+
+While these connectors are battle-tested and production-ready, they are 
written in Java and configured in code, which means they are not amenable to 
pure SQL or Table applications.
+For a holistic SQL experience, not only do queries need to be written in SQL, but also table definitions.
+
+# CREATE TABLE Statements
+
+While Flink SQL has long provided table abstractions atop some of Flink's most 
popular connectors, configurations were not always so straightforward.
+Beginning in 1.10, Flink supports defining tables through `CREATE TABLE` 
statements.
+With this feature, users can now create logical tables, backed by various 
external systems, in pure SQL. 
+
+By defining tables in SQL, developers can write queries against logical 
schemas that are abstracted away from the underlying physical data store. 
Coupled with Flink SQL's unified approach to batch and stream processing, Flink 
provides a straight line from discovery to production.
+
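+To make the DDL concrete, here is a minimal, hedged sketch (not from the original post; the table name and the abbreviated connector options are illustrative) of running such a statement through the Flink 1.10 Table API:
+
+```java
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+
+public class DdlSketch {
+  public static void main(String[] args) {
+    EnvironmentSettings settings = EnvironmentSettings.newInstance()
+        .useBlinkPlanner().inStreamingMode().build();
+    TableEnvironment tEnv = TableEnvironment.create(settings);
+
+    // A hypothetical Kafka-backed table; real WITH clauses need more
+    // connector and format properties than shown here.
+    tEnv.sqlUpdate(
+        "CREATE TABLE orders (" +
+        "  order_id BIGINT," +
+        "  amount DOUBLE" +
+        ") WITH (" +
+        "  'connector.type' = 'kafka'," +
+        "  'connector.topic' = 'orders'" +
+        ")");
+  }
+}
+```
+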
+Users can define tables over static data sets, anything from a local CSV file 
to a full-fledged data lake or even Hive.
+Leveraging Flink's efficient batch processing capabilities, they can perform 
ad-hoc queries searching for exciting insights.
+Once something interesting is identified, businesses can gain real-time and 
continuous insights by merely altering the table so that it is powered by a 
message queue such as Kafka.
+Because Flink guarantees SQL queries have unified semantics over batch and 
streaming, users can be confident that redeploying this query as a continuous 
streaming application over a message queue will output ident

[flink-web] branch asf-site updated (39b5126 -> 8140660)

2020-02-20 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 39b5126  Rebuild website
 new 4a2f7e8  Add blog post on SQL DDL support.
 new 8140660  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _posts/2020-02-20-ddl.md | 127 ++
 content/blog/feed.xml| 125 ++
 content/blog/index.html  |  36 ++--
 content/blog/page10/index.html   |  28 +++
 content/blog/page2/index.html|  36 ++--
 content/blog/page3/index.html|  38 +++--
 content/blog/page4/index.html|  40 +++--
 content/blog/page5/index.html|  40 +++--
 content/blog/page6/index.html|  40 +++--
 content/blog/page7/index.html|  40 +++--
 content/blog/page8/index.html|  39 +++--
 content/blog/page9/index.html|  42 +++--
 content/index.html   |   6 +-
 content/news/2020/02/20/ddl.html | 358 +++
 content/zh/index.html|   6 +-
 15 files changed, 862 insertions(+), 139 deletions(-)
 create mode 100644 _posts/2020-02-20-ddl.md
 create mode 100644 content/news/2020/02/20/ddl.html



[flink] branch release-1.10 updated: [hotfix][docs] Minor improvements of glossary.

2020-02-19 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.10 by this push:
 new 4852707  [hotfix][docs] Minor improvements of glossary.
4852707 is described below

commit 4852707d9bdad5e218a722e947ce0e4d8171c535
Author: Alexander Fedulov <1492164+afedu...@users.noreply.github.com>
AuthorDate: Mon Sep 16 19:07:44 2019 +0200

[hotfix][docs] Minor improvements of glossary.

This closes #9694.
---
 docs/concepts/glossary.md | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/concepts/glossary.md b/docs/concepts/glossary.md
index 22f2c99..e670fee 100644
--- a/docs/concepts/glossary.md
+++ b/docs/concepts/glossary.md
@@ -79,8 +79,8 @@ whole [Flink Master](#flink-master) was called JobManager.
  Logical Graph
 
 A logical graph is a directed graph describing the high-level logic of a 
stream processing program.
-The nodes are [Operators](#operator) and the edges indicate 
input/output-relationships or
-data streams or data sets.
+The nodes are [Operators](#operator) and the edges indicate 
input/output-relationships of the 
+operators and correspond to data streams or data sets.
 
  Managed State
 
@@ -161,6 +161,6 @@ subsequent Tasks.
 A Transformation is applied on one or more data streams or data sets and 
results in one or more
 output data streams or data sets. A transformation might change a data stream 
or data set on a
 per-record basis, but might also only change its partitioning or perform an 
aggregation. While
-[Operators](#operator) and [Functions](#function)) are the "physical" parts of 
Flink's API,
-Transformations are only an API concept. Specifically, most - but not all - 
transformations are
+[Operators](#operator) and [Functions](#function) are the "physical" parts of 
Flink's API,
+Transformations are only an API concept. Specifically, most transformations are
 implemented by certain [Operators](#operator).



[flink] branch master updated: [hotfix][docs] Minor improvements of glossary.

2020-02-19 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new c4db705  [hotfix][docs] Minor improvements of glossary.
c4db705 is described below

commit c4db7052c78d6b8204170e17a80a2416fa760523
Author: Alexander Fedulov <1492164+afedu...@users.noreply.github.com>
AuthorDate: Mon Sep 16 19:07:44 2019 +0200

[hotfix][docs] Minor improvements of glossary.

This closes #9694.
---
 docs/concepts/glossary.md | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/concepts/glossary.md b/docs/concepts/glossary.md
index 22f2c99..e670fee 100644
--- a/docs/concepts/glossary.md
+++ b/docs/concepts/glossary.md
@@ -79,8 +79,8 @@ whole [Flink Master](#flink-master) was called JobManager.
  Logical Graph
 
 A logical graph is a directed graph describing the high-level logic of a 
stream processing program.
-The nodes are [Operators](#operator) and the edges indicate 
input/output-relationships or
-data streams or data sets.
+The nodes are [Operators](#operator) and the edges indicate 
input/output-relationships of the 
+operators and correspond to data streams or data sets.
 
  Managed State
 
@@ -161,6 +161,6 @@ subsequent Tasks.
 A Transformation is applied on one or more data streams or data sets and 
results in one or more
 output data streams or data sets. A transformation might change a data stream 
or data set on a
 per-record basis, but might also only change its partitioning or perform an 
aggregation. While
-[Operators](#operator) and [Functions](#function)) are the "physical" parts of 
Flink's API,
-Transformations are only an API concept. Specifically, most - but not all - 
transformations are
+[Operators](#operator) and [Functions](#function) are the "physical" parts of 
Flink's API,
+Transformations are only an API concept. Specifically, most transformations are
 implemented by certain [Operators](#operator).
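
As a hedged aside (illustrative only, not part of the commit above), the glossary's distinction shows up directly in the DataStream API:

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TransformationVsOperator {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // map(...) declares a *transformation*; at runtime Flink implements it
    // with a StreamMap *operator* wrapping the MapFunction.
    DataStream<String> greetings = env
        .fromElements("world", "flink")
        .map(in -> "hello " + in)
        .returns(Types.STRING)  // lambdas erase type information, so declare it
        .keyBy(v -> v);         // keyBy only changes the partitioning
    greetings.print();

    env.execute("transformation-vs-operator");
  }
}
```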



[flink-web] branch asf-site updated (d054934 -> 182067e)

2020-02-14 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from d054934  Rebuild website
 new 72229dc  [hotfix] Correcting expired links in the Powered By page.
 new 182067e  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 content/poweredby.html| 4 ++--
 content/zh/poweredby.html | 4 ++--
 poweredby.md  | 4 ++--
 poweredby.zh.md   | 4 ++--
 4 files changed, 8 insertions(+), 8 deletions(-)



[flink-web] 02/02: Rebuild website

2020-02-14 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 182067e542d627d86b0cc9c155896d07f6511ac2
Author: Fabian Hueske 
AuthorDate: Fri Feb 14 10:08:43 2020 +0100

Rebuild website
---
 content/poweredby.html| 4 ++--
 content/zh/poweredby.html | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/content/poweredby.html b/content/poweredby.html
index 6fe5e9c..b773b9a 100644
--- a/content/poweredby.html
+++ b/content/poweredby.html
@@ -250,7 +250,7 @@
   
   
 
-  King, the creators of Candy Crush Saga, uses Flink to provide data science teams a real-time analytics dashboard. <a href="https://techblog.king.com/rbea-scalable-real-time-analytics-king/" target="_blank"> Read about King's Flink implementation</a>
+  King, the creators of Candy Crush Saga, uses Flink to provide data science teams a real-time analytics dashboard. <a href="https://www.youtube.com/watch?v=17tUR4TsvpM" target="_blank"> Learn more about King's Flink implementation</a>
   
   
 
@@ -303,7 +303,7 @@
   
   
 
-  Telefónica NEXT's TÜV-certified Data Anonymization Platform is powered by Flink. <a href="https://next.telefonica.de/en/solutions/big-data-privacy-services" target="_blank"> Read more about Telefónica NEXT</a>
+  Telefónica NEXT's TÜV-certified Data Anonymization Platform is powered by Flink. <a href="https://2016.flink-forward.org/index.html%3Fp=592.html" target="_blank"> Read more about Telefónica NEXT</a>
   
   
 
diff --git a/content/zh/poweredby.html b/content/zh/poweredby.html
index 0928ec0..1875455 100644
--- a/content/zh/poweredby.html
+++ b/content/zh/poweredby.html
@@ -248,7 +248,7 @@
   
   
 
-  King,Candy Crush Saga的创建者,使用 Flink 为数据科学团队提供实时分析仪表板。<a href="https://techblog.king.com/rbea-scalable-real-time-analytics-king/" target="_blank"> 阅读 King 的 Flink 实现</a>
+  King,Candy Crush Saga的创建者,使用 Flink 为数据科学团队提供实时分析仪表板。<a href="https://www.youtube.com/watch?v=17tUR4TsvpM" target="_blank"> 阅读 King 的 Flink 实现</a>
   
   
 
@@ -302,7 +302,7 @@
   
   
 
-  Telefónica NEXT 的 TÜV 认证数据匿名平台由 Flink 提供支持。<a href="https://next.telefonica.de/en/solutions/big-data-privacy-services" target="_blank"> 了解更多关于 Telefónica NEXT 的信息</a>
+  Telefónica NEXT 的 TÜV 认证数据匿名平台由 Flink 提供支持。<a href="https://2016.flink-forward.org/index.html%3Fp=592.html" target="_blank"> 了解更多关于 Telefónica NEXT 的信息</a>
   
   
 



[flink-web] 01/02: [hotfix] Correcting expired links in the Powered By page.

2020-02-14 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 72229dc1bdc0159c484da52384ed02e5c562eea4
Author: Marta Paes Moreira 
AuthorDate: Thu Feb 13 16:29:38 2020 +0100

[hotfix] Correcting expired links in the Powered By page.

This closes #306.
---
 poweredby.md| 4 ++--
 poweredby.zh.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/poweredby.md b/poweredby.md
index b2680b2..0da77b3 100644
--- a/poweredby.md
+++ b/poweredby.md
@@ -70,7 +70,7 @@ If you would you like to be included on this page, please 
reach out to the [Flin
   
   
 
-  King, the creators of Candy Crush Saga, uses Flink to provide data science teams a real-time analytics dashboard. <a href="https://techblog.king.com/rbea-scalable-real-time-analytics-king/" target='_blank'> Read about King's Flink implementation</a>
+  King, the creators of Candy Crush Saga, uses Flink to provide data science teams a real-time analytics dashboard. <a href="https://www.youtube.com/watch?v=17tUR4TsvpM" target='_blank'> Learn more about King's Flink implementation</a>
   
   
 
@@ -123,7 +123,7 @@ If you would you like to be included on this page, please 
reach out to the [Flin
   
   
 
-  Telefónica NEXT's TÜV-certified Data Anonymization Platform is powered by Flink. <a href="https://next.telefonica.de/en/solutions/big-data-privacy-services" target='_blank'> Read more about Telefónica NEXT</a>
+  Telefónica NEXT's TÜV-certified Data Anonymization Platform is powered by Flink. <a href="https://2016.flink-forward.org/index.html%3Fp=592.html" target='_blank'> Read more about Telefónica NEXT</a>
   
   
 
diff --git a/poweredby.zh.md b/poweredby.zh.md
index cf7598f..eb69aa7 100644
--- a/poweredby.zh.md
+++ b/poweredby.zh.md
@@ -70,7 +70,7 @@ Apache Flink 为全球许多公司和企业的关键业务提供支持。在这
   
   
 
-  King,Candy Crush Saga的创建者,使用 Flink 为数据科学团队提供实时分析仪表板。<a href="https://techblog.king.com/rbea-scalable-real-time-analytics-king/" target='_blank'> 阅读 King 的 Flink 实现</a>
+  King,Candy Crush Saga的创建者,使用 Flink 为数据科学团队提供实时分析仪表板。<a href="https://www.youtube.com/watch?v=17tUR4TsvpM" target='_blank'> 阅读 King 的 Flink 实现</a>
   
   
 
@@ -124,7 +124,7 @@ Apache Flink 为全球许多公司和企业的关键业务提供支持。在这
   
   
 
-  Telefónica NEXT 的 TÜV 认证数据匿名平台由 Flink 提供支持。<a href="https://next.telefonica.de/en/solutions/big-data-privacy-services" target='_blank'> 了解更多关于 Telefónica NEXT 的信息</a>
+  Telefónica NEXT 的 TÜV 认证数据匿名平台由 Flink 提供支持。<a href="https://2016.flink-forward.org/index.html%3Fp=592.html" target='_blank'> 了解更多关于 Telefónica NEXT 的信息</a>
   
   
 



[flink-web] 02/02: Rebuild website

2020-02-07 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 51eb92cdc2c03ce13194b7ddc11458cc336f8edd
Author: Fabian Hueske 
AuthorDate: Fri Feb 7 14:40:34 2020 +0100

Rebuild website
---
 content/blog/feed.xml  | 231 ++
 content/blog/index.html|  36 +-
 content/blog/page10/index.html |  25 ++
 content/blog/page2/index.html  |  42 +-
 content/blog/page3/index.html  |  46 +-
 content/blog/page4/index.html  |  42 +-
 content/blog/page5/index.html  |  40 +-
 content/blog/page6/index.html  |  40 +-
 content/blog/page7/index.html  |  40 +-
 content/blog/page8/index.html  |  40 +-
 content/blog/page9/index.html  |  40 +-
 content/index.html |   6 +-
 .../a-guide-for-unit-testing-in-apache-flink.html  | 464 +
 content/zh/index.html  |   6 +-
 14 files changed, 953 insertions(+), 145 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 1e3cf0d..284f51d 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,237 @@
 <atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 
+<title>A Guide for Unit Testing in Apache Flink</title>
+<p>Writing unit tests is one of the essential tasks of designing a production-grade application. Without tests, a single change in code can result in cascades of failure in production. Thus unit tests should be written for all types of applications, be it a simple job cleaning data and training a model or a complex multi-tenant, real-time data processing system. In the following sections, we provide a guide for unit testing of Apache Flink applications.
+Apache Flink provides a robust unit testing framework to make sure your applications behave in production as expected during development. You need to include the following dependencies to utilize the provided framework.</p>
+
+<div class="highlight"><pre><code class="language-xml">
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-test-utils_${scala.binary.version}</artifactId>
+  <version>${flink.version}</version>
+  <scope>test</scope>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-runtime_2.11</artifactId>
+  <version>1.9.0</version>
+  <scope>test</scope>
+  <classifier>tests</classifier>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java_2.11</artifactId>
+  <version>1.9.0</version>
+  <scope>test</scope>
+  <classifier>tests</classifier>
+</dependency>
+</code></pre></div>
+
+<p>The strategy of writing unit tests differs for various operators. You can break down the strategy into the following three buckets:</p>
+
+<ul>
+  <li>Stateless Operators</li>
+  <li>Stateful Operators</li>
+  <li>Timed Process Operators</li>
+</ul>
+
+<h1 id="stateless-operators">Stateless Operators</h1>
+
+<p>Writing unit tests for a stateless operator is a breeze. You need to follow the basic norm of writing a test case, i.e., create an instance of the function class and test the appropriate methods. Let’s take an example of a simple <code>Map</code> operator.</p>
+
+<div class="highlight"><pre><code class="language-java">
+public class MyStatelessMap implements MapFunction<String, String> {
+  @Override
+  public String map(String in) throws Exception {
+    String out = "hello " + in;
+    return out

[flink-web] branch asf-site updated (b511eda -> 51eb92c)

2020-02-07 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from b511eda  re-render website
 new e7f8f3f  Add blog post: "Unit Testing of Apache Flink Applications".
 new 51eb92c  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ...-03-a-guide-for-unit-testing-in-apache-flink.md | 253 +++
 content/blog/feed.xml  | 231 ++
 content/blog/index.html|  36 +-
 content/blog/page10/index.html |  25 ++
 content/blog/page2/index.html  |  42 +-
 content/blog/page3/index.html  |  46 +-
 content/blog/page4/index.html  |  42 +-
 content/blog/page5/index.html  |  40 +-
 content/blog/page6/index.html  |  40 +-
 content/blog/page7/index.html  |  40 +-
 content/blog/page8/index.html  |  40 +-
 content/blog/page9/index.html  |  40 +-
 content/index.html |   6 +-
 .../a-guide-for-unit-testing-in-apache-flink.html  | 464 +
 content/zh/index.html  |   6 +-
 15 files changed, 1206 insertions(+), 145 deletions(-)
 create mode 100644 
_posts/2020-02-03-a-guide-for-unit-testing-in-apache-flink.md
 create mode 100644 
content/news/2020/02/07/a-guide-for-unit-testing-in-apache-flink.html



[flink-web] 01/02: Add blog post: "Unit Testing of Apache Flink Applications".

2020-02-07 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit e7f8f3f8f1916c6728afc744fda74c01dfef6c80
Author: Kartik Khare 
AuthorDate: Tue Feb 4 01:42:10 2020 +0530

Add blog post: "Unit Testing of Apache Flink Applications".

This closes #300.
---
 ...-03-a-guide-for-unit-testing-in-apache-flink.md | 253 +
 1 file changed, 253 insertions(+)

diff --git a/_posts/2020-02-03-a-guide-for-unit-testing-in-apache-flink.md 
b/_posts/2020-02-03-a-guide-for-unit-testing-in-apache-flink.md
new file mode 100644
index 000..ead2a49
--- /dev/null
+++ b/_posts/2020-02-03-a-guide-for-unit-testing-in-apache-flink.md
@@ -0,0 +1,253 @@
+---
+layout: post
+title: "A Guide for Unit Testing in Apache Flink"
+date:  2020-02-07 12:00:00
+categories: news
+authors:
+- kartik:
+  name: "Kartik Khare"
+  twitter: "khare_khote"
+
+
+excerpt: This post provides a detailed guide for unit testing of Apache Flink 
applications.
+ 
+---
+
+Writing unit tests is one of the essential tasks of designing a 
production-grade application. Without tests, a single change in code can result 
in cascades of failure in production. Thus unit tests should be written for all 
types of applications, be it a simple job cleaning data and training a model or 
a complex multi-tenant, real-time data processing system. In the following 
sections, we provide a guide for unit testing of Apache Flink applications. 
+Apache Flink provides a robust unit testing framework to make sure your 
applications behave in production as expected during development. You need to 
include the following dependencies to utilize the provided framework.
+
+```xml
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-test-utils_${scala.binary.version}</artifactId>
+  <version>${flink.version}</version>
+  <scope>test</scope>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-runtime_2.11</artifactId>
+  <version>1.9.0</version>
+  <scope>test</scope>
+  <classifier>tests</classifier>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java_2.11</artifactId>
+  <version>1.9.0</version>
+  <scope>test</scope>
+  <classifier>tests</classifier>
+</dependency>
+```
+
+The strategy of writing unit tests differs for various operators. You can 
break down the strategy into the following three buckets: 
+
+* Stateless Operators
+* Stateful Operators
+* Timed Process Operators
+
+
+# Stateless Operators
+
+Writing unit tests for a stateless operator is a breeze. You need to follow 
the basic norm of writing a test case, i.e., create an instance of the function 
class and test the appropriate methods. Let’s take an example of a simple `Map` 
operator.
+
+```java
+public class MyStatelessMap implements MapFunction<String, String> {
+  @Override
+  public String map(String in) throws Exception {
+    String out = "hello " + in;
+    return out;
+  }
+}
+```
+
+The test case for the above operator should look like the following:
+
+```java
+@Test
+public void testMap() throws Exception {
+  MyStatelessMap statelessMap = new MyStatelessMap();
+  String out = statelessMap.map("world");
+  Assert.assertEquals("hello world", out);
+}
+```
+
+Pretty simple, right? Let’s take a look at one for the `FlatMap` operator.
+
+```java
+public class MyStatelessFlatMap implements FlatMapFunction<String, String> {
+  @Override
+  public void flatMap(String in, Collector<String> collector) throws Exception {
+    String out = "hello " + in;
+    collector.collect(out);
+  }
+}
+```
+
+`FlatMap` operators require a `Collector` object along with the input. For the 
test case, we have two options: 
+
+1. Mock the `Collector` object using Mockito
+2. Use the `ListCollector` provided by Flink
+
+I prefer the second method as it requires fewer lines of code and is suitable for most cases.
+
+```java
+@Test
+public void testFlatMap() throws Exception {
+  MyStatelessFlatMap statelessFlatMap = new MyStatelessFlatMap();
+  List<String> out = new ArrayList<>();
+  ListCollector<String> listCollector = new ListCollector<>(out);
+  statelessFlatMap.flatMap("world", listCollector);
+  Assert.assertEquals(Lists.newArrayList("hello world"), out);
+}
+```
+
+
+# Stateful Operators
+
+Writing test cases for stateful operators requires more effort. You need to 
check whether the operator state is updated correctly and if it is cleaned up 
properly along with the output of the operator.
+
+Let’s take an example of a stateful `FlatMap` function:
+
+```java
+public class StatefulFlatMap extends RichFlatMapFunction<String, String> {
+  ValueState<String> previousInput;
+
+  @Override
+  public void open(Configuration parameters) throws Exception {
+    previousInput = getRuntimeContext().getState(
+      new ValueStateDescriptor<>("previousInput", Types.STRING));
+  }
+
+  @Override
+  public void flatMap(String in, Collector<String> collector) throws Exception {
+    String out = "hello " + in;
+    if (previousInput.value() != null) {
+      out = out + " " + previousInput.value();
+    }
+    previousInput.update(in);
+    collector.collect(out);
+  }
+}
+```
+
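+The excerpt is cut off below; as a hedged sketch of the harness-based approach it introduces (not necessarily the author's exact code, and the harness API can differ across Flink versions), keyed state can be exercised end-to-end with Flink's test utilities, imports omitted as in the snippets above:
+
+```java
+@Test
+public void testStatefulFlatMap() throws Exception {
+  // Wrap the function in its operator and key the stream by a constant key,
+  // so the keyed ValueState used in open()/flatMap() is available.
+  KeyedOneInputStreamOperatorTestHarness<String, String, String> harness =
+    new KeyedOneInputStreamOperatorTestHarness<>(
+      new StreamFlatMap<>(new StatefulFlatMap()), in -> "key", Types.STRING);
+  harness.open();
+
+  harness.processElement("world", 10);
+  harness.processElement("parallel", 20);
+
+  // The second output carries the remembered previous input.
+  Assert.assertEquals(
+    Lists.newArrayList("hello world", "hello parallel world"),
+    harness.extractOutputValues());
+}
+```
+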
+The intri

[flink-web] branch asf-site updated (0c80e86 -> 3a953f8)

2020-01-29 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 0c80e86  Add Stateful Functions repository to website
 new fb51490  Add "State Unlocked" blog post
 new 3a953f8  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ...ocked-interacting-with-state-in-apache-flink.md | 253 +++
 content/blog/feed.xml  | 239 +++
 content/blog/index.html|  36 +-
 content/blog/page10/index.html |  25 ++
 content/blog/page2/index.html  |  36 +-
 content/blog/page3/index.html  |  38 +-
 content/blog/page4/index.html  |  38 +-
 content/blog/page5/index.html  |  38 +-
 content/blog/page6/index.html  |  40 +-
 content/blog/page7/index.html  |  40 +-
 content/blog/page8/index.html  |  40 +-
 content/blog/page9/index.html  |  40 +-
 ...state-in-flink-state-processor-api-visual-1.png | Bin 0 -> 52723 bytes
 ...state-in-flink-state-processor-api-visual-2.png | Bin 0 -> 46207 bytes
 .../managing-state-in-flink-visual-1.png   | Bin 0 -> 419621 bytes
 .../managing-state-in-flink-visual-2.png   | Bin 0 -> 408204 bytes
 content/index.html |   8 +-
 ...ked-interacting-with-state-in-apache-flink.html | 470 +
 content/zh/index.html  |   8 +-
 ...state-in-flink-state-processor-api-visual-1.png | Bin 0 -> 52723 bytes
 ...state-in-flink-state-processor-api-visual-2.png | Bin 0 -> 46207 bytes
 .../managing-state-in-flink-visual-1.png   | Bin 0 -> 419621 bytes
 .../managing-state-in-flink-visual-2.png   | Bin 0 -> 408204 bytes
 23 files changed, 1210 insertions(+), 139 deletions(-)
 create mode 100755 
_posts/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink.md
 create mode 100755 
content/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-state-processor-api-visual-1.png
 create mode 100755 
content/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-state-processor-api-visual-2.png
 create mode 100755 
content/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-1.png
 create mode 100755 
content/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-2.png
 create mode 100644 
content/news/2020/01/29/state-unlocked-interacting-with-state-in-apache-flink.html
 create mode 100755 
img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-state-processor-api-visual-1.png
 create mode 100755 
img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-state-processor-api-visual-2.png
 create mode 100755 
img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-1.png
 create mode 100755 
img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-2.png



[flink-web] 02/02: Rebuild website

2020-01-29 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 3a953f8faf1ac137c55cd9a5d195f209f43eeab3
Author: Fabian Hueske 
AuthorDate: Wed Jan 29 18:34:00 2020 +0100

Rebuild website
---
 content/blog/feed.xml  | 239 +++
 content/blog/index.html|  36 +-
 content/blog/page10/index.html |  25 ++
 content/blog/page2/index.html  |  36 +-
 content/blog/page3/index.html  |  38 +-
 content/blog/page4/index.html  |  38 +-
 content/blog/page5/index.html  |  38 +-
 content/blog/page6/index.html  |  40 +-
 content/blog/page7/index.html  |  40 +-
 content/blog/page8/index.html  |  40 +-
 content/blog/page9/index.html  |  40 +-
 ...state-in-flink-state-processor-api-visual-1.png | Bin 0 -> 52723 bytes
 ...state-in-flink-state-processor-api-visual-2.png | Bin 0 -> 46207 bytes
 .../managing-state-in-flink-visual-1.png   | Bin 0 -> 419621 bytes
 .../managing-state-in-flink-visual-2.png   | Bin 0 -> 408204 bytes
 content/index.html |   8 +-
 ...ked-interacting-with-state-in-apache-flink.html | 470 +
 content/zh/index.html  |   8 +-
 18 files changed, 957 insertions(+), 139 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 742368a..9a4424e 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,245 @@
 <atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 
+<title>State Unlocked: Interacting with State in Apache Flink</title>
+<h1 id="introduction">Introduction</h1>
+
+<p>With stateful stream-processing becoming the norm for complex event-driven applications and real-time analytics, <a href="https://flink.apache.org/">Apache Flink</a> is often the backbone for running business logic and managing an organization’s most valuable asset — its data — as application state in Flink.</p>
+
+<p>In order to provide a state-of-the-art experience to Flink developers, the Apache Flink community makes significant efforts to provide the safety and future-proof guarantees organizations need while managing state in Flink. In particular, Flink developers should have sufficient means to access and modify their state, as well as making bootstrapping state with existing data from external systems a piece-of-cake. These efforts span multiple Flink major releases and consist of the  [...]
+
+<ol>
+  <li>Evolvable state schema in Apache Flink</li>
+  <li>Flexibility in swapping state backends, and</li>
+  <li>The State processor API, an offline tool to read, write and modify state in Flink</li>
+</ol>
+
+<p>This post discusses the community’s efforts related to state management in Flink, provides some practical examples of how the different features and APIs can be utilized and covers some future ideas for new and improved ways of managing state in Apache Flink.</p>
+
+<h1 id="stream-processing-what-is-state">Stream processing: What is State?</h1>
+
+<p>To set the tone for the remainder of the post, let us first try to explain the very definition of state in stream processing. When it comes to stateful stream processing, state comprises the information that an application or stream processing engine will remember across events and streams as more realtime (unbounded) and/or offline (bounded) data flow through the system. Most trivial applications are inherently stateful; even the example of a simple COUNT operation, whereby  [...]
+
+<p>To better understand how Flink manages state, one can think of Flink like a three-layered state abstraction, as illustrated in the diagram below.</p>
+
+<center>
+<img src="/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-1.png" width="600px" alt="State in Apache Flink" />
+</center>
+<p><br /></p>
+
+<p>On the top layer sits the Flink user code, for example, a <code>KeyedProcessFunction</code> that contains some value state. This is a simple variable whose value state annotation makes it automatically fault-tolerant, re-scalable and queryable by the runtime. These variables are backed by the configured state backend that sits either on-heap or on-disk (RocksDB State Backend) and provides data locality, proximity to the computation and speed when it comes to per-re [...]
+
+<p>A savepoint is a snapshot of the distributed, global state of an application at a logical point-in-time and is stored in an external distributed file system or blob storage such as HDFS, or S3. Upon upgrading an application or implementing a code change — such as adding a new operator or changing a field — the Flink job can restart by re-loading the ap

[flink-web] 01/02: Add "State Unlocked" blog post

2020-01-29 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit fb51490a60d8084b61416607c6d9a8d9c2f91169
Author: Seth Wiesman 
AuthorDate: Tue Dec 10 12:30:47 2019 -0600

Add "State Unlocked" blog post

This closes #288.
---
 ...ocked-interacting-with-state-in-apache-flink.md | 253 +
 ...state-in-flink-state-processor-api-visual-1.png | Bin 0 -> 52723 bytes
 ...state-in-flink-state-processor-api-visual-2.png | Bin 0 -> 46207 bytes
 .../managing-state-in-flink-visual-1.png   | Bin 0 -> 419621 bytes
 .../managing-state-in-flink-visual-2.png   | Bin 0 -> 408204 bytes
 5 files changed, 253 insertions(+)

diff --git 
a/_posts/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink.md 
b/_posts/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink.md
new file mode 100755
index 000..ee6bb3c
--- /dev/null
+++ b/_posts/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink.md
@@ -0,0 +1,253 @@
+---
+layout: post
+title: "State Unlocked: Interacting with State in Apache Flink"
+date:  2020-01-29 12:00:00
+categories: news
+authors:
+- seth:
+  name: "Seth Wiesman"
+  twitter: "sjwiesman"
+
+
+excerpt: This post discusses the efforts of the Flink community as they relate 
to state management in Apache Flink. We showcase some practical examples of how 
the different features and APIs can be utilized and cover some future ideas for 
new and improved ways of managing state in Apache Flink.
+ 
+---
+
+# Introduction
+
+With stateful stream-processing becoming the norm for complex event-driven 
applications and real-time analytics, [Apache Flink](https://flink.apache.org/) 
is often the backbone for running business logic and managing an organization’s 
most valuable asset — its data — as application state in Flink. 
+
+In order to provide a state-of-the-art experience to Flink developers, the 
Apache Flink community makes significant efforts to provide the safety and 
future-proof guarantees organizations need while managing state in Flink. In 
particular, Flink developers should have sufficient means to access and modify 
their state, as well as making bootstrapping state with existing data from 
external systems a piece-of-cake. These efforts span multiple Flink major 
releases and consist of the following:
+
+1. Evolvable state schema in Apache Flink
+2. Flexibility in swapping state backends, and
+3. The State processor API, an offline tool to read, write and modify state in 
Flink
+
+This post discusses the community’s efforts related to state management in 
Flink, provides some practical examples of how the different features and APIs 
can be utilized and covers some future ideas for new and improved ways of 
managing state in Apache Flink.
+
+
+# Stream processing: What is State?
+
+To set the tone for the remainder of the post, let us first try to explain the very definition of state in stream processing. When it comes to stateful stream processing, state comprises the information that an application or stream processing engine will remember across events and streams as more realtime (unbounded) and/or offline (bounded) data flow through the system. Most trivial applications are inherently stateful; even the example of a simple COUNT operation, whereby when coun [...]
+
+To better understand how Flink manages state, one can think of Flink like a 
three-layered state abstraction, as illustrated in the diagram below. 
+
+<center>
+<img src="/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-1.png" width="600px" alt="State in Apache Flink"/>
+</center>
+
+On the top layer sits the Flink user code, for example, a `KeyedProcessFunction` that contains some value state. This is a simple variable whose value state annotation makes it automatically fault-tolerant, re-scalable and queryable by the runtime. These variables are backed by the configured state backend that sits either on-heap or on-disk (RocksDB State Backend) and provides data locality, proximity to the computation and speed when it comes to per-record computations. Finally, when [...]
+
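+As a hedged illustration of this top layer (the class and field names are made up, not from the original post; imports omitted), a keyed count with value state looks like this:
+
+```java
+public class CountFunction extends KeyedProcessFunction<String, String, Long> {
+  // Declared in user code and registered with the state backend in open().
+  private transient ValueState<Long> count;
+
+  @Override
+  public void open(Configuration parameters) {
+    count = getRuntimeContext().getState(
+      new ValueStateDescriptor<>("count", Types.LONG));
+  }
+
+  @Override
+  public void processElement(String value, Context ctx, Collector<Long> out) throws Exception {
+    Long current = count.value();
+    long updated = (current == null) ? 1L : current + 1L;
+    // The runtime makes this update fault-tolerant, re-scalable and queryable.
+    count.update(updated);
+    out.collect(updated);
+  }
+}
+```
+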
+A savepoint is a snapshot of the distributed, global state of an application 
at a logical point-in-time and is stored in an external distributed file system 
or blob storage such as HDFS, or S3. Upon upgrading an application or 
implementing a code change  — such as adding a new operator or changing a field 
— the Flink job can restart by re-loading the application state from the 
savepoint into the state backend, making it local and available for the 
computation and continue processing as i [...]
+
+<center>
+<img src="/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-2.png" width="600px" alt="Managing State in Apache Flink"/>
+</center>
+
+
+ It is important to remember here that state is one of the most valuable 
components of a Flink application carrying all the information about both 
where you are now and where you are going. State is among the most long-lived 
components in a Flink service since it can be carried across jobs, operators, 
configurations,

[flink-web] branch asf-site updated (ddfcb41 -> e165b5e)

2020-01-20 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from ddfcb41  Rebuild website
 new 8d4b689  Update Operator versions in the KUDO blog post.
 new e165b5e  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _posts/2019-11-06-flink-kubernetes-kudo.md | 14 +++---
 content/blog/feed.xml  | 14 +++---
 content/news/2019/12/09/flink-kubernetes-kudo.html | 14 +++---
 3 files changed, 21 insertions(+), 21 deletions(-)



[flink-web] 01/02: Update Operator versions in the KUDO blog post.

2020-01-20 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 8d4b6895fde835a06ca573acd5317e4c31357546
Author: Tobi Knaup 
AuthorDate: Fri Jan 17 09:52:51 2020 -0800

Update Operator versions in the KUDO blog post.

This closes #293.
---
 _posts/2019-11-06-flink-kubernetes-kudo.md | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/_posts/2019-11-06-flink-kubernetes-kudo.md 
b/_posts/2019-11-06-flink-kubernetes-kudo.md
index 4e66cf6..a022e1f 100644
--- a/_posts/2019-11-06-flink-kubernetes-kudo.md
+++ b/_posts/2019-11-06-flink-kubernetes-kudo.md
@@ -27,7 +27,7 @@ If you’re using a different way to provision Kubernetes, make 
sure you have at
 
 Install the `kubectl` CLI tool. The KUDO CLI is a plugin for the Kubernetes 
CLI. The official instructions for installing and setting up kubectl are 
[here](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
 
-Next, let’s install the KUDO CLI. At the time of this writing, the latest KUDO 
version is v0.8.0. You can find the CLI binaries for download 
[here](https://github.com/kudobuilder/kudo/releases). Download the 
`kubectl-kudo` binary for your OS and architecture.
+Next, let’s install the KUDO CLI. At the time of this writing, the latest KUDO 
version is v0.10.0. You can find the CLI binaries for download 
[here](https://github.com/kudobuilder/kudo/releases). Download the 
`kubectl-kudo` binary for your OS and architecture.
 
 If you’re using Homebrew on MacOS, you can install the CLI via:
 
@@ -48,17 +48,17 @@ This will create several resources. First, it will create 
the [Custom Resource D
 The KUDO CLI leverages the kubectl plugin system, which gives you all its 
functionality under `kubectl kudo`. This is a convenient way to install and 
deal with your KUDO Operators. For our demo, we use Kafka and Flink which 
depend on ZooKeeper. To make the ZooKeeper Operator available on the cluster, 
run:
 
 ```
-$ kubectl kudo install zookeeper --version=0.2.0 --skip-instance
+$ kubectl kudo install zookeeper --version=0.3.0 --skip-instance
 ```
 
 The --skip-instance flag skips the creation of a ZooKeeper instance. The 
flink-demo Operator that we’re going to install below will create it as a 
dependency instead. Now let’s make the Kafka and Flink Operators available the 
same way:
 
 ```
-$ kubectl kudo install kafka --version=0.1.3 --skip-instance
+$ kubectl kudo install kafka --version=1.2.0 --skip-instance
 ```
 
 ```
-$ kubectl kudo install flink --version=0.1.1 --skip-instance
+$ kubectl kudo install flink --version=0.2.1 --skip-instance
 ```
 
 This installs all the Operator versions needed for our demo.
@@ -80,7 +80,7 @@ Next, change into the “operators” directory and install the 
demo-operator fr
 ```
 $ cd operators
 $ kubectl kudo install 
repository/flink/docs/demo/financial-fraud/demo-operator --instance flink-demo
-instance.kudo.dev/v1alpha1/flink-demo created
+instance.kudo.dev/v1beta1/flink-demo created
 ```
 
 This time we didn’t include the --skip-instance flag, so KUDO will actually 
deploy all the components, including Flink, Kafka, and ZooKeeper. KUDO 
orchestrates deployments and other lifecycle operations using 
[plans](https://kudo.dev/docs/concepts.html#plan) that were defined by the 
Operator developer. Plans are similar to 
[runbooks](https://en.wikipedia.org/wiki/Runbook) and encapsulate all the 
procedures required to operate the software. We can track the status of the 
deployment using  [...]
@@ -89,7 +89,7 @@ This time we didn’t include the --skip-instance flag, so KUDO 
will actually de
 $ kubectl kudo plan status --instance flink-demo
 Plan(s) for "flink-demo" in namespace "default":
 .
-└── flink-demo (Operator-Version: "flink-demo-0.1.1" Active-Plan: "deploy")
+└── flink-demo (Operator-Version: "flink-demo-0.1.4" Active-Plan: "deploy")
└── Plan deploy (serial strategy) [IN_PROGRESS]
├── Phase dependencies [IN_PROGRESS]
│   ├── Step zookeeper (COMPLETE)
@@ -109,7 +109,7 @@ The output shows that the “deploy” plan is in progress and 
that it consists
 $ kubectl kudo plan status --instance flink-demo-kafka
 Plan(s) for "flink-demo-kafka" in namespace "default":
 .
-└── flink-demo-kafka (Operator-Version: "kafka-0.1.3" Active-Plan: "deploy")
+└── flink-demo-kafka (Operator-Version: "kafka-1.2.0" Active-Plan: "deploy")
├── Plan deploy (serial strategy) [IN_PROGRESS]
│   └── Phase deploy-kafka [IN_PROGRESS]
│   └── Step deploy (IN_PROGRESS)



[flink-web] 02/02: Rebuild website

2020-01-20 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit e165b5ec467ebd927599340e7d0c625cb48750c6
Author: Fabian Hueske 
AuthorDate: Mon Jan 20 13:46:37 2020 +0100

Rebuild website
---
 content/blog/feed.xml  | 14 +++---
 content/news/2019/12/09/flink-kubernetes-kudo.html | 14 +++---
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 2fe71fe..742368a 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -378,7 +378,7 @@ To understand why this is the case, let us start with 
articulating a realistic s
 
 pInstall the codekubectl/code CLI tool. The KUDO CLI 
is a plugin for the Kubernetes CLI. The official instructions for installing 
and setting up kubectl are a 
href=https://kubernetes.io/docs/tasks/tools/install-kubectl/here/a./p;
 
-pNext, let’s install the KUDO CLI. At the time of this writing, the 
latest KUDO version is v0.8.0. You can find the CLI binaries for download a 
href=https://github.com/kudobuilder/kudo/releaseshere/a;.
 Download the codekubectl-kudo/code binary for your OS and 
architecture./p
+pNext, let’s install the KUDO CLI. At the time of this writing, the 
latest KUDO version is v0.10.0. You can find the CLI binaries for download 
a 
href=https://github.com/kudobuilder/kudo/releaseshere/a;.
 Download the codekubectl-kudo/code binary for your OS and 
architecture./p
 
 pIf you’re using Homebrew on MacOS, you can install the CLI 
via:/p
 
@@ -396,15 +396,15 @@ $KUDO_HOME has been configured at /Users/gerred/.kudo
 
 pThe KUDO CLI leverages the kubectl plugin system, which gives you all 
its functionality under codekubectl kudo/code. This is a 
convenient way to install and deal with your KUDO Operators. For our demo, we 
use Kafka and Flink which depend on ZooKeeper. To make the ZooKeeper Operator 
available on the cluster, run:/p
 
-div class=highlightprecode$ kubectl kudo 
install zookeeper --version=0.2.0 --skip-instance
+div class=highlightprecode$ kubectl kudo 
install zookeeper --version=0.3.0 --skip-instance
 /code/pre/div
 
 pThe –skip-instance flag skips the creation of a ZooKeeper instance. 
The flink-demo Operator that we’re going to install below will create it as a 
dependency instead. Now let’s make the Kafka and Flink Operators available the 
same way:/p
 
-div class=highlightprecode$ kubectl kudo 
install kafka --version=0.1.3 --skip-instance
+div class=highlightprecode$ kubectl kudo 
install kafka --version=1.2.0 --skip-instance
 /code/pre/div
 
-div class=highlightprecode$ kubectl kudo 
install flink --version=0.1.1 --skip-instance
+div class=highlightprecode$ kubectl kudo 
install flink --version=0.2.1 --skip-instance
 /code/pre/div
 
 pThis installs all the Operator versions needed for our demo./p
@@ -424,7 +424,7 @@ $KUDO_HOME has been configured at /Users/gerred/.kudo
 
 div class=highlightprecode$ cd operators
 $ kubectl kudo install 
repository/flink/docs/demo/financial-fraud/demo-operator --instance flink-demo
-instance.kudo.dev/v1alpha1/flink-demo created
+instance.kudo.dev/v1beta1/flink-demo created
 /code/pre/div
 
 pThis time we didn’t include the –skip-instance flag, so KUDO will 
actually deploy all the components, including Flink, Kafka, and ZooKeeper. KUDO 
orchestrates deployments and other lifecycle operations using a 
href=https://kudo.dev/docs/concepts.html#planplans/a; 
that were defined by the Operator developer. Plans are similar to a 
href=https://en.wikipedia.org/wiki/Runbookrunbooks/a; 
and encapsulate all the procedures required [...]
@@ -432,7 +432,7 @@ instance.kudo.dev/v1alpha1/flink-demo created
 div class=highlightprecode$ kubectl kudo 
plan status --instance flink-demo
 Plan(s) for flink-demo in namespace default:
 .
-└── flink-demo (Operator-Version: flink-demo-0.1.1 Active-Plan: 
deploy)
+└── flink-demo (Operator-Version: flink-demo-0.1.4 Active-Plan: 
deploy)
└── Plan deploy (serial strategy) [IN_PROGRESS]
├── Phase dependencies [IN_PROGRESS]
│   ├── Step zookeeper (COMPLETE)
@@ -451,7 +451,7 @@ Plan(s) for flink-demo in namespace 
default:
 div class=highlightprecode$ kubectl kudo 
plan status --instance flink-demo-kafka
 Plan(s) for flink-demo-kafka in namespace default:
 .
-└── flink-demo-kafka (Operator-Version: kafka-0.1.3 Active-Plan: 
deploy)
+└── flink-demo-kafka (Operator-Version: kafka-1.2.0 Active-Plan: 
deploy)
├── Plan deploy (serial strategy) [IN_PROGRESS]
│   └── Phase deploy-kafka [IN_PROGRESS]
│   └── Step deploy (IN_PROGRESS)
diff --git a/content/news/2019/12/09/flink-kubernetes-kudo.html 
b/content/news/2019/12/09/flink-kubernetes-kudo.html
index 694c9bc..31a0498 100644
--- a/content/news/2019/12/09/flink-kubernetes-kudo.html
+++ b/content/news/2019/12/09/flink-kubernetes-kudo.html
@@ -215,7 +215,7 @@
 
 Install

[flink-web] branch asf-site updated (e62c514 -> 0701ee2)

2020-01-17 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from e62c514  Rebuild website
 new bf970b5  [hotfix] Correct payer/recepient terminology
 new 0701ee2  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _posts/2020-01-15-demo-fraud-detection.md | 10 +-
 content/news/2020/01/15/demo-fraud-detection.html | 10 +-
 2 files changed, 10 insertions(+), 10 deletions(-)



[flink-web] 01/02: [hotfix] Correct payer/recepient terminology

2020-01-17 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit bf970b54e4f40cb53298c49783ba8b1662f25979
Author: Alexander Fedulov <1492164+afedu...@users.noreply.github.com>
AuthorDate: Fri Jan 17 12:40:34 2020 +0100

[hotfix] Correct payer/recepient terminology

This closes #292.
---
 _posts/2020-01-15-demo-fraud-detection.md | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/_posts/2020-01-15-demo-fraud-detection.md 
b/_posts/2020-01-15-demo-fraud-detection.md
index 02511db..96a3c27 100644
--- a/_posts/2020-01-15-demo-fraud-detection.md
+++ b/_posts/2020-01-15-demo-fraud-detection.md
@@ -97,12 +97,12 @@ DataStream<...> windowed = input
 This approach is the main building block for achieving horizontal scalability 
in a wide range of use cases. However, in the case of an application striving 
to provide flexibility in business logic at runtime, this is not enough.
 To understand why this is the case, let us start with articulating a realistic 
sample rule definition for our fraud detection system in the form of a 
functional requirement:  
 
-*"Whenever the **sum** of the accumulated **payment amount** from the same 
**beneficiary** to the same **payee** within the **duration of a week** is 
**greater** than **1 000 000 $** - fire an alert."*
+*"Whenever the **sum** of the accumulated **payment amount** from the same 
**payer** to the same **beneficiary** within the **duration of a week** is 
**greater** than **1 000 000 $** - fire an alert."*
 
 In this formulation we can spot a number of parameters that we would like to 
be able to specify in a newly-submitted rule and possibly even later modify or 
tweak at runtime:
 
 1. Aggregation field (payment amount)  
-1. Grouping fields (beneficiary + payee)  
+1. Grouping fields (payer + beneficiary)  
 1. Aggregation function (sum)  
 1. Window duration (1 week)  
 1. Limit (1 000 000)  
@@ -114,7 +114,7 @@ Accordingly, we will use the following simple JSON format 
to define the aforemen
 {
   "ruleId": 1,
   "ruleState": "ACTIVE",
-  "groupingKeyNames": ["beneficiaryId", "payeeId"],
+  "groupingKeyNames": ["payerId", "beneficiaryId"],
   "aggregateFieldName": "paymentAmount",
   "aggregatorFunctionType": "SUM",
   "limitOperatorType": "GREATER",
@@ -123,7 +123,7 @@ Accordingly, we will use the following simple JSON format 
to define the aforemen
 }
 ```
 
-At this point, it is important to understand that **`groupingKeyNames`** 
determine the actual physical grouping of events - all Transactions with the 
same values of specified parameters (e.g. _beneficiary #25 -> payee #12_) have 
to be aggregated in the same physical instance of the evaluating operator. 
Naturally, the process of distributing data in such a way in Flink's API is 
realised by a `keyBy()` function.
+At this point, it is important to understand that **`groupingKeyNames`** 
determine the actual physical grouping of events - all Transactions with the 
same values of specified parameters (e.g. _payer #25 -> beneficiary #12_) have 
to be aggregated in the same physical instance of the evaluating operator. 
Naturally, the process of distributing data in such a way in Flink's API is 
realised by a `keyBy()` function.
 
 Most examples in Flink's 
`keyBy()`[documentation](https://ci.apache.org/projects/flink/flink-docs-stable/dev/api_concepts.html#define-keys-using-field-expressions)
 use a hard-coded `KeySelector`, which extracts specific fixed events' fields. 
However, to support the desired flexibility, we have to extract them in a more 
dynamic fashion based on the specifications of the rules. For this, we will 
have to use one additional operator that prepares every event for dispatching 
to a correct aggr [...]
 
@@ -173,7 +173,7 @@ public class DynamicKeyFunction
   ...
 }
 ```
- `KeysExtractor.getKey()` uses reflection to extract the required values of 
`groupingKeyNames` fields from events and combines them as a single 
concatenated String key, e.g `"{beneficiaryId=25;payeeId=12}"`. Flink will 
calculate the hash of this key and assign the processing of this particular 
combination to a specific server in the cluster. This will allow tracking all 
transactions between _beneficiary #25_ and _payee #12_ and evaluating defined 
rules within the desired time window.
+ `KeysExtractor.getKey()` uses reflection to extract the required values of 
`groupingKeyNames` fields from events and combines them as a single 
concatenated String key, e.g `"{payerId=25;beneficiaryId=12}"`. Flink will 
calculate the hash of this key and assign the processing of this particular 
combination to a specific server in the cluster. This will allow tracki
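
The diff above only shows the renamed key; the full implementation lives in the 
linked fraud-detection-demo repository. As a rough, self-contained sketch of 
the reflective extraction it describes (the class, the simplified 
`Transaction` type, and the method shape are assumptions for illustration, not 
the demo's actual code):

```java
import java.lang.reflect.Field;
import java.util.List;
import java.util.StringJoiner;

public class KeysExtractorSketch {

    // Simplified stand-in for the demo's event type.
    public static class Transaction {
        public long payerId;
        public long beneficiaryId;
        public double paymentAmount;
    }

    // Builds a composite key such as "{payerId=25;beneficiaryId=12}" from the
    // fields named in a rule's groupingKeyNames list.
    public static String getKey(Object event, List<String> groupingKeyNames) throws Exception {
        StringJoiner joiner = new StringJoiner(";", "{", "}");
        for (String fieldName : groupingKeyNames) {
            Field field = event.getClass().getField(fieldName);
            joiner.add(fieldName + "=" + field.get(event));
        }
        return joiner.toString();
    }

    public static void main(String[] args) throws Exception {
        Transaction t = new Transaction();
        t.payerId = 25;
        t.beneficiaryId = 12;
        // Prints: {payerId=25;beneficiaryId=12}
        System.out.println(getKey(t, List.of("payerId", "beneficiaryId")));
    }
}
```

Reflection keeps the extractor generic across rule definitions, at the cost of 
losing compile-time checking of the field names listed in `groupingKeyNames`.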

[flink-web] 02/02: Rebuild website

2020-01-17 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 0701ee2190f78d4a43cf1a8e4f4212b2c1845be0
Author: Fabian Hueske 
AuthorDate: Fri Jan 17 13:20:28 2020 +0100

Rebuild website
---
 content/news/2020/01/15/demo-fraud-detection.html | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/content/news/2020/01/15/demo-fraud-detection.html 
b/content/news/2020/01/15/demo-fraud-detection.html
index 1ad32c2..eae1381 100644
--- a/content/news/2020/01/15/demo-fraud-detection.html
+++ b/content/news/2020/01/15/demo-fraud-detection.html
@@ -274,13 +274,13 @@ We hope that this series will place these powerful 
approaches into your tool bel
 This approach is the main building block for achieving horizontal 
scalability in a wide range of use cases. However, in the case of an 
application striving to provide flexibility in business logic at runtime, this 
is not enough.
 To understand why this is the case, let us start with articulating a realistic 
sample rule definition for our fraud detection system in the form of a 
functional requirement:
 
-“Whenever the sum of the accumulated payment 
amount from the same beneficiary to the same 
payee within the duration of a week is 
greater than 1 000 000 $ - fire an 
alert.”
+“Whenever the sum of the accumulated payment 
amount from the same payer to the same 
beneficiary within the duration of a week is 
greater than 1 000 000 $ - fire an 
alert.”
 
 In this formulation we can spot a number of parameters that we would like 
to be able to specify in a newly-submitted rule and possibly even later modify 
or tweak at runtime:
 
 
   Aggregation field (payment amount)
-  Grouping fields (beneficiary + payee)
+  Grouping fields (payer + beneficiary)
   Aggregation function (sum)
   Window duration (1 week)
   Limit (1 000 000)
@@ -292,7 +292,7 @@ To understand why this is the case, let us start with 
articulating a realistic s
 {
   ruleId: 1,
   ruleState: ACTIVE,
-  groupingKeyNames: 
[beneficiaryId, payeeId],
+  groupingKeyNames: 
[payerId, beneficiaryId],
   aggregateFieldName: paymentAmount,
   aggregatorFunctionType: SUM,
   limitOperatorType: GREATER,
@@ -300,7 +300,7 @@ To understand why this is the case, let us start with 
articulating a realistic s
   windowMinutes: 
10080
 }
 
-At this point, it is important to understand that 
groupingKeyNames determine the actual physical 
grouping of events - all Transactions with the same values of specified 
parameters (e.g. beneficiary #25 - payee #12) have to be 
aggregated in the same physical instance of the evaluating operator. Naturally, 
the process of distributing data in such a way in Flink’s API is realised by a 
keyBy() function.
+At this point, it is important to understand that 
groupingKeyNames determine the actual physical 
grouping of events - all Transactions with the same values of specified 
parameters (e.g. payer #25 - beneficiary #12) have to be 
aggregated in the same physical instance of the evaluating operator. Naturally, 
the process of distributing data in such a way in Flink’s API is realised by a 
keyBy() function.
 
 Most examples in Flink’s keyBy()https://ci.apache.org/projects/flink/flink-docs-stable/dev/api_concepts.html#define-keys-using-field-expressions;>documentation
 use a hard-coded KeySelector, which extracts specific fixed 
events’ fields. However, to support the desired flexibility, we have to extract 
them in a more dynamic fashion based on the specifications of the rules. For 
this, we will have to use one additional operator that prepares every eve [...]
 
@@ -346,7 +346,7 @@ To understand why this is the case, let us start with 
articulating a realistic s
   }
   ...
 }
-KeysExtractor.getKey() uses reflection to extract the required 
values of groupingKeyNames fields from events and combines them as 
a single concatenated String key, e.g 
"{beneficiaryId=25;payeeId=12}". Flink will calculate the hash of 
this key and assign the processing of this particular combination to a specific 
server in the cluster. This will allow tracking all transactions between 
beneficiary #25 and payee #12 and evaluating  [...]
+KeysExtractor.getKey() uses reflection to extract the required 
values of groupingKeyNames fields from events and combines them as 
a single concatenated String key, e.g 
"{payerId=25;beneficiaryId=12}". Flink will calculate the hash of 
this key and assign the processing of this particular combination to a specific 
server in the cluster. This will allow tracking all transactions between 
payer #25 and beneficiary #12 and evaluating  [...]
 
 Notice that a wrapper class Keyed with the following signature 
was introduced as the output type of DynamicKeyFunction:
 



[flink-web] 01/02: Add part 1 of application patterns blog post series.

2020-01-15 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 455f6269c26a57c2a6502ae603a063e502a382e5
Author: Alexander Fedulov <1492164+afedu...@users.noreply.github.com>
AuthorDate: Mon Jan 6 12:32:19 2020 +0100

Add part 1 of application patterns blog post series.

This closes #289.
---
 _posts/2020-01-15-demo-fraud-detection.md  | 222 +
 .../architecture.png   | Bin 0 -> 214660 bytes
 .../2019-11-19-demo-fraud-detection/end-to-end.png | Bin 0 -> 195993 bytes
 .../shuffle_function_1.png | Bin 0 -> 152296 bytes
 img/blog/2019-11-19-demo-fraud-detection/ui.png| Bin 0 -> 1296038 bytes
 5 files changed, 222 insertions(+)

diff --git a/_posts/2020-01-15-demo-fraud-detection.md 
b/_posts/2020-01-15-demo-fraud-detection.md
new file mode 100644
index 000..02511db
--- /dev/null
+++ b/_posts/2020-01-15-demo-fraud-detection.md
@@ -0,0 +1,222 @@
+---
+layout: post
+title: "Advanced Flink Application Patterns Vol.1:
+Case Study of a Fraud Detection System"
+date: 2020-01-15T12:00:00.000Z
+authors:
+- alex:
+  name: "Alexander Fedulov"
+  twitter: "alex_fedulov"
+categories: news
+excerpt: In this series of blog posts you will learn about three powerful 
Flink patterns for building streaming applications.
+---
+
+In this series of blog posts you will learn about three powerful Flink 
patterns for building streaming applications:
+
+ - Dynamic updates of application logic
+ - Dynamic data partitioning (shuffle), controlled at runtime
+ - Low latency alerting based on custom windowing logic (without using the 
window API)
+
+These patterns expand the possibilities of what is achievable with statically 
defined data flows and provide the building blocks to fulfill complex business 
requirements.
+
+**Dynamic updates of application logic** allow Flink jobs to change at 
runtime, without downtime from stopping and resubmitting the code.  
+
+**Dynamic data partitioning** provides the ability to change how events are 
distributed and grouped by Flink at runtime. Such functionality often becomes a 
natural requirement when building jobs with dynamically reconfigurable 
application logic.  
+
+**Custom window management** demonstrates how you can utilize the low level 
[process function 
API](https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/process_function.html),
 when the native [window 
API](https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.html)
 is not exactly matching your requirements. Specifically, you will learn how to 
implement low latency alerting on windows and how to limit state growth with 
timers.
+
+These patterns build on top of core Flink functionality, however, they might 
not be immediately apparent from the framework's documentation as explaining 
and presenting the motivation behind them is not always trivial without a 
concrete use case. That is why we will showcase these patterns with a practical 
example that offers a real-world usage scenario for Apache Flink — a _Fraud 
Detection_ engine.
+We hope that this series will place these powerful approaches into your tool 
belt and enable you to take on new and exciting tasks.
+
+In the first blog post of the series we will look at the high-level 
architecture of the demo application, describe its components and their 
interactions. We will then deep dive into the implementation details of the 
first pattern in the series - **dynamic data partitioning**.
+
+
+You will be able to run the full Fraud Detection Demo application locally and 
look into the details of the implementation by using the accompanying GitHub 
repository.
+
+### Fraud Detection Demo
+
+The full source code for our fraud detection demo is open source and available 
online. To run it locally, check out the following repository and follow the 
steps in the README:
+
+[https://github.com/afedulov/fraud-detection-demo](https://github.com/afedulov/fraud-detection-demo)
+
+You will see the demo is a self-contained application - it only requires 
`docker` and `docker-compose` to be built from sources and includes the 
following components:
+
+ - Apache Kafka (message broker) with ZooKeeper
+ - Apache Flink ([application 
cluster](https://ci.apache.org/projects/flink/flink-docs-stable/concepts/glossary.html#flink-application-cluster))
+ - Fraud Detection Web App
+
+The high-level goal of the Fraud Detection engine is to consume a stream of 
financial transactions and evaluate them against a set of rules. These rules 
are subject to frequent changes and tweaks. In a real production system, it is 
important to be able to add and remove them at runtime, without incurring an 
expensive penalty of stopping and restarting the job.
+
+When you navigate to the
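
The excerpt breaks off before the implementation details. As a hedged 
approximation of the third pattern the post names (low-latency alerting with 
custom window management via the process function API), the sketch below 
combines keyed state with a timer that emits the accumulated value and then 
clears the state to bound its growth. Everything here is illustrative and 
assumes event-time timestamps are assigned upstream; it is not the demo's 
actual code.

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Accumulates a per-key sum and emits it when a timer fires, one window
// after the first event for the key, then clears the state.
public class WindowlessSum extends KeyedProcessFunction<String, Double, Double> {

    private static final long WINDOW_MS = 7 * 24 * 60 * 60 * 1000L; // one week

    private transient ValueState<Double> sum;

    @Override
    public void open(Configuration parameters) {
        sum = getRuntimeContext().getState(
                new ValueStateDescriptor<>("sum", Double.class));
    }

    @Override
    public void processElement(Double amount, Context ctx, Collector<Double> out) throws Exception {
        Double current = sum.value();
        if (current == null) {
            current = 0.0;
            // First event for this key: schedule a cleanup timer one window
            // ahead (assumes ctx.timestamp() is non-null, i.e. event time).
            ctx.timerService().registerEventTimeTimer(ctx.timestamp() + WINDOW_MS);
        }
        sum.update(current + amount);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Double> out) throws Exception {
        Double current = sum.value();
        if (current != null) {
            out.collect(current);
        }
        sum.clear(); // bound state growth per key
    }
}
```

Clearing state in `onTimer` is what keeps this approach viable at scale: every 
key is guaranteed to release its state one window after its first event.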

[flink-web] 02/02: Rebuild website

2020-01-15 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 1c63ea08d295e692406762812b82969137db7db3
Author: Fabian Hueske 
AuthorDate: Wed Jan 15 17:29:26 2020 +0100

Rebuild website
---
 content/blog/feed.xml  | 212 ++
 content/blog/index.html|  45 ++-
 content/blog/page10/index.html |  32 +-
 content/blog/page2/index.html  |  45 ++-
 content/blog/page3/index.html  |  45 ++-
 content/blog/page4/index.html  |  47 ++-
 content/blog/page5/index.html  |  47 ++-
 content/blog/page6/index.html  |  47 ++-
 content/blog/page7/index.html  |  45 ++-
 content/blog/page8/index.html  |  43 +-
 content/blog/page9/index.html  |  45 ++-
 .../architecture.png   | Bin 0 -> 214660 bytes
 .../2019-11-19-demo-fraud-detection/end-to-end.png | Bin 0 -> 195993 bytes
 .../shuffle_function_1.png | Bin 0 -> 152296 bytes
 .../blog/2019-11-19-demo-fraud-detection/ui.png| Bin 0 -> 1296038 bytes
 content/index.html |   6 +-
 content/news/2020/01/15/demo-fraud-detection.html  | 443 +
 content/zh/index.html  |   6 +-
 18 files changed, 963 insertions(+), 145 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index c044975..653680c 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,218 @@
 https://flink.apache.org/blog/feed.xml; rel="self" 
type="application/rss+xml" />
 
 
+Advanced Flink Application Patterns Vol.1: Case Study of a Fraud 
Detection System
+pIn this series of blog posts you will learn about three 
powerful Flink patterns for building streaming applications:/p
+
+ul
+  liDynamic updates of application logic/li
+  liDynamic data partitioning (shuffle), controlled at 
runtime/li
+  liLow latency alerting based on custom windowing logic (without 
using the window API)/li
+/ul
+
+pThese patterns expand the possibilities of what is achievable with 
statically defined data flows and provide the building blocks to fulfill 
complex business requirements./p
+
+pstrongDynamic updates of application logic/strong 
allow Flink jobs to change at runtime, without downtime from stopping and 
resubmitting the code.br /
+br /
+strongDynamic data partitioning/strong provides the ability to 
change how events are distributed and grouped by Flink at runtime. Such 
functionality often becomes a natural requirement when building jobs with 
dynamically reconfigurable application logic.br /
+br /
+strongCustom window management/strong demonstrates how you can 
utilize the low level a 
href=https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/process_function.htmlprocess
 function API/a, when the native a 
href=https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.htmlwindow
 API/a is not exactly matching your requirements. Specifically, you 
will learn how to impl [...]
+
+pThese patterns build on top of core Flink functionality, however, 
they might not be immediately apparent from the framework’s documentation as 
explaining and presenting the motivation behind them is not always trivial 
without a concrete use case. That is why we will showcase these patterns with a 
practical example that offers a real-world usage scenario for Apache Flink — a 
emFraud Detection/em engine.
+We hope that this series will place these powerful approaches into your tool 
belt and enable you to take on new and exciting tasks./p
+
+pIn the first blog post of the series we will look at the high-level 
architecture of the demo application, describe its components and their 
interactions. We will then deep dive into the implementation details of the 
first pattern in the series - strongdynamic data 
partitioning/strong./p
+
+pYou will be able to run the full Fraud Detection Demo application 
locally and look into the details of the implementation by using the 
accompanying GitHub repository./p
+
+h3 id=fraud-detection-demoFraud Detection Demo/h3
+
+pThe full source code for our fraud detection demo is open source and 
available online. To run it locally, check out the following repository and 
follow the steps in the README:/p
+
+pa 
href=https://github.com/afedulov/fraud-detection-demohttps://github.com/afedulov/fraud-detection-demo/a/p;
+
+pYou will see the demo is a self-contained application - it only 
requires codedocker/code and 
codedocker-compose/code to be built from sources and includes 
the following components:/p
+
+ul
+  liApache Kafka (message broker) with ZooKeeper/li
+  liApache Flink (a 
href=https://ci.apache.org/projec

[flink-web] branch asf-site updated (f78b71e -> 1c63ea0)

2020-01-15 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from f78b71e  Rebuild website
 new 455f626  Add part 1 of application patterns blog post series.
 new 1c63ea0  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _posts/2020-01-15-demo-fraud-detection.md  | 222 +++
 content/blog/feed.xml  | 212 ++
 content/blog/index.html|  45 ++-
 content/blog/page10/index.html |  32 +-
 content/blog/page2/index.html  |  45 ++-
 content/blog/page3/index.html  |  45 ++-
 content/blog/page4/index.html  |  47 ++-
 content/blog/page5/index.html  |  47 ++-
 content/blog/page6/index.html  |  47 ++-
 content/blog/page7/index.html  |  45 ++-
 content/blog/page8/index.html  |  43 +-
 content/blog/page9/index.html  |  45 ++-
 .../architecture.png   | Bin 0 -> 214660 bytes
 .../2019-11-19-demo-fraud-detection/end-to-end.png | Bin 0 -> 195993 bytes
 .../shuffle_function_1.png | Bin 0 -> 152296 bytes
 .../blog/2019-11-19-demo-fraud-detection/ui.png| Bin 0 -> 1296038 bytes
 content/index.html |   6 +-
 content/news/2020/01/15/demo-fraud-detection.html  | 443 +
 content/zh/index.html  |   6 +-
 .../architecture.png   | Bin 0 -> 214660 bytes
 .../2019-11-19-demo-fraud-detection/end-to-end.png | Bin 0 -> 195993 bytes
 .../shuffle_function_1.png | Bin 0 -> 152296 bytes
 img/blog/2019-11-19-demo-fraud-detection/ui.png| Bin 0 -> 1296038 bytes
 23 files changed, 1185 insertions(+), 145 deletions(-)
 create mode 100644 _posts/2020-01-15-demo-fraud-detection.md
 create mode 100644 
content/img/blog/2019-11-19-demo-fraud-detection/architecture.png
 create mode 100644 
content/img/blog/2019-11-19-demo-fraud-detection/end-to-end.png
 create mode 100644 
content/img/blog/2019-11-19-demo-fraud-detection/shuffle_function_1.png
 create mode 100644 content/img/blog/2019-11-19-demo-fraud-detection/ui.png
 create mode 100644 content/news/2020/01/15/demo-fraud-detection.html
 create mode 100644 img/blog/2019-11-19-demo-fraud-detection/architecture.png
 create mode 100644 img/blog/2019-11-19-demo-fraud-detection/end-to-end.png
 create mode 100644 
img/blog/2019-11-19-demo-fraud-detection/shuffle_function_1.png
 create mode 100644 img/blog/2019-11-19-demo-fraud-detection/ui.png



[flink-playgrounds] branch release-1.9 updated: [hotfix] Use correct checkpoint docker volume as set in state.checkpoint.dir.

2020-01-07 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 198fa24  [hotfix] Use correct checkpoint docker volume as set in 
state.checkpoint.dir.
198fa24 is described below

commit 198fa24c0dc2e2141f3e22feef1d50f0bb784b0c
Author: Patrick Wiener 
AuthorDate: Mon Oct 7 14:44:33 2019 +0200

[hotfix] Use correct checkpoint docker volume as set in 
state.checkpoint.dir.

This closes #6.
---
 operations-playground/docker-compose.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/operations-playground/docker-compose.yaml 
b/operations-playground/docker-compose.yaml
index 7907092..5a88b98 100644
--- a/operations-playground/docker-compose.yaml
+++ b/operations-playground/docker-compose.yaml
@@ -41,7 +41,7 @@ services:
   - 8081:8081
 volumes:
   - ./conf:/opt/flink/conf
-  - flink-checkpoint-directory:/tmp/flink-checkpoint-directory
+  - flink-checkpoints-directory:/tmp/flink-checkpoints-directory
   - /tmp/flink-savepoints-directory:/tmp/flink-savepoints-directory
 environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
@@ -52,7 +52,7 @@ services:
 command: "taskmanager.sh start-foreground"
 volumes:
   - ./conf:/opt/flink/conf
-  - flink-checkpoint-directory:/tmp/flink-checkpoint-directory
+  - flink-checkpoints-directory:/tmp/flink-checkpoints-directory
   - /tmp/flink-savepoints-directory:/tmp/flink-savepoints-directory
 environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
@@ -70,4 +70,4 @@ services:
 ports:
   - 9094:9094
 volumes:
-  flink-checkpoint-directory:
\ No newline at end of file
+  flink-checkpoints-directory:
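
The renamed volume matters because the jobmanager and taskmanager containers 
must share one checkpoint directory; the commit message refers to 
`state.checkpoint.dir`, while the actual Flink option it aligns with is 
`state.checkpoints.dir`. For reference, a minimal sketch of pointing a job at 
the same directory programmatically with the `FsStateBackend` of the Flink 
1.8/1.9 line these branches target (the job itself is invented for 
illustration):

```java
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointDirectorySketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 10 seconds into the directory that the fixed
        // docker-compose volume mounts into every container.
        env.enableCheckpointing(10_000L);
        env.setStateBackend(new FsStateBackend("file:///tmp/flink-checkpoints-directory"));

        env.fromElements(1, 2, 3).print();
        env.execute("checkpoint-directory-sketch");
    }
}
```

If the configured directory is not backed by the shared volume, checkpoints 
land in a container-local path and cannot be read after a container restart.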



[flink-playgrounds] branch release-1.8 updated: [hotfix] Use correct checkpoint docker volume as set in state.checkpoint.dir.

2020-01-07 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new 5347d76  [hotfix] Use correct checkpoint docker volume as set in 
state.checkpoint.dir.
5347d76 is described below

commit 5347d7678a11df6ae007c28ada735de6ce72226a
Author: Patrick Wiener 
AuthorDate: Mon Oct 7 14:44:33 2019 +0200

[hotfix] Use correct checkpoint docker volume as set in 
state.checkpoint.dir.

This closes #6.
---
 operations-playground/docker-compose.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/operations-playground/docker-compose.yaml 
b/operations-playground/docker-compose.yaml
index d498070..68643de 100644
--- a/operations-playground/docker-compose.yaml
+++ b/operations-playground/docker-compose.yaml
@@ -41,7 +41,7 @@ services:
   - 8081:8081
 volumes:
   - ./conf:/opt/flink/conf
-  - flink-checkpoint-directory:/tmp/flink-checkpoint-directory
+  - flink-checkpoints-directory:/tmp/flink-checkpoints-directory
   - /tmp/flink-savepoints-directory:/tmp/flink-savepoints-directory
 environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
@@ -52,7 +52,7 @@ services:
 command: "taskmanager.sh start-foreground"
 volumes:
   - ./conf:/opt/flink/conf
-  - flink-checkpoint-directory:/tmp/flink-checkpoint-directory
+  - flink-checkpoints-directory:/tmp/flink-checkpoints-directory
   - /tmp/flink-savepoints-directory:/tmp/flink-savepoints-directory
 environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
@@ -70,4 +70,4 @@ services:
 ports:
   - 9094:9094
 volumes:
-  flink-checkpoint-directory:
\ No newline at end of file
+  flink-checkpoints-directory:



[flink-playgrounds] branch master updated: [hotfix] Use correct checkpoint docker volume as set in state.checkpoint.dir.

2020-01-07 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/master by this push:
 new 73d0cad  [hotfix] Use correct checkpoint docker volume as set in 
state.checkpoint.dir.
73d0cad is described below

commit 73d0cad9733df6de59a402d749df8419d67e9c75
Author: Patrick Wiener 
AuthorDate: Mon Oct 7 14:44:33 2019 +0200

[hotfix] Use correct checkpoint docker volume as set in 
state.checkpoint.dir.

This closes #6.
---
 operations-playground/docker-compose.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/operations-playground/docker-compose.yaml 
b/operations-playground/docker-compose.yaml
index 7907092..5a88b98 100644
--- a/operations-playground/docker-compose.yaml
+++ b/operations-playground/docker-compose.yaml
@@ -41,7 +41,7 @@ services:
   - 8081:8081
 volumes:
   - ./conf:/opt/flink/conf
-  - flink-checkpoint-directory:/tmp/flink-checkpoint-directory
+  - flink-checkpoints-directory:/tmp/flink-checkpoints-directory
   - /tmp/flink-savepoints-directory:/tmp/flink-savepoints-directory
 environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
@@ -52,7 +52,7 @@ services:
 command: "taskmanager.sh start-foreground"
 volumes:
   - ./conf:/opt/flink/conf
-  - flink-checkpoint-directory:/tmp/flink-checkpoint-directory
+  - flink-checkpoints-directory:/tmp/flink-checkpoints-directory
   - /tmp/flink-savepoints-directory:/tmp/flink-savepoints-directory
 environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
@@ -70,4 +70,4 @@ services:
 ports:
   - 9094:9094
 volumes:
-  flink-checkpoint-directory:
\ No newline at end of file
+  flink-checkpoints-directory:



[flink-web] branch asf-site updated (20e5a8b -> b69c6fe)

2019-12-11 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 20e5a8b  Rebuild website
 new d23026a  [FLINK-14213] Replace link to "Local Setup Tutorial" by link 
to "Getting Started Overview".
 new b69c6fe  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _data/i18n.yml  | 6 --
 _includes/navbar.html   | 4 ++--
 content/2019/05/03/pulsar-flink.html| 4 ++--
 content/2019/05/14/temporal-tables.html | 4 ++--
 content/2019/05/19/state-ttl.html   | 4 ++--
 content/2019/06/05/flink-network-stack.html | 4 ++--
 content/2019/06/26/broadcast-state.html | 4 ++--
 content/2019/07/23/flink-network-stack-2.html   | 4 ++--
 content/blog/index.html | 4 ++--
 content/blog/page10/index.html  | 4 ++--
 content/blog/page2/index.html   | 4 ++--
 content/blog/page3/index.html   | 4 ++--
 content/blog/page4/index.html   | 4 ++--
 content/blog/page5/index.html   | 4 ++--
 content/blog/page6/index.html   | 4 ++--
 content/blog/page7/index.html   | 4 ++--
 content/blog/page8/index.html   | 4 ++--
 content/blog/page9/index.html   | 4 ++--
 content/blog/release_1.0.0-changelog_known_issues.html  | 4 ++--
 content/blog/release_1.1.0-changelog.html   | 4 ++--
 content/blog/release_1.2.0-changelog.html   | 4 ++--
 content/blog/release_1.3.0-changelog.html   | 4 ++--
 content/community.html  | 4 ++--
 content/contributing/code-style-and-quality-common.html | 4 ++--
 content/contributing/code-style-and-quality-components.html | 4 ++--
 content/contributing/code-style-and-quality-formatting.html | 4 ++--
 content/contributing/code-style-and-quality-java.html   | 4 ++--
 content/contributing/code-style-and-quality-preamble.html   | 4 ++--
 content/contributing/code-style-and-quality-pull-requests.html  | 4 ++--
 content/contributing/code-style-and-quality-scala.html  | 4 ++--
 content/contributing/contribute-code.html   | 4 ++--
 content/contributing/contribute-documentation.html  | 4 ++--
 content/contributing/how-to-contribute.html | 4 ++--
 content/contributing/improve-website.html   | 4 ++--
 content/contributing/reviewing-prs.html | 4 ++--
 content/documentation.html  | 4 ++--
 content/downloads.html  | 4 ++--
 content/ecosystem.html  | 4 ++--
 content/faq.html| 4 ++--
 content/feature/2019/09/13/state-processor-api.html | 4 ++--
 content/features/2017/07/04/flink-rescalable-state.html | 4 ++--
 content/features/2018/01/30/incremental-checkpointing.html  | 4 ++--
 .../features/2018/03/01/end-to-end-exactly-once-apache-flink.html   | 4 ++--
 content/features/2019/03/11/prometheus-monitoring.html  | 4 ++--
 content/flink-applications.html | 4 ++--
 content/flink-architecture.html | 4 ++--
 content/flink-operations.html   | 4 ++--
 content/gettinghelp.html| 4 ++--
 content/index.html  | 4 ++--
 content/material.html   | 4 ++--
 content/news/2014/08/26/release-0.6.html| 4 ++--
 content/news/2014/09/26/release-0.6.1.html  | 4 ++--
 content/news/2014/10/03/upcoming_events.html| 4 ++--
 content/news/2014/11/04/release-0.7.0.html  | 4 ++--
 content/news/2014/11/18/hadoop-compatibility.html   | 4 ++--
 content/news/2015/01/06/december-in-flink.html

[flink-web] 01/02: [FLINK-14213] Replace link to "Local Setup Tutorial" by link to "Getting Started Overview".

2019-12-11 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit d23026ac16470157ad8b50e1d753f1471dae593e
Author: Fabian Hueske 
AuthorDate: Wed Sep 25 15:19:40 2019 +0200

[FLINK-14213] Replace link to "Local Setup Tutorial" by link to "Getting 
Started Overview".

This closes #272.
---
 _data/i18n.yml| 6 --
 _includes/navbar.html | 4 ++--
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/_data/i18n.yml b/_data/i18n.yml
index 9db8599..6c43ccf 100644
--- a/_data/i18n.yml
+++ b/_data/i18n.yml
@@ -7,7 +7,7 @@ en:
 powered_by: Powered By
 faq: FAQ
 downloads: Downloads
-tutorials: Tutorials
+getting_started: Getting Started
 documentation: Documentation
 getting_help: Getting Help
 ecosystem: Ecosystem
@@ -20,6 +20,7 @@ en:
 contribute_docs: Contribute Documentation
 contribute_website: Contribute to the Website
 roadmap: Roadmap
+tutorials: Tutorials
 
 zh:
 what_is_flink: Apache Flink 是什么?
@@ -30,7 +31,7 @@ zh:
 powered_by: Flink 用户
 faq: 常见问题
 downloads: 下载
-tutorials: 教程
+getting_started: 教程
 documentation: 文档
 getting_help: 获取帮助
 ecosystem: Ecosystem
@@ -43,3 +44,4 @@ zh:
 contribute_docs: 贡献文档
 contribute_website: 贡献网站
 roadmap: 开发计划
+tutorials: 教程
\ No newline at end of file
diff --git a/_includes/navbar.html b/_includes/navbar.html
index 71539bd..3acf26e 100755
--- a/_includes/navbar.html
+++ b/_includes/navbar.html
@@ -63,9 +63,9 @@
 
 {{ 
site.data.i18n[page.language].downloads }}
 
-
+
 
-  {{ 
site.data.i18n[page.language].tutorials }} 
+  {{ 
site.data.i18n[page.language].getting_started }} 
 
 
 



[flink-web] branch asf-site updated (826c7af -> 2ca9684)

2019-10-02 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 826c7af  Rebuild website.
 new 7fc8570  Add Gojek to Powered By page.
 new 2ca9684  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 content/img/poweredby/gojek-logo.png | Bin 0 -> 12795 bytes
 content/index.html   |   6 ++
 content/poweredby.html   |   4 
 img/poweredby/gojek-logo.png | Bin 0 -> 12795 bytes
 index.md |   6 ++
 poweredby.md |   4 
 6 files changed, 20 insertions(+)
 create mode 100644 content/img/poweredby/gojek-logo.png
 create mode 100644 img/poweredby/gojek-logo.png



[flink-web] 01/02: Add Gojek to Powered By page.

2019-10-02 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 7fc857030998ea8ce6366bfec63850e08e24c563
Author: Fabian Hueske 
AuthorDate: Tue Sep 24 17:24:14 2019 +0200

Add Gojek to Powered By page.

This closes #273.
---
 img/poweredby/gojek-logo.png | Bin 0 -> 12795 bytes
 index.md |   6 ++
 poweredby.md |   4 
 3 files changed, 10 insertions(+)

diff --git a/img/poweredby/gojek-logo.png b/img/poweredby/gojek-logo.png
new file mode 100644
index 000..a015bcc
Binary files /dev/null and b/img/poweredby/gojek-logo.png differ
diff --git a/index.md b/index.md
index af2b860..bd3bf2f 100644
--- a/index.md
+++ b/index.md
@@ -211,6 +211,12 @@ layout: base
 
 
   
+
+  
+ 
+
+
+  
 
   
 
diff --git a/poweredby.md b/poweredby.md
index 06efa74..f002f39 100644
--- a/poweredby.md
+++ b/poweredby.md
@@ -61,6 +61,10 @@ If you would you like to be included on this page, please 
reach out to the [Flin
   Ericsson used Flink to build a real-time anomaly detector with machine 
learning over large infrastructures. https://www.oreilly.com/ideas/applying-the-kappa-architecture-in-the-telco-industry;
 target='_blank'> Read a detailed overview on O'Reilly 
Ideas
   
   
+
+  Gojek is a Super App: one app with over 20 services uses Flink to power 
their self-serve platform empowering data-driven decisions across functions. 
https://blog.gojekengineering.com/how-our-diy-platform-creates-value-through-network-effects-76e1e8bad0db;
 target='_blank'> Read more on the Gojek engineering 
blog
+  
+  
 
   Huawei is a leading global provider of ICT infrastructure and smart 
devices. Huawei Cloud provides Cloud Service based on Flink. https://www.slideshare.net/FlinkForward/flink-forward-san-francisco-2018-jinkui-shi-and-radu-tudoran-flink-realtime-analysis-in-cloudstream-service-of-huawei-cloud;
 target='_blank'> Learn about how Flink powers Cloud 
Service
   



[flink-web] 02/02: Rebuild website

2019-10-02 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 2ca968440d94d3947c039c6002e161218bba47af
Author: Fabian Hueske 
AuthorDate: Wed Oct 2 10:48:14 2019 +0200

Rebuild website
---
 content/img/poweredby/gojek-logo.png | Bin 0 -> 12795 bytes
 content/index.html   |   6 ++
 content/poweredby.html   |   4 
 3 files changed, 10 insertions(+)

diff --git a/content/img/poweredby/gojek-logo.png 
b/content/img/poweredby/gojek-logo.png
new file mode 100644
index 000..a015bcc
Binary files /dev/null and b/content/img/poweredby/gojek-logo.png differ
diff --git a/content/index.html b/content/index.html
index b69c18a..f0e112f 100644
--- a/content/index.html
+++ b/content/index.html
@@ -382,6 +382,12 @@
 
 
   
+
+  
+ 
+
+
+  
 
   
 
diff --git a/content/poweredby.html b/content/poweredby.html
index f8ea91d..7cf4c98 100644
--- a/content/poweredby.html
+++ b/content/poweredby.html
@@ -236,6 +236,10 @@
   Ericsson used Flink to build a real-time anomaly detector with machine 
learning over large infrastructures. https://www.oreilly.com/ideas/applying-the-kappa-architecture-in-the-telco-industry;
 target="_blank"> Read a detailed overview on O'Reilly 
Ideas
   
   
+
+  Gojek is a Super App: one app with over 20 services uses Flink to power 
their self-serve platform empowering data-driven decisions across functions. 
https://blog.gojekengineering.com/how-our-diy-platform-creates-value-through-network-effects-76e1e8bad0db;
 target="_blank"> Read more on the Gojek engineering 
blog
+  
+  
 
   Huawei is a leading global provider of ICT infrastructure and smart 
devices. Huawei Cloud provides Cloud Service based on Flink. https://www.slideshare.net/FlinkForward/flink-forward-san-francisco-2018-jinkui-shi-and-radu-tudoran-flink-realtime-analysis-in-cloudstream-service-of-huawei-cloud;
 target="_blank"> Learn about how Flink powers Cloud 
Service
   



[flink-playgrounds] branch release-1.8 updated: [hotfix] Improve instructions to update playgrounds.

2019-09-25 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new e7da54d  [hotfix] Improve instructions to update playgrounds.
e7da54d is described below

commit e7da54d18821106b644b5b66392d9a8633e38980
Author: Fabian Hueske 
AuthorDate: Wed Sep 25 13:59:35 2019 +0200

[hotfix] Improve instructions to update playgrounds.

This closes #5.
---
 README.md   |  2 +-
 howto-update-playgrounds.md | 52 ++---
 2 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/README.md b/README.md
index c9881a3..22ab4d2 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ Each subfolder of this repository contains the docker-compose 
setup of a playgro
 
 Currently, the following playgrounds are available:
 
-* The **Flink Operations Playground** in the (`operations-playground` folder) 
let's you explore and play with Flink's features to manage and operate stream 
processing jobs. You can witness how Flink recovers a job from a failure, 
upgrade and rescale a job, and query job metrics. The playground consists of a 
Flink cluster, a Kafka cluster and an example 
+* The **Flink Operations Playground** in the (`operations-playground` folder) 
lets you explore and play with Flink's features to manage and operate stream 
processing jobs. You can witness how Flink recovers a job from a failure, 
upgrade and rescale a job, and query job metrics. The playground consists of a 
Flink cluster, a Kafka cluster and an example 
 Flink job. The playground is presented in detail in the
 ["Getting Started" 
guide](https://ci.apache.org/projects/flink/flink-docs-release-1.8/getting-started/docker-playgrounds/flink-operations-playground.html)
 of Flink's documentation.
 
diff --git a/howto-update-playgrounds.md b/howto-update-playgrounds.md
index 2885b62..f7d4d2c 100644
--- a/howto-update-playgrounds.md
+++ b/howto-update-playgrounds.md
@@ -1,7 +1,7 @@
 
 # Versioning 
 
-When updating the playgrounds we have to deal with three versions that need to 
be adjusted.
+When updating the playgrounds we have to deal with three versions that might 
need to be adjusted.
 
 Externally defined versions:
 
@@ -14,29 +14,57 @@ Internally defined version:
 
 # Updating the playgrounds
 
+## Update playgrounds due to a new bugfix Flink release
+
+Apache Flink bugfix releases are frequently published. For example, Flink 
1.8.2 is the second bugfix release in the Flink 1.8 release line. Bugfix 
releases are binary compatible with previous releases on the same minor release 
line.
+
+When a new bugfix release is published, we have to wait until the 
corresponding Flink Docker image is published on [Docker 
Hub](https://hub.docker.com/_/flink). Once the Flink Docker image is available, 
update all playgrounds as follows:
+
+1. All `pom.xml`: 
+  * Update the versions of all Flink dependencies 
+  * Update the Maven artifact version to the new playground version 
(`4-Flink-1.9` is the fourth update of the playground for the Flink 1.9 line).
+  * All Maven projects should still build as bugfix releases should be 
compatible with previous versions.
+2. All `Dockerfile`: 
+   * Update the version of the base image to the new Flink Docker image 
version.
+3. `docker-compose.yaml`: 
+   * Update the version of the custom Docker images to the new playground 
version.
+   * Update the version of the Flink containers to the new Flink docker 
image version if necessary.
+
+The `flink-playgrounds` repository has a branch for every minor Flink release 
(for example `release-1.9` for all releases of the Flink 1.9 line). Updates for 
a bugfix release should be pushed to the existing branch of the updated release 
line.
+
 ## Update playgrounds due to a new minor (or major) Flink release
 
-First of all, check that a Flink Docker image was published on [Docker 
Hub](https://hub.docker.com/_/flink) for the new Flink version.
+A major release marks a significant new version of Flink that probably breaks 
a lot of existing code. For example Flink 2.0.0 would be a major release that 
starts the Flink 2.x line. A minor release marks a new version of Apache Flink 
with significant improvements and changes. For example, Flink 1.9.0 is the 
minor release which starts the Flink 1.9 release line. New minor releases aim 
to be compatible but might deprecate, evolve, or remove code of previous 
releases. For updates due to mi [...]
 
-Update all playgrounds as follows:
+When a new minor or major release is published, we have to wait until the 
corresponding Flink Docker image is published on [Docker 
Hub](https://hub.docker.com/_/flink). Once the Flink Docker image is available, 
update all playgrounds as follows:
 
 1. All `pom.xml`: 
   * Update the versions of all Flink de

[flink-playgrounds] branch release-1.9 updated: [hotfix] Improve instructions to update playgrounds.

2019-09-25 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new e5777be  [hotfix] Improve instructions to update playgrounds.
e5777be is described below

commit e5777be3f482b18657ec6f1c4729c6aac4027a08
Author: Fabian Hueske 
AuthorDate: Wed Sep 25 13:59:35 2019 +0200

[hotfix] Improve instructions to update playgrounds.

This closes #5.
---
 README.md   |  2 +-
 howto-update-playgrounds.md | 52 ++---
 2 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/README.md b/README.md
index 2beaf5d..9226d5e 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ Each subfolder of this repository contains the docker-compose 
setup of a playgro
 
 Currently, the following playgrounds are available:
 
-* The **Flink Operations Playground** in the (`operations-playground` folder) 
let's you explore and play with Flink's features to manage and operate stream 
processing jobs. You can witness how Flink recovers a job from a failure, 
upgrade and rescale a job, and query job metrics. The playground consists of a 
Flink cluster, a Kafka cluster and an example 
+* The **Flink Operations Playground** in the (`operations-playground` folder) 
lets you explore and play with Flink's features to manage and operate stream 
processing jobs. You can witness how Flink recovers a job from a failure, 
upgrade and rescale a job, and query job metrics. The playground consists of a 
Flink cluster, a Kafka cluster and an example 
 Flink job. The playground is presented in detail in the
 ["Getting Started" 
guide](https://ci.apache.org/projects/flink/flink-docs-release-1.9/getting-started/docker-playgrounds/flink-operations-playground.html)
 of Flink's documentation.
 
diff --git a/howto-update-playgrounds.md b/howto-update-playgrounds.md
index 2885b62..f7d4d2c 100644
--- a/howto-update-playgrounds.md
+++ b/howto-update-playgrounds.md
@@ -1,7 +1,7 @@
 
 # Versioning 
 
-When updating the playgrounds we have to deal with three versions that need to 
be adjusted.
+When updating the playgrounds we have to deal with three versions that might 
need to be adjusted.
 
 Externally defined versions:
 
@@ -14,29 +14,57 @@ Internally defined version:
 
 # Updating the playgrounds
 
+## Update playgrounds due to a new bugfix Flink release
+
+Apache Flink bugfix releases are frequently published. For example, Flink 
1.8.2 is the second bugfix release in the Flink 1.8 release line. Bugfix 
releases are binary compatible with previous releases on the same minor release 
line.
+
+When a new bugfix release is published, we have to wait until the 
corresponding Flink Docker image is published on [Docker 
Hub](https://hub.docker.com/_/flink). Once the Flink Docker image is available, 
update all playgrounds as follows:
+
+1. All `pom.xml`: 
+  * Update the versions of all Flink dependencies 
+  * Update the Maven artifact version to the new playground version 
(`4-Flink-1.9` is the fourth update of the playground for the Flink 1.9 line).
+  * All Maven projects should still build as bugfix releases should be 
compatible with previous versions.
+2. All `Dockerfile`: 
+   * Update the version of the base image to the new Flink Docker image 
version.
+3. `docker-compose.yaml`: 
+   * Update the version of the custom Docker images to the new playground 
version.
+   * Update the version of the Flink containers to the new Flink docker 
image version if necessary.
+
+The `flink-playgrounds` repository has a branch for every minor Flink release 
(for example `release-1.9` for all releases of the Flink 1.9 line). Updates for 
a bugfix release should be pushed to the existing branch of the updated release 
line.
+
 ## Update playgrounds due to a new minor (or major) Flink release
 
-First of all, check that a Flink Docker image was published on [Docker 
Hub](https://hub.docker.com/_/flink) for the new Flink version.
+A major release marks a significant new version of Flink that probably breaks 
a lot of existing code. For example Flink 2.0.0 would be a major release that 
starts the Flink 2.x line. A minor release marks a new version of Apache Flink 
with significant improvements and changes. For example, Flink 1.9.0 is the 
minor release which starts the Flink 1.9 release line. New minor releases aim 
to be compatible but might deprecate, evolve, or remove code of previous 
releases. For updates due to mi [...]
 
-Update all playgrounds as follows:
+When a new minor or major release is published, we have to wait until the 
corresponding Flink Docker image is published on [Docker 
Hub](https://hub.docker.com/_/flink). Once the Flink Docker image is available, 
update all playgrounds as follows:
 
 1. All `pom.xml`: 
   * Update the versions of all Flink de

[flink-playgrounds] branch master updated: [hotfix] Improve instructions to update playgrounds.

2019-09-25 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git


The following commit(s) were added to refs/heads/master by this push:
 new ac828b1  [hotfix] Improve instructions to update playgrounds.
ac828b1 is described below

commit ac828b13e70d7c4c09670a0014a4f1736bbacdf0
Author: Fabian Hueske 
AuthorDate: Wed Sep 25 13:59:35 2019 +0200

[hotfix] Improve instructions to update playgrounds.

This closes #5.
---
 README.md   |  2 +-
 howto-update-playgrounds.md | 52 ++---
 2 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/README.md b/README.md
index 2beaf5d..9226d5e 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ Each subfolder of this repository contains the docker-compose 
setup of a playgro
 
 Currently, the following playgrounds are available:
 
-* The **Flink Operations Playground** in the (`operations-playground` folder) 
let's you explore and play with Flink's features to manage and operate stream 
processing jobs. You can witness how Flink recovers a job from a failure, 
upgrade and rescale a job, and query job metrics. The playground consists of a 
Flink cluster, a Kafka cluster and an example 
+* The **Flink Operations Playground** in the (`operations-playground` folder) 
lets you explore and play with Flink's features to manage and operate stream 
processing jobs. You can witness how Flink recovers a job from a failure, 
upgrade and rescale a job, and query job metrics. The playground consists of a 
Flink cluster, a Kafka cluster and an example 
 Flink job. The playground is presented in detail in the
 ["Getting Started" 
guide](https://ci.apache.org/projects/flink/flink-docs-release-1.9/getting-started/docker-playgrounds/flink-operations-playground.html)
 of Flink's documentation.
 
diff --git a/howto-update-playgrounds.md b/howto-update-playgrounds.md
index 2885b62..f7d4d2c 100644
--- a/howto-update-playgrounds.md
+++ b/howto-update-playgrounds.md
@@ -1,7 +1,7 @@
 
 # Versioning 
 
-When updating the playgrounds we have to deal with three versions that need to 
be adjusted.
+When updating the playgrounds we have to deal with three versions that might 
need to be adjusted.
 
 Externally defined versions:
 
@@ -14,29 +14,57 @@ Internally defined version:
 
 # Updating the playgrounds
 
+## Update playgrounds due to a new bugfix Flink release
+
+Apache Flink bugfix releases are frequently published. For example, Flink 
1.8.2 is the second bugfix release in the Flink 1.8 release line. Bugfix 
releases are binary compatible with previous releases on the same minor release 
line.
+
+When a new bugfix release is published, we have to wait until the 
corresponding Flink Docker image is published on [Docker 
Hub](https://hub.docker.com/_/flink). Once the Flink Docker image is available, 
update all playgrounds as follows:
+
+1. All `pom.xml`: 
+  * Update the versions of all Flink dependencies 
+  * Update the Maven artifact version to the new playground version 
(`4-Flink-1.9` is the fourth update of the playground for the Flink 1.9 line).
+  * All Maven projects should still build as bugfix releases should be 
compatible with previous versions.
+2. All `Dockerfile`: 
+   * Update the version of the base image to the new Flink Docker image 
version.
+3. `docker-compose.yaml`: 
+   * Update the version of the custom Docker images to the new playground 
version.
+   * Update the version of the Flink containers to the new Flink docker 
image version if necessary.
+
+The `flink-playgrounds` repository has a branch for every minor Flink release 
(for example `release-1.9` for all releases of the Flink 1.9 line). Updates for 
a bugfix release should be pushed to the existing branch of the updated release 
line.
+
 ## Update playgrounds due to a new minor (or major) Flink release
 
-First of all, check that a Flink Docker image was published on [Docker 
Hub](https://hub.docker.com/_/flink) for the new Flink version.
+A major release marks a significant new version of Flink that probably breaks 
a lot of existing code. For example Flink 2.0.0 would be a major release that 
starts the Flink 2.x line. A minor release marks a new version of Apache Flink 
with significant improvements and changes. For example, Flink 1.9.0 is the 
minor release which starts the Flink 1.9 release line. New minor releases aim 
to be compatible but might deprecate, evolve, or remove code of previous 
releases. For updates due to mi [...]
 
-Update all playgrounds as follows:
+When a new minor or major release is published, we have to wait until the 
corresponding Flink Docker image is published on [Docker 
Hub](https://hub.docker.com/_/flink). Once the Flink Docker image is available, 
update all playgrounds as follows:
 
 1. All `pom.xml`: 
   * Update the versions of all Flink dependencies 
-

[flink-web] 01/02: Add Razorpay to Powered By page.

2019-09-24 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 87a034140e97be42616e1a3dbe58e4f7a014e560
Author: Fabian Hueske 
AuthorDate: Tue Sep 24 15:46:32 2019 +0200

Add Razorpay to Powered By page.

This closes #271.
---
 img/poweredby/razorpay-logo.png | Bin 0 -> 22780 bytes
 index.md|   6 ++
 poweredby.md|   4 
 3 files changed, 10 insertions(+)

diff --git a/img/poweredby/razorpay-logo.png b/img/poweredby/razorpay-logo.png
new file mode 100644
index 000..1813a7d
Binary files /dev/null and b/img/poweredby/razorpay-logo.png differ
diff --git a/index.md b/index.md
index 9242e71..af2b860 100644
--- a/index.md
+++ b/index.md
@@ -266,6 +266,12 @@ layout: base
 
 
   
+
+  
+  
+
+
+  
 
   
   
diff --git a/poweredby.md b/poweredby.md
index f4f7eb3..06efa74 100644
--- a/poweredby.md
+++ b/poweredby.md
@@ -102,6 +102,10 @@ If you would like to be included on this page, please reach out to the [Flin
  OVH leverages Flink to develop streaming-oriented applications such as real-time Business Intelligence or alerting systems. <a href='https://www.ovh.com/fr/blog/handling-ovhs-alerts-with-apache-flink/' target='_blank'>Read more about how OVH is using Flink</a>
   
   
+
+  Razorpay, one of India's largest payment gateways, built their in-house platform Mitra with Apache Flink to scale AI feature generation and model serving in real-time. <a href='https://medium.com/razorpay-unfiltered/data-science-at-scale-using-apache-flink-982cb18848b' target='_blank'>Read more about data science with Flink at Razorpay</a>
+  
+  
 
  ResearchGate, a social network for scientists, uses Flink for network analysis and near-duplicate detection. <a href='http://2016.flink-forward.org/kb_sessions/joining-infinity-windowless-stream-processing-with-flink/' target='_blank'>See ResearchGate at Flink Forward 2016</a>
   



[flink-web] branch asf-site updated (b8b507e -> 826c7af)

2019-09-24 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from b8b507e  Rebuild website
 new 87a0341  Add Razorpay to Powered By page.
 new 826c7af  Rebuild website.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 content/img/poweredby/razorpay-logo.png | Bin 0 -> 22780 bytes
 content/index.html  |   6 ++
 content/poweredby.html  |   4 
 img/poweredby/razorpay-logo.png | Bin 0 -> 22780 bytes
 index.md|   6 ++
 poweredby.md|   4 
 6 files changed, 20 insertions(+)
 create mode 100644 content/img/poweredby/razorpay-logo.png
 create mode 100644 img/poweredby/razorpay-logo.png



[flink-web] 02/02: Rebuild website.

2019-09-24 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 826c7af89e3086d1c07074f381f15cb33f0a61c1
Author: Fabian Hueske 
AuthorDate: Tue Sep 24 17:26:22 2019 +0200

Rebuild website.
---
 content/img/poweredby/razorpay-logo.png | Bin 0 -> 22780 bytes
 content/index.html  |   6 ++
 content/poweredby.html  |   4 
 3 files changed, 10 insertions(+)

diff --git a/content/img/poweredby/razorpay-logo.png 
b/content/img/poweredby/razorpay-logo.png
new file mode 100644
index 000..1813a7d
Binary files /dev/null and b/content/img/poweredby/razorpay-logo.png differ
diff --git a/content/index.html b/content/index.html
index f0fc296..b69c18a 100644
--- a/content/index.html
+++ b/content/index.html
@@ -437,6 +437,12 @@
 
 
   
+
+  
+  
+
+
+  
 
   
   
diff --git a/content/poweredby.html b/content/poweredby.html
index acd424a..f8ea91d 100644
--- a/content/poweredby.html
+++ b/content/poweredby.html
@@ -277,6 +277,10 @@
  OVH leverages Flink to develop streaming-oriented applications such as real-time Business Intelligence or alerting systems. <a href="https://www.ovh.com/fr/blog/handling-ovhs-alerts-with-apache-flink/" target="_blank">Read more about how OVH is using Flink</a>
   
   
+
+  Razorpay, one of India's largest payment gateways, built their in-house platform Mitra with Apache Flink to scale AI feature generation and model serving in real-time. <a href="https://medium.com/razorpay-unfiltered/data-science-at-scale-using-apache-flink-982cb18848b" target="_blank">Read more about data science with Flink at Razorpay</a>
+  
+  
 
  ResearchGate, a social network for scientists, uses Flink for network analysis and near-duplicate detection. <a href="http://2016.flink-forward.org/kb_sessions/joining-infinity-windowless-stream-processing-with-flink/" target="_blank">See ResearchGate at Flink Forward 2016</a>
   



[flink] branch release-1.9 updated: [FLINK-14160][docs] Describe --backpressure option for Operations Playground.

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 5997acc  [FLINK-14160][docs] Describe --backpressure option for 
Operations Playground.
5997acc is described below

commit 5997accdf9d5a8c5a934b7888f6826ca9fd1acf8
Author: David Anderson 
AuthorDate: Sat Sep 21 10:51:02 2019 +0200

[FLINK-14160][docs] Describe --backpressure option for Operations 
Playground.

This closes #9739.
---
 .../docker-playgrounds/flink-operations-playground.md   | 17 ++---
 .../flink-operations-playground.zh.md   | 17 ++---
 2 files changed, 28 insertions(+), 6 deletions(-)

diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
index bb720b4..e0cd10d 100644
--- a/docs/getting-started/docker-playgrounds/flink-operations-playground.md
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
@@ -132,7 +132,7 @@ will show you how to interact with the Flink Cluster and 
demonstrate some of Fli
 
 ### Flink WebUI
 
-The most natural starting point to observe your Flink Cluster is the Web UI 
exposed under 
+The most natural starting point to observe your Flink Cluster is the WebUI 
exposed under 
 [http://localhost:8081](http://localhost:8081). If everything went well, 
you'll see that the cluster initially consists of 
 one TaskManager and executes a Job called *Click Event Count*.
 
@@ -798,8 +798,8 @@ TaskManager metrics);
 
 ## Variants
 
-You might have noticed that the *Click Event Count* was always started with 
`--checkpointing` and 
-`--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
+You might have noticed that the *Click Event Count* application was always 
started with `--checkpointing` 
+and `--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
 `docker-compose.yaml`, you can change the behavior of the Job.
 
 * `--checkpointing` enables [checkpoint]({{ site.baseurl 
}}/internals/stream_checkpointing.html), 
@@ -811,3 +811,14 @@ lost.
 Job. When disabled, the Job will assign events to windows based on the 
wall-clock time instead of 
 the timestamp of the `ClickEvent`. Consequently, the number of events per 
window will not be exactly
 one thousand anymore. 
+
+The *Click Event Count* application also has another option, turned off by 
default, that you can 
+enable to explore the behavior of this job under backpressure. You can add 
this option in the 
+command of the *client* container in `docker-compose.yaml`.
+
+* `--backpressure` adds an additional operator into the middle of the job that 
causes severe backpressure 
+during even-numbered minutes (e.g., during 10:12, but not during 10:13). This 
can be observed by 
+inspecting various [network metrics]({{ site.baseurl 
}}/monitoring/metrics.html#default-shuffle-service) 
+such as `outputQueueLength` and `outPoolUsage`, and/or by using the 
+[backpressure monitoring]({{ site.baseurl 
}}/monitoring/back_pressure.html#monitoring-back-pressure) 
+available in the WebUI.
\ No newline at end of file
diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
index b3c4f24..65b0ee1 100644
--- a/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
@@ -132,7 +132,7 @@ will show you how to interact with the Flink Cluster and 
demonstrate some of Fli
 
 ### Flink WebUI
 
-The most natural starting point to observe your Flink Cluster is the Web UI 
exposed under 
+The most natural starting point to observe your Flink Cluster is the WebUI 
exposed under 
 [http://localhost:8081](http://localhost:8081). If everything went well, 
you'll see that the cluster initially consists of 
 one TaskManager and executes a Job called *Click Event Count*.
 
@@ -798,8 +798,8 @@ TaskManager metrics);
 
 ## Variants
 
-You might have noticed that the *Click Event Count* was always started with 
`--checkpointing` and 
-`--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
+You might have noticed that the *Click Event Count* application was always 
started with `--checkpointing` 
+and `--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
 `docker-compose.yaml`, you can change the behavior of the Job.
 
 * `--checkpointing` enables [checkpoint]({{ site.baseurl 
}}/internals/stream_checkpointing.html), 
@@ -811,3 +811,14 @@ lost.
 Job. When disabled, the Job will assign events to windows based on the 
wall-clock time instead

[flink] branch master updated: [FLINK-14160][docs] Describe --backpressure option for Operations Playground.

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new c83c186  [FLINK-14160][docs] Describe --backpressure option for 
Operations Playground.
c83c186 is described below

commit c83c18671bc0056a341877f312ba293ae5811953
Author: David Anderson 
AuthorDate: Sat Sep 21 10:51:02 2019 +0200

[FLINK-14160][docs] Describe --backpressure option for Operations 
Playground.

This closes #9739.
---
 .../docker-playgrounds/flink-operations-playground.md   | 17 ++---
 .../flink-operations-playground.zh.md   | 17 ++---
 2 files changed, 28 insertions(+), 6 deletions(-)

diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
index 38a0848..c9f7675 100644
--- a/docs/getting-started/docker-playgrounds/flink-operations-playground.md
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
@@ -136,7 +136,7 @@ will show you how to interact with the Flink Cluster and 
demonstrate some of Fli
 
 ### Flink WebUI
 
-The most natural starting point to observe your Flink Cluster is the Web UI 
exposed under 
+The most natural starting point to observe your Flink Cluster is the WebUI 
exposed under 
 [http://localhost:8081](http://localhost:8081). If everything went well, 
you'll see that the cluster initially consists of 
 one TaskManager and executes a Job called *Click Event Count*.
 
@@ -802,8 +802,8 @@ TaskManager metrics);
 
 ## Variants
 
-You might have noticed that the *Click Event Count* was always started with 
`--checkpointing` and 
-`--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
+You might have noticed that the *Click Event Count* application was always 
started with `--checkpointing` 
+and `--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
 `docker-compose.yaml`, you can change the behavior of the Job.
 
 * `--checkpointing` enables [checkpoint]({{ site.baseurl 
}}/internals/stream_checkpointing.html), 
@@ -815,3 +815,14 @@ lost.
 Job. When disabled, the Job will assign events to windows based on the 
wall-clock time instead of 
 the timestamp of the `ClickEvent`. Consequently, the number of events per 
window will not be exactly
 one thousand anymore. 
+
+The *Click Event Count* application also has another option, turned off by 
default, that you can 
+enable to explore the behavior of this job under backpressure. You can add 
this option in the 
+command of the *client* container in `docker-compose.yaml`.
+
+* `--backpressure` adds an additional operator into the middle of the job that 
causes severe backpressure 
+during even-numbered minutes (e.g., during 10:12, but not during 10:13). This 
can be observed by 
+inspecting various [network metrics]({{ site.baseurl 
}}/monitoring/metrics.html#default-shuffle-service) 
+such as `outputQueueLength` and `outPoolUsage`, and/or by using the 
+[backpressure monitoring]({{ site.baseurl 
}}/monitoring/back_pressure.html#monitoring-back-pressure) 
+available in the WebUI.
\ No newline at end of file
diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
index 38a0848..c9f7675 100644
--- a/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.zh.md
@@ -136,7 +136,7 @@ will show you how to interact with the Flink Cluster and 
demonstrate some of Fli
 
 ### Flink WebUI
 
-The most natural starting point to observe your Flink Cluster is the Web UI 
exposed under 
+The most natural starting point to observe your Flink Cluster is the WebUI 
exposed under 
 [http://localhost:8081](http://localhost:8081). If everything went well, 
you'll see that the cluster initially consists of 
 one TaskManager and executes a Job called *Click Event Count*.
 
@@ -802,8 +802,8 @@ TaskManager metrics);
 
 ## Variants
 
-You might have noticed that the *Click Event Count* was always started with 
`--checkpointing` and 
-`--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
+You might have noticed that the *Click Event Count* application was always 
started with `--checkpointing` 
+and `--event-time` program arguments. By omitting these in the command of the 
*client* container in the 
 `docker-compose.yaml`, you can change the behavior of the Job.
 
 * `--checkpointing` enables [checkpoint]({{ site.baseurl 
}}/internals/stream_checkpointing.html), 
@@ -815,3 +815,14 @@ lost.
 Job. When disabled, the Job will assign events to windows based on the 
wall-clock time instead

[flink-playgrounds] 01/02: [FLINK-14160] Add --backpressure option to the ClickEventCount job in the operations playground

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git

commit 41acc3b90bbf43e6879f2e3d9cdded0cac980524
Author: David Anderson 
AuthorDate: Thu Sep 19 20:08:58 2019 +0200

[FLINK-14160] Add --backpressure option to the ClickEventCount job in the 
operations playground

This closes #4.
---
 .../java/flink-playground-clickcountjob/pom.xml|  2 +-
 .../ops/clickcount/ClickEventCount.java| 25 ++--
 .../ops/clickcount/functions/BackpressureMap.java  | 46 ++
 operations-playground/docker-compose.yaml  |  4 +-
 4 files changed, 71 insertions(+), 6 deletions(-)

diff --git 
a/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml 
b/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
index 3d17fcd..893c11e 100644
--- a/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
+++ b/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
@@ -22,7 +22,7 @@ under the License.
 
org.apache.flink
flink-playground-clickcountjob
-   1-FLINK-1.9_2.11
+   2-FLINK-1.9_2.11
 
flink-playground-clickcountjob
jar
diff --git 
a/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
 
b/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
index 0316bc6..f3d628c 100644
--- 
a/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
+++ 
b/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
@@ -18,6 +18,7 @@
 package org.apache.flink.playgrounds.ops.clickcount;
 
 import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.playgrounds.ops.clickcount.functions.BackpressureMap;
 import 
org.apache.flink.playgrounds.ops.clickcount.functions.ClickEventStatisticsCollector;
 import 
org.apache.flink.playgrounds.ops.clickcount.functions.CountingAggregator;
 import org.apache.flink.playgrounds.ops.clickcount.records.ClickEvent;
@@ -25,6 +26,7 @@ import 
org.apache.flink.playgrounds.ops.clickcount.records.ClickEventDeserializa
 import 
org.apache.flink.playgrounds.ops.clickcount.records.ClickEventStatistics;
 import 
org.apache.flink.playgrounds.ops.clickcount.records.ClickEventStatisticsSerializationSchema;
 import org.apache.flink.streaming.api.TimeCharacteristic;
+import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import 
org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
 import org.apache.flink.streaming.api.windowing.time.Time;
@@ -47,6 +49,7 @@ import java.util.concurrent.TimeUnit;
  * The Job can be configured via the command line:
  * * "--checkpointing": enables checkpointing
  * * "--event-time": set the StreamTimeCharacteristic to EventTime
+ * * "--backpressure": insert an operator that causes periodic backpressure
  * * "--input-topic": the name of the Kafka Topic to consume {@link 
ClickEvent}s from
  * * "--output-topic": the name of the Kafka Topic to produce {@link 
ClickEventStatistics} to
  * * "--bootstrap.servers": comma-separated list of Kafka brokers
@@ -56,6 +59,7 @@ public class ClickEventCount {
 
public static final String CHECKPOINTING_OPTION = "checkpointing";
public static final String EVENT_TIME_OPTION = "event-time";
+   public static final String BACKPRESSURE_OPTION = "backpressure";
 
public static final Time WINDOW_SIZE = Time.of(15, TimeUnit.SECONDS);
 
@@ -66,6 +70,8 @@ public class ClickEventCount {
 
configureEnvironment(params, env);
 
+   boolean inflictBackpressure = params.has(BACKPRESSURE_OPTION);
+
String inputTopic = params.get("input-topic", "input");
String outputTopic = params.get("output-topic", "output");
String brokers = params.get("bootstrap.servers", 
"localhost:9092");
@@ -73,19 +79,32 @@ public class ClickEventCount {
kafkaProps.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, 
brokers);
kafkaProps.setProperty(ConsumerConfig.GROUP_ID_CONFIG, 
"click-event-count");
 
-   env.addSource(new FlinkKafkaConsumer<>(inputTopic, new 
ClickEventDeserializationSchema(), kafkaProps))
+   DataStream clicks =
+  
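
The diff is truncated at this point. As a hedged sketch, the remaining wiring of the --backpressure flag could look like the following: the BackpressureMap operator from the file listing above is inserted between the Kafka source and the rest of the pipeline only when the flag is set. The control flow here is an assumption; only the names are taken from the listing.

boolean inflictBackpressure = params.has(BACKPRESSURE_OPTION);

DataStream<ClickEvent> clicks =
    env.addSource(new FlinkKafkaConsumer<>(
        inputTopic, new ClickEventDeserializationSchema(), kafkaProps));

if (inflictBackpressure) {
    // Conditionally slow the pipeline down to demonstrate backpressure.
    clicks = clicks.map(new BackpressureMap()).name("Backpressure");
}
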

[flink-playgrounds] 02/02: [hotfix] Improve .gitignore

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git

commit 5b93147d2fc050a5ce9597dcc8c478c1b9ed08c4
Author: Fabian Hueske 
AuthorDate: Mon Sep 23 12:03:20 2019 +0200

[hotfix] Improve .gitignore
---
 .gitignore | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/.gitignore b/.gitignore
index d4e4d76..d04cff5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,3 @@
-*/.idea
-*/target
-*/dependency-reduced-pom.xml
+**/.idea
+**/target
+**/dependency-reduced-pom.xml



[flink-playgrounds] branch release-1.9 updated (b575647 -> 5b93147)

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git.


from b575647  [hotfix] Update URL in ops playground README.md to Flink 1.9 
docs.
 new 41acc3b  [FLINK-14160] Add --backpressure option to the 
ClickEventCount job in the operations playground
 new 5b93147  [hotfix] Improve .gitignore

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .gitignore |  6 +--
 .../java/flink-playground-clickcountjob/pom.xml|  2 +-
 .../ops/clickcount/ClickEventCount.java| 25 ++--
 .../ops/clickcount/functions/BackpressureMap.java  | 46 ++
 operations-playground/docker-compose.yaml  |  4 +-
 5 files changed, 74 insertions(+), 9 deletions(-)
 create mode 100644 
docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/functions/BackpressureMap.java



[flink-playgrounds] 02/02: [hotfix] Improve .gitignore

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git

commit 00db5d0904ca1a023eb9612b12eccd25961f31a9
Author: Fabian Hueske 
AuthorDate: Mon Sep 23 12:03:20 2019 +0200

[hotfix] Improve .gitignore
---
 .gitignore | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/.gitignore b/.gitignore
index d4e4d76..d04cff5 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,3 @@
-*/.idea
-*/target
-*/dependency-reduced-pom.xml
+**/.idea
+**/target
+**/dependency-reduced-pom.xml



[flink-playgrounds] 01/02: [FLINK-14160] Add --backpressure option to the ClickEventCount job in the operations playground

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git

commit 1c7c254fc7827e74db7c3c387348e7ca2219788a
Author: David Anderson 
AuthorDate: Thu Sep 19 20:08:58 2019 +0200

[FLINK-14160] Add --backpressure option to the ClickEventCount job in the 
operations playground

This closes #4.
---
 .../java/flink-playground-clickcountjob/pom.xml|  2 +-
 .../ops/clickcount/ClickEventCount.java| 25 ++--
 .../ops/clickcount/functions/BackpressureMap.java  | 46 ++
 operations-playground/docker-compose.yaml  |  4 +-
 4 files changed, 71 insertions(+), 6 deletions(-)

diff --git 
a/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml 
b/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
index 3d17fcd..893c11e 100644
--- a/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
+++ b/docker/ops-playground-image/java/flink-playground-clickcountjob/pom.xml
@@ -22,7 +22,7 @@ under the License.
 
org.apache.flink
flink-playground-clickcountjob
-   1-FLINK-1.9_2.11
+   2-FLINK-1.9_2.11
 
flink-playground-clickcountjob
jar
diff --git 
a/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
 
b/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
index 0316bc6..f3d628c 100644
--- 
a/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
+++ 
b/docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/ClickEventCount.java
@@ -18,6 +18,7 @@
 package org.apache.flink.playgrounds.ops.clickcount;
 
 import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.playgrounds.ops.clickcount.functions.BackpressureMap;
 import 
org.apache.flink.playgrounds.ops.clickcount.functions.ClickEventStatisticsCollector;
 import 
org.apache.flink.playgrounds.ops.clickcount.functions.CountingAggregator;
 import org.apache.flink.playgrounds.ops.clickcount.records.ClickEvent;
@@ -25,6 +26,7 @@ import 
org.apache.flink.playgrounds.ops.clickcount.records.ClickEventDeserializa
 import 
org.apache.flink.playgrounds.ops.clickcount.records.ClickEventStatistics;
 import 
org.apache.flink.playgrounds.ops.clickcount.records.ClickEventStatisticsSerializationSchema;
 import org.apache.flink.streaming.api.TimeCharacteristic;
+import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import 
org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
 import org.apache.flink.streaming.api.windowing.time.Time;
@@ -47,6 +49,7 @@ import java.util.concurrent.TimeUnit;
  * The Job can be configured via the command line:
  * * "--checkpointing": enables checkpointing
  * * "--event-time": set the StreamTimeCharacteristic to EventTime
+ * * "--backpressure": insert an operator that causes periodic backpressure
  * * "--input-topic": the name of the Kafka Topic to consume {@link 
ClickEvent}s from
  * * "--output-topic": the name of the Kafka Topic to produce {@link 
ClickEventStatistics} to
  * * "--bootstrap.servers": comma-separated list of Kafka brokers
@@ -56,6 +59,7 @@ public class ClickEventCount {
 
public static final String CHECKPOINTING_OPTION = "checkpointing";
public static final String EVENT_TIME_OPTION = "event-time";
+   public static final String BACKPRESSURE_OPTION = "backpressure";
 
public static final Time WINDOW_SIZE = Time.of(15, TimeUnit.SECONDS);
 
@@ -66,6 +70,8 @@ public class ClickEventCount {
 
configureEnvironment(params, env);
 
+   boolean inflictBackpressure = params.has(BACKPRESSURE_OPTION);
+
String inputTopic = params.get("input-topic", "input");
String outputTopic = params.get("output-topic", "output");
String brokers = params.get("bootstrap.servers", 
"localhost:9092");
@@ -73,19 +79,32 @@ public class ClickEventCount {
kafkaProps.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, 
brokers);
kafkaProps.setProperty(ConsumerConfig.GROUP_ID_CONFIG, 
"click-event-count");
 
-   env.addSource(new FlinkKafkaConsumer<>(inputTopic, new 
ClickEventDeserializationSchema(), kafkaProps))
+   DataStream clicks =
+   env.addSource(new 
FlinkKafkaConsumer<>

[flink-playgrounds] branch master updated (5d636ae -> 00db5d0)

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink-playgrounds.git.


from 5d636ae  [hotfix] Update URL in ops playground README.md to Flink 
master docs.
 new 1c7c254  [FLINK-14160] Add --backpressure option to the 
ClickEventCount job in the operations playground
 new 00db5d0  [hotfix] Improve .gitignore

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .gitignore |  6 +--
 .../java/flink-playground-clickcountjob/pom.xml|  2 +-
 .../ops/clickcount/ClickEventCount.java| 25 ++--
 .../ops/clickcount/functions/BackpressureMap.java  | 46 ++
 operations-playground/docker-compose.yaml  |  4 +-
 5 files changed, 74 insertions(+), 9 deletions(-)
 create mode 100644 
docker/ops-playground-image/java/flink-playground-clickcountjob/src/main/java/org/apache/flink/playgrounds/ops/clickcount/functions/BackpressureMap.java



[flink-web] 02/02: Rebuild website

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit b8b507e248b103a1c107bc4e54d6267a4d681bb7
Author: Fabian Hueske 
AuthorDate: Mon Sep 23 11:33:44 2019 +0200

Rebuild website
---
 content/img/poweredby/xiaomi-logo.png | Bin 0 -> 48194 bytes
 content/index.html|   6 ++
 content/poweredby.html|   4 
 content/zh/index.html |   8 +++-
 content/zh/poweredby.html |   4 
 5 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/content/img/poweredby/xiaomi-logo.png 
b/content/img/poweredby/xiaomi-logo.png
new file mode 100644
index 000..004707a
Binary files /dev/null and b/content/img/poweredby/xiaomi-logo.png differ
diff --git a/content/index.html b/content/index.html
index aff2564..f0fc296 100644
--- a/content/index.html
+++ b/content/index.html
@@ -461,6 +461,12 @@
 
 
   
+
+  
+
+
+
+  
 
   
   
diff --git a/content/poweredby.html b/content/poweredby.html
index 81f67eb..acd424a 100644
--- a/content/poweredby.html
+++ b/content/poweredby.html
@@ -297,6 +297,10 @@
  Uber built their internal SQL-based, open-source streaming analytics platform AthenaX on Apache Flink. <a href="https://eng.uber.com/athenax/" target="_blank">Read more on the Uber engineering blog</a>
   
   
+
+  Xiaomi, one of the largest electronics companies in China, built a platform with Flink to improve the efficiency of developing and operating real-time applications and uses it for real-time recommendations. <a href="https://files.alicdn.com/tpsservice/d77d3ed3f2709790f0d84f4ec279a486.pdf" target="_blank">Learn more about how Xiaomi is using Flink.</a>
+  
+  
 
  Yelp utilizes Flink to power its data connectors ecosystem and stream processing infrastructure. <a href="https://ververica.com/flink-forward/resources/powering-yelps-data-pipeline-infrastructure-with-apache-flink" target="_blank">Find out more watching a Flink Forward talk</a>
   
diff --git a/content/zh/index.html b/content/zh/index.html
index 166d33c..d7cfc4a 100644
--- a/content/zh/index.html
+++ b/content/zh/index.html
@@ -440,7 +440,7 @@
   
 
 
-  
+  
 
   
   
@@ -459,6 +459,12 @@
 
 
   
+
+  
+
+
+
+  
 
   
   
diff --git a/content/zh/poweredby.html b/content/zh/poweredby.html
index dd0a42c..a4fc2b4 100644
--- a/content/zh/poweredby.html
+++ b/content/zh/poweredby.html
@@ -295,6 +295,10 @@
   Uber 在 Apache Flink 上构建了基于 SQL 的开源流媒体分析平台 AthenaX。https://eng.uber.com/athenax/; target="_blank"> 更多信息请访问Uber工程博客
   
   
+
+小米,作为中国最大的专注于硬件与软件开发的公司之一,利用 Flink 
构建了一个内部平台,以提高开发运维实时应用程序的效率,并用于实时推荐等场景。https://files.alicdn.com/tpsservice/d77d3ed3f2709790f0d84f4ec279a486.pdf; 
target="_blank"> 详细了解小米如何使用 Flink 的。
+  
+  
 
   Yelp 利用 Flink 为其数据连接器生态系统和流处理基础架构提供支持。https://ververica.com/flink-forward/resources/powering-yelps-data-pipeline-infrastructure-with-apache-flink;
 target="_blank"> 请参阅 Flink Forward 上的演讲
   



[flink-web] branch asf-site updated (caf1c41 -> b8b507e)

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from caf1c41  Rebuild website
 new 2cbd74b  Add Xiaomi to the Powered By page
 new b8b507e  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 content/img/poweredby/xiaomi-logo.png | Bin 0 -> 48194 bytes
 content/index.html|   6 ++
 content/poweredby.html|   4 
 content/zh/index.html |   8 +++-
 content/zh/poweredby.html |   4 
 img/poweredby/xiaomi-logo.png | Bin 0 -> 48194 bytes
 index.md  |   6 ++
 index.zh.md   |   8 +++-
 poweredby.md  |   4 
 poweredby.zh.md   |   4 
 10 files changed, 42 insertions(+), 2 deletions(-)
 create mode 100644 content/img/poweredby/xiaomi-logo.png
 create mode 100644 img/poweredby/xiaomi-logo.png



[flink-web] 01/02: Add Xiaomi to the Powered By page

2019-09-23 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 2cbd74bd8a4048fe48cef8040011da80ad34ce26
Author: Jark Wu 
AuthorDate: Mon Sep 23 16:23:52 2019 +0800

Add Xiaomi to the Powered By page

This closes #270.
---
 img/poweredby/xiaomi-logo.png | Bin 0 -> 48194 bytes
 index.md  |   6 ++
 index.zh.md   |   8 +++-
 poweredby.md  |   4 
 poweredby.zh.md   |   4 
 5 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/img/poweredby/xiaomi-logo.png b/img/poweredby/xiaomi-logo.png
new file mode 100644
index 000..004707a
Binary files /dev/null and b/img/poweredby/xiaomi-logo.png differ
diff --git a/index.md b/index.md
index 73a37ab..9242e71 100644
--- a/index.md
+++ b/index.md
@@ -290,6 +290,12 @@ layout: base
 
 
   
+
+  
+
+
+
+  
 
   
   
diff --git a/index.zh.md b/index.zh.md
index a8a11ce..b798558 100644
--- a/index.zh.md
+++ b/index.zh.md
@@ -271,7 +271,7 @@ layout: base
   
 
 
-  
+  
 
   
   
@@ -290,6 +290,12 @@ layout: base
 
 
   
+
+  
+
+
+
+  
 
   
   
diff --git a/poweredby.md b/poweredby.md
index 7775388..f4f7eb3 100644
--- a/poweredby.md
+++ b/poweredby.md
@@ -122,6 +122,10 @@ If you would like to be included on this page, please reach out to the [Flin
  Uber built their internal SQL-based, open-source streaming analytics platform AthenaX on Apache Flink. <a href='https://eng.uber.com/athenax/' target='_blank'>Read more on the Uber engineering blog</a>
   
   
+
+  Xiaomi, one of the largest electronics companies in China, built a platform with Flink to improve the efficiency of developing and operating real-time applications and uses it for real-time recommendations. <a href='https://files.alicdn.com/tpsservice/d77d3ed3f2709790f0d84f4ec279a486.pdf' target='_blank'>Learn more about how Xiaomi is using Flink.</a>
+  
+  
 
  Yelp utilizes Flink to power its data connectors ecosystem and stream processing infrastructure. <a href='https://ververica.com/flink-forward/resources/powering-yelps-data-pipeline-infrastructure-with-apache-flink' target='_blank'>Find out more watching a Flink Forward talk</a>
   
diff --git a/poweredby.zh.md b/poweredby.zh.md
index d8b1e65..52b2f40 100644
--- a/poweredby.zh.md
+++ b/poweredby.zh.md
@@ -122,6 +122,10 @@ Apache Flink 为全球许多公司和企业的关键业务提供支持。在这
   Uber 在 Apache Flink 上构建了基于 SQL 的开源流媒体分析平台 AthenaX。https://eng.uber.com/athenax/; target='_blank'> 更多信息请访问Uber工程博客
   
   
+
+小米,作为中国最大的专注于硬件与软件开发的公司之一,利用 Flink 
构建了一个内部平台,以提高开发运维实时应用程序的效率,并用于实时推荐等场景。https://files.alicdn.com/tpsservice/d77d3ed3f2709790f0d84f4ec279a486.pdf; 
target='_blank'> 详细了解小米如何使用 Flink 的。
+  
+  
 
   Yelp 利用 Flink 为其数据连接器生态系统和流处理基础架构提供支持。https://ververica.com/flink-forward/resources/powering-yelps-data-pipeline-infrastructure-with-apache-flink;
 target='_blank'> 请参阅 Flink Forward 上的演讲
   



[flink] 01/02: [FLINK-12746][docs] Add DataStream API Walkthrough

2019-09-18 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit df8f9a586143bbd719b6e9f03592e02e45629a9a
Author: Seth Wiesman 
AuthorDate: Mon Jul 22 16:01:44 2019 -0500

[FLINK-12746][docs] Add DataStream API Walkthrough

This closes #9201.
---
 docs/fig/fraud-transactions.svg|  71 ++
 .../getting-started/walkthroughs/datastream_api.md | 925 +
 .../walkthroughs/datastream_api.zh.md  | 925 +
 docs/getting-started/walkthroughs/table_api.md |   2 +-
 docs/getting-started/walkthroughs/table_api.zh.md  |   2 +-
 flink-end-to-end-tests/run-nightly-tests.sh|   2 +
 flink-end-to-end-tests/test-scripts/common.sh  |  12 +
 flink-end-to-end-tests/test-scripts/test_cli.sh|  11 -
 ...throughs.sh => test_datastream_walkthroughs.sh} |  35 +-
 .../test-scripts/test_table_walkthroughs.sh|   1 +
 .../flink/walkthrough/common/entity/Alert.java |  61 ++
 .../flink/walkthrough/common/sink/AlertSink.java   |  43 +
 .../flink-walkthrough-datastream-java/pom.xml  |  37 +
 .../META-INF/maven/archetype-metadata.xml  |  36 +
 .../src/main/resources/archetype-resources/pom.xml | 225 +
 .../src/main/java/FraudDetectionJob.java   |  50 ++
 .../src/main/java/FraudDetector.java   |  48 ++
 .../src/main/resources/log4j.properties|  24 +
 .../flink-walkthrough-datastream-scala/pom.xml |  37 +
 .../META-INF/maven/archetype-metadata.xml  |  36 +
 .../src/main/resources/archetype-resources/pom.xml | 256 ++
 .../src/main/resources/log4j.properties|  24 +
 .../src/main/scala/FraudDetectionJob.scala |  51 ++
 .../src/main/scala/FraudDetector.scala |  49 ++
 flink-walkthroughs/pom.xml |   2 +
 25 files changed, 2945 insertions(+), 20 deletions(-)

diff --git a/docs/fig/fraud-transactions.svg b/docs/fig/fraud-transactions.svg
new file mode 100644
index 000..f8e59d9
--- /dev/null
+++ b/docs/fig/fraud-transactions.svg
@@ -0,0 +1,71 @@
+
+
+
+
+<svg xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg" width="100%" height="100%">
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Apache Flink offers a DataStream API for building robust, stateful streaming 
applications.
+It provides fine-grained control over state and time, which allows for the 
implementation of advanced event-driven systems.
+In this step-by-step guide you'll learn how to build a stateful streaming 
application with Flink's DataStream API.
+
+* This will be replaced by the TOC
+{:toc}
+
+## What Are You Building? 
+
+Credit card fraud is a growing concern in the digital age.
+Criminals steal credit card numbers by running scams or hacking into insecure 
systems.
+Stolen numbers are tested by making one or more small purchases, often for a 
dollar or less.
+If that works, they then make more significant purchases to get items they can 
sell or keep for themselves.
+
+In this tutorial, you will build a fraud detection system for alerting on 
suspicious credit card transactions.
+Using a simple set of rules, you will see how Flink allows us to implement 
advanced business logic and act in real-time.
+
+## Prerequisites
+
+This walkthrough assumes that you have some familiarity with Java or Scala, 
but you should be able to follow along even if you are coming from a different 
programming language.
+
+## Help, I’m Stuck! 
+
+If you get stuck, check out the [community support 
resources](https://flink.apache.org/gettinghelp.html).
+In particular, Apache Flink's [user mailing 
list](https://flink.apache.org/community.html#mailing-lists) is consistently 
ranked as one of the most active of any Apache project and a great way to get 
help quickly.
+
+## How to Follow Along
+
+If you want to follow along, you will require a computer with:
+
+* Java 8 
+* Maven 
+
+A provided Flink Maven Archetype will create a skeleton project with all the 
necessary dependencies quickly, so you only need to focus on filling out the 
business logic.
+These dependencies include `flink-streaming-java` which is the core dependency 
for all Flink streaming applications and `flink-walkthrough-common` that has 
data generators and other classes specific to this walkthrough.

[flink] branch master updated (e2728c0 -> ee0d6fd)

2019-09-18 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from e2728c0  [FLINK-14067] Remove unused 
PlanExecutor.getOptimizerPlanAsJSON()
 new df8f9a5  [FLINK-12746][docs] Add DataStream API Walkthrough
 new ee0d6fd  [FLINK-12746] Add DataStream API Walkthrough

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/dev/projectsetup/java_api_quickstart.md   |   2 +-
 docs/dev/projectsetup/java_api_quickstart.zh.md|   2 +-
 docs/dev/projectsetup/scala_api_quickstart.md  |   2 +-
 docs/dev/projectsetup/scala_api_quickstart.zh.md   |   2 +-
 docs/fig/fraud-transactions.svg|  71 ++
 docs/getting-started/docker-playgrounds/index.md   |   2 +-
 .../getting-started/docker-playgrounds/index.zh.md |   2 +-
 docs/getting-started/examples/index.md |   2 +-
 docs/getting-started/examples/index.zh.md  |   2 +-
 docs/getting-started/index.md  |   5 +-
 docs/getting-started/tutorials/datastream_api.md   | 430 --
 .../getting-started/tutorials/datastream_api.zh.md | 430 --
 docs/getting-started/tutorials/index.md|   2 +-
 docs/getting-started/tutorials/index.zh.md |   2 +-
 .../getting-started/walkthroughs/datastream_api.md | 925 +
 .../walkthroughs/datastream_api.zh.md  | 925 +
 docs/getting-started/walkthroughs/index.md |   2 +-
 docs/getting-started/walkthroughs/index.zh.md  |   2 +-
 docs/getting-started/walkthroughs/table_api.md |   2 +-
 docs/getting-started/walkthroughs/table_api.zh.md  |   2 +-
 docs/index.md  |  26 +-
 docs/redirects/example_quickstart.md   |   2 +-
 docs/redirects/tutorials_datastream_api.md |   2 +-
 flink-end-to-end-tests/run-nightly-tests.sh|   2 +
 flink-end-to-end-tests/test-scripts/common.sh  |  12 +
 flink-end-to-end-tests/test-scripts/test_cli.sh|  11 -
 ...throughs.sh => test_datastream_walkthroughs.sh} |  35 +-
 .../test-scripts/test_table_walkthroughs.sh|   1 +
 .../flink/walkthrough/common/entity/Alert.java |  61 ++
 .../flink/walkthrough/common/sink/AlertSink.java   |  43 +
 .../flink-walkthrough-datastream-java/pom.xml  |  26 +-
 .../META-INF/maven/archetype-metadata.xml  |  25 +-
 .../src/main/resources/archetype-resources/pom.xml | 225 +
 .../src/main/java/FraudDetectionJob.java   |  50 ++
 .../src/main/java/FraudDetector.java   |  48 ++
 .../src/main/resources/log4j.properties|  24 +
 .../flink-walkthrough-datastream-scala/pom.xml |  26 +-
 .../META-INF/maven/archetype-metadata.xml  |  25 +-
 .../src/main/resources/archetype-resources/pom.xml | 256 ++
 .../src/main/resources/log4j.properties|  24 +
 .../src/main/scala/FraudDetectionJob.scala |  51 ++
 .../src/main/scala/FraudDetector.scala |  49 ++
 flink-walkthroughs/pom.xml |   2 +
 43 files changed, 2908 insertions(+), 932 deletions(-)
 create mode 100644 docs/fig/fraud-transactions.svg
 delete mode 100644 docs/getting-started/tutorials/datastream_api.md
 delete mode 100644 docs/getting-started/tutorials/datastream_api.zh.md
 create mode 100644 docs/getting-started/walkthroughs/datastream_api.md
 create mode 100644 docs/getting-started/walkthroughs/datastream_api.zh.md
 copy flink-end-to-end-tests/test-scripts/{test_table_walkthroughs.sh => 
test_datastream_walkthroughs.sh} (75%)
 create mode 100644 
flink-walkthroughs/flink-walkthrough-common/src/main/java/org/apache/flink/walkthrough/common/entity/Alert.java
 create mode 100644 
flink-walkthroughs/flink-walkthrough-common/src/main/java/org/apache/flink/walkthrough/common/sink/AlertSink.java
 copy docs/getting-started/docker-playgrounds/index.md => 
flink-walkthroughs/flink-walkthrough-datastream-java/pom.xml (54%)
 copy docs/getting-started/docker-playgrounds/index.md => 
flink-walkthroughs/flink-walkthrough-datastream-java/src/main/resources/META-INF/maven/archetype-metadata.xml
 (52%)
 create mode 100644 
flink-walkthroughs/flink-walkthrough-datastream-java/src/main/resources/archetype-resources/pom.xml
 create mode 100644 
flink-walkthroughs/flink-walkthrough-datastream-java/src/main/resources/archetype-resources/src/main/java/FraudDetectionJob.java
 create mode 100644 
flink-walkthroughs/flink-walkthrough-datastream-java/src/main/resources/archetype-resources/src/main/java/FraudDetector.java
 create mode 100644 
flink-walkthroughs/flink-walkthrough-datastream-java/src/main/resources/archetype-resources/src/main/resources/log4j.properties
 

[flink] 02/02: [FLINK-12746] Add DataStream API Walkthrough

2019-09-18 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ee0d6fdf0604d74bd1cf9a6eb9cf5338ac1aa4f9
Author: Fabian Hueske 
AuthorDate: Tue Sep 17 17:49:51 2019 +0200

[FLINK-12746] Add DataStream API Walkthrough

* Remove old DataStream tutorial
* Update links to new API walkthrough
* Update order of menu entries in "Getting Started" section
* Update index pages to reflect updated "Getting Started" section.
---
 docs/dev/projectsetup/java_api_quickstart.md   |   2 +-
 docs/dev/projectsetup/java_api_quickstart.zh.md|   2 +-
 docs/dev/projectsetup/scala_api_quickstart.md  |   2 +-
 docs/dev/projectsetup/scala_api_quickstart.zh.md   |   2 +-
 docs/getting-started/docker-playgrounds/index.md   |   2 +-
 .../getting-started/docker-playgrounds/index.zh.md |   2 +-
 docs/getting-started/examples/index.md |   2 +-
 docs/getting-started/examples/index.zh.md  |   2 +-
 docs/getting-started/index.md  |   5 +-
 docs/getting-started/tutorials/datastream_api.md   | 430 -
 .../getting-started/tutorials/datastream_api.zh.md | 430 -
 docs/getting-started/tutorials/index.md|   2 +-
 docs/getting-started/tutorials/index.zh.md |   2 +-
 docs/getting-started/walkthroughs/index.md |   2 +-
 docs/getting-started/walkthroughs/index.zh.md  |   2 +-
 docs/index.md  |  26 +-
 docs/redirects/example_quickstart.md   |   2 +-
 docs/redirects/tutorials_datastream_api.md |   2 +-
 18 files changed, 35 insertions(+), 884 deletions(-)

diff --git a/docs/dev/projectsetup/java_api_quickstart.md 
b/docs/dev/projectsetup/java_api_quickstart.md
index 2b27fa0..a5b0bc4 100644
--- a/docs/dev/projectsetup/java_api_quickstart.md
+++ b/docs/dev/projectsetup/java_api_quickstart.md
@@ -336,7 +336,7 @@ can run the application from the JAR file without 
additionally specifying the ma
 Write your application!
 
 If you are writing a streaming application and you are looking for inspiration 
what to write,
-take a look at the [Stream Processing Application Tutorial]({{ site.baseurl 
}}/getting-started/tutorials/datastream_api.html#writing-a-flink-program).
+take a look at the [Stream Processing Application Tutorial]({{ site.baseurl 
}}/getting-started/walkthroughs/datastream_api.html).
 
 If you are writing a batch processing application and you are looking for 
inspiration what to write,
 take a look at the [Batch Application Examples]({{ site.baseurl 
}}/dev/batch/examples.html).
diff --git a/docs/dev/projectsetup/java_api_quickstart.zh.md 
b/docs/dev/projectsetup/java_api_quickstart.zh.md
index 653fab4..4a89491 100644
--- a/docs/dev/projectsetup/java_api_quickstart.zh.md
+++ b/docs/dev/projectsetup/java_api_quickstart.zh.md
@@ -323,7 +323,7 @@ __注意:__ 如果你使用其他类而不是 *StreamingJob* 作为应用程
 开始编写应用!
 
 如果你准备编写流处理应用,正在寻找灵感来写什么,
-可以看看[流处理应用程序教程]({{ site.baseurl 
}}/zh/getting-started/tutorials/datastream_api.html#writing-a-flink-program)
+可以看看[流处理应用程序教程]({{ site.baseurl 
}}/zh/getting-started/walkthroughs/datastream_api.html)
 
 如果你准备编写批处理应用,正在寻找灵感来写什么,
 可以看看[批处理应用程序示例]({{ site.baseurl }}/zh/dev/batch/examples.html)
diff --git a/docs/dev/projectsetup/scala_api_quickstart.md 
b/docs/dev/projectsetup/scala_api_quickstart.md
index a9de50a..b03518a 100644
--- a/docs/dev/projectsetup/scala_api_quickstart.md
+++ b/docs/dev/projectsetup/scala_api_quickstart.md
@@ -212,7 +212,7 @@ can run the application from the JAR file without additionally specifying the m
 Write your application!
 
 If you are writing a streaming application and you are looking for inspiration 
what to write,
-take a look at the [Stream Processing Application Tutorial]({{ site.baseurl 
}}/getting-started/tutorials/datastream_api.html#writing-a-flink-program)
+take a look at the [Stream Processing Application Tutorial]({{ site.baseurl 
}}/getting-started/walkthroughs/datastream_api.html)
 
 If you are writing a batch processing application and you are looking for 
inspiration what to write,
 take a look at the [Batch Application Examples]({{ site.baseurl 
}}/dev/batch/examples.html)
diff --git a/docs/dev/projectsetup/scala_api_quickstart.zh.md 
b/docs/dev/projectsetup/scala_api_quickstart.zh.md
index 187f295..888682d 100644
--- a/docs/dev/projectsetup/scala_api_quickstart.zh.md
+++ b/docs/dev/projectsetup/scala_api_quickstart.zh.md
@@ -204,7 +204,7 @@ __注意:__ 如果你使用其他类而不是 *StreamingJob* 作为应用程序
 开始编写你的应用!
 
 如果你准备编写流处理应用,正在寻找灵感来写什么,
-可以看看[流处理应用程序教程]({{ site.baseurl 
}}/zh/getting-started/tutorials/datastream_api.html#writing-a-flink-program)
+可以看看[流处理应用程序教程]({{ site.baseurl 
}}/zh/getting-started/walkthroughs/datastream_api.html)
 
 如果你准备编写批处理应用,正在寻找灵感来写什么,
 可以看看[批处理应用程序示例]({{ site.baseurl }}/zh/dev/batch/examples.html)
diff --git a/docs/getting-started/docker-

[flink-web] 01/03: Rebuild website

2019-09-13 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 36c49b45c70261cdd53752f625641d1f540ca1f0
Author: Fabian Hueske 
AuthorDate: Fri Sep 13 09:48:30 2019 +0200

Rebuild website
---
 content/blog/feed.xml | 103 +++---
 content/downloads.html|   2 +-
 content/zh/downloads.html |   2 +-
 3 files changed, 99 insertions(+), 8 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index a3f9991..2113d6f 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,97 @@
 <atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 
+<title>Apache Flink 1.8.2 Released</title>
+<p>The Apache Flink community released the second bugfix version of the Apache Flink 1.8 series.</p>
+
+<p>This release includes 23 fixes and minor improvements for Flink 1.8.1. The list below details all fixes and improvements.</p>
+
+<p>We highly recommend that all users upgrade to Flink 1.8.2.</p>
+
+<p>Updated Maven dependencies:</p>
+
+<div class="highlight"><pre><code class="language-xml"><dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-java</artifactId>
+  <version>1.8.2</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java_2.11</artifactId>
+  <version>1.8.2</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients_2.11</artifactId>
+  <version>1.8.2</version>
+</dependency></code></pre></div>
+
+<p>You can find the binaries on the updated <a href="/downloads.html">Downloads page</a>.</p>
+
+<p>List of resolved issues:</p>
+
+<h2>Bug</h2>
+<ul>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13941">FLINK-13941</a>] - Prevent data-loss by not cleaning up small part files from S3.</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-9526">FLINK-9526</a>] - BucketingSink end-to-end test failed on Travis</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-10368">FLINK-10368</a>] - 'Kerberized YARN on Docker test' unstable</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-12319">FLINK-12319</a>] - StackOverFlowError in cep.nfa.sharedbuffer.SharedBuffer</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-12736">FLINK-12736</a>] - ResourceManager may release TM with allocated slots</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-12889">FLINK-12889</a>] - Job keeps in FAILING state</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13059">FLINK-13059</a>] - Cassandra Connector leaks Semaphore on Exception; hangs on close</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13159">FLINK-13159</a>] - java.lang.ClassNotFoundException when restore job</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13367">FLINK-13367</a>] - Make ClosureCleaner detect writeReplace serialization override</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13369">FLINK-13369</a>] - Recursive closure cleaner ends up with stackOverflow in case of circular dependency</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13394">FLINK-13394</a>] - Use fallback unsafe secure MapR in nightly.sh</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13484">FLINK-13484</a>] - ConnectedComponents end-to-end test instable with NoResourceAvailableException</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13499">FLINK-13499</a>] - Remove dependency on MapR artifact repository</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13508">FLINK-13508</a>] - CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13586">FLINK-13586</a>] - Method ClosureCleaner.clean broke backward compatibility between 1.8.0 and 1.8.1</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13761">FLINK-13761</a>] - `SplitStream` should be deprecated because `SplitJavaStream` is deprecated</li>
+<li>[<a href="https://issues.apache.org/jira/browse/FLINK-13789">FLINK-13789</a>] - Transactional Id Generation fails due to user code impacting formatting string</li>
+<li>[<a href="https://issues

[flink-web] 02/03: [blog] State Processor API

2019-09-13 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 21d447a8ef07e0285c6105e0eb07460dcf8a65f1
Author: Seth Wiesman 
AuthorDate: Thu Sep 12 17:14:06 2019 -0500

[blog] State Processor API

This closes #264.
---
 _posts/2019-09-13-state-processor-api.md   |  64 +
 .../application-my-app-state-processor-api.png | Bin 0 -> 49938 bytes
 .../database-my-app-state-processor-api.png| Bin 0 -> 50174 bytes
 3 files changed, 64 insertions(+)

diff --git a/_posts/2019-09-13-state-processor-api.md 
b/_posts/2019-09-13-state-processor-api.md
new file mode 100644
index 000..717f1b4
--- /dev/null
+++ b/_posts/2019-09-13-state-processor-api.md
@@ -0,0 +1,64 @@
+---
+layout: post
+title: "The State Processor API: How to Read, Write, and Modify the State of 
Flink Applications"
+date: 2019-09-13T12:00:00.000Z
+category: feature
+authors:
+- Seth:
+  name: "Seth Wiesman"
+  twitter: "sjwiesman"
+
+- Fabian:
+  name: "Fabian Hueske"
+  twitter: "fhueske"
+
+excerpt: This post explores the State Processor API, introduced with Flink 
1.9.0: why this feature is a big step for Flink, what you can use it for, and 
how to use it. It also looks at some future directions that align the feature 
with Apache Flink's evolution into a system for unified batch and stream processing.
+---
+
+Whether you are running Apache FlinkⓇ in production or have evaluated 
Flink as a computation framework in the past, you've probably found yourself 
asking the question: How can I access, write or update state in a Flink 
savepoint? Ask no more! [Apache Flink 
1.9.0](https://flink.apache.org/news/2019/08/22/release-1.9.0.html) introduces 
the [State Processor 
API](https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_processor_api.html),
 a powerful extension of the  [...]
+ 
+In this post, we explain why this feature is a big step for Flink, what you 
can use it for, and how to use it. Finally, we will discuss the future of the 
State Processor API and how it aligns with our plans to evolve Flink into a 
system for [unified batch and stream 
processing](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html).
+
+## Stateful Stream Processing with Apache Flink until Flink 1.9
+
+All non-trivial stream processing applications are stateful and most of them 
are designed to run for months or years. Over time, many of them accumulate a 
lot of valuable state that can be very expensive or even impossible to rebuild 
if it gets lost due to a failure. In order to guarantee the consistency and 
durability of application state, Flink featured a sophisticated checkpointing 
and recovery mechanism from very early on. With every release, the Flink 
community has added more and mo [...]
+
+However, a feature that was commonly requested by Flink users was the ability 
to access the state of an application “from the outside”. This request was 
motivated by the need to validate or debug the state of an application, to 
migrate the state of an application to another application, to evolve an 
application from the Heap State Backend to the RocksDB State Backend, or to 
import the initial state of an application from an external system like a 
relational database.
+
+Despite all those convincing reasons to expose application state externally, 
your access options have been fairly limited until now. Flink's Queryable State 
feature only supports key-lookups (point queries) and does not guarantee the 
consistency of returned values (the value of a key might be different before 
and after an application recovered from a failure). Moreover, queryable state 
cannot be used to add or modify the state of an application. Also, savepoints, 
which are consistent sna [...]
+
+## Reading and Writing Application State with the State Processor API
+
+The State Processor API that comes with Flink 1.9 is a true game-changer in 
how you can work with application state! In a nutshell, it extends the DataSet 
API with Input and OutputFormats to read and write savepoint or checkpoint 
data. Due to the [interoperability of DataSet and Table 
API](https://ci.apache.org/projects/flink/flink-docs-master/dev/table/common.html#integration-with-datastream-and-dataset-api),
 you can even use relational Table API or SQL queries to analyze and process st 
[...]
+
+For example, you can take a savepoint of a running stream processing 
application and analyze it with a DataSet batch program to verify that the 
application behaves correctly. Or you can read a batch of data from any store, 
preprocess it, and write the result to a savepoint that you use to bootstrap 
the state of a streaming application. It's also possible to fix inconsistent 
state entries now. Finally, the State Processor API opens up many ways to 
evolve a stateful
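
As a rough sketch of the read path described above (reading previously written 
state back as a DataSet), the following editorial illustration uses the Flink 
1.9 State Processor API; it is not part of the commit, and the savepoint path, 
operator uid, and state name are placeholders:

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.runtime.state.memory.MemoryStateBackend;
    import org.apache.flink.state.api.ExistingSavepoint;
    import org.apache.flink.state.api.Savepoint;

    public class ReadSavepointSketch {

        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Load an existing savepoint; the state backend should match the one
            // used by the job that wrote it (the path is a placeholder).
            ExistingSavepoint savepoint = Savepoint.load(
                env, "hdfs:///savepoints/my-app", new MemoryStateBackend());

            // Read operator (list) state registered under an operator uid and a
            // state name (both placeholders).
            DataSet<Long> counts = savepoint.readListState(
                "my-operator-uid", "count-state", Types.LONG);

            counts.print(); // print() triggers execution of the batch program
        }
    }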

[flink-web] 03/03: Rebuild website

2019-09-13 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 3003e0ec109f5ba0a6670d5dd0e107265b1c7638
Author: Fabian Hueske 
AuthorDate: Fri Sep 13 14:17:54 2019 +0200

Rebuild website
---
 content/blog/feed.xml  |  55 
 content/blog/index.html|  36 ++-
 content/blog/page2/index.html  |  38 +--
 content/blog/page3/index.html  |  40 +--
 content/blog/page4/index.html  |  40 +--
 content/blog/page5/index.html  |  40 +--
 content/blog/page6/index.html  |  40 +--
 content/blog/page7/index.html  |  39 ++-
 content/blog/page8/index.html  |  42 +--
 content/blog/page9/index.html  |  28 ++
 .../feature/2019/09/13/state-processor-api.html| 282 +
 .../application-my-app-state-processor-api.png | Bin 0 -> 49938 bytes
 .../database-my-app-state-processor-api.png| Bin 0 -> 50174 bytes
 content/index.html |   8 +-
 content/zh/index.html  |   8 +-
 15 files changed, 566 insertions(+), 130 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 2113d6f..1410119 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,61 @@
 <atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" 
type="application/rss+xml" />
 
 
+<title>The State Processor API: How to Read, Write, and Modify the State of 
Flink Applications</title>
+<p>Whether you are running Apache Flink<sup>Ⓡ</sup> in production or have 
evaluated Flink as a computation framework in the past, you’ve probably found 
yourself asking the question: How can I access, write or update state in a 
Flink savepoint? Ask no more! <a 
href="https://flink.apache.org/news/2019/08/22/release-1.9.0.html">Apache 
Flink 1.9.0</a> introduces the <a 
href="https://ci.apache.org/projects/flink/flink-docs-release-1.9/de [...]
+
+<p>In this post, we explain why this feature is a big step for Flink, what you 
can use it for, and how to use it. Finally, we will discuss the future of the 
State Processor API and how it aligns with our plans to evolve Flink into a 
system for <a 
href="https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html">unified 
batch and stream processing</a>.</p>
+
+<h2 id="stateful-stream-processing-with-apache-flink-until-flink-19">Stateful 
Stream Processing with Apache Flink until Flink 1.9</h2>
+
+<p>All non-trivial stream processing applications are stateful and most of 
them are designed to run for months or years. Over time, many of them 
accumulate a lot of valuable state that can be very expensive or even 
impossible to rebuild if it gets lost due to a failure. In order to guarantee 
the consistency and durability of application state, Flink featured a 
sophisticated checkpointing and recovery mechanism from very early on. With 
every release, the Flink community has added mo [...]
+
+<p>However, a feature that was commonly requested by Flink users was the 
ability to access the state of an application “from the outside”. This request 
was motivated by the need to validate or debug the state of an application, to 
migrate the state of an application to another application, to evolve an 
application from the Heap State Backend to the RocksDB State Backend, or to 
import the initial state of an application from an external system like a 
relational database.</p>
+
+<p>Despite all those convincing reasons to expose application state 
externally, your access options have been fairly limited until now. Flink’s 
Queryable State feature only supports key-lookups (point queries) and does not 
guarantee the consistency of returned values (the value of a key might be 
different before and after an application recovered from a failure). Moreover, 
queryable state cannot be used to add or modify the state of an application. 
Also, savepoints, which are consi [...]
+
+<h2 id="reading-and-writing-application-state-with-the-state-processor-api">Reading 
and Writing Application State with the State Processor API</h2>
+
+<p>The State Processor API that comes with Flink 1.9 is a true game-changer in 
how you can work with application state! In a nutshell, it extends the DataSet 
API with Input and OutputFormats to read and write savepoint or checkpoint 
data. Due to the <a 
href="https://ci.apache.org/projects/flink/flink-docs-master/dev/table/common.html#integration-with-datastream-and-dataset-api">interoperability 
of DataSet and Table API</a>, you can even use relational Table AP [...]
+
+<p>For example, you can take a savepoint of a running stream processing 
application and analyze it with a DataSet batch program to verify that the 
application behaves correctly. Or you can read a batch of data from any store, 
prepro

[flink-web] branch asf-site updated (c048fa4 -> 3003e0e)

2019-09-13 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from c048fa4  Update release 1.8.2 blog
 new 36c49b4  Rebuild website
 new 21d447a  [blog] State Processor API
 new 3003e0e  Rebuild website

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _posts/2019-09-13-state-processor-api.md   |  64 +
 content/blog/feed.xml  | 158 +++-
 content/blog/index.html|  36 ++-
 content/blog/page2/index.html  |  38 +--
 content/blog/page3/index.html  |  40 +--
 content/blog/page4/index.html  |  40 +--
 content/blog/page5/index.html  |  40 +--
 content/blog/page6/index.html  |  40 +--
 content/blog/page7/index.html  |  39 ++-
 content/blog/page8/index.html  |  42 +--
 content/blog/page9/index.html  |  28 ++
 content/downloads.html |   2 +-
 .../feature/2019/09/13/state-processor-api.html| 282 +
 .../application-my-app-state-processor-api.png | Bin 0 -> 49938 bytes
 .../database-my-app-state-processor-api.png| Bin 0 -> 50174 bytes
 content/index.html |   8 +-
 content/zh/downloads.html  |   2 +-
 content/zh/index.html  |   8 +-
 .../application-my-app-state-processor-api.png | Bin 0 -> 49938 bytes
 .../database-my-app-state-processor-api.png| Bin 0 -> 50174 bytes
 20 files changed, 729 insertions(+), 138 deletions(-)
 create mode 100644 _posts/2019-09-13-state-processor-api.md
 create mode 100644 content/feature/2019/09/13/state-processor-api.html
 create mode 100644 
content/img/blog/2019-09-13-state-processor-api-blog/application-my-app-state-processor-api.png
 create mode 100644 
content/img/blog/2019-09-13-state-processor-api-blog/database-my-app-state-processor-api.png
 create mode 100644 
img/blog/2019-09-13-state-processor-api-blog/application-my-app-state-processor-api.png
 create mode 100644 
img/blog/2019-09-13-state-processor-api-blog/database-my-app-state-processor-api.png



[flink-web] 02/02: Rebuild website

2019-09-11 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 510b821d205a2d2d706d2dabff6f6319d5355e13
Author: Fabian Hueske 
AuthorDate: Wed Sep 11 11:48:24 2019 +0200

Rebuild website
---
 content/blog/feed.xml  | 208 ++
 content/blog/index.html|  42 +-
 content/blog/page2/index.html  |  46 ++-
 content/blog/page3/index.html  |  42 +-
 content/blog/page4/index.html  |  40 +-
 content/blog/page5/index.html  |  40 +-
 content/blog/page6/index.html  |  40 +-
 content/blog/page7/index.html  |  40 +-
 content/blog/page8/index.html  |  40 +-
 content/blog/page9/index.html  |  25 ++
 .../2019-09-05-flink-community-update_1.png| Bin 0 -> 97175 bytes
 .../2019-09-05-flink-community-update_2.png| Bin 0 -> 86011 bytes
 .../2019-09-05-flink-community-update_3.png| Bin 0 -> 1102360 bytes
 content/index.html |   6 +-
 content/news/2019/09/10/community-update.html  | 436 +
 content/zh/index.html  |   6 +-
 16 files changed, 879 insertions(+), 132 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 0016ef5..a3f9991 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,214 @@
 <atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" 
type="application/rss+xml" />
 
 
+<title>Flink Community Update - September'19</title>
+<p>This has been an exciting, fast-paced year for the Apache Flink community. 
But with over 10k messages across the mailing lists, 3k Jira tickets and 2k 
pull requests, it is not easy to keep up with the latest state of the project. 
Plus everything happening around it. With that in mind, we want to bring back 
regular community updates to the Flink blog.</p>
+
+<p>The first post in the series takes you on a little detour across the year, 
to freshen up and make sure you’re all up to date.</p>
+
+<div class="page-toc">
+<ul id="markdown-toc">
+  <li><a href="#the-year-so-far-in-flink" id="markdown-toc-the-year-so-far-in-flink">The Year (so far) in Flink</a><ul>
+      <li><a href="#integration-of-the-chinese-speaking-community" id="markdown-toc-integration-of-the-chinese-speaking-community">Integration of the Chinese-speaking community</a></li>
+      <li><a href="#flink-documentation-20" id="markdown-toc-flink-documentation-20">Flink Documentation 2.0</a></li>
+      <li><a href="#improvement-of-the-contribution-process-and-experience" id="markdown-toc-improvement-of-the-contribution-process-and-experience">Improvement of the Contribution Process and Experience</a></li>
+      <li><a href="#new-committers-and-pmc-members" id="markdown-toc-new-committers-and-pmc-members">New Committers and PMC Members</a><ul>
+          <li><a href="#new-pmc-members" id="markdown-toc-new-pmc-members">New PMC Members</a></li>
+          <li><a href="#new-committers" id="markdown-toc-new-committers">New Committers</a></li>
+        </ul>
+      </li>
+    </ul>
+  </li>
+  <li><a href="#the-bigger-picture" id="markdown-toc-the-bigger-picture">The Bigger Picture</a></li>
+  <li><a href="#upcoming-flink-community-events" id="markdown-toc-upcoming-flink-community-events">Upcoming Flink Community Events</a><ul>
+      <li><a href="#north-america" id="markdown-toc-north-america">North America</a></li>
+      <li><a href="#europe" id="markdown-toc-europe">Europe</a></li>
+      <li><a href="#asia" id="markdown-toc-asia">Asia</a></li>
+    </ul>
+  </li>
+</ul>
+
+</div>
+
+<h1 id="the-year-so-far-in-flink">The Year (so far) in Flink</h1>
+
+<p>Two major versions were released this year: <a 
href="https://flink.apache.org/news/2019/04/09/release-1.8.0.html">Flink 
1.8</a> and <a 
href="https://flink.apache.org/news/2019/08/22/release-1.9.0.html">Flink 
1.9</a>; paving the way for the goal of making Flink the first framework 
to seamlessly support stream and batch processing with a single, unified 
runtime. The <a href="https://flink.apache.org/news/2019/02/13/unified-batch- [...]
+
+<p>The 1.9 release was the result of the <strong>biggest community 
effort the project has experienced so far</strong>, with the number of 
contributors soaring to 190 (see <a 
href="#the-bigger-picture">The Bigger Picture</a>). For a 
quick overview of the upcoming work for Flink 1.10 (and beyond), have a look at 
the updated <a href="https://flink.apache.org/roadmap.html">roadmap</a>!</p>
+
+<h2 id="integration-of-the-chinese-speaking-community">Integration of 
the Chinese-speaking community</h2>
+
+<p>As the number of Chinese-speaking Flink users rapidly grows, the 
community is working on translating resources and creating dedicated spaces for 
discussion to invite and include these users in the wider Flink community. Part 
of the ongoing work is described in <a 
href="https://cwiki.apache.org/confluence/display/

[flink-web] branch asf-site updated (f6b28f7 -> 510b821)

2019-09-11 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from f6b28f7  Rebuild website
 new e66627f  [blog] Flink Community Update - September'19.
 new 510b821  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _posts/2019-09-05-community-update.md  | 140 +++
 content/blog/feed.xml  | 208 ++
 content/blog/index.html|  42 +-
 content/blog/page2/index.html  |  46 ++-
 content/blog/page3/index.html  |  42 +-
 content/blog/page4/index.html  |  40 +-
 content/blog/page5/index.html  |  40 +-
 content/blog/page6/index.html  |  40 +-
 content/blog/page7/index.html  |  40 +-
 content/blog/page8/index.html  |  40 +-
 content/blog/page9/index.html  |  25 ++
 .../2019-09-05-flink-community-update_1.png| Bin 0 -> 97175 bytes
 .../2019-09-05-flink-community-update_2.png| Bin 0 -> 86011 bytes
 .../2019-09-05-flink-community-update_3.png| Bin 0 -> 1102360 bytes
 content/index.html |   6 +-
 content/news/2019/09/10/community-update.html  | 436 +
 content/zh/index.html  |   6 +-
 .../2019-09-05-flink-community-update_1.png| Bin 0 -> 97175 bytes
 .../2019-09-05-flink-community-update_2.png| Bin 0 -> 86011 bytes
 .../2019-09-05-flink-community-update_3.png| Bin 0 -> 1102360 bytes
 20 files changed, 1019 insertions(+), 132 deletions(-)
 create mode 100644 _posts/2019-09-05-community-update.md
 create mode 100644 
content/img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_1.png
 create mode 100644 
content/img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_2.png
 create mode 100644 
content/img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_3.png
 create mode 100644 content/news/2019/09/10/community-update.html
 create mode 100644 
img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_1.png
 create mode 100644 
img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_2.png
 create mode 100644 
img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_3.png



[flink-web] 01/02: [blog] Flink Community Update - September'19.

2019-09-11 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit e66627f11bff58505c0198000e6524609b0bea1c
Author: Marta Paes Moreira 
AuthorDate: Fri Sep 6 09:43:06 2019 +0200

[blog] Flink Community Update - September'19.

This closes #263.
---
 _posts/2019-09-05-community-update.md  | 140 +
 .../2019-09-05-flink-community-update_1.png| Bin 0 -> 97175 bytes
 .../2019-09-05-flink-community-update_2.png| Bin 0 -> 86011 bytes
 .../2019-09-05-flink-community-update_3.png| Bin 0 -> 1102360 bytes
 4 files changed, 140 insertions(+)

diff --git a/_posts/2019-09-05-community-update.md 
b/_posts/2019-09-05-community-update.md
new file mode 100644
index 000..eb84e66
--- /dev/null
+++ b/_posts/2019-09-05-community-update.md
@@ -0,0 +1,140 @@
+---
+layout: post
+title: "Flink Community Update - September'19"
+date: 2019-09-10T12:00:00.000Z
+categories: news
+authors:
+- morsapaes:
+  name: "Marta Paes"
+  twitter: "morsapaes"
+
+excerpt: This has been an exciting, fast-paced year for the Apache Flink 
community. But with over 10k messages across the mailing lists, 3k Jira tickets 
and 2k pull requests, it is not easy to keep up with the latest state of the 
project. Plus everything happening around it. With that in mind, we want to 
bring back regular community updates to the Flink blog.
+---
+
+This has been an exciting, fast-paced year for the Apache Flink community. But 
with over 10k messages across the mailing lists, 3k Jira tickets and 2k pull 
requests, it is not easy to keep up with the latest state of the project. Plus 
everything happening around it. With that in mind, we want to bring back 
regular community updates to the Flink blog.
+
+The first post in the series takes you on a little detour across the year, to 
freshen up and make sure you're all up to date.
+
+{% toc %}
+
+# The Year (so far) in Flink
+
+Two major versions were released this year: [Flink 
1.8](https://flink.apache.org/news/2019/04/09/release-1.8.0.html) and [Flink 
1.9](https://flink.apache.org/news/2019/08/22/release-1.9.0.html); paving the 
way for the goal of making Flink the first framework to seamlessly support 
stream and batch processing with a single, unified runtime. The [contribution 
of 
Blink](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html)
 to Apache Flink was key in accelerating the pa [...]
+
+The 1.9 release was the result of the **biggest community effort the project 
has experienced so far**, with the number of contributors soaring to 190 (see 
[The Bigger Picture](#the-bigger-picture)). For a quick overview of the 
upcoming work for Flink 1.10 (and beyond), have a look at the updated 
[roadmap](https://flink.apache.org/roadmap.html)!
+
+## Integration of the Chinese-speaking community
+
+As the number of Chinese-speaking Flink users rapidly grows, the community is 
working on translating resources and creating dedicated spaces for discussion 
to invite and include these users in the wider Flink community. Part of the 
ongoing work is described in 
[FLIP-35](https://cwiki.apache.org/confluence/display/FLINK/FLIP-35%3A+Support+Chinese+Documents+and+Website)
 and has resulted in:
+
+* A new user mailing list (user-zh@f.a.o) dedicated to Chinese-speakers.
+
+* A Chinese translation of the Apache Flink 
[website](https://flink.apache.org/zh/) and 
[documentation](https://ci.apache.org/projects/flink/flink-docs-master/zh/).
+
+* Multiple meetups organized all over China, with the biggest one reaching a 
whopping 500+ participants. Some of these meetups were also organized 
in collaboration with communities from other projects, like Apache Pulsar and 
Apache Kafka.
+
+
+
+
+
+In case you're interested in knowing more about this work in progress, Robert 
Metzger and Fabian Hueske will be diving into "Inviting Apache Flink's Chinese 
User Community" at the upcoming ApacheCon Europe 2019 (see [Upcoming Flink 
Community Events](#upcoming-flink-community-events)).
+
+## Improving Flink's Documentation
+
+Besides the translation effort, the community has also been working quite hard 
on a **Flink docs overhaul**. The main goals are to:
+
+ * Organize and clean up the structure of the docs;
+ 
+ * Align the content with the overall direction of the project;
+ 
+ * Improve the _getting-started_ material and make the content more accessible 
to different levels of Flink experience. 
+
+Given that there has been some confusion in the past regarding the unclear 
definition of core Flink concepts, one of the first completed efforts was to 
introduce a 
[Glossary](https://ci.apache.org/projects/flink/flink-docs-release-1.9/concepts/glossary.html#glossary)
 in the docs. To get up to speed with the roadmap for the remaining efforts, 
you can refer to 
[FLIP-42](

[flink] branch master updated: [hotfix] Fix NOTICE-binary.

2019-09-06 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 2880c98  [hotfix] Fix NOTICE-binary.
2880c98 is described below

commit 2880c9829fcc9a9c17b932c7e8daff9d255c025a
Author: Fabian Hueske 
AuthorDate: Fri Sep 6 16:28:07 2019 +0200

[hotfix] Fix NOTICE-binary.
---
 NOTICE-binary | 277 +++---
 1 file changed, 130 insertions(+), 147 deletions(-)

diff --git a/NOTICE-binary b/NOTICE-binary
index 24a61db..f37ba47 100644
--- a/NOTICE-binary
+++ b/NOTICE-binary
@@ -108,18 +108,6 @@ This project bundles the following dependencies under the 
Apache Software Licens
 - org.eclipse.jetty:jetty-util:9.3.19.v20170502
 - org.eclipse.jetty:jetty-util-ajax:9.3.19.v20170502
 
-Apache Commons Lang
-Copyright 2001-2014 The Apache Software Foundation
-
-This product includes software from the Spring Framework,
-under the Apache License 2.0 (see: StringUtils.containsWhitespace())
-
-Apache Commons Collections
-Copyright 2001-2015 The Apache Software Foundation
-
-This product includes software developed by
-The Apache Software Foundation (http://www.apache.org/).
-
 
 flink-hadoop-fs
 Copyright 2014-2019 The Apache Software Foundation
@@ -3113,6 +3101,12 @@ which has the following notices:
 Apache Commons IO
 Copyright 2002-2012 The Apache Software Foundation
 
+This product includes software developed by
+The Apache Software Foundation (http://www.apache.org/).
+
+Apache Commons Collections
+Copyright 2001-2015 The Apache Software Foundation
+
 Apache Commons Logging
 Copyright 2003-2013 The Apache Software Foundation
 
@@ -3125,6 +3119,12 @@ Copyright 2000-2016 The Apache Software Foundation
 Apache Commons Configuration
 Copyright 2001-2017 The Apache Software Foundation
 
+Apache Commons Lang
+Copyright 2001-2014 The Apache Software Foundation
+
+This product includes software from the Spring Framework,
+under the Apache License 2.0 (see: StringUtils.containsWhitespace())
+
 htrace-core4
 Copyright 2016 The Apache Software Foundation
 
@@ -3449,113 +3449,6 @@ FasterXML.com (http://fasterxml.com).
 Apache Commons CLI
 Copyright 2001-2015 The Apache Software Foundation
 
-The Netty Project
-=
-
-Please visit the Netty web site for more information:
-
-  * http://netty.io/
-
-Copyright 2011 The Netty Project
-
-The Netty Project licenses this file to you under the Apache License,
-version 2.0 (the "License"); you may not use this file except in compliance
-with the License. You may obtain a copy of the License at:
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-License for the specific language governing permissions and limitations
-under the License.
-
-Also, please refer to each LICENSE..txt file, which is located in
-the 'license' directory of the distribution file, for the license terms of the
-components that this product depends on.
-

-This product contains the extensions to Java Collections Framework which has
-been derived from the works by JSR-166 EG, Doug Lea, and Jason T. Greene:
-
-  * LICENSE:
-* license/LICENSE.jsr166y.txt (Public Domain)
-  * HOMEPAGE:
-* http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/
-* 
http://viewvc.jboss.org/cgi-bin/viewvc.cgi/jbosscache/experimental/jsr166/
-
-  * LICENSE:
-* license/LICENSE.base64.txt (Public Domain)
-  * HOMEPAGE:
-* http://iharder.sourceforge.net/current/java/base64/
-
-  * LICENSE:
-* license/LICENSE.jzlib.txt (BSD Style License)
-  * HOMEPAGE:
-* http://www.jcraft.com/jzlib/
-
-  * LICENSE:
-* license/LICENSE.webbit.txt (BSD License)
-  * HOMEPAGE:
-* https://github.com/joewalnes/webbit
-
-This product optionally depends on 'Protocol Buffers', Google's data
-interchange format, which can be obtained at:
-
-  * LICENSE:
-* license/LICENSE.protobuf.txt (New BSD License)
-  * HOMEPAGE:
-* http://code.google.com/p/protobuf/
-
-This product optionally depends on 'Bouncy Castle Crypto APIs' to generate
-a temporary self-signed X.509 certificate when the JVM does not provide the
-equivalent functionality.  It can be obtained at:
-
-  * LICENSE:
-* license/LICENSE.bouncycastle.txt (MIT License)
-  * HOMEPAGE:
-* http://www.bouncycastle.org/
-
-This product optionally depends on 'SLF4J', a simple logging facade for Java,
-which can be obtained at:
-
-  * LICENSE:
-* license/LICENSE.slf4j.txt (MIT License)
-  * HOMEPAGE:
-* http://www.slf4j.org/
-
-This product optionally depend

[flink] branch release-1.9 updated: [FLINK-13942][docs] Add "Getting Started" overview page.

2019-09-06 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new fcdad72  [FLINK-13942][docs] Add "Getting Started" overview page.
fcdad72 is described below

commit fcdad7265866b5f5b44bb09f7b036e1540f7e27f
Author: Fabian Hueske 
AuthorDate: Tue Sep 3 09:54:30 2019 +0200

[FLINK-13942][docs] Add "Getting Started" overview page.

[ci skip]
---
 docs/getting-started/index.md| 30 ++
 docs/getting-started/index.zh.md | 30 ++
 2 files changed, 60 insertions(+)

diff --git a/docs/getting-started/index.md b/docs/getting-started/index.md
index bd80898..901e48c 100644
--- a/docs/getting-started/index.md
+++ b/docs/getting-started/index.md
@@ -4,6 +4,7 @@ nav-id: getting-started
 nav-title: 'Getting Started'
 nav-parent_id: root
 section-break: true
+nav-show_overview: true
 nav-pos: 1
 ---
 
+
+There are many ways to get started with Apache Flink. Which one is the best 
for you depends on your goal and prior experience.
+
+### Taking a first look at Flink
+
+The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+
+* The [**Operations 
Playground**](./docker-playgrounds/flink-operations-playground.html) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers applications from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+
+
+
+### First steps with one of Flink's APIs
+
+The **API Tutorials** are the best way to get started and introduce you step 
by step to an API.
+A tutorial provides instructions to bootstrap a small Flink project with a 
code skeleton and shows how to extend it to a simple application.
+
+* The [**DataStream API**](./tutorials/datastream_api.html) tutorial shows how 
to implement a basic DataStream application. The DataStream API is Flink's main 
abstraction to implement stateful streaming applications with sophisticated 
time semantics in Java or Scala.
+
+
diff --git a/docs/getting-started/index.zh.md b/docs/getting-started/index.zh.md
index bd80898..901e48c 100644
--- a/docs/getting-started/index.zh.md
+++ b/docs/getting-started/index.zh.md
@@ -4,6 +4,7 @@ nav-id: getting-started
 nav-title: 'Getting Started'
 nav-parent_id: root
 section-break: true
+nav-show_overview: true
 nav-pos: 1
 ---
 
+
+There are many ways to get started with Apache Flink. Which one is the best 
for you depends on your goal and prior experience.
+
+### Taking a first look at Flink
+
+The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+
+* The [**Operations 
Playground**](./docker-playgrounds/flink-operations-playground.html) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers applications from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+
+
+
+### First steps with one of Flink's APIs
+
+The **API Tutorials** are the best way to get started and introduce you step 
by step to an API.
+A tutorial provides instructions to bootstrap a small Flink project with a 
code skeleton and shows how to extend it to a simple application.
+
+* The [**DataStream API**](./tutorials/datastream_api.html) tutorial shows how 
to implement a basic DataStream application. The DataStream API is Flink's main 
abstraction to implement stateful streaming applications with sophisticated 
time semantics in Java or Scala.
+
+
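
The DataStream tutorial referenced in this page bootstraps a project around a 
single job class. As a minimal editorial sketch of such an application (not 
part of the commit; the socket source and port are assumptions, e.g. a socket 
opened with `nc -lk 9999`):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class StreamingJob {

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            // Read lines from a local socket source (assumed for the example).
            DataStream<String> lines = env.socketTextStream("localhost", 9999);

            // A trivial stateless transformation; the tutorial evolves this
            // into a stateful, windowed computation.
            lines.map(String::toUpperCase).print();

            env.execute("Minimal DataStream Job");
        }
    }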



[flink] branch master updated: [hotfix] Update NOTICE-binary file.

2019-09-06 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 3e79030  [hotfix] Update NOTICE-binary file.
3e79030 is described below

commit 3e79030f8ad2195920adfa3792d807536d256b96
Author: Fabian Hueske 
AuthorDate: Fri Sep 6 14:43:59 2019 +0200

[hotfix] Update NOTICE-binary file.
---
 NOTICE-binary | 307 +-
 1 file changed, 151 insertions(+), 156 deletions(-)

diff --git a/NOTICE-binary b/NOTICE-binary
index cf10657..24a61db 100644
--- a/NOTICE-binary
+++ b/NOTICE-binary
@@ -26,40 +26,18 @@ Copyright 2006-2019 The Apache Software Foundation
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
 
-flink-examples-streaming-click-event-count
+flink-examples-streaming-state-machine
 Copyright 2014-2019 The Apache Software Foundation
 
 This project bundles the following dependencies under the Apache Software 
License 2.0. (http://www.apache.org/licenses/LICENSE-2.0.txt)
 
-- org.apache.kafka:kafka-clients:2.2.0
-
-
-flink-connector-kafka
-Copyright 2014-2019 The Apache Software Foundation
-
-flink-connector-kafka-base
-Copyright 2014-2019 The Apache Software Foundation
-
-// --
-// NOTICE file corresponding to the section 4d of The Apache License,
-// Version 2.0, in this case for Apache Flink
-// --
+- org.apache.kafka:kafka-clients:0.10.2.1
 
-Apache Flink
-Copyright 2006-2019 The Apache Software Foundation
-
-This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/).
 
-flink-examples-streaming-state-machine
+flink-connector-kafka-0.10
 Copyright 2014-2019 The Apache Software Foundation
 
-This project bundles the following dependencies under the Apache Software 
License 2.0. (http://www.apache.org/licenses/LICENSE-2.0.txt)
-
-- org.apache.kafka:kafka-clients:2.2.0
-
-
-flink-connector-kafka
+flink-connector-kafka-0.9
 Copyright 2014-2019 The Apache Software Foundation
 
 flink-connector-kafka-base
@@ -130,6 +108,18 @@ This project bundles the following dependencies under the 
Apache Software Licens
 - org.eclipse.jetty:jetty-util:9.3.19.v20170502
 - org.eclipse.jetty:jetty-util-ajax:9.3.19.v20170502
 
+Apache Commons Lang
+Copyright 2001-2014 The Apache Software Foundation
+
+This product includes software from the Spring Framework,
+under the Apache License 2.0 (see: StringUtils.containsWhitespace())
+
+Apache Commons Collections
+Copyright 2001-2015 The Apache Software Foundation
+
+This product includes software developed by
+The Apache Software Foundation (http://www.apache.org/).
+
 
 flink-hadoop-fs
 Copyright 2014-2019 The Apache Software Foundation
@@ -3123,12 +3113,6 @@ which has the following notices:
 Apache Commons IO
 Copyright 2002-2012 The Apache Software Foundation
 
-This product includes software developed by
-The Apache Software Foundation (http://www.apache.org/).
-
-Apache Commons Collections
-Copyright 2001-2015 The Apache Software Foundation
-
 Apache Commons Logging
 Copyright 2003-2013 The Apache Software Foundation
 
@@ -3141,12 +3125,6 @@ Copyright 2000-2016 The Apache Software Foundation
 Apache Commons Configuration
 Copyright 2001-2017 The Apache Software Foundation
 
-Apache Commons Lang
-Copyright 2001-2014 The Apache Software Foundation
-
-This product includes software from the Spring Framework,
-under the Apache License 2.0 (see: StringUtils.containsWhitespace())
-
 htrace-core4
 Copyright 2016 The Apache Software Foundation
 
@@ -3471,6 +3449,113 @@ FasterXML.com (http://fasterxml.com).
 Apache Commons CLI
 Copyright 2001-2015 The Apache Software Foundation
 
+The Netty Project
+=
+
+Please visit the Netty web site for more information:
+
+  * http://netty.io/
+
+Copyright 2011 The Netty Project
+
+The Netty Project licenses this file to you under the Apache License,
+version 2.0 (the "License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at:
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+License for the specific language governing permissions and limitations
+under the License.
+
+Also, please refer to each LICENSE..txt file, which is located in
+the 'license' directory of the distribution file, for the license terms of the
+components that this product depends on.
+
+-

[flink] 02/03: [FLINK-12749][docs] Add operations playground.

2019-09-06 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a17623ba51f7cae83bc789cd4f8ffc7f105a8715
Author: Fabian Hueske 
AuthorDate: Mon Aug 26 17:00:24 2019 +0200

[FLINK-12749][docs] Add operations playground.

This closes #9543.

[ci skip]
---
 docs/fig/click-event-count-example.svg |  21 +
 docs/fig/flink-docker-playground.svg   |  21 +
 docs/fig/playground-webui-failure.png  | Bin 0 -> 37334 bytes
 docs/fig/playground-webui.png  | Bin 0 -> 18135 bytes
 .../flink-operations-playground.md | 817 +
 .../flink-operations-playground.zh.md  | 817 +
 docs/getting-started/docker-playgrounds/index.md   |  25 +
 .../getting-started/docker-playgrounds/index.zh.md |  25 +
 8 files changed, 1726 insertions(+)

diff --git a/docs/fig/click-event-count-example.svg 
b/docs/fig/click-event-count-example.svg
new file mode 100644
index 000..4d9c06f
--- /dev/null
+++ b/docs/fig/click-event-count-example.svg
@@ -0,0 +1,21 @@
+
+
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="713px" 
height="359px" viewBox="-0.5 -0.5 713 359" content="<mxfile 
modified=2019-07-30T06:33:46.579Z host=www.draw.io 
agent=Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 
Firefox/66.0 etag=Gyms1__7o2-6Tou9Fwcv 
version=11.0.7 type=device><diagram 
id=axHalsAsTUV6G1jOH0Rx name=Page- [...]
\ No newline at end of file
diff --git a/docs/fig/flink-docker-playground.svg 
b/docs/fig/flink-docker-playground.svg
new file mode 100644
index 000..24a53e2
--- /dev/null
+++ b/docs/fig/flink-docker-playground.svg
@@ -0,0 +1,21 @@
+
+
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="681px" 
height="221px" viewBox="-0.5 -0.5 681 221" content="<mxfile 
modified=2019-07-30T05:46:19.236Z host=www.draw.io 
agent=Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 
Firefox/66.0 etag=6b7qPJhosj6WVEuTns2y 
version=11.0.7 type=device><diagram 
id=zIUxMKcIWk6lTGESeTwo name=Page- [...]
\ No newline at end of file
diff --git a/docs/fig/playground-webui-failure.png 
b/docs/fig/playground-webui-failure.png
new file mode 100644
index 000..31968dc
Binary files /dev/null and b/docs/fig/playground-webui-failure.png differ
diff --git a/docs/fig/playground-webui.png b/docs/fig/playground-webui.png
new file mode 100644
index 000..3833d6d
Binary files /dev/null and b/docs/fig/playground-webui.png differ
diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
new file mode 100644
index 000..38a0848
--- /dev/null
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
@@ -0,0 +1,817 @@
+---
+title: "Flink Operations Playground"
+nav-title: 'Flink Operations Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same, 
and similar
+operational principles apply.
+
+In this playground, you will learn how to manage and run Flink Jobs. You will 
see how to deploy and 
+monitor an application, experience how Flink recovers from Job failure, and 
perform everyday 
+operational tasks like upgrades and rescaling.
+
+{% if site.version contains "SNAPSHOT" %}
+
+  
+  NOTE: The Apache Flink Docker images used for this playground are only 
available for
+  released versions of Apache Flink.
+  
+  Since you are currently looking at the latest SNAPSHOT
+  version of the documentation, all version references below will not work.
+  Please switch the documentation to the latest released version via the 
release picker which you
+  find on the left side below the menu.
+
+{% endif %}
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling [Job]({{ site.baseurl 
}}/concepts/glossary.html#flink-job) submissions, 
the supervision of Jobs, as well as resource management. The Flink TaskManagers 
are the worker 
+processes and are responsible for the ex
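
Everyday playground tasks such as checking job status go through the Flink 
WebUI or the JobManager's REST API. As a rough editorial illustration (not 
part of the commit), the sketch below lists the jobs of a running playground 
cluster; the localhost:8081 address assumes the playground's default REST port 
mapping:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class ListPlaygroundJobs {

        public static void main(String[] args) throws Exception {
            // Assumes the JobManager's REST endpoint is published on localhost:8081.
            URL url = new URL("http://localhost:8081/jobs");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // e.g. {"jobs":[{"id":"<job-id>","status":"RUNNING"}]}
                    System.out.println(line);
                }
            }
        }
    }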

[flink] 03/03: [FLINK-13942][docs] Add "Getting Started" overview page.

2019-09-06 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 275fe9641a8be0815fee152388bf105941b23418
Author: Fabian Hueske 
AuthorDate: Tue Sep 3 09:54:30 2019 +0200

[FLINK-13942][docs] Add "Getting Started" overview page.

This closes #9603.

[ci skip]
---
 docs/getting-started/index.md| 35 +++
 docs/getting-started/index.zh.md | 35 +++
 2 files changed, 70 insertions(+)

diff --git a/docs/getting-started/index.md b/docs/getting-started/index.md
index bd80898..861be99 100644
--- a/docs/getting-started/index.md
+++ b/docs/getting-started/index.md
@@ -4,6 +4,7 @@ nav-id: getting-started
 nav-title: 'Getting Started'
 nav-parent_id: root
 section-break: true
+nav-show_overview: true
 nav-pos: 1
 ---
 
+
+There are many ways to get started with Apache Flink. Which one is the best 
for you depends on your goal and prior experience.
+
+### Taking a first look at Flink
+
+The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+
+* The [**Operations 
Playground**](./docker-playgrounds/flink-operations-playground.html) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers applications from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+
+
+
+### First steps with one of Flink's APIs
+
+The **Code Walkthroughs** are the best way to get started and introduce you 
step by step to an API.
+A walkthrough provides instructions to bootstrap a small Flink project with a 
code skeleton and shows how to extend it to a simple application.
+
+
+* The [**DataStream API**](./tutorials/datastream_api.html) tutorial shows how 
to implement a basic DataStream application. The DataStream API is Flink's main 
abstraction to implement stateful streaming applications with sophisticated 
time semantics in Java or Scala.
+
+* The [**Table API**](./walkthroughs/table_api.html) code walkthrough shows 
how to implement a simple Table API query on a batch source and how to evolve 
it into a continuous query on a streaming source. The Table API is Flink's 
language-embedded, relational API to write SQL-like queries in Java or Scala 
which are automatically optimized similarly to SQL queries. Table API queries can 
be executed on batch or streaming data with identical syntax and semantics.
+
+
diff --git a/docs/getting-started/index.zh.md b/docs/getting-started/index.zh.md
index bd80898..861be99 100644
--- a/docs/getting-started/index.zh.md
+++ b/docs/getting-started/index.zh.md
@@ -4,6 +4,7 @@ nav-id: getting-started
 nav-title: 'Getting Started'
 nav-parent_id: root
 section-break: true
+nav-show_overview: true
 nav-pos: 1
 ---
 
+
+There are many ways to get started with Apache Flink. Which one is the best 
for you depends on your goal and prior experience.
+
+### Taking a first look at Flink
+
+The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+
+* The [**Operations 
Playground**](./docker-playgrounds/flink-operations-playground.html) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers applications from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+
+
+
+### First steps with one of Flink's APIs
+
+The **Code Walkthroughs** are the best way to get started and introduce you 
step by step to an API.
+A walkthrough provides instructions to bootstrap a small Flink project with a 
code skeleton and shows how to extend it to a simple application.
+
+
+* The [**DataStream API**](./tutorials/datastream_api.html) tutorial shows how 
to implement a basic DataStream application. The DataStream API is Flink's main 
abstraction to implement stateful streaming applications with sophisticated 
time semantics in Java or Scala.
+
+* The [**Table API**](./walkthroughs/table_api.html) code walkthrough shows 
how to implement a simple Table API query on a batch source and how to evolve 
it into a continuous query on a streaming source. The Table API is Flink's 
language-embedded, relational API to write SQL-like queries in Java or Scala 
which are automatically optimized similarly to SQL queries. Table API queries can 
be executed on batch or streaming data with identical syntax and semantics.
+
+
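
As a minimal editorial sketch of the Table API walkthrough's batch starting 
point (not part of the commit; written against the Flink 1.9 Java Table API, 
with invented table contents and field names):

    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.BatchTableEnvironment;
    import org.apache.flink.types.Row;

    public class TableWalkthroughSketch {

        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            BatchTableEnvironment tEnv = BatchTableEnvironment.create(env);

            // A tiny in-memory stand-in for the walkthrough's bounded batch source.
            Table transactions = tEnv.fromDataSet(
                env.fromElements(
                    Tuple2.of("alice", 10L),
                    Tuple2.of("bob", 25L)),
                "account, amount");

            // The same relational query could later run unchanged on a
            // streaming source.
            Table totals = transactions
                .groupBy("account")
                .select("account, amount.sum as total");

            tEnv.toDataSet(totals, Row.class).print();
        }
    }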



[flink] 01/03: [FLINK-12749][docs] Revert commit f695a76b10b0cb5f074bbb874fe374cd11e6eff3

2019-09-06 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 8998b2fe286d4ac788011f03fc6953f8b040b119
Author: Fabian Hueske 
AuthorDate: Tue Aug 27 13:51:44 2019 +0200

[FLINK-12749][docs] Revert commit f695a76b10b0cb5f074bbb874fe374cd11e6eff3
---
 docs/fig/click-event-count-example.svg |  21 -
 docs/fig/flink-docker-playground.svg   |  21 -
 docs/fig/playground-webui-failure.png  | Bin 37334 -> 0 bytes
 docs/fig/playground-webui.png  | Bin 18135 -> 0 bytes
 .../docker-playgrounds/flink_cluster_playground.md | 812 -
 .../flink_cluster_playground.zh.md | 774 
 docs/getting-started/docker-playgrounds/index.md   |  25 -
 .../getting-started/docker-playgrounds/index.zh.md |  25 -
 flink-dist/pom.xml |   7 -
 flink-dist/src/main/assemblies/bin.xml |  11 -
 .../pom.xml| 106 ---
 .../src/main/resources/META-INF/NOTICE |   9 -
 .../src/main/resources/META-INF/NOTICE |   2 +-
 flink-examples/flink-examples-build-helper/pom.xml |   1 -
 flink-examples/flink-examples-streaming/pom.xml|   3 +-
 .../statemachine/KafkaEventsGeneratorJob.java  |   4 +-
 .../examples/statemachine/StateMachineExample.java |   4 +-
 .../windowing/clickeventcount/ClickEventCount.java | 117 ---
 .../clickeventcount/ClickEventGenerator.java   | 122 
 .../functions/ClickEventStatisticsCollector.java   |  47 --
 .../functions/CountingAggregator.java  |  47 --
 .../clickeventcount/records/ClickEvent.java|  85 ---
 .../records/ClickEventDeserializationSchema.java   |  51 --
 .../records/ClickEventSerializationSchema.java |  55 --
 .../records/ClickEventStatistics.java  | 116 ---
 .../ClickEventStatisticsSerializationSchema.java   |  55 --
 26 files changed, 7 insertions(+), 2513 deletions(-)

diff --git a/docs/fig/click-event-count-example.svg 
b/docs/fig/click-event-count-example.svg
deleted file mode 100644
index 4d9c06f..000
--- a/docs/fig/click-event-count-example.svg
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
-<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
-<svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="713px" 
height="359px" viewBox="-0.5 -0.5 713 359" content="<mxfile 
modified=2019-07-30T06:33:46.579Z host=www.draw.io 
agent=Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 
Firefox/66.0 etag=Gyms1__7o2-6Tou9Fwcv 
version=11.0.7 type=device><diagram 
id=axHalsAsTUV6G1jOH0Rx name=Page- [...]
\ No newline at end of file
diff --git a/docs/fig/flink-docker-playground.svg 
b/docs/fig/flink-docker-playground.svg
deleted file mode 100644
index 24a53e2..000
--- a/docs/fig/flink-docker-playground.svg
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
-<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
-<svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="681px" 
height="221px" viewBox="-0.5 -0.5 681 221" content="<mxfile 
modified=2019-07-30T05:46:19.236Z host=www.draw.io 
agent=Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 
Firefox/66.0 etag=6b7qPJhosj6WVEuTns2y 
version=11.0.7 type=device><diagram 
id=zIUxMKcIWk6lTGESeTwo name=Page- [...]
\ No newline at end of file
diff --git a/docs/fig/playground-webui-failure.png 
b/docs/fig/playground-webui-failure.png
deleted file mode 100644
index 31968dc..000
Binary files a/docs/fig/playground-webui-failure.png and /dev/null differ
diff --git a/docs/fig/playground-webui.png b/docs/fig/playground-webui.png
deleted file mode 100644
index 3833d6d..000
Binary files a/docs/fig/playground-webui.png and /dev/null differ
diff --git 
a/docs/getting-started/docker-playgrounds/flink_cluster_playground.md 
b/docs/getting-started/docker-playgrounds/flink_cluster_playground.md
deleted file mode 100644
index 7f6ef23..000
--- a/docs/getting-started/docker-playgrounds/flink_cluster_playground.md
+++ /dev/null
@@ -1,812 +0,0 @@

-title: "Flink Cluster Playground"
-nav-title: 'Flink Cluster Playground'
-nav-parent_id: docker-playgrounds
-nav-pos: 1

-
-
-There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
-variety, the fundamental building blocks of a Flink Cluster remain the same, 
and similar
-operational principles apply.
-
-In this playground, you will learn how to manage and run Flink Jobs. You will 
see how to deploy and 
-monitor an application, experience how Flink recovers from Job failure, and 
perform everyday 
-operational tasks like upgrades and rescaling.
-
-* This will be replaced by the TOC
-{:toc}
-
-## Anatomy of this Playground
-
-This play

[flink] branch master updated (d49e174 -> 275fe96)

2019-09-06 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from d49e174  [FLINK-13968][travis] Check correctness of binary licensing
 new 8998b2f  [FLINK-12749][docs] Revert commit 
f695a76b10b0cb5f074bbb874fe374cd11e6eff3
 new a17623b  [FLINK-12749][docs] Add operations playground.
 new 275fe96  [FLINK-13942][docs] Add "Getting Started" overview page.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ...layground.md => flink-operations-playground.md} |  61 +++---
 ...und.zh.md => flink-operations-playground.zh.md} | 213 +
 docs/getting-started/index.md  |  35 
 docs/getting-started/index.zh.md   |  35 
 flink-dist/pom.xml |   7 -
 flink-dist/src/main/assemblies/bin.xml |  11 --
 .../pom.xml| 106 --
 .../src/main/resources/META-INF/NOTICE |   9 -
 .../src/main/resources/META-INF/NOTICE |   2 +-
 flink-examples/flink-examples-build-helper/pom.xml |   1 -
 flink-examples/flink-examples-streaming/pom.xml|   3 +-
 .../statemachine/KafkaEventsGeneratorJob.java  |   4 +-
 .../examples/statemachine/StateMachineExample.java |   4 +-
 .../windowing/clickeventcount/ClickEventCount.java | 117 ---
 .../clickeventcount/ClickEventGenerator.java   | 122 
 .../functions/ClickEventStatisticsCollector.java   |  47 -
 .../functions/CountingAggregator.java  |  47 -
 .../clickeventcount/records/ClickEvent.java|  85 
 .../records/ClickEventDeserializationSchema.java   |  51 -
 .../records/ClickEventSerializationSchema.java |  55 --
 .../records/ClickEventStatistics.java  | 116 ---
 .../ClickEventStatisticsSerializationSchema.java   |  55 --
 22 files changed, 238 insertions(+), 948 deletions(-)
 rename docs/getting-started/docker-playgrounds/{flink_cluster_playground.md => 
flink-operations-playground.md} (92%)
 rename docs/getting-started/docker-playgrounds/{flink_cluster_playground.zh.md 
=> flink-operations-playground.zh.md} (76%)
 delete mode 100644 
flink-examples/flink-examples-build-helper/flink-examples-streaming-click-event-count/pom.xml
 delete mode 100644 
flink-examples/flink-examples-build-helper/flink-examples-streaming-click-event-count/src/main/resources/META-INF/NOTICE
 delete mode 100644 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/ClickEventCount.java
 delete mode 100644 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/ClickEventGenerator.java
 delete mode 100644 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/functions/ClickEventStatisticsCollector.java
 delete mode 100644 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/functions/CountingAggregator.java
 delete mode 100644 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/records/ClickEvent.java
 delete mode 100644 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/records/ClickEventDeserializationSchema.java
 delete mode 100644 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/records/ClickEventSerializationSchema.java
 delete mode 100644 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/records/ClickEventStatistics.java
 delete mode 100644 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/records/ClickEventStatisticsSerializationSchema.java
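
For readers who want to try the operations playground that this commit documents, a
minimal launch sequence looks roughly like the following. This is a sketch assuming the
docker-compose setup lives in the apache/flink-playgrounds repository (as the playground
docs state) under an operations-playground directory; exact paths and service names may
differ between releases.

    git clone https://github.com/apache/flink-playgrounds.git
    cd flink-playgrounds/operations-playground
    docker-compose build    # build the custom client image
    docker-compose up -d    # start JobManager, TaskManager, Kafka, and ZooKeeper
    docker-compose ps       # check that all containers are up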



[flink-web] branch asf-site updated (8b4af0c -> f6b28f7)

2019-09-06 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 8b4af0c  Rebuild website
 new 8c02afc  [hotfix] Fix links to ASF license and events page.
 new f6b28f7  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _includes/navbar.html   | 15 ++-
 content/2019/05/03/pulsar-flink.html| 15 ++-
 content/2019/05/14/temporal-tables.html | 15 ++-
 content/2019/05/19/state-ttl.html   | 15 ++-
 content/2019/06/05/flink-network-stack.html | 15 ++-
 content/2019/06/26/broadcast-state.html | 15 ++-
 content/2019/07/23/flink-network-stack-2.html   | 15 ++-
 content/blog/index.html | 15 ++-
 content/blog/page2/index.html   | 15 ++-
 content/blog/page3/index.html   | 15 ++-
 content/blog/page4/index.html   | 15 ++-
 content/blog/page5/index.html   | 15 ++-
 content/blog/page6/index.html   | 15 ++-
 content/blog/page7/index.html   | 15 ++-
 content/blog/page8/index.html   | 15 ++-
 content/blog/page9/index.html   | 15 ++-
 content/blog/release_1.0.0-changelog_known_issues.html  | 15 ++-
 content/blog/release_1.1.0-changelog.html   | 15 ++-
 content/blog/release_1.2.0-changelog.html   | 15 ++-
 content/blog/release_1.3.0-changelog.html   | 15 ++-
 content/community.html  | 15 ++-
 content/contributing/code-style-and-quality-common.html | 15 ++-
 .../contributing/code-style-and-quality-components.html | 15 ++-
 .../contributing/code-style-and-quality-formatting.html | 15 ++-
 content/contributing/code-style-and-quality-java.html   | 15 ++-
 .../contributing/code-style-and-quality-preamble.html   | 15 ++-
 .../code-style-and-quality-pull-requests.html   | 15 ++-
 content/contributing/code-style-and-quality-scala.html  | 15 ++-
 content/contributing/contribute-code.html   | 15 ++-
 content/contributing/contribute-documentation.html  | 15 ++-
 content/contributing/how-to-contribute.html | 15 ++-
 content/contributing/improve-website.html   | 15 ++-
 content/contributing/reviewing-prs.html | 15 ++-
 content/documentation.html  | 15 ++-
 content/downloads.html  | 15 ++-
 content/ecosystem.html  | 15 ++-
 content/faq.html| 15 ++-
 content/features/2017/07/04/flink-rescalable-state.html | 15 ++-
 .../features/2018/01/30/incremental-checkpointing.html  | 15 ++-
 .../03/01/end-to-end-exactly-once-apache-flink.html | 15 ++-
 content/features/2019/03/11/prometheus-monitoring.html  | 15 ++-
 content/flink-applications.html | 15 ++-
 content/flink-architecture.html | 15 ++-
 content/flink-operations.html   | 15 ++-
 content/gettinghelp.html| 15 ++-
 content/index.html  | 17 +++--
 content/material.html   | 15 ++-
 content/news/2014/08/26/release-0.6.html| 15 ++-
 content/news/2014/09/26/release-0.6.1.html  | 15 ++-
 content/news/2014/10/03/upcoming_events.html| 15 ++-
 content/news/2014/11/04/release-0.7.0.html  | 15 ++-
 content/news/2014/11/18/hadoop-compatibility.html   | 15 ++-
 content/news/2015/01/06/december-in-flink.html  | 15 ++-
 content/news/2015/01/21/release-0.8.html| 15 ++-
 content/news/2015/02/04/january-in-flink.html   | 15 ++-
 content/news/2015/02/09/streaming-example.html  | 15 ++-
 content/news/2015/03/02/februar

[flink-web] 01/02: [hotfix] Fix links to ASF license and events page.

2019-09-06 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 8c02afc727e2770db11e0698f2de45db46f57fd0
Author: Fabian Hueske 
AuthorDate: Fri Sep 6 10:56:23 2019 +0200

[hotfix] Fix links to ASF license and events page.
---
 _includes/navbar.html | 15 ++-
 index.md  |  2 +-
 index.zh.md   |  2 +-
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/_includes/navbar.html b/_includes/navbar.html
index 6516e2d..148332a 100755
--- a/_includes/navbar.html
+++ b/_includes/navbar.html
@@ -153,21 +153,18 @@
 
 
   
-.smalllinks {
-  display: inline !important;
-}
-.smalllinks:hover {
-  background: none !important;
+.smalllinks:link {
+  display: inline-block !important; background: none; 
padding-top: 0px; padding-bottom: 0px; padding-right: 0px; min-width: 75px;
 }
   
 
-  https://www.apache.org/licenses/; 
target="_blank">Licenses 
+  https://www.apache.org/licenses/; 
target="_blank">License 
 
-  https://www.apache.org/security/; 
target="_blank">Security 
+  https://www.apache.org/security/; 
target="_blank">Security 
 
-  https://www.apache.org/foundation/sponsorship.html; 
target="_blank">Donate 
+  https://www.apache.org/foundation/sponsorship.html; 
target="_blank">Donate 
 
-  https://www.apache.org/foundation/thanks.html; target="_blank">Thanks 

+  https://www.apache.org/foundation/thanks.html; target="_blank">Thanks 

 
 
   
diff --git a/index.md b/index.md
index 876d494..73a37ab 100644
--- a/index.md
+++ b/index.md
@@ -328,7 +328,7 @@ layout: base
 
   
   
-  https://events.apache.org/x/current-event.html; target="_blank">
+  https://www.apache.org/events/current-event; target="_blank">
 https://www.apache.org/events/current-event-234x60.png; 
alt="ApacheCon"/>
   
 
diff --git a/index.zh.md b/index.zh.md
index 8a96b47..3b9ad46 100644
--- a/index.zh.md
+++ b/index.zh.md
@@ -322,7 +322,7 @@ layout: base
 
   
   
-  https://events.apache.org/x/current-event.html; target="_blank">
+  https://www.apache.org/events/current-event; target="_blank">
 https://www.apache.org/events/current-event-234x60.png; 
alt="ApacheCon"/>
   
 



[flink-web] 01/02: [FLINK-13821] Add missing foundation links & add events section

2019-09-05 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit ce569aecbe33f6cf5697d9e8e6214e2011e04875
Author: Robert Metzger 
AuthorDate: Wed Sep 4 19:07:49 2019 +0200

[FLINK-13821] Add missing foundation links & add events section

This closes #261.
---
 _includes/navbar.html |  23 +++
 img/flink-forward.png | Bin 0 -> 19207 bytes
 index.md  |  27 ++-
 index.zh.md   |  24 
 4 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/_includes/navbar.html b/_includes/navbar.html
index 7df4993..6516e2d 100755
--- a/_includes/navbar.html
+++ b/_includes/navbar.html
@@ -147,6 +147,29 @@
 
 Plan Visualizer 
 
+  
+
+https://apache.org; target="_blank">Apache Software 
Foundation 
+
+
+  
+.smalllinks {
+  display: inline !important;
+}
+.smalllinks:hover {
+  background: none !important;
+}
+  
+
+  https://www.apache.org/licenses/; 
target="_blank">Licenses 
+
+  https://www.apache.org/security/; 
target="_blank">Security 
+
+  https://www.apache.org/foundation/sponsorship.html; 
target="_blank">Donate 
+
+  https://www.apache.org/foundation/thanks.html; target="_blank">Thanks 

+
+
   
 
 
diff --git a/img/flink-forward.png b/img/flink-forward.png
new file mode 100644
index 000..9dab0fb
Binary files /dev/null and b/img/flink-forward.png differ
diff --git a/index.md b/index.md
index f7a2fe2..876d494 100644
--- a/index.md
+++ b/index.md
@@ -6,7 +6,7 @@ layout: base
 
   
 
-  **Apache Flink® - Stateful Computations over Data Streams**
+  **Apache Flink® — Stateful Computations over Data Streams**
 
   
 
@@ -310,6 +310,31 @@ layout: base
 
 
 
+
+
+
+
+  
+
+
+
+
+  Upcoming Events
+
+
+
+  
+  https://flink-forward.org; target="_blank">
+
+  
+  
+  https://events.apache.org/x/current-event.html; target="_blank">
+https://www.apache.org/events/current-event-234x60.png; 
alt="ApacheCon"/>
+  
+
+
+
+
 
 
 
diff --git a/index.zh.md b/index.zh.md
index 1a66609..8a96b47 100644
--- a/index.zh.md
+++ b/index.zh.md
@@ -304,6 +304,30 @@ layout: base
 
 
 
+
+
+
+
+  
+
+
+
+
+  Upcoming Events
+
+
+
+  
+  https://flink-forward.org; target="_blank">
+
+  
+  
+  https://events.apache.org/x/current-event.html; target="_blank">
+https://www.apache.org/events/current-event-234x60.png; 
alt="ApacheCon"/>
+  
+
+
+
 
 
 



[flink-web] branch asf-site updated (6b0ffa7 -> 8b4af0c)

2019-09-05 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 6b0ffa7  Rebuild website
 new ce569ae  [FLINK-13821] Add missing foundation links & add events section
 new 8b4af0c  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _includes/navbar.html  |  23 ++
 content/2019/05/03/pulsar-flink.html   |  23 ++
 content/2019/05/14/temporal-tables.html|  23 ++
 content/2019/05/19/state-ttl.html  |  23 ++
 content/2019/06/05/flink-network-stack.html|  23 ++
 content/2019/06/26/broadcast-state.html|  23 ++
 content/2019/07/23/flink-network-stack-2.html  |  23 ++
 content/blog/index.html|  23 ++
 content/blog/page2/index.html  |  23 ++
 content/blog/page3/index.html  |  23 ++
 content/blog/page4/index.html  |  23 ++
 content/blog/page5/index.html  |  23 ++
 content/blog/page6/index.html  |  23 ++
 content/blog/page7/index.html  |  23 ++
 content/blog/page8/index.html  |  23 ++
 content/blog/page9/index.html  |  23 ++
 .../blog/release_1.0.0-changelog_known_issues.html |  23 ++
 content/blog/release_1.1.0-changelog.html  |  23 ++
 content/blog/release_1.2.0-changelog.html  |  23 ++
 content/blog/release_1.3.0-changelog.html  |  23 ++
 content/community.html |  23 ++
 .../code-style-and-quality-common.html |  23 ++
 .../code-style-and-quality-components.html |  23 ++
 .../code-style-and-quality-formatting.html |  23 ++
 .../contributing/code-style-and-quality-java.html  |  23 ++
 .../code-style-and-quality-preamble.html   |  23 ++
 .../code-style-and-quality-pull-requests.html  |  23 ++
 .../contributing/code-style-and-quality-scala.html |  23 ++
 content/contributing/contribute-code.html  |  23 ++
 content/contributing/contribute-documentation.html |  23 ++
 content/contributing/how-to-contribute.html|  23 ++
 content/contributing/improve-website.html  |  23 ++
 content/contributing/reviewing-prs.html|  23 ++
 content/documentation.html |  23 ++
 content/downloads.html |  23 ++
 content/ecosystem.html |  23 ++
 content/faq.html   |  23 ++
 .../2017/07/04/flink-rescalable-state.html |  23 ++
 .../2018/01/30/incremental-checkpointing.html  |  23 ++
 .../01/end-to-end-exactly-once-apache-flink.html   |  23 ++
 .../features/2019/03/11/prometheus-monitoring.html |  23 ++
 content/flink-applications.html|  23 ++
 content/flink-architecture.html|  23 ++
 content/flink-operations.html  |  23 ++
 content/gettinghelp.html   |  23 ++
 content/img/flink-forward.png  | Bin 0 -> 19207 bytes
 content/index.html |  50 -
 content/material.html  |  23 ++
 content/news/2014/08/26/release-0.6.html   |  23 ++
 content/news/2014/09/26/release-0.6.1.html |  23 ++
 content/news/2014/10/03/upcoming_events.html   |  23 ++
 content/news/2014/11/04/release-0.7.0.html |  23 ++
 content/news/2014/11/18/hadoop-compatibility.html  |  23 ++
 content/news/2015/01/06/december-in-flink.html |  23 ++
 content/news/2015/01/21/release-0.8.html   |  23 ++
 content/news/2015/02/04/january-in-flink.html  |  23 ++
 content/news/2015/02/09/streaming-example.html |  23 ++
 .../news/2015/03/02/february-2015-in-flink.html|  23 ++
 .../13/peeking-into-Apache-Flinks-Engine-Room.html |  23 ++
 content/news/2015/04/07/march-in-flink.html|  23 ++
 .../news/2015/04/13/release-0.9.0-milestone1.html  |  23 ++
 .../2015/05/11/Juggling-with-Bits-and-Bytes.html   |  23 ++
 .../news/2015/05/14/Community-update-April.html|  23 ++
 .../24/announcing-apac

[flink-web] 01/02: Update Roadmap after the release of Flink 1.9.

2019-09-05 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit af7bdfc71911b461a0579f94a3a9651249621145
Author: Marta Paes Moreira 
AuthorDate: Wed Sep 4 08:22:22 2019 +0200

Update Roadmap after the release of Flink 1.9.

* Remove finished features.
* Add newly started and planned efforts.

Co-Authored-By: Till Rohrmann 

This closes #260.
---
 roadmap.md | 128 ++---
 1 file changed, 55 insertions(+), 73 deletions(-)

diff --git a/roadmap.md b/roadmap.md
index 2993b0b..46a0933 100644
--- a/roadmap.md
+++ b/roadmap.md
@@ -22,17 +22,17 @@ under the License.
 
 
 
-{% toc %}
+{% toc %} 
 
 **Preamble:** This is not an authoritative roadmap in the sense of a strict 
plan with a specific
-timeline. Rather, we, the community, share our vision for the future and give 
an overview of the bigger
+timeline. Rather, we — the community — share our vision for the future and 
give an overview of the bigger
 initiatives that are going on and are receiving attention. This roadmap shall 
give users and
 contributors an understanding where the project is going and what they can 
expect to come.
 
 The roadmap is continuously updated. New features and efforts should be added 
to the roadmap once
 there is consensus that they will happen and what they will roughly look like 
for the user.
 
-**Last Update:** 2019-05-08
+**Last Update:** 2019-09-04
 
 # Analytics, Applications, and the roles of DataStream, DataSet, and Table API
 
@@ -41,38 +41,36 @@ Flink views stream processing as a [unifying paradigm for 
data processing]({{ si
 
   - The **Table API / SQL** is becoming the primary API for analytical use 
cases, in a unified way
 across batch and streaming. To support analytical use cases in a more 
streamlined fashion,
-the API is extended with additional functions 
([FLIP-29](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739)).
+the API is being extended with more convenient multi-row/column operations 
([FLIP-29](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739)).
 
-Like SQL, the Table API is *declarative*, operates on a *logical schema*, 
and applies *automatic optimization*.
+- Like SQL, the Table API is *declarative*, operates on a *logical 
schema*, and applies *automatic optimization*.
 Because of these properties, that API does not give direct access to time 
and state.
 
+- The Table API is also the foundation for the Machine Learning (ML) 
efforts inititated in 
([FLIP-39](https://cwiki.apache.org/confluence/display/FLINK/FLIP-39+Flink+ML+pipeline+and+ML+libs)),
 that will allow users to easily build, persist and serve 
([FLINK-13167](https://issues.apache.org/jira/browse/FLINK-13167)) ML 
pipelines/workflows through a set of abstract core interfaces.
+
   - The **DataStream API** is the primary API for data-driven applications and 
data pipelines.
 It uses *physical data types* (Java/Scala classes) and there is no 
automatic rewriting.
-The applications have explicit control over *time* and *state* (state, 
triggers, proc. fun.).
-
-In the long run, the DataStream API should fully subsume the DataSet API 
through *bounded streams*.
+The applications have explicit control over *time* and *state* (state, 
triggers, proc fun.). 
+In the long run, the DataStream API will fully subsume the DataSet API 
through *bounded streams*.
 
 # Batch and Streaming Unification
 
-Flink's approach is to cover batch and streaming by the same APIs, on a 
streaming runtime.
+Flink's approach is to cover batch and streaming by the same APIs on a 
streaming runtime.
 [This blog post]({{ site.baseurl 
}}/news/2019/02/13/unified-batch-streaming-blink.html)
-gives an introduction to the unification effort. 
+gives an introduction to the unification effort.
 
 The biggest user-facing parts currently ongoing are:
 
-  - Table API restructuring 
[FLIP-32](https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions)
-that decouples the Table API from batch/streaming specific environments 
and dependencies.
+  - Table API restructuring 
([FLIP-32](https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions))
+that decouples the Table API from batch/streaming specific environments 
and dependencies. Some key parts of the FLIP are completed, such as the modular 
decoupling of expression parsing and the removal of Scala dependencies, and the 
next step is to unify the function stack 
([FLINK-12710](https://issues.apache.org/jira/browse/FLINK-12710)).
+
+  - The new source interfaces generalize across batch and streaming, making 
every connector usable as a batch and streaming data source 
([FLIP-27](https://cwiki.apache.org

[flink-web] 02/02: Rebuild website

2019-09-05 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 6b0ffa71a32211eb9ad5f2c9cab5162e88db6fcd
Author: Fabian Hueske 
AuthorDate: Thu Sep 5 11:35:56 2019 +0200

Rebuild website
---
 content/roadmap.html | 147 +--
 1 file changed, 71 insertions(+), 76 deletions(-)

diff --git a/content/roadmap.html b/content/roadmap.html
index 18cbcc8..98b983a 100644
--- a/content/roadmap.html
+++ b/content/roadmap.html
@@ -184,23 +184,25 @@ under the License.
   Batch and Streaming 
Unification
   Fast Batch (Bounded 
Streams)
   Stream Processing Use 
Cases
-  Deployment, Scaling, 
Security
+  Deployment, Scaling and 
Security
+  Resource Management and 
Configuration
   Ecosystem
-  Connectors  Formats
+  Non-JVM Languages (Python)
+  Connectors and Formats
   Miscellaneous
 
 
 
 
 Preamble: This is not an authoritative roadmap in the 
sense of a strict plan with a specific
-timeline. Rather, we, the community, share our vision for the future and give 
an overview of the bigger
+timeline. Rather, we — the community — share our vision for the future and 
give an overview of the bigger
 initiatives that are going on and are receiving attention. This roadmap shall 
give users and
 contributors an understanding where the project is going and what they can 
expect to come.
 
 The roadmap is continuously updated. New features and efforts should be 
added to the roadmap once
 there is consensus that they will happen and what they will roughly look like 
for the user.
 
-Last Update: 2019-05-08
+Last Update: 2019-09-04
 
 Analytics,
 Applications, and the roles of DataStream, DataSet, and Table API
 
@@ -211,23 +213,29 @@ there is consensus that they will happen and what they 
will roughly look like fo
   
 The Table API / SQL is becoming the primary API for 
analytical use cases, in a unified way
 across batch and streaming. To support analytical use cases in a more 
streamlined fashion,
-the API is extended with additional functions (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739;>FLIP-29).
+the API is being extended with more convenient multi-row/column operations (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739;>FLIP-29).
 
-Like SQL, the Table API is declarative, operates on a 
logical schema, and applies automatic optimization.
+
+  
+Like SQL, the Table API is declarative, operates on a 
logical schema, and applies automatic optimization.
 Because of these properties, that API does not give direct access to time and 
state.
+  
+  
+The Table API is also the foundation for the Machine Learning (ML) 
efforts inititated in (https://cwiki.apache.org/confluence/display/FLINK/FLIP-39+Flink+ML+pipeline+and+ML+libs;>FLIP-39),
 that will allow users to easily build, persist and serve (https://issues.apache.org/jira/browse/FLINK-13167;>FLINK-13167) ML 
pipelines/workflows through a set of abstract core interfaces.
+  
+
   
   
 The DataStream API is the primary API for data-driven 
applications and data pipelines.
 It uses physical data types (Java/Scala classes) and there is no 
automatic rewriting.
-The applications have explicit control over time and state 
(state, triggers, proc. fun.).
-
-In the long run, the DataStream API should fully subsume the DataSet 
API through bounded streams.
+The applications have explicit control over time and state 
(state, triggers, proc fun.). 
+In the long run, the DataStream API will fully subsume the DataSet API through 
bounded streams.
   
 
 
 Batch and Streaming Unification
 
-Flink’s approach is to cover batch and streaming by the same APIs, on a 
streaming runtime.
+Flink’s approach is to cover batch and streaming by the same APIs on a 
streaming runtime.
 This blog 
post
 gives an introduction to the unification effort.
 
@@ -235,23 +243,20 @@ gives an introduction to the unification effort.
 
 
   
-Table API restructuring https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions;>FLIP-32
-that decouples the Table API from batch/streaming specific environments and 
dependencies.
+Table API restructuring (https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions;>FLIP-32)
+that decouples the Table API from batch/streaming specific environments and 
dependencies. Some key parts of the FLIP are completed, such as the modular 
decoupling of expression parsing and the removal of Scala dependencies, and the 
next step is to unify the function stack (https://issues.apache.org/jira/browse/FLINK-12710;>FLINK-12710).
   
   
-The new source interfaces https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface;>FLIP-27
-generalize across bat

[flink-web] branch asf-site updated (e63933b -> 6b0ffa7)

2019-09-05 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from e63933b  Rebuild website
 new af7bdfc  Update Roadmap after the release of Flink 1.9.
 new 6b0ffa7  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 content/roadmap.html | 147 +--
 roadmap.md   | 128 +++-
 2 files changed, 126 insertions(+), 149 deletions(-)



[flink] branch release-1.8 updated: [hotfix][docs] Minor fixes in operations playground.

2019-09-03 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new b79c199  [hotfix][docs] Minor fixes in operations playground.
b79c199 is described below

commit b79c19948562ad7ca73e4230a8a817429c1a0381
Author: Fabian Hueske 
AuthorDate: Tue Sep 3 10:11:37 2019 +0200

[hotfix][docs] Minor fixes in operations playground.

[ci skip]
---
 docs/tutorials/docker-playgrounds/flink-operations-playground.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/tutorials/docker-playgrounds/flink-operations-playground.md 
b/docs/tutorials/docker-playgrounds/flink-operations-playground.md
index 3c5effc..df16f95 100644
--- a/docs/tutorials/docker-playgrounds/flink-operations-playground.md
+++ b/docs/tutorials/docker-playgrounds/flink-operations-playground.md
@@ -82,7 +82,7 @@ output of the Flink job should show 1000 views per page and 
window.
 The playground environment is set up in just a few steps. We will walk you 
through the necessary 
 commands and show how to validate that everything is running correctly.
 
-We assume that you have that you have [docker](https://docs.docker.com/) 
(1.12+) and
+We assume that you have [Docker](https://docs.docker.com/) (1.12+) and
 [docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
 
 The required configuration files are available in the 
@@ -204,7 +204,7 @@ docker-compose exec kafka kafka-console-consumer.sh \
 
 Now that you learned how to interact with Flink and the Docker containers, 
let's have a look at 
 some common operational tasks that you can try out on our playground.
-All of these tasks are independent of each other, i.e.i you can perform them 
in any order. 
+All of these tasks are independent of each other, i.e. you can perform them in 
any order. 
 Most tasks can be executed via the [CLI](#flink-cli) and the [REST 
API](#flink-rest-api).
 
 ### Listing Running Jobs
@@ -277,7 +277,7 @@ an external resource).
 docker-compose kill taskmanager
 {% endhighlight %}
 
-After a few seconds, Flink will notice the loss of the TaskManager, cancel the 
affected Job, and 
+After a few seconds, the Flink Master will notice the loss of the TaskManager, 
cancel the affected Job, and 
 immediately resubmit it for recovery.
 When the Job gets restarted, its tasks remain in the `SCHEDULED` state, which 
is indicated by the 
 counts in the gray colored square (see screenshot below).



[flink] branch release-1.9 updated: [hotfix][docs] Minor fixes in operations playground.

2019-09-03 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new a2e90ff  [hotfix][docs] Minor fixes in operations playground.
a2e90ff is described below

commit a2e90ff0104875b8fb76030ba7a13877bc55973f
Author: Fabian Hueske 
AuthorDate: Tue Sep 3 10:01:00 2019 +0200

[hotfix][docs] Minor fixes in operations playground.

[ci skip]
---
 .../docker-playgrounds/flink-operations-playground.md   | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/docs/getting-started/docker-playgrounds/flink-operations-playground.md 
b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
index b3c4f24..bb720b4 100644
--- a/docs/getting-started/docker-playgrounds/flink-operations-playground.md
+++ b/docs/getting-started/docker-playgrounds/flink-operations-playground.md
@@ -87,7 +87,7 @@ output of the Flink job should show 1000 views per page and 
window.
 The playground environment is set up in just a few steps. We will walk you 
through the necessary 
 commands and show how to validate that everything is running correctly.
 
-We assume that you have that you have [docker](https://docs.docker.com/) 
(1.12+) and
+We assume that you have [Docker](https://docs.docker.com/) (1.12+) and
 [docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
 
 The required configuration files are available in the 
@@ -209,7 +209,7 @@ docker-compose exec kafka kafka-console-consumer.sh \
 
 Now that you learned how to interact with Flink and the Docker containers, 
let's have a look at 
 some common operational tasks that you can try out on our playground.
-All of these tasks are independent of each other, i.e.i you can perform them 
in any order. 
+All of these tasks are independent of each other, i.e. you can perform them in 
any order. 
 Most tasks can be executed via the [CLI](#flink-cli) and the [REST 
API](#flink-rest-api).
 
 ### Listing Running Jobs
@@ -282,7 +282,7 @@ an external resource).
 docker-compose kill taskmanager
 {% endhighlight %}
 
-After a few seconds, Flink will notice the loss of the TaskManager, cancel the 
affected Job, and 
+After a few seconds, the Flink Master will notice the loss of the TaskManager, 
cancel the affected Job, and 
 immediately resubmit it for recovery.
 When the Job gets restarted, its tasks remain in the `SCHEDULED` state, which 
is indicated by the 
 purple colored squares (see screenshot below).
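
The failure/recovery scenario this patch documents can be reproduced roughly as follows.
The kill command is quoted verbatim from the playground docs; the REST call and the
restart command are a sketch under the same default-port assumption as above:

    docker-compose kill taskmanager         # simulate a TaskManager failure
    curl -s localhost:8081/jobs/overview    # the job's tasks wait in SCHEDULED state
    docker-compose up -d taskmanager        # restart; the job resumes from the latest checkpoint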



[flink-web] 01/02: [hotfix] Update redirected link to new location.

2019-09-02 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit f176927f9f35cf69691e71a117fd3301f5a6ef00
Author: Fabian Hueske 
AuthorDate: Mon Sep 2 13:43:38 2019 +0200

[hotfix] Update redirected link to new location.
---
 _includes/navbar.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_includes/navbar.html b/_includes/navbar.html
index 8b95b77..7df4993 100755
--- a/_includes/navbar.html
+++ b/_includes/navbar.html
@@ -65,7 +65,7 @@
 
 
 
-  {{ 
site.data.i18n[page.language].tutorials }} 
+  {{ 
site.data.i18n[page.language].tutorials }} 
 
 
 



[flink-web] 02/02: Rebuild website

2019-09-02 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit e498364e8e2325c044e08f032688840e04b9a6ff
Author: Fabian Hueske 
AuthorDate: Mon Sep 2 13:45:10 2019 +0200

Rebuild website
---
 content/2019/05/03/pulsar-flink.html  |  2 +-
 content/2019/05/14/temporal-tables.html   |  2 +-
 content/2019/05/19/state-ttl.html |  2 +-
 content/2019/06/05/flink-network-stack.html   |  2 +-
 content/2019/06/26/broadcast-state.html   |  2 +-
 content/2019/07/23/flink-network-stack-2.html |  2 +-
 content/blog/feed.xml | 15 ++-
 content/blog/index.html   |  2 +-
 content/blog/page2/index.html |  2 +-
 content/blog/page3/index.html |  2 +-
 content/blog/page4/index.html |  2 +-
 content/blog/page5/index.html |  2 +-
 content/blog/page6/index.html |  2 +-
 content/blog/page7/index.html |  2 +-
 content/blog/page8/index.html |  2 +-
 content/blog/page9/index.html |  2 +-
 content/blog/release_1.0.0-changelog_known_issues.html|  2 +-
 content/blog/release_1.1.0-changelog.html |  2 +-
 content/blog/release_1.2.0-changelog.html |  2 +-
 content/blog/release_1.3.0-changelog.html |  2 +-
 content/community.html|  2 +-
 content/contributing/code-style-and-quality-common.html   |  2 +-
 .../contributing/code-style-and-quality-components.html   |  2 +-
 .../contributing/code-style-and-quality-formatting.html   |  2 +-
 content/contributing/code-style-and-quality-java.html |  2 +-
 content/contributing/code-style-and-quality-preamble.html |  2 +-
 .../code-style-and-quality-pull-requests.html |  2 +-
 content/contributing/code-style-and-quality-scala.html|  2 +-
 content/contributing/contribute-code.html |  2 +-
 content/contributing/contribute-documentation.html|  2 +-
 content/contributing/how-to-contribute.html   |  2 +-
 content/contributing/improve-website.html |  2 +-
 content/contributing/reviewing-prs.html   |  2 +-
 content/documentation.html|  2 +-
 content/downloads.html|  2 +-
 content/ecosystem.html|  2 +-
 content/faq.html  |  2 +-
 content/features/2017/07/04/flink-rescalable-state.html   |  2 +-
 .../features/2018/01/30/incremental-checkpointing.html|  2 +-
 .../2018/03/01/end-to-end-exactly-once-apache-flink.html  |  2 +-
 content/features/2019/03/11/prometheus-monitoring.html|  2 +-
 content/flink-applications.html   |  2 +-
 content/flink-architecture.html   |  2 +-
 content/flink-operations.html |  2 +-
 content/gettinghelp.html  |  2 +-
 content/index.html|  2 +-
 content/material.html |  2 +-
 content/news/2014/08/26/release-0.6.html  |  2 +-
 content/news/2014/09/26/release-0.6.1.html|  2 +-
 content/news/2014/10/03/upcoming_events.html  |  2 +-
 content/news/2014/11/04/release-0.7.0.html|  2 +-
 content/news/2014/11/18/hadoop-compatibility.html |  2 +-
 content/news/2015/01/06/december-in-flink.html|  2 +-
 content/news/2015/01/21/release-0.8.html  |  2 +-
 content/news/2015/02/04/january-in-flink.html |  2 +-
 content/news/2015/02/09/streaming-example.html|  2 +-
 content/news/2015/03/02/february-2015-in-flink.html   |  2 +-
 .../03/13/peeking-into-Apache-Flinks-Engine-Room.html |  2 +-
 content/news/2015/04/07/march-in-flink.html   |  2 +-
 content/news/2015/04/13/release-0.9.0-milestone1.html |  2 +-
 content/news/2015/05/11/Juggling-with-Bits-and-Bytes.html |  2 +-
 content/news/2015/05/14/Community-update-April.html   |  2 +-
 .../2015/06/24/announcing-apache-flink-0.9.0-release.html |  2 +-
 content/news/2015/08/24/introducing-flink-gelly.html  |  2 +-
 content/news/2015/09/01/release-0.9.1.html|  2 +-
 content/news/2015/09/03/flink-forward.html|  2 +-
 content/news/2015/09/16/off-heap-memory.html  |  2 +-
 content/news/2015/11/16/release-0.10.0.html   |  2 +-
 content/news/2015/11/27/release-0.10.1.html   |  2 +-
 content/news/2015/12/04/Introducing

[flink-web] branch asf-site updated (1c7a290 -> e498364)

2019-09-02 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 1c7a290  Rebuild website
 new f176927  [hotfix] Update redirected link to new location.
 new e498364  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _includes/navbar.html |  2 +-
 content/2019/05/03/pulsar-flink.html  |  2 +-
 content/2019/05/14/temporal-tables.html   |  2 +-
 content/2019/05/19/state-ttl.html |  2 +-
 content/2019/06/05/flink-network-stack.html   |  2 +-
 content/2019/06/26/broadcast-state.html   |  2 +-
 content/2019/07/23/flink-network-stack-2.html |  2 +-
 content/blog/feed.xml | 15 ++-
 content/blog/index.html   |  2 +-
 content/blog/page2/index.html |  2 +-
 content/blog/page3/index.html |  2 +-
 content/blog/page4/index.html |  2 +-
 content/blog/page5/index.html |  2 +-
 content/blog/page6/index.html |  2 +-
 content/blog/page7/index.html |  2 +-
 content/blog/page8/index.html |  2 +-
 content/blog/page9/index.html |  2 +-
 content/blog/release_1.0.0-changelog_known_issues.html|  2 +-
 content/blog/release_1.1.0-changelog.html |  2 +-
 content/blog/release_1.2.0-changelog.html |  2 +-
 content/blog/release_1.3.0-changelog.html |  2 +-
 content/community.html|  2 +-
 content/contributing/code-style-and-quality-common.html   |  2 +-
 .../contributing/code-style-and-quality-components.html   |  2 +-
 .../contributing/code-style-and-quality-formatting.html   |  2 +-
 content/contributing/code-style-and-quality-java.html |  2 +-
 content/contributing/code-style-and-quality-preamble.html |  2 +-
 .../code-style-and-quality-pull-requests.html |  2 +-
 content/contributing/code-style-and-quality-scala.html|  2 +-
 content/contributing/contribute-code.html |  2 +-
 content/contributing/contribute-documentation.html|  2 +-
 content/contributing/how-to-contribute.html   |  2 +-
 content/contributing/improve-website.html |  2 +-
 content/contributing/reviewing-prs.html   |  2 +-
 content/documentation.html|  2 +-
 content/downloads.html|  2 +-
 content/ecosystem.html|  2 +-
 content/faq.html  |  2 +-
 content/features/2017/07/04/flink-rescalable-state.html   |  2 +-
 .../features/2018/01/30/incremental-checkpointing.html|  2 +-
 .../2018/03/01/end-to-end-exactly-once-apache-flink.html  |  2 +-
 content/features/2019/03/11/prometheus-monitoring.html|  2 +-
 content/flink-applications.html   |  2 +-
 content/flink-architecture.html   |  2 +-
 content/flink-operations.html |  2 +-
 content/gettinghelp.html  |  2 +-
 content/index.html|  2 +-
 content/material.html |  2 +-
 content/news/2014/08/26/release-0.6.html  |  2 +-
 content/news/2014/09/26/release-0.6.1.html|  2 +-
 content/news/2014/10/03/upcoming_events.html  |  2 +-
 content/news/2014/11/04/release-0.7.0.html|  2 +-
 content/news/2014/11/18/hadoop-compatibility.html |  2 +-
 content/news/2015/01/06/december-in-flink.html|  2 +-
 content/news/2015/01/21/release-0.8.html  |  2 +-
 content/news/2015/02/04/january-in-flink.html |  2 +-
 content/news/2015/02/09/streaming-example.html|  2 +-
 content/news/2015/03/02/february-2015-in-flink.html   |  2 +-
 .../03/13/peeking-into-Apache-Flinks-Engine-Room.html |  2 +-
 content/news/2015/04/07/march-in-flink.html   |  2 +-
 content/news/2015/04/13/release-0.9.0-milestone1.html |  2 +-
 content/news/2015/05/11/Juggling-with-Bits-and-Bytes.html |  2 +-
 content/news/2015/05/14/Community-update-April.html   |  2 +-
 .../2015/06/24/announcing-apache-flink-0.9.0-release.html |  2 +-
 content/news/2015/08/24/introducing-flink-gelly.html  |  2 +-
 content/news/2015/09/01/release-0.9.1.html

[flink] branch master updated: [hotfix][docs] Add missing double-quote to redirect.

2019-08-28 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 5df5427  [hotfix][docs] Add missing double-quote to redirect.
5df5427 is described below

commit 5df542705e18777ce917204dc592dafe2b3c6abf
Author: Fabian Hueske 
AuthorDate: Wed Aug 28 18:10:44 2019 +0200

[hotfix][docs] Add missing double-quote to redirect.
---
 docs/redirects/tutorials_datastream_api.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/redirects/tutorials_datastream_api.md 
b/docs/redirects/tutorials_datastream_api.md
index fdf39af..2d2dafc 100644
--- a/docs/redirects/tutorials_datastream_api.md
+++ b/docs/redirects/tutorials_datastream_api.md
@@ -1,5 +1,5 @@
 ---
-title: "DataStream API
+title: "DataStream API"
 layout: redirect
 redirect: /getting-started/tutorials/datastream_api.html
 permalink: /tutorials/datastream_api.html
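
The bug fixed here is an unbalanced double-quote in a Jekyll front matter line. A
hypothetical shell check (not part of Flink's tooling) that would have flagged it:

    for f in docs/redirects/*.md; do
      awk -v f="$f" '{ n = gsub(/"/, "\"") } n % 2 { printf "%s:%d: unbalanced quote\n", f, NR }' "$f"
    done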



[flink] branch release-1.9 updated: [FLINK-13875][docs] Add missing redirects to the documentation.

2019-08-28 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 7bba0b3  [FLINK-13875][docs] Add missing redirects to the documentation.
7bba0b3 is described below

commit 7bba0b32e779612343e069cdbab15adc77d51c0e
Author: Seth Wiesman 
AuthorDate: Tue Aug 27 09:41:31 2019 -0500

[FLINK-13875][docs] Add missing redirects to the documentation.

This closes #9544.

[ci skip]
---
 docs/redirects/examples_index.md | 24 
 docs/redirects/tutorials_datastream_api.md   | 24 
 docs/redirects/tutorials_flink_on_windows.md | 24 
 docs/redirects/tutorials_local_setup.md  | 24 
 4 files changed, 96 insertions(+)

diff --git a/docs/redirects/examples_index.md b/docs/redirects/examples_index.md
new file mode 100644
index 000..5e12875
--- /dev/null
+++ b/docs/redirects/examples_index.md
@@ -0,0 +1,24 @@
+---
+title: "Examples"
+layout: redirect
+redirect: /getting-started/examples/index.html
+permalink: /examples/index.html
+---
+
\ No newline at end of file
diff --git a/docs/redirects/tutorials_datastream_api.md 
b/docs/redirects/tutorials_datastream_api.md
new file mode 100644
index 000..2d2dafc
--- /dev/null
+++ b/docs/redirects/tutorials_datastream_api.md
@@ -0,0 +1,24 @@
+---
+title: "DataStream API"
+layout: redirect
+redirect: /getting-started/tutorials/datastream_api.html
+permalink: /tutorials/datastream_api.html
+---
+
diff --git a/docs/redirects/tutorials_flink_on_windows.md 
b/docs/redirects/tutorials_flink_on_windows.md
new file mode 100644
index 000..621db15
--- /dev/null
+++ b/docs/redirects/tutorials_flink_on_windows.md
@@ -0,0 +1,24 @@
+---
+title: "Flink On Windows"
+layout: redirect
+redirect: /getting-started/tutorials/flink_on_windows.html
+permalink: /tutorials/flink_on_windows.html
+---
+
\ No newline at end of file
diff --git a/docs/redirects/tutorials_local_setup.md 
b/docs/redirects/tutorials_local_setup.md
new file mode 100644
index 000..bee5330
--- /dev/null
+++ b/docs/redirects/tutorials_local_setup.md
@@ -0,0 +1,24 @@
+---
+title: "Local Setup"
+layout: redirect
+redirect: /getting-started/tutorials/local_setup.html
+permalink: /tutorials/local_setup.html
+---
+
\ No newline at end of file



[flink] branch master updated: [FLINK-13875][docs] Add missing redirects to the documentation.

2019-08-28 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new d93e6b0  [FLINK-13875][docs] Add missing redirects to the documentation.
d93e6b0 is described below

commit d93e6b01c9c495318d8e10348b2110f588092042
Author: Seth Wiesman 
AuthorDate: Tue Aug 27 09:41:31 2019 -0500

[FLINK-13875][docs] Add missing redirects to the documentation.

This closes #9544.

[ci skip]
---
 docs/redirects/examples_index.md | 24 
 docs/redirects/tutorials_datastream_api.md   | 24 
 docs/redirects/tutorials_flink_on_windows.md | 24 
 docs/redirects/tutorials_local_setup.md  | 24 
 4 files changed, 96 insertions(+)

diff --git a/docs/redirects/examples_index.md b/docs/redirects/examples_index.md
new file mode 100644
index 000..5e12875
--- /dev/null
+++ b/docs/redirects/examples_index.md
@@ -0,0 +1,24 @@
+---
+title: "Examples"
+layout: redirect
+redirect: /getting-started/examples/index.html
+permalink: /examples/index.html
+---
+
\ No newline at end of file
diff --git a/docs/redirects/tutorials_datastream_api.md 
b/docs/redirects/tutorials_datastream_api.md
new file mode 100644
index 000..fdf39af
--- /dev/null
+++ b/docs/redirects/tutorials_datastream_api.md
@@ -0,0 +1,24 @@
+---
+title: "DataStream API
+layout: redirect
+redirect: /getting-started/tutorials/datastream_api.html
+permalink: /tutorials/datastream_api.html
+---
+
diff --git a/docs/redirects/tutorials_flink_on_windows.md 
b/docs/redirects/tutorials_flink_on_windows.md
new file mode 100644
index 000..621db15
--- /dev/null
+++ b/docs/redirects/tutorials_flink_on_windows.md
@@ -0,0 +1,24 @@
+---
+title: "Flink On Windows"
+layout: redirect
+redirect: /getting-started/tutorials/flink_on_windows.html
+permalink: /tutorials/flink_on_windows.html
+---
+
\ No newline at end of file
diff --git a/docs/redirects/tutorials_local_setup.md 
b/docs/redirects/tutorials_local_setup.md
new file mode 100644
index 000..bee5330
--- /dev/null
+++ b/docs/redirects/tutorials_local_setup.md
@@ -0,0 +1,24 @@
+---
+title: "Local Setup"
+layout: redirect
+redirect: /getting-started/tutorials/local_setup.html
+permalink: /tutorials/local_setup.html
+---
+
\ No newline at end of file
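
Each of the stubs added above is a short Jekyll page. As a sketch, the local_setup stub
could be (re)created with a heredoc; the front matter fields are copied from the patch
itself, only the heredoc wrapper is illustrative:

    cat > docs/redirects/tutorials_local_setup.md <<'EOF'
    ---
    title: "Local Setup"
    layout: redirect
    redirect: /getting-started/tutorials/local_setup.html
    permalink: /tutorials/local_setup.html
    ---
    EOF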


