This is an automated email from the ASF dual-hosted git repository.

dwysakowicz pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new b66c9ba  Rebuild website
b66c9ba is described below

commit b66c9bac8ce04ab9e80827a2dff4ea25e8c5b563
Author: Dawid Wysakowicz <[email protected]>
AuthorDate: Wed Sep 29 17:43:42 2021 +0200

    Rebuild website
---
 content/blog/feed.xml                       | 6 +++---
 content/blog/index.html                     | 2 +-
 content/news/2021/09/29/release-1.14.0.html | 8 ++++----
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 5bec2a8..2a22757 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -15,10 +15,10 @@ over 1,000 issues. We are proud of how this community is consistently moving the
 
 &lt;p&gt;This release brings many new features and improvements in areas such as the SQL API, more connector support, checkpointing, and PyFlink.
 A major area of changes in this release is the integrated streaming &amp;amp; batch experience. We believe
-that, in practice, unbounded stream processing goes hand-in-hand with bounded- and batch processing tasks in practice,
+that, in practice, unbounded stream processing goes hand-in-hand with bounded- and batch processing tasks,
 because many use cases require processing historic data from various sources alongside streaming data.
 Examples are data exploration when developing new applications, bootstrapping state for new applications, training
-models to be applied in a streaming application, re-processing data after fixes/upgrades, and .&lt;/p&gt;
+models to be applied in a streaming application, or re-processing data after fixes/upgrades.&lt;/p&gt;
 
 &lt;p&gt;In Flink 1.14, we finally made it possible to &lt;strong&gt;mix bounded and unbounded streams in an application&lt;/strong&gt;:
 Flink now supports taking checkpoints of applications that are partially running and partially finished (some
@@ -28,7 +28,7 @@ when reaching their end to ensure smooth committing of results in sinks.&lt;/p&g
 &lt;p&gt;The &lt;strong&gt;batch execution mode now supports programs that use a mixture of the DataStream API and the SQL/Table API&lt;/strong&gt;
 (previously only pure Table/SQL or DataStream programs).&lt;/p&gt;
 
-&lt;p&gt;The unified Source and Sink APIs have gotten an update, and we started &lt;strong&gt;consolidating the connector ecosystem around the unified APIs&lt;/strong&gt;. We added a new &lt;strong&gt;hybrid source&lt;/strong&gt; can bridge between multiple storage systems.
+&lt;p&gt;The unified Source and Sink APIs have gotten an update, and we started &lt;strong&gt;consolidating the connector ecosystem around the unified APIs&lt;/strong&gt;. We added a new &lt;strong&gt;hybrid source&lt;/strong&gt; that can bridge between multiple storage systems.
 You can now do things like read old data from Amazon S3 and then switch over to Apache Kafka.&lt;/p&gt;
 
 &lt;p&gt;In addition, this release furthers our initiative in making Flink more self-tuning and
diff --git a/content/blog/index.html b/content/blog/index.html
index b9a4989..47751a2 100644
--- a/content/blog/index.html
+++ b/content/blog/index.html
@@ -204,7 +204,7 @@
      <h2 class="blog-title"><a href="/news/2021/09/29/release-1.14.0.html">Apache Flink 1.14.0 Release Announcement</a></h2>
 
       <p>29 Sep 2021
-       Stephan Ewen (<a href="https://twitter.com/StephanEwen">@StephanEwen</a>) &amp; Johannes Moser </p>
+       Stephan Ewen (<a href="https://twitter.com/StephanEwen">@StephanEwen</a>) &amp; Johannes Moser (<a href="https://twitter.com/joemoeAT">@joemoeAT</a>)</p>
 
       <p>The Apache Flink community is excited to announce the release of Flink 1.14.0! More than 200 contributor worked on over 1,000 issues. The release brings exciting new features like a more seamless streaming/batch integration, automatic network memory tuning, a hybrid source to switch data streams between storgage systems (e.g., Kafka/S3), Fine-grained resource management, PyFlink performance and debugging enhancements, and a Pulsar connector.</p>
 
diff --git a/content/news/2021/09/29/release-1.14.0.html b/content/news/2021/09/29/release-1.14.0.html
index c02e8a5..d72b328 100644
--- a/content/news/2021/09/29/release-1.14.0.html
+++ b/content/news/2021/09/29/release-1.14.0.html
@@ -200,7 +200,7 @@
       <p><i></i></p>
 
       <article>
-        <p>29 Sep 2021 Stephan Ewen (<a href="https://twitter.com/StephanEwen">@StephanEwen</a>) &amp; Johannes Moser </p>
+        <p>29 Sep 2021 Stephan Ewen (<a href="https://twitter.com/StephanEwen">@StephanEwen</a>) &amp; Johannes Moser (<a href="https://twitter.com/joemoeAT">@joemoeAT</a>)</p>
 
 <p>The Apache Software Foundation recently released its annual report and Apache Flink once again made
 it on the list of the top 5 most active projects! This remarkable
@@ -209,10 +209,10 @@ over 1,000 issues. We are proud of how this community is consistently moving the
 
 <p>This release brings many new features and improvements in areas such as the SQL API, more connector support, checkpointing, and PyFlink.
 A major area of changes in this release is the integrated streaming &amp; batch experience. We believe
-that, in practice, unbounded stream processing goes hand-in-hand with bounded- and batch processing tasks in practice,
+that, in practice, unbounded stream processing goes hand-in-hand with bounded- and batch processing tasks,
 because many use cases require processing historic data from various sources alongside streaming data.
 Examples are data exploration when developing new applications, bootstrapping state for new applications, training
-models to be applied in a streaming application, re-processing data after fixes/upgrades, and .</p>
+models to be applied in a streaming application, or re-processing data after fixes/upgrades.</p>
 
 <p>In Flink 1.14, we finally made it possible to <strong>mix bounded and unbounded streams in an application</strong>:
 Flink now supports taking checkpoints of applications that are partially running and partially finished (some
@@ -222,7 +222,7 @@ when reaching their end to ensure smooth committing of results in sinks.</p>
 <p>The <strong>batch execution mode now supports programs that use a mixture of the DataStream API and the SQL/Table API</strong>
 (previously only pure Table/SQL or DataStream programs).</p>
 
-<p>The unified Source and Sink APIs have gotten an update, and we started <strong>consolidating the connector ecosystem around the unified APIs</strong>. We added a new <strong>hybrid source</strong> can bridge between multiple storage systems.
+<p>The unified Source and Sink APIs have gotten an update, and we started <strong>consolidating the connector ecosystem around the unified APIs</strong>. We added a new <strong>hybrid source</strong> that can bridge between multiple storage systems.
 You can now do things like read old data from Amazon S3 and then switch over to Apache Kafka.</p>
 
 <p>In addition, this release furthers our initiative in making Flink more self-tuning and
