This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 8e9cf5f  Minor fixes to release 1.14 blog post
8e9cf5f is described below

commit 8e9cf5f90a79af72d48d5048d77cc7de7bf979e7
Author: Stephan Ewen <se...@apache.org>
AuthorDate: Wed Sep 29 17:35:00 2021 +0200

    Minor fixes to release 1.14 blog post
---
 _posts/2021-09-29-release-1.14.0.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/_posts/2021-09-29-release-1.14.0.md b/_posts/2021-09-29-release-1.14.0.md
index af93b52..ad1c42f 100644
--- a/_posts/2021-09-29-release-1.14.0.md
+++ b/_posts/2021-09-29-release-1.14.0.md
@@ -9,6 +9,7 @@ authors:
   twitter: "StephanEwen"
 - joemoe:
   name: "Johannes Moser"
+  twitter: "joemoeAT"
 
 excerpt: The Apache Flink community is excited to announce the release of Flink 1.14.0! More than 200 contributors worked on over 1,000 issues. The release brings exciting new features like a more seamless streaming/batch integration, automatic network memory tuning, a hybrid source to switch data streams between storage systems (e.g., Kafka/S3), fine-grained resource management, PyFlink performance and debugging enhancements, and a Pulsar connector.
 
@@ -21,10 +22,10 @@ over 1,000 issues. We are proud of how this community is consistently moving the
 
 This release brings many new features and improvements in areas such as the SQL API, more connector support, checkpointing, and PyFlink.
 A major area of changes in this release is the integrated streaming & batch experience. We believe
-that, in practice, unbounded stream processing goes hand-in-hand with bounded- and batch processing tasks in practice,
+that, in practice, unbounded stream processing goes hand-in-hand with bounded- and batch processing tasks,
 because many use cases require processing historic data from various sources alongside streaming data.
 Examples are data exploration when developing new applications, bootstrapping state for new applications, training
-models to be applied in a streaming application, re-processing data after fixes/upgrades, and .
+models to be applied in a streaming application, or re-processing data after fixes/upgrades.
 
 In Flink 1.14, we finally made it possible to **mix bounded and unbounded streams in an application**:
 Flink now supports taking checkpoints of applications that are partially running and partially finished (some
@@ -34,7 +35,7 @@ when reaching their end to ensure smooth committing of results in sinks.
 The **batch execution mode now supports programs that use a mixture of the DataStream API and the SQL/Table API**
 (previously only pure Table/SQL or DataStream programs).
 
-The unified Source and Sink APIs have gotten an update, and we started **consolidating the connector ecosystem around the unified APIs**. We added a new **hybrid source** can bridge between multiple storage systems.
+The unified Source and Sink APIs have gotten an update, and we started **consolidating the connector ecosystem around the unified APIs**. We added a new **hybrid source** that can bridge between multiple storage systems.
 You can now do things like read old data from Amazon S3 and then switch over to Apache Kafka.
 
 In addition, this release furthers our initiative in making Flink more self-tuning and

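For readers who want a concrete picture of the hybrid source mentioned in the diff above, the following is a rough sketch (not part of this commit or the blog post) of how such a source might be wired up with Flink 1.14's unified Source API. The S3 path, Kafka broker, topic name, and the text-line format class are illustrative assumptions.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.source.hybrid.HybridSource;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineFormat;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class HybridSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Bounded source: historic data in S3 (bucket and path are made-up examples).
        FileSource<String> historicData =
                FileSource.forRecordStreamFormat(
                        new TextLineFormat(), new Path("s3://my-bucket/historic/"))
                    .build();

        // Unbounded source: live events from Kafka (broker and topic are made-up examples).
        KafkaSource<String> liveData =
                KafkaSource.<String>builder()
                    .setBootstrapServers("broker:9092")
                    .setTopics("events")
                    .setStartingOffsets(OffsetsInitializer.earliest())
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

        // The hybrid source reads the bounded file data first,
        // then switches over to the Kafka source.
        HybridSource<String> hybrid =
                HybridSource.builder(historicData)
                    .addSource(liveData)
                    .build();

        env.fromSource(hybrid, WatermarkStrategy.noWatermarks(), "s3-then-kafka")
            .print();

        env.execute("hybrid-source-sketch");
    }
}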