Repository: flink-web
Updated Branches:
  refs/heads/asf-site 4e75b86eb -> 2ac4d1c8f


Update date in URL and fix link indentation in 1.3 blog post


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/2ac4d1c8
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/2ac4d1c8
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/2ac4d1c8

Branch: refs/heads/asf-site
Commit: 2ac4d1c8ff84e1b074e6be7e78b49dac51198039
Parents: 4e75b86
Author: Robert Metzger <[email protected]>
Authored: Thu Jun 1 16:30:44 2017 +0200
Committer: Robert Metzger <[email protected]>
Committed: Thu Jun 1 16:30:44 2017 +0200

----------------------------------------------------------------------
 _posts/2017-06-01-release-1.3.0.md         | 131 ++++++++++++++++++++++++
 _posts/2017-5-31-release-1.3.0.md          | 131 ------------------------
 content/blog/feed.xml                      |   4 +-
 content/news/2017/06/01/release-1.3.0.html |   4 +-
 4 files changed, 135 insertions(+), 135 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/2ac4d1c8/_posts/2017-06-01-release-1.3.0.md
----------------------------------------------------------------------
diff --git a/_posts/2017-06-01-release-1.3.0.md 
b/_posts/2017-06-01-release-1.3.0.md
new file mode 100644
index 0000000..c1372cf
--- /dev/null
+++ b/_posts/2017-06-01-release-1.3.0.md
@@ -0,0 +1,131 @@
+---
+layout: post
+title:  "Apache Flink 1.3.0 Release Announcement"
+date:   2017-06-01 12:00:00
+author: "Robert Metzger"
+author-twitter: "rmetzger_"
+categories: news
+---
+
+
+The Apache Flink community is pleased to announce the 1.3.0 release. Over the 
past 4 months, the Flink community has been working hard to resolve more than 
680 issues. See the [complete changelog]({{ site.baseurl 
}}/blog/release_1.3.0-changelog.html) for more detail.
+
+This is the fourth major release in the 1.x.y series. It is API compatible 
with the other 1.x.y releases for APIs annotated with the @Public annotation.
+
+Users can now expect Flink releases on a 4-month cycle. At the beginning of 
the 1.3 [release 
cycle](https://cwiki.apache.org/confluence/display/FLINK/Flink+Release+and+Feature+Plan),
 the community decided to follow a strict [time-based release 
model](https://cwiki.apache.org/confluence/display/FLINK/Time-based+releases).
+
+We encourage everyone to download the release and check out the 
[documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.3/). 
Feedback through the [Flink mailing 
lists](http://flink.apache.org/community.html#mailing-lists) is, as always, 
gladly encouraged!
+
+You can find the binaries on the updated [Downloads 
page](http://flink.apache.org/downloads.html). Some highlights of the release 
are listed below.
+
+
+
+# Large State Handling/Recovery
+
+* **Incremental Checkpointing for RocksDB**: It is now possible to checkpoint 
only the difference from the previous successful checkpoint, rather than 
checkpointing the entire application state. This speeds up checkpointing and 
saves disk space, because the individual checkpoints are smaller. 
([FLINK-5053](https://issues.apache.org/jira/browse/FLINK-5053)).
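+
+As a minimal sketch (the checkpoint URI and interval are placeholders), incremental checkpointing is enabled via a flag on the RocksDB state backend:
+
```java
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalCheckpointing {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // The second constructor argument enables incremental checkpoints,
        // so each checkpoint stores only the diff against the previous one.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
        env.enableCheckpointing(60_000); // checkpoint every 60 seconds
    }
}
```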
+
+* **Asynchronous snapshots for heap-based state backends**: The filesystem and 
memory statebackends now also support asynchronous snapshots using a 
copy-on-write HashMap implementation. Asynchronous snapshotting makes Flink 
more resilient to slow storage systems and expensive serialization. The time an 
operator blocks on a snapshot is reduced to a minimum 
([FLINK-6048](https://issues.apache.org/jira/browse/FLINK-6048), 
[FLINK-5715](https://issues.apache.org/jira/browse/FLINK-5715)).
+
+* **Allow upgrades to state serializers:** Users can now upgrade serializers, 
while keeping their application state. One use case of this is upgrading custom 
serializers used for managed operator state/keyed state. Also, registration 
order for POJO types/Kryo types is no longer fixed 
([Documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/state.html#custom-serialization-for-managed-state),
 [FLINK-6178](https://issues.apache.org/jira/browse/FLINK-6178)).
+
+* **Recover job state at the granularity of operator**: Before Flink 1.3, 
operator state was bound to Flink’s internal "Task" representation. This made 
it hard to change a job’s topology while keeping its state around. With this 
change, users are allowed to do more topology changes (un-chain operators) by 
restoring state into logical operators instead of “Tasks” 
([FLINK-5892](https://issues.apache.org/jira/browse/FLINK-5892)).
+
+* **Fine-grained recovery** (beta): Instead of restarting the complete 
ExecutionGraph in case of a task failure, Flink is now able to restart only the 
affected subgraph and thereby significantly decrease recovery time 
([FLINK-4256](https://issues.apache.org/jira/browse/FLINK-4256)).
+
+# DataStream API
+
+* **Side Outputs**: This change allows users to have more than one output 
stream for an operator. Operator metadata, internal system information 
(debugging, performance etc.) or rejected/late elements are potential use-cases 
for this new API feature. **The Window operator is now using this new feature 
for late window elements** ([Side Outputs 
Documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/side_output.html),
 [FLINK-4460](https://issues.apache.org/jira/browse/FLINK-4460)).
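+
+A sketch of the new API (names and input are illustrative): elements that cannot be handled in the main path are tagged into a side output and retrieved separately:
+
```java
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SideOutputExample {
    // An anonymous subclass is used so the element type is captured.
    static final OutputTag<String> REJECTED = new OutputTag<String>("rejected") {};

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        SingleOutputStreamOperator<Integer> parsed = env
            .fromElements("1", "2", "not-a-number")
            .process(new ProcessFunction<String, Integer>() {
                @Override
                public void processElement(String value, Context ctx, Collector<Integer> out) {
                    try {
                        out.collect(Integer.parseInt(value));   // main output
                    } catch (NumberFormatException e) {
                        ctx.output(REJECTED, value);            // side output
                    }
                }
            });
        parsed.print();                           // the parsed integers
        parsed.getSideOutput(REJECTED).print();   // the rejected elements
        env.execute("side-output-example");
    }
}
```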
+
+* **Union Operator State**: Flink 1.2.0 introduced broadcast state 
functionality, but this had not yet been exposed via a public API. Flink 1.3.0 
provides the Union Operator State API for exposing broadcast operator state. 
The union state will send the entire state across all parallel instances to 
each instance on restore, giving each operator a full view of the state 
([FLINK-5991](https://issues.apache.org/jira/browse/FLINK-5991)).
+
+* **Per-Window State**: Previously, the state that a WindowFunction or 
ProcessWindowFunction could access was scoped to the key of the window but not 
the window itself. With this new feature, users can keep window state 
independent of the key 
([FLINK-5929](https://issues.apache.org/jira/browse/FLINK-5929)).
+
+# Deployment and Tooling
+
+* **Flink HistoryServer**: Flink’s 
[HistoryServer](https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/historyserver.html)
 now allows you to query the status and statistics of completed jobs that have 
been archived by a JobManager 
([FLINK-1579](https://issues.apache.org/jira/browse/FLINK-1579)).
+
+* **Watermark Monitoring in Web Front-end**: For easier diagnosis of watermark 
issues, the Flink JobManager front-end now provides a new tab to track the 
watermark of each operator 
([FLINK-3427](https://issues.apache.org/jira/browse/FLINK-3427)).
+
+* **Datadog HTTP Metrics Reporter**: Datadog is a widely-used metrics system, 
and Flink now offers a [Datadog 
reporter](https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/metrics.html#datadog-orgapacheflinkmetricsdatadogdatadoghttpreporter)
 that contacts the Datadog http endpoint directly 
([FLINK-6013](https://issues.apache.org/jira/browse/FLINK-6013)).
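+
+  The reporter is wired up through the metrics configuration in `flink-conf.yaml`; a minimal sketch (the API key is a placeholder):
+
```yaml
metrics.reporters: dghttp
metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
metrics.reporter.dghttp.apikey: <your-datadog-api-key>
```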
+
+* **Network Buffer Configuration**: We finally got rid of the tedious network 
buffer configuration and replaced it with a more generic approach. First of 
all, you may now follow the idiom "more is better" without any penalty on the 
latency which could previously occur due to excessive buffering in incoming and 
outgoing channels. Secondly, instead of defining an absolute number of network 
buffers, we now use fractions of the available JVM memory (10% by default). 
This should cover more use cases by default and may also be tweaked by defining 
a minimum and maximum size.
+
+  → See [Configuring the Network 
Buffers](https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/config.html#configuring-the-network-buffers)
 in the Flink documentation.
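+
+  The fraction-based settings live in `flink-conf.yaml`; the values shown here are the assumed defaults:
+
```yaml
# Fraction of available JVM memory to use for network buffers (10% by default),
# bounded by an absolute minimum and maximum.
taskmanager.network.memory.fraction: 0.1
taskmanager.network.memory.min: 67108864    # 64 MB
taskmanager.network.memory.max: 1073741824  # 1 GB
```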
+
+# Table API / SQL
+
+* **Support for Retractions in Table API / SQL**: As part of our endeavor to 
support continuous queries on [Dynamic 
Tables](http://flink.apache.org/news/2017/04/04/dynamic-tables.html), 
Retraction is an important building block that will enable a whole range of new 
applications which require updating previously-emitted results. Examples for 
such use cases are computation of early results for long-running windows, 
updates due to late arriving data, or maintaining constantly changing results 
similar to materialized views in relational database systems. Flink 1.3.0 
supports retraction for non-windowed aggregates. Results with updates can be 
either converted into a DataStream or materialized to external data stores 
using TableSinks with upsert or retraction support. 
+
+* **Extended support for aggregations in Table API / SQL**: With Flink 1.3.0, 
the Table API and SQL support many more types of aggregations, including
+       * GROUP BY window aggregations in SQL (via the window functions 
[TUMBLE, HOP, and SESSION 
windows](https://issues.apache.org/jira/browse/FLINK-6011)) for both batch and 
streaming.
+
+       * SQL OVER window aggregations (streaming only).
+
+       * Non-windowed aggregations (in streaming with retractions).
+
+       * User-defined aggregation functions for custom aggregation logic.
+
+* **External catalog support**: The Table API & SQL allow registering 
external catalogs. Table API and SQL queries can then access table sources and 
their schemas from these external catalogs without registering the tables one 
by one.
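+
+A windowed GROUP BY aggregation of this kind might be issued as follows (a sketch under assumptions: the `Logins` table and its `rowtime` attribute are taken to be registered already):
+
```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class WindowedSqlExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
        // Hourly login counts per user, using the new TUMBLE group window function.
        Table result = tEnv.sql(
            "SELECT user, TUMBLE_END(rowtime, INTERVAL '1' HOUR) AS hourEnd, COUNT(*) AS cnt " +
            "FROM Logins " +
            "GROUP BY TUMBLE(rowtime, INTERVAL '1' HOUR), user");
    }
}
```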
+
+→ See [the Flink 
documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/table_api.html#group-windows)
 for details about these features.
+
+<div class="alert alert-warning">
+  The Table API / SQL documentation is currently being reworked. The community 
plans to publish the updated docs in the week of June 5th.
+</div>
+
+# Connectors
+
+* **ElasticSearch 5.x support**: The ElasticSearch connectors have been 
restructured to have a common base module and specific modules for ES 1, 2 and 
5, similar to how the Kafka connectors are organized. This will make fixes and 
future improvements available across all ES versions 
([FLINK-4988](https://issues.apache.org/jira/browse/FLINK-4988)).
+
+* **Allow rescaling the Kinesis Consumer**: Flink 1.2.0 introduced rescalable 
state for DataStream programs. With Flink 1.3, the Kinesis Consumer also makes 
use of that engine feature 
([FLINK-4821](https://issues.apache.org/jira/browse/FLINK-4821)).
+
+* **Transparent shard discovery for Kinesis Consumer**: The Kinesis consumer 
can now discover new shards without failing / restarting jobs when a resharding 
is happening ([FLINK-4577](https://issues.apache.org/jira/browse/FLINK-4577)).
+
+* **Allow setting custom start positions for the Kafka consumer**: With this 
change, you can instruct Flink’s Kafka consumer to start reading messages 
from a specific offset 
([FLINK-3123](https://issues.apache.org/jira/browse/FLINK-3123)) or earliest / 
latest offset ([FLINK-4280](https://issues.apache.org/jira/browse/FLINK-4280)) 
without respecting committed offsets in Kafka.
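+
+Sketched with the new consumer methods (topic, schema, and connection properties are placeholders):
+
```java
import java.util.Properties;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaStartPosition {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");

        FlinkKafkaConsumer010<String> consumer =
            new FlinkKafkaConsumer010<>("events", new SimpleStringSchema(), props);
        // Ignore committed offsets and start from the earliest available record;
        // setStartFromLatest() and setStartFromGroupOffsets() are also available.
        consumer.setStartFromEarliest();
        env.addSource(consumer).print();
        env.execute("kafka-start-position");
    }
}
```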
+
+* **Allow opting out of offset committing for the Kafka consumer**: By 
default, Flink's Kafka consumer commits offsets back to the Kafka broker once a 
checkpoint has been completed. This change allows users to disable this 
mechanism ([FLINK-3398](https://issues.apache.org/jira/browse/FLINK-3398)).
+
+# CEP Library
+
+The CEP library has been greatly enhanced and can now accommodate more 
use cases out of the box (expressivity enhancements), make more efficient use 
of the available resources, and adjust to changing runtime conditions, all 
without breaking backwards compatibility of operator state.
+
+Please note that the API of the CEP library has been updated with this 
release. 
+
+Below are some of the main features of the revamped CEP library:
+
+* **Make CEP operators rescalable**: Flink 1.2.0 introduced rescalable state 
for DataStream programs. With Flink 1.3, the CEP library also makes use of that 
engine feature ([FLINK-5420](https://issues.apache.org/jira/browse/FLINK-5420)).
+
+* **State Cleanup and Late element handling**: The CEP library will now output 
late elements using the newly introduced side outputs 
([FLINK-6205](https://issues.apache.org/jira/browse/FLINK-6205), 
[FLINK-5174](https://issues.apache.org/jira/browse/FLINK-5174)).
+
+* **New operators for the CEP library**:
+
+    * Quantifiers (*,+,?) for the pattern API 
([FLINK-3318](https://issues.apache.org/jira/browse/FLINK-3318))
+
+    * Support for different continuity requirements 
([FLINK-6208](https://issues.apache.org/jira/browse/FLINK-6208))
+
+    * Support for iterative conditions 
([FLINK-6197](https://issues.apache.org/jira/browse/FLINK-6197))
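+
+The new quantifiers can be combined in the pattern API roughly like this (the event type and conditions are illustrative):
+
```java
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;

public class CepQuantifiers {
    public static void main(String[] args) {
        // A "start" event followed by one or more "middle" events and an optional "end".
        Pattern<String, ?> pattern = Pattern.<String>begin("start")
            .where(new SimpleCondition<String>() {
                @Override
                public boolean filter(String value) {
                    return value.startsWith("start");
                }
            })
            .followedBy("middle")
            .where(new SimpleCondition<String>() {
                @Override
                public boolean filter(String value) {
                    return value.startsWith("middle");
                }
            })
            .oneOrMore()   // new quantifier: +
            .followedBy("end")
            .optional();   // new quantifier: ?
    }
}
```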
+
+# Gelly Library
+
+* Unified driver for running Gelly examples 
([FLINK-4949](https://issues.apache.org/jira/browse/FLINK-4949)).
+* PageRank algorithm for directed graphs 
([FLINK-4896](https://issues.apache.org/jira/browse/FLINK-4896)).
+* Add Circulant and Echo graph generators 
([FLINK-6393](https://issues.apache.org/jira/browse/FLINK-6393)).
+
+
+<div class="alert alert-warning">
+  There are two <strong>known issues</strong> in Flink 1.3.0. Both will be 
addressed in the next <i>1.3.1</i> release.
+  <br>
+  <ul>
+       <li><a 
href="https://issues.apache.org/jira/browse/FLINK-6783">FLINK-6783</a>: Wrongly 
extracted TypeInformations for <code>WindowedStream::aggregate</code></li>
+       <li><a 
href="https://issues.apache.org/jira/browse/FLINK-6775">FLINK-6775</a>: 
StateDescriptor cannot be shared by multiple subtasks</li>
+  </ul> 
+</div>
+
+# List of Contributors 
+
+According to git shortlog, the following 103 people contributed to the 1.3.0 
release. Thank you to all contributors!
+
+Addison Higham, Alexey Diomin, Aljoscha Krettek, Andrea Sella, Andrey 
Melentyev, Anton Mushin, barcahead, biao.liub, Bowen Li, Chen Qin, Chico Sokol, 
David Anderson, Dawid Wysakowicz, DmytroShkvyra, Fabian Hueske, Fabian Wollert, 
fengyelei, Flavio Pompermaier, FlorianFan, Fokko Driesprong, Geoffrey Mon, 
godfreyhe, gosubpl, Greg Hogan, guowei.mgw, hamstah, Haohui Mai, Hequn Cheng, 
hequn.chq, heytitle, hongyuhong, Jamie Grier, Jark Wu, jingzhang, Jinkui Shi, 
Jin Mingjian, Joerg Schad, Joshua Griffith, Jürgen Thomann, kaibozhou, 
Kathleen Sharp, Ken Geis, kkloudas, Kurt Young, lincoln-lil, lingjinjiang, 
liuyuzhong7, Lorenz Buehmann, manuzhang, Marc Tremblay, Mauro Cortellazzi, Max 
Kuklinski, mengji.fy, Mike Dias, mtunique, Nico Kruber, Omar Erminy, Patrick 
Lucas, paul, phoenixjiangnan, rami-alisawi, Ramkrishna, Rick Cox, Robert 
Metzger, Rodrigo Bonifacio, rtudoran, Seth Wiesman, Shaoxuan Wang, shijinkui, 
shuai.xus, Shuyi Chen, spkavuly, Stefano Bortoli, Stefan Richter, Stephan Ewen, 
Stephen Gran, sunjincheng121, tedyu, Till Rohrmann, tonycox, Tony Wei, twalthr, 
Tzu-Li (Gordon) Tai, Ufuk Celebi, Ventura Del Monte, Vijay Srinivasaraghavan, 
WangTaoTheTonic, wenlong.lwl, xccui, xiaogang.sxg, Xpray, zcb, zentol, 
zhangminglei, Zhenghua Gao, Zhijiang, Zhuoluo Yang, zjureel, Zohar Mizrahi, 
士远, 槿瑜, 淘江, 金竹
+

http://git-wip-us.apache.org/repos/asf/flink-web/blob/2ac4d1c8/_posts/2017-5-31-release-1.3.0.md
----------------------------------------------------------------------
diff --git a/_posts/2017-5-31-release-1.3.0.md 
b/_posts/2017-5-31-release-1.3.0.md
deleted file mode 100644
index c411f06..0000000
--- a/_posts/2017-5-31-release-1.3.0.md
+++ /dev/null
@@ -1,131 +0,0 @@
----
-layout: post
-title:  "Apache Flink 1.3.0 Release Announcement"
-date:   2017-06-01 12:00:00
-author: "Robert Metzger"
-author-twitter: "rmetzger_"
-categories: news
----
-
-
-The Apache Flink community is pleased to announce the 1.3.0 release. Over the 
past 4 months, the Flink community has been working hard to resolve more than 
680 issues. See the [complete changelog]({{ site.baseurl 
}}/blog/release_1.3.0-changelog.html) for more detail.
-
-This is the fourth major release in the 1.x.y series. It is API compatible 
with the other 1.x.y releases for APIs annotated with the @Public annotation.
-
-Users can expect Flink releases now in a 4 month cycle. At the beginning of 
the 1.3 [release 
cycle](https://cwiki.apache.org/confluence/display/FLINK/Flink+Release+and+Feature+Plan),
 the community decided to follow a strict [time-based release 
model](https://cwiki.apache.org/confluence/display/FLINK/Time-based+releases).
-
-We encourage everyone to download the release and check out the 
[documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.3/). 
Feedback through the [Flink mailing 
lists](http://flink.apache.org/community.html#mailing-lists) is, as always, 
gladly encouraged!
-
-You can find the binaries on the updated [Downloads 
page](http://flink.apache.org/downloads.html). Some highlights of the release 
are listed below.
-
-
-
-# Large State Handling/Recovery
-
-* **Incremental Checkpointing for RocksDB**: It is now possible to checkpoint 
only the difference from the previous successful checkpoint, rather than 
checkpointing the entire application state. This speeds up checkpointing and 
saves disk space, because the individual checkpoints are smaller. 
([FLINK-5053](https://issues.apache.org/jira/browse/FLINK-5053)).
-
-* **Asynchronous snapshots for heap-based state backends**: The filesystem and 
memory statebackends now also support asynchronous snapshots using a 
copy-on-write HashMap implementation. Asynchronous snapshotting makes Flink 
more resilient to slow storage systems and expensive serialization. The time an 
operator blocks on a snapshot is reduced to a minimum 
([FLINK-6048](https://issues.apache.org/jira/browse/FLINK-6048), 
[FLINK-5715](https://issues.apache.org/jira/browse/FLINK-5715)).
-
-* **Allow upgrades to state serializers:** Users can now upgrade serializers, 
while keeping their application state. One use case of this is upgrading custom 
serializers used for managed operator state/keyed state. Also, registration 
order for POJO types/Kryo types is now no longer fixed 
([Documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/state.html#custom-serialization-for-managed-state),
 [FLINK-6178](https://issues.apache.org/jira/browse/FLINK-6178)).
-
-* **Recover job state at the granularity of operator**: Before Flink 1.3, 
operator state was bound to Flink’s internal "Task" representation. This made 
it hard to change a job’s topology while keeping its state around. With this 
change, users are allowed to do more topology changes (un-chain operators) by 
restoring state into logical operators instead of “Tasks” 
([FLINK-5892](https://issues.apache.org/jira/browse/FLINK-5892)).
-
-* **Fine-grained recovery** (beta): Instead of restarting the complete 
ExecutionGraph in case of a task failure, Flink is now able to restart only the 
affected subgraph and thereby significantly decrease recovery time 
([FLINK-4256](https://issues.apache.org/jira/browse/FLINK-4256)).
-
-# DataStream API
-
-* **Side Outputs**: This change allows users to have more than one output 
stream for an operator. Operator metadata, internal system information 
(debugging, performance etc.) or rejected/late elements are potential use-cases 
for this new API feature. **The Window operator is now using this new feature 
for late window elements** ([Side Outputs 
Documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/side_output.html),
 [FLINK-4460](https://issues.apache.org/jira/browse/FLINK-4460)).
-
-* **Union Operator State**: Flink 1.2.0 introduced broadcast state 
functionality, but this had not yet been exposed via a public API. Flink 1.3.0 
provides the Union Operator State API for exposing broadcast operator state. 
The union state will send the entire state across all parallel instances to 
each instance on restore, giving each operator a full view of the state 
([FLINK-5991](https://issues.apache.org/jira/browse/FLINK-5991)).
-
-* **Per-Window State**: Previously, the state that a WindowFunction or 
ProcessWindowFunction could access was scoped to the key of the window but not 
the window itself. With this new feature, users can keep window state 
independent of the key 
([FLINK-5929](https://issues.apache.org/jira/browse/FLINK-5929)).
-
-# Deployment and Tooling
-
-* **Flink HistoryServer**: Flink’s 
[HistoryServer](https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/historyserver.html)
 now allows you to query the status and statistics of completed jobs that have 
been archived by a JobManager 
([FLINK-1579](https://issues.apache.org/jira/browse/FLINK-1579)).
-
-* **Watermark Monitoring in Web Front-end**: For easier diagnosis of watermark 
issues, the Flink JobManager front-end now provides a new tab to track the 
watermark of each operator 
([FLINK-3427](https://issues.apache.org/jira/browse/FLINK-3427)).
-
-* **Datadog HTTP Metrics Reporter**: Datadog is a widely-used metrics system, 
and Flink now offers a [Datadog 
reporter](https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/metrics.html#datadog-orgapacheflinkmetricsdatadogdatadoghttpreporter)
 that contacts the Datadog http endpoint directly 
([FLINK-6013](https://issues.apache.org/jira/browse/FLINK-6013)).
-
-* **Network Buffer Configuration**: We finally got rid of the tedious network 
buffer configuration and replaced it with a more generic approach. First of 
all, you may now follow the idiom "more is better" without any penalty on the 
latency which could previously occur due to excessive buffering in incoming and 
outgoing channels. Secondly, instead of defining an absolute number of network 
buffers, we now use fractions of the available JVM memory (10% by default). 
This should cover more use cases by default and may also be tweaked by defining 
a minimum and maximum size.
-
-→ See [Configuring the Network 
Buffers](https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/config.html#configuring-the-network-buffers)
 in the Flink documentation.
-
-# Table API / SQL
-
-* **Support for Retractions in Table API / SQL**: As part of our endeavor to 
support continuous queries on [Dynamic 
Tables](http://flink.apache.org/news/2017/04/04/dynamic-tables.html), 
Retraction is an important building block that will enable a whole range of new 
applications which require updating previously-emitted results. Examples for 
such use cases are computation of early results for long-running windows, 
updates due to late arriving data, or maintaining constantly changing results 
similar to materialized views in relational database systems. Flink 1.3.0 
supports retraction for non-windowed aggregates. Results with updates can be 
either converted into a DataStream or materialized to external data stores 
using TableSinks with upsert or retraction support. 
-
-* **Extended support for aggregations in Table API / SQL**: With Flink 1.3.0, 
the Table API and SQL support many more types of aggregations, including
-       * GROUP BY window aggregations in SQL (via the window functions 
[TUMBLE, HOP, and SESSION 
windows](https://issues.apache.org/jira/browse/FLINK-6011)) for both batch and 
streaming.
-
-       * SQL OVER window aggregations (only for streaming)
-
-       * Non-windowed aggregations (in streaming with retractions).
-
-       * User-defined aggregation functions for custom aggregation logic.
-
-* **External catalog support**: The Table API & SQL allows to register 
external catalogs. Table API and SQL queries can then have access to table 
sources and their schema from the external catalogs without register those 
tables one by one.
-
-→ See [the Flink 
documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/table_api.html#group-windows)
 for details about these features.
-
-<div class="alert alert-warning">
-  The Table API / SQL documentation is currently being reworked. The community 
plans to publish the updated docs in the week of June 5th.
-</div>
-
-# Connectors
-
-* **ElasticSearch 5.x support**: The ElasticSearch connectors have been 
restructured to have a common base module and specific modules for ES 1, 2 and 
5, similar to how the Kafka connectors are organized. This will make fixes and 
future improvements available across all ES versions 
([FLINK-4988](https://issues.apache.org/jira/browse/FLINK-4988)).
-
-* **Allow rescaling the Kinesis Consumer**: Flink 1.2.0 introduced rescalable 
state for DataStream programs. With Flink 1.3, the Kinesis Consumer also makes 
use of that engine feature 
([FLINK-4821](https://issues.apache.org/jira/browse/FLINK-4821)).
-
-* **Transparent shard discovery for Kinesis Consumer**: The Kinesis consumer 
can now discover new shards without failing / restarting jobs when a resharding 
is happening ([FLINK-4577](https://issues.apache.org/jira/browse/FLINK-4577)).
-
-* **Allow setting custom start positions for the Kafka consumer**: With this 
change, you can instruct Flink’s Kafka consumer to start reading messages 
from a specific offset 
([FLINK-3123](https://issues.apache.org/jira/browse/FLINK-3123)) or earliest / 
latest offset ([FLINK-4280](https://issues.apache.org/jira/browse/FLINK-4280)) 
without respecting committed offsets in Kafka.
-
-* **Allow out-opt from offset committing for the Kafka consumer**: By default, 
Kafka commits the offsets to the Kafka broker once a checkpoint has been 
completed. This change allows users to disable this mechanism 
([FLINK-3398](https://issues.apache.org/jira/browse/FLINK-3398)).
-
-# CEP Library
-
-The CEP library has been greatly enhanced and is now able to accommodate more 
use-cases out-of-the-box (expressivity enhancements), make more efficient use 
of the available resources, adjust to changing runtime conditions--all without 
breaking backwards compatibility of operator state.
-
-Please note that the API of the CEP library has been updated with this 
release. 
-
-Below are some of the main features of the revamped CEP library:
-
-* **Make CEP operators rescalable**: Flink 1.2.0 introduced rescalable state 
for DataStream programs. With Flink 1.3, the CEP library also makes use of that 
engine feature ([FLINK-5420](https://issues.apache.org/jira/browse/FLINK-5420)).
-
-* **State Cleanup and Late element handling**: The CEP library will now output 
late elements using the newly introduced side outputs 
([FLINK-6205](https://issues.apache.org/jira/browse/FLINK-6205), 
[FLINK-5174](https://issues.apache.org/jira/browse/FLINK-5174)).
-
-* **New operators for the CEP library**:
-
-    * Quantifiers (*,+,?) for the pattern API 
([FLINK-3318](https://issues.apache.org/jira/browse/FLINK-3318))
-
-    * Support for different continuity requirements 
([FLINK-6208](https://issues.apache.org/jira/browse/FLINK-6208))
-
-    * Support for iterative conditions 
([FLINK-6197](https://issues.apache.org/jira/browse/FLINK-6197))
-
-# Gelly Library
-
-* PageRank algorithm for directed graphs 
-
-* Unified driver for running Gelly examples 
[FLINK-4949](https://issues.apache.org/jira/browse/FLINK-4949)).
-* PageRank algorithm for directed graphs 
([FLINK-4896](https://issues.apache.org/jira/browse/FLINK-4896)).
-* Add Circulant and Echo graph generators 
([FLINK-6393](https://issues.apache.org/jira/browse/FLINK-6393)).
-
-
-<div class="alert alert-warning">
-  There are two <strong>known issues</strong> in Flink 1.3.0. Both will be 
addressed in the next <i>1.3.1</i> release.
-  <br>
-  <ul>
-       <li><a 
href="https://issues.apache.org/jira/browse/FLINK-6783";>FLINK-6783</a>: Wrongly 
extracted TypeInformations for <code>WindowedStream::aggregate</code></li>
-       <li><a 
href="https://issues.apache.org/jira/browse/FLINK-6775";>FLINK-6783</a>: 
StateDescriptor cannot be shared by multiple subtasks</li>
-  </ul> 
-</div>
-
-# List of Contributors 
-
-According to git shortlog, the following 103 people contributed to the 1.3.0 
release. Thank you to all contributors!
-
-Addison Higham, Alexey Diomin, Aljoscha Krettek, Andrea Sella, Andrey 
Melentyev, Anton Mushin, barcahead, biao.liub, Bowen Li, Chen Qin, Chico Sokol, 
David Anderson, Dawid Wysakowicz, DmytroShkvyra, Fabian Hueske, Fabian Wollert, 
fengyelei, Flavio Pompermaier, FlorianFan, Fokko Driesprong, Geoffrey Mon, 
godfreyhe, gosubpl, Greg Hogan, guowei.mgw, hamstah, Haohui Mai, Hequn Cheng, 
hequn.chq, heytitle, hongyuhong, Jamie Grier, Jark Wu, jingzhang, Jinkui Shi, 
Jin Mingjian, Joerg Schad, Joshua Griffith, Jürgen Thomann, kaibozhou, 
Kathleen Sharp, Ken Geis, kkloudas, Kurt Young, lincoln-lil, lingjinjiang, 
liuyuzhong7, Lorenz Buehmann, manuzhang, Marc Tremblay, Mauro Cortellazzi, Max 
Kuklinski, mengji.fy, Mike Dias, mtunique, Nico Kruber, Omar Erminy, Patrick 
Lucas, paul, phoenixjiangnan, rami-alisawi, Ramkrishna, Rick Cox, Robert 
Metzger, Rodrigo Bonifacio, rtudoran, Seth Wiesman, Shaoxuan Wang, shijinkui, 
shuai.xus, Shuyi Chen, spkavuly, Stefano Bortoli, Stefan Richter, Stephan Ewen, 
St
 ephen Gran, sunjincheng121, tedyu, Till Rohrmann, tonycox, Tony Wei, twalthr, 
Tzu-Li (Gordon) Tai, Ufuk Celebi, Ventura Del Monte, Vijay Srinivasaraghavan, 
WangTaoTheTonic, wenlong.lwl, xccui, xiaogang.sxg, Xpray, zcb, zentol, 
zhangminglei, Zhenghua Gao, Zhijiang, Zhuoluo Yang, zjureel, Zohar Mizrahi, 
士远, 槿瑜, 淘江, 金竹
-

http://git-wip-us.apache.org/repos/asf/flink-web/blob/2ac4d1c8/content/blog/feed.xml
----------------------------------------------------------------------
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 5b37a9a..4e6e594 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -66,11 +66,11 @@
   &lt;/li&gt;
   &lt;li&gt;
     &lt;p&gt;&lt;strong&gt;Network Buffer Configuration&lt;/strong&gt;: We 
finally got rid of the tedious network buffer configuration and replaced it 
with a more generic approach. First of all, you may now follow the idiom 
“more is better” without any penalty on the latency which could previously 
occur due to excessive buffering in incoming and outgoing channels. Secondly, 
instead of defining an absolute number of network buffers, we now use fractions 
of the available JVM memory (10% by default). This should cover more use cases 
by default and may also be tweaked by defining a minimum and maximum 
size.&lt;/p&gt;
+
+    &lt;p&gt;→ See &lt;a 
href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/config.html#configuring-the-network-buffers&quot;&gt;Configuring
 the Network Buffers&lt;/a&gt; in the Flink documentation.&lt;/p&gt;
   &lt;/li&gt;
 &lt;/ul&gt;
 
-&lt;p&gt;→ See &lt;a 
href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/config.html#configuring-the-network-buffers&quot;&gt;Configuring
 the Network Buffers&lt;/a&gt; in the Flink documentation.&lt;/p&gt;
-
 &lt;h1 id=&quot;table-api--sql&quot;&gt;Table API / SQL&lt;/h1&gt;
 
 &lt;ul&gt;

http://git-wip-us.apache.org/repos/asf/flink-web/blob/2ac4d1c8/content/news/2017/06/01/release-1.3.0.html
----------------------------------------------------------------------
diff --git a/content/news/2017/06/01/release-1.3.0.html 
b/content/news/2017/06/01/release-1.3.0.html
index b2a71db..d822d50 100644
--- a/content/news/2017/06/01/release-1.3.0.html
+++ b/content/news/2017/06/01/release-1.3.0.html
@@ -199,11 +199,11 @@
   </li>
   <li>
     <p><strong>Network Buffer Configuration</strong>: We finally got rid of 
the tedious network buffer configuration and replaced it with a more generic 
approach. First of all, you may now follow the idiom “more is better” 
without any penalty on the latency which could previously occur due to 
excessive buffering in incoming and outgoing channels. Secondly, instead of 
defining an absolute number of network buffers, we now use fractions of the 
available JVM memory (10% by default). This should cover more use cases by 
default and may also be tweaked by defining a minimum and maximum size.</p>
+
+    <p>→ See <a 
href="https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/config.html#configuring-the-network-buffers";>Configuring
 the Network Buffers</a> in the Flink documentation.</p>
   </li>
 </ul>
 
-<p>→ See <a 
href="https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/config.html#configuring-the-network-buffers";>Configuring
 the Network Buffers</a> in the Flink documentation.</p>
-
 <h1 id="table-api--sql">Table API / SQL</h1>
 
 <ul>
