This is an automated email from the ASF dual-hosted git repository.
lzljs3620320 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 4ae5bd556 Rebuild website
4ae5bd556 is described below
commit 4ae5bd556499caf1a2fc47ee62e62320687fd271
Author: JingsongLi <[email protected]>
AuthorDate: Wed May 11 13:24:44 2022 +0800
Rebuild website
---
content/blog/feed.xml | 310 +++++++------------
content/blog/index.html | 39 ++-
content/blog/page10/index.html | 38 ++-
content/blog/page11/index.html | 38 ++-
content/blog/page12/index.html | 40 ++-
content/blog/page13/index.html | 38 ++-
content/blog/page14/index.html | 37 ++-
content/blog/page15/index.html | 39 ++-
content/blog/page16/index.html | 38 ++-
content/blog/page17/index.html | 37 ++-
content/blog/page18/index.html | 39 ++-
content/blog/page19/index.html | 25 ++
content/blog/page2/index.html | 36 ++-
content/blog/page3/index.html | 36 ++-
content/blog/page4/index.html | 38 ++-
content/blog/page5/index.html | 38 ++-
content/blog/page6/index.html | 36 ++-
content/blog/page7/index.html | 36 ++-
content/blog/page8/index.html | 36 ++-
content/blog/page9/index.html | 38 ++-
content/downloads.html | 30 ++
.../blog/table-store/table-store-architecture.png | Bin 0 -> 145608 bytes
content/index.html | 11 +-
.../2022/05/11/release-table-store-0.1.0.html} | 342 +++++++--------------
content/zh/downloads.html | 3 -
content/zh/index.html | 11 +-
26 files changed, 707 insertions(+), 702 deletions(-)
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index b20a012f1..38ef3a58e 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -6,6 +6,112 @@
<link>https://flink.apache.org/blog</link>
<atom:link href="https://flink.apache.org/blog/feed.xml" rel="self"
type="application/rss+xml" />
+<item>
+<title>Apache Flink Table Store 0.1.0 Release Announcement</title>
+<description><p>The Apache Flink community is pleased to announce the
preview release of the
+<a href="https://github.com/apache/flink-table-store">Apache
Flink Table Store</a> (0.1.0).</p>
+
+<p>Please check out the full <a
href="https://nightlies.apache.org/flink/flink-table-store-docs-release-0.1/">documentation</a>
for detailed information and user guides.</p>
+
+<p>Note: Flink Table Store is still in beta status and undergoing rapid
development.
+We do not recommend that you use it directly in a production
environment.</p>
+
+<h2 id="what-is-flink-table-store">What is Flink Table
Store</h2>
+
+<p>In the past years, thanks to our numerous contributors and users,
Apache Flink has established
+itself as one of the best distributed computing engines, especially for
stateful stream processing
+at large scale. However, there are still a few challenges people are facing
when they try to obtain
+insights from their data in real-time. Among these challenges, one prominent
+problem is the lack of a storage system that caters to all the computing patterns.</p>
+
+<p>As of now, it is quite common for people to deploy several storage systems to work with Flink for different
+purposes. A typical setup is a message queue for stream processing, a scannable file system / object store
+for batch processing and ad-hoc queries, and a K-V store for lookups. Such an architecture poses challenges
+to data quality and system maintenance due to its complexity and heterogeneity. This is becoming a major
+issue that hurts the end-to-end user experience of the streaming and batch unification brought by Apache Flink.</p>
+
+<p>The goal of Flink Table Store is to address the above issues. This is an important step for the project:
+it extends Flink’s capability from computing to the storage domain, so that we can provide a better
+end-to-end experience to the users.</p>
+
+<p>Flink Table Store aims to provide a unified storage abstraction, so
users don’t have to build the hybrid
+storage by themselves. More specifically, Table Store offers the following
core capabilities:</p>
+
+<ul>
+  <li>Support storage of large datasets and allow reads / writes in both batch and streaming modes.</li>
+  <li>Support streaming queries with minimum latency down to milliseconds.</li>
+  <li>Support batch/OLAP queries with minimum latency down to the second level.</li>
+  <li>Support incremental snapshots for stream consumption by default, so users don’t need to solve the
+problem of combining different stores by themselves.</li>
+</ul>
+
+<center>
+<img src="/img/blog/table-store/table-store-architecture.png"
width="100%" />
+</center>
+
+<p>In this preview version, as shown in the architecture above:</p>
+
+<ul>
+  <li>Users can use Flink to insert data into the Table Store, either by streaming the change log
+captured from databases, or by loading the data in batches from other stores like data warehouses.</li>
+  <li>Users can use Flink to query the Table Store in different ways, including streaming queries and
+batch/OLAP queries. It is also worth noting that users can use other engines, such as Apache Hive, to
+query the Table Store as well.</li>
+  <li>Under the hood, Table Store uses a hybrid storage architecture: a Lake Store stores historical data
+and a Queue system (Apache Kafka integration is currently supported) stores incremental data. It provides
+incremental snapshots for hybrid streaming reads.</li>
+  <li>Table Store’s Lake Store keeps data as columnar files on a file system / object store, and uses an LSM structure
+to support a large volume of data updates and high-performance queries.</li>
+</ul>
+
+<p>Many thanks to the following systems for their inspiration: <a href="https://iceberg.apache.org/">Apache Iceberg</a> and
+<a href="http://rocksdb.org/">RocksDB</a>.</p>
+
+<h2 id="getting-started">Getting started</h2>
+
+<p>Please refer to the <a
href="https://nightlies.apache.org/flink/flink-table-store-docs-release-0.1/docs/try-table-store/quick-start/">getting
started guide</a> for more details.</p>
+
+<h2 id="whats-next">What’s Next?</h2>
+
+<p>The community is currently working on hardening the core logic,
stabilizing the storage format and adding the remaining bits for making the
Flink Table Store production-ready.</p>
+
+<p>In the upcoming 0.2.0 release you can expect (at least) the following
+additional features:</p>
+
+<ul>
+  <li>Ecosystem: Support a Flink Table Store reader for the Apache Hive engine</li>
+  <li>Core: Support adjusting the number of buckets</li>
+  <li>Core: Support append-only data, so Table Store is not limited to update scenarios</li>
+  <li>Core: Full schema evolution</li>
+  <li>Improvements based on feedback from the preview release</li>
+</ul>
+
+<p>In the medium term, you can also expect:</p>
+
+<ul>
+ <li>Ecosystem: Support Flink Table Store Reader for Trino, PrestoDB
and Apache Spark</li>
+ <li>Flink Table Store Service to accelerate updates and improve query
performance</li>
+</ul>
+
+<p>Please give the preview release a try, share your feedback on the
Flink mailing list and contribute to the project!</p>
+
+<h2 id="release-resources">Release Resources</h2>
+
+<p>The source artifacts and binaries are now available on the updated
<a
href="https://flink.apache.org/downloads.html">Downloads</a>
+page of the Flink website.</p>
+
+<p>We encourage you to download the release and share your feedback with
the community through the <a
href="https://flink.apache.org/community.html#mailing-lists">Flink
mailing lists</a>
+or <a
href="https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK%20AND%20component%20%3D%20%22Table%20Store%22">JIRA</a>.</p>
+
+<h2 id="list-of-contributors">List of Contributors</h2>
+
+<p>The Apache Flink community would like to thank every one of the
contributors that have made this release possible:</p>
+
+<p>Jane Chan, Jiangjie (Becket) Qin, Jingsong Lee, Leonard Xu, Nicholas
Jiang, Shen Zhu, tsreaper, Yubin Li</p>
+</description>
+<pubDate>Wed, 11 May 2022 10:00:00 +0200</pubDate>
+<link>https://flink.apache.org/news/2022/05/11/release-table-store-0.1.0.html</link>
+<guid isPermaLink="true">/news/2022/05/11/release-table-store-0.1.0.html</guid>
+</item>
+
<item>
<title>The Generic Asynchronous Base Sink</title>
<description><p>Flink sinks share a lot of similar behavior. Most sinks
batch records according to user-defined buffering hints, sign requests, write
them to the destination, retry unsuccessful or throttled requests, and
participate in checkpointing.</p>
@@ -20282,209 +20388,5 @@ Enabling latency metrics can significantly impact the
performance of the cluster
<guid isPermaLink="true">/news/2019/07/02/release-1.8.1.html</guid>
</item>
-<item>
-<title>A Practical Guide to Broadcast State in Apache Flink</title>
-<description><p>Since version 1.5.0, Apache Flink features a new type of
state which is called Broadcast State. In this post, we explain what Broadcast
State is, and show an example of how it can be applied to an application that
evaluates dynamic patterns on an event stream. We walk you through the
processing steps and the source code to implement this application in
practice.</p>
-
-<h2 id="what-is-broadcast-state">What is Broadcast
State?</h2>
-
-<p>The Broadcast State can be used to combine and jointly process two
streams of events in a specific way. The events of the first stream are
broadcasted to all parallel instances of an operator, which maintains them as
state. The events of the other stream are not broadcasted but sent to
individual instances of the same operator and processed together with the
events of the broadcasted stream.
-The new broadcast state is a natural fit for applications that need to join a
low-throughput and a high-throughput stream or need to dynamically update their
processing logic. We will use a concrete example of the latter use case to
explain the broadcast state and show its API in more detail in the remainder of
this post.</p>
-
-<h2
id="dynamic-pattern-evaluation-with-broadcast-state">Dynamic
Pattern Evaluation with Broadcast State</h2>
-
-<p>Imagine an e-commerce website that captures the interactions of all
users as a stream of user actions. The company that operates the website is
interested in analyzing the interactions to increase revenue, improve the user
experience, and detect and prevent malicious behavior.
-The website implements a streaming application that detects a pattern on the
stream of user events. However, the company wants to avoid modifying and
redeploying the application every time the pattern changes. Instead, the
application ingests a second stream of patterns and updates its active pattern
when it receives a new pattern from the pattern stream. In the following, we
discuss this application step-by-step and show how it leverages the broadcast
state feature in Apache Flink.</p>
-
-<center>
-<img src="/img/blog/broadcastState/fig1.png"
width="600px" alt="Broadcast State in Apache Flink." />
-</center>
-<p><br /></p>
-
-<p>Our example application ingests two data streams. The first stream
provides user actions on the website and is illustrated on the top left side of
the above figure. A user interaction event consists of the type of the action
(user login, user logout, add to cart, or complete payment) and the id of the
user, which is encoded by color. The user action event stream in our
illustration contains a logout action of User 1001 followed by a
payment-complete event for User 1003, and an “ [...]
-
-<p>The second stream provides action patterns that the application will
evaluate. A pattern consists of two consecutive actions. In the figure above,
the pattern stream contains the following two:</p>
-
-<ul>
- <li>Pattern #1: A user logs in and immediately logs out without
browsing additional pages on the e-commerce website.</li>
- <li>Pattern #2: A user adds an item to the shopping cart and logs out
without completing the purchase.</li>
-</ul>
-
-<p>Such patterns help a business in better analyzing user behavior,
detecting malicious actions, and improving the website experience. For example,
in the case of items being added to a shopping cart with no follow up purchase,
the website team can take appropriate actions to understand better the reasons
why users don’t complete a purchase and initiate specific programs to improve
the website conversion (such as providing discount codes, limited free shipping
offers etc.)</p>
-
-<p>On the right-hand side, the figure shows three parallel tasks of an
operator that ingest the pattern and user action streams, evaluate the patterns
on the action stream, and emit pattern matches downstream. For the sake of
simplicity, the operator in our example only evaluates a single pattern with
exactly two subsequent actions. The currently active pattern is replaced when a
new pattern is received from the pattern stream. In principle, the operator
could also be implemented t [...]
-
-<p>We will describe how the pattern matching application processes the
user action and pattern streams.</p>
-
-<center>
-<img src="/img/blog/broadcastState/fig2.png"
width="600px" alt="Broadcast State in Apache Flink." />
-</center>
-<p><br /></p>
-
-<p>First a pattern is sent to the operator. The pattern is broadcasted
to all three parallel tasks of the operator. The tasks store the pattern in
their broadcast state. Since the broadcast state should only be updated using
broadcasted data, the state of all tasks is always expected to be the
same.</p>
-
-<center>
-<img src="/img/blog/broadcastState/fig3.png"
width="600px" alt="Broadcast State in Apache Flink." />
-</center>
-<p><br /></p>
-
-<p>Next, the first user actions are partitioned on the user id and
shipped to the operator tasks. The partitioning ensures that all actions of the
same user are processed by the same task. The figure above shows the state of
the application after the first pattern and the first three action events were
consumed by the operator tasks.</p>
-
-<p>When a task receives a new user action, it evaluates the currently
active pattern by looking at the user’s latest and previous actions. For each
user, the operator stores the previous action in the keyed state. Since the
tasks in the figure above only received a single action for each user so far
(we just started the application), the pattern does not need to be evaluated.
Finally, the previous action in the user’s keyed state is updated to the latest
action, to be able to look [...]
-
-<center>
-<img src="/img/blog/broadcastState/fig4.png"
width="600px" alt="Broadcast State in Apache Flink." />
-</center>
-<p><br /></p>
-
-<p>After the first three actions are processed, the next event, the
logout action of User 1001, is shipped to the task that processes the events of
User 1001. When the task receives the actions, it looks up the current pattern
from the broadcast state and the previous action of User 1001. Since the
pattern matches both actions, the task emits a pattern match event. Finally,
the task updates its keyed state by overriding the previous event with the
latest action.</p>
-
-<center>
-<img src="/img/blog/broadcastState/fig5.png"
width="600px" alt="Broadcast State in Apache Flink." />
-</center>
-<p><br /></p>
-
-<p>When a new pattern arrives in the pattern stream, it is broadcasted
to all tasks and each task updates its broadcast state by replacing the current
pattern with the new one.</p>
-
-<center>
-<img src="/img/blog/broadcastState/fig6.png"
width="600px" alt="Broadcast State in Apache Flink." />
-</center>
-<p><br /></p>
-
-<p>Once the broadcast state is updated with a new pattern, the matching
logic continues as before, i.e., user action events are partitioned by key and
evaluated by the responsible task.</p>
-
-<h2
id="how-to-implement-an-application-with-broadcast-state">How to
Implement an Application with Broadcast State?</h2>
-
-<p>Until now, we conceptually discussed the application and explained
how it uses broadcast state to evaluate dynamic patterns over event streams.
Next, we’ll show how to implement the example application with Flink’s
DataStream API and the broadcast state feature.</p>
-
-<p>Let’s start with the input data of the application. We have two data streams, actions and patterns. At this point, we don’t really care where the streams come from. The streams could be ingested from Apache Kafka or Kinesis or any other system:</p>
-
-<div class="highlight"><pre><code
class="language-java"><span
class="n">DataStream</span><span
class="o">&lt;</span><span
class="n">Action</span><span
class="o">&gt;</span> <span
class="n">actions</span> <span
class="o">=</span> <span
class="o">???</span>
-<span class="n">DataStream</span><span
class="o">&lt;</span><span
class="n">Pattern</span><span
class="o">&gt;</span> <span
class="n">patterns</span> <span
class="o">=</span> <span
class="o">???</span></code></pre></div>
-
-<p><code>Action</code> and <code>Pattern</code>
are Pojos with two fields each:</p>
-
-<ul>
- <li>
- <p><code>Action: Long userId, String
action</code></p>
- </li>
- <li>
- <p><code>Pattern: String firstAction, String
secondAction</code></p>
- </li>
-</ul>
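Spelled out as plain Java, the two Pojos could look as follows (a sketch based only on the field lists above; the constructors are added for convenience and were not part of the original post):

```java
// Pojos for the two input streams, mirroring the fields listed above.
public class Action {
    public Long userId;
    public String action;

    public Action() {}  // default constructor, as required for POJO serialization

    public Action(Long userId, String action) {
        this.userId = userId;
        this.action = action;
    }
}

class Pattern {
    public String firstAction;
    public String secondAction;

    public Pattern() {}

    public Pattern(String firstAction, String secondAction) {
        this.firstAction = firstAction;
        this.secondAction = secondAction;
    }
}
```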
-
-<p>As a first step, we key the action stream on the
<code>userId</code> attribute.</p>
-
-<div class="highlight"><pre><code
class="language-java"><span
class="n">KeyedStream</span><span
class="o">&lt;</span><span
class="n">Action</span><span
class="o">,</span> <span
class="n">Long</span><span
class="o">&gt;</span> <span
class="n">actionsByUser</span> <span class="o"&
[...]
- <span class="o">.</span><span
class="na">keyBy</span><span
class="o">((</span><span
class="n">KeySelector</span><span
class="o">&lt;</span><span
class="n">Action</span><span
class="o">,</span> <span
class="n">Long</span><span
class="o">&gt;)</span> <span
class="n">act [...]
-
-<p>Next, we prepare the broadcast state. Broadcast state is always
represented as <code>MapState</code>, the most versatile state
primitive that Flink provides.</p>
-
-<div class="highlight"><pre><code
class="language-java"><span
class="n">MapStateDescriptor</span><span
class="o">&lt;</span><span
class="n">Void</span><span
class="o">,</span> <span
class="n">Pattern</span><span
class="o">&gt;</span> <span
class="n">bcStateDescriptor</span> <span class=&q [...]
- <span class="k">new</span> <span
class="n">MapStateDescriptor</span><span
class="o">&lt;&gt;(</span><span
class="s">&quot;patterns&quot;</span><span
class="o">,</span> <span
class="n">Types</span><span
class="o">.</span><span
class="na">VOID</span><span
class="o">,</span> < [...]
-
-<p>Since our application only evaluates and stores a single
<code>Pattern</code> at a time, we configure the broadcast state as
a <code>MapState</code> with key type <code>Void</code>
and value type <code>Pattern</code>. The
<code>Pattern</code> is always stored in the
<code>MapState</code> with <code>null</code> as
key.</p>
-
-<div class="highlight"><pre><code
class="language-java"><span
class="n">BroadcastStream</span><span
class="o">&lt;</span><span
class="n">Pattern</span><span
class="o">&gt;</span> <span
class="n">bcedPatterns</span> <span
class="o">=</span> <span
class="n">patterns</span><span class=" [...]
-<p>Using the <code>MapStateDescriptor</code> for the
broadcast state, we apply the <code>broadcast()</code>
transformation on the patterns stream and receive a <code>BroadcastStream
bcedPatterns</code>.</p>
-
-<div class="highlight"><pre><code
class="language-java"><span
class="n">DataStream</span><span
class="o">&lt;</span><span
class="n">Tuple2</span><span
class="o">&lt;</span><span
class="n">Long</span><span
class="o">,</span> <span
class="n">Pattern</span><span
class="o">&g [...]
- <span class="o">.</span><span
class="na">connect</span><span
class="o">(</span><span
class="n">bcedPatterns</span><span
class="o">)</span>
- <span class="o">.</span><span
class="na">process</span><span
class="o">(</span><span
class="k">new</span> <span
class="nf">PatternEvaluator</span><span
class="o">());</span></code></pre></div>
-
-<p>After we obtained the keyed <code>actionsByUser</code>
stream and the broadcasted <code>bcedPatterns</code> stream, we
<code>connect()</code> both streams and apply a
<code>PatternEvaluator</code> on the connected streams.
<code>PatternEvaluator</code> is a custom function that implements
the <code>KeyedBroadcastProcessFunction</code> interface. It
applies the pattern matching logic that we discussed before [...]
-
-<div class="highlight"><pre><code
class="language-java"><span
class="kd">public</span> <span
class="kd">static</span> <span
class="kd">class</span> <span
class="nc">PatternEvaluator</span>
- <span class="kd">extends</span> <span
class="n">KeyedBroadcastProcessFunction</span><span
class="o">&lt;</span><span
class="n">Long</span><span
class="o">,</span> <span
class="n">Action</span><span
class="o">,</span> <span
class="n">Pattern</span><span
class="o">,</span> <span class [...]
-
- <span class="c1">// handle for keyed state (per
user)</span>
- <span class="n">ValueState</span><span
class="o">&lt;</span><span
class="n">String</span><span
class="o">&gt;</span> <span
class="n">prevActionState</span><span
class="o">;</span>
- <span class="c1">// broadcast state descriptor</span>
- <span class="n">MapStateDescriptor</span><span
class="o">&lt;</span><span
class="n">Void</span><span
class="o">,</span> <span
class="n">Pattern</span><span
class="o">&gt;</span> <span
class="n">patternDesc</span><span
class="o">;</span>
-
- <span class="nd">@Override</span>
- <span class="kd">public</span> <span
class="kt">void</span> <span
class="nf">open</span><span
class="o">(</span><span
class="n">Configuration</span> <span
class="n">conf</span><span
class="o">)</span> <span
class="o">{</span>
- <span class="c1">// initialize keyed state</span>
- <span class="n">prevActionState</span> <span
class="o">=</span> <span
class="n">getRuntimeContext</span><span
class="o">().</span><span
class="na">getState</span><span
class="o">(</span>
- <span class="k">new</span> <span
class="n">ValueStateDescriptor</span><span
class="o">&lt;&gt;(</span><span
class="s">&quot;lastAction&quot;</span><span
class="o">,</span> <span
class="n">Types</span><span
class="o">.</span><span
class="na">STRING</span><span
class="o">));</ [...]
- <span class="n">patternDesc</span> <span
class="o">=</span>
- <span class="k">new</span> <span
class="n">MapStateDescriptor</span><span
class="o">&lt;&gt;(</span><span
class="s">&quot;patterns&quot;</span><span
class="o">,</span> <span
class="n">Types</span><span
class="o">.</span><span
class="na">VOID</span><span
class="o">,</span> [...]
- <span class="o">}</span>
-
- <span class="cm">/**</span>
-<span class="cm"> * Called for each user action.</span>
-<span class="cm"> * Evaluates the current pattern against
the previous and</span>
-<span class="cm"> * current action of the user.</span>
-<span class="cm"> */</span>
- <span class="nd">@Override</span>
- <span class="kd">public</span> <span
class="kt">void</span> <span
class="nf">processElement</span><span
class="o">(</span>
- <span class="n">Action</span> <span
class="n">action</span><span
class="o">,</span>
- <span class="n">ReadOnlyContext</span> <span
class="n">ctx</span><span
class="o">,</span>
- <span class="n">Collector</span><span
class="o">&lt;</span><span
class="n">Tuple2</span><span
class="o">&lt;</span><span
class="n">Long</span><span
class="o">,</span> <span
class="n">Pattern</span><span
class="o">&gt;&gt;</span> <span
class="n">out</span><span class=&qu [...]
- <span class="c1">// get current pattern from broadcast
state</span>
- <span class="n">Pattern</span> <span
class="n">pattern</span> <span
class="o">=</span> <span
class="n">ctx</span>
- <span class="o">.</span><span
class="na">getBroadcastState</span><span
class="o">(</span><span
class="k">this</span><span
class="o">.</span><span
class="na">patternDesc</span><span
class="o">)</span>
- <span class="c1">// access MapState with null as VOID
default value</span>
- <span class="o">.</span><span
class="na">get</span><span
class="o">(</span><span
class="kc">null</span><span
class="o">);</span>
- <span class="c1">// get previous action of current user
from keyed state</span>
- <span class="n">String</span> <span
class="n">prevAction</span> <span
class="o">=</span> <span
class="n">prevActionState</span><span
class="o">.</span><span
class="na">value</span><span
class="o">();</span>
- <span class="k">if</span> <span
class="o">(</span><span
class="n">pattern</span> <span
class="o">!=</span> <span
class="kc">null</span> <span
class="o">&amp;&amp;</span> <span
class="n">prevAction</span> <span
class="o">!=</span> <span
class="kc">null</span><span class="o&qu [...]
- <span class="c1">// user had an action before, check if
pattern matches</span>
- <span class="k">if</span> <span
class="o">(</span><span
class="n">pattern</span><span
class="o">.</span><span
class="na">firstAction</span><span
class="o">.</span><span
class="na">equals</span><span
class="o">(</span><span
class="n">prevAction</span><span
class="o">)</s [...]
- <span class="n">pattern</span><span
class="o">.</span><span
class="na">secondAction</span><span
class="o">.</span><span
class="na">equals</span><span
class="o">(</span><span
class="n">action</span><span
class="o">.</span><span
class="na">action</span><span class="o">))
[...]
- <span class="c1">// MATCH</span>
- <span class="n">out</span><span
class="o">.</span><span
class="na">collect</span><span
class="o">(</span><span
class="k">new</span> <span
class="n">Tuple2</span><span
class="o">&lt;&gt;(</span><span
class="n">ctx</span><span
class="o">.</span><span class="na">get [...]
- <span class="o">}</span>
- <span class="o">}</span>
- <span class="c1">// update keyed state and remember action
for next pattern evaluation</span>
- <span class="n">prevActionState</span><span
class="o">.</span><span
class="na">update</span><span
class="o">(</span><span
class="n">action</span><span
class="o">.</span><span
class="na">action</span><span
class="o">);</span>
- <span class="o">}</span>
-
- <span class="cm">/**</span>
-<span class="cm"> * Called for each new pattern.</span>
-<span class="cm"> * Overwrites the current pattern with the
new pattern.</span>
-<span class="cm"> */</span>
- <span class="nd">@Override</span>
- <span class="kd">public</span> <span
class="kt">void</span> <span
class="nf">processBroadcastElement</span><span
class="o">(</span>
- <span class="n">Pattern</span> <span
class="n">pattern</span><span
class="o">,</span>
- <span class="n">Context</span> <span
class="n">ctx</span><span
class="o">,</span>
- <span class="n">Collector</span><span
class="o">&lt;</span><span
class="n">Tuple2</span><span
class="o">&lt;</span><span
class="n">Long</span><span
class="o">,</span> <span
class="n">Pattern</span><span
class="o">&gt;&gt;</span> <span
class="n">out</span><span class=&qu [...]
- <span class="c1">// store the new pattern by updating the
broadcast state</span>
- <span class="n">BroadcastState</span><span
class="o">&lt;</span><span
class="n">Void</span><span
class="o">,</span> <span
class="n">Pattern</span><span
class="o">&gt;</span> <span
class="n">bcState</span> <span
class="o">=</span> <span
class="n">ctx</span><span class="o" [...]
- <span class="c1">// storing in MapState with null as VOID
default value</span>
- <span class="n">bcState</span><span
class="o">.</span><span
class="na">put</span><span
class="o">(</span><span
class="kc">null</span><span
class="o">,</span> <span
class="n">pattern</span><span
class="o">);</span>
- <span class="o">}</span>
-<span
class="o">}</span></code></pre></div>
-
-<p>The <code>KeyedBroadcastProcessFunction</code> interface
provides three methods to process records and emit results.</p>
-
-<ul>
- <li><code>processBroadcastElement()</code> is called for
each record of the broadcasted stream. In our
<code>PatternEvaluator</code> function, we simply put the received
<code>Pattern</code> record in to the broadcast state using the
<code>null</code> key (remember, we only store a single pattern in
the <code>MapState</code>).</li>
-  <li><code>processElement()</code> is called for each record of the keyed stream. It provides read-only access to the broadcast state to prevent modifications that result in different broadcast states across the parallel instances of the function. The <code>processElement()</code> method of the <code>PatternEvaluator</code> retrieves the current pattern from the broadcast state and the previous action of the user from the keyed state. If both are [...]
- <li><code>onTimer()</code> is called when a previously
registered timer fires. Timers can be registered in the
<code>processElement</code> method and are used to perform
computations or to clean up state in the future. We did not implement this
method in our example to keep the code concise. However, it could be used to
remove the last action of a user when the user was not active for a certain
period of time to avoid growing state due to inactive users.&l [...]
-</ul>
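Stripped of the syntax-highlighting markup above, the interplay of these methods can be simulated in a few lines of plain Java (a simplified, single-threaded sketch; the class and method names are illustrative only, and the real logic runs as a <code>KeyedBroadcastProcessFunction</code> with Flink-managed, fault-tolerant state):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Single-threaded simulation of the broadcast-state pattern matcher described
// above. In Flink, the active pattern lives in (replicated) broadcast state and
// the per-user previous action lives in keyed state; here both are plain fields.
public class PatternMatcherSketch {

    private String firstAction;   // active pattern, first action
    private String secondAction;  // active pattern, second action
    private final Map<Long, String> prevAction = new HashMap<>(); // "keyed state"

    // Analogous to processBroadcastElement(): replace the active pattern.
    public void onPattern(String first, String second) {
        this.firstAction = first;
        this.secondAction = second;
    }

    // Analogous to processElement(): evaluate the active pattern against the
    // user's previous and current action, then update the "keyed state".
    public List<Long> onAction(long userId, String action) {
        List<Long> matches = new ArrayList<>();
        String prev = prevAction.get(userId);
        if (firstAction != null && prev != null
                && firstAction.equals(prev) && secondAction.equals(action)) {
            matches.add(userId); // MATCH: emit the user id downstream
        }
        prevAction.put(userId, action); // remember the latest action
        return matches;
    }
}
```

With the pattern (login, logout), feeding a login followed by a logout for the same user yields one match for that user, mirroring the walk-through in the figures above.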
-
-<p>You might have noticed the context objects of the <code>KeyedBroadcastProcessFunction</code>’s processing methods. The context objects give access to additional functionality such as:</p>
-
-<ul>
- <li>The broadcast state (read-write or read-only, depending on the
method),</li>
- <li>A <code>TimerService</code>, which gives access to the
record’s timestamp, the current watermark, and which can register
timers,</li>
- <li>The current key (only available in
<code>processElement()</code>), and</li>
-  <li>A method to apply a function to the keyed state of each registered key (only available in <code>processBroadcastElement()</code>)</li>
-</ul>
-
-<p>The <code>KeyedBroadcastProcessFunction</code> has full
access to Flink state and time features just like any other ProcessFunction and
hence can be used to implement sophisticated application logic. Broadcast state
was designed to be a versatile feature that adapts to different scenarios and
use cases. Although we only discussed a fairly simple and restricted
application, you can use broadcast state in many ways to implement the
requirements of your application.</p>
-
-<h2 id="conclusion">Conclusion</h2>
-
-<p>In this blog post, we walked you through an example application to
explain what Apache Flink’s broadcast state is and how it can be used to
evaluate dynamic patterns on event streams. We’ve also discussed the API and
showed the source code of our example application.</p>
-
-<p>We invite you to check the <a
href="https://nightlies.apache.org/flink/flink-docs-stable/dev/stream/state/broadcast_state.html">documentation</a>
of this feature and provide feedback or suggestions for further improvements
through our <a
href="http://mail-archives.apache.org/mod_mbox/flink-community/">mailing
list</a>.</p>
-</description>
-<pubDate>Wed, 26 Jun 2019 14:00:00 +0200</pubDate>
-<link>https://flink.apache.org/2019/06/26/broadcast-state.html</link>
-<guid isPermaLink="true">/2019/06/26/broadcast-state.html</guid>
-</item>
-
</channel>
</rss>
diff --git a/content/blog/index.html b/content/blog/index.html
index 601620a6b..c401e9893 100644
--- a/content/blog/index.html
+++ b/content/blog/index.html
@@ -232,6 +232,22 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2022/05/11/release-table-store-0.1.0.html">Apache Flink Table Store
0.1.0 Release Announcement</a></h2>
+
+ <p>11 May 2022
+ Jingsong Lee & Jiangjie (Becket) Qin </p>
+
+ <p><p>The Apache Flink community is pleased to announce the preview
release of the
+<a href="https://github.com/apache/flink-table-store">Apache Flink Table
Store</a> (0.1.0).</p>
+
+</p>
+
+ <p><a href="/news/2022/05/11/release-table-store-0.1.0.html">Continue
reading »</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a href="/2022/05/06/async-sink-base.html">The
Generic Asynchronous Base Sink</a></h2>
@@ -360,19 +376,6 @@ This new release brings various improvements to the
StateFun runtime, a leaner w
<hr>
- <article>
- <h2 class="blog-title"><a
href="/2022/01/20/pravega-connector-101.html">Pravega Flink Connector
101</a></h2>
-
- <p>20 Jan 2022
- Yumin Zhou (Brian) (<a
href="https://twitter.com/crazy__zhou">@crazy__zhou</a>)</p>
-
- <p>A brief introduction to the Pravega Flink Connector</p>
-
- <p><a href="/2022/01/20/pravega-connector-101.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -405,6 +408,16 @@ This new release brings various improvements to the
StateFun runtime, a leaner w
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page10/index.html b/content/blog/page10/index.html
index 56cf8e663..713b2fdfc 100644
--- a/content/blog/page10/index.html
+++ b/content/blog/page10/index.html
@@ -232,6 +232,21 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2019/12/11/release-1.8.3.html">Apache Flink 1.8.3 Released</a></h2>
+
+ <p>11 Dec 2019
+ Hequn Cheng </p>
+
+ <p><p>The Apache Flink community released the third bugfix version of
the Apache Flink 1.8 series.</p>
+
+</p>
+
+ <p><a href="/news/2019/12/11/release-1.8.3.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2019/12/09/flink-kubernetes-kudo.html">Running Apache Flink on
Kubernetes with KUDO</a></h2>
@@ -358,19 +373,6 @@
<hr>
- <article>
- <h2 class="blog-title"><a href="/2019/06/26/broadcast-state.html">A
Practical Guide to Broadcast State in Apache Flink</a></h2>
-
- <p>26 Jun 2019
- Fabian Hueske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
-
- <p>Apache Flink has multiple types of operator state, one of which is
called Broadcast State. In this post, we explain what Broadcast State is, and
show an example of how it can be applied to an application that evaluates
dynamic patterns on an event stream.</p>
-
- <p><a href="/2019/06/26/broadcast-state.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -403,6 +405,16 @@
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page11/index.html b/content/blog/page11/index.html
index 94d9b5ff1..2784e86ce 100644
--- a/content/blog/page11/index.html
+++ b/content/blog/page11/index.html
@@ -232,6 +232,19 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a href="/2019/06/26/broadcast-state.html">A
Practical Guide to Broadcast State in Apache Flink</a></h2>
+
+ <p>26 Jun 2019
+ Fabian Hueske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
+
+ <p>Apache Flink has multiple types of operator state, one of which is
called Broadcast State. In this post, we explain what Broadcast State is, and
show an example of how it can be applied to an application that evaluates
dynamic patterns on an event stream.</p>
+
+ <p><a href="/2019/06/26/broadcast-state.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a href="/2019/06/05/flink-network-stack.html">A
Deep-Dive into Flink's Network Stack</a></h2>
@@ -357,21 +370,6 @@ for more details.</p>
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2019/02/25/release-1.6.4.html">Apache Flink 1.6.4 Released</a></h2>
-
- <p>25 Feb 2019
- </p>
-
- <p><p>The Apache Flink community released the fourth bugfix version of
the Apache Flink 1.6 series.</p>
-
-</p>
-
- <p><a href="/news/2019/02/25/release-1.6.4.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -404,6 +402,16 @@ for more details.</p>
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page12/index.html b/content/blog/page12/index.html
index c04e8fd90..9d391d443 100644
--- a/content/blog/page12/index.html
+++ b/content/blog/page12/index.html
@@ -232,6 +232,21 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2019/02/25/release-1.6.4.html">Apache Flink 1.6.4 Released</a></h2>
+
+ <p>25 Feb 2019
+ </p>
+
+ <p><p>The Apache Flink community released the fourth bugfix version of
the Apache Flink 1.6 series.</p>
+
+</p>
+
+ <p><a href="/news/2019/02/25/release-1.6.4.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2019/02/15/release-1.7.2.html">Apache Flink 1.7.2 Released</a></h2>
@@ -367,21 +382,6 @@ Please check the <a
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2018/09/20/release-1.5.4.html">Apache Flink 1.5.4 Released</a></h2>
-
- <p>20 Sep 2018
- </p>
-
- <p><p>The Apache Flink community released the fourth bugfix version of
the Apache Flink 1.5 series.</p>
-
-</p>
-
- <p><a href="/news/2018/09/20/release-1.5.4.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -414,6 +414,16 @@ Please check the <a
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page13/index.html b/content/blog/page13/index.html
index 50b0e23f6..a12ffeb8f 100644
--- a/content/blog/page13/index.html
+++ b/content/blog/page13/index.html
@@ -232,6 +232,21 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2018/09/20/release-1.5.4.html">Apache Flink 1.5.4 Released</a></h2>
+
+ <p>20 Sep 2018
+ </p>
+
+ <p><p>The Apache Flink community released the fourth bugfix version of
the Apache Flink 1.5 series.</p>
+
+</p>
+
+ <p><a href="/news/2018/09/20/release-1.5.4.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2018/08/21/release-1.5.3.html">Apache Flink 1.5.3 Released</a></h2>
@@ -365,19 +380,6 @@
<hr>
- <article>
- <h2 class="blog-title"><a
href="/features/2018/01/30/incremental-checkpointing.html">Managing Large State
in Apache Flink: An Intro to Incremental Checkpointing</a></h2>
-
- <p>30 Jan 2018
- Stefan Richter (<a
href="https://twitter.com/StefanRRicther">@StefanRRicther</a>) & Chris Ward
(<a href="https://twitter.com/chrischinch">@chrischinch</a>)</p>
-
- <p>Flink 1.3.0 introduced incremental checkpointing, making it possible
for applications with large state to generate checkpoints more efficiently.</p>
-
- <p><a
href="/features/2018/01/30/incremental-checkpointing.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -410,6 +412,16 @@
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page14/index.html b/content/blog/page14/index.html
index 0215cada1..8b78d6e2e 100644
--- a/content/blog/page14/index.html
+++ b/content/blog/page14/index.html
@@ -232,6 +232,19 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/features/2018/01/30/incremental-checkpointing.html">Managing Large State
in Apache Flink: An Intro to Incremental Checkpointing</a></h2>
+
+ <p>30 Jan 2018
+ Stefan Richter (<a
href="https://twitter.com/StefanRRicther">@StefanRRicther</a>) & Chris Ward
(<a href="https://twitter.com/chrischinch">@chrischinch</a>)</p>
+
+ <p>Flink 1.3.0 introduced incremental checkpointing, making it possible
for applications with large state to generate checkpoints more efficiently.</p>
+
+ <p><a
href="/features/2018/01/30/incremental-checkpointing.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2017/12/21/2017-year-in-review.html">Apache Flink in 2017: Year in
Review</a></h2>
@@ -368,20 +381,6 @@ what’s coming in Flink 1.4.0 as well as a preview of what
the Flink community
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2017/04/04/dynamic-tables.html">Continuous Queries on Dynamic
Tables</a></h2>
-
- <p>04 Apr 2017 by Fabian Hueske, Shaoxuan Wang, and Xiaowei Jiang
- </p>
-
- <p><p>Flink's relational APIs, the Table API and SQL, are unified APIs
for stream and batch processing, meaning that a query produces the same result
when being evaluated on streaming or static data.</p>
-<p>In this blog post we discuss the future of these APIs and introduce the
concept of Dynamic Tables. Dynamic tables will significantly expand the scope
of the Table API and SQL on streams and enable many more advanced use cases. We
discuss how streams and dynamic tables relate to each other and explain the
semantics of continuously evaluating queries on dynamic tables.</p></p>
-
- <p><a href="/news/2017/04/04/dynamic-tables.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -414,6 +413,16 @@ what’s coming in Flink 1.4.0 as well as a preview of what
the Flink community
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page15/index.html b/content/blog/page15/index.html
index 8f4fe6378..ad50ac63d 100644
--- a/content/blog/page15/index.html
+++ b/content/blog/page15/index.html
@@ -232,6 +232,20 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2017/04/04/dynamic-tables.html">Continuous Queries on Dynamic
Tables</a></h2>
+
+ <p>04 Apr 2017 by Fabian Hueske, Shaoxuan Wang, and Xiaowei Jiang
+ </p>
+
+ <p><p>Flink's relational APIs, the Table API and SQL, are unified APIs
for stream and batch processing, meaning that a query produces the same result
when being evaluated on streaming or static data.</p>
+<p>In this blog post we discuss the future of these APIs and introduce the
concept of Dynamic Tables. Dynamic tables will significantly expand the scope
of the Table API and SQL on streams and enable many more advanced use cases. We
discuss how streams and dynamic tables relate to each other and explain the
semantics of continuously evaluating queries on dynamic tables.</p></p>
+
+ <p><a href="/news/2017/04/04/dynamic-tables.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2017/03/29/table-sql-api-update.html">From Streams to Tables and
Back Again: An Update on Flink's Table & SQL API</a></h2>
@@ -361,21 +375,6 @@
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2016/08/08/release-1.1.0.html">Announcing Apache Flink
1.1.0</a></h2>
-
- <p>08 Aug 2016
- </p>
-
- <p><div class="alert alert-success"><strong>Important</strong>: The
Maven artifacts published with version 1.1.0 on Maven central have a Hadoop
dependency issue. It is highly recommended to use <strong>1.1.1</strong> or
<strong>1.1.1-hadoop1</strong> as the Flink version.</div>
-
-</p>
-
- <p><a href="/news/2016/08/08/release-1.1.0.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -408,6 +407,16 @@
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page16/index.html b/content/blog/page16/index.html
index a1bd853b0..7a46d6c6c 100644
--- a/content/blog/page16/index.html
+++ b/content/blog/page16/index.html
@@ -232,6 +232,21 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2016/08/08/release-1.1.0.html">Announcing Apache Flink
1.1.0</a></h2>
+
+ <p>08 Aug 2016
+ </p>
+
+ <p><div class="alert alert-success"><strong>Important</strong>: The
Maven artifacts published with version 1.1.0 on Maven central have a Hadoop
dependency issue. It is highly recommended to use <strong>1.1.1</strong> or
<strong>1.1.1-hadoop1</strong> as the Flink version.</div>
+
+</p>
+
+ <p><a href="/news/2016/08/08/release-1.1.0.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a href="/news/2016/05/24/stream-sql.html">Stream
Processing for Everyone with SQL and Apache Flink</a></h2>
@@ -362,19 +377,6 @@
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2015/12/11/storm-compatibility.html">Storm Compatibility in Apache
Flink: How to run existing Storm topologies on Flink</a></h2>
-
- <p>11 Dec 2015 by Matthias J. Sax (<a
href="https://twitter.com/">@MatthiasJSax</a>)
- </p>
-
- <p>In this blog post, we describe Flink's compatibility package for <a
href="https://storm.apache.org">Apache Storm</a> that allows embedding Spouts
(sources) and Bolts (operators) in a regular Flink streaming job. Furthermore,
the compatibility package provides a Storm-compatible API to execute
whole Storm topologies with (almost) no code adaptation.</p>
-
- <p><a href="/news/2015/12/11/storm-compatibility.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -407,6 +409,16 @@
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page17/index.html b/content/blog/page17/index.html
index 08b643a28..a4556c250 100644
--- a/content/blog/page17/index.html
+++ b/content/blog/page17/index.html
@@ -232,6 +232,19 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2015/12/11/storm-compatibility.html">Storm Compatibility in Apache
Flink: How to run existing Storm topologies on Flink</a></h2>
+
+ <p>11 Dec 2015 by Matthias J. Sax (<a
href="https://twitter.com/">@MatthiasJSax</a>)
+ </p>
+
+ <p>In this blog post, we describe Flink's compatibility package for <a
href="https://storm.apache.org">Apache Storm</a> that allows embedding Spouts
(sources) and Bolts (operators) in a regular Flink streaming job. Furthermore,
the compatibility package provides a Storm-compatible API to execute
whole Storm topologies with (almost) no code adaptation.</p>
+
+ <p><a href="/news/2015/12/11/storm-compatibility.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2015/12/04/Introducing-windows.html">Introducing Stream Windows in
Apache Flink</a></h2>
@@ -370,20 +383,6 @@ vertex-centric or gather-sum-apply to Flink dataflows.</p>
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2015/05/11/Juggling-with-Bits-and-Bytes.html">Juggling with Bits
and Bytes</a></h2>
-
- <p>11 May 2015 by Fabian Hueske (<a
href="https://twitter.com/">@fhueske</a>)
- </p>
-
- <p><p>Nowadays, a lot of open-source systems for analyzing large data
sets are implemented in Java or other JVM-based programming languages. The most
well-known example is Apache Hadoop, but also newer frameworks such as Apache
Spark, Apache Drill, and also Apache Flink run on JVMs. A common challenge that
JVM-based data analysis engines face is to store large amounts of data in
memory - both for caching and for efficient processing such as sorting and
joining of data. Managing the [...]
-<p>In this blog post we discuss how Apache Flink manages memory, talk about
its custom data de/serialization stack, and show how it operates on binary
data.</p></p>
-
- <p><a href="/news/2015/05/11/Juggling-with-Bits-and-Bytes.html">Continue
reading »</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -416,6 +415,16 @@ vertex-centric or gather-sum-apply to Flink dataflows.</p>
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page18/index.html b/content/blog/page18/index.html
index b2457bdd0..3e2f6fd8d 100644
--- a/content/blog/page18/index.html
+++ b/content/blog/page18/index.html
@@ -232,6 +232,20 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2015/05/11/Juggling-with-Bits-and-Bytes.html">Juggling with Bits
and Bytes</a></h2>
+
+ <p>11 May 2015 by Fabian Hueske (<a
href="https://twitter.com/">@fhueske</a>)
+ </p>
+
+ <p><p>Nowadays, a lot of open-source systems for analyzing large data
sets are implemented in Java or other JVM-based programming languages. The most
well-known example is Apache Hadoop, but also newer frameworks such as Apache
Spark, Apache Drill, and also Apache Flink run on JVMs. A common challenge that
JVM-based data analysis engines face is to store large amounts of data in
memory - both for caching and for efficient processing such as sorting and
joining of data. Managing the [...]
+<p>In this blog post we discuss how Apache Flink manages memory, talk about
its custom data de/serialization stack, and show how it operates on binary
data.</p></p>
+
+ <p><a href="/news/2015/05/11/Juggling-with-Bits-and-Bytes.html">Continue
reading »</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2015/04/13/release-0.9.0-milestone1.html">Announcing Flink
0.9.0-milestone1 preview release</a></h2>
@@ -377,21 +391,6 @@ and offers a new API including definition of flexible
windows.</p>
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2014/11/04/release-0.7.0.html">Apache Flink 0.7.0 available</a></h2>
-
- <p>04 Nov 2014
- </p>
-
- <p><p>We are pleased to announce the availability of Flink 0.7.0. This
release includes new user-facing features as well as performance and bug fixes,
brings the Scala and Java APIs in sync, and introduces Flink Streaming. A total
of 34 people have contributed to this release, a big thanks to all of them!</p>
-
-</p>
-
- <p><a href="/news/2014/11/04/release-0.7.0.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -424,6 +423,16 @@ and offers a new API including definition of flexible
windows.</p>
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page19/index.html b/content/blog/page19/index.html
index 423ff87f7..0a6f55882 100644
--- a/content/blog/page19/index.html
+++ b/content/blog/page19/index.html
@@ -232,6 +232,21 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2014/11/04/release-0.7.0.html">Apache Flink 0.7.0 available</a></h2>
+
+ <p>04 Nov 2014
+ </p>
+
+ <p><p>We are pleased to announce the availability of Flink 0.7.0. This
release includes new user-facing features as well as performance and bug fixes,
brings the Scala and Java APIs in sync, and introduces Flink Streaming. A total
of 34 people have contributed to this release, a big thanks to all of them!</p>
+
+</p>
+
+ <p><a href="/news/2014/11/04/release-0.7.0.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2014/10/03/upcoming_events.html">Upcoming Events</a></h2>
@@ -312,6 +327,16 @@ academic and open source project that Flink originates
from.</p>
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page2/index.html b/content/blog/page2/index.html
index 91106a53e..7b6154957 100644
--- a/content/blog/page2/index.html
+++ b/content/blog/page2/index.html
@@ -232,6 +232,19 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/2022/01/20/pravega-connector-101.html">Pravega Flink Connector
101</a></h2>
+
+ <p>20 Jan 2022
+ Yumin Zhou (Brian) (<a
href="https://twitter.com/crazy__zhou">@crazy__zhou</a>)</p>
+
+ <p>A brief introduction to the Pravega Flink Connector</p>
+
+ <p><a href="/2022/01/20/pravega-connector-101.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2022/01/17/release-1.14.3.html">Apache Flink 1.14.3 Release
Announcement</a></h2>
@@ -353,19 +366,6 @@
<hr>
- <article>
- <h2 class="blog-title"><a
href="/2021/10/26/sort-shuffle-part1.html">Sort-Based Blocking Shuffle
Implementation in Flink - Part One</a></h2>
-
- <p>26 Oct 2021
- Yingjie Cao (Kevin) & Daisy Tsang </p>
-
- <p>Flink has implemented the sort-based blocking shuffle (FLIP-148) for
batch data processing. In this blog post, we will take a close look at the
design & implementation details and see what we can gain from it.</p>
-
- <p><a href="/2021/10/26/sort-shuffle-part1.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -398,6 +398,16 @@
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page3/index.html b/content/blog/page3/index.html
index 44844c578..0f8859f69 100644
--- a/content/blog/page3/index.html
+++ b/content/blog/page3/index.html
@@ -232,6 +232,19 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/2021/10/26/sort-shuffle-part1.html">Sort-Based Blocking Shuffle
Implementation in Flink - Part One</a></h2>
+
+ <p>26 Oct 2021
+ Yingjie Cao (Kevin) & Daisy Tsang </p>
+
+ <p>Flink has implemented the sort-based blocking shuffle (FLIP-148) for
batch data processing. In this blog post, we will take a close look at the
design & implementation details and see what we can gain from it.</p>
+
+ <p><a href="/2021/10/26/sort-shuffle-part1.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2021/10/19/release-1.13.3.html">Apache Flink 1.13.3
Released</a></h2>
@@ -373,19 +386,6 @@ This new release brings various improvements to the
StateFun runtime, a leaner w
<hr>
- <article>
- <h2 class="blog-title"><a href="/2021/07/07/backpressure.html">How to
identify the source of backpressure?</a></h2>
-
- <p>07 Jul 2021
- Piotr Nowojski (<a
href="https://twitter.com/PiotrNowojski">@PiotrNowojski</a>)</p>
-
- <p>Apache Flink 1.13 introduced a couple of important changes in the
area of backpressure monitoring and performance analysis of Flink Jobs. This
blog post aims to introduce those changes and explain how to use them.</p>
-
- <p><a href="/2021/07/07/backpressure.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -418,6 +418,16 @@ This new release brings various improvements to the
StateFun runtime, a leaner w
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page4/index.html b/content/blog/page4/index.html
index 79208888c..d5a9d03e8 100644
--- a/content/blog/page4/index.html
+++ b/content/blog/page4/index.html
@@ -232,6 +232,19 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a href="/2021/07/07/backpressure.html">How to
identify the source of backpressure?</a></h2>
+
+ <p>07 Jul 2021
+ Piotr Nowojski (<a
href="https://twitter.com/PiotrNowojski">@PiotrNowojski</a>)</p>
+
+ <p>Apache Flink 1.13 introduced a couple of important changes in the
area of backpressure monitoring and performance analysis of Flink Jobs. This
blog post aims to introduce those changes and explain how to use them.</p>
+
+ <p><a href="/2021/07/07/backpressure.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2021/05/28/release-1.13.1.html">Apache Flink 1.13.1
Released</a></h2>
@@ -361,21 +374,6 @@ to develop scalable, consistent, and elastic distributed
applications.</p>
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2021/01/29/release-1.10.3.html">Apache Flink 1.10.3
Released</a></h2>
-
- <p>29 Jan 2021
- Xintong Song </p>
-
- <p><p>The Apache Flink community released the third bugfix version of
the Apache Flink 1.10 series.</p>
-
-</p>
-
- <p><a href="/news/2021/01/29/release-1.10.3.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -408,6 +406,16 @@ to develop scalable, consistent, and elastic distributed
applications.</p>
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page5/index.html b/content/blog/page5/index.html
index 4d50ab2c6..c462d8388 100644
--- a/content/blog/page5/index.html
+++ b/content/blog/page5/index.html
@@ -232,6 +232,21 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2021/01/29/release-1.10.3.html">Apache Flink 1.10.3
Released</a></h2>
+
+ <p>29 Jan 2021
+ Xintong Song </p>
+
+ <p><p>The Apache Flink community released the third bugfix version of
the Apache Flink 1.10 series.</p>
+
+</p>
+
+ <p><a href="/news/2021/01/29/release-1.10.3.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2021/01/19/release-1.12.1.html">Apache Flink 1.12.1
Released</a></h2>
@@ -357,19 +372,6 @@
<hr>
- <article>
- <h2 class="blog-title"><a
href="/2020/10/15/from-aligned-to-unaligned-checkpoints-part-1.html">From
Aligned to Unaligned Checkpoints - Part 1: Checkpoints, Alignment, and
Backpressure</a></h2>
-
- <p>15 Oct 2020
- Arvid Heise & Stephan Ewen </p>
-
- <p>Apache Flink’s checkpoint-based fault tolerance mechanism is one of
its defining features. Because of that design, Flink unifies batch and stream
processing, can easily scale to both very small and extremely large scenarios
and provides support for many operational features. In this post we recap the
original checkpointing process in Flink, its core properties and issues under
backpressure.</p>
-
- <p><a
href="/2020/10/15/from-aligned-to-unaligned-checkpoints-part-1.html">Continue
reading »</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -402,6 +404,16 @@
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page6/index.html b/content/blog/page6/index.html
index ff5ae2ed2..cd9f71bec 100644
--- a/content/blog/page6/index.html
+++ b/content/blog/page6/index.html
@@ -232,6 +232,19 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/2020/10/15/from-aligned-to-unaligned-checkpoints-part-1.html">From
Aligned to Unaligned Checkpoints - Part 1: Checkpoints, Alignment, and
Backpressure</a></h2>
+
+ <p>15 Oct 2020
+ Arvid Heise & Stephan Ewen </p>
+
+ <p>Apache Flink’s checkpoint-based fault tolerance mechanism is one of
its defining features. Because of that design, Flink unifies batch and stream
processing, can easily scale to both very small and extremely large scenarios
and provides support for many operational features. In this post we recap the
original checkpointing process in Flink, its core properties and issues under
backpressure.</p>
+
+ <p><a
href="/2020/10/15/from-aligned-to-unaligned-checkpoints-part-1.html">Continue
reading »</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2020/10/13/stateful-serverless-internals.html">Stateful Functions
Internals: Behind the scenes of Stateful Serverless</a></h2>
@@ -361,19 +374,6 @@ as well as increased observability for operational
purposes.</p>
<hr>
- <article>
- <h2 class="blog-title"><a
href="/2020/08/04/pyflink-pandas-udf-support-flink.html">PyFlink: The
integration of Pandas into PyFlink</a></h2>
-
- <p>04 Aug 2020
- Jincheng Sun (<a
href="https://twitter.com/sunjincheng121">@sunjincheng121</a>) & Markos
Sfikas (<a href="https://twitter.com/MarkSfik">@MarkSfik</a>)</p>
-
- <p>The Apache Flink community put some great effort into integrating
Pandas with PyFlink in the latest Flink version 1.11. Some of the added
features include support for Pandas UDF and the conversion between Pandas
DataFrame and Table. In this article, we will introduce how these
functionalities work and how to use them with a step-by-step example.</p>
-
- <p><a href="/2020/08/04/pyflink-pandas-udf-support-flink.html">Continue
reading »</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -406,6 +406,16 @@ as well as increased observability for operational
purposes.</p>
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page7/index.html b/content/blog/page7/index.html
index 96ddd5837..c0530af0c 100644
--- a/content/blog/page7/index.html
+++ b/content/blog/page7/index.html
@@ -232,6 +232,19 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/2020/08/04/pyflink-pandas-udf-support-flink.html">PyFlink: The
integration of Pandas into PyFlink</a></h2>
+
+ <p>04 Aug 2020
+ Jincheng Sun (<a
href="https://twitter.com/sunjincheng121">@sunjincheng121</a>) & Markos
Sfikas (<a href="https://twitter.com/MarkSfik">@MarkSfik</a>)</p>
+
+ <p>The Apache Flink community put some great effort into integrating
Pandas with PyFlink in the latest Flink version 1.11. Some of the added
features include support for Pandas UDF and the conversion between Pandas
DataFrame and Table. In this article, we will introduce how these
functionalities work and how to use them with a step-by-step example.</p>
+
+ <p><a href="/2020/08/04/pyflink-pandas-udf-support-flink.html">Continue
reading »</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2020/07/30/demo-fraud-detection-3.html">Advanced Flink Application
Patterns Vol.3: Custom Window Processing</a></h2>
@@ -367,19 +380,6 @@ and provide a tutorial for running Streaming ETL with
Flink on Zeppelin.</p>
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2020/06/11/community-update.html">Flink Community Update -
June'20</a></h2>
-
- <p>11 Jun 2020
- Marta Paes (<a href="https://twitter.com/morsapaes">@morsapaes</a>)</p>
-
- <p>And suddenly it’s June. The previous month has been calm on the
surface, but quite hectic underneath — the final testing phase for Flink 1.11
is moving at full speed, Stateful Functions 2.1 is out in the wild and Flink
has made it into Google Season of Docs 2020.</p>
-
- <p><a href="/news/2020/06/11/community-update.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -412,6 +412,16 @@ and provide a tutorial for running Streaming ETL with
Flink on Zeppelin.</p>
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page8/index.html b/content/blog/page8/index.html
index f40787813..5637edd10 100644
--- a/content/blog/page8/index.html
+++ b/content/blog/page8/index.html
@@ -232,6 +232,19 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2020/06/11/community-update.html">Flink Community Update -
June'20</a></h2>
+
+ <p>11 Jun 2020
+ Marta Paes (<a href="https://twitter.com/morsapaes">@morsapaes</a>)</p>
+
+ <p>And suddenly it’s June. The previous month has been calm on the
surface, but quite hectic underneath — the final testing phase for Flink 1.11
is moving at full speed, Stateful Functions 2.1 is out in the wild and Flink
has made it into Google Season of Docs 2020.</p>
+
+ <p><a href="/news/2020/06/11/community-update.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/news/2020/06/09/release-statefun-2.1.0.html">Stateful Functions 2.1.0
Release Announcement</a></h2>
@@ -358,19 +371,6 @@ This release marks a big milestone: Stateful Functions 2.0
is not only an API up
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2020/04/01/community-update.html">Flink Community Update -
April'20</a></h2>
-
- <p>01 Apr 2020
- Marta Paes (<a href="https://twitter.com/morsapaes">@morsapaes</a>)</p>
-
- <p>While things slow down around us, the Apache Flink community is
privileged to remain as active as ever. This blogpost combs through the past
few months to give you an update on the state of things in Flink — from core
releases to Stateful Functions; from some good old community stats to a new
development blog.</p>
-
- <p><a href="/news/2020/04/01/community-update.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -403,6 +403,16 @@ This release marks a big milestone: Stateful Functions 2.0
is not only an API up
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/blog/page9/index.html b/content/blog/page9/index.html
index 084d7d5c8..82d4a2847 100644
--- a/content/blog/page9/index.html
+++ b/content/blog/page9/index.html
@@ -232,6 +232,19 @@
<div class="col-sm-8">
<!-- Blog posts -->
+ <article>
+ <h2 class="blog-title"><a
href="/news/2020/04/01/community-update.html">Flink Community Update -
April'20</a></h2>
+
+ <p>01 Apr 2020
+ Marta Paes (<a href="https://twitter.com/morsapaes">@morsapaes</a>)</p>
+
+ <p>While things slow down around us, the Apache Flink community is
privileged to remain as active as ever. This blogpost combs through the past
few months to give you an update on the state of things in Flink — from core
releases to Stateful Functions; from some good old community stats to a new
development blog.</p>
+
+ <p><a href="/news/2020/04/01/community-update.html">Continue reading
»</a></p>
+ </article>
+
+ <hr>
+
<article>
<h2 class="blog-title"><a
href="/features/2020/03/27/flink-for-data-warehouse.html">Flink as Unified
Engine for Modern Data Warehousing: Production-Ready Hive Integration</a></h2>
@@ -355,21 +368,6 @@
<hr>
- <article>
- <h2 class="blog-title"><a
href="/news/2019/12/11/release-1.8.3.html">Apache Flink 1.8.3 Released</a></h2>
-
- <p>11 Dec 2019
- Hequn Cheng </p>
-
- <p><p>The Apache Flink community released the third bugfix version of
the Apache Flink 1.8 series.</p>
-
-</p>
-
- <p><a href="/news/2019/12/11/release-1.8.3.html">Continue reading
»</a></p>
- </article>
-
- <hr>
-
<!-- Pagination links -->
@@ -402,6 +400,16 @@
<ul id="markdown-toc">
+ <li><a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></li>
+
+
+
+
+
+
+
+
+
<li><a href="/2022/05/06/async-sink-base.html">The Generic Asynchronous
Base Sink</a></li>
diff --git a/content/downloads.html b/content/downloads.html
index 2462f61c0..0315ce5ad 100644
--- a/content/downloads.html
+++ b/content/downloads.html
@@ -240,6 +240,7 @@
<li><a href="#apache-flink-stateful-functions-320"
id="markdown-toc-apache-flink-stateful-functions-320">Apache Flink Stateful
Functions 3.2.0</a></li>
<li><a href="#apache-flink-ml-200"
id="markdown-toc-apache-flink-ml-200">Apache Flink ML 2.0.0</a></li>
<li><a href="#apache-flink-kubernetes-operator-010"
id="markdown-toc-apache-flink-kubernetes-operator-010">Apache Flink Kubernetes
Operator 0.1.0</a></li>
+ <li><a href="#apache-flink-table-store-010"
id="markdown-toc-apache-flink-table-store-010">Apache Flink Table Store
0.1.0</a></li>
<li><a href="#additional-components"
id="markdown-toc-additional-components">Additional Components</a></li>
<li><a href="#verifying-hashes-and-signatures"
id="markdown-toc-verifying-hashes-and-signatures">Verifying Hashes and
Signatures</a></li>
<li><a href="#maven-dependencies" id="markdown-toc-maven-dependencies">Maven
Dependencies</a> <ul>
@@ -256,6 +257,7 @@
<li><a href="#flink-shaded"
id="markdown-toc-flink-shaded">Flink-shaded</a></li>
<li><a href="#flink-ml" id="markdown-toc-flink-ml">Flink-ML</a></li>
<li><a href="#flink-kubernetes-operator"
id="markdown-toc-flink-kubernetes-operator">Flink-Kubernetes-Operator</a></li>
+ <li><a href="#flink-table-store"
id="markdown-toc-flink-table-store">Flink-Table-Store</a></li>
</ul>
</li>
</ul>
@@ -408,6 +410,23 @@
<hr />
+<p>Apache Flink® Table Store 0.1.0 is the latest stable release for the <a
href="https://github.com/apache/flink-table-store">Flink Table Store</a>.</p>
+
+<h2 id="apache-flink-table-store-010">Apache Flink Table Store 0.1.0</h2>
+
+<p>
+<a
href="https://www.apache.org/dyn/closer.lua/flink/flink-table-store-0.1.0/flink-table-store-0.1.0-src.tgz"
id="010-table-store-download-source">Apache Flink Table Store 0.1.0 Source
Release</a>
+(<a
href="https://downloads.apache.org/flink/flink-table-store-0.1.0/flink-table-store-0.1.0-src.tgz.asc">asc</a>,
<a
href="https://downloads.apache.org/flink/flink-table-store-0.1.0/flink-table-store-0.1.0-src.tgz.sha512">sha512</a>)
+</p>
+<p>
+<a
+href="https://repo.maven.apache.org/maven2/org/apache/flink/flink-table-store-dist/0.1.0/flink-table-store-dist-0.1.0.jar"
+id="010-table-store-download-binaries">Apache Flink Table Store 0.1.0
+Binaries Release</a>
+(<a
href="https://repo.maven.apache.org/maven2/org/apache/flink/flink-table-store-dist/0.1.0/flink-table-store-dist-0.1.0.jar.asc">asc</a>,
<a
href="https://repo.maven.apache.org/maven2/org/apache/flink/flink-table-store-dist/0.1.0/flink-table-store-dist-0.1.0.jar.sha1">sha1</a>)
+</p>
+
+<p>This version is compatible with Apache Flink version 1.15.0.</p>
+
+<hr />
+
<h2 id="additional-components">Additional Components</h2>
<p>These are components that the Flink project develops which are not part of
the
@@ -1523,6 +1542,17 @@ Flink Kubernetes Operator 0.1.0 - 2022-04-02
(<a
href="https://archive.apache.org/dist/flink/flink-kubernetes-operator-0.1.0/flink-kubernetes-operator-0.1.0-src.tgz">Source</a>,
<a
href="https://archive.apache.org/dist/flink/flink-kubernetes-operator-0.1.0/flink-kubernetes-operator-0.1.0-helm.tgz">Helm
Chart</a>)
</li>
+</ul>
+
+<h3 id="flink-table-store">Flink-Table-Store</h3>
+
+<ul>
+
+<li>
+Flink Table Store 0.1.0 - 2022-05-11
+(<a
href="https://archive.apache.org/dist/flink/flink-table-store-0.1.0/flink-table-store-0.1.0-src.tgz">Source</a>,
<a
href="https://repo.maven.apache.org/maven2/org/apache/flink/flink-table-store-dist/0.1.0/flink-table-store-dist-0.1.0.jar">Binaries</a>)
+</li>
+
</ul>
diff --git a/content/img/blog/table-store/table-store-architecture.png
b/content/img/blog/table-store/table-store-architecture.png
new file mode 100644
index 000000000..749af2190
Binary files /dev/null and
b/content/img/blog/table-store/table-store-architecture.png differ
diff --git a/content/index.html b/content/index.html
index daad84afe..742e062dd 100644
--- a/content/index.html
+++ b/content/index.html
@@ -397,6 +397,12 @@
<dl>
+ <dt> <a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></dt>
+ <dd><p>The Apache Flink community is pleased to announce the preview
release of the
+<a href="https://github.com/apache/flink-table-store">Apache Flink Table
Store</a> (0.1.0).</p>
+
+</dd>
+
<dt> <a href="/2022/05/06/async-sink-base.html">The Generic
Asynchronous Base Sink</a></dt>
<dd>An overview of the new AsyncBaseSink and how to use it for
building your own concrete sink</dd>
@@ -413,11 +419,6 @@ technology and remain one of the most active projects in
the Apache community. With the release of Flink 1.15, we are proud to announce
a number of
exciting changes.</p>
-</dd>
-
- <dt> <a
href="/news/2022/04/03/release-kubernetes-operator-0.1.0.html">Apache Flink
Kubernetes Operator 0.1.0 Release Announcement</a></dt>
- <dd><p>The Apache Flink Community is pleased to announce the preview
release of the Apache Flink Kubernetes Operator (0.1.0)</p>
-
</dd>
</dl>
diff --git a/content/index.html
b/content/news/2022/05/11/release-table-store-0.1.0.html
similarity index 60%
copy from content/index.html
copy to content/news/2022/05/11/release-table-store-0.1.0.html
index daad84afe..64645fa46 100644
--- a/content/index.html
+++ b/content/news/2022/05/11/release-table-store-0.1.0.html
@@ -5,7 +5,7 @@
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- The above 3 meta tags *must* come first in the head; any other head
content must come *after* these tags -->
- <title>Apache Flink: Stateful Computations over Data Streams</title>
+ <title>Apache Flink: Apache Flink Table Store 0.1.0 Release
Announcement</title>
<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
<link rel="icon" href="/favicon.ico" type="image/x-icon">
@@ -145,7 +145,7 @@
<li><a href="/gettinghelp.html">Getting Help</a></li>
<!-- Blog -->
- <li><a href="/blog/"><b>Flink Blog</b></a></li>
+ <li class="active"><a href="/blog/"><b>Flink Blog</b></a></li>
<!-- Flink-packages -->
@@ -177,7 +177,8 @@
<li>
- <a href="/zh/">中文版</a>
+ <!-- link to the Chinese home page when current is blog page
-->
+ <a href="/zh">中文版</a>
</li>
@@ -225,265 +226,132 @@
</div>
<div class="col-sm-9">
<div class="row-fluid">
-
<div class="col-sm-12">
- <p class="lead">
- <strong>Apache Flink<sup>®</sup> — Stateful Computations over Data
Streams</strong>
- </p>
- </div>
+ <div class="row">
+ <h1>Apache Flink Table Store 0.1.0 Release Announcement</h1>
+ <p><i>For building dynamic tables for both stream and batch processing
+in Flink, supporting high-speed data ingestion and timely data queries.</i></p>
-<div class="col-sm-12">
- <hr />
-</div>
+ <article>
+ <p>11 May 2022 Jingsong Lee & Jiangjie (Becket) Qin </p>
-</div>
+<p>The Apache Flink community is pleased to announce the preview release of the
+<a href="https://github.com/apache/flink-table-store">Apache Flink Table
Store</a> (0.1.0).</p>
-<!-- High-level architecture figure -->
+<p>Please check out the full <a
href="https://nightlies.apache.org/flink/flink-table-store-docs-release-0.1/">documentation</a>
for detailed information and user guides.</p>
-<div class="row front-graphic">
- <hr />
- <img src="/img/flink-home-graphic.png" width="800px" />
-</div>
+<p>Note: Flink Table Store is still in beta status and undergoing rapid
development.
+We do not recommend that you use it directly in a production environment.</p>
-<!-- Feature grid -->
+<h2 id="what-is-flink-table-store">What is Flink Table Store</h2>
-<!--
-<div class="row">
- <div class="col-sm-12">
- <hr />
- <h2><a href="/features.html">Features</a></h2>
- </div>
-</div>
--->
-<div class="row">
- <div class="col-sm-4">
- <div class="panel panel-default">
- <div class="panel-heading">
- <span class="glyphicon glyphicon-th"></span> <b>All streaming use
cases</b>
- </div>
- <div class="panel-body">
- <ul style="font-size: small;">
- <li>Event-driven Applications</li>
- <li>Stream & Batch Analytics</li>
- <li>Data Pipelines & ETL</li>
- </ul>
- <a href="/usecases.html">Learn more</a>
- </div>
- </div>
- </div>
- <div class="col-sm-4">
- <div class="panel panel-default">
- <div class="panel-heading">
- <span class="glyphicon glyphicon-ok"></span> <b>Guaranteed
correctness</b>
- </div>
- <div class="panel-body">
- <ul style="font-size: small;">
- <li>Exactly-once state consistency</li>
- <li>Event-time processing</li>
- <li>Sophisticated late data handling</li>
- </ul>
- <a
href="/flink-applications.html#building-blocks-for-streaming-applications">Learn
more</a>
- </div>
- </div>
- </div>
- <div class="col-sm-4">
- <div class="panel panel-default">
- <div class="panel-heading">
- <span class="glyphicon glyphicon glyphicon-sort-by-attributes"></span>
<b>Layered APIs</b>
- </div>
- <div class="panel-body">
- <ul style="font-size: small;">
- <li>SQL on Stream & Batch Data</li>
- <li>DataStream API & DataSet API</li>
- <li>ProcessFunction (Time & State)</li>
- </ul>
- <a href="/flink-applications.html#layered-apis">Learn more</a>
- </div>
- </div>
- </div>
-</div>
-<div class="row">
- <div class="col-sm-4">
- <div class="panel panel-default">
- <div class="panel-heading">
- <span class="glyphicon glyphicon-dashboard"></span> <b>Operational
Focus</b>
- </div>
- <div class="panel-body">
- <ul style="font-size: small;">
- <li>Flexible deployment</li>
- <li>High-availability setup</li>
- <li>Savepoints</li>
- </ul>
- <a href="/flink-operations.html">Learn more</a>
- </div>
- </div>
- </div>
- <div class="col-sm-4">
- <div class="panel panel-default">
- <div class="panel-heading">
- <span class="glyphicon glyphicon-fullscreen"></span> <b>Scales to any
use case</b>
- </div>
- <div class="panel-body">
- <ul style="font-size: small;">
- <li>Scale-out architecture</li>
- <li>Support for very large state</li>
- <li>Incremental checkpointing</li>
- </ul>
- <a href="/flink-architecture.html#run-applications-at-any-scale">Learn
more</a>
- </div>
- </div>
- </div>
- <div class="col-sm-4">
- <div class="panel panel-default">
- <div class="panel-heading">
- <span class="glyphicon glyphicon-flash"></span> <b>Excellent
Performance</b>
- </div>
- <div class="panel-body">
- <ul style="font-size: small;">
- <li>Low latency</li>
- <li>High throughput</li>
- <li>In-Memory computing</li>
- </ul>
- <a
href="/flink-architecture.html#leverage-in-memory-performance">Learn more</a>
- </div>
- </div>
- </div>
-</div>
+<p>Over the past years, thanks to our numerous contributors and users, Apache
+Flink has established itself as one of the best distributed computing engines,
+especially for stateful stream processing at large scale. However, people
+still face a few challenges when trying to obtain insights from their data in
+real time. Among them, one prominent problem is the lack of storage that
+caters to all of these computing patterns.</p>
-<!-- Events section -->
-<div class="row">
+<p>As of now, it is quite common for people to deploy several storage systems
+to work with Flink for different purposes. A typical setup is a message queue
+for stream processing, a scannable file system / object store for batch
+processing and ad-hoc queries, and a key-value store for lookups. Such an
+architecture poses challenges for data quality and system maintenance, due to
+its complexity and heterogeneity. This has become a major issue that hurts the
+end-to-end user experience of the stream-batch unification brought by Apache
+Flink.</p>
-<div class="col-sm-12">
- <hr />
-</div>
+<p>The goal of Flink Table Store is to address the above issues. This is an
+important step for the project: it extends Flink’s capabilities from the
+computing domain into the storage domain, so that we can provide a better
+end-to-end experience to users.</p>
-<div class="col-sm-3">
+<p>Flink Table Store aims to provide a unified storage abstraction, so that
+users don’t have to build such a hybrid storage layer by themselves. More
+specifically, Table Store offers the following core capabilities:</p>
- <h2><a>Upcoming Events</a></h2>
+<ul>
+ <li>Support storage of large datasets, with reads and writes in both batch
+and streaming fashion.</li>
+ <li>Support streaming queries with latencies down to milliseconds.</li>
+ <li>Support batch/OLAP queries with latencies down to seconds.</li>
+ <li>Support incremental snapshots for stream consumption by default, so
+users don’t need to solve the problem of combining different stores by
+themselves.</li>
+</ul>
-</div>
-<div class="col-sm-9">
- <!-- Flink Forward -->
- <a href="https://flink-forward.org" target="_blank">
- <img style="width: 180px; padding-right: 10px"
src="/img/flink-forward.png" alt="Flink Forward" />
- </a>
- <!-- ApacheCon -->
- <a href="https://www.apache.org/events/current-event" target="_blank">
- <img style="width: 200px; padding-right: 10px"
src="https://www.apache.org/events/current-event-234x60.png" alt="ApacheCon" />
- </a>
- <!-- Flink Forward Asia -->
- <a href="https://flink-forward.org.cn/" target="_blank">
- <img style="width: 230px" src="/img/flink-forward-asia.png" alt="Flink
Forward Asia" />
- </a>
-</div>
+<center>
+<img src="/img/blog/table-store/table-store-architecture.png" width="100%" />
+</center>
-</div>
+<p>In this preview version, as shown in the architecture above:</p>
-<!-- Updates section -->
+<ul>
+ <li>Users can use Flink to insert data into the Table Store, either by
+streaming the change log captured from databases, or by loading data in
+batches from other stores such as data warehouses.</li>
+ <li>Users can use Flink to query the Table Store in different ways,
+including streaming queries and batch/OLAP queries. It is also worth noting
+that users can query the Table Store from other engines such as Apache Hive
+as well.</li>
+ <li>Under the hood, Table Store uses a hybrid storage architecture: a Lake
+Store for historical data and a Queue system (Apache Kafka integration is
+currently supported) for incremental data. It provides incremental snapshots
+for hybrid streaming reads.</li>
+ <li>Table Store’s Lake Store keeps data as columnar files on a file system
+or object store, and uses an LSM structure to support large volumes of data
+updates and high-performance queries.</li>
+</ul>
-<div class="row">
+<p>Many thanks to the following systems for their inspiration: <a
+href="https://iceberg.apache.org/">Apache Iceberg</a> and <a
+href="http://rocksdb.org/">RocksDB</a>.</p>
-<div class="col-sm-12">
- <hr />
-</div>
+<h2 id="getting-started">Getting started</h2>
-<div class="col-sm-3">
+<p>Please refer to the <a
href="https://nightlies.apache.org/flink/flink-table-store-docs-release-0.1/docs/try-table-store/quick-start/">getting
started guide</a> for more details.</p>
- <h2><a href="/blog">Latest Blog Posts</a></h2>
+<h2 id="whats-next">What’s Next?</h2>
-</div>
+<p>The community is currently working on hardening the core logic, stabilizing
+the storage format, and adding the remaining bits to make Flink Table Store
+production-ready.</p>
-<div class="col-sm-9">
+<p>In the upcoming 0.2.0 release you can expect (at least) the following
+additional features:</p>
- <dl>
-
- <dt> <a href="/2022/05/06/async-sink-base.html">The Generic
Asynchronous Base Sink</a></dt>
- <dd>An overview of the new AsyncBaseSink and how to use it for
building your own concrete sink</dd>
-
- <dt> <a href="/2022/05/06/pyflink-1.15-thread-mode.html">Exploring the
thread mode in PyFlink</a></dt>
- <dd>Flink 1.15 introduced a new Runtime Execution Mode named 'thread'
mode in PyFlink. This post explains how it works and when to use it.</dd>
-
- <dt> <a href="/2022/05/06/restore-modes.html">Improvements to Flink
operations: Snapshots Ownership and Savepoint Formats</a></dt>
- <dd>This post will outline the journey of improving snapshotting in
past releases and the upcoming improvements in Flink 1.15, which includes
making it possible to take savepoints in the native state backend specific
format as well as clarifying snapshots ownership.</dd>
-
- <dt> <a href="/news/2022/05/05/1.15-announcement.html">Announcing the
Release of Apache Flink 1.15</a></dt>
- <dd><p>Thanks to our well-organized and open community, Apache Flink
continues
-<a href="https://www.apache.org/foundation/docs/FY2021AnnualReport.pdf">to
grow</a> as a
-technology and remain one of the most active projects in
-the Apache community. With the release of Flink 1.15, we are proud to announce
a number of
-exciting changes.</p>
-
-</dd>
-
- <dt> <a
href="/news/2022/04/03/release-kubernetes-operator-0.1.0.html">Apache Flink
Kubernetes Operator 0.1.0 Release Announcement</a></dt>
- <dd><p>The Apache Flink Community is pleased to announce the preview
release of the Apache Flink Kubernetes Operator (0.1.0)</p>
+<ul>
+ <li>Ecosystem: Support a Flink Table Store reader for the Apache Hive
+engine</li>
+ <li>Core: Support adjusting the number of buckets</li>
+ <li>Core: Support for append-only data, so Table Store is not limited to
+update scenarios</li>
+ <li>Core: Full schema evolution</li>
+ <li>Improvements based on feedback from the preview release</li>
+</ul>
-</dd>
-
- </dl>
+<p>In the medium term, you can also expect:</p>
-</div>
+<ul>
+ <li>Ecosystem: Support Flink Table Store Reader for Trino, PrestoDB and
Apache Spark</li>
+ <li>Flink Table Store Service to accelerate updates and improve query
performance</li>
+</ul>
-<!-- Scripts section -->
-
-<script type="text/javascript" src="/js/jquery.jcarousel.min.js"></script>
-
-<script type="text/javascript">
-
- $(window).load(function(){
- $(function() {
- var jcarousel = $('.jcarousel');
-
- jcarousel
- .on('jcarousel:reload jcarousel:create', function () {
- var carousel = $(this),
- width = carousel.innerWidth();
-
- if (width >= 600) {
- width = width / 4;
- } else if (width >= 350) {
- width = width / 3;
- }
-
- carousel.jcarousel('items').css('width', Math.ceil(width) +
'px');
- })
- .jcarousel({
- wrap: 'circular',
- autostart: true
- });
-
- $('.jcarousel-control-prev')
- .jcarouselControl({
- target: '-=1'
- });
-
- $('.jcarousel-control-next')
- .jcarouselControl({
- target: '+=1'
- });
-
- $('.jcarousel-pagination')
- .on('jcarouselpagination:active', 'a', function() {
- $(this).addClass('active');
- })
- .on('jcarouselpagination:inactive', 'a', function() {
- $(this).removeClass('active');
- })
- .on('click', function(e) {
- e.preventDefault();
- })
- .jcarouselPagination({
- perPage: 1,
- item: function(page) {
- return '<a href="#' + page + '">' + page + '</a>';
- }
- });
- });
- });
-
-</script>
-</div>
+<p>Please give the preview release a try, share your feedback on the Flink
mailing list and contribute to the project!</p>
+
+<h2 id="release-resources">Release Resources</h2>
+
+<p>The source artifacts and binaries are now available on the updated <a
href="https://flink.apache.org/downloads.html">Downloads</a>
+page of the Flink website.</p>
+
+<p>We encourage you to download the release and share your feedback with the
community through the <a
href="https://flink.apache.org/community.html#mailing-lists">Flink mailing
lists</a>
+or <a
href="https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK%20AND%20component%20%3D%20%22Table%20Store%22">JIRA</a>.</p>
+
+<h2 id="list-of-contributors">List of Contributors</h2>
+
+<p>The Apache Flink community would like to thank every one of the
+contributors who have made this release possible:</p>
+<p>Jane Chan, Jiangjie (Becket) Qin, Jingsong Lee, Leonard Xu, Nicholas Jiang,
Shen Zhu, tsreaper, Yubin Li</p>
+
+ </article>
+ </div>
+
+ <div class="row">
+ <div id="disqus_thread"></div>
+ <script type="text/javascript">
+ /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE
* * */
+ var disqus_shortname = 'stratosphere-eu'; // required: replace example
with your forum shortname
+
+ /* * * DON'T EDIT BELOW THIS LINE * * */
+ (function() {
+ var dsq = document.createElement('script'); dsq.type =
'text/javascript'; dsq.async = true;
+ dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
+ (document.getElementsByTagName('head')[0] ||
document.getElementsByTagName('body')[0]).appendChild(dsq);
+ })();
+ </script>
+ </div>
+ </div>
+</div>
</div>
</div>
diff --git a/content/zh/downloads.html b/content/zh/downloads.html
index e6db747a2..2492e5bb7 100644
--- a/content/zh/downloads.html
+++ b/content/zh/downloads.html
@@ -473,9 +473,6 @@
flink-docs-release-1.11/release-notes/flink-1.11.html">Flink 1.11 的发布说
<span class="nt"><version></span>3.2.0<span
class="nt"></version></span>
<span class="nt"></dependency></span></code></pre></div>
-<p>The <code>statefun-sdk</code> dependency is the only one you will need to
start developing applications.
-The <code>statefun-flink-harness</code> dependency includes a local execution
environment that allows you to locally test your application in an IDE.</p>
-
<p>本地开发程序仅需要依赖 <code>statefun-sdk</code>。<code>statefun-flink-harness</code>
提供了在 IDE 中测试用户开发的程序的本地执行环境。</p>
<h3 id="apache-flink-ml">Apache Flink ML</h3>
diff --git a/content/zh/index.html b/content/zh/index.html
index 7b3fda615..7702729ae 100644
--- a/content/zh/index.html
+++ b/content/zh/index.html
@@ -394,6 +394,12 @@
<dl>
+ <dt> <a href="/news/2022/05/11/release-table-store-0.1.0.html">Apache
Flink Table Store 0.1.0 Release Announcement</a></dt>
+ <dd><p>The Apache Flink community is pleased to announce the preview
release of the
+<a href="https://github.com/apache/flink-table-store">Apache Flink Table
Store</a> (0.1.0).</p>
+
+</dd>
+
<dt> <a href="/2022/05/06/async-sink-base.html">The Generic
Asynchronous Base Sink</a></dt>
<dd>An overview of the new AsyncBaseSink and how to use it for
building your own concrete sink</dd>
@@ -410,11 +416,6 @@ technology and remain one of the most active projects in
the Apache community. With the release of Flink 1.15, we are proud to announce
a number of
exciting changes.</p>
-</dd>
-
- <dt> <a
href="/news/2022/04/03/release-kubernetes-operator-0.1.0.html">Apache Flink
Kubernetes Operator 0.1.0 Release Announcement</a></dt>
- <dd><p>The Apache Flink Community is pleased to announce the preview
release of the Apache Flink Kubernetes Operator (0.1.0)</p>
-
</dd>
</dl>