[flink-web] 01/03: Rebuild website

2019-12-05 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 80ae871143ff8ad817d564ad8dc2f5433771bd3d
Author: Dian Fu 
AuthorDate: Thu Dec 5 10:10:25 2019 +0800

Rebuild website
---
 content/blog/feed.xml | 419 ++
 1 file changed, 419 insertions(+)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 1410119..be37a00 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,425 @@
 https://flink.apache.org/blog/feed.xml; rel="self" 
type="application/rss+xml" />
 
 
+How to query Pulsar Streams using Apache Flink
+In a previous story (https://flink.apache.org/2019/05/03/pulsar-flink.html) on the Flink blog, we explained the different ways that Apache Flink (https://flink.apache.org/) and Apache Pulsar (https://pulsar.apache.org/) can integrate to provide elastic data processing at large scale. This blog post discusses the new developments and integrations between the two fra [...]
+
+A short intro to Apache Pulsar
+
+Apache Pulsar is a flexible pub/sub messaging system, backed by durable log storage. Some of the framework’s highlights include multi-tenancy, a unified message model, structured event streams and a cloud-native architecture that make it a perfect fit for a wide set of use cases, ranging from billing, payments and trading services all the way to the unification of the different messaging architectures in an organization. If you are interested in finding out more about Pulsar, yo [...]
+
+Existing Pulsar & Flink integration (Apache Flink 1.6+)
+
+The existing integration between Pulsar and Flink exploits Pulsar as a message queue in a Flink application. Flink developers can utilize Pulsar as a streaming source and streaming sink for their Flink applications by selecting a specific Pulsar source and connecting to their desired Pulsar cluster and topic:
+
+// create and configure Pulsar consumer
+PulsarSourceBuilder<String> builder = PulsarSourceBuilder
+  .builder(new SimpleStringSchema())
+  .serviceUrl(serviceUrl)
+  .topic(inputTopic)
+  .subsciptionName(subscription);
+SourceFunction<String> src = builder.build [...]
+// ingest DataStream with Pulsar consumer
+DataStream<String> words = env.addSource [...]
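For context, here is a minimal sketch of how this source setup is typically wired into a complete job. The connection values (serviceUrl, inputTopic, subscription) are placeholders, the builder calls simply mirror the snippet above, and PulsarSourceBuilder comes from the Pulsar Flink connector, whose package depends on the connector version, so that import is omitted.

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// placeholder connection settings -- adjust to your Pulsar cluster
String serviceUrl = "pulsar://localhost:6650";
String inputTopic = "persistent://public/default/input-topic";
String subscription = "flink-subscription";

// create and configure the Pulsar consumer, mirroring the builder calls above
PulsarSourceBuilder<String> builder = PulsarSourceBuilder
    .builder(new SimpleStringSchema())
    .serviceUrl(serviceUrl)
    .topic(inputTopic)
    .subsciptionName(subscription);   // method name spelled as in the snippet above
SourceFunction<String> src = builder.build();

// ingest a DataStream with the Pulsar consumer
DataStream<String> words = env.addSource(src);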
+
+Pulsar streams can then get connected to the Flink processing logic…
+
+// perform computation on DataStream (here a simple WordCount)
+DataStream<WordWithCount> wc = words
+  .flatMap((FlatMapFunction<String, WordWithCount>) [...]
+    collector.collect(new WordWithCount(word, 1 [...]
+  })
+  .returns(WordWithCount.class)
+  .keyBy("word")
+  .timeWindow(Time.seconds(5))
+  .reduce((ReduceFunction<WordWithCount>) [...]

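Since the diff cuts the example off mid-expression, here is a minimal, self-contained sketch of what such a word-count pipeline typically looks like when written out in full. It only uses the operators visible in the snippet above (flatMap, returns, keyBy, timeWindow, reduce); the WordWithCount POJO and the whitespace splitting are assumptions for illustration.

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.time.Time;

public class PulsarWordCountSketch {

    // simple POJO so that keyBy("word") can address the field by name
    public static class WordWithCount {
        public String word;
        public long count;

        public WordWithCount() {}

        public WordWithCount(String word, long count) {
            this.word = word;
            this.count = count;
        }
    }

    // `words` is the DataStream<String> obtained from the Pulsar source above
    public static DataStream<WordWithCount> wordCount(DataStream<String> words) {
        return words
            .flatMap((FlatMapFunction<String, WordWithCount>) (line, out) -> {
                // emit a count of 1 for every whitespace-separated token
                for (String token : line.split("\\s+")) {
                    out.collect(new WordWithCount(token, 1L));
                }
            })
            .returns(WordWithCount.class)             // type hint needed because of lambda type erasure
            .keyBy("word")                            // key by POJO field name
            .timeWindow(Time.seconds(5))              // 5-second tumbling windows
            .reduce((ReduceFunction<WordWithCount>) (a, b) ->
                new WordWithCount(a.word, a.count + b.count));
    }
}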
[flink-web] 01/03: Rebuild website

2019-09-13 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 36c49b45c70261cdd53752f625641d1f540ca1f0
Author: Fabian Hueske 
AuthorDate: Fri Sep 13 09:48:30 2019 +0200

Rebuild website
---
 content/blog/feed.xml | 103 +++---
 content/downloads.html|   2 +-
 content/zh/downloads.html |   2 +-
 3 files changed, 99 insertions(+), 8 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index a3f9991..2113d6f 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,97 @@
 https://flink.apache.org/blog/feed.xml; rel="self" 
type="application/rss+xml" />
 
 
+Apache Flink 1.8.2 Released
+The Apache Flink community released the second bugfix version of the Apache Flink 1.8 series.
+
+This release includes 23 fixes and minor improvements for Flink 1.8.1. Below is a detailed list of all fixes and improvements.
+
+We highly recommend that all users upgrade to Flink 1.8.2.
+
+Updated Maven dependencies:
+
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-java</artifactId>
+  <version>1.8.2</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java_2.11</artifactId>
+  <version>1.8.2</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients_2.11</artifactId>
+  <version>1.8.2</version>
+</dependency>
+
+You can find the binaries on the updated Downloads page (/downloads.html).
+
+List of resolved issues:
+
+Bug
+
+- FLINK-13941 (https://issues.apache.org/jira/browse/FLINK-13941) - Prevent data-loss by not cleaning up small part files from S3.
+- FLINK-9526 (https://issues.apache.org/jira/browse/FLINK-9526) - BucketingSink end-to-end test failed on Travis
+- FLINK-10368 (https://issues.apache.org/jira/browse/FLINK-10368) - 'Kerberized YARN on Docker test' unstable
+- FLINK-12319 (https://issues.apache.org/jira/browse/FLINK-12319) - StackOverFlowError in cep.nfa.sharedbuffer.SharedBuffer
+- FLINK-12736 (https://issues.apache.org/jira/browse/FLINK-12736) - ResourceManager may release TM with allocated slots
+- FLINK-12889 (https://issues.apache.org/jira/browse/FLINK-12889) - Job keeps in FAILING state
+- FLINK-13059 (https://issues.apache.org/jira/browse/FLINK-13059) - Cassandra Connector leaks Semaphore on Exception; hangs on close
+- FLINK-13159 (https://issues.apache.org/jira/browse/FLINK-13159) - java.lang.ClassNotFoundException when restore job
+- FLINK-13367 (https://issues.apache.org/jira/browse/FLINK-13367) - Make ClosureCleaner detect writeReplace serialization override
+- FLINK-13369 (https://issues.apache.org/jira/browse/FLINK-13369) - Recursive closure cleaner ends up with stackOverflow in case of circular dependency
+- FLINK-13394 (https://issues.apache.org/jira/browse/FLINK-13394) - Use fallback unsafe secure MapR in nightly.sh
+- FLINK-13484 (https://issues.apache.org/jira/browse/FLINK-13484) - ConnectedComponents end-to-end test instable with NoResourceAvailableException
+- FLINK-13499 (https://issues.apache.org/jira/browse/FLINK-13499) - Remove dependency on MapR artifact repository
+- FLINK-13508 (https://issues.apache.org/jira/browse/FLINK-13508) - CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
+- FLINK-13586 (https://issues.apache.org/jira/browse/FLINK-13586) - Method ClosureCleaner.clean broke backward compatibility between 1.8.0 and 1.8.1
+- FLINK-13761 (https://issues.apache.org/jira/browse/FLINK-13761) - `SplitStream` should be deprecated because `SplitJavaStream` is deprecated
+- FLINK-13789 (https://issues.apache.org/jira/browse/FLINK-13789) - Transactional Id Generation fails due to user code impacting formatting string
+- [...]

[flink-web] 01/03: Rebuild website.

2019-06-05 Thread nkruber
This is an automated email from the ASF dual-hosted git repository.

nkruber pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit d7dbb09d30afc2605dfc087207af7d46d8e0b8b8
Author: Nico Kruber 
AuthorDate: Tue Jun 4 09:25:00 2019 +0200

Rebuild website.
---
 content/blog/feed.xml | 131 ++
 1 file changed, 131 insertions(+)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 59b7950..31828ce 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,137 @@
 https://flink.apache.org/blog/feed.xml; rel="self" 
type="application/rss+xml" />
 
 
+State TTL in Flink 1.8.0: How to Automatically Cleanup Application State in Apache Flink
+A common requirement for many stateful streaming applications is to automatically clean up application state for effective management of your state size, or to control how long the application state can be accessed (e.g. due to legal regulations like the GDPR). The state time-to-live (TTL) feature was initiated in Flink 1.6.0 and enabled application state cleanup and efficient state size management in Apache Flink.
+
+In this post, we motivate the State TTL feature and discuss its use cases. Moreover, we show how to use and configure it. We explain how Flink internally manages state with TTL and present some exciting additions to the feature in Flink 1.8.0. The blog post concludes with an outlook on future improvements and extensions.
+
+The Transient Nature of State
+
+There are two major reasons why state should be maintained only for a limited time. For example, let’s imagine a Flink application that ingests a stream of user login events and stores for each user the time of the last login to improve the experience of frequent visitors.
+
+- Controlling the size of state.
+  Being able to efficiently manage an ever-growing state size is a primary use case for state TTL. Oftentimes, data needs to be persisted temporarily while there is some user activity around it, e.g. web sessions. When the activity ends there is no longer interest in that data while it still occupies storage. Flink 1.8.0 introduces background cleanup of old state based on TTL that makes the eviction of no-longer-necessary data frictionless. Previously, the application developer had to take [...]
+
+- Complying with data protection and sensitive data requirements.
+  Recent developments around data privacy regulations, such as the General Data Protection Regulation (GDPR) introduced by the European Union, make compliance with such data requirements or treating sensitive data a top priority for many use cases and applications. An example of such use cases includes applications that require keeping data for a specific timeframe and preventing access to it thereafter. This is a common challenge for companies providing short-term services to their custom [...]
+
+Both requirements can be addressed by a feature that periodically, yet continuously, removes the state for a key once it becomes unnecessary or unimportant and there is no requirement to keep it in storage any more.
+
+State TTL for continuous cleanup of application state
+
+The 1.6.0 release of Apache Flink introduced the State TTL feature. It enabled developers of stream processing applications to configure the state of operators to expire and be cleaned up after a defined timeout (time-to-live). In Flink 1.8.0 the feature was extended, including continuous cleanup of old entries for both the RocksDB and the heap state backends (FSStateBackend and MemoryStateBackend), enabling a continuous cleanup process of old entries (according to the TTL setti [...]
+
+In Flink’s DataStream API, application state is defined by a state descriptor (https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/state.html#using-managed-keyed-state). State TTL is configured by passing a StateTtlConfig object to a state descriptor. The following Java example shows how to create a state TTL configuration and provide it to the state descriptor that holds the last login ti [...]
+
+import org.apache.flink.api.common.state.StateTtlConfig;
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+
+StateTtlConfig ttlConfig = StateTtlConfig
+    .newBuilder(Time. [...]
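The diff is truncated at this point; for reference, a minimal sketch of what a complete configuration along these lines typically looks like with the Flink 1.8 API follows. The seven-day TTL, the descriptor name "lastUserLogin", and the Long value type are assumptions chosen to match the last-login example from the post.

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

// state expires seven days after it was last written
StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.days(7))
    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)             // refresh the TTL on every write
    .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired) // never hand out expired values
    .cleanupFullSnapshot()                                                  // drop expired entries when taking full snapshots
    .build();

// attach the TTL configuration to the descriptor that holds the last login time
ValueStateDescriptor<Long> lastUserLogin =
    new ValueStateDescriptor<>("lastUserLogin", Long.class);
lastUserLogin.enableTimeToLive(ttlConfig);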