Re: [ANNOUNCE] Apache Flink 1.3.3 released
Hi everyone,

Thanks, but I don't see the binaries for 1.3.3 being pushed anywhere in the Maven repositories [1]. Can we expect them to show up there as well eventually?

[1] https://repo.maven.apache.org/maven2/org/apache/flink/flink-java/

Kind regards,
-Phil

On Fri, Mar 16, 2018 at 3:36 PM, Stephan Ewen wrote:

> This release fixed a quite critical bug that could lead to loss of
> checkpoint state: https://issues.apache.org/jira/browse/FLINK-7783
>
> We recommend that all users on Flink 1.3.2 upgrade to 1.3.3.
>
> On Fri, Mar 16, 2018 at 10:31 AM, Till Rohrmann wrote:
>
>> Thanks for managing the release, Gordon, and also thanks to the community!
>>
>> Cheers,
>> Till
>>
>> On Fri, Mar 16, 2018 at 9:05 AM, Fabian Hueske wrote:
>>
>>> Thanks for managing this release, Gordon!
>>>
>>> Cheers, Fabian
>>>
>>> 2018-03-15 21:24 GMT+01:00 Tzu-Li (Gordon) Tai:
>>>
>>>> The Apache Flink community is very happy to announce the release of
>>>> Apache Flink 1.3.3, which is the third bugfix release for the Apache
>>>> Flink 1.3 series.
>>>>
>>>> Apache Flink® is an open-source stream processing framework for
>>>> distributed, high-performing, always-available, and accurate data
>>>> streaming applications.
>>>>
>>>> The release is available for download at:
>>>> https://flink.apache.org/downloads.html
>>>>
>>>> Please check out the release blog post for an overview of the
>>>> improvements in this bugfix release:
>>>> https://flink.apache.org/news/2018/03/15/release-1.3.3.html
>>>>
>>>> The full release notes are available in Jira:
>>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12341142
>>>>
>>>> We would like to thank all contributors of the Apache Flink community
>>>> who made this release possible!
>>>>
>>>> Cheers,
>>>> Gordon

--
"We cannot change the cards we are dealt, just how we play the hand." - Randy Pausch
[jira] [Created] (FLINK-8484) Kinesis consumer re-reads closed shards on job restart
Philip Luppens created FLINK-8484:
-------------------------------------

Summary: Kinesis consumer re-reads closed shards on job restart
Key: FLINK-8484
URL: https://issues.apache.org/jira/browse/FLINK-8484
Project: Flink
Issue Type: Bug
Components: Kinesis Connector
Affects Versions: 1.3.2
Reporter: Philip Luppens

We're using the connector to subscribe to streams varying from 1 to 100 shards, and we used the kinesis-scaling-utils to dynamically scale the Kinesis stream up and down during peak times. What we noticed is that, once we had closed shards, any Flink job restart from a checkpoint or savepoint would result in shards being re-read from the event horizon, duplicating our events.

We started inspecting the checkpoint state and found that the shards were stored correctly with the proper sequence number (including the closed shards), but that upon restart, the older closed shards would nevertheless be read from the event horizon, as if their restored state were being ignored.

In the end, we believe we found the problem: in the FlinkKinesisConsumer's run() method, the shard returned by the KinesisDataFetcher is looked up against the shard metadata from the restored state, but this is done via a containsKey() call, which relies on StreamShardMetadata's equals() method. That method compares all properties, including the endingSequenceNumber, which may have changed between the restored checkpoint and the fresh shard description (it is set once a shard is closed). The equality check therefore fails, containsKey() returns false, and the shard is re-read from the event horizon even though it was present in the restored state.

We've created a workaround where we only compare the shard ID and stream name when matching restored shard state against freshly discovered shards, and this appears to work correctly.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
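The failure mode described above can be illustrated in isolation. The sketch below uses a hypothetical, simplified `ShardMetadata` class (a stand-in for Flink's `StreamShardMetadata`; the class name, fields, and `sameShardAs` helper are illustrative, not the actual Flink API). It shows why full `equals()` fails once a shard's `endingSequenceNumber` is set on close, while a comparison restricted to stream name and shard ID, as in the reported workaround, still matches:

```java
import java.util.Objects;

public class ShardEqualityDemo {

    // Hypothetical, simplified stand-in for StreamShardMetadata.
    static final class ShardMetadata {
        final String streamName;
        final String shardId;
        final String endingSequenceNumber; // null while the shard is open

        ShardMetadata(String streamName, String shardId, String endingSequenceNumber) {
            this.streamName = streamName;
            this.shardId = shardId;
            this.endingSequenceNumber = endingSequenceNumber;
        }

        // Full equality over all fields, mirroring the problematic behaviour:
        // the endingSequenceNumber changes when a shard is closed, so a
        // restored (open) shard no longer equals its freshly described copy.
        @Override
        public boolean equals(Object o) {
            if (!(o instanceof ShardMetadata)) return false;
            ShardMetadata other = (ShardMetadata) o;
            return Objects.equals(streamName, other.streamName)
                    && Objects.equals(shardId, other.shardId)
                    && Objects.equals(endingSequenceNumber, other.endingSequenceNumber);
        }

        @Override
        public int hashCode() {
            // Hash only the stable identity so closed/open copies of the
            // same shard could still collide into the same bucket.
            return Objects.hash(streamName, shardId);
        }

        // Workaround-style comparison: a shard's identity is its stream
        // name plus shard ID, regardless of mutable metadata.
        boolean sameShardAs(ShardMetadata other) {
            return Objects.equals(streamName, other.streamName)
                    && Objects.equals(shardId, other.shardId);
        }
    }

    public static void main(String[] args) {
        // Shard as stored in the checkpoint: still open, no ending sequence number.
        ShardMetadata restored = new ShardMetadata("my-stream", "shardId-000000000000", null);
        // Same shard as freshly described after a reshard: now closed.
        ShardMetadata described = new ShardMetadata("my-stream", "shardId-000000000000", "49570");

        System.out.println(restored.equals(described));      // full equality fails
        System.out.println(restored.sameShardAs(described)); // identity match succeeds
    }
}
```

Because the restored and described copies are not `equals()`, a `containsKey()` lookup keyed on the full metadata misses, which is exactly the situation that caused the restored sequence numbers to be ignored; matching on the stable identity avoids it.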