Hi Ben,

Flink correctly maintains the offsets of all partitions that are read by a
Kafka consumer.
A checkpoint is only complete when all functions have successfully
checkpointed their state. For a Kafka consumer, this state is the current
reading offset.
In case of a failure, the offsets and the state of all functions are reset
to the last completed checkpoint. Since checkpoint barriers cannot overtake
records, a map thread that is blocked also holds back the checkpoint: the
offsets are not advanced past the stuck record, so it is re-read after a
recovery and no data is lost.
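
For illustration, here is a minimal sketch of such a job with checkpointing
enabled. The topic name, broker address, group id, and the map function are
placeholders, not taken from your code:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class CheckpointedKafkaJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 10s. The Kafka consumer's reading offsets
        // are part of the checkpointed state; on failure the job rewinds to
        // the offsets of the last completed checkpoint.
        env.enableCheckpointing(10_000L, CheckpointingMode.EXACTLY_ONCE);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "test-group");              // placeholder group id

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("test-topic", new SimpleStringSchema(), props);

        env.addSource(consumer)
           .rebalance()
           .map(String::toUpperCase) // stands in for your map(lambda)
           .setParallelism(3)
           .print();

        env.execute("Checkpointed Kafka job");
    }
}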

Best, Fabian

On Wed, Jun 5, 2019 at 10:58 PM xwang355 <ben....@gmail.com> wrote:

> Elias, thanks for your reply. In this case,
>
> *When the # of Kafka consumers = # of partitions and I use
> setParallelism(>1), something like this:
> 'messageStream.rebalance().map(lambda).setParallelism(3).print()'*
>
> If checkpointing is enabled, I assume Flink will commit the offsets in the
> 'right order' during a checkpoint.
>
>
> For example, if a batch of offsets comprises (1, 2, 3, 4, 5) and there are
> three worker threads (setParallelism(3)):
>
> thread 1 -> 1    [blocked by a synchronous call]
> thread 2 -> 2, 3 [success]
> thread 3 -> 4, 5 [success]
>
> Will Flink commit offset 5?
>
> I just want to make sure that Flink manages the pending offsets correctly,
> so that no data is lost if the above code is used in production.
>
> Thanks again!
