Hi jiangjie,
Yeah, I am using the second case (Flink 1.7.1, Kafka 0.10.2,
FlinkKafkaConsumer010).
But now there is a problem: the data is consumed normally, but the offset
commit does not continue. The following exception is found:
[image: image.png]
Becket Qin wrote on Thu, Sep 5, 2019 at 11:32 AM:
Hi,
No, I don't think so.
As long as you have a successful checkpoint, the offset will be committed.
Thanks,
Jiangjie (Becket) Qin
On Thu, Sep 5, 2019 at 4:56 PM Dominik Wosiński wrote:
Hey,
Yeah, I am using the first case. Is there a specific requirement for
checkpoints? Like, do they need to be externalized or something?
Best,
Dom.
On Thu, Sep 5, 2019 at 05:32, Becket Qin wrote:
Hi Dominik,
There has not been any change to the offset committing logic in the
KafkaConsumer for a while, but the logic is a little complicated. The
offset commit to Kafka is only enabled in the following two cases:
1. Flink checkpointing is enabled AND commitOffsetsOnCheckpoint is true
(default value
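In case it helps anyone reading the archive, case 1 above can be sketched roughly like this (a minimal sketch, assuming Flink 1.7 with the flink-connector-kafka-0.10 dependency on the classpath; the topic name, group id, checkpoint interval, and broker address are placeholders, not values from this thread):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

public class OffsetCommitSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Case 1: checkpointing enabled; offsets are committed back to
        // Kafka when a checkpoint completes successfully.
        env.enableCheckpointing(60_000); // every 60 s (placeholder interval)

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-flink-job");            // placeholder

        FlinkKafkaConsumer010<String> consumer = new FlinkKafkaConsumer010<>(
                "my-topic", new SimpleStringSchema(), props);
        // This is the commitOffsetsOnCheckpoint flag; true is the default,
        // so the call is shown only for clarity.
        consumer.setCommitOffsetsOnCheckpoints(true);

        env.addSource(consumer).print();
        env.execute("offset-commit-sketch");
    }
}
```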
Hey,
I was wondering whether something has changed for the KafkaConsumer, since
I am using Kafka 2.0.0 with Flink and wanted to use group offsets, but
there seems to be no change in the topic where Kafka stores its offsets;
after a restart, Flink uses `auto.offset.reset`, so it seems that there is
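For reference, the consumer properties involved here can be put together like this (a minimal, self-contained sketch; the broker address and group id are made-up placeholders). Note that `auto.offset.reset` is only consulted when the consumer group has no committed offset, which is consistent with the fallback behaviour described above:

```java
import java.util.Properties;

public class ConsumerProps {
    // Builds the Kafka consumer properties discussed in the thread.
    public static Properties build() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-flink-job");            // placeholder
        // Only used when no committed offset exists for the group:
        props.setProperty("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        Properties p = build();
        System.out.println(p.getProperty("group.id"));
        System.out.println(p.getProperty("auto.offset.reset"));
    }
}
```

With Flink's Kafka consumer, starting from the group's committed offsets is the default start position, and can be requested explicitly via `setStartFromGroupOffsets()` on the consumer.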