On Wednesday, November 20, 2019, Matthias J. Sax wrote:
> I am not sure what Spring does, but using Kafka Streams writing the
> output and committing offset would be part of the same transaction.
>
> It seems Spring is doing something else and thus, it seems, it does not
> use the EOS API
On Wednesday, November 20, 2019, Eric Azama wrote:
> Calls to KafkaConsumer#poll() are completely independent of commits. As
> such they will always return the next set of records, even if the previous
> set has not been committed. This is how the consumer acts, regardless of
> the Exactly Once
I am not sure what Spring does, but using Kafka Streams writing the
output and committing offset would be part of the same transaction.
It seems Spring is doing something else and thus, it seems, it does not
use the EOS API correctly.
If you use transactions to copy data from input to output
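The pattern described here, writing the output records and committing the input offsets in one transaction, can be sketched with the plain Java clients. The broker address, topic names, group id, and transactional.id below are placeholders, and the loop is a minimal sketch rather than production code:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalCopy {
    public static void main(String[] args) {
        Properties cp = new Properties();
        cp.put("bootstrap.servers", "localhost:9092");
        cp.put("group.id", "copy-group");
        cp.put("enable.auto.commit", "false");
        cp.put("isolation.level", "read_committed");
        cp.put("key.deserializer", StringDeserializer.class.getName());
        cp.put("value.deserializer", StringDeserializer.class.getName());

        Properties pp = new Properties();
        pp.put("bootstrap.servers", "localhost:9092");
        pp.put("transactional.id", "copy-tx-0");
        pp.put("key.serializer", StringSerializer.class.getName());
        pp.put("value.serializer", StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
            consumer.subscribe(Collections.singletonList("input"));
            producer.initTransactions();
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (ConsumerRecord<String, String> r : records) {
                    producer.send(new ProducerRecord<>("output", r.key(), r.value()));
                    offsets.put(new TopicPartition(r.topic(), r.partition()),
                            new OffsetAndMetadata(r.offset() + 1));
                }
                // The offset commit is part of the same transaction as the
                // output writes: either both become visible, or neither does.
                producer.sendOffsetsToTransaction(offsets, "copy-group");
                producer.commitTransaction();
            }
        }
    }
}
```

Note that the consumer's own commit API is never used here; the producer carries the offsets inside the transaction.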
Calls to KafkaConsumer#poll() are completely independent of commits. As
such they will always return the next set of records, even if the previous
set has not been committed. This is how the consumer acts, regardless of
the Exactly Once semantics.
In order for the Consumer to reset to the
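A rough sketch of that behavior and of an explicit reset, with placeholder broker, topic, and group names: poll() keeps advancing the consumer's own position, so re-reading uncommitted records requires an explicit seek(), for example back to the last committed offset.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SeekBack {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("group.id", "my-group");
        p.put("enable.auto.commit", "false");
        p.put("key.deserializer", StringDeserializer.class.getName());
        p.put("value.deserializer", StringDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("input", 0);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
            consumer.assign(Collections.singletonList(tp));
            // poll() advances the position whether or not anything was
            // committed ...
            consumer.poll(Duration.ofSeconds(1));
            // ... so re-reading requires an explicit seek, e.g. back to the
            // last committed offset (null means nothing committed yet).
            OffsetAndMetadata committed = consumer.committed(tp);
            consumer.seek(tp, committed == null ? 0L : committed.offset());
            // The next poll() re-delivers everything after that offset.
        }
    }
}
```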
Hello Kafka users, developers and client-developers,
This is the second candidate for release of Apache Kafka 2.4.0.
This release includes many new features, including:
- Allow consumers to fetch from closest replica
- Support for incremental cooperative rebalancing to the consumer rebalance
Hi Ashu, others,
I have tested with the latest kafkacat, with librdkafka 1.2.2, which can also do
transactional reading.
Reading the partition with the offset reset to the beginning will read until
offset 10794778 (this is the offset at which the LSO is stuck).
Reading the partition from any offset
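For reference, that kind of read_committed read can be done with kafkacat by passing the librdkafka isolation.level consumer property; the broker, topic, and partition below are placeholders:

```shell
# Consume partition 0 from the beginning in read_committed mode.
# A read_committed consumer stops at the LSO, so a stuck LSO shows up
# as the consumer halting at that offset.
kafkacat -C -b localhost:9092 -t my-topic -p 0 -o beginning \
  -X isolation.level=read_committed
```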
OK. I'm at a point where I believe the exactly-once guarantee is in question.
Topic input has 10 partitions; topic output has 10 partitions.
The producer writes messages 1 to 100 to topic input.
The CTP process calls poll. It receives 100 messages, 10 in each partition.
The process is simple mirroring: take from input, write to
Alright got that.
What about resetting or changing the consumer offset? You can try to
change it to some previous offset and restart the consumer. The consumer may
have to do duplicate processing, but it should work.
On Wed, Nov 20, 2019 at 7:18 PM Pieter Hameete wrote:
> Hi Ashu,
>
> thanks for the
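The offset reset suggested above can also be done with the stock Kafka tooling while the group is stopped; the group, topic, and offset values below are placeholders:

```shell
# Preview the reset first, then re-run with --execute to apply it.
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic:0 \
  --reset-offsets --to-offset 12345 --dry-run
```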
The Spring framework works around this by providing a method like
doInTransaction(listOfStuff).
Behind the scenes it manages a pool of transactional producers with ids
like transaction_prefix + id.
So each call to doInTransaction may initiate a transaction.
On Friday, November 15, 2019, Matthias
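A minimal sketch of such a pool, with illustrative names (TxnIdPool, borrow, release are placeholders, not Spring's actual API): each doInTransaction call would borrow an id, configure a producer with it as transactional.id, run the work, and release the id afterwards.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical model of a pool of transactional ids such as "tx-0", "tx-1",
// formed as transaction_prefix + id. A real framework would pool the
// producers themselves; only the id handling is modeled here.
public class TxnIdPool {
    private final BlockingQueue<String> ids;

    public TxnIdPool(String prefix, int size) {
        this.ids = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            ids.add(prefix + i);
        }
    }

    // Blocks until an id is free, bounding the number of concurrent
    // transactions by the pool size.
    public String borrow() throws InterruptedException {
        return ids.take();
    }

    // Returns the id (and its producer) to the pool for reuse.
    public void release(String id) {
        ids.add(id);
    }
}
```

Reusing a fixed set of transactional.id values this way keeps the broker-side transaction state bounded instead of creating a fresh id per call.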
Hi Ashu,
thanks for the tip. We have tried restarting the consumer, but that did not
help. All read_committed consumers for this partition (we have multiple) have
the same issue.
The partition already had different leaders when we performed a
rolling restart of the brokers. All brokers give
Hello Pieter,
We had a similar issue.
Did you try restarting your consumer? If that doesn't fix it, then you can
try deleting that particular topic partition from the broker and restarting
the broker so that it will get in sync. Please make sure that you have a
replica in sync before deleting the
Hello,
After having some broker issues (too many open files) we managed to recover our
brokers, but read_committed consumers are stuck for a specific topic partition.
It seems like the LSO is stuck at a specific offset. The transactional producer
for the topic partition is working without