No worries.
I figured that out already.
Thanks all.
Best regards,
Jack
-----Original Message-----
From: Jack Yang [mailto:j...@uow.edu.au]
Sent: Monday, 29 August 2016 10:13 AM
To: users@kafka.apache.org
Subject: RE: consumer with version 0.10.0
Hi there,
My fault. When I produce messages, it starts to consume.
Now my questions are:
1. Is there a way to check the offset status for a new consumer?
2. For the new consumer, is it possible to force it to consume messages
starting from an earlier offset?
For instance, in the old simple-level cons
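For the 0.10 new consumer, both questions are usually handled with the stock tooling: the kafka-consumer-groups.sh script for inspecting committed offsets and lag, and either --from-beginning or the consumer's seek API for replaying. A sketch, assuming a group named my-group and a topic my-topic on localhost:9092 (both placeholders):

```
# 1. Check the offset status of a new-consumer group:
bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 \
    --describe --group my-group

# 2. Re-read a topic from the earliest available offset:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic my-topic --from-beginning
```

Programmatically, the new Java consumer exposes seek(TopicPartition, offset) and seekToBeginning(...) to rewind a subscribed consumer to an earlier offset, and auto.offset.reset=earliest controls where a group with no committed offset starts.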
Jan,
Thanks for the example of reprocessing the messages. I think in any case,
reconsuming all the messages will definitely work. What we want to do here
is to see if we can avoid doing that by only reconsuming necessary
messages.
In the scenario you mentioned, can you store an "offset-of-last-up
Thanks
for your response. I now understand why the error is occurring and was able to
resolve the issue. I will describe the solution for others who may run
into the same problem. Simply put, I needed to commit offsets during group
rebalance.
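Concretely, the fix was along these lines: register a ConsumerRebalanceListener when subscribing, and commit the current offsets when partitions are revoked. A minimal untested sketch against the 0.9/0.10 Java consumer (the topic name is a placeholder; a running broker and a configured consumer are assumed):

```java
consumer.subscribe(Arrays.asList("my-topic"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Commit what has been processed so far, before these partitions
        // are reassigned to another consumer in the group.
        consumer.commitSync();
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Nothing extra needed here for this particular fix.
    }
});
```

Without the commit in onPartitionsRevoked, any progress since the last auto-commit is replayed by whichever consumer picks up the partition after the rebalance.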
I tried
SimpleConsumer (described in my original e
It is per partition, not cumulative across partitions.
On Aug 27, 2016 3:10 AM, "Amit Karyekar" wrote:
> Hi,
>
> We’re using Kafka 0.9
>
> Wanted to check whether log.retention.bytes works on per partition basis
> or is it cumulative of all partitions?
>
> Regards,
> Amit
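To make the per-partition semantics concrete: log.retention.bytes bounds each partition's log individually, so a topic's total on-disk footprint scales with its partition count. For example, in server.properties (the value is only illustrative):

```
# Retention limit applied to EACH partition's log, not the topic as a whole.
# A topic with 6 partitions may therefore retain up to 6 x 1GB.
log.retention.bytes=1073741824
```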
I am looking at the ways one might get data loss and duplication in a Kafka
cluster, and I need some help/pointers/discussion.
So far, here's what I have come up with:
Loss at the producer side
Since the send call actually adds data to a client-side cache/buffer, a
crash of the producer can potentially r
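For reference, the standard producer settings that narrow this window in the Java client are the acknowledgement and retry knobs (values here are only illustrative), plus calling flush()/close() on shutdown to drain the buffer:

```
# Wait for the full in-sync replica set to acknowledge each write.
acks=all
# Retry transient send failures instead of silently dropping the record.
retries=3
# Avoid reordering during retries (at some cost in throughput).
max.in.flight.requests.per.connection=1
```

Note that retries traded against max.in.flight is also where producer-side duplication can come from: a retried batch whose first attempt actually succeeded is written twice.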