Hi Tao / Jiangjie,
I think a better fix here may be not letting MirrorMakerProducerCallback
extend ErrorLoggingCallback, but rather changing ErrorLoggingCallback
itself, since the current approach defeats the usage of logAsString, which
I think is useful for general error logging purposes. Rather we can
let
Hey Noah,
Carl is right about the offset. The offset to be committed should be the
largest consumed offset + 1. But this should not break the at-least-once
guarantee.
From what I can see, your consumer should not skip messages. Just to make
sure I understand your test correctly,
1. There is a consum
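The off-by-one convention above can be illustrated with a minimal sketch (plain Java, no broker involved; the offsets are made up for illustration):

```java
import java.util.List;

public class CommitOffsetDemo {
    public static void main(String[] args) {
        // Offsets of the messages the consumer has processed so far.
        List<Long> consumedOffsets = List.of(0L, 1L, 2L, 3L, 4L);

        // Kafka convention: the committed offset is the position of the
        // NEXT message to consume, i.e. largest consumed offset + 1.
        long largestConsumed = consumedOffsets.get(consumedOffsets.size() - 1);
        long offsetToCommit = largestConsumed + 1;

        System.out.println(offsetToCommit); // prints 5
    }
}
```

On restart, the consumer resumes from the committed offset, so the first message it reads is the one after the last message it fully consumed.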
Marina,
We do not have a command line tool to manually set offsets stored in Kafka
yet, but we are thinking about adding this feature soon. Could you
elaborate on your use case for manual offset modification from the command
line a little, so I can understand your scenario better while working on
the cmd design?
Gu
Hi Carl,
Generally, your approach works to guarantee at-least-once consumption -
basically, people have to commit offsets only after they have processed the
messages.
The only problem is that in the old high-level consumer, the consumer will
(and should) commit offsets during a consumer rebalance. To guarantee
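A minimal simulation of why committing only after processing preserves at-least-once delivery (plain Java, no Kafka client; the "crash" and all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class AtLeastOnceDemo {
    public static void main(String[] args) {
        String[] log = {"m0", "m1", "m2", "m3"};
        List<String> processed = new ArrayList<>();

        long committed = 0; // next offset to consume

        // First run: process m0 and m1, but "crash" before committing m1.
        for (long off = committed; off < log.length; off++) {
            processed.add(log[(int) off]);        // process first
            if (off == 1) break;                  // simulated crash BEFORE commit
            committed = off + 1;                  // commit only after processing
        }

        // Restart: resume from the last committed offset. m1 is processed
        // again (a duplicate), but nothing is lost -- at-least-once.
        for (long off = committed; off < log.length; off++) {
            processed.add(log[(int) off]);
            committed = off + 1;
        }

        System.out.println(processed); // prints [m0, m1, m1, m2, m3]
    }
}
```

The failure between processing and commit produces a duplicate rather than a skipped message, which is exactly the at-least-once trade-off.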
It seems that your log.index.size.max.bytes was 1K, which was probably too
small. This will cause your index file to reach its upper limit before
fully indexing the log segment.
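For reference, a broker-config sketch with more typical index sizing (the values below are the broker defaults at the time, shown only for illustration):

```
# server.properties (illustrative values)

# Maximum size of an offset index file. If this is far too small
# (e.g. 1K), the index fills up before the segment reaches
# log.segment.bytes and the broker rolls the segment early.
log.index.size.max.bytes=10485760

# Segment size the index needs to cover.
log.segment.bytes=1073741824
```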
Jiangjie (Becket) Qin
On 6/18/15, 4:52 PM, "Zakee" wrote:
>Any ideas on why one of the brokers which was down for a day, fa
Kafka is not concerned about your disks, and it does not do anything lower
level than writing files to a folder. Meaning, the best way to add more
capacity to your servers is to stop the service, add the drives, create a
new volume and data folder, copy over the data from the previous location,
and mount the new
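The steps above, as a rough runbook sketch (the service name, device name, and data directory are all assumptions; adapt them to your environment and verify each step before running anything):

```shell
# 1. Stop the broker (service name is an assumption).
sudo systemctl stop kafka

# 2. After adding the drives, create a filesystem on the new volume
#    (/dev/sdb1 is a placeholder device).
sudo mkfs.ext4 /dev/sdb1

# 3. Mount the new volume at a staging point and copy the existing data
#    (/var/kafka-logs is a placeholder for your log.dirs).
sudo mount /dev/sdb1 /mnt/newdata
sudo rsync -a /var/kafka-logs/ /mnt/newdata/

# 4. Re-mount the new volume at the original log.dirs location and restart.
sudo umount /mnt/newdata
sudo mount /dev/sdb1 /var/kafka-logs
sudo systemctl start kafka
```

Because the broker only sees files in its log.dirs folders, this copy-and-remount approach is transparent to Kafka as long as the data lands in the same path with the same ownership and permissions.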