Hi, could you add me to the mailing list?
Hi,
Currently we are trying to configure Kafka in our system for pulling messages
from queues.
We have multiple consumers (we might want to add consumers if the load on one
consumer increases) which need to receive and process messages from a Kafka
queue. Based on my understanding, under a single
Hi,
I observed some unexpected message loss in a Kafka fault tolerance test. In the
test, a topic with 3 replicas is created. A sync producer with acks=2 publishes
to the topic. A consumer consumes from the topic and tracks message ids. During
the test, the leader is killed. Both producer and
Hello Jiang,
Which version of Kafka are you using, and did you kill the broker with -9?
Guozhang
On Tue, Jul 15, 2014 at 9:23 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731
LEX -) jwu...@bloomberg.net wrote:
Hi,
I observed some unexpected message loss in a Kafka fault tolerance test. In the
Guozhang,
I'm testing on 0.8.1.1; just kill pid, no -9.
Regards,
Jiang
From: users@kafka.apache.org At: Jul 15 2014 13:27:50
To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org
Subject: Re: message loss for sync producer, acks=2, topic replicas=3
Hello Jiang,
Which
Hi Madhavi,
Dynamically rebalancing partitions based on processing efficiency and load
is a bit tricky to do in the current consumer, since rebalances will only be
triggered by a consumer membership change or a topic/partition change. For your
case you would probably stop the slow consumer so that a
What config property values did you use on producer/consumer/broker?
Guozhang
On Tue, Jul 15, 2014 at 10:32 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731
LEX -) jwu...@bloomberg.net wrote:
Guozhang,
I'm testing on 0.8.1.1; just kill pid, no -9.
Regards,
Jiang
From: users@kafka.apache.org
Guozhang,
Please find the config below:
Producer:
props.put("producer.type", "sync");
props.put("request.required.acks", "2");
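For context, a self-contained sketch of what the full 0.8.x sync-producer configuration might look like, using the two property values quoted above; the broker list is a placeholder, not from the thread:

```java
import java.util.Properties;

public class SyncProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        // Hypothetical broker list -- substitute your own brokers.
        props.put("metadata.broker.list", "broker1:9092,broker2:9092,broker3:9092");
        // Values from the thread: synchronous send, acknowledged once the
        // leader plus one follower have the message.
        props.put("producer.type", "sync");
        props.put("request.required.acks", "2");
        return props;
    }

    public static void main(String[] args) {
        Properties p = build();
        System.out.println(p.getProperty("request.required.acks")); // prints "2"
    }
}
```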
Guozhang, my coworker came up with an explanation: at one moment the leader L
and two followers F1, F2 are all in the ISR. The producer sends a message m1 and
receives acks from L and F1. Before the message is replicated to F2, L goes down.
In the following leader election, F2, instead of F1, becomes
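The scenario described above can be sketched as a toy simulation (names m1, L, F1, F2 are from the message; this illustrates the reasoning only and is not Kafka's replication code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AcksTwoLossSketch {
    // Returns true if the acknowledged message m1 survives on the new leader.
    static boolean simulate() {
        // Each replica's log, keyed by broker name.
        Map<String, List<String>> logs = new HashMap<>();
        logs.put("L", new ArrayList<>());
        logs.put("F1", new ArrayList<>());
        logs.put("F2", new ArrayList<>());

        // With acks=2 the send succeeds once the leader L and one
        // follower (here F1) have m1; F2 is still catching up.
        logs.get("L").add("m1");
        logs.get("F1").add("m1");
        // <- ack returned to the producer here, before F2 gets m1

        // L is killed; suppose leader election picks F2 (still in the ISR).
        logs.remove("L");
        String newLeader = "F2";

        // m1 was acknowledged, yet the new leader never had it.
        return logs.get(newLeader).contains("m1");
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // prints "false" -- acked data lost
    }
}
```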
That could be the cause, and it can be verified by changing acks to -1
and then checking the data loss ratio.
Guozhang
On Tue, Jul 15, 2014 at 12:49 PM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731
LEX -) jwu...@bloomberg.net wrote:
Guozhang, my coworker came up with an explanation: at one
When acks=-1 and the publisher thread count is high, it regularly happens that
only the leader remains in the ISR, and shutting down the leader then causes
message loss.
The leader election code shows that the new leader will be the first alive
broker in the ISR list. So it's possible the new leader
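A minimal sketch of the selection rule as described above (take the first broker in the ISR list that is still alive); this mirrors the described behaviour, not the actual 0.8.1 source:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LeaderElectionSketch {
    // First alive broker in ISR order wins; returns null if none is alive.
    // Note that ISR list order, not replication progress, decides the winner.
    static String electLeader(List<String> isr, Set<String> alive) {
        for (String broker : isr) {
            if (alive.contains(broker)) {
                return broker;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // If F2 happens to precede F1 in the ISR list when L dies,
        // F2 is elected even though F1 may be further caught up.
        List<String> isr = Arrays.asList("F2", "F1");
        Set<String> alive = new HashSet<>(Arrays.asList("F1", "F2"));
        System.out.println(electLeader(isr, alive)); // prints "F2"
    }
}
```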
Guozhang, I'm not sure he saw your message, since you just replied to the
mailing list...
François Langelier
Étudiant en génie Logiciel - École de Technologie Supérieure
http://www.etsmtl.ca/
Capitaine Club Capra http://capra.etsmtl.ca/
VP-Communication - CS Games http://csgames.org 2014
Jeux de
I think I know the answer to this already, but I wanted to check my assumptions
before proceeding.
We are using Kafka as a queueing mechanism for receiving messages from
stateless producers. We are operating in a legal framework where we can never
lose a committed message, but we can reject a