Xiaoyu,

In 0.7, the problem is that the producer doesn't receive any ack. So,
syncProducer.send is considered successful as soon as the messages are in
the socket buffer. If the broker goes down before the socket buffer is
flushed, those supposedly successful messages are lost. What's worse, the
producer doesn't know this, since it doesn't wait for a response. The
number of lost messages should be small, but I am not sure how to reduce
or prevent the loss in 0.7. This issue will be addressed in 0.8, in which
the producer will receive an ack. If a broker goes down in the middle of a
send, the producer will get an exception and can resend.
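For reference, in 0.8 this is controlled through producer config. A minimal
sketch of the relevant properties (names per the 0.8 producer config; worth
double-checking the exact values against the release you pick up):

```properties
# Wait for the broker to ack each request before send() returns.
# 0 = no ack (0.7-like behavior), 1 = leader ack, -1 = wait for all
# in-sync replicas.
request.required.acks=1

# Use the blocking (sync) producer so a failed send surfaces as an
# exception that the application can catch and retry.
producer.type=sync

# Number of times a failed send is retried before giving up.
message.send.max.retries=3
```

With acks=1 and the sync producer, a broker failure mid-send shows up as an
exception in the caller rather than a silent loss, so the application can
resend.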

Thanks,

Jun

On Thu, Aug 23, 2012 at 5:19 PM, xiaoyu wang <xiaoyu.w...@gmail.com> wrote:

> Hello,
>
> We are using the sync producer to push messages to the Kafka brokers. It
> stops once it receives an IOException: connection reset by peer. It seems
> that when a broker goes down, we lose some messages. I have reduced
> "log.flush.interval" to 1, but still see more than 200 lost messages.
>
> I also reduced the batch.size on the producer side to 10, but the message
> loss is about the same.
>
> So, what's the best way to minimize message loss when a broker goes down?
>
> Thanks,
>
> -Xiaoyu
>
