Thanks Guozhang,

So increasing batch.size can lead to more duplicates in case of failure.
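To make the trade-off concrete, here is a minimal sketch of the producer
settings in play (the broker address and all values are illustrative only,
not recommendations):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("batch.size", "16384"); // bytes per partition batch; larger batches
                                          // raise throughput, but a retried batch
                                          // re-sends more records at once
        props.put("linger.ms", "5");      // wait up to 5 ms to fill a batch
        props.put("retries", "3");        // a retry after a missed ack can duplicate
                                          // the whole accepted-but-unacked batch
        props.put("acks", "all");
        System.out.println(props.getProperty("batch.size"));
    }
}
```

These properties would then be passed to a KafkaProducer constructor as usual.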

Also, when you said "The broker will accept a batch of records as a whole
or reject them": suppose a producer request contains two batches, the
first for Topic A / Partition 0 and the second for Topic B / Partition 1.
Does that mean batch 1 will fail if the acknowledgments for batch 2
cannot be satisfied? Is a producer request refused as a whole, or can a
single batch within a request be refused?

Thanks,


2016-08-31 20:53 GMT+02:00 Guozhang Wang <wangg...@gmail.com>:

> Hi Florian,
>
> The broker will accept a batch of records as a whole or reject them as a
> whole unless it encounters an IOException while trying to append the
> messages, which will be treated as a fatal error anyways.
>
> Duplicates usually happen when the whole batch is accepted but the ack was
> not delivered in time, and hence it was re-tried.
>
>
> Guozhang
>
>
> On Tue, Aug 30, 2016 at 2:45 AM, Florian Hussonnois <fhussonn...@gmail.com>
> wrote:
>
> > Hi all,
> >
> > I am using kafka_2.11-0.10.0.1. My understanding is that the producer API
> > batches records per partition to send efficient requests. We can
> > configure batch.size to increase the throughput.
> >
> > However, in case of failure, do all records within the batch fail? If that
> > is true, does that mean that increasing batch.size can also increase the
> > number of duplicates in case of retries?
> >
> > Thanks,
> >
> > Florian.
> >
>
>
>
> --
> -- Guozhang
>



-- 
Florian HUSSONNOIS
