Hello,

Have you solved this? I'm encountering the same issue with the new
Producer on the 0.9.0.1 client against a 0.9.0.1 Kafka broker. We tried
the same breakdown scenarios (Kafka broker(s), ZooKeeper) with the
0.8.2.2 client and 0.8.2.2 broker, and the retries work as expected on
the older version. I'm going to check whether someone else has filed a
related issue about it.

Regards,
Nicolas PHUNG

On Thu, Apr 7, 2016 at 5:15 AM, christopher palm <cpa...@gmail.com> wrote:

> Hi, thanks for the suggestion.
> I lowered the broker message.max.bytes to be smaller than my payload,
> so that I now receive an
> org.apache.kafka.common.errors.RecordTooLargeException.
>
> I still don't see the retries happening. The default backoff is 100 ms,
> and my producer loops for a few seconds, long enough to trigger the
> retry.
>
> Is there something else I need to set?
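>
> For reference, a sketch of the retry-related settings (a minimal,
> untested example assuming the 0.9 Java client; from what I can tell the
> client only retries errors that are retriable, i.e. subclasses of
> org.apache.kafka.common.errors.RetriableException, and
> RecordTooLargeException is not one of them, which may explain why I
> still see no retries):
>
> props.put("retries", 3); // resend attempts, retriable errors only
> props.put("retry.backoff.ms", 100); // wait between attempts; 100 ms is the default
> props.put("max.in.flight.requests.per.connection", 1); // keep ordering across retries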
>
> I have tried this with both sync and async producers, with the same
> results.
>
> Thanks,
>
> Chris
>
> On Wed, Apr 6, 2016 at 12:01 AM, Manikumar Reddy
> <manikumar.re...@gmail.com> wrote:
>
> > Hi,
> >
> > Producer message size validation checks ("buffer.memory",
> > "max.request.size") happen before batching and sending messages. The
> > retry mechanism applies only to broker-side errors and network errors.
> > Try changing the "message.max.bytes" broker config property to
> > simulate a broker-side error.
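> >
> > As an illustration, here is a rough, untested sketch of a send
> > callback that logs whether a given failure is one the client considers
> > retriable (producer and record construction omitted; imports from
> > org.apache.kafka.clients.producer and org.apache.kafka.common.errors
> > assumed):
> >
> > producer.send(record, new Callback() {
> >     @Override
> >     public void onCompletion(RecordMetadata metadata, Exception e) {
> >         if (e == null) {
> >             System.out.println("sent, offset " + metadata.offset());
> >         } else if (e instanceof RetriableException) {
> >             // the producer has already retried this up to "retries" times
> >             System.err.println("retriable error: " + e);
> >         } else {
> >             // e.g. RecordTooLargeException: fails immediately, no retries
> >             System.err.println("non-retriable error: " + e);
> >         }
> >     }
> > });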
> >
> > On Wed, Apr 6, 2016 at 9:53 AM, christopher palm <cpa...@gmail.com>
> > wrote:
> >
> > > Hi All,
> > >
> > > I am working with the KafkaProducer on Kafka 0.9.0.1, using the
> > > properties below, so that the producer keeps trying to send upon
> > > failure. I am forcing a failure by setting my buffer size smaller
> > > than my payload, which causes the expected exception below.
> > >
> > > I don't see the producer retry the send on receiving this failure.
> > >
> > > Am I missing something in the configuration to allow the producer to
> > > retry on failed sends?
> > >
> > > Thanks,
> > > Chris
> > >
> > > java.util.concurrent.ExecutionException:
> > > org.apache.kafka.common.errors.RecordTooLargeException: The message
> > > is 8027 bytes when serialized which is larger than the total memory
> > > buffer you have configured with the buffer.memory configuration.
> > >
> > > props.put("bootstrap.servers", bootStrapServers);
> > > props.put("acks", "all");
> > > props.put("retries", 3); // Try for 3 strikes
> > > props.put("batch.size", batchSize); // Need to see if this number should increase under load
> > > props.put("linger.ms", 1); // After 1 ms fire the batch even if the batch isn't full.
> > > props.put("buffer.memory", buffMemorySize);
> > > props.put("max.block.ms", 500);
> > > props.put("max.in.flight.requests.per.connection", 1);
> > > props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
> > > props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
> > >
> >
>
