Normally, the background thread shouldn't die. If it does, it's a Kafka bug.

How many messages and bytes are you sending per sec? Was I/O saturated on
the broker?

Thanks,

Jun

On Mon, Oct 29, 2012 at 12:07 PM, Mark Grabois <mark.grab...@trendrr.com> wrote:

> What's the best thing to do in each case? I already have
> log.flush.interval=10000; would upping that help in case 1? What's the best
> thing to do if the background thread dies?
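
For case 1 (the producer outrunning the broker), the 0.7 async producer exposes queue tuning through producer properties. A minimal sketch — the property names below are from the 0.7.x producer config and should be verified against the docs for your exact release; the values are illustrative, not recommendations:

```properties
# producer.properties sketch -- illustrative values only
producer.type=async
queue.size=20000              # default 10000; a larger queue absorbs bursts
queue.time=5000               # max ms a message waits before a batch is sent
batch.size=200                # messages per batch sent to the broker
# 0 = throw QueueFullException immediately when the queue is full;
# -1 = block the calling thread until space frees up
queue.enqueueTimeout.ms=0
```

Note that a larger queue only buys time if the broker eventually catches up; if the background sender is dead (case 2), no queue size helps.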
>
> On Fri, Oct 26, 2012 at 6:13 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:
>
> > Mark,
> >
> > In 0.7.x, QueueFullException generally happens when either -
> >
> > 1. the producer pushes data faster than the server can handle
> > 2. the background thread that sends data in async producer is dead
> >
> > Can you send around the full producer log and a thread dump of the
> > producer with the full queue ?
> >
> > Thanks,
> > Neha
> >
> > On Fri, Oct 26, 2012 at 12:37 PM, Mark Grabois <mark.grab...@trendrr.com> wrote:
> > > Got a ton of QueueFullExceptions on just a single one of several async
> > > producers; the queue was full for three hours straight and did not
> > > recover. The other four producers were fine. All were producing via
> > > zookeeper to the same kafka cluster, consisting of two kafka brokers
> > > with log.flush.interval=10000 on each. The failed producer was handling
> > > approximately the same level of throughput as two of the other
> > > producers.
> > >
> > > Could this be a bug?
> > >
> > > Thanks,
> > > Mark
> > >
> > > exception below:
> > >
> > >
> > > kafka.producer.async.QueueFullException (Event queue is full of unsent
> > > messages, could not send event: <message>)
> > > kafka.producer.async.AsyncProducer.send(line 121)
> > > kafka.producer.ProducerPool$$anonfun$send$1$$anonfun$apply$mcVI$sp$1$$anonfun$apply$2.apply(line 131)
> > > kafka.producer.ProducerPool$$anonfun$send$1$$anonfun$apply$mcVI$sp$1$$anonfun$apply$2.apply(line 131)
> > > scala.collection.Iterator$class.foreach(line 631)
> > > scala.collection.JavaConversions$JIteratorWrapper.foreach(line 474)
> > > scala.collection.IterableLike$class.foreach(line 79)
> > > scala.collection.JavaConversions$JListWrapper.foreach(line 521)
> > > kafka.producer.ProducerPool$$anonfun$send$1$$anonfun$apply$mcVI$sp$1.apply(line 131)
> > > kafka.producer.ProducerPool$$anonfun$send$1$$anonfun$apply$mcVI$sp$1.apply(line 130)
> > > scala.collection.mutable.ResizableArray$class.foreach(line 57)
> > > scala.collection.mutable.ArrayBuffer.foreach(line 43)
> > > kafka.producer.ProducerPool$$anonfun$send$1.apply$mcVI$sp(line 130)
> > > kafka.producer.ProducerPool$$anonfun$send$1.apply(line 102)
> > > kafka.producer.ProducerPool$$anonfun$send$1.apply(line 102)
> > > scala.collection.mutable.ResizableArray$class.foreach(line 57)
> > > scala.collection.mutable.ArrayBuffer.foreach(line 43)
> > > kafka.producer.ProducerPool.send(line 102)
> > > kafka.producer.Producer.zkSend(line 143)
> > > kafka.producer.Producer.send(line 105)
> > > kafka.javaapi.producer.Producer.send(line 104)
> >
>
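
Cause 2 in Neha's list above can be illustrated with a small self-contained model — a sketch only, not Kafka's actual implementation: an async producer is essentially a bounded queue drained by a background sender thread, and once that thread stops draining, every send fails the moment the queue fills.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Hedged sketch (not Kafka source): models why the 0.7 async producer throws
// QueueFullException when its background sender dies. send() enqueues with a
// zero timeout, mirroring the fail-immediately-when-full behavior.
public class AsyncQueueModel {
    // Tiny capacity so the failure mode shows up quickly in the demo.
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(3);

    static boolean send(String msg) throws InterruptedException {
        // Zero timeout: return false right away if the queue is full,
        // analogous to the producer raising QueueFullException.
        return queue.offer(msg, 0, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        // No thread ever drains the queue -- simulates a dead background
        // sender. The first 3 sends fit; every send after that fails.
        for (int i = 0; i < 5; i++) {
            System.out.println("send event-" + i + " -> " + send("event-" + i));
        }
        // prints: true, true, true, false, false
    }
}
```

The real producer never recovers from this state on its own, which matches the three-hour outage described above: the queue stays full because nothing removes elements from it.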
