Got a ton of QueueFullExceptions on just one of several async producers;
its queue was full for three hours straight and never recovered. The
other four producers were fine. All were producing (via ZooKeeper) to the
same Kafka cluster of two brokers, each with log.flush.interval=10000.
The failed producer was handling roughly the same throughput as two of
the other producers.
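For reference, our producer config looks roughly like the sketch below. The property names are from the 0.7-era async producer docs; the broker address and the exact values here are placeholders, not our production settings:

```java
import java.util.Properties;

public class ProducerConfigSketch {

    // Sketch of an async producer config (0.7-era property names).
    // Values and the zk.connect address are illustrative placeholders.
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put("zk.connect", "zk1:2181");   // hypothetical ZooKeeper address
        props.put("producer.type", "async");   // use the async send path
        props.put("queue.size", "10000");      // max unsent events before QueueFullException
        props.put("queue.time", "5000");       // max ms an event may sit queued before a flush
        props.put("batch.size", "200");        // events sent per batch
        return props;
    }

    public static void main(String[] args) {
        Properties p = buildConfig();
        System.out.println(p.getProperty("queue.size"));
    }
}
```

With producer.type=async, send() enqueues into a bounded in-memory queue; once queue.size is reached it throws QueueFullException rather than blocking, which matches what we saw.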

Could this be a bug?

Thanks,
Mark

exception below:


kafka.producer.async.QueueFullException (Event queue is full of unsent
messages, could not send event: <message>)
kafka.producer.async.AsyncProducer.send(line 121)
kafka.producer.ProducerPool$$anonfun$send$1$$anonfun$apply$mcVI$sp$1$$anonfun$apply$2.apply(line 131)
kafka.producer.ProducerPool$$anonfun$send$1$$anonfun$apply$mcVI$sp$1$$anonfun$apply$2.apply(line 131)
scala.collection.Iterator$class.foreach(line 631)
scala.collection.JavaConversions$JIteratorWrapper.foreach(line 474)
scala.collection.IterableLike$class.foreach(line 79)
scala.collection.JavaConversions$JListWrapper.foreach(line 521)
kafka.producer.ProducerPool$$anonfun$send$1$$anonfun$apply$mcVI$sp$1.apply(line 131)
kafka.producer.ProducerPool$$anonfun$send$1$$anonfun$apply$mcVI$sp$1.apply(line 130)
scala.collection.mutable.ResizableArray$class.foreach(line 57)
scala.collection.mutable.ArrayBuffer.foreach(line 43)
kafka.producer.ProducerPool$$anonfun$send$1.apply$mcVI$sp(line 130)
kafka.producer.ProducerPool$$anonfun$send$1.apply(line 102)
kafka.producer.ProducerPool$$anonfun$send$1.apply(line 102)
scala.collection.mutable.ResizableArray$class.foreach(line 57)
scala.collection.mutable.ArrayBuffer.foreach(line 43)
kafka.producer.ProducerPool.send(line 102)
kafka.producer.Producer.zkSend(line 143)
kafka.producer.Producer.send(line 105)
kafka.javaapi.producer.Producer.send(line 104)
