Vaibhav,

Does the elastic load balancer have any timeouts or quotas that kill existing
socket connections? Does the client resend succeed (you can configure resends
in DefaultEventHandler)?
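
For reference, the resend setting would live in the producer config, roughly
like the sketch below. This is only a sketch: the broker host names and topic
are placeholders, and the exact name of the retry property ("num.retries"
here) is an assumption on my part, so please verify it against your 0.7.x
release.

    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.javaapi.producer.ProducerData;
    import kafka.producer.ProducerConfig;

    public class AsyncProducerConfigExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker.list; the 0.7 format is brokerId:host:port.
            props.put("broker.list", "1:broker1:9092,2:broker2:9092,3:broker3:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("producer.type", "async");
            props.put("batch.size", "100");
            // Assumed name of the resend knob; check your 0.7.x producer config.
            props.put("num.retries", "3");

            Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
            producer.send(new ProducerData<String, String>("ad-impressions", "impression-payload"));
            producer.close();
        }
    }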

Thanks,

Jun

On Mon, Jun 25, 2012 at 6:01 PM, Vaibhav Puranik <vpura...@gmail.com> wrote:

> Hi all,
>
> We are sending our ad impressions to Kafka 0.7.0. I am using async
> producers in our web app.
> I am pooling Kafka producers with Commons Pool. Pool size - 10. batch.size
> is 100.
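> Roughly, the pooling setup looks like the sketch below (simplified; the
> class, topic, and host names are made up for illustration, and I am
> assuming commons-pool 1.x):
>
>     import java.util.Properties;
>
>     import org.apache.commons.pool.BasePoolableObjectFactory;
>     import org.apache.commons.pool.impl.GenericObjectPool;
>
>     import kafka.javaapi.producer.Producer;
>     import kafka.javaapi.producer.ProducerData;
>     import kafka.producer.ProducerConfig;
>
>     public class PooledProducerExample {
>
>         // Factory that builds one async producer per pooled object.
>         static class ProducerFactory extends BasePoolableObjectFactory {
>             @Override
>             public Object makeObject() throws Exception {
>                 Properties props = new Properties();
>                 props.put("zk.connect", "zk-host:2181");   // placeholder
>                 props.put("serializer.class", "kafka.serializer.StringEncoder");
>                 props.put("producer.type", "async");
>                 props.put("batch.size", "100");
>                 // A batch is also flushed when queue.time (ms) elapses, so
>                 // batches can be smaller than batch.size under light traffic.
>                 props.put("queue.time", "5000");
>                 return new Producer<String, String>(new ProducerConfig(props));
>             }
>         }
>
>         public static void main(String[] args) throws Exception {
>             // Pool of 10 producers, borrowed per request and returned after use.
>             GenericObjectPool pool = new GenericObjectPool(new ProducerFactory(), 10);
>
>             Producer<String, String> producer = (Producer<String, String>) pool.borrowObject();
>             try {
>                 producer.send(new ProducerData<String, String>("ad-impressions", "impression-json"));
>             } finally {
>                 pool.returnObject(producer);
>             }
>         }
>     }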
>
> We have 3 c1.xlarge instances with Kafka brokers installed behind an
> elastic load balancer in AWS.
> Every minute we lose some events because of the following exception:
>
> - Disconnecting from dualstack.kafka-xyz.us-east-1.elb.amazonaws.com:9092
> - Error in handling batch of 64 events
> java.io.IOException: Connection timed out
>    at sun.nio.ch.FileDispatcher.write0(Native Method)
>    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
>    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
>    at sun.nio.ch.IOUtil.write(IOUtil.java:75)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
>    at kafka.network.BoundedByteBufferSend.writeTo(BoundedByteBufferSend.scala:51)
>    at kafka.network.Send$class.writeCompletely(Transmission.scala:76)
>    at kafka.network.BoundedByteBufferSend.writeCompletely(BoundedByteBufferSend.scala:25)
>    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:88)
>    at kafka.producer.SyncProducer.send(SyncProducer.scala:87)
>    at kafka.producer.SyncProducer.multiSend(SyncProducer.scala:128)
>    at kafka.producer.async.DefaultEventHandler.send(DefaultEventHandler.scala:52)
>    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:46)
>    at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:119)
>    at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:98)
>    at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:74)
>    at scala.collection.immutable.Stream.foreach(Stream.scala:254)
>    at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:73)
>    at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:43)
> - Connected to dualstack.kafka-xyz.us-east-1.elb.amazonaws.com:9092 for
> producing
>
> Has anybody faced this kind of timeout before? Does it indicate any
> resource misconfiguration? The CPU usage on the brokers is pretty low.
> Also, in spite of setting batch.size to 100, the failing batches usually
> have only 50 to 60 events. Is there any other limit I am hitting?
>
> Any help is appreciated.
>
>
> Regards,
> Vaibhav
> GumGum
>
