Rajasekar Elango created KAFKA-1041:
---------------------------------------

             Summary: Number of file handles increases indefinitely in producer if broker host is unresolvable
                 Key: KAFKA-1041
                 URL: https://issues.apache.org/jira/browse/KAFKA-1041
             Project: Kafka
          Issue Type: Bug
          Components: producer 
    Affects Versions: 0.8
         Environment: *unix*
            Reporter: Rajasekar Elango
            Assignee: Jun Rao


We found an issue where, if the broker host is unresolvable, the number of open file handles keeps increasing for every message we produce and eventually uses up all available file handles in the operating system. If the broker itself is not running but the broker host name is resolvable, the open file handle count stays flat.

lsof output shows that the number of these open file handles continues to grow for every message we produce.

 java      19631    relango   81u     sock                0,6      0t0  196966526 can't identify protocol

I can easily reproduce this with the console producer. If I run the console producer with the right hostname and the broker is not running, the console producer exits after three tries. But if I run the console producer with an unresolvable broker host, it throws the exception below and continues to wait for user input; every time I enter a new message, it opens a socket and the file handle count keeps increasing.
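
To illustrate why the handle count grows, here is a minimal standalone sketch (illustrative only, not Kafka code; the object name, loop count and port are assumptions) of the pattern the stack trace points at: SocketChannel.open() allocates a file descriptor, connect() to an unresolvable address throws UnresolvedAddressException, and if nothing closes the channel on that path the descriptor is never released.

import java.net.InetSocketAddress
import java.nio.channels.{SocketChannel, UnresolvedAddressException}

object UnresolvableHostLeakSketch {
  def main(args: Array[String]): Unit = {
    for (_ <- 1 to 100) {
      val channel = SocketChannel.open()   // a file descriptor is allocated here
      try {
        channel.connect(new InetSocketAddress("localhost1", 6667))  // unresolvable host
      } catch {
        case _: UnresolvedAddressException =>
          // If this failure path (like the producer's retry path) never calls
          // channel.close(), each attempt leaks one open file handle.
      }
    }
    // Keep the process alive so the leaked handles can be inspected with
    // lsof -p <pid>; they show up as "can't identify protocol" sockets.
    Thread.sleep(60000)
  }
}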

Here is the exception in the producer:

ERROR fetching topic metadata for topics [Set(test-1378245487417)] from broker [ArrayBuffer(id:0,host:localhost1,port:6667)] failed (kafka.utils.Utils$)
kafka.common.KafkaException: fetching topic metadata for topics [Set(test-1378245487417)] from broker [ArrayBuffer(id:0,host:localhost1,port:6667)] failed
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:51)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:79)
        at kafka.utils.Utils$.swallow(Utils.scala:186)
        at kafka.utils.Logging$class.swallowError(Logging.scala:105)
        at kafka.utils.Utils$.swallowError(Utils.scala:45)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:79)
        at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
        at scala.collection.immutable.Stream.foreach(Stream.scala:526)
        at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
        at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
Caused by: java.nio.channels.UnresolvedAddressException
        at sun.nio.ch.Net.checkAddress(Net.java:30)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:487)
        at kafka.network.BlockingChannel.connect(BlockingChannel.scala:59)
        at kafka.producer.SyncProducer.connect(SyncProducer.scala:151)
        at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:166)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:73)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:117)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:37)
        ... 12 more
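
One possible direction for a fix (a hypothetical sketch only, not the actual kafka.network.BlockingChannel code; the object and method names are assumptions): close the channel again whenever the connect attempt itself fails, so the file handle is released even when the broker address cannot be resolved.

import java.net.InetSocketAddress
import java.nio.channels.SocketChannel

object SafeConnectSketch {
  // Hypothetical connect helper: if connect() throws for any reason
  // (including UnresolvedAddressException), close the channel so its
  // file descriptor is released instead of leaking.
  def connect(host: String, port: Int): SocketChannel = {
    val channel = SocketChannel.open()
    try {
      channel.connect(new InetSocketAddress(host, port))
      channel
    } catch {
      case e: Throwable =>
        channel.close()
        throw e
    }
  }
}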


