[
https://issues.apache.org/jira/browse/KAFKA-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14197908#comment-14197908
]
Vishal commented on KAFKA-1745:
-------------------------------
No; I figured that calling producer.close() before returning the producer
object to the pool would make that producer unusable afterwards.
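To make the pooling approach concrete, here is a minimal sketch of a producer pool that borrows and returns producers without closing them, so the same few instances (and their file descriptors) are shared by every worker thread. The `SyncProducer` class below is a hypothetical stand-in for the real `kafka.javaapi.producer.Producer`, defined only so the sketch is self-contained; the `ArrayBlockingQueue`-backed pool is one possible implementation, not the reporter's actual code.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical stand-in for the Kafka 0.8 sync producer
// (kafka.javaapi.producer.Producer), defined here only so the sketch compiles.
class SyncProducer {
    private volatile boolean closed = false;

    void send(String message) {
        if (closed) throw new IllegalStateException("producer is closed");
        // the real producer would write to the broker here
    }

    void close() { closed = true; }
}

// Fixed-size pool: producers are created once, borrowed and returned by
// worker threads, and closed only at shutdown. Because a fixed set of
// producer instances is shared across all threads, per-thread KQUEUE/PIPE
// descriptors do not accumulate as pool threads come and go.
class ProducerPool {
    private final BlockingQueue<SyncProducer> pool;

    ProducerPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) pool.add(new SyncProducer());
    }

    SyncProducer borrow() throws InterruptedException {
        return pool.take(); // blocks until a producer is free
    }

    void release(SyncProducer p) {
        pool.offer(p); // do NOT close() here -- keep the producer reusable
    }

    void shutdown() {
        SyncProducer p;
        while ((p = pool.poll()) != null) p.close(); // close once, at the end
    }
}
```

A worker thread would call borrow(), send in a try block, and release() in a finally block, so producers always return to the pool; close() is deferred to shutdown().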
> Each new thread creates a PIPE and KQUEUE as open files during
> producer.send() and does not get cleared when the thread that creates them is
> cleared.
> -----------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: KAFKA-1745
> URL: https://issues.apache.org/jira/browse/KAFKA-1745
> Project: Kafka
> Issue Type: Bug
> Affects Versions: 0.8.1.1
> Environment: Mac OS Mavericks
> Reporter: Vishal
> Priority: Critical
>
> Hi,
> I'm using the Java client API for Kafka. I want to send data to Kafka
> through a producer pool, since I'm using a sync producer. The thread that
> sends the data comes from a thread pool that grows and shrinks with usage.
> When I send data from one thread, 1 KQUEUE and 2 PIPEs are created (observed
> via lsof). If I keep using the same thread it's fine, but when a new thread
> sends data to Kafka (using producer.send()) a new KQUEUE and 2 PIPEs are
> created.
> This is okay on its own, but when a thread is removed from the thread pool
> and a new one is created, yet more KQUEUEs and PIPEs are created. The problem
> is that the old ones are never destroyed and keep showing up as open files.
> This is a major problem: the number of open files keeps increasing and never
> decreases.
> Please suggest any solutions.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)