[ https://issues.apache.org/jira/browse/KAFKA-305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236782#comment-13236782 ]

Jun Rao commented on KAFKA-305:
-------------------------------

Prashanth,

v2 patch looks good. 

As for 5, I do see transient failures of testZKSendWithDeadBroker. This is a 
bit weird. During broker shutdown we close the ZK client, which should cause 
all of the broker's ephemeral nodes to be deleted in ZK. Could you verify 
whether this is indeed ZK's behavior?
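
For reference, ZK removes a session's ephemeral nodes as soon as the session 
ends, including on a clean close(). Below is a minimal sketch of one way to 
check that behavior with the plain ZooKeeper client; the connect string, znode 
path, timeouts, and class name are illustrative assumptions, not taken from 
the test.

import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralNodeCheck {

    // Open a session and block until it is actually connected.
    private static ZooKeeper connect(String hosts, int sessionTimeoutMs) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper(hosts, sessionTimeoutMs, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        return zk;
    }

    public static void main(String[] args) throws Exception {
        // Session 1 registers an ephemeral node, much like a broker registering itself.
        ZooKeeper session1 = connect("localhost:2181", 30000);
        session1.create("/ephemeral-check", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // A clean close() ends the session, so ZK should remove the ephemeral node
        // right away rather than waiting for the session timeout to expire.
        session1.close();

        // A second session should no longer see the node (exists(...) returns null).
        ZooKeeper session2 = connect("localhost:2181", 30000);
        System.out.println("node after close: " + session2.exists("/ephemeral-check", false));
        session2.close();
    }
}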

As for BrokerPartitionInfo and ProducerPool, we should clean up dead brokers. 
Could you open a separate jira to track that?
                
> SyncProducer does not correctly timeout
> ---------------------------------------
>
>                 Key: KAFKA-305
>                 URL: https://issues.apache.org/jira/browse/KAFKA-305
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.7, 0.8
>            Reporter: Prashanth Menon
>            Priority: Critical
>         Attachments: KAFKA-305-v1.patch, KAFKA-305-v2.patch
>
>
> It turns out that the way we use the channel in SyncProducer for blocking 
> reads does not trigger socket timeouts (even though we set one), so a read 
> can block forever, which is bad. This JDK bug identifies the issue: 
> http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4614802 and this article 
> presents a potential work-around: 
> http://stackoverflow.com/questions/2866557/timeout-for-socketchannel. The 
> work-around is simple: create a separate ReadableByteChannel instance for 
> timeout-enabled reads.

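For context, here is a minimal Java sketch of that work-around. It is 
illustrative only, not the actual KAFKA-305 patch; the class name, host, port, 
and timeout value are made up.

import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SocketChannel;

public class TimeoutReadSketch {
    public static void main(String[] args) throws Exception {
        // Blocking SocketChannel, the way SyncProducer connects to a broker.
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(true);
        channel.socket().setSoTimeout(30000);   // SO_TIMEOUT is set...
        channel.connect(new InetSocketAddress("localhost", 9092));

        ByteBuffer buffer = ByteBuffer.allocate(4);

        // ...but a read through the channel itself ignores it (JDK bug 4614802)
        // and can block forever if the broker never responds:
        // channel.read(buffer);

        // Work-around: wrap the socket's InputStream in a separate
        // ReadableByteChannel; reads through it honor the timeout and throw
        // java.net.SocketTimeoutException instead of blocking indefinitely.
        InputStream in = channel.socket().getInputStream();
        ReadableByteChannel timeoutAwareChannel = Channels.newChannel(in);
        timeoutAwareChannel.read(buffer);

        channel.close();
    }
}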
