[ https://issues.apache.org/jira/browse/KAFKA-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16282030#comment-16282030 ]

Ted Yu commented on KAFKA-6325:
-------------------------------

I assume you have modified your producer code to accommodate this behavior.

Looks like option #2 can be adopted.
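
For reference, a minimal sketch of that kind of producer-side change (the topic name, broker address, serializers and timeout below are placeholders, not taken from this issue): keep the futures returned by send() and call future.get() on each one after flush(), so a failed send surfaces as an ExecutionException instead of being dropped silently.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Future;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class FlushAndCheck {
        public static void main(String[] args) throws InterruptedException {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("max.block.ms", "5000");                   // fail fast if no broker is reachable

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                List<Future<RecordMetadata>> futures = new ArrayList<>();
                futures.add(producer.send(new ProducerRecord<>("my-topic", "key", "value")));

                // flush() waits for outstanding sends but does not throw for failed ones
                producer.flush();

                // surfacing failures requires checking each future explicitly
                for (Future<RecordMetadata> future : futures) {
                    try {
                        future.get();
                    } catch (ExecutionException e) {
                        // the record was not sent; handle or rethrow
                        throw new RuntimeException("send failed", e.getCause());
                    }
                }
            }
        }
    }

The same failures can also be surfaced with a Callback passed to send(), as the report notes.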

> Producer.flush() doesn't throw exception on timeout
> ---------------------------------------------------
>
>                 Key: KAFKA-6325
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6325
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer 
>            Reporter: Erik Scheuter
>         Attachments: FlushTest.java
>
>
> Reading the javadoc of the flush() method, we assumed an exception would be 
> thrown when an error occurs. This would make the code more understandable, as 
> we would not have to return a list of futures when we want to send multiple 
> records to Kafka and eventually call future.get() on each one.
> When send() is called, the metadata is retrieved and send() blocks on this 
> process. When this process fails (no brokers), a FutureFailure is returned. 
> When you just call flush(), no exceptions are thrown (in contrast to 
> future.get()). Of course, you can implement callbacks in the send() method.
> I think there are two solutions:
> * Change flush() (& doSend()) and throw exceptions
> * Change the javadoc and describe the scenario where you can lose events 
> because no exceptions are thrown and the events are not sent.
> I added a unit test to show the behaviour. Kafka doesn't have to be available 
> for this.


