[ https://issues.apache.org/jira/browse/KAFKA-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17828960#comment-17828960 ]

Haruki Okada commented on KAFKA-16372:
--------------------------------------

[~showuon] Agreed.
One concern is that, IMO, many developers expect this "exception thrown on buffer 
full after max.block.ms" behavior (it's stated in the Javadoc, and since we rarely 
hit a buffer-full situation, no one has noticed the discrepancy).

Even some well-known open-source projects have exception-handling code that doesn't 
actually work because of this (e.g. 
[logback-kafka-appender|https://github.com/danielwegener/logback-kafka-appender/blob/master/src/main/java/com/github/danielwegener/logback/kafka/delivery/AsynchronousDeliveryStrategy.java#L29]); a sketch of the problematic pattern follows.
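
To make the concern concrete, here is a minimal sketch of the pattern such projects rely on (the class and method names are mine, not the appender's actual code; only the standard Java producer client is assumed):

{code:java}
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.TimeoutException;

public class NaiveBufferFullHandling {

    // The pattern the Javadoc suggests: catch the TimeoutException that send()
    // supposedly throws when the buffer stays full for max.block.ms.
    static Future<RecordMetadata> sendOrDrop(KafkaProducer<String, String> producer,
                                             ProducerRecord<String, String> record) {
        try {
            return producer.send(record);
        } catch (TimeoutException e) {
            // Never reached on buffer exhaustion: doSend() catches the ApiException
            // (which TimeoutException extends) and returns a failed future instead
            // of rethrowing, so this handler is effectively dead code.
            return null; // "drop" the record
        }
    }
}
{code}

The catch block compiles because TimeoutException is unchecked, which is probably why the dead code went unnoticed.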

I wonder whether just fixing the Javadoc and the Kafka documentation is enough, or 
whether we should also post a heads-up about this somewhere (e.g. on the Kafka users 
mailing list).

I would like to hear a committer's opinion.

Anyway, in the meantime, let me start fixing the docs.

> max.block.ms behavior inconsistency with javadoc and the config description
> ---------------------------------------------------------------------------
>
>                 Key: KAFKA-16372
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16372
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer 
>            Reporter: Haruki Okada
>            Priority: Minor
>
> As of Kafka 3.7.0, the javadoc of 
> [KafkaProducer.send|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L956]
>  states that it throws TimeoutException when max.block.ms is exceeded on 
> buffer allocation or initial metadata fetch.
> This is also stated in the [buffer.memory config 
> description|https://kafka.apache.org/37/documentation.html#producerconfigs_buffer.memory].
> However, I found that this is not true, because TimeoutException extends 
> ApiException, and KafkaProducer.doSend catches ApiException and [wraps it in a 
> FutureFailure|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L1075-L1086]
>  instead of throwing it.
> I wonder whether this is a bug or a documentation error.
> It seems this discrepancy has existed since 0.9.0.0, when max.block.ms was introduced.
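
To make the behavior described above concrete, here is a minimal sketch showing where the TimeoutException actually surfaces today: via the Callback and via the returned Future, rather than as a throw from send(). The broker address, topic name, and class name are placeholders of mine.

{code:java}
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.serialization.StringSerializer;

public class WhereTheTimeoutSurfaces {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "1000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Callback path: if buffer allocation (or the initial metadata fetch)
            // times out after max.block.ms, the TimeoutException arrives as the
            // callback's exception argument, not as a throw from send().
            producer.send(new ProducerRecord<>("my-topic", "v"), (metadata, exception) -> {
                if (exception instanceof TimeoutException) {
                    System.err.println("timeout reported via callback: " + exception);
                }
            });

            // Future path: the same exception shows up as the cause of the
            // ExecutionException thrown by Future.get().
            Future<RecordMetadata> future = producer.send(new ProducerRecord<>("my-topic", "v"));
            try {
                future.get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof TimeoutException) {
                    System.err.println("timeout reported via the returned future: " + e.getCause());
                }
            }
        }
    }
}
{code}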



