[
https://issues.apache.org/jira/browse/KAFKA-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Luke Chen resolved KAFKA-16372.
-------------------------------
Fix Version/s: 4.0.0
Resolution: Fixed
> max.block.ms behavior inconsistency with javadoc and the config description
> ---------------------------------------------------------------------------
>
> Key: KAFKA-16372
> URL: https://issues.apache.org/jira/browse/KAFKA-16372
> Project: Kafka
> Issue Type: Bug
> Components: producer
> Reporter: Haruki Okada
> Assignee: Haruki Okada
> Priority: Minor
> Fix For: 4.0.0
>
>
> As of Kafka 3.7.0, the javadoc of
> [KafkaProducer.send|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L956]
> states that it throws TimeoutException when max.block.ms is exceeded during
> buffer allocation or the initial metadata fetch.
> The same behavior is stated in the [buffer.memory config
> description|https://kafka.apache.org/37/documentation.html#producerconfigs_buffer.memory].
> However, this is not actually the case: TimeoutException extends
> ApiException, and KafkaProducer.doSend catches ApiException and [wraps it in a
> FutureFailure|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L1075-L1086]
> instead of rethrowing it.
> I wonder whether this is a bug or a documentation error.
> This discrepancy appears to have existed since 0.9.0.0, when max.block.ms was introduced.
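The exception-hierarchy interaction described above can be sketched as follows. This is a minimal, self-contained illustration, not the real Kafka classes: the ApiException/TimeoutException stand-ins and the send() method below are hypothetical stand-ins for org.apache.kafka.common.errors.{ApiException, TimeoutException} and KafkaProducer.doSend. It shows why the caller of send() never sees the TimeoutException directly, and only observes it via the returned future.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class SendSketch {
    // Stand-ins for the real Kafka exception hierarchy: TimeoutException
    // is a subclass of ApiException, which is the crux of the issue.
    static class ApiException extends RuntimeException {
        ApiException(String msg) { super(msg); }
    }
    static class TimeoutException extends ApiException {
        TimeoutException(String msg) { super(msg); }
    }

    // Mimics the doSend pattern: blocking work may throw TimeoutException,
    // but the catch-all for ApiException wraps it into the returned future
    // instead of letting it propagate to the caller.
    static Future<String> send(boolean exceedMaxBlockMs) {
        try {
            if (exceedMaxBlockMs) {
                throw new TimeoutException("Failed to allocate memory within max.block.ms");
            }
            return CompletableFuture.completedFuture("ack");
        } catch (ApiException e) {
            // TimeoutException is an ApiException, so it lands here and is
            // converted into a failed future rather than thrown.
            CompletableFuture<String> failed = new CompletableFuture<>();
            failed.completeExceptionally(e);
            return failed;
        }
    }

    public static void main(String[] args) throws Exception {
        Future<String> f = send(true);  // send() itself does not throw
        try {
            f.get();                    // the timeout only surfaces here
        } catch (ExecutionException e) {
            System.out.println("wrapped cause: " + e.getCause().getClass().getSimpleName());
        }
    }
}
```

Under this reading, callers who rely on the documented behavior and wrap send() in a try/catch for TimeoutException would never hit that catch block; the exception is only observable through Future.get().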
--
This message was sent by Atlassian Jira
(v8.20.10#820010)