[ https://issues.apache.org/jira/browse/KAFKA-10034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17141532#comment-17141532 ]
Michael Bingham commented on KAFKA-10034:
-----------------------------------------

Looks like {{max.request.size}} is used in two ways by the producer: as an up-front per-record size validation, as described in this ticket, but also as an upper bound on the overall {{ProduceRequest}} size (except in the rare case where a single batch exceeds this limit): [https://github.com/apache/kafka/blob/2.5/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L590]

> Clarify Usage of "batch.size" and "max.request.size" Producer Configs
> ---------------------------------------------------------------------
>
>                 Key: KAFKA-10034
>                 URL: https://issues.apache.org/jira/browse/KAFKA-10034
>             Project: Kafka
>          Issue Type: Improvement
>          Components: docs, producer
>            Reporter: Mark Cox
>            Assignee: Badai Aqrandista
>            Priority: Minor
>
> The documentation around the producer configurations "batch.size" and
> "max.request.size", and how they relate to one another, can be confusing.
> In reality, "max.request.size" is a hard limit on each individual record,
> but the documentation makes it seem this is the maximum size of a request
> sent to Kafka. If "batch.size" is set greater than "max.request.size"
> (and each individual record is smaller than "max.request.size"), you could
> end up with larger requests than expected being sent to Kafka.
> There are a few things that could be considered to make this clearer:
> # Improve the documentation to clarify the two producer configurations and
> how they relate to each other
> # Provide a producer check, and possibly a warning, if "batch.size" is found
> to be greater than "max.request.size"
> # The producer could take the _minimum_ of "batch.size" and "max.request.size"

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
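As a rough illustration of suggestion #2 above, a producer-side sanity check comparing the two configs might look like the following standalone sketch. The property names and their defaults ({{batch.size}} = 16384, {{max.request.size}} = 1048576) are the real Kafka producer configs, but the {{BatchSizeCheck}} class and its warning are hypothetical, not part of the actual producer code:

```java
import java.util.Properties;

public class BatchSizeCheck {
    // Hypothetical sanity check: flag configs where batch.size exceeds
    // max.request.size, since request building normally caps the request
    // at max.request.size, so such batches can never be sent whole.
    static boolean batchSizeExceedsMaxRequestSize(Properties props) {
        int batchSize = Integer.parseInt(props.getProperty("batch.size", "16384"));
        int maxRequestSize = Integer.parseInt(props.getProperty("max.request.size", "1048576"));
        return batchSize > maxRequestSize;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("batch.size", "2097152");       // 2 MB batches requested
        props.setProperty("max.request.size", "1048576"); // 1 MB request cap (the default)
        if (batchSizeExceedsMaxRequestSize(props)) {
            System.out.println("WARN: batch.size > max.request.size; batches will be "
                    + "limited by max.request.size when requests are built");
        }
    }
}
```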