[
https://issues.apache.org/jira/browse/KAFKA-19519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Christo Lolov updated KAFKA-19519:
----------------------------------
Fix Version/s: 4.3.0
> Introduce a new config for group coordinator max record size
> ------------------------------------------------------------
>
> Key: KAFKA-19519
> URL: https://issues.apache.org/jira/browse/KAFKA-19519
> Project: Kafka
> Issue Type: Improvement
> Reporter: Luke Chen
> Assignee: Lan Ding
> Priority: Major
> Labels: need-kip
> Fix For: 4.3.0
>
>
> In KAFKA-19427, there is a use case where a consumer group subscribes to a
> huge number of topics/partitions. When this group is rebalanced and the
> coordinator broker stores the group's assignment to __consumer_offsets, it
> throws a RecordTooLargeException.
>
> Currently, the only way to resolve this issue is to increase the
> broker-level `message.max.bytes` config. But the side effect of this change
> is that all topics that do not override the topic-level `max.message.bytes`
> config will now accept larger messages.
>
> We could introduce a new config to drive the value used by the group
> coordinator, e.g. `group.coordinator.append.max.bytes`, instead of relying
> on the broker-level `message.max.bytes`. This would be used to set the max
> bytes at the topic level when the topic is created.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)