[
https://issues.apache.org/jira/browse/KAFKA-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ismael Juma updated KAFKA-7149:
-------------------------------
Fix Version/s: (was: 2.1.0)
2.2.0
> Reduce assignment data size to improve kafka streams scalability
> ----------------------------------------------------------------
>
> Key: KAFKA-7149
> URL: https://issues.apache.org/jira/browse/KAFKA-7149
> Project: Kafka
> Issue Type: Improvement
> Components: streams
> Affects Versions: 2.0.0
> Reporter: Ashish Surana
> Assignee: Navinder Brar
> Priority: Major
> Fix For: 2.2.0
>
>
> We observed that with a high number of partitions, instances, or
> stream-threads, the assignment-data size grows very quickly, and we start
> getting RecordTooLargeException at the kafka-broker.
> A workaround for this issue is described at:
> https://issues.apache.org/jira/browse/KAFKA-6976 (a configuration sketch
> follows below).
> Even with that workaround, the scalability of Kafka Streams is limited:
> moving around ~100 MB of assignment data on every rebalance hurts both
> performance and reliability (timeout exceptions start appearing). And even
> with a high max.message.bytes setting, Kafka Streams hits a scale ceiling,
> because the data size grows quickly with the number of partitions, instances,
> or stream-threads.
>
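> As a rough illustration of the workaround (not taken verbatim from
> KAFKA-6976): the assignment data is persisted through the internal
> __consumer_offsets topic, so raising max.message.bytes on that topic lifts
> the limit. A hedged sketch using the Java AdminClient, where the broker
> address and the 10 MB value are placeholders:
> {code:java}
> import java.util.Collections;
> import java.util.Properties;
> import org.apache.kafka.clients.admin.AdminClient;
> import org.apache.kafka.clients.admin.AdminClientConfig;
> import org.apache.kafka.clients.admin.Config;
> import org.apache.kafka.clients.admin.ConfigEntry;
> import org.apache.kafka.common.config.ConfigResource;
>
> public final class RaiseGroupMetadataLimit {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
>         try (AdminClient admin = AdminClient.create(props)) {
>             // Raise the per-message size limit on the internal topic that
>             // stores group (and hence assignment) metadata.
>             ConfigResource topic = new ConfigResource(
>                 ConfigResource.Type.TOPIC, "__consumer_offsets");
>             Config config = new Config(Collections.singleton(
>                 new ConfigEntry("max.message.bytes", "10485760")));
>             admin.alterConfigs(Collections.singletonMap(topic, config))
>                  .all().get();
>         }
>     }
> }
> {code}
>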
> Solution:
> To address this issue in our cluster, we now send the assignment data in
> compressed form. We saw the assignment-data size drop by 8x-10x, which
> drastically improved Kafka Streams scalability for us: we can now run with
> more than 8,000 partitions.
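> A minimal sketch of the compression idea, assuming plain java.util.zip GZIP
> over the serialized assignment bytes (the class and method names here are
> hypothetical, not the actual patch):
> {code:java}
> import java.io.ByteArrayInputStream;
> import java.io.ByteArrayOutputStream;
> import java.io.IOException;
> import java.util.zip.GZIPInputStream;
> import java.util.zip.GZIPOutputStream;
>
> public final class AssignmentCompression {
>     // Compress serialized assignment data before sending it to the group
>     // coordinator. Assignment metadata (topic names, host:port strings) is
>     // highly repetitive, which is why 8x-10x reductions are plausible.
>     public static byte[] compress(byte[] assignmentData) throws IOException {
>         ByteArrayOutputStream out = new ByteArrayOutputStream();
>         try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
>             gzip.write(assignmentData);
>         }
>         return out.toByteArray();
>     }
>
>     // Inverse step on the receiving side, before deserialization.
>     public static byte[] decompress(byte[] compressed) throws IOException {
>         ByteArrayOutputStream out = new ByteArrayOutputStream();
>         try (GZIPInputStream gzip =
>                 new GZIPInputStream(new ByteArrayInputStream(compressed))) {
>             byte[] buffer = new byte[4096];
>             int n;
>             while ((n = gzip.read(buffer)) != -1) {
>                 out.write(buffer, 0, n);
>             }
>         }
>         return out.toByteArray();
>     }
> }
> {code}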
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)