[
https://issues.apache.org/jira/browse/KAFKA-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385882#comment-15385882
]
Ismael Juma commented on KAFKA-3979:
------------------------------------
Thanks for the JIRA and PR. Since this introduces a new config, it technically
needs a simple KIP
(https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals).
I suggest starting a mailing list discussion on the subject first to get input
from the community before creating the KIP.
> Optimize memory used by replication process by using adaptive fetch message
> size
> --------------------------------------------------------------------------------
>
> Key: KAFKA-3979
> URL: https://issues.apache.org/jira/browse/KAFKA-3979
> Project: Kafka
> Issue Type: Improvement
> Components: replication
> Affects Versions: 0.10.0.0
> Reporter: Andrey Neporada
>
> The current replication process fetches messages in replica.fetch.max.bytes-sized
> chunks.
> Since replica.fetch.max.bytes must be larger than max.message.bytes for
> replication to work, the replication process can consume a lot of memory,
> especially for installations with a large number of partitions.
> The proposed solution is to fetch messages in smaller chunks (say,
> replica.fetch.base.bytes).
> If we encounter a message bigger than the current fetch chunk, we increase the
> chunk size (e.g. twofold) and retry. After replicating this bigger message, we
> shrink the fetch chunk size back until it reaches replica.fetch.base.bytes.
> replica.fetch.base.bytes should be chosen large enough not to affect throughput
> and to be bigger than most messages.
> However, it can be much smaller than replica.fetch.max.bytes.
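The grow-on-oversized-message / shrink-on-success policy described in the issue could be sketched roughly as below. This is only an illustration of the proposed idea, not Kafka's actual replica fetcher code; the class and method names are hypothetical, and the base/max values stand in for the proposed replica.fetch.base.bytes and the existing replica.fetch.max.bytes configs.

```java
// Hypothetical sketch of the adaptive fetch-size policy proposed in this issue.
// Names are illustrative; this is not part of Kafka's codebase.
public class AdaptiveFetchSize {
    private final int baseBytes; // stands in for replica.fetch.base.bytes
    private final int maxBytes;  // stands in for replica.fetch.max.bytes
    private int currentBytes;

    public AdaptiveFetchSize(int baseBytes, int maxBytes) {
        this.baseBytes = baseBytes;
        this.maxBytes = maxBytes;
        this.currentBytes = baseBytes;
    }

    public int currentFetchSize() {
        return currentBytes;
    }

    // A message larger than the current chunk was encountered:
    // double the chunk size (capped at maxBytes) and let the caller retry.
    public void onMessageTooLarge() {
        currentBytes = Math.min(currentBytes * 2, maxBytes);
    }

    // After a successful fetch, shrink the chunk size back toward baseBytes,
    // so the larger buffer is only held while the big message is in flight.
    public void onFetchSuccess() {
        currentBytes = Math.max(currentBytes / 2, baseBytes);
    }
}
```

With base 1 MB and max 8 MB, a 3 MB message would trigger two doublings (1→2→4 MB), and subsequent successful fetches would step the size back down to 1 MB.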
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)