[
https://issues.apache.org/jira/browse/KAFKA-598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551383#comment-13551383
]
Joel Koshy commented on KAFKA-598:
----------------------------------
The full scope should probably move out of 0.8 - i.e., as described above,
bounding the consumer's memory is basically a packing problem without
knowledge of the message size on the broker. One possibility is for the
broker to somehow communicate the size of the large message back to the
client, but that would break our zero-copy property with respect to fetches.

So I would suggest we don't do the full patch (i.e., bounding consumer
memory and handling large messages). Instead, we can go with the simpler
implementation that requires a new config - which is not ideal, but better
IMO than trying to half-implement the packing problem described above.

I haven't had time to look at this lately, but if people are okay with the
above, then I can revisit one of the earlier revisions of the patches.
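For concreteness, a minimal sketch of what the "new config" approach might look like on the consumer side. The config names here (fetch.size, max.fetch.size) are illustrative assumptions, not the names used in the attached patches:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Normal per-partition fetch size: kept small so memory stays bounded
        // even with many subscribed partitions.
        props.put("fetch.size", String.valueOf(1024 * 1024));           // 1 MB
        // Hypothetical new config: a hard ceiling the consumer may grow to
        // when a single message does not fit in fetch.size.
        props.put("max.fetch.size", String.valueOf(10 * 1024 * 1024));  // 10 MB
        System.out.println(props);
    }
}
```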
> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 0.8
> Reporter: Jun Rao
> Assignee: Joel Koshy
> Priority: Blocker
> Attachments: KAFKA-598-v1.patch, KAFKA-598-v2.patch,
> KAFKA-598-v3.patch
>
>
> Currently, a consumer has to set its fetch size larger than the max message
> size. This increases the memory footprint on the consumer, especially when a
> large number of topic/partitions are subscribed. By decoupling the fetch size
> from the max message size, we can use a smaller fetch size for normal
> consumption and, when hitting a large message (hopefully rare), automatically
> increase the fetch size to the max message size temporarily.
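A minimal sketch of the adaptive scheme the description outlines, assuming hypothetical helper methods (fetch, isOversized, process) standing in for the real consumer internals; this is not the code from the attached patches:

```java
public class AdaptiveFetcher {
    private final int defaultFetchSize; // normal per-partition fetch, e.g. 1 MB
    private final int maxMessageSize;   // broker-side max message size, e.g. 10 MB

    public AdaptiveFetcher(int defaultFetchSize, int maxMessageSize) {
        this.defaultFetchSize = defaultFetchSize;
        this.maxMessageSize = maxMessageSize;
    }

    public void run(long startOffset) {
        int fetchSize = defaultFetchSize;
        long offset = startOffset;
        while (true) {
            byte[] chunk = fetch(offset, fetchSize);
            if (isOversized(chunk)) {
                if (fetchSize >= maxMessageSize) {
                    // Even the max message size was not enough: give up.
                    throw new IllegalStateException(
                        "message at offset " + offset + " exceeds " + maxMessageSize);
                }
                fetchSize = maxMessageSize;      // escalate for the large message
            } else {
                offset = process(chunk);         // consume, advance to next offset
                fetchSize = defaultFetchSize;    // drop back to the normal size
            }
        }
    }

    // Hypothetical stand-ins for the real fetch/decode path.
    private byte[] fetch(long offset, int size) { return new byte[0]; }
    private boolean isOversized(byte[] chunk) { return false; }
    private long process(byte[] chunk) { return 0L; }
}
```

The key property is that the larger fetch size is used for exactly one partition and one fetch, so the steady-state memory footprint stays proportional to the small default rather than to the max message size times the partition count.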