[ https://issues.apache.org/jira/browse/KAFKA-6834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16460689#comment-16460689 ]
ASF GitHub Bot commented on KAFKA-6834:
---------------------------------------

rajinisivaram opened a new pull request #4953: KAFKA-6834: Handle compaction with batches bigger than max.message.bytes
URL: https://github.com/apache/kafka/pull/4953

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

> log cleaner should handle the case when the size of a message set is larger than the max message size
> ------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-6834
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6834
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Jun Rao
>            Assignee: Rajini Sivaram
>            Priority: Major
>
> In KAFKA-5316, we added logic to allow a message (set) larger than the per-topic max message size to be written to the log during log cleaning. However, the buffer size in the log cleaner is still bounded by the per-topic max message size. This can cause the log cleaner to die and the broker to run out of disk space.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
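For context, the fix the issue asks for amounts to letting the cleaner grow its I/O buffers past max.message.bytes when it runs into an oversized batch, instead of dying. Below is a minimal, hypothetical Scala sketch of that idea only; the object name BufferGrowthSketch, the method growBufferIfNeeded, and the maxBufferSize cap are illustrative assumptions and are not taken from PR #4953.

```scala
import java.nio.ByteBuffer

object BufferGrowthSketch {

  // Hypothetical helper: return a buffer large enough to hold `batchSizeBytes`,
  // reallocating a bigger one if the current buffer is too small.
  def growBufferIfNeeded(buffer: ByteBuffer, batchSizeBytes: Int, maxBufferSize: Int): ByteBuffer = {
    if (batchSizeBytes <= buffer.capacity) {
      buffer // already large enough, keep using it
    } else if (batchSizeBytes > maxBufferSize) {
      throw new IllegalStateException(
        s"Record batch of $batchSizeBytes bytes exceeds the maximum allowed buffer size of $maxBufferSize bytes")
    } else {
      // Double the capacity until the batch fits, capped at maxBufferSize,
      // so a single batch bigger than max.message.bytes no longer kills the cleaner thread.
      var newSize: Long = math.max(buffer.capacity, 1)
      while (newSize < batchSizeBytes)
        newSize = math.min(newSize * 2, maxBufferSize.toLong)
      ByteBuffer.allocate(newSize.toInt)
    }
  }
}
```

In a real cleaner the same concern would presumably apply to both the read and write buffers, and a grown buffer would typically be shrunk back after the oversized segment is cleaned so the cleaner does not hold extra memory indefinitely; the sketch only illustrates the read-side growth check.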