[ https://issues.apache.org/jira/browse/KAFKA-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17167481#comment-17167481 ]
Ismael Juma edited comment on KAFKA-10324 at 7/29/20, 8:00 PM:
---------------------------------------------------------------

Segments are typically significantly larger than the fetch response size, so it's usually not an issue.


was (Author: ijuma):
Segments generally are larger than the fetch response size, so it's usually not an issue.

> Pre-0.11 consumers can get stuck when messages are downconverted from V2 format
> --------------------------------------------------------------------------------
>
>                 Key: KAFKA-10324
>                 URL: https://issues.apache.org/jira/browse/KAFKA-10324
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Tommy Becker
>            Priority: Major
>
> As noted in KAFKA-5443, the V2 message format preserves a batch's lastOffset even if that offset gets removed due to log compaction. If a pre-0.11 consumer seeks to such an offset and issues a fetch, it gets an empty batch, since offsets prior to the requested one are filtered out during down-conversion. KAFKA-5443 added consumer-side logic to advance the fetch offset in this case, but old consumers lack that fix and so remain unable to consume these topics.
> The exact behavior varies with consumer version. The 0.10.0.0 consumer throws RecordTooLargeException and dies, believing the record was not returned because it was too large. The 0.10.1.0 consumer simply spins, fetching the same empty batch over and over.
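For illustration only, here is a minimal sketch of the offset-advancing idea described in KAFKA-5443, written against the modern Java consumer API (which already applies this logic internally). The broker address, group id, and topic name are assumptions, and pre-0.11 consumers do not expose an equivalent hook, so this is a conceptual sketch of the behavior rather than a workaround for the affected clients.

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AdvancePastEmptyFetch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");        // assumed broker address
        props.put("group.id", "compacted-topic-reader");         // hypothetical group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        // Hypothetical compacted topic whose batch lastOffsets may have been compacted away.
        TopicPartition tp = new TopicPartition("compacted-topic", 0);

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));

            while (true) {
                long positionBefore = consumer.position(tp);
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));

                if (records.isEmpty()) {
                    // If the log end offset is ahead of our position but the fetch came back
                    // empty, the records at this position may have been removed by compaction.
                    // Advance the position by one so we do not spin on the same offset,
                    // which is the gist of the consumer-side fix from KAFKA-5443.
                    Map<TopicPartition, Long> end =
                            consumer.endOffsets(Collections.singletonList(tp));
                    if (end.getOrDefault(tp, positionBefore) > positionBefore) {
                        consumer.seek(tp, positionBefore + 1);
                    }
                    continue;
                }

                records.forEach(r -> System.out.printf("offset=%d%n", r.offset()));
            }
        }
    }
}
{code}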