divijvaidya commented on code in PR #12228:
URL: https://github.com/apache/kafka/pull/12228#discussion_r1159589983
##########
clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java:
##########

```diff
@@ -196,25 +196,24 @@ private static FilterResult filterTo(TopicPartition partition, Iterable<MutableR
                 batch.writeTo(bufferOutputStream);
                 filterResult.updateRetainedBatchMetadata(batch, retainedRecords.size(), false);
             } else {
-                final MemoryRecordsBuilder builder;
                 long deleteHorizonMs;
                 if (needToSetDeleteHorizon)
                     deleteHorizonMs = filter.currentTime + filter.deleteRetentionMs;
                 else
                     deleteHorizonMs = batch.deleteHorizonMs().orElse(RecordBatch.NO_TIMESTAMP);
-                builder = buildRetainedRecordsInto(batch, retainedRecords, bufferOutputStream, deleteHorizonMs);
-
-                MemoryRecords records = builder.build();
-                int filteredBatchSize = records.sizeInBytes();
-                if (filteredBatchSize > batch.sizeInBytes() && filteredBatchSize > maxRecordBatchSize)
-                    log.warn("Record batch from {} with last offset {} exceeded max record batch size {} after cleaning " +
-                        "(new size is {}). Consumers with version earlier than 0.10.1.0 may need to " +
-                        "increase their fetch sizes.",
+                try (final MemoryRecordsBuilder builder = buildRetainedRecordsInto(batch, retainedRecords, bufferOutputStream, deleteHorizonMs)) {
```

Review Comment:
   Note to reviewer(s): `builder.build()` (on the next line) calls `MemoryRecordsBuilder#close()`, so there is no resource leak here. This change is purely for robustness, in case `buildRetainedRecordsInto` starts throwing exceptions in the future.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
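The robustness point can be illustrated in isolation. The sketch below (hypothetical classes, not Kafka code: `TrackingResource` stands in for `MemoryRecordsBuilder`) shows why try-with-resources is safer than relying on a later `build()`-style call to close the resource: if an exception is thrown between acquisition and the explicit close, the manual pattern leaks, while try-with-resources still runs `close()`.

```java
// Hypothetical stand-in for an AutoCloseable builder such as MemoryRecordsBuilder.
class TrackingResource implements AutoCloseable {
    boolean closed = false;

    @Override
    public void close() {
        closed = true;
    }
}

public class TryWithResourcesDemo {

    // Manual pattern: close() is only reached if no exception interrupts the flow.
    static TrackingResource manual(boolean fail) {
        TrackingResource r = new TrackingResource();
        try {
            if (fail)
                throw new RuntimeException("boom"); // r.close() is never called
            r.close();
        } catch (RuntimeException e) {
            // exception swallowed for the demo; r stays open -> leaked
        }
        return r;
    }

    // try-with-resources: close() runs even when the body throws.
    static TrackingResource managed(boolean fail) {
        TrackingResource r = new TrackingResource();
        try (TrackingResource res = r) {
            if (fail)
                throw new RuntimeException("boom");
        } catch (RuntimeException e) {
            // by the time we get here, res.close() has already run
        }
        return r;
    }

    public static void main(String[] args) {
        System.out.println("manual leaked:  " + !manual(true).closed);
        System.out.println("managed closed: " + managed(true).closed);
    }
}
```

In the happy path both patterns behave identically, which matches the reviewer's note that the current code is not leaking; the difference only shows up once `buildRetainedRecordsInto` (or anything before the explicit close) can throw.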