chia7712 commented on code in PR #15516:
URL: https://github.com/apache/kafka/pull/15516#discussion_r2314325244


##########
clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java:
##########
@@ -293,14 +294,15 @@ private static MemoryRecordsBuilder buildRetainedRecordsInto(RecordBatch originalBatch,
                                                                  ByteBufferOutputStream bufferOutputStream,
                                                                  final long deleteHorizonMs) {
         byte magic = originalBatch.magic();
+        Compression compression = Compression.of(originalBatch.compressionType()).build();

Review Comment:
   > Right, here we could use the level if specified. I expect most topics to 
use compression.type=producer but in case a specific compression type and level 
is set, that would make sense to use them.
   
   @Yunyung Could you please file a minor patch for it?
   
   > Do you think there are scenarios where the gains of picking a different 
level for older data would be significant enough to motivate such a feature?
   
   The key point is the compression type rather than the level. I received a request to compress old data during compaction, and the change should be straightforward, so it seems acceptable.
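   
   For reference, here is a minimal sketch of what honoring a configured level could look like, assuming the `Compression` builders introduced by KIP-390. The helper name `compressionFor` and the `OptionalInt` parameter are illustrative only, not existing code in `MemoryRecords`:
   
   ```java
   import java.util.OptionalInt;
   
   import org.apache.kafka.common.compress.Compression;
   import org.apache.kafka.common.record.CompressionType;
   
   static Compression compressionFor(CompressionType type, OptionalInt level) {
       switch (type) {
           case GZIP: {
               var builder = Compression.gzip();
               level.ifPresent(builder::level); // apply the configured level if set
               return builder.build();
           }
           case LZ4: {
               var builder = Compression.lz4();
               level.ifPresent(builder::level);
               return builder.build();
           }
           case ZSTD: {
               var builder = Compression.zstd();
               level.ifPresent(builder::level);
               return builder.build();
           }
           default:
               // NONE and SNAPPY have no level knob, so the type alone suffices
               return Compression.of(type).build();
       }
   }
   ```
   
   The call site would then become `compressionFor(originalBatch.compressionType(), configuredLevel)`, where `configuredLevel` (a hypothetical parameter) carries the topic's configured compression level; when no level is set, the fallback preserves today's behavior.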


