mimaison commented on code in PR #15516:
URL: https://github.com/apache/kafka/pull/15516#discussion_r1600235959


##########
storage/src/main/java/org/apache/kafka/storage/internals/log/LogValidator.java:
##########
@@ -331,12 +332,12 @@ public ValidationResult assignOffsetsNonCompressed(LongRef offsetCounter,
     public ValidationResult validateMessagesAndAssignOffsetsCompressed(LongRef offsetCounter,
                                                                        MetricsRecorder metricsRecorder,
                                                                        BufferSupplier bufferSupplier) {
-        if (targetCompression == CompressionType.ZSTD && interBrokerProtocolVersion.isLessThan(IBP_2_1_IV0))
+        if (targetCompression.type() == CompressionType.ZSTD && interBrokerProtocolVersion.isLessThan(IBP_2_1_IV0))
             throw new UnsupportedCompressionTypeException("Produce requests to inter.broker.protocol.version < 2.1 broker " +
                 "are not allowed to use ZStandard compression");
 
         // No in place assignment situation 1
-        boolean inPlaceAssignment = sourceCompression == targetCompression;
+        boolean inPlaceAssignment = sourceCompressionType == targetCompression.type();

Review Comment:
   The broker has no easy way of retrieving the compression level the producer used when compressing the records. So if the compression codec matches, I decided to keep the compressed bytes instead of decompressing and re-compressing everything, as that would be wasteful, especially since the producer could have already used the same compression level.
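   The decision described above can be sketched as follows. This is an illustrative stand-alone example, not the actual Kafka code: the names `Compression`, `inPlaceAssignment`, and the simplified `CompressionType` enum are hypothetical; the real logic lives in `LogValidator` and Kafka's `Compression` class.

```java
// Sketch of the in-place assignment decision (hypothetical names).
// The broker cannot recover the compression level the producer used,
// so only the codec is compared: when source and target codecs match,
// the already-compressed bytes are kept rather than being decompressed
// and re-compressed.
public class InPlaceDecision {
    enum CompressionType { NONE, GZIP, SNAPPY, LZ4, ZSTD }

    // The target compression config carries a codec and a level; the
    // level of the *source* batch is unknown on the broker side.
    record Compression(CompressionType type, int level) { }

    static boolean inPlaceAssignment(CompressionType sourceType, Compression target) {
        // Codec equality alone triggers in-place assignment; the level
        // is ignored because it cannot be read back from the records.
        return sourceType == target.type();
    }

    public static void main(String[] args) {
        // Same codec: keep the producer's bytes as-is.
        System.out.println(inPlaceAssignment(CompressionType.ZSTD, new Compression(CompressionType.ZSTD, 3)));  // true
        // Different codec: the broker must re-compress.
        System.out.println(inPlaceAssignment(CompressionType.GZIP, new Compression(CompressionType.ZSTD, 3)));  // false
    }
}
```

   A consequence of this choice is that records compressed by the producer at a different level than the broker's configured level are left untouched whenever the codec matches.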



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
