[ 
https://issues.apache.org/jira/browse/KAFKA-3252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15354083#comment-15354083
 ] 

Jun Rao commented on KAFKA-3252:
--------------------------------

[~omkreddy], sorry for creating the duplicate jira. I took a quick look at 
your patch in KAFKA-2213 and the patch in this jira. In both patches, one thing 
we have to be careful about is making sure the re-compressed message set 
doesn't exceed the max message size. In theory, this can happen when we switch 
the compression codec. It's not clear what we should do when this happens. 
Perhaps we can just fall back to the original compression codec?

[~ashishujjain], are you still actively working on this jira? If so, perhaps 
you can take a look at [~omkreddy]'s patch in KAFKA-2213 and the above comment.
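To illustrate the fallback idea above, here is a minimal sketch (hypothetical helper names; Kafka's actual compaction code is structured differently): re-compress the retained messages with the topic's codec, and if the result would exceed the max message size, keep the original codec instead.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch of the codec-fallback idea discussed above. During
// compaction, try re-compressing with the topic's configured codec; if the
// recompressed set would exceed max.message.bytes, fall back to the codec
// the messages were originally written with.
public class RecompressSketch {
    enum Codec { NONE, GZIP }

    // Compress a payload with the given codec (GZIP stands in for any codec).
    static byte[] compress(byte[] payload, Codec codec) throws IOException {
        if (codec == Codec.NONE) {
            return payload;
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(payload);
        }
        return out.toByteArray();
    }

    // Decide which codec to use when copying messages to the new log segment.
    static Codec chooseCodec(byte[] payload, Codec topicCodec,
                             Codec originalCodec, int maxMessageSize)
            throws IOException {
        byte[] recompressed = compress(payload, topicCodec);
        if (recompressed.length <= maxMessageSize) {
            return topicCodec; // topic codec fits: stay consistent with topic config
        }
        return originalCodec;  // too large after switching codecs: keep original
    }

    public static void main(String[] args) throws IOException {
        // Highly compressible payload: the topic codec (GZIP) fits easily.
        byte[] zeros = new byte[1024];
        System.out.println(chooseCodec(zeros, Codec.GZIP, Codec.NONE, 512));
    }
}
```

This only sketches the decision; a real patch would also have to handle the per-message-set granularity of compaction and the case where even the original codec no longer fits.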

> compression type for a topic should be used during log compaction 
> ------------------------------------------------------------------
>
>                 Key: KAFKA-3252
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3252
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.9.0.0
>            Reporter: Jun Rao
>            Assignee: Ashish K Singh
>             Fix For: 0.10.1.0
>
>
> Currently, the broker uses the specified compression type in a topic for 
> newly published messages. However, during log compaction, it still uses the 
> compression codec in the original message. To be consistent, it seems that we 
> should use the compression type in a topic when copying the messages to new 
> log segments.
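For reference, the topic-level compression type in question is the `compression.type` topic config, which can be set per topic (broker addresses and topic name below are placeholders):

```shell
# Set a topic-level compression codec; the bug is that log compaction
# ignores this and keeps each message's original codec instead.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config compression.type=gzip
```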



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)