[ 
https://issues.apache.org/jira/browse/KAFKA-1253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13907746#comment-13907746
 ] 

Guozhang Wang commented on KAFKA-1253:
--------------------------------------

Thanks for the comments, I think I should elaborate a little bit more on the 
proposed approach:

1. On the producer end, with compression enabled, each MemoryRecords instance 
will be written to the channel as a single compressed message.
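To make point 1 concrete, here is a minimal sketch (not Kafka's actual MemoryRecords API; the class name, length-prefixed layout, and use of GZIP are all assumptions for illustration) of serializing a batch of record payloads and compressing the whole batch into one wrapper message:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.List;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch: serialize each record length-prefixed, then GZIP the
// entire batch so it goes to the channel as a single compressed message.
public class CompressedBatchSketch {
    public static byte[] compress(List<byte[]> records) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(new GZIPOutputStream(buf))) {
            for (byte[] r : records) {
                out.writeInt(r.length);   // length prefix for the record
                out.write(r);             // record payload
            }
        }
        return buf.toByteArray();         // one wrapper message on the wire
    }
}
```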

2. The iterator of the MemoryRecords, when it encounters a compressed message, 
will create a second-level CompressedMemoryRecords.iterator to iterate over the 
inner messages if shallow == false. This also partially enforces that nested 
compression is not used.
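The shallow/deep distinction in point 2 can be sketched as follows (again hypothetical, assuming the length-prefixed GZIP wrapper layout from the sketch above rather than Kafka's real wire format): shallow iteration yields the compressed wrapper as-is, while deep iteration opens a second-level iterator over the decompressed inner records.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPInputStream;

// Hypothetical sketch of two-level iteration over a compressed wrapper message.
public class DeepIteratorSketch {
    public static List<byte[]> iterate(byte[] wrapper, boolean shallow) throws IOException {
        List<byte[]> out = new ArrayList<>();
        if (shallow) {
            out.add(wrapper);  // first level only: the wrapper message itself
            return out;
        }
        // deep: second-level iteration over the decompressed payload;
        // inner messages are assumed uncompressed (no nested compression)
        try (DataInputStream in = new DataInputStream(
                new GZIPInputStream(new ByteArrayInputStream(wrapper)))) {
            while (true) {
                int len;
                try { len = in.readInt(); } catch (EOFException eof) { break; }
                byte[] record = new byte[len];
                in.readFully(record);
                out.add(record);
            }
        }
        return out;
    }
}
```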

3. As I said in the previous comment, we could do in-place decompression, just 
like in-place compression, to avoid the double copy. It is just that today we 
do the double copy, and I was following that mechanism.
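One way the double copy in point 3 could be avoided (a sketch under assumptions; the class names are hypothetical and this is not Kafka's actual code path): instead of first copying the compressed payload out of the ByteBuffer into a byte[] and then decompressing, wrap the buffer in an InputStream and let the decompressor read from it directly.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.zip.GZIPInputStream;

// Hypothetical sketch: decompress straight out of a ByteBuffer, skipping the
// intermediate byte[] copy of the compressed payload.
public class ByteBufferDecompressSketch {
    // minimal InputStream view over a ByteBuffer (no intermediate copy)
    static class ByteBufferInputStream extends InputStream {
        private final ByteBuffer buf;
        ByteBufferInputStream(ByteBuffer buf) { this.buf = buf; }
        @Override public int read() {
            return buf.hasRemaining() ? (buf.get() & 0xff) : -1;
        }
        @Override public int read(byte[] dst, int off, int len) {
            if (!buf.hasRemaining()) return -1;
            int n = Math.min(len, buf.remaining());
            buf.get(dst, off, n);
            return n;
        }
    }

    public static byte[] decompress(ByteBuffer compressed) throws IOException {
        try (GZIPInputStream in = new GZIPInputStream(new ByteBufferInputStream(compressed))) {
            return in.readAllBytes();
        }
    }
}
```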

About the double allocation, I will try to refactor the code a bit to avoid it.

> Implement compression in new producer
> -------------------------------------
>
>                 Key: KAFKA-1253
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1253
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: producer 
>            Reporter: Jay Kreps
>         Attachments: KAFKA-1253.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
