[ https://issues.apache.org/jira/browse/KAFKA-79?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13119804#comment-13119804 ]

Chris Burroughs commented on KAFKA-79:
--------------------------------------

- I think we should have a clear convention for codec ids. For example: core < 10000, 
contrib < 20000, HERE-BE-DRAGONS > 20000.
- I think there is room for gzip, and for something in the LZF/Snappy area, in 
the default Kafka install.
- I'm mildly uncomfortable with native code dependencies, but the Hadoop guys 
seem to have gotten something working.
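The codec-id convention proposed above could be sketched as a tiny classifier. This is an illustrative helper only, not part of Kafka; the class and method names are hypothetical, and the ranges simply restate the comment's proposal (core below 10000, contrib below 20000, everything above that unreserved).

```java
// Hypothetical helper restating the proposed id ranges; not Kafka code.
public class CodecIdConvention {
    public static String classify(int id) {
        if (id < 10000) return "core";            // reserved for built-in codecs
        if (id < 20000) return "contrib";         // reserved for contrib codecs
        return "here-be-dragons";                 // unreserved / experimental
    }
}
```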
                
> Introduce the compression feature in Kafka
> ------------------------------------------
>
>                 Key: KAFKA-79
>                 URL: https://issues.apache.org/jira/browse/KAFKA-79
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.6
>            Reporter: Neha Narkhede
>             Fix For: 0.7
>
>
> With this feature, we can enable end-to-end block compression in Kafka. The 
> idea is to enable compression on the producer for some or all topics, write 
> the data in compressed format on the server, and make the consumers 
> compression-aware. The data will be decompressed only on the consumer side. 
> Ideally, there should be a choice of compression codecs to be used by the 
> producer. That means a change to the message header as well as the network 
> byte format. On the consumer side, the state-maintenance behavior of the 
> ZooKeeper consumer changes: for compressed data, the consumed offset will be 
> advanced one compressed message at a time; for uncompressed data, it will be 
> advanced one message at a time.
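The end-to-end flow described in the issue (producer compresses, header carries the codec choice, consumer decompresses) could look roughly like the sketch below. This is a minimal illustration under assumptions, not Kafka's actual wire format: the one-byte codec tag, the codec id values, and the class name are all invented here for clarity, and only gzip (from `java.util.zip`) is shown.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Illustrative sketch only; codec ids and the header layout are assumptions.
public class CompressionSketch {
    static final byte NO_COMPRESSION = 0;
    static final byte GZIP = 1;

    // Producer side: tag the message with a codec byte, then write the
    // gzip-compressed payload after it.
    static byte[] compress(byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        bos.write(GZIP); // codec choice recorded in the (hypothetical) header
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(payload);
        }
        return bos.toByteArray();
    }

    // Consumer side: read the codec byte and decompress accordingly, so
    // decompression happens only at the consumer.
    static byte[] decompress(byte[] message) throws IOException {
        byte codec = message[0];
        InputStream in = new ByteArrayInputStream(message, 1, message.length - 1);
        if (codec == GZIP) {
            in = new GZIPInputStream(in);
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```

A batch of messages compressed this way travels and is stored as one compressed blob, which is why the consumed offset would advance one compressed message at a time.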

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
