[ https://issues.apache.org/jira/browse/HADOOP-7657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13116377#comment-13116377 ]

Steve Loughran commented on HADOOP-7657:
----------------------------------------

A couple of questions related to Hadoop use:
 # How well can you seek in it, so that when you work against a large file you can start work partway through it? .lzo works better than gzip here, for example.
 # How well can you recover from corrupted LZ4 blocks? That is, if a 128MB block has lost a 64KB segment due to an HDD problem, is the whole 128MB lost, or can the tooling extract everything except the compressed areas that the lost 64KB sector straddles?

Issue #1 is important, because if you can't get at the data easily, decompression speed matters less. Handling file corruption is less critical, but it is something I'm starting to worry about.
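To make both questions concrete, here is a minimal, purely illustrative sketch of the kind of block framing a container format could use: fixed 64KB chunks, each compressed independently and written with its own length and checksum. java.util.zip's Deflater/Inflater stand in for the LZ4 calls, and the frame layout, class and method names are invented for illustration; this is not the LZ4 on-disk format or anything Hadoop ships.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.zip.CRC32;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

/**
 * Sketch of a block-framed container. Each frame is
 * [compressedLength][originalLength][crc32][payload], so a reader can jump to
 * any frame boundary without touching earlier data (question 1), and a frame
 * that fails its checksum costs only that frame (question 2).
 */
public class BlockFramedSketch {
  static final int BLOCK_SIZE = 64 * 1024; // 64KB chunks, as in the question

  static byte[] compress(byte[] input) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (int off = 0; off < input.length; off += BLOCK_SIZE) {
      int len = Math.min(BLOCK_SIZE, input.length - off);
      Deflater deflater = new Deflater();   // stand-in for an LZ4 block compressor
      deflater.setInput(input, off, len);
      deflater.finish();
      byte[] buf = new byte[len + 64];      // headroom for incompressible data
      int clen = deflater.deflate(buf);
      deflater.end();
      CRC32 crc = new CRC32();
      crc.update(buf, 0, clen);
      ByteBuffer header = ByteBuffer.allocate(16);
      header.putInt(clen).putInt(len).putLong(crc.getValue());
      out.write(header.array());
      out.write(buf, 0, clen);
    }
    return out.toByteArray();
  }

  /** Decompress, skipping (rather than aborting on) frames that fail their CRC. */
  static byte[] decompressSkippingBadFrames(byte[] framed) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    ByteBuffer in = ByteBuffer.wrap(framed);
    while (in.remaining() >= 16) {
      int clen = in.getInt();
      int olen = in.getInt();
      long expectedCrc = in.getLong();
      if (clen < 0 || clen > in.remaining()) {
        // Truncated or garbage header; a real format would also carry a sync
        // marker so the reader could resynchronize past a damaged length field.
        break;
      }
      byte[] payload = new byte[clen];
      in.get(payload);
      CRC32 crc = new CRC32();
      crc.update(payload);
      if (crc.getValue() != expectedCrc || olen < 0 || olen > BLOCK_SIZE) {
        continue; // corrupted frame: only this 64KB chunk is lost
      }
      Inflater inflater = new Inflater();   // stand-in for an LZ4 block decompressor
      inflater.setInput(payload);
      byte[] restored = new byte[olen];
      try {
        int n = inflater.inflate(restored);
        out.write(restored, 0, n);
      } catch (DataFormatException e) {
        // damaged payload that slipped past the CRC: skip just this frame
      } finally {
        inflater.end();
      }
    }
    return out.toByteArray();
  }
}
{code}

With framing like this, seeking means jumping from header to header (or to an index of frame offsets) rather than decompressing everything before the split point, and a lost 64KB sector takes out only the frames it overlaps.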
                
> Add support for LZ4 compression
> -------------------------------
>
>                 Key: HADOOP-7657
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7657
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Mr Bsd
>              Labels: compression
>
> According to several benchmark sites, LZ4 seems to overtake other fast 
> compression algorithms, especially in the decompression speed area. The 
> interface is also trivial to integrate 
> (http://code.google.com/p/lz4/source/browse/trunk/lz4.h) and there is no 
> license issue.
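On the Hadoop side, "trivial to integrate" comes down to implementing org.apache.hadoop.io.compress.CompressionCodec, as I recall the interface (worth checking against trunk). A hypothetical Lz4Codec skeleton, with the actual compression left as stubs, would look roughly like this; the class name, the ".lz4" extension and the JNI shim it alludes to are all assumptions, not existing code:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionInputStream;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.Decompressor;

// Hypothetical skeleton: only the CompressionCodec contract being filled in
// here is (to my recollection) the real Hadoop interface. The real work would
// be the stream classes that feed block-sized chunks through the LZ4 calls
// from lz4.h, most likely via a small JNI shim.
public class Lz4Codec implements CompressionCodec {

  public CompressionOutputStream createOutputStream(OutputStream out)
      throws IOException {
    throw new UnsupportedOperationException("sketch only");
  }

  public CompressionOutputStream createOutputStream(OutputStream out,
      Compressor compressor) throws IOException {
    throw new UnsupportedOperationException("sketch only");
  }

  public Class<? extends Compressor> getCompressorType() {
    throw new UnsupportedOperationException("sketch only");
  }

  public Compressor createCompressor() {
    throw new UnsupportedOperationException("sketch only");
  }

  public CompressionInputStream createInputStream(InputStream in)
      throws IOException {
    throw new UnsupportedOperationException("sketch only");
  }

  public CompressionInputStream createInputStream(InputStream in,
      Decompressor decompressor) throws IOException {
    throw new UnsupportedOperationException("sketch only");
  }

  public Class<? extends Decompressor> getDecompressorType() {
    throw new UnsupportedOperationException("sketch only");
  }

  public Decompressor createDecompressor() {
    throw new UnsupportedOperationException("sketch only");
  }

  public String getDefaultExtension() {
    return ".lz4"; // hypothetical extension
  }
}
{code}

Wiring it up would then just be a matter of listing the class in the io.compression.codecs configuration property, if I remember the property name right.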
