[ https://issues.apache.org/jira/browse/HADOOP-6389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12893076#action_12893076 ]
Tatu Saloranta commented on HADOOP-6389:
----------------------------------------
Although I have not worked on the Hadoop integration itself, I have published a
simple reusable LZF block codec, available from GitHub
(http://github.com/ning/compress) and the main Maven repo (group com.ning,
artifact compress-lzf). So at least the simple part (the codec itself) is ready
for anyone familiar enough with Hadoop to handle the full integration, ideally
supporting access at the block level: reads can start from any block boundary,
since blocks are byte-aligned and each carries both its compressed and
uncompressed length, which allows reasonably efficient skipping of blocks.
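To make the block-skipping idea concrete, here is a minimal sketch of a reader
that walks chunk boundaries using only the length headers, so it never has to
decompress payloads it does not need. It assumes the chunk layout used by
ning/compress (a 'Z','V' signature, a chunk-type byte, big-endian 16-bit length
fields); the exact offsets should be verified against the published source.

{code:java}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

// Sketch only: assumes each LZF chunk starts with 'Z','V', a type byte
// (0 = raw, 1 = compressed), a 2-byte compressed length, and, for
// compressed chunks, a 2-byte uncompressed length before the payload.
public class LzfChunkSkipper {

    // Skips one chunk without decompressing it and returns its
    // uncompressed size, or -1 when the stream is exhausted.
    static int skipChunk(DataInputStream in) throws IOException {
        int first = in.read();
        if (first < 0) {
            return -1;                         // clean end of stream
        }
        if (first != 'Z' || in.read() != 'V') {
            throw new IOException("not positioned at an LZF chunk boundary");
        }
        int type = in.readUnsignedByte();      // 0 = raw, 1 = compressed
        int compLen = in.readUnsignedShort();  // payload length in bytes
        int uncompLen = (type == 1)
                ? in.readUnsignedShort()       // stored uncompressed size
                : compLen;                     // raw chunk: lengths coincide
        for (int skipped = 0; skipped < compLen; ) {
            int n = in.skipBytes(compLen - skipped);
            if (n <= 0) {
                throw new EOFException("truncated LZF chunk");
            }
            skipped += n;
        }
        return uncompLen;
    }
}
{code}

Because each header carries both lengths, a reader can maintain uncompressed
offsets while touching only a few header bytes per block.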
> Add support for LZF compression
> -------------------------------
>
> Key: HADOOP-6389
> URL: https://issues.apache.org/jira/browse/HADOOP-6389
> Project: Hadoop Common
> Issue Type: New Feature
> Components: io
> Reporter: Tatu Saloranta
>
> (note: related to [HADOOP-4874])
> As per Doug's earlier comments, LZF does indeed look like a good compressor
> candidate: fast compression/decompression with a good enough compression rate.
> From my testing it is at least twice as fast as gzip at compression, and
> somewhat faster at decompression.
> Code from
> [http://h2database.googlecode.com/svn/trunk/h2/src/main/org/h2/compress/] is
> applicable, and I have tested it with JSON data.
> I hope to have more time to spend on this in the near future, but if someone
> else gets to it first, that would be good too.
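As a starting point, a round trip with the artifact mentioned in the comment
above might look like the sketch below. It assumes the static one-shot helpers
LZFEncoder.encode and LZFDecoder.decode exposed by com.ning:compress-lzf; check
the exact signatures against the version you depend on.

{code:java}
import java.io.IOException;
import java.util.Arrays;

import com.ning.compress.lzf.LZFDecoder;
import com.ning.compress.lzf.LZFEncoder;

// Round-trip sketch against com.ning:compress-lzf; the sample payload is
// illustrative, and method signatures may differ slightly between versions.
public class LzfRoundTrip {
    public static void main(String[] args) throws IOException {
        byte[] input = "{\"key\":\"some sample json payload\"}".getBytes("UTF-8");

        byte[] compressed = LZFEncoder.encode(input);   // one or more LZF chunks
        byte[] restored = LZFDecoder.decode(compressed);

        System.out.println("in=" + input.length
                + " out=" + compressed.length
                + " roundTripOk=" + Arrays.equals(input, restored));
    }
}
{code}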