[
https://issues.apache.org/jira/browse/HBASE-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13204031#comment-13204031
]
Todd Lipcon commented on HBASE-5313:
------------------------------------
I'm curious what the expected compression gain would be. Has anyone tried
"rearranging" an example of a production hfile block and recompressing to see
the difference?
My thinking is that typical LZ-based compression (e.g. snappy) uses a hash
table of roughly 16K entries or so to identify common substrings. So I don't
know that it would do a particularly better job with the common keys if they
were all grouped at the front of the block - so long as the keyval pairs are
less than a few hundred bytes apart, it should still find them OK.
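The suggested experiment could be sketched with stdlib zlib standing in for
snappy (assumption: zlib's DEFLATE also finds repeats via hash-chain matching
over a 32K window, and the synthetic keys/values below are illustrative, not a
real production hfile block):

```python
import random
import zlib

random.seed(42)

# Hypothetical block contents: 1000 kvs sharing a row-key prefix, with
# incompressible 100-byte values (assumption: sizes chosen for illustration).
keys = [b"user12345/colfam:qual%04d" % i for i in range(1000)]
values = [bytes(random.randrange(256) for _ in range(100)) for _ in range(1000)]

# Interleaved layout, as hfile blocks store kvs today.
interleaved = b"".join(k + v for k, v in zip(keys, values))
# Rearranged layout: all keys grouped at the front, all values at the end.
rearranged = b"".join(keys) + b"".join(values)

# Compare compression ratios (compressed size / raw size) for both layouts.
ratio_i = len(zlib.compress(interleaved)) / len(interleaved)
ratio_r = len(zlib.compress(rearranged)) / len(rearranged)
print("interleaved: %.3f  rearranged: %.3f" % (ratio_i, ratio_r))
```

With 100-byte values, successive key repeats sit well inside the match window
either way, which is the point above - the gap between layouts should only
open up once the values push repeated keys further apart than the matcher can
see.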
Of course the other gains (storing large values compressed in RAM for example)
seem good.
> Restructure hfiles layout for better compression
> ------------------------------------------------
>
> Key: HBASE-5313
> URL: https://issues.apache.org/jira/browse/HBASE-5313
> Project: HBase
> Issue Type: Improvement
> Components: io
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
>
> An HFile block contains a stream of key-values. Can we organize these kvs
> on disk in a better way so that we get much greater compression ratios?
> One option (thanks Prakash) is to store all the keys at the beginning of the
> block (let's call this the key-section) and then store all their
> corresponding values towards the end of the block. This would also let us
> avoid decompressing the values entirely when scanning and skipping over rows
> in the block.
> Any other ideas?
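The key-section idea could be sketched as two independently compressed
sections, so a scan touches only the keys (assumption: the framing, field
widths, and helper names below are illustrative, not an actual HFile format):

```python
import struct
import zlib

def build_block(pairs):
    """Pack kvs as [key-section length][compressed keys][compressed values]."""
    keys = b"".join(struct.pack(">I", len(k)) + k for k, _ in pairs)
    vals = b"".join(struct.pack(">I", len(v)) + v for _, v in pairs)
    ck = zlib.compress(keys)
    cv = zlib.compress(vals)
    return struct.pack(">I", len(ck)) + ck + cv

def scan_keys(block):
    """Decompress only the key-section; the value-section stays compressed."""
    klen = struct.unpack(">I", block[:4])[0]
    keys = zlib.decompress(block[4:4 + klen])
    out, off = [], 0
    while off < len(keys):
        n = struct.unpack(">I", keys[off:off + 4])[0]
        out.append(keys[off + 4:off + 4 + n])
        off += 4 + n
    return out

block = build_block([(b"row1/cf:a", b"v1"), (b"row2/cf:a", b"v2")])
print(scan_keys(block))
```

A row-skipping scan would walk `scan_keys` output and only pay the value
decompression cost for rows it actually returns.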