[
https://issues.apache.org/jira/browse/HBASE-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222512#comment-13222512
]
He Yongqiang commented on HBASE-5313:
-------------------------------------
As part of working on HBASE-5313, we first tried to write a dedicated
HFileWriter/HFileReader to do it. After finishing some of that work, it became
clear this would require a lot of code refactoring in order to reuse existing
code as much as possible.
We then found that adding a new columnar encoder/decoder would be easier.
Opened https://issues.apache.org/jira/browse/HBASE-5521 for the
encoder/decoder-specific compression work.
> Restructure hfiles layout for better compression
> ------------------------------------------------
>
> Key: HBASE-5313
> URL: https://issues.apache.org/jira/browse/HBASE-5313
> Project: HBase
> Issue Type: Improvement
> Components: io
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
>
> An HFile block contains a stream of key-values. Can we organize these kvs
> on disk in a better way so that we get much greater compression ratios?
> One option (thanks Prakash) is to store all the keys at the beginning of the
> block (let's call this the key-section) and then store all their
> corresponding values towards the end of the block. This would allow us to
> not even decompress the values when scanning and skipping over rows in
> the block.
> Any other ideas?
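The key-section idea above could be sketched roughly as follows. This is a minimal illustration only, not HBase code: the class and method names (ColumnarBlockSketch, encode, scanKeys) and the exact on-disk layout are hypothetical. The point is that a scan over keys never has to touch the value-section bytes, so in a real implementation the values could stay compressed.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed columnar block layout:
// [keyCount][keySectionLen][key1Len key1 ...][val1Len val1 ...]
// All keys up front (the "key-section"), all values at the end.
public class ColumnarBlockSketch {

    static byte[] encode(List<byte[]> keys, List<byte[]> values) throws IOException {
        ByteArrayOutputStream keySec = new ByteArrayOutputStream();
        ByteArrayOutputStream valSec = new ByteArrayOutputStream();
        for (byte[] k : keys)   { writeInt(keySec, k.length); keySec.write(k); }
        for (byte[] v : values) { writeInt(valSec, v.length); valSec.write(v); }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeInt(out, keys.size());
        writeInt(out, keySec.size());
        keySec.writeTo(out);
        valSec.writeTo(out);
        return out.toByteArray();
    }

    // Scan only the key-section; the value-section bytes are never read,
    // so they could remain compressed in a real implementation.
    static List<String> scanKeys(byte[] block) {
        ByteBuffer buf = ByteBuffer.wrap(block);
        int count = buf.getInt();
        buf.getInt(); // key-section length, unused in this sketch
        List<String> result = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            byte[] k = new byte[buf.getInt()];
            buf.get(k);
            result.add(new String(k, StandardCharsets.UTF_8));
        }
        return result;
    }

    // Big-endian int, matching ByteBuffer's default byte order.
    static void writeInt(ByteArrayOutputStream out, int v) {
        out.write(v >>> 24); out.write(v >>> 16); out.write(v >>> 8); out.write(v);
    }

    public static void main(String[] args) throws IOException {
        List<byte[]> keys = List.of("row1".getBytes(StandardCharsets.UTF_8),
                                    "row2".getBytes(StandardCharsets.UTF_8));
        List<byte[]> vals = List.of("v1".getBytes(StandardCharsets.UTF_8),
                                    "v2".getBytes(StandardCharsets.UTF_8));
        byte[] block = encode(keys, vals);
        System.out.println(scanKeys(block)); // [row1, row2]
    }
}
```

Grouping similar bytes (keys next to keys, values next to values) is also what tends to improve the compression ratio, since general-purpose compressors exploit local redundancy.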