[ 
https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13121752#comment-13121752
 ] 

Matt Corgan commented on HBASE-4218:
------------------------------------

Jacek - have you done anything with the KeyValue/scanner/searching interfaces?  
I'm curious to see your approach.  

Like you, I'm materializing the iterator's current cell, but the materialized 
row/family/qualifier/timestamp/type/value all reside in separate arrays/fields. 
 The scanner can only materialize one cell at a time, which I think can work 
long term but doesn't play well with some of the current scanner interfaces.
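Roughly, the shape I have in mind is something like this (a toy sketch with illustrative names, not actual HBase code):

```java
// Hypothetical sketch of a materialized cell whose components live in
// separate fields/arrays instead of one contiguous KeyValue buffer.
final class MaterializedCell {
    byte[] row, family, qualifier, value;
    long timestamp;
    byte type;

    MaterializedCell(byte[] row, byte[] family, byte[] qualifier,
                     long timestamp, byte type, byte[] value) {
        this.row = row; this.family = family; this.qualifier = qualifier;
        this.timestamp = timestamp; this.type = type; this.value = value;
    }
    // Only one cell is materialized at a time; a scanner would overwrite
    // these fields in place as it advances, rather than allocating a new
    // KeyValue per cell.
}
```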

The problem can be dodged by spawning a new array and copying everything into 
the KeyValue format, but we would see a massive speedup and could possibly 
eliminate all object instantiation (and the furious garbage collection that 
goes with it) if we could do comparisons on the intermediate arrays.  I've 
mocked up some cell interfaces and comparators, but I'm wondering what you've 
already got in progress.
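A comparator over the intermediate arrays might look roughly like this (an illustrative sketch only, comparing components directly without flattening into a KeyValue buffer):

```java
// Toy component-wise comparison on raw byte ranges, avoiding the
// allocation of a merged KeyValue array. Names are hypothetical.
final class CellCompare {
    // Unsigned lexicographic comparison of two byte ranges.
    static int compareBytes(byte[] a, int aOff, int aLen,
                            byte[] b, int bOff, int bLen) {
        int n = Math.min(aLen, bLen);
        for (int i = 0; i < n; i++) {
            int d = (a[aOff + i] & 0xff) - (b[bOff + i] & 0xff);
            if (d != 0) return d;
        }
        return aLen - bLen;
    }

    // Compare by row only; a full comparator would fall through to
    // family, qualifier, timestamp, and type in the same manner.
    static int compareRows(byte[] rowA, byte[] rowB) {
        return compareBytes(rowA, 0, rowA.length, rowB, 0, rowB.length);
    }
}
```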

Regarding scanners - supported operations on a block are next(), previous(), 
nextRow(), previousRow(), positionAt(KeyValue kv, boolean beforeIfMiss), and 
some others.  The main problem is that I can't peek(), which is used in the 
current version of the KeyValue heap, though I've mocked up an alternate 
approach without it.  I'm also starting to think that a traditional iterator's 
hasNext() method should not be supported, so that true streaming can be done 
and blocks don't need to know about their neighbors.
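For concreteness, here's a toy sketch of a subset of those operations over a sorted in-memory block (strings stand in for KeyValues; names are hypothetical, not the real interfaces). positionAt does a binary search and, on a miss, settles before or after the insertion point depending on the flag:

```java
import java.util.Arrays;

// Illustrative block scanner over a sorted array of keys.
final class BlockScanner {
    private final String[] keys; // stand-in for sorted KeyValues
    private int pos = -1;        // -1 = before the first entry

    BlockScanner(String[] sortedKeys) { this.keys = sortedKeys; }

    boolean next()     { if (pos + 1 >= keys.length) return false; pos++; return true; }
    boolean previous() { if (pos - 1 < 0) return false; pos--; return true; }
    String current()   { return keys[pos]; }

    // Position at kv; on a miss, land on the entry before or after the
    // insertion point, depending on beforeIfMiss.
    boolean positionAt(String kv, boolean beforeIfMiss) {
        int i = Arrays.binarySearch(keys, kv);
        if (i >= 0) { pos = i; return true; }
        int ins = -i - 1;                      // binarySearch insertion point
        pos = beforeIfMiss ? ins - 1 : ins;
        return pos >= 0 && pos < keys.length;
    }
}
```

Note there is no hasNext() here: next() simply reports whether it advanced, so the block never needs to look ahead into a neighbor.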
                
> Delta Encoding of KeyValues  (aka prefix compression)
> -----------------------------------------------------
>
>                 Key: HBASE-4218
>                 URL: https://issues.apache.org/jira/browse/HBASE-4218
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>            Reporter: Jacek Migdal
>              Labels: compression
>
> A compression for keys. Keys are sorted in HFiles and are usually very 
> similar. Because of that, it is possible to design a better compression than 
> general-purpose algorithms achieve.
> It is an additional step designed to be used in memory. It aims to save 
> memory in the cache as well as to speed up seeks within HFileBlocks. It 
> should improve performance a lot if key lengths are larger than value 
> lengths. For example, it makes a lot of sense to use it when the value is a 
> counter.
> Initial tests on real data (key length ~90 bytes, value length = 8 bytes) 
> show that I could achieve a decent level of compression:
>  key compression ratio: 92%
>  total compression ratio: 85%
>  LZO on the same data: 85%
>  LZO after delta encoding: 91%
> While having much better performance (20-80% faster decompression than 
> LZO). Moreover, it should allow far more efficient seeking, which should 
> improve performance a bit.
> It seems that simple compression algorithms are good enough. Most of the 
> savings are due to prefix compression, int128 encoding, timestamp diffs and 
> bitfields to avoid duplication. That way, comparisons on compressed data can 
> be much faster than a byte comparator (thanks to prefix compression and 
> bitfields).
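The prefix-compression part of the quoted description can be illustrated with a toy delta encoder (strings stand in for keys; this is a sketch of the idea only, not the actual HFile format):

```java
import java.util.ArrayList;
import java.util.List;

// Toy prefix (delta) encoding of sorted keys: each key is stored as
// (sharedPrefixLen, suffix) relative to the previous key.
final class PrefixDelta {
    static List<String[]> encode(List<String> sortedKeys) {
        List<String[]> out = new ArrayList<>();
        String prev = "";
        for (String k : sortedKeys) {
            int common = 0;
            int max = Math.min(prev.length(), k.length());
            while (common < max && prev.charAt(common) == k.charAt(common)) common++;
            out.add(new String[]{ Integer.toString(common), k.substring(common) });
            prev = k;
        }
        return out;
    }

    static List<String> decode(List<String[]> encoded) {
        List<String> out = new ArrayList<>();
        String prev = "";
        for (String[] e : encoded) {
            String k = prev.substring(0, Integer.parseInt(e[0])) + e[1];
            out.add(k);
            prev = k;
        }
        return out;
    }
}
```

Since sorted keys share long prefixes, most of each key collapses into the shared-prefix length, which is where the bulk of the savings quoted above comes from.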
> In order to implement it in HBase, two important changes in design will be 
> needed:
> - solidify the interface to HFileBlock / HFileReader Scanner to provide 
> seeking and iterating; accessing the uncompressed buffer in HFileBlock would 
> have bad performance
> - extend comparators to support comparison assuming that the first N bytes 
> are equal (or that some fields are equal)
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression
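The comparator extension mentioned in the quoted description could look roughly like this (an illustrative sketch): when bookkeeping from prefix compression already guarantees that the first N bytes of both keys match, the comparison can simply start at byte N.

```java
// Toy comparator exploiting a known shared prefix: bytes [0, knownEqual)
// of both keys are guaranteed equal, so comparison starts at knownEqual.
final class SkipPrefixCompare {
    static int compareFrom(byte[] a, byte[] b, int knownEqual) {
        int n = Math.min(a.length, b.length);
        for (int i = knownEqual; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```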

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
