[ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13127185#comment-13127185 ]

Matt Corgan commented on HBASE-4218:
------------------------------------

I'm trying to hook the prefix trie code into this, which is going well enough.

Testing on some HFileV1 data, I think I'm seeing some double-encoding in 
HFileReaderV1.java:328.  You encode the block to put in the block cache in 
blockDeltaEncoder.beforeBlockCache(..), but then go back to using the unencoded 
version, which triggers a second encoding a few lines later at 
blockDeltaEncoder.afterReadFromDiskAndPuttingInCache(..).  Current code:

{code}
      // Cache the block
      if (cacheBlock && blockCache != null) {
        HFileBlock cachedBlock = blockDeltaEncoder.beforeBlockCache(hfileBlock);
        blockCache.cacheBlock(cacheKey, cachedBlock, inMemory);
      }
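      // NOTE: hfileBlock is still the unencoded block here, so the call
      // below ends up encoding it a second time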
      hfileBlock = blockDeltaEncoder.afterReadFromDiskAndPuttingInCache(
          hfileBlock, isCompaction);
{code}

Possible change:

{code}
      // Cache the block
      if (cacheBlock && blockCache != null) {
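        // keep the encoded block so it isn't encoded a second time below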
        hfileBlock = blockDeltaEncoder.beforeBlockCache(hfileBlock);
        blockCache.cacheBlock(cacheKey, hfileBlock, inMemory);
      }
      hfileBlock = blockDeltaEncoder.afterReadFromDiskAndPuttingInCache(
          hfileBlock, isCompaction);
{code}

A few other comments:

* I wonder if we could make some of the naming more general than "Delta" 
encoding, since that's not the only type it can support.  I added a TRIE entry 
to DeltaEncoderAlgorithms.  Maybe we could call it KeyValueEncoding, 
DataBlockEncoding, HCellEncoding, BlockEncoding, etc... (rough enum sketch 
after the table below)

* I saw "comparator" spelled "comperator" in several places.

* It seems like PREFIX is always the winner.  Are the others better on certain 
datasets, or are they just there for comparison?

* I've been running the tests on block sizes from 1KB to 1MB and seeing 
seeks/s decline from ~300,000 to ~3,000 because of the sequential access 
inside a block.  Even using a 64KB block is ~6x slower than 1KB blocks (see 
the back-of-envelope after the table).

{code}
table,encoding,blockSize,numCells,avgKeyBytes,avgValueBytes,sequentialMB/s,seeks/s,~cycles/seek
Count5s,PREFIX,1KB  ,1338940,85,9,167,323685,  6178
Count5s,PREFIX,4KB  ,1338627,85,9,281,334873,  5972
Count5s,PREFIX,16KB ,1338420,85,9,381,168987, 11835
Count5s,PREFIX,64KB ,1338016,85,9,380, 52781, 37891
Count5s,PREFIX,256KB,1339210,85,9,392, 14203,140810
Count5s,PREFIX,1MB  ,1337318,85,9,371,  3703,539958
{code}
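
A rough back-of-envelope on why (my sketch, not part of the patch): assuming 
~94 bytes per cell (85-byte key + 9-byte value, ignoring encoding overhead) 
and a linear scan from the block start, a seek decodes about half a block on 
average:

{code}
// Modeled scan work per seek vs. block size. Fixed per-seek costs (index
// lookup, block cache access) are ignored, which is why the modeled ratio
// from 1KB to 1MB (~1000x) overshoots the measured one (~87x in seeks/s).
public class SeekCostEstimate {
  public static void main(String[] args) {
    int avgCellBytes = 85 + 9; // avgKeyBytes + avgValueBytes from the table
    int[] blockSizes = {1 << 10, 4 << 10, 16 << 10, 64 << 10, 256 << 10, 1 << 20};
    for (int blockSize : blockSizes) {
      int cellsPerBlock = blockSize / avgCellBytes;
      System.out.printf("%8d byte block: ~%6d cells, ~%6d decoded per seek%n",
          blockSize, cellsPerBlock, cellsPerBlock / 2);
    }
  }
}
{code}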
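
On the naming bullet above, a hypothetical sketch of the rename 
(DataBlockEncoding is just one of the candidate names; PREFIX and TRIE are the 
only entries known from this comment, NONE is an assumed default):

{code}
// Hypothetical rename sketch, not from the patch.
public enum DataBlockEncoding {
  NONE,   // assumed no-op default
  PREFIX, // from the benchmark table
  TRIE    // the entry I added for the prefix trie code
}
{code}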
                
> Delta Encoding of KeyValues  (aka prefix compression)
> -----------------------------------------------------
>
>                 Key: HBASE-4218
>                 URL: https://issues.apache.org/jira/browse/HBASE-4218
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.94.0
>            Reporter: Jacek Migdal
>              Labels: compression
>         Attachments: open-source.diff
>
>
> A compression for keys. Keys are sorted in HFile and they are usually very 
> similar. Because of that, it is possible to design better compression than 
> general-purpose algorithms.
> It is an additional step designed to be used in memory. It aims to save 
> memory in cache as well as to speed up seeks within HFileBlocks. It should 
> improve performance a lot if key lengths are larger than value lengths. For 
> example, it makes a lot of sense to use it when the value is a counter.
> Initial tests on real data (key length ~90 bytes, value length = 8 bytes) 
> show that I could achieve a decent level of compression:
>  key compression ratio: 92%
>  total compression ratio: 85%
>  LZO on the same data: 85%
>  LZO after delta encoding: 91%
> At the same time, it has much better performance (20-80% faster 
> decompression than LZO). Moreover, it should allow far more efficient 
> seeking, which should improve performance a bit.
> It seems that simple compression algorithms are good enough. Most of the 
> savings are due to prefix compression, int128 encoding, timestamp diffs, and 
> bitfields to avoid duplication. That way, comparisons of compressed data can 
> be much faster than a byte comparator (thanks to prefix compression and 
> bitfields).
> In order to implement it in HBase, two important design changes will be 
> needed:
> - solidify the interface to HFileBlock / HFileReader Scanner to provide 
> seeking and iterating; access to the uncompressed buffer in HFileBlock will 
> have bad performance
> - extend comparators to support comparison assuming that the first N bytes 
> are equal (or some fields are equal)
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression
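
On the description's comparator point, a minimal sketch of a compare that 
skips a known-equal prefix (name and signature are hypothetical, not from the 
patch):

{code}
// Compare two keys given that their first commonPrefix bytes are already
// known to be equal, so the byte-by-byte scan can start at commonPrefix.
public static int compareIgnoringPrefix(int commonPrefix,
    byte[] left, int loffset, int llength,
    byte[] right, int roffset, int rlength) {
  int minLength = Math.min(llength, rlength);
  for (int i = commonPrefix; i < minLength; i++) {
    int diff = (left[loffset + i] & 0xff) - (right[roffset + i] & 0xff);
    if (diff != 0) {
      return diff;
    }
  }
  // equal up to the shorter length: the shorter key sorts first
  return llength - rlength;
}
{code}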
