[ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191807#comment-13191807 ]

Phabricator commented on HBASE-4218:
------------------------------------

mbautin has commented on the revision "[jira] [HBASE-4218] HFile data block 
encoding framework and delta encoding implementation".

  See responses to some of the comments inline. I will upload a new version of 
the diff a bit later.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java:98 Done.
  src/main/java/org/apache/hadoop/hbase/KeyValue.java:1901 That's correct, good 
catch! Yes, this is a pre-existing bug. Fixed this and added a new test, 
KeyValue.testCreateKeyValueFromKey, to verify this.
  
src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java:87
 Renamed the method to copyFromNext and the parameter to nextState. Added usage 
details to the javadoc.
  
src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java:129-141
 The reason for this is that the key is reconstructed by pieces, and the value 
is stored as-is in the original encoded buffer, so getValue() just provides a 
reference to a sub-array of the original byte array. I renamed these two seeker 
interface methods to getKeyDeepCopy() and getValueShallowCopy() for clarity.
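To illustrate the distinction behind the renamed methods, here is a minimal sketch of deep-copy versus shallow-copy access (hypothetical class and fields, not the actual seeker interface): the key is materialized into a fresh array the caller owns, while the value is only a view over the original encoded block.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Hypothetical sketch of the deep-vs-shallow distinction behind
// getKeyDeepCopy() / getValueShallowCopy(); not the actual
// BufferedDataBlockEncoder code.
public class CopySemantics {
    private final byte[] block;      // the original encoded block
    private final int valueOffset;   // where the current value starts
    private final int valueLength;

    public CopySemantics(byte[] block, int valueOffset, int valueLength) {
        this.block = block;
        this.valueOffset = valueOffset;
        this.valueLength = valueLength;
    }

    // Deep copy: the key was reassembled piece by piece, so we hand the
    // caller a fresh array that later seeks cannot change.
    public byte[] getKeyDeepCopy(byte[] reconstructedKey) {
        return Arrays.copyOf(reconstructedKey, reconstructedKey.length);
    }

    // Shallow copy: just a view over the original buffer; cheap, but only
    // valid while the underlying block is unchanged.
    public ByteBuffer getValueShallowCopy() {
        return ByteBuffer.wrap(block, valueOffset, valueLength).slice();
    }

    public static void main(String[] args) {
        byte[] block = {1, 2, 3, 4, 5};
        CopySemantics s = new CopySemantics(block, 2, 3);
        ByteBuffer shallow = s.getValueShallowCopy();
        block[2] = 9;                       // mutate the underlying block
        System.out.println(shallow.get(0)); // shallow view sees the mutation
        byte[] deep = s.getKeyDeepCopy(new byte[]{7, 8});
        block[0] = 0;
        System.out.println(deep[0]);        // deep copy is isolated
    }
}
```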
  
src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java:146
 Replaced here and at line 1343 in createKeyOnly:

    public KeyValue createKeyOnly(boolean lenAsVal) {
      // KV format:  <keylen:4><valuelen:4><key:keylen><value:valuelen>
      // Rebuild as: <keylen:4><0:4><key:keylen>
      int dataLen = lenAsVal ? Bytes.SIZEOF_INT : 0;
      byte[] newBuffer = new byte[getKeyLength() + ROW_OFFSET + dataLen];
      System.arraycopy(this.bytes, this.offset, newBuffer, 0,
          Math.min(newBuffer.length, this.length));
      // Overwrite the value-length field (0, or SIZEOF_INT if lenAsVal).
      Bytes.putInt(newBuffer, Bytes.SIZEOF_INT, dataLen);
      if (lenAsVal) {
        Bytes.putInt(newBuffer, newBuffer.length - dataLen, this.getValueLength());
      }
      return new KeyValue(newBuffer);
    }

  
src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java:206
 Good point. If previous is invalid here, then the contract of this function is 
actually violated, as it cannot go to the previous block. The caller should 
check if the requested key is the first key of the block and load the previous 
block if necessary. I added an exception in case previous.isValid() is not true.
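The contract described here might be sketched as follows (hypothetical names and state layout, not the real BufferedDataBlockEncoder seeker): a seeker can only step back within its own block, so an invalid previous state means the caller asked to cross a block boundary and must load the previous block itself.

```java
// Hypothetical sketch of the seek contract discussed above; not the actual
// HBase seeker implementation.
public class SeekerSketch {
    static class SeekerState {
        boolean valid;
        boolean isValid() { return valid; }
    }

    private SeekerState previous = new SeekerState();
    private SeekerState current = new SeekerState();

    // Move back to the previously decoded key within this block.
    void moveToPrevious() {
        if (!previous.isValid()) {
            // The requested key is the first key of the block; going further
            // back would cross a block boundary, which this seeker cannot do.
            // The caller must detect this case and load the previous block.
            throw new IllegalStateException(
                "Cannot move back: no previous state within this block");
        }
        // Swap states so "current" becomes the older position.
        SeekerState tmp = previous;
        previous = current;
        current = tmp;
        previous.valid = false;
    }

    public static void main(String[] args) {
        SeekerSketch s = new SeekerSketch();
        try {
            s.moveToPrevious();
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```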
  src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java:65 
Actually, all member fields except prevOffset are overridden. prevOffset is 
manipulated directly by encoders/decoders. Added this to the method's javadoc.
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:30 
Done.
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:32 
Done.
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:79 
All current implementations just wrap a portion of the actual block's buffer, 
which makes sense, because we don't encode the first key. Added this to the 
method's javadoc.

REVISION DETAIL
  https://reviews.facebook.net/D447

                
> Data Block Encoding of KeyValues  (aka delta encoding / prefix compression)
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-4218
>                 URL: https://issues.apache.org/jira/browse/HBASE-4218
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.94.0
>            Reporter: Jacek Migdal
>            Assignee: Mikhail Bautin
>              Labels: compression
>             Fix For: 0.94.0
>
>         Attachments: 0001-Delta-encoding-fixed-encoded-scanners.patch, 
> 0001-Delta-encoding.patch, 4218-2012-01-14.txt, 4218-v16.txt, 4218.txt, 
> D447.1.patch, D447.10.patch, D447.11.patch, D447.12.patch, D447.13.patch, 
> D447.14.patch, D447.15.patch, D447.16.patch, D447.17.patch, D447.18.patch, 
> D447.19.patch, D447.2.patch, D447.20.patch, D447.21.patch, D447.22.patch, 
> D447.23.patch, D447.24.patch, D447.3.patch, D447.4.patch, D447.5.patch, 
> D447.6.patch, D447.7.patch, D447.8.patch, D447.9.patch, 
> Data-block-encoding-2011-12-23.patch, 
> Delta-encoding-2012-01-17_11_09_09.patch, 
> Delta-encoding.patch-2011-12-22_11_52_07.patch, 
> Delta-encoding.patch-2012-01-05_15_16_43.patch, 
> Delta-encoding.patch-2012-01-05_16_31_44.patch, 
> Delta-encoding.patch-2012-01-05_16_31_44_copy.patch, 
> Delta-encoding.patch-2012-01-05_18_50_47.patch, 
> Delta-encoding.patch-2012-01-07_14_12_48.patch, 
> Delta-encoding.patch-2012-01-13_12_20_07.patch, 
> Delta_encoding_with_memstore_TS.patch, open-source.diff
>
>
> A compression for keys. Keys are sorted in HFile and they are usually very 
> similar. Because of that, it is possible to design better compression than 
> general-purpose algorithms.
> It is an additional step designed to be used in memory. It aims to save 
> memory in the cache as well as to speed up seeks within HFileBlocks. It 
> should improve performance a lot if key lengths are larger than value 
> lengths. For example, it makes a lot of sense to use it when the value is a 
> counter.
> Initial tests on real data (key length ~90 bytes, value length = 8 bytes) 
> show that I could achieve a decent level of compression:
>  key compression ratio: 92%
>  total compression ratio: 85%
>  LZO on the same data: 85%
>  LZO after delta encoding: 91%
> while having much better performance (20-80% faster decompression than LZO). 
> Moreover, it should allow far more efficient seeking, which should improve 
> performance a bit.
> It seems that simple compression algorithms are good enough. Most of the 
> savings are due to prefix compression, int128 encoding, timestamp diffs, and 
> bitfields to avoid duplication. That way, comparisons of compressed data can 
> be much faster than a byte comparator (thanks to prefix compression and 
> bitfields).
> In order to implement it in HBase, two important design changes will be 
> needed:
> - solidify the interface to HFileBlock / HFileReader Scanner to provide 
> seeking and iterating; access to the uncompressed buffer in HFileBlock will 
> have bad performance
> - extend comparators to support comparison assuming that the first N bytes 
> are equal (or some fields are equal)
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression
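The prefix-compression idea the description leans on can be illustrated with a minimal sketch (this is only an illustration of the principle; the actual HBase encoders are far more involved): for sorted keys, store just the length of the prefix shared with the previous key plus the differing suffix.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of prefix compression over sorted keys; not the
// HBase data block encoder itself.
public class PrefixEncodingSketch {
    static class Entry {
        final int common;     // chars shared with the previous key
        final String suffix;  // remainder of this key
        Entry(int common, String suffix) {
            this.common = common;
            this.suffix = suffix;
        }
    }

    static List<Entry> encode(List<String> sortedKeys) {
        List<Entry> out = new ArrayList<>();
        String prev = "";
        for (String key : sortedKeys) {
            int i = 0;
            int max = Math.min(prev.length(), key.length());
            while (i < max && prev.charAt(i) == key.charAt(i)) i++;
            out.add(new Entry(i, key.substring(i)));
            prev = key;
        }
        return out;
    }

    static List<String> decode(List<Entry> entries) {
        List<String> out = new ArrayList<>();
        String prev = "";
        for (Entry e : entries) {
            String key = prev.substring(0, e.common) + e.suffix;
            out.add(key);
            prev = key;
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = List.of("row0001/cf:a", "row0001/cf:b", "row0002/cf:a");
        List<Entry> enc = encode(keys);
        // The second key shares "row0001/cf:" with its predecessor, so only
        // the one-character suffix is stored.
        System.out.println(enc.get(1).common + " + \"" + enc.get(1).suffix + "\"");
        System.out.println(decode(enc).equals(keys)); // round trip
    }
}
```

Because the keys are sorted, consecutive entries tend to share long prefixes, which is exactly why the savings quoted above are so large for long, similar keys.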

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
