[
https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13169870#comment-13169870
]
Matt Corgan commented on HBASE-4218:
------------------------------------
Mikhail - sorry for the confusion. I was suggesting 4 options for the naming
of the overall "Delta Encoding", not the names of the individual encoders. I
assume the term "delta" comes from the fact that each KV is stored as the
difference from the KV before it.
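To make the "difference from the KV before it" idea concrete, here is a
minimal sketch for the key portion only (class and method names are
hypothetical, not the patch's actual API): each key after the first is stored
as a shared-prefix length plus the remaining suffix.

```java
import java.util.Arrays;

// Hypothetical illustration: each key is stored as (commonPrefixLen, suffix)
// relative to the previous key, which is what makes this a "delta" scheme.
public class PrefixDelta {
    // Number of leading bytes two keys share.
    static int commonPrefix(byte[] a, byte[] b) {
        int i = 0;
        while (i < a.length && i < b.length && a[i] == b[i]) i++;
        return i;
    }

    // The part of cur that actually needs to be written to the block.
    static byte[] suffix(byte[] prev, byte[] cur) {
        return Arrays.copyOfRange(cur, commonPrefix(prev, cur), cur.length);
    }

    public static void main(String[] args) {
        byte[] k1 = "row0001/cf:qual".getBytes();
        byte[] k2 = "row0002/cf:qual".getBytes();
        // "row000" is shared, so only "2/cf:qual" is stored for k2.
        System.out.println(commonPrefix(k1, k2) + " " + new String(suffix(k1, k2)));
        // prints "6 2/cf:qual"
    }
}
```

With long, similar keys (like the ~90-byte keys in the tests quoted below),
most of each key collapses into the prefix-length field.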
From what I can tell, this patch accomplishes something more significant than
just delta encoding. It is actually a layer of indirection/decoupling that
allows you to have one format of block on disk, another format of block in the
block cache, and still iterate through the KVs without ever fully decoding
the entire block to the unencoded format. It's really a general-purpose
encoding layer.
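A rough sketch of what iterating without fully decoding could look like: a
scanner walks (prefixLen, suffix) entries, materializing one key at a time in
a reused buffer instead of expanding the whole block up front. Names here are
hypothetical, not the patch's API.

```java
import java.util.List;

// Hypothetical sketch: walk prefix-encoded entries one at a time,
// rebuilding only the current key rather than decoding the whole block.
public class EncodedScanner {
    // One encoded entry: how many leading bytes to reuse from the previous
    // key, plus the new tail bytes.
    record Entry(int prefixLen, byte[] suffix) {}

    private byte[] current = new byte[0];

    // Advance to the next key by splicing the suffix onto the shared prefix.
    byte[] next(Entry e) {
        byte[] key = new byte[e.prefixLen() + e.suffix().length];
        System.arraycopy(current, 0, key, 0, e.prefixLen());
        System.arraycopy(e.suffix(), 0, key, e.prefixLen(), e.suffix().length);
        current = key;
        return key;
    }

    public static void main(String[] args) {
        EncodedScanner s = new EncodedScanner();
        List<Entry> block = List.of(
            new Entry(0, "row0001".getBytes()),  // first key stored whole
            new Entry(6, "2".getBytes()),        // "row000" + "2"
            new Entry(4, "1000".getBytes()));    // "row0" + "1000"
        for (Entry e : block) {
            System.out.println(new String(s.next(e)));
        }
        // prints row0001, row0002, row01000 on separate lines
    }
}
```

The same walk works whether the entries came from disk or from the block
cache, which is the decoupling described above.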
Jacek's 4 codecs were all delta based, but I've written a TRIE format where
keys are not stored as deltas from each other. Others could write formats
that also do not take deltas between KVs, so I was just pointing out that the
name DeltaEncoder is too specific. "DataBlockEncoding" might be more
appropriate. "BlockEncoding" might be too generic because I think index
blocks will need a different strategy, and other block types may never get
encoded.
> Delta Encoding of KeyValues (aka prefix compression)
> -----------------------------------------------------
>
> Key: HBASE-4218
> URL: https://issues.apache.org/jira/browse/HBASE-4218
> Project: HBase
> Issue Type: Improvement
> Components: io
> Affects Versions: 0.94.0
> Reporter: Jacek Migdal
> Assignee: Mikhail Bautin
> Labels: compression
> Attachments: 0001-Delta-encoding-fixed-encoded-scanners.patch,
> D447.1.patch, D447.2.patch, D447.3.patch, D447.4.patch, D447.5.patch,
> D447.6.patch, D447.7.patch, D447.8.patch,
> Delta_encoding_with_memstore_TS.patch, open-source.diff
>
>
> A compression scheme for keys. Keys in an HFile are sorted and usually very
> similar. Because of that, it is possible to design better compression than
> general-purpose algorithms provide.
> It is an additional step designed to be used in memory. It aims to save
> memory in the cache as well as to speed up seeks within HFileBlocks. It
> should improve performance a lot if key lengths are larger than value
> lengths. For example, it makes a lot of sense to use it when the value is a
> counter.
> Initial tests on real data (key length ~90 bytes, value length = 8 bytes)
> show that I could achieve a decent level of compression:
> key compression ratio: 92%
> total compression ratio: 85%
> LZO on the same data: 85%
> LZO after delta encoding: 91%
> while decompressing 20-80% faster than LZO. Moreover, it should allow far
> more efficient seeking, which should improve performance a bit.
> It seems that simple compression algorithms are good enough. Most of the
> savings are due to prefix compression, int128 encoding, timestamp diffs, and
> bitfields to avoid duplication. That way, comparisons of compressed data can
> be much faster than a byte comparator (thanks to prefix compression and
> bitfields).
> In order to implement it in HBase, two important design changes will be
> needed:
> - solidify the interface to the HFileBlock / HFileReader scanner to provide
> seeking and iterating; accessing the uncompressed buffer in HFileBlock will
> have bad performance
> - extend comparators to support comparison assuming that the first N bytes
> are equal (or some fields are equal)
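The second change might look roughly like this (a sketch under assumed names,
not HBase's actual comparator API): a comparison that is told the first
commonPrefix bytes are already known equal and resumes after them, which is
what makes seeks over prefix-encoded keys cheaper than a plain byte
comparator.

```java
// Hypothetical sketch of a comparator that skips a known-equal prefix.
// During a seek over prefix-encoded keys, the scanner already knows how many
// leading bytes of the target matched the previous key, so the comparison
// can resume at that offset instead of at byte 0.
public class PrefixAwareComparator {
    static int compareIgnoringPrefix(int commonPrefix, byte[] a, byte[] b) {
        int minLen = Math.min(a.length, b.length);
        for (int i = commonPrefix; i < minLen; i++) {
            int diff = (a[i] & 0xff) - (b[i] & 0xff);  // unsigned byte order
            if (diff != 0) return diff;
        }
        return a.length - b.length;  // shorter key sorts first
    }

    public static void main(String[] args) {
        byte[] target = "row0001/cf:qual".getBytes();
        byte[] key = "row0002/cf:qual".getBytes();
        // First 6 bytes ("row000") are known equal; start comparing at byte 6.
        System.out.println(compareIgnoringPrefix(6, target, key));
        // prints -1
    }
}
```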
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira