[
https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13192767#comment-13192767
]
Phabricator commented on HBASE-4218:
------------------------------------
mbautin has commented on the revision "[jira] [HBASE-4218] HFile data block
encoding framework and delta encoding implementation".
Replying to the rest of the comments. I will upload another patch shortly.
INLINE COMMENTS
src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:130
Done.
src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:129
Updated this javadoc.
src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:137
Done.
src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java:36
Done. "128-bit" does not seem to appear anywhere else in the patch.
src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java:106
Good catch! Implemented your suggestion.
src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java:141
Spent quite a bit of time staring at this code to fully understand it, then
added some more comments :)
The difficult part for me was the "else" clause below. It turns out that,
since the column family length and name follow the row key, they are
automatically included in commonPrefix whenever the whole row matches, so we
don't need to special-case them.
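To illustrate the point, here is a rough, hypothetical sketch (not code from
the patch): with the standard KeyValue key layout, the family length byte and
the family bytes sit immediately after the row, so a byte-wise common prefix
that covers the whole row extends across them with no extra handling.

import java.nio.ByteBuffer;

public final class KeyPrefixSketch {

  // KeyValue key layout:
  //   [2-byte row length][row][1-byte family length][family][qualifier]
  //   [8-byte timestamp][1-byte type]
  static int commonPrefix(byte[] a, byte[] b) {
    int limit = Math.min(a.length, b.length);
    int i = 0;
    while (i < limit && a[i] == b[i]) {
      i++;
    }
    return i;
  }

  public static void main(String[] args) {
    // Two keys with the same row and family but different qualifiers.
    byte[] prev = key("row-001", "cf", "q1");
    byte[] curr = key("row-001", "cf", "q2");
    int commonPrefix = commonPrefix(prev, curr);

    // The whole row matches, so the bytes that follow it (family length and
    // family name) are identical as well and fall inside commonPrefix
    // without any special-casing; only the qualifier and beyond differs.
    int rowLength = "row-001".length();
    assert commonPrefix >= 2 + rowLength + 1 + "cf".length();
  }

  // Hypothetical helper that serializes only the key part of a KeyValue.
  static byte[] key(String row, String family, String qualifier) {
    byte[] r = row.getBytes();
    byte[] f = family.getBytes();
    byte[] q = qualifier.getBytes();
    ByteBuffer buf =
        ByteBuffer.allocate(2 + r.length + 1 + f.length + q.length + 8 + 1);
    buf.putShort((short) r.length).put(r);
    buf.put((byte) f.length).put(f).put(q);
    buf.putLong(0L);   // timestamp
    buf.put((byte) 4); // type (Put)
    return buf.array();
  }
}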
src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java:156-159
The condition on line 163 is FLAG_SAME_VALUE, not FLAG_SAME_TYPE, so moving
these lines there would actually change the logic. Why exactly are you saying
we should move these two lines there?
src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java:305-313
Yes, it appears that we are not using the qualifierLength field during
decompression. Moved the state changes from this block to the
FastDiffCompressionState.decompressFirstKV method.
src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java:45
Renamed to prevKeyOffset.
src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java:84
This actually fixes a pre-existing bug. The previous heapSize() implementation
in BlockCacheKey did not take into account the object overhead and the
hfileName String reference, and there was no unit test for BlockCacheKey, so
I have added one to TestHeapSize.
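As a rough illustration of the kind of accounting being described (a
self-contained sketch, not the patch's code: the real implementation uses
HBase's ClassSize constants, whereas the byte counts below are just typical
64-bit JVM assumptions):

public class BlockCacheKeySketch implements HeapSize {
  private static final int OBJECT_HEADER = 16; // assumed object overhead
  private static final int REFERENCE = 8;      // assumed reference size
  private static final int LONG = 8;

  private final String hfileName;
  private final long offset;

  public BlockCacheKeySketch(String hfileName, long offset) {
    this.hfileName = hfileName;
    this.offset = offset;
  }

  @Override
  public long heapSize() {
    // The fix counts the object header and the hfileName reference in
    // addition to the long offset, rounded up to the JVM's 8-byte alignment.
    return align(OBJECT_HEADER + REFERENCE + LONG);
  }

  private static long align(long size) {
    return (size + 7) & ~7L;
  }
}

// Minimal stand-in for org.apache.hadoop.hbase.io.HeapSize.
interface HeapSize {
  long heapSize();
}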
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:91
Removed.
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:760
Actually, this piece of logic enabled caching of unencoded blocks. However,
since we decided not to support encoding blocks on disk only (without also
encoding them in the cache), I am getting rid of this additional logic.
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java:405
Actually, readerV1 is useful, because it saves us a cast from
AbstractHFileReader to HFileReaderV1. But I have now renamed the field to
reader so that its use looks the same as in the base class.
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java:96
The assignment below is an upcast. We use some FSReaderV2-specific methods in
the constructor. Renamed the local variable to fsBlockReaderV2 and added a
comment for clarity.
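A hypothetical illustration of the pattern (the class names are stand-ins,
not the actual HBase types): keep the concrete type in a local variable so
subtype-specific setup can happen in the constructor, then store it in the
supertype-typed field via a plain upcast.

class FsReader {              // stand-in for the generic block reader type
  void readBlock() { /* ... */ }
}

class FsReaderV2 extends FsReader {
  void setDataBlockEncoder(Object encoder) { /* V2-only configuration */ }
}

class ReaderV2Sketch {
  private final FsReader fsBlockReader; // supertype field used by common code

  ReaderV2Sketch(Object encoder) {
    // The local variable keeps the concrete type so V2-only methods can be
    // called while constructing the reader.
    FsReaderV2 fsBlockReaderV2 = new FsReaderV2();
    fsBlockReaderV2.setDataBlockEncoder(encoder);
    this.fsBlockReader = fsBlockReaderV2; // upcast: FsReaderV2 -> FsReader
  }
}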
src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java:752
Good point. Leaving as is for now; the method name clearly says it is for
test only. We can optimize this later if necessary.
src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java:158
Done.
src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java:192
Done.
src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java:471
Done.
REVISION DETAIL
https://reviews.facebook.net/D447
> Data Block Encoding of KeyValues (aka delta encoding / prefix compression)
> ---------------------------------------------------------------------------
>
> Key: HBASE-4218
> URL: https://issues.apache.org/jira/browse/HBASE-4218
> Project: HBase
> Issue Type: Improvement
> Components: io
> Affects Versions: 0.94.0
> Reporter: Jacek Migdal
> Assignee: Mikhail Bautin
> Labels: compression
> Fix For: 0.94.0
>
> Attachments: 0001-Delta-encoding-fixed-encoded-scanners.patch,
> 0001-Delta-encoding.patch, 4218-2012-01-14.txt, 4218-v16.txt, 4218.txt,
> D447.1.patch, D447.10.patch, D447.11.patch, D447.12.patch, D447.13.patch,
> D447.14.patch, D447.15.patch, D447.16.patch, D447.17.patch, D447.18.patch,
> D447.19.patch, D447.2.patch, D447.20.patch, D447.21.patch, D447.22.patch,
> D447.23.patch, D447.24.patch, D447.3.patch, D447.4.patch, D447.5.patch,
> D447.6.patch, D447.7.patch, D447.8.patch, D447.9.patch,
> Data-block-encoding-2011-12-23.patch,
> Delta-encoding-2012-01-17_11_09_09.patch,
> Delta-encoding.patch-2011-12-22_11_52_07.patch,
> Delta-encoding.patch-2012-01-05_15_16_43.patch,
> Delta-encoding.patch-2012-01-05_16_31_44.patch,
> Delta-encoding.patch-2012-01-05_16_31_44_copy.patch,
> Delta-encoding.patch-2012-01-05_18_50_47.patch,
> Delta-encoding.patch-2012-01-07_14_12_48.patch,
> Delta-encoding.patch-2012-01-13_12_20_07.patch,
> Delta_encoding_with_memstore_TS.patch, open-source.diff
>
>
> A compression for keys. Keys are sorted in HFile and they are usually very
> similar. Because of that, it is possible to design a better compression than
> general-purpose algorithms.
> It is an additional step designed to be used in memory. It aims to save
> memory in the cache as well as to speed up seeks within HFileBlocks. It
> should improve performance a lot if key lengths are larger than value
> lengths. For example, it makes a lot of sense to use it when the value is a
> counter.
> Initial tests on real data (key length = ~90 bytes, value length = 8 bytes)
> show that I could achieve a decent level of compression:
> key compression ratio: 92%
> total compression ratio: 85%
> LZO on the same data: 85%
> LZO after delta encoding: 91%
> While having much better performance (20-80% faster decompression than
> LZO). Moreover, it should allow far more efficient seeking, which should
> improve performance a bit.
> It seems that simple compression algorithms are good enough. Most of the
> savings are due to prefix compression, int128 encoding, timestamp diffs and
> bitfields to avoid duplication. That way, comparisons of compressed data can
> be much faster than with a plain byte comparator (thanks to prefix
> compression and bitfields). A rough sketch of the prefix-compression idea
> follows after this description.
> In order to implement it in HBase, two important design changes will be
> needed:
> - solidify the interface to HFileBlock / HFileReader Scanner to provide
> seeking and iterating; accessing the uncompressed buffer in HFileBlock will
> have bad performance
> - extend comparators to support comparison assuming that the first N bytes
> are equal (or some fields are equal)
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression
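As referenced above, here is a rough, hypothetical sketch of the
prefix-compression idea from the description, using plain strings instead of
the actual KeyValue key format; every name below is illustrative. Each key is
stored as the length of the prefix it shares with the previous key plus the
remaining suffix, which is very effective when sorted keys share long
prefixes.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PrefixDeltaSketch {
  // One encoded entry: how many leading characters are shared with the
  // previous key, plus the suffix that differs.
  static class Encoded {
    final int commonPrefixLength;
    final String suffix;
    Encoded(int commonPrefixLength, String suffix) {
      this.commonPrefixLength = commonPrefixLength;
      this.suffix = suffix;
    }
  }

  static List<Encoded> encode(List<String> sortedKeys) {
    List<Encoded> out = new ArrayList<Encoded>();
    String prev = "";
    for (String key : sortedKeys) {
      int common = 0;
      int limit = Math.min(prev.length(), key.length());
      while (common < limit && prev.charAt(common) == key.charAt(common)) {
        common++;
      }
      out.add(new Encoded(common, key.substring(common)));
      prev = key;
    }
    return out;
  }

  public static void main(String[] args) {
    // Sorted keys like these share long prefixes, so after the first key
    // only short suffixes remain to be stored.
    List<Encoded> encoded = encode(Arrays.asList(
        "row0001/cf:counter", "row0002/cf:counter", "row0003/cf:counter"));
    for (Encoded e : encoded) {
      System.out.println(e.commonPrefixLength + " + \"" + e.suffix + "\"");
    }
  }
}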
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira