apurtell commented on a change in pull request #3244:
URL: https://github.com/apache/hbase/pull/3244#discussion_r629757734
##########
File path: hbase-protocol-shaded/src/main/protobuf/server/region/WAL.proto
##########
@@ -32,6 +32,7 @@ message WALHeader {
optional bool has_tag_compression = 3;
optional string writer_cls_name = 4;
optional string cell_codec_cls_name = 5;
+ optional bool has_value_compression = 6;
Review comment:
Yes, we initialize the `CompressionContext` from the header when
initializing the reader. With this feature toggle placed in
`CompressionContext` this is what we have to do.
If we made value compression something we always do, then this field would not
be necessary. I'd prefer that over turning it on on demand at the first
encounter of a set high bit in `Type`.
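To illustrate the toggle placement discussed above, here is a minimal sketch of a reader initializing its compression context from the parsed header flag. The class and field names (`Header`, `Context`, `initContext`) are illustrative stand-ins, not the actual HBase `WALHeader`/`CompressionContext` API.

```java
// Sketch only: stand-ins for WALHeader and CompressionContext to show why the
// reader needs the has_value_compression flag up front.
public class WalReaderInitSketch {
  // Stand-in for the parsed WALHeader proto's has_value_compression field.
  static class Header {
    final boolean hasValueCompression;
    Header(boolean hasValueCompression) {
      this.hasValueCompression = hasValueCompression;
    }
  }

  // Stand-in for a CompressionContext carrying the feature toggle.
  static class Context {
    final boolean valueCompressionEnabled;
    Context(boolean valueCompressionEnabled) {
      this.valueCompressionEnabled = valueCompressionEnabled;
    }
  }

  // The reader must know before decoding any cells whether values are
  // compressed; with the toggle in the context, it has to come from the header.
  static Context initContext(Header header) {
    return new Context(header.hasValueCompression);
  }

  public static void main(String[] args) {
    Context ctx = initContext(new Header(true));
    System.out.println("value compression enabled: " + ctx.valueCompressionEnabled);
  }
}
```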
##########
File path: hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
##########
@@ -230,6 +230,11 @@ public static long getKeyDataStructureSize(int rlength, int flength, int qlength
DeleteColumn((byte)12),
DeleteFamily((byte)14),
+ // Effective maximum is 127 (Byte.MAX_VALUE). We set the high order bit of the
+ // type byte in the WAL codecs to indicate, in a backwards compatible way, if the
+ // value is compressed there.
+ EffectiveMaximum((byte)Byte.MAX_VALUE),
+
// Maximum is used when searching; you look from maximum on down.
Maximum((byte)255);
Review comment:
The short answer is that this is the part where I thought people would raise
objections, so I wasn't sure what to do here. But I agree with your point; we
can try it.
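For reference, the high-bit scheme under discussion can be sketched as below. Because every legitimate type code is at most 127 (`Byte.MAX_VALUE`), bit 0x80 is free to use as a flag, and old readers never see it set. The helper names here are hypothetical, for illustration only.

```java
// Hypothetical sketch of flagging value compression via the high order bit of
// the type byte (names are illustrative, not the actual HBase codec API).
public class TypeByteHighBit {
  // High order bit of the type byte, unused by legitimate type codes (<= 127).
  static final int VALUE_COMPRESSED_FLAG = 0x80;

  // Writer side: set the flag on the type byte when the value is compressed.
  static byte markCompressed(byte type) {
    return (byte) (type | VALUE_COMPRESSED_FLAG);
  }

  // Reader side: the flag is present iff the high bit is set.
  static boolean isCompressed(byte typeByte) {
    return (typeByte & VALUE_COMPRESSED_FLAG) != 0;
  }

  // Reader side: mask the flag off to recover the original type code.
  static byte stripFlag(byte typeByte) {
    return (byte) (typeByte & 0x7F);
  }

  public static void main(String[] args) {
    byte deleteFamily = (byte) 14;
    byte marked = markCompressed(deleteFamily);
    System.out.println("marked: " + (marked & 0xFF)
        + ", compressed: " + isCompressed(marked)
        + ", recovered type: " + stripFlag(marked));
  }
}
```

The backwards-compatibility argument is visible here: an unflagged byte round-trips unchanged, so data written by old code decodes identically under the new reader.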
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]