wchevreuil commented on code in PR #4640:
URL: https://github.com/apache/hbase/pull/4640#discussion_r926890128
##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java:
##########
@@ -172,8 +172,10 @@ public HFileWriterImpl(final Configuration conf, CacheConfig cacheConf, Path pat
}
closeOutputStream = path != null;
this.cacheConf = cacheConf;
-    float encodeBlockSizeRatio = conf.getFloat(UNIFIED_ENCODED_BLOCKSIZE_RATIO, 1f);
-    this.encodedBlockSizeLimit = (int) (hFileContext.getBlocksize() * encodeBlockSizeRatio);
+    float encodeBlockSizeRatio = conf.getFloat(UNIFIED_ENCODED_BLOCKSIZE_RATIO, 0f);
Review Comment:
It doesn't change the default behaviour: when
"hbase.writer.unified.encoded.blocksize.ratio" isn't set, we consider only the
unencoded size when calculating the block limit, which matches the previous if
condition in the checkBlockBoundary method.
The difference shows when "hbase.writer.unified.encoded.blocksize.ratio" is
set, as we can now have 64KB of encoded data (whereas that was never possible
before).
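
To make the behaviour concrete, here is a minimal, self-contained sketch of the block-boundary decision being discussed. It is not the actual HBase code: the class, the `shouldFinishBlock` helper, and its parameters are hypothetical, though the names mirror `HFileWriterImpl` and the logic reflects the comment's description (encoded size only matters when a positive limit is derived from the ratio; otherwise only the unencoded size counts).

```java
// Hypothetical sketch of the checkBlockBoundary logic described in the
// review comment above; NOT the real HFileWriterImpl implementation.
public class BlockBoundarySketch {

  final int blockSize;             // configured hfile block size, e.g. 64KB
  final int encodedBlockSizeLimit; // 0 when the ratio property is unset (default 0f)

  BlockBoundarySketch(int blockSize, float encodeBlockSizeRatio) {
    this.blockSize = blockSize;
    // mirrors: encodedBlockSizeLimit = (int) (blocksize * encodeBlockSizeRatio)
    this.encodedBlockSizeLimit = (int) (blockSize * encodeBlockSizeRatio);
  }

  // With the patch, the encoded size only bounds the block when a positive
  // limit was derived from the ratio; otherwise only unencoded size counts.
  boolean shouldFinishBlock(int unencodedSize, int encodedSize) {
    if (encodedBlockSizeLimit > 0) {
      return encodedSize >= encodedBlockSizeLimit;
    }
    return unencodedSize >= blockSize;
  }

  public static void main(String[] args) {
    int blockSize = 64 * 1024;

    // Ratio unset -> default 0f: only the unencoded size triggers a new block.
    BlockBoundarySketch unset = new BlockBoundarySketch(blockSize, 0f);
    System.out.println(unset.shouldFinishBlock(65 * 1024, 10 * 1024)); // true
    System.out.println(unset.shouldFinishBlock(30 * 1024, 70 * 1024)); // false

    // Ratio set to 1f: the block can now hold up to 64KB of *encoded* data.
    BlockBoundarySketch set = new BlockBoundarySketch(blockSize, 1f);
    System.out.println(set.shouldFinishBlock(200 * 1024, 10 * 1024)); // false
    System.out.println(set.shouldFinishBlock(30 * 1024, 64 * 1024));  // true
  }
}
```

Under these assumptions, a writer with the ratio unset behaves exactly like the old unencoded-size check, while setting the ratio lets encoded data fill the block, which is the difference the comment points out.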
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]