wchevreuil commented on code in PR #4640:
URL: https://github.com/apache/hbase/pull/4640#discussion_r926886208


##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java:
##########
@@ -309,12 +311,15 @@ protected void finishInit(final Configuration conf) {
    * At a block boundary, write all the inline blocks and opens new block.
    */
   protected void checkBlockBoundary() throws IOException {
-    // For encoder like prefixTree, encoded size is not available, so we have to compare both
-    // encoded size and unencoded size to blocksize limit.
-    if (
-      blockWriter.encodedBlockSizeWritten() >= encodedBlockSizeLimit
-        || blockWriter.blockSizeWritten() >= hFileContext.getBlocksize()
-    ) {
+    boolean shouldFinishBlock = false;
+    //This means hbase.writer.unified.encoded.blocksize.ratio was set to something different from 0

Review Comment:
   Yeah, we don't have prefix tree anymore. So with the previous "||" condition, we could still fail to respect the desired encoded size if the data shrinkage from encoding is higher than the configured "hbase.writer.unified.encoded.blocksize.ratio" value. This change also allows for defining a 1:1 ratio, where we would then use the encoded size for the block limit.
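
   The intent described above can be sketched roughly as follows. This is an illustrative simplification, not the actual patch: the method name `shouldFinishBlock`, the parameters, and the ratio handling are assumptions based on the comment, and the real `checkBlockBoundary()` works against `blockWriter` state rather than plain arguments.

   ```java
   // Hypothetical sketch of the revised block-boundary decision.
   public class BlockBoundarySketch {
     static boolean shouldFinishBlock(long encodedSize, long unencodedSize,
         int blockSize, double encodedRatio) {
       // encodedRatio stands in for hbase.writer.unified.encoded.blocksize.ratio.
       long encodedLimit = (long) (blockSize * encodedRatio);
       if (encodedLimit > 0) {
         // Ratio configured: honour the encoded size limit alone, so heavy
         // shrinkage from encoding no longer finishes blocks early via the
         // raw (unencoded) size check that the old "||" condition applied.
         return encodedSize >= encodedLimit;
       }
       // Ratio unset (0): fall back to the unencoded on-disk block size.
       return unencodedSize >= blockSize;
     }

     public static void main(String[] args) {
       // With a 1:1 ratio, the encoded size alone drives the block limit:
       // 200 KB of raw data that encodes down to 30 KB does not yet close
       // a 64 KB block.
       System.out.println(shouldFinishBlock(30000, 200000, 65536, 1.0));
     }
   }
   ```

   With the old "||" condition, the second call above would have returned true because the unencoded size already exceeded the block size, closing the block well before the encoded target was reached.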



