anmolnar commented on code in PR #4640:
URL: https://github.com/apache/hbase/pull/4640#discussion_r926641268


##########
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHStoreFile.java:
##########
@@ -1141,4 +1144,64 @@ public void testDataBlockEncodingMetaData() throws IOException {
     byte[] value = fileInfo.get(HFileDataBlockEncoder.DATA_BLOCK_ENCODING);
     assertArrayEquals(dataBlockEncoderAlgo.getNameInBytes(), value);
   }
+
+  @Test
+  public void testDataBlockSizeEncoded() throws Exception {
+    // Make up a directory hierarchy that has a regiondir ("7e0102") and familyname.
+    Path dir = new Path(new Path(this.testDir, "7e0102"), "familyname");
+    Path path = new Path(dir, "1234567890");
+
+    DataBlockEncoding dataBlockEncoderAlgo =
+      DataBlockEncoding.FAST_DIFF;
+
+    conf.setDouble("hbase.writer.unified.encoded.blocksize.ratio", 1);
+
+    cacheConf = new CacheConfig(conf);
+    HFileContext meta = new HFileContextBuilder().withBlockSize(BLOCKSIZE_SMALL)
+      .withChecksumType(CKTYPE)
+      .withBytesPerCheckSum(CKBYTES)
+      .withDataBlockEncoding(dataBlockEncoderAlgo)
+      .build();
+    // Make a store file and write data to it.
+    StoreFileWriter writer = new StoreFileWriter.Builder(conf, cacheConf, this.fs)
+      .withFilePath(path)
+      .withMaxKeyCount(2000)
+      .withFileContext(meta)
+      .build();
+    writeStoreFile(writer);
+    //writer.close();

Review Comment:
   Please remove this commented-out line.



##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java:
##########
@@ -172,8 +172,10 @@ public HFileWriterImpl(final Configuration conf, CacheConfig cacheConf, Path pat
     }
     closeOutputStream = path != null;
     this.cacheConf = cacheConf;
-    float encodeBlockSizeRatio = conf.getFloat(UNIFIED_ENCODED_BLOCKSIZE_RATIO, 1f);
-    this.encodedBlockSizeLimit = (int) (hFileContext.getBlocksize() * encodeBlockSizeRatio);
+    float encodeBlockSizeRatio = conf.getFloat(UNIFIED_ENCODED_BLOCKSIZE_RATIO, 0f);
+    this.encodedBlockSizeLimit = encodeBlockSizeRatio >0 ?
+      (int) (hFileContext.getBlocksize() * encodeBlockSizeRatio) : 0;

Review Comment:
   Because of the multiplication the result will be 0 anyway, so there is no need for the ternary operator.
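
   To illustrate the point above: when the ratio defaults to `0f`, the multiplication already produces 0 after the `int` cast, so the guard adds nothing. A minimal, standalone sketch (the class and variable names here are hypothetical, and `65536` is just an example block size):

   ```java
   public class EncodedBlockSizeLimitDemo {
       public static void main(String[] args) {
           // Stand-ins for hFileContext.getBlocksize() and the config value.
           int blockSize = 65536; // example HFile block size
           float ratio = 0f;      // new default for UNIFIED_ENCODED_BLOCKSIZE_RATIO

           // Guarded form, as written in the diff:
           int withGuard = ratio > 0 ? (int) (blockSize * ratio) : 0;
           // Plain multiplication: (int) (65536 * 0f) is already 0.
           int withoutGuard = (int) (blockSize * ratio);

           System.out.println(withGuard + " " + withoutGuard); // prints "0 0"
       }
   }
   ```

   Both expressions agree for every non-negative ratio, so the simpler form is preferable.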



##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:
##########
@@ -449,7 +449,7 @@ public int getOnDiskSizeWithHeader() {
   }
 
   /** Returns the on-disk size of the data part + checksum (header excluded). */
-  int getOnDiskSizeWithoutHeader() {
+  public int getOnDiskSizeWithoutHeader() {

Review Comment:
   I'm not sure if it's feasible, but if you move the test to `TestHFile`, for instance (it lives in the same package as `HFileBlock`), you don't need to make this method public.
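
   The reasoning behind the suggestion: a package-private member is visible to every class in the same package, so a test placed alongside `HFileBlock` can call it without the visibility change. A minimal sketch with hypothetical class names (both classes share one package, as two top-level classes in one file do):

   ```java
   // Package-private accessor, mirroring HFileBlock#getOnDiskSizeWithoutHeader.
   class BlockLike {
       int getOnDiskSizeWithoutHeader() {
           return 128; // dummy size, for illustration only
       }
   }

   public class SamePackageAccessDemo {
       public static void main(String[] args) {
           // Legal call: SamePackageAccessDemo is in the same package as BlockLike.
           // From any other package this would be a compile error.
           System.out.println(new BlockLike().getOnDiskSizeWithoutHeader()); // prints 128
       }
   }
   ```

   This is why tests for package-private internals are conventionally placed in the same package under `src/test/java`.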



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
