shahrs87 commented on a change in pull request #3244:
URL: https://github.com/apache/hbase/pull/3244#discussion_r629708006



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
##########
@@ -256,6 +278,26 @@ public void write(Cell cell) throws IOException {
         }
       }
     }
+
+    private byte[] compressValue(Cell cell) throws IOException {
+      byte[] buffer = new byte[4096];
+      ByteArrayOutputStream baos = new ByteArrayOutputStream();
+      Deflater deflater = compression.getValueCompressor().getDeflater();
+      deflater.setInput(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength());
+      boolean finished = false;
+      do {
+        int bytesOut = deflater.deflate(buffer);
+        if (bytesOut > 0) {
+          baos.write(buffer, 0, bytesOut);
+        } else {
+          bytesOut = deflater.deflate(buffer, 0, buffer.length, Deflater.SYNC_FLUSH);

Review comment:
       If we reach this else branch, it means we have already compressed all of the input and written it to the ByteArrayOutputStream. In the else branch we know that `bytesOut` was 0, and `buffer` is never reset anywhere. So for the last chunk of the Cell's value, when its size is <= 4096, will we end up compressing it twice?
   @apurtell It is entirely possible that I misunderstood something. Please correct me if that is the case. Thank you!
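   For context, here is a standalone sketch of how a loop like the one above behaves (the class and method names below are made up for illustration; this is not the HBase code itself). The key point is that `deflate(buffer)` uses NO_FLUSH and can return 0 while compressed bytes are still buffered inside the Deflater; the final SYNC_FLUSH call only drains that buffered output. The input set via `setInput` is consumed exactly once, so the last chunk should not be compressed twice:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class SyncFlushDemo {

  // Hypothetical helper mirroring the loop under review: compress a value
  // with a Deflater, using SYNC_FLUSH to drain any buffered output.
  static byte[] compressValue(Deflater deflater, byte[] value) {
    byte[] buffer = new byte[4096];
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    deflater.setInput(value, 0, value.length);
    boolean finished = false;
    do {
      // Default NO_FLUSH: may return 0 even though compressed bytes
      // remain buffered inside the deflater.
      int bytesOut = deflater.deflate(buffer);
      if (bytesOut > 0) {
        baos.write(buffer, 0, bytesOut);
      } else {
        // The input was already consumed above; SYNC_FLUSH only drains
        // what the deflater is still holding, it does not re-compress.
        bytesOut = deflater.deflate(buffer, 0, buffer.length, Deflater.SYNC_FLUSH);
        baos.write(buffer, 0, bytesOut);
        finished = true;
      }
    } while (!finished);
    return baos.toByteArray();
  }

  public static void main(String[] args) throws Exception {
    byte[] value = "hello hello hello compression".getBytes("UTF-8");
    byte[] compressed = compressValue(new Deflater(), value);

    // Round-trip through an Inflater to show the output decompresses
    // back to exactly the original value, with no duplicated chunk.
    Inflater inflater = new Inflater();
    inflater.setInput(compressed);
    byte[] out = new byte[value.length];
    int total = 0;
    while (total < value.length) {
      int n = inflater.inflate(out, total, out.length - total);
      if (n == 0) {
        break;
      }
      total += n;
    }
    if (total != value.length
        || !new String(out, "UTF-8").equals("hello hello hello compression")) {
      throw new AssertionError("round trip failed");
    }
    System.out.println("round trip ok");
  }
}
```

   Running this, the round trip succeeds with a single copy of the value, which is why the else branch writing a second `bytesOut` does not duplicate data; whether relying on a lone SYNC_FLUSH call is robust for values larger than the 4096-byte buffer is a separate question.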




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

