bbeaudreault commented on code in PR #4502:
URL: https://github.com/apache/hbase/pull/4502#discussion_r917327477
##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:
##########
@@ -625,11 +625,12 @@ HFileBlock unpack(HFileContext fileContext, FSReader reader) throws IOException
       : reader.getDefaultBlockDecodingContext();
     // Create a duplicated buffer without the header part.
     ByteBuff dup = this.buf.duplicate();
-    dup.position(this.headerSize());
+    int totalChecksumBytes = totalChecksumBytes();
+    dup.position(this.headerSize()).limit(dup.limit() - totalChecksumBytes);
     dup = dup.slice();
     // Decode the dup into unpacked#buf
-    ctx.prepareDecoding(unpacked.getOnDiskSizeWithoutHeader(),
-      unpacked.getUncompressedSizeWithoutHeader(), unpacked.getBufferWithoutHeader(true), dup);
+    ctx.prepareDecoding(unpacked.getOnDiskSizeWithoutHeader() - totalChecksumBytes,
Review Comment:
getOnDiskSizeWithoutHeader actually doesn't include the checksum bytes, so I
don't think this change is correct.
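To make the concern concrete, here is a minimal, self-contained sketch of the size arithmetic. The names mirror HFileBlock's accessors, but this is a toy model with illustrative values (not the real class), and it encodes the reading above that the size already excludes checksums:

```java
// Toy model of the block layout under discussion; all values are illustrative.
public class BlockSizeSketch {
  static final int HEADER_SIZE = 33;          // e.g. a header with checksum support
  static final int ON_DISK_DATA_SIZE = 4096;  // compressed/encrypted payload
  static final int TOTAL_CHECKSUM_BYTES = 16; // stand-in for totalChecksumBytes()

  // Encodes the reading above: header excluded, checksums also excluded.
  static int getOnDiskSizeWithoutHeader() {
    return ON_DISK_DATA_SIZE;
  }

  public static void main(String[] args) {
    int patchedArg = getOnDiskSizeWithoutHeader() - TOTAL_CHECKSUM_BYTES;
    // If the size never contained the checksums, the patched call hands
    // prepareDecoding a length that is TOTAL_CHECKSUM_BYTES too short.
    System.out.println("actual payload   = " + getOnDiskSizeWithoutHeader()); // 4096
    System.out.println("patched argument = " + patchedArg);                   // 4080
  }
}
```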
We've been looking at this same area in
https://issues.apache.org/jira/browse/HBASE-27053. My latest comments there
have a deep dive into what's going on here, though it's more related to
compression than encryption.
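For context on the buffer handling itself, the patch trims both the header prefix and the checksum suffix from the duplicated buffer before slicing. Below is a standalone java.nio.ByteBuffer analogue of that dance (HBase's ByteBuff mirrors these position/limit/slice semantics; the sizes are made up for illustration):

```java
import java.nio.ByteBuffer;

public class SliceSketch {
  public static void main(String[] args) {
    int headerSize = 33;    // stand-in for this.headerSize()
    int checksumBytes = 16; // stand-in for totalChecksumBytes()
    ByteBuffer buf = ByteBuffer.allocate(headerSize + 4096 + checksumBytes);

    ByteBuffer dup = buf.duplicate();
    // Skip the header at the front and cut the checksums off the tail...
    dup.position(headerSize).limit(dup.limit() - checksumBytes);
    // ...then slice so the remaining payload starts at position 0.
    ByteBuffer payload = dup.slice();
    System.out.println("payload bytes = " + payload.remaining()); // 4096
  }
}
```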