jojochuang commented on code in PR #4155:
URL: https://github.com/apache/hadoop/pull/4155#discussion_r853715096
##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java:
##########
@@ -316,18 +317,22 @@ FileChecksum makeCompositeCrcResult() throws IOException {
"Added blockCrc 0x{} for block index {} of size {}",
Integer.toString(blockCrc, 16), i, block.getBlockSize());
}
-
- // NB: In some cases the located blocks have their block size adjusted
- // explicitly based on the requested length, but not all cases;
- // these numbers may or may not reflect actual sizes on disk.
- long reportedLastBlockSize =
- blockLocations.getLastLocatedBlock().getBlockSize();
- long consumedLastBlockLength = reportedLastBlockSize;
- if (length - sumBlockLengths < reportedLastBlockSize) {
- LOG.warn(
- "Last block length {} is less than reportedLastBlockSize {}",
- length - sumBlockLengths, reportedLastBlockSize);
- consumedLastBlockLength = length - sumBlockLengths;
+ LocatedBlock nextBlock = locatedBlocks.get(i);
+ long consumedLastBlockLength = Math.min(length - sumBlockLengths,
+ nextBlock.getBlockSize());
+ LocatedBlock lastBlock = blockLocations.getLastLocatedBlock();
+ if (nextBlock.equals(lastBlock)) {
Review Comment:
Could you elaborate on what this check does? Looking at the test case, I assume
these few lines distinguish replicated blocks from striped blocks. Am I right? How about
turning them into a helper method so the intent is more readable?
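
For illustration, a minimal sketch of what such a helper could look like — the name `isLastLocatedBlock` and its javadoc are my assumptions, not code from this PR:

```java
// Hypothetical helper (name and doc are illustrative, not from the PR).
// Assumes org.apache.hadoop.hdfs.protocol.LocatedBlock and
// org.apache.hadoop.hdfs.protocol.LocatedBlocks are already imported,
// as they are in FileChecksumHelper.

/**
 * Returns true if {@code block} is the last located block of the file,
 * i.e. the block whose consumed length may be truncated to the
 * requested range.
 */
private static boolean isLastLocatedBlock(LocatedBlock block,
    LocatedBlocks blockLocations) {
  // getLastLocatedBlock() is reported separately from the block list,
  // so compare against it explicitly rather than relying on list position.
  return block.equals(blockLocations.getLastLocatedBlock());
}
```

The call site would then read `if (isLastLocatedBlock(nextBlock, blockLocations))`, which keeps whatever the check actually encodes (replicated vs. striped, or simply last-block handling) documented in one place.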
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]