ndimiduk commented on code in PR #6740:
URL: https://github.com/apache/hbase/pull/6740#discussion_r1995663746


##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:
##########
@@ -1742,6 +1760,21 @@ protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
         onDiskSizeWithHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);
       }
 
+      // Inspect the header's checksumType for known valid values. If we don't find such a value,
+      // assume that the bytes read are corrupted.We will clear the cached value and roll back to
+      // HDFS checksum
+      if (!checkCheckSumTypeOnHeaderBuf(headerBuf)) {
+        if (verifyChecksum) {
+          invalidateNextBlockHeader();
+          span.addEvent("Falling back to HDFS checksumming.", attributesBuilder.build());
+          return null;
+        } else {
+          throw new IOException(
+            "Unknown checksum type code " + headerBuf.get(HFileBlock.Header.CHECKSUM_TYPE_INDEX)
+              + ", the headerBuf of HFileBlock " + "may corrupted.");

Review Comment:
   Did you intend to include a block_id kind of component here in the exception message, or is this split string constant the result of refactoring?
   
   Either way, might as well include the file name here if it's handy. Otherwise, please join this string as `", the headerBuf of HFileBlock may corrupted."`.
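   For illustration only, a minimal sketch of what the joined message could look like with the file name included, assuming some file-name reference (called `pathName` below, a hypothetical name) happens to be in scope at this point:
   
   ```java
   // Hypothetical sketch: `pathName` stands in for whatever file reference is handy here.
   throw new IOException("Unknown checksum type code "
     + headerBuf.get(HFileBlock.Header.CHECKSUM_TYPE_INDEX) + " in file " + pathName
     + ", the headerBuf of HFileBlock may corrupted.");
   ```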



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
