[
https://issues.apache.org/jira/browse/HADOOP-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12674770#action_12674770
]
Raghu Angadi commented on HADOOP-3314:
--------------------------------------
Brian, this issue has little to do with HADOOP-4692. This is pretty simple :
block verification should make sure that block length, crc lengths are correct
(w.r.t each other) and any runtime expected length (if there is one).
For e.g. if the original block length is 64MB and was truncated to 63MB (a 512
byte boundary) for some reason, it would not be caught by the current
verification.
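The consistency check described above can be sketched roughly as follows. This is an illustrative sketch, not actual DataBlockScanner code: the class, method names, and the 7-byte meta-file header size are assumptions; the 512-byte chunk size and 4-byte CRC-32 checksum match HDFS defaults.

```java
// Hypothetical sketch of the proposed length-consistency check.
// Assumes 512-byte checksum chunks and CRC-32 (4 bytes per chunk);
// META_HEADER_SIZE and all names are illustrative, not real HDFS APIs.
public class BlockLengthCheck {
    static final int BYTES_PER_CHECKSUM = 512; // io.bytes.per.checksum default
    static final int CHECKSUM_SIZE = 4;        // CRC-32 is 4 bytes per chunk
    static final int META_HEADER_SIZE = 7;     // assumed meta-file header size

    /** Checksum-file length expected for a block of blockLen data bytes. */
    static long expectedMetaLen(long blockLen) {
        long chunks = (blockLen + BYTES_PER_CHECKSUM - 1) / BYTES_PER_CHECKSUM;
        return META_HEADER_SIZE + chunks * CHECKSUM_SIZE;
    }

    /**
     * A block passes only if its on-disk length agrees with the checksum
     * file, and with the expected length when one is known (>= 0).
     */
    static boolean isConsistent(long blockLen, long metaLen, long expectedBlockLen) {
        if (metaLen != expectedMetaLen(blockLen)) {
            return false; // block and CRC file disagree with each other
        }
        return expectedBlockLen < 0 || blockLen == expectedBlockLen;
    }

    public static void main(String[] args) {
        long full = 64L * 1024 * 1024;      // original 64MB block
        long truncated = 63L * 1024 * 1024; // truncated on a 512-byte boundary
        // If the CRC file was truncated to match, block and meta still agree;
        // only the expected length reveals the corruption.
        System.out.println(isConsistent(truncated, expectedMetaLen(truncated), full));
        System.out.println(isConsistent(full, expectedMetaLen(full), full));
    }
}
```

This illustrates the comment's point: a truncation on a 512-byte boundary leaves the block internally consistent with its checksum file, so only a comparison against an externally known expected length detects it.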
> DataBlockScanner (via periodic verification) could be improved to check for
> corrupt block length
> ------------------------------------------------------------------------------------------------
>
> Key: HADOOP-3314
> URL: https://issues.apache.org/jira/browse/HADOOP-3314
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Environment: All
> Reporter: Lohit Vijayarenu
>
> DataBlockScanner should also check for truncated blocks and report them as
> corrupt block to the NN
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.