[ https://issues.apache.org/jira/browse/HDFS-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194506#comment-13194506 ]
Harsh J commented on HDFS-2848:
-------------------------------
I noticed this change when toying with 0.23.0 as well. It's only the appending
of data to blocks that causes this. I'm guessing we've gotten stricter about
registered block lengths and no longer read or checksum anything beyond the
known length?
I'm inclined to call any block that has been modified in any way outside of
HDFS a corrupt block, and to say it ought to be invalidated and re-replicated.
> hdfs corruption appended to blocks is not detected by fs commands or fsck
> -------------------------------------------------------------------------
>
> Key: HDFS-2848
> URL: https://issues.apache.org/jira/browse/HDFS-2848
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 0.23.0
> Reporter: Ravi Prakash
> Assignee: Ravi Prakash
>
> Courtesy Pat White
> {quote}
> Appears that there is a regression in corrupt block detection by both fsck
> and fs cmds like 'cat'. Testcases for pre-block and block-overwrite
> corruption of all replicas are correctly reporting errors; however,
> post-block corruption is not: fsck on the filesystem reports it as Healthy,
> and 'cat' returns without error. Looking at the DN blocks themselves,
> they clearly contain the injected corruption pattern.
> {quote}
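For reference, the post-block case amounts to something like the sketch below:
append bytes past the end of a finalized block file directly on a DataNode's
disk, then read the file back through HDFS. The block path is hypothetical and
the on-disk layout varies by version:
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class AppendGarbageToBlock {
  public static void main(String[] args) throws Exception {
    // Path to a finalized block file on a DataNode's local disk,
    // e.g. <dfs.data.dir>/current/blk_1073741825 (layout varies by version).
    Path blockFile = Paths.get(args[0]);
    Files.write(blockFile, "INJECTED-CORRUPTION".getBytes(),
        StandardOpenOption.APPEND);
    // After this, 'hadoop fs -cat <file>' returns without error and
    // 'hadoop fsck <file>' reports Healthy, because verification stops
    // at the registered block length.
  }
}
{code}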