[ https://issues.apache.org/jira/browse/HDFS-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975571#comment-13975571 ]
Chen He commented on HDFS-2951:
-------------------------------
Hi [~andreina]
This JIRA is two years old, and we are now cleaning up 0.23 JIRAs. If this is
still a problem in 2.x, please retarget it to 2.x; if not, please close it.
Thanks!
> Block reported as corrupt while running multi threaded client program that
> performs write and read operation on a set of files
> ------------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-2951
> URL: https://issues.apache.org/jira/browse/HDFS-2951
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 0.23.0
> Reporter: J.Andreina
> Fix For: 0.24.0
>
>
> Block incorrectly detected as bad in the following scenario:
> A multi-threaded client program performing write and read operations on a set
> of files was running.
> One block was detected as bad by the DN.
> Multiple recoveries were triggered from the NN side (roughly one every hour).
> After around 6 hrs the recovery was successful (commitBlockSynchronization
> succeeded on the NN side).
> On the DN side, around the same time commitBlockSynchronization happened, one
> more recovery call arrived from the NN; it subsequently failed because the
> block had already been recovered and its generation timestamp had been updated.
> On the DN side, block verification then failed and the block was reported as bad.
> The fsck report indicates that the block is corrupt.
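
For readers following the scenario above: the failure hinges on a late recovery
request arriving with a generation stamp older than the one the replica already
carries after the earlier, successful recovery. Below is a minimal sketch of
that staleness check, using hypothetical class and method names (not the actual
HDFS datanode code), just to illustrate why the stale call is rejected once the
replica's stamp has already been advanced.

{code:java}
import java.io.IOException;

/**
 * Minimal sketch of a datanode-side generation-stamp staleness check.
 * All names here are hypothetical; this is not the actual HDFS code path.
 */
public class StaleRecoverySketch {

    /** Simplified replica state relevant to recovery. */
    static class ReplicaInfo {
        final long blockId;
        long generationStamp; // advanced when a recovery commits

        ReplicaInfo(long blockId, long generationStamp) {
            this.blockId = blockId;
            this.generationStamp = generationStamp;
        }
    }

    /**
     * A recovery request carries the generation stamp the NameNode last knew
     * about. If an earlier recovery already committed and advanced the
     * replica's stamp, this request is stale and is rejected rather than
     * rolling the stamp backwards.
     */
    static void initReplicaRecovery(ReplicaInfo replica,
                                    long expectedGenerationStamp,
                                    long newGenerationStamp) throws IOException {
        if (expectedGenerationStamp < replica.generationStamp) {
            throw new IOException("Stale recovery for block " + replica.blockId
                    + ": expected GS " + expectedGenerationStamp
                    + " < current GS " + replica.generationStamp);
        }
        replica.generationStamp = newGenerationStamp;
    }

    public static void main(String[] args) {
        // Replica already recovered: its stamp was advanced to 1005.
        ReplicaInfo replica = new ReplicaInfo(1073741825L, 1005L);
        try {
            // A late recovery call still carries the pre-recovery stamp (1001).
            initReplicaRecovery(replica, 1001L, 1006L);
        } catch (IOException e) {
            System.out.println("Recovery rejected: " + e.getMessage());
        }
    }
}
{code}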
--
This message was sent by Atlassian JIRA
(v6.2#6252)