[ https://issues.apache.org/jira/browse/HDFS-15375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17117980#comment-17117980 ]
hemanthboyina commented on HDFS-15375:
--------------------------------------

Ran the failing tests locally; the failures do not appear to be related to this patch:
org.apache.hadoop.hdfs.TestReconstructStripedFile.testErasureCodingWorkerXmitsWeight
org.apache.hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy.testErasureCodingWorkerXmitsWeight
These tests were failing even without this patch. Following up on them, I found they have been failing continuously:
[https://builds.apache.org/job/PreCommit-HDFS-Build/29368/]
[https://builds.apache.org/job/PreCommit-HDFS-Build/29366/|https://builds.apache.org/job/PreCommit-HDFS-Build/29366/#showFailuresLink]
[https://builds.apache.org/job/PreCommit-HDFS-Build/29358/]

> Reconstruction Work should not happen for Corrupt Block
> -------------------------------------------------------
>
>                 Key: HDFS-15375
>                 URL: https://issues.apache.org/jira/browse/HDFS-15375
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: hemanthboyina
>            Assignee: hemanthboyina
>            Priority: Major
>         Attachments: HDFS-15375-testrepro.patch, HDFS-15375.001.patch
>
> In BlockManager#updateNeededReconstructions, while updating
> neededReconstruction we add the pending-reconstruction count to the live
> replicas:
> {code:java}
> int pendingNum = pendingReconstruction.getNumReplicas(block);
> int curExpectedReplicas = getExpectedRedundancyNum(block);
> if (!hasEnoughEffectiveReplicas(block, repl, pendingNum)) {
>   neededReconstruction.update(block, repl.liveReplicas() + pendingNum,{code}
> But if two replicas are in pending reconstruction (due to corruption) and
> the third replica is also corrupted, the block should be placed in
> QUEUE_WITH_CORRUPT_BLOCKS. Because of the above logic it is instead added to
> QUEUE_LOW_REDUNDANCY, which makes the RedundancyMonitor reconstruct a
> corrupted block, which is wrong.
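For anyone following along, below is a minimal, self-contained sketch of the scenario the description walks through. It is not the actual BlockManager/LowRedundancyBlocks code; pickQueue(), the Queue enum, and the replica counts are hypothetical and only illustrate how counting pending reconstructions as live replicas can misclassify a fully corrupt block as merely low-redundancy.

{code:java}
/**
 * Illustrative sketch only (not Hadoop code): shows how adding the
 * pending-reconstruction count to the live-replica count changes which
 * queue a block would land in.
 */
public class QueueSelectionSketch {

  enum Queue { QUEUE_LOW_REDUNDANCY, QUEUE_WITH_CORRUPT_BLOCKS }

  /**
   * Hypothetical queue selection: a block with zero usable replicas is
   * treated as corrupt; anything else is treated as under-replicated.
   */
  static Queue pickQueue(int effectiveReplicas) {
    return effectiveReplicas == 0
        ? Queue.QUEUE_WITH_CORRUPT_BLOCKS
        : Queue.QUEUE_LOW_REDUNDANCY;
  }

  public static void main(String[] args) {
    int liveReplicas = 0; // the third replica has just been reported corrupt
    int pendingNum = 2;   // two replicas are already queued for reconstruction

    // Counting pending reconstructions as live (the behaviour described in
    // the issue) hides the fact that no healthy source replica exists.
    System.out.println("live + pending -> "
        + pickQueue(liveReplicas + pendingNum)); // QUEUE_LOW_REDUNDANCY (wrong)

    // Using only live replicas classifies the block as corrupt, so no
    // reconstruction would be scheduled from a bad source.
    System.out.println("live only      -> "
        + pickQueue(liveReplicas));              // QUEUE_WITH_CORRUPT_BLOCKS
  }
}
{code}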