[ https://issues.apache.org/jira/browse/HDFS-3605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13410132#comment-13410132 ]
Uma Maheswara Rao G commented on HDFS-3605:
-------------------------------------------

Exactly, Todd. This is what we have done in our internal branch as a workaround for this issue. We can still discuss the optimization mentioned in the comment above: process only the block IDs at the current generation stamp and postpone the remaining work. Another option would be to keep only the most recent block report, i.e. the one with the greater genstamp. Do you see any issues with this?

> Missing Block in following scenario
> -----------------------------------
>
> Key: HDFS-3605
> URL: https://issues.apache.org/jira/browse/HDFS-3605
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: name-node
> Affects Versions: 2.0.0-alpha, 2.0.1-alpha
> Reporter: Brahma Reddy Battula
> Assignee: Todd Lipcon
> Attachments: TestAppendBlockMiss.java
>
>
> Open a file for append.
> Write data and sync.
> After the next log roll and edit-log tailing on the standby NN, close the append stream.
> Call append multiple times on the same file before the next edit-log roll.
> Now abruptly kill the current active NameNode.
> At this point the block is missed.
> This may be because all the latest blocks were queued in the standby NameNode.
> During failover, OP_CLOSE was processed first against the pending queue, adding
> the block to the corrupt-block list.
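For illustration only, here is a minimal sketch of the second option above (keep only the queued message with the greater genstamp per block and datanode), so stale reports for older genstamps are dropped rather than replayed during failover. This is not the actual HDFS BlockManager code; the class and method names (PendingDatanodeMessages, QueuedBlockMessage, enqueue, drain) are hypothetical.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical sketch of "maintain only the most recent block report with
 * the greater genstamp". Not the real HDFS implementation.
 */
class PendingDatanodeMessages {

  /** A queued block-report entry that the standby could not process yet. */
  static class QueuedBlockMessage {
    final long blockId;
    final long genStamp;
    final String datanodeUuid;

    QueuedBlockMessage(long blockId, long genStamp, String datanodeUuid) {
      this.blockId = blockId;
      this.genStamp = genStamp;
      this.datanodeUuid = datanodeUuid;
    }
  }

  // At most one queued message per (blockId, datanode): the one with the
  // highest generation stamp seen so far.
  private final Map<String, QueuedBlockMessage> latestByBlockAndNode =
      new HashMap<>();

  /** Queue a message, discarding any older-genstamp entry for the same block/node. */
  void enqueue(QueuedBlockMessage msg) {
    String key = msg.blockId + "/" + msg.datanodeUuid;
    QueuedBlockMessage existing = latestByBlockAndNode.get(key);
    if (existing == null || existing.genStamp < msg.genStamp) {
      latestByBlockAndNode.put(key, msg);
    }
    // else: stale report for an older genstamp; drop it so replaying the
    // queue on failover cannot mark the up-to-date block as corrupt.
  }

  /** Drain the remaining messages for replay once the standby becomes active. */
  List<QueuedBlockMessage> drain() {
    List<QueuedBlockMessage> all = new ArrayList<>(latestByBlockAndNode.values());
    latestByBlockAndNode.clear();
    return all;
  }
}
{code}

Under that assumption, when OP_CLOSE is replayed after failover only the newest-genstamp report is still queued, so the appended block should not be added to the corrupt-block list; the first option (process only the current-genstamp block ID and postpone the rest) would keep more queued state but avoid dropping anything.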