[ https://issues.apache.org/jira/browse/HDFS-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13881518#comment-13881518 ]
Kihwal Lee commented on HDFS-5728:
----------------------------------

I've committed this to trunk and branch-2. Thank you for reporting and working on the patch, Vinay. Thanks for the review, Uma.

> [Diskfull] Block recovery will fail if the metafile does not have crc for all chunks of the block
> --------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5728
>                 URL: https://issues.apache.org/jira/browse/HDFS-5728
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 0.23.10, 2.2.0
>            Reporter: Vinay
>            Assignee: Vinay
>            Priority: Critical
>             Fix For: 3.0.0, 2.4.0
>
>         Attachments: HDFS-5728.patch, HDFS-5728.patch, HDFS-5728.patch
>
>
> 1. The client (a regionserver) has opened a stream to write its WAL to HDFS. This is not a one-time upload; data is written slowly over time.
> 2. One of the DataNodes ran out of disk space (other data had filled up its disks).
> 3. Unfortunately the block was being written to only this DataNode in the cluster, so the client write also failed.
> 4. After some time the disk was freed up and all processes were restarted.
> 5. The HMaster then tried to recover the file by calling recoverLease.
> At this point recovery kept failing with a file length mismatch.
> When checked:
> actual block file length: 62484480
> calculated block length: 62455808
> This was because the metafile had CRCs for only 62455808 bytes, so 62455808 was taken as the block size.
> No matter how many times it was retried, recovery failed continuously.
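To make the length mismatch above concrete, the sketch below (not the actual HDFS recovery code) illustrates why the calculated length comes out as 62455808: during recovery the DataNode can only trust bytes that are covered by a checksum, so the usable block length is bounded by the number of CRC entries in the .meta file. The 512-byte chunk size, 4-byte CRC32 width, 7-byte meta-file header, and the class/method names are illustrative assumptions, not taken from the patch.

{code:java}
/**
 * Minimal sketch (illustrative only, not the HDFS implementation) of how a
 * block can be treated as shorter than its on-disk file: the trustworthy
 * length is bounded by how many chunks the .meta file has CRCs for.
 *
 * Assumed defaults: 512-byte checksum chunks, 4-byte CRC32 checksums,
 * 7-byte meta-file header.
 */
public class MetaCoveredLength {

  static final int BYTES_PER_CHECKSUM = 512; // assumed default chunk size
  static final int CHECKSUM_SIZE = 4;        // assumed CRC32 width
  static final int META_HEADER_SIZE = 7;     // assumed meta-file header size

  /** Length of the block that is actually covered by checksums in the meta file. */
  static long checksumCoveredLength(long blockFileLen, long metaFileLen) {
    long chunksInMeta = (metaFileLen - META_HEADER_SIZE) / CHECKSUM_SIZE;
    long coveredLen = chunksInMeta * BYTES_PER_CHECKSUM;
    // Bytes beyond the last checksummed chunk cannot be verified, so the
    // block file looks longer than the length that recovery will accept.
    return Math.min(blockFileLen, coveredLen);
  }

  public static void main(String[] args) {
    long blockFileLen = 62484480L;                        // block file length from the report
    long chunksFlushed = 62455808L / BYTES_PER_CHECKSUM;  // CRCs actually flushed: 121984 chunks
    long metaFileLen = META_HEADER_SIZE + chunksFlushed * CHECKSUM_SIZE;

    System.out.println("actual block file length : " + blockFileLen);
    System.out.println("checksum-covered length  : "
        + checksumCoveredLength(blockFileLen, metaFileLen)); // prints 62455808
  }
}
{code}

On a disk-full node the block file can receive data that the meta file never gets CRCs for, which produces exactly the 62484480 vs. 62455808 gap reported above and makes every recovery attempt fail until the mismatch is tolerated.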