[
https://issues.apache.org/jira/browse/HDFS-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866968#comment-13866968
]
Uma Maheswara Rao G commented on HDFS-5728:
-------------------------------------------
Does this case happen only when we restart a DN where the crc covers less data
than the block file? On restart we convert all RBW replica states to RWR, and
there the length is calculated from the crc chunks. If that is the case, how
about just setting the block file length to the same value after creating the
RWR state?
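A minimal sketch of that suggestion, assuming the HDFS defaults of 512-byte
checksum chunks, 4-byte CRC32 entries, and a 7-byte meta file header;
truncateToCrcCoveredLength is a hypothetical helper for illustration, not the
actual DataNode code:

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class RwrLengthSketch {
      static final int BYTES_PER_CHECKSUM = 512; // dfs.bytes-per-checksum default
      static final int CHECKSUM_SIZE = 4;        // one CRC32 entry
      static final int META_HEADER_SIZE = 7;     // version + checksum descriptor

      // Truncate the block file so its length matches what the crc entries
      // in the meta file actually cover.
      static void truncateToCrcCoveredLength(File blockFile, File metaFile)
          throws IOException {
        long numChunks = (metaFile.length() - META_HEADER_SIZE) / CHECKSUM_SIZE;
        // The last crc may cover a partial chunk; a full chunk is assumed
        // here for simplicity, so coveredLen is an upper bound.
        long coveredLen = numChunks * BYTES_PER_CHECKSUM;
        if (blockFile.length() > coveredLen) {
          try (RandomAccessFile raf = new RandomAccessFile(blockFile, "rw")) {
            raf.setLength(coveredLen); // drop the bytes that have no crc
          }
        }
      }
    }

With the block file truncated at RWR creation, the on-disk length and the
crc-derived length would agree, so the length comparison during recovery would
no longer fail.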
> [Diskfull] Block recovery will fail if the metafile does not have crc for all
> chunks of the block
> ----------------------------------------------------------------------------------------------
>
> Key: HDFS-5728
> URL: https://issues.apache.org/jira/browse/HDFS-5728
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.2.0
> Reporter: Vinay
> Assignee: Vinay
> Attachments: HDFS-5728.patch
>
>
> 1. A client (regionserver) has opened a stream to write its WAL to HDFS. This
> is not a one-time upload; data is written slowly.
> 2. One of the DataNodes became diskfull (because other data filled up its disks).
> 3. Unfortunately, the block was being written to only this datanode in the
> cluster, so the client write also failed.
> 4. After some time the disk was freed up and all processes were restarted.
> 5. The HMaster then tried to recover the file by calling recoverLease.
> At this point recovery failed with a file length mismatch.
> When checked:
> actual block file length: 62484480
> calculated block length: 62455808
> This was because the meta file had crcs for only 62455808 bytes, and recovery
> considered 62455808 to be the block size.
> No matter how many times it was retried, recovery kept failing.
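Working through the numbers above (assuming the HDFS default of 512 bytes per
checksum chunk, which the report does not state), the 28672-byte mismatch
corresponds to 56 chunks at the tail of the block that have data but no crc:

    public class LengthMismatchDemo {
      public static void main(String[] args) {
        final long actualLen = 62484480L;      // bytes present in the block file
        final long calculatedLen = 62455808L;  // length derived from the meta file
        final int bytesPerChecksum = 512;      // assumed HDFS default

        long chunksOnDisk  = actualLen / bytesPerChecksum;      // 122040
        long chunksWithCrc = calculatedLen / bytesPerChecksum;  // 121984
        long missing       = chunksOnDisk - chunksWithCrc;      // 56

        System.out.println(missing + " chunks ("
            + missing * bytesPerChecksum + " bytes) have data but no crc");
      }
    }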