[ https://issues.apache.org/jira/browse/HDFS-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13879014#comment-13879014 ]

Kihwal Lee edited comment on HDFS-5728 at 1/22/14 6:59 PM:
-----------------------------------------------------------

I have seen this recently after a partial power outage. The disks weren't full 
in this case. I manually truncated the block files to get recovery going.
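
For reference, a minimal sketch of that manual workaround: derive the CRC-covered length from the meta file and truncate the block file down to it. The class name and paths below are hypothetical, and the constants are the HDFS defaults assumed here (CRC32 with 4-byte checksums, 512-byte chunks, 7-byte meta header); this is not a supported tool, so verify everything against the cluster before touching block files.

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;

public class TruncateBlockToCrcLength {
    // Assumed defaults: CRC32 checksums (4 bytes each), 512 bytes per chunk,
    // and a 7-byte meta-file header (2-byte version + 5-byte checksum header).
    static final int HEADER_LEN = 7;
    static final int CHECKSUM_SIZE = 4;
    static final int BYTES_PER_CHECKSUM = 512;

    public static void main(String[] args) throws IOException {
        String blockFile = args[0]; // hypothetical path: .../current/blk_<id>
        String metaFile  = args[1]; // hypothetical path: .../current/blk_<id>_<genstamp>.meta

        long metaLen;
        try (RandomAccessFile meta = new RandomAccessFile(metaFile, "r")) {
            metaLen = meta.length();
        }
        // Number of chunks that actually have a CRC in the meta file. This
        // assumes only full chunks are covered, as in the report below
        // (62455808 is an exact multiple of 512).
        long numChunks  = (metaLen - HEADER_LEN) / CHECKSUM_SIZE;
        long coveredLen = numChunks * BYTES_PER_CHECKSUM;

        try (RandomAccessFile block = new RandomAccessFile(blockFile, "rw")) {
            if (block.length() > coveredLen) {
                block.setLength(coveredLen); // drop trailing bytes that have no CRC
            }
        }
    }
}
{code}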


was (Author: kihwal):
I have seen this recently after a partial power outage. The disks weren't full 
in this case.

> [Diskfull] Block recovery will fail if the metafile not having crc for all 
> chunks of the block
> ----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5728
>                 URL: https://issues.apache.org/jira/browse/HDFS-5728
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.2.0
>            Reporter: Vinay
>            Assignee: Vinay
>         Attachments: HDFS-5728.patch, HDFS-5728.patch
>
>
> 1. The client (a regionserver) had opened a stream to write its WAL to HDFS. 
> This was not a one-time upload; data was written slowly over time.
> 2. One of the DataNodes ran out of disk space (other data had filled up its disks).
> 3. Unfortunately, the block was being written to only this datanode in the 
> cluster, so the client write also failed.
> 4. After some time, disk space was freed and all processes were restarted.
> 5. The HMaster then tried to recover the file by calling recoverLease. 
> At this point recovery failed with a file length mismatch.
> On inspection:
>  actual block file length: 62484480
>  calculated block length: 62455808
> This was because the metafile had CRCs for only 62455808 bytes, so recovery 
> treated 62455808 as the block size (see the arithmetic sketch below).
> No matter how many times it was retried, recovery kept failing.
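
For concreteness, here is the arithmetic behind those two lengths, again assuming the default CRC32 checksums (4 bytes per chunk) over 512-byte chunks and the 7-byte meta-file header; the meta-file length is not given in the report and is back-computed here as an assumption.

{code:java}
public class Hdfs5728Arithmetic {
    public static void main(String[] args) {
        long onDisk    = 62_484_480L;        // actual block file length (from the report)
        long computed  = 62_455_808L;        // calculated block length (from the report)
        long chunks    = computed / 512;     // 121_984 full chunks have a CRC
        long metaLen   = 7 + chunks * 4;     // 487_943 bytes: back-computed, an assumption
        long uncovered = onDisk - computed;  // 28_672 bytes = 56 chunks without CRCs
        System.out.printf("chunks=%d metaLen=%d uncovered=%d bytes (%d chunks)%n",
                chunks, metaLen, uncovered, uncovered / 512);
    }
}
{code}

Because the meta file lags the block file by those 56 chunks, every recovery attempt derives the same shorter length, which is why retrying could not help.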



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
