[ https://issues.apache.org/jira/browse/HDFS-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13869719#comment-13869719 ]

Uma Maheswara Rao G commented on HDFS-5761:
-------------------------------------------

Thanks for filing a JIRA. I noticed this when I was looking at HDFS-5728.
Actually, the integrity validation check is not necessary when the checksum 
type is set to NULL; it should just consider the full file length as is.
I think the array below becomes a zero-length array when checksumSize is 0?
{code}
byte[] buf = new byte[lastChunkSize+checksumSize];
{code}

So, how about just considering blockFileLength when the crc type is NULL? 
Because the crc is null now, we need not care about integrity checks against 
the CRC file at all, right?
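
Something like the rough sketch below is what I mean (just an illustration 
against the current validateIntegrity code, not the attached patch; the early 
return and its placement are my assumption):
{code}
// Sketch only: when the checksum type is NULL, checksumSize is 0, so skip
// CRC-based validation entirely and trust the block file length as is.
// Variable names follow BlockPoolSlice#validateIntegrity.
if (checksum.getChecksumType() == DataChecksum.Type.NULL) {
  // No CRC data to verify against; this also avoids the divide-by-zero
  // in the numChunks computation below.
  return blockFileLen;
}
long numChunks = Math.min(
    (blockFileLen + bytesPerChecksum - 1) / bytesPerChecksum,
    (metaFileLen - crcHeaderLen) / checksumSize);
{code}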

> DataNode fails to validate integrity for checksum type NULL when DataNode 
> recovers 
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-5761
>                 URL: https://issues.apache.org/jira/browse/HDFS-5761
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 3.0.0
>            Reporter: Kousuke Saruta
>            Assignee: Kousuke Saruta
>         Attachments: HDFS-5761.patch
>
>
> When DataNode is down while writing blocks, the blocks are not finalized, 
> and the next time DataNode recovers, integrity validation will run.
> But if we use NULL as the checksum algorithm (we can set dfs.checksum.type 
> to NULL), DataNode will fail to validate integrity and cannot come up.
> The cause is in BlockPoolSlice#validateIntegrity.
> In that method, there is the following code:
> {code}
> long numChunks = Math.min(
>           (blockFileLen + bytesPerChecksum - 1)/bytesPerChecksum, 
>           (metaFileLen - crcHeaderLen)/checksumSize);
> {code}
> When we choose the NULL checksum, checksumSize is 0, so an 
> ArithmeticException (divide by zero) will be thrown and DataNode cannot 
> come up.
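
For reference, the failure mode is plain Java integer division by zero (my 
own standalone illustration, not HDFS code; the variable values are made up):
{code}
// Long division by zero throws ArithmeticException at runtime,
// which is what happens in the numChunks computation above when
// checksumSize is 0 for the NULL checksum type.
long metaFileLen = 7, crcHeaderLen = 7, checksumSize = 0;
long numChunks = (metaFileLen - crcHeaderLen) / checksumSize;
// -> java.lang.ArithmeticException: / by zero
{code}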


