Kousuke Saruta created HDFS-5761:
------------------------------------
Summary: DataNode fails to validate integrity for checksum type
NULL when DataNode recovers
Key: HDFS-5761
URL: https://issues.apache.org/jira/browse/HDFS-5761
Project: Hadoop HDFS
Issue Type: Bug
Components: datanode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
When a DataNode goes down while writing blocks, those blocks are not finalized, and
integrity validation runs the next time the DataNode starts up.
But if the NULL checksum algorithm is used (dfs.checksum.type can be set to
NULL), the DataNode fails to validate integrity and cannot start.
The cause is in BlockPoolSlice#validateIntegrity, which contains the following code:
{code}
long numChunks = Math.min(
    (blockFileLen + bytesPerChecksum - 1)/bytesPerChecksum,
    (metaFileLen - crcHeaderLen)/checksumSize);
{code}
When the NULL checksum type is chosen, checksumSize is 0, so the second division
throws ArithmeticException and the DataNode cannot start.
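A minimal standalone sketch (not the actual BlockPoolSlice code; method and variable names are simplified for illustration) that reproduces the divide-by-zero and shows one possible guard, assuming a fix that skips the chunk computation when checksumSize is 0:

{code}
public class NullChecksumSketch {
    // Simplified version of the numChunks computation from
    // BlockPoolSlice#validateIntegrity, with a hypothetical guard.
    static long numChunks(long blockFileLen, long metaFileLen,
                          int bytesPerChecksum, int checksumSize,
                          int crcHeaderLen) {
        if (checksumSize == 0) {
            // NULL checksum type: no checksum bytes exist, so there is
            // nothing to validate; avoid the division by zero below.
            return 0;
        }
        return Math.min(
            (blockFileLen + bytesPerChecksum - 1) / bytesPerChecksum,
            (metaFileLen - crcHeaderLen) / checksumSize);
    }

    public static void main(String[] args) {
        // Typical CRC32 case: 4-byte checksums, 512-byte chunks.
        System.out.println(numChunks(1024, 15, 512, 4, 7)); // 2 chunks
        // NULL checksum: checksumSize == 0 would throw
        // ArithmeticException without the guard above.
        System.out.println(numChunks(1024, 7, 512, 0, 7));  // 0 chunks
    }
}
{code}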
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)