xuzq created HDFS-16533:
---------------------------
Summary: COMPOSITE_CRC failed between replicated file and striped
file.
Key: HDFS-16533
URL: https://issues.apache.org/jira/browse/HDFS-16533
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs, hdfs-client
Reporter: xuzq
Assignee: xuzq
When comparing COMPOSITE_CRC checksums at random lengths between a replicated
file and a striped file that holds the same data, the checksums do not always
match. Steps to reproduce:
{code:java}
@Test(timeout = 90000)
public void testStripedAndReplicatedFileChecksum2() throws Exception {
  int abnormalSize = (dataBlocks * 2 - 2) * blockSize +
      (int) (blockSize * 0.5);
  prepareTestFiles(abnormalSize, new String[] {stripedFile1, replicatedFile});

  int loopNumber = 100;
  while (loopNumber-- > 0) {
    int verifyLength = ThreadLocalRandom.current()
        .nextInt(10, abnormalSize);
    FileChecksum stripedFileChecksum1 = getFileChecksum(stripedFile1,
        verifyLength, false);
    FileChecksum replicatedFileChecksum = getFileChecksum(replicatedFile,
        verifyLength, false);
    if (checksumCombineMode.equals(ChecksumCombineMode.COMPOSITE_CRC.name())) {
      Assert.assertEquals(stripedFileChecksum1, replicatedFileChecksum);
    } else {
      Assert.assertNotEquals(stripedFileChecksum1, replicatedFileChecksum);
    }
  }
}
{code}
Tracing the root cause shows that `FileChecksumHelper#makeCompositeCrcResult`
may compute an incorrect `consumedLastBlockLength` when updating the checksum
for the last block covered by the requested length, because that block may not
be the last block of the file.
--
This message was sent by Atlassian Jira
(v8.20.1#820001)