[jira] [Work logged] (HDFS-16533) COMPOSITE_CRC failed between replicated file and striped file due to invalid requested length

2022-07-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16533?focusedWorklogId=795099&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795099
 ]

ASF GitHub Bot logged work on HDFS-16533:
-

Author: ASF GitHub Bot
Created on: 26/Jul/22 02:22
Start Date: 26/Jul/22 02:22
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4155:
URL: https://github.com/apache/hadoop/pull/4155#issuecomment-1194902292

   @jojochuang Thank you very much for your review and suggestions.




Issue Time Tracking
---

Worklog Id: (was: 795099)
Time Spent: 4h 50m  (was: 4h 40m)

> COMPOSITE_CRC failed between replicated file and striped file due to invalid 
> requested length
> -
>
> Key: HDFS-16533
> URL: https://issues.apache.org/jira/browse/HDFS-16533
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Testing COMPOSITE_CRC with random verify lengths against a replicated file 
> and a striped file that contain the same data fails. It can be reproduced 
> as follows:
> {code:java}
> @Test(timeout = 90000)
> public void testStripedAndReplicatedFileChecksum2() throws Exception {
>   int abnormalSize = (dataBlocks * 2 - 2) * blockSize +
>       (int) (blockSize * 0.5);
>   prepareTestFiles(abnormalSize, new String[] {stripedFile1, replicatedFile});
>   int loopNumber = 100;
>   while (loopNumber-- > 0) {
>     int verifyLength = ThreadLocalRandom.current()
>         .nextInt(10, abnormalSize);
>     FileChecksum stripedFileChecksum1 = getFileChecksum(stripedFile1,
>         verifyLength, false);
>     FileChecksum replicatedFileChecksum = getFileChecksum(replicatedFile,
>         verifyLength, false);
>     if (checksumCombineMode.equals(ChecksumCombineMode.COMPOSITE_CRC.name())) {
>       Assert.assertEquals(stripedFileChecksum1, replicatedFileChecksum);
>     } else {
>       Assert.assertNotEquals(stripedFileChecksum1, replicatedFileChecksum);
>     }
>   }
> }
> {code}
> Tracing the root cause shows that `FileChecksumHelper#makeCompositeCrcResult` 
> may compute an incorrect `consumedLastBlockLength` when updating the checksum 
> for the last block of the requested length, which may not be the last block 
> of the file.
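> A minimal sketch of the computation involved (illustrative only; the class, 
> method, and parameter names here are hypothetical and do not reproduce the 
> actual patch in PR #4155): the consumed length of the range's last block must 
> be bounded by the requested length, not derived from the block or file size 
> alone, because that block may be an interior block of the file.
> {code:java}
> // Illustrative sketch: how many bytes of a block fall inside a checksum
> // request covering the first requestedLength bytes of a file.
> public class ConsumedLengthSketch {
>   static long consumedLastBlockLength(long requestedLength,
>       long blockStartOffset, long blockSize) {
>     // The last block of the requested range may be an interior block of
>     // the file, so bound its consumed length by the requested length
>     // rather than assuming the block is read to its end.
>     return Math.min(requestedLength - blockStartOffset, blockSize);
>   }
> 
>   public static void main(String[] args) {
>     long blockSize = 128;        // toy block size
>     long requestedLength = 200;  // range ends midway through the second block
>     // The second block starts at offset 128; only 72 of its bytes are
>     // covered by the request.
>     System.out.println(
>         consumedLastBlockLength(requestedLength, 128, blockSize)); // 72
>   }
> }
> {code}
> With that bound, an interior block that terminates the requested range 
> contributes only its in-range bytes to the composite CRC, which is what the 
> replicated-file path computes as well.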



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16533) COMPOSITE_CRC failed between replicated file and striped file due to invalid requested length

2022-07-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16533?focusedWorklogId=795035&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795035
 ]

ASF GitHub Bot logged work on HDFS-16533:
-

Author: ASF GitHub Bot
Created on: 25/Jul/22 20:30
Start Date: 25/Jul/22 20:30
Worklog Time Spent: 10m 
  Work Description: jojochuang merged PR #4155:
URL: https://github.com/apache/hadoop/pull/4155




Issue Time Tracking
---

Worklog Id: (was: 795035)
Time Spent: 4h 40m  (was: 4.5h)
