[ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15338470#comment-15338470
 ] 

Kai Zheng commented on HDFS-10460:
----------------------------------

Thanks [~rakeshr] for handling the hard part! A quick look gave me the following 
comments; I will do a careful review later.

1. Could you explain why we need to add {{actualNumBytes}} for this, or 
elaborate a bit in the description for better understanding? I'm thinking 
maybe we could use {{requestLength}} to carry the extra needed info, and set 
{{actualNumBytes}} on the block group instead. Not sure if this would be better.

2. The newly added tests look great! Regarding the code below: 1) the comment 
says less than {{bytesPerCRC}}, but in fact you pass {{bytesPerCRC}} itself as 
the request length; 2) could you read {{bytesPerCRC}} once and save it in the 
setup method, so the other tests can use it as well?
{code}
+  /**
+   * Test to verify that the checksum can be computed by giving less than
+   * bytesPerCRC length of the file range for checksum calculation. 512 is the
+   * value of bytesPerCRC.
+   */
+  @Test(timeout = 90000)
+  public void testStripedFileChecksumWithMissedDataBlocksRangeQuery2()
+      throws Exception {
+    int bytesPerCRC = conf.getInt(
+        HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY,
+        HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_DEFAULT);
+    testStripedFileChecksumWithMissedDataBlocksRangeQuery(stripedFile1,
+        bytesPerCRC);
+  }
{code}
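To illustrate point 2, here is a minimal sketch of hoisting the config read into a shared setup method so every test reuses the cached value. Hadoop's {{Configuration}} is stood in for by {{java.util.Properties}} so the sketch is self-contained; the key name and the 512-byte default mirror the real constants but are assumptions here, and {{setup()}} stands in for a JUnit {{@Before}} method.

```java
import java.util.Properties;

public class ChecksumSetupSketch {
  // Stand-ins for the HdfsClientConfigKeys constants (assumed values).
  static final String DFS_BYTES_PER_CHECKSUM_KEY = "dfs.bytes-per-checksum";
  static final int DFS_BYTES_PER_CHECKSUM_DEFAULT = 512;

  private final Properties conf;
  private int bytesPerCRC; // resolved once in setup, reused by every test

  ChecksumSetupSketch(Properties conf) {
    this.conf = conf;
  }

  // Analogous to a JUnit @Before method: read the config value a single time
  // instead of repeating the lookup inside each @Test method.
  void setup() {
    bytesPerCRC = Integer.parseInt(conf.getProperty(
        DFS_BYTES_PER_CHECKSUM_KEY,
        String.valueOf(DFS_BYTES_PER_CHECKSUM_DEFAULT)));
  }

  int getBytesPerCRC() {
    return bytesPerCRC;
  }

  public static void main(String[] args) {
    // No key set, so the default applies.
    ChecksumSetupSketch t = new ChecksumSetupSketch(new Properties());
    t.setup();
    System.out.println(t.getBytesPerCRC()); // prints 512
  }
}
```

With this in place, each range-query test could take the length as a parameter (e.g. {{bytesPerCRC - 1}} for the "less than bytesPerCRC" case) rather than re-reading the configuration.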

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10460
>                 URL: https://issues.apache.org/jira/browse/HDFS-10460
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Rakesh R
>            Assignee: Rakesh R
>         Attachments: HDFS-10460-00.patch, HDFS-10460-01.patch
>
>
> This jira is a HDFS-9833 follow-on task to address reconstructing a block and 
> then recalculating the block checksum for a particular range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
