[ https://issues.apache.org/jira/browse/HDFS-17293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808444#comment-17808444 ]

ASF GitHub Bot commented on HDFS-17293:
---------------------------------------

zhangshuyan0 commented on code in PR #6368:
URL: https://github.com/apache/hadoop/pull/6368#discussion_r1458246249


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java:
##########
@@ -184,6 +186,40 @@ public void testPreventOverflow() throws IOException, 
NoSuchFieldException,
     runAdjustChunkBoundary(configuredWritePacketSize, finalWritePacketSize);
   }
 
+  @Test(timeout=60000)
+  public void testFirstPacketSizeInNewBlocks() throws IOException {
+    final long blockSize = 1L * 1024 * 1024;
+    final int numDataNodes = 3;
+    final Configuration dfsConf = new Configuration();
+    dfsConf.setLong(DFS_BLOCK_SIZE_KEY, blockSize);
+    MiniDFSCluster dfsCluster = null;
+    dfsCluster = new MiniDFSCluster.Builder(dfsConf).numDataNodes(numDataNodes).build();
+    dfsCluster.waitActive();
+
+    DistributedFileSystem fs = dfsCluster.getFileSystem();
+    Path fileName = new Path("/testfile.dat");
+    FSDataOutputStream fos = fs.create(fileName);
+    DataChecksum crc32c = DataChecksum.newDataChecksum(DataChecksum.Type.CRC32C, 512);
+
+    long loop = 0;
+    Random r = new Random();
+    byte[] buf = new byte[1 * 1024 * 1024];
+    r.nextBytes(buf);
+    fos.write(buf);
+    fos.hflush();
+
+    while (loop < 20) {
+      r.nextBytes(buf);
+      fos.write(buf);
+      fos.hflush();
+      loop++;
+      Assert.assertNotEquals(crc32c.getBytesPerChecksum() + crc32c.getChecksumSize(),

Review Comment:
   It would be more appropriate to assert the precise expected `packetSize` here.
   Outside the `while` loop:
   ```
   int chunkSize = crc32c.getBytesPerChecksum() + crc32c.getChecksumSize();
   int packetContentSize =
       (dfsConf.getInt(DFS_CLIENT_WRITE_PACKET_SIZE_KEY, DFS_CLIENT_WRITE_PACKET_SIZE_DEFAULT)
           - PacketHeader.PKT_MAX_HEADER_LEN) / chunkSize * chunkSize;
   ```
   And here:
   ```
   Assert.assertEquals(((DFSOutputStream) fos.getWrappedStream()).packetSize,
       packetContentSize);
   ```



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java:
##########
@@ -184,6 +186,40 @@ public void testPreventOverflow() throws IOException, 
NoSuchFieldException,
     runAdjustChunkBoundary(configuredWritePacketSize, finalWritePacketSize);
   }
 
+  @Test(timeout=60000)
+  public void testFirstPacketSizeInNewBlocks() throws IOException {
+    final long blockSize = 1L * 1024 * 1024;
+    final int numDataNodes = 3;
+    final Configuration dfsConf = new Configuration();
+    dfsConf.setLong(DFS_BLOCK_SIZE_KEY, blockSize);
+    MiniDFSCluster dfsCluster = null;
+    dfsCluster = new MiniDFSCluster.Builder(dfsConf).numDataNodes(numDataNodes).build();
+    dfsCluster.waitActive();
+
+    DistributedFileSystem fs = dfsCluster.getFileSystem();
+    Path fileName = new Path("/testfile.dat");
+    FSDataOutputStream fos = fs.create(fileName);
+    DataChecksum crc32c = DataChecksum.newDataChecksum(DataChecksum.Type.CRC32C, 512);
+
+    long loop = 0;
+    Random r = new Random();
+    byte[] buf = new byte[1 * 1024 * 1024];

Review Comment:
   `byte[] buf = new byte[(int) blockSize];`





> First packet data + checksum size will be set to 516 bytes when writing to a 
> new block.
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-17293
>                 URL: https://issues.apache.org/jira/browse/HDFS-17293
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.3.6
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>
> First packet size will be set to 516 bytes when writing to a new block.
> In method computePacketChunkSize, the parameters psize and csize would be
> (0, 512) when writing to a new block. It would be better to use
> writePacketSize.
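
The 516-byte figure follows from simple arithmetic: one 512-byte data chunk plus its 4-byte CRC32C checksum. A minimal sketch of that computation (a simplified stand-in for `computePacketChunkSize`, not the actual HDFS source; the method and class names here are illustrative only):

```java
// Simplified model of the packet-size calculation: psize is the remaining
// payload budget for the packet, csize the data bytes covered by each checksum.
public class FirstPacketSizeSketch {
  static int packetSize(int psize, int csize, int checksumSize) {
    int chunkSize = csize + checksumSize;          // data chunk + its checksum
    int chunksPerPacket = Math.max(psize / chunkSize, 1);
    return chunkSize * chunksPerPacket;
  }

  public static void main(String[] args) {
    // At a new-block boundary the stream passes psize = 0, so exactly one
    // chunk fits: 512 bytes of data + 4-byte CRC32C checksum = 516 bytes.
    System.out.println(packetSize(0, 512, 4));     // prints 516
  }
}
```

With a nonzero `psize` budget, `chunksPerPacket` grows and the packet carries many chunks, which is why using `writePacketSize` instead of 0 avoids the tiny first packet.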



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
