zhangshuyan0 commented on PR #6368:
URL: https://github.com/apache/hadoop/pull/6368#issuecomment-1899635293

   This PR has corrected the size of the first packet in a new block, which is 
great. However, due to a pre-existing logic problem in `adjustChunkBoundary`, 
the size of the last packet in a block is still computed incorrectly, and I 
think we need a new PR to fix it.
   
https://github.com/apache/hadoop/blob/27ecc23ae7c5cafba6a5ea58d4a68d25bd7507dd/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L531-L543
   At line 540, when we pass `blockSize - getStreamer().getBytesCurBlock()` to 
`computePacketChunkSize` as the first parameter, `computePacketChunkSize` is 
likely to split data that could have been sent in one packet across two 
packets.
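   To illustrate why, here is a minimal, self-contained sketch. The constants 
and the simplified `chunksPerPacket` arithmetic are assumptions modeled on the 
linked `computePacketChunkSize` code, not copied from it: the packet body size 
is the requested packet size minus the packet header length, so passing the raw 
remaining byte count as the packet size loses one header's worth of room.

```java
public class PacketSplitDemo {
    // Assumed illustrative values; the real ones come from HDFS
    // configuration and PacketHeader.PKT_MAX_HEADER_LEN.
    static final int HEADER_LEN = 33;   // stand-in for the max packet header length
    static final int CHUNK_DATA = 512;  // data bytes per chunk (dfs.bytes-per-checksum)
    static final int CHECKSUM = 4;      // CRC checksum bytes per chunk

    // Simplified core arithmetic of computePacketChunkSize:
    // how many full chunks fit in a packet of the given size.
    static int chunksPerPacket(int psize) {
        int bodySize = psize - HEADER_LEN;
        return Math.max(bodySize / (CHUNK_DATA + CHECKSUM), 1);
    }

    public static void main(String[] args) {
        // Suppose exactly 10 chunks of data remain before the block boundary.
        int remainingBytes = 10 * (CHUNK_DATA + CHECKSUM);

        // Passing the raw remaining byte count as psize subtracts the header
        // length from it, so only 9 chunks fit in the "last" packet...
        System.out.println(chunksPerPacket(remainingBytes)); // prints 9

        // ...and the tenth chunk is sent in a second, nearly empty packet.
    }
}
```

   Under these assumed numbers, 10 chunks that could travel in one packet are 
split 9 + 1, which matches the behavior described above.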


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
