hfutatzhanghb commented on PR #6368:
URL: https://github.com/apache/hadoop/pull/6368#issuecomment-1903059289
> > Sir, very nice catch. I think the code below may resolve the problem you found. Please take a look when you are free. I will submit another PR to fix it and add a UT.
> > ```java
> > if (!getStreamer().getAppendChunk()) {
> >   int psize = 0;
> >   if (blockSize == getStreamer().getBytesCurBlock()) {
> >     psize = writePacketSize;
> >   } else if (blockSize - getStreamer().getBytesCurBlock()
> >       + PacketHeader.PKT_MAX_HEADER_LEN < writePacketSize) {
> >     psize = (int) (blockSize - getStreamer().getBytesCurBlock())
> >         + PacketHeader.PKT_MAX_HEADER_LEN;
> >   } else {
> >     psize = (int) Math.min(blockSize - getStreamer().getBytesCurBlock(),
> >         writePacketSize);
> >   }
> >   computePacketChunkSize(psize, bytesPerChecksum);
> > }
> > ```
>
> Thank you very much for investing your time in fixing these bugs. The fix above does not take `ChecksumSize` into account, and it would be better for us to discuss this issue in the new PR. Please check whether the failed tests are related to the modifications in this PR. Thanks again.
@zhangshuyan0 Sir, agreed, let's discuss this issue in the new PR. The failed tests all passed in my local environment.
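
For reference, here is a minimal, standalone sketch of how the per-chunk checksum overhead might be folded into the packet-size choice. The names, the helper method, and the `PKT_MAX_HEADER_LEN` stand-in value are assumptions for illustration only, not the actual patch; the real fix belongs in the new PR.

```java
// Standalone sketch: choosing a packet size that reserves room for one
// checksum per chunk. PKT_MAX_HEADER_LEN below is an illustrative stand-in
// for PacketHeader.PKT_MAX_HEADER_LEN (whose real value is derived from the
// protobuf packet header), and choosePacketSize() is a hypothetical helper.
public class PacketSizeSketch {
  static final int PKT_MAX_HEADER_LEN = 33; // stand-in value for illustration

  // Returns the packet size to pass to computePacketChunkSize(), capping it
  // at writePacketSize and accounting for per-chunk checksum bytes.
  static int choosePacketSize(long blockSize, long bytesCurBlock,
      int bytesPerChecksum, int checksumSize, int writePacketSize) {
    long remaining = blockSize - bytesCurBlock;
    // Chunks needed to carry the remaining bytes, rounded up.
    long numChunks = (remaining + bytesPerChecksum - 1) / bytesPerChecksum;
    // Each chunk travels with its checksum, so the packet body needs
    // remaining + numChunks * checksumSize bytes, plus the header.
    long needed = remaining + numChunks * checksumSize + PKT_MAX_HEADER_LEN;
    return (int) Math.min(needed, writePacketSize);
  }

  public static void main(String[] args) {
    // Example: 1024 bytes left in the block, 512-byte chunks, 4-byte checksums.
    // -> 1024 + 2 * 4 + 33 = 1065, well under a 64 KiB writePacketSize cap.
    System.out.println(choosePacketSize(134217728L, 134217728L - 1024,
        512, 4, 65536));
  }
}
```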
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]