zhangshuyan0 commented on PR #6368: URL: https://github.com/apache/hadoop/pull/6368#issuecomment-1903015641
> Sir, very nice catch. I think the code below may resolve the problem you found. Please take a look when you are free. I will submit another PR to fix it and add a UT.
>
> ```java
> if (!getStreamer().getAppendChunk()) {
>   int psize = 0;
>   if (blockSize == getStreamer().getBytesCurBlock()) {
>     psize = writePacketSize;
>   } else if (blockSize - getStreamer().getBytesCurBlock() + PacketHeader.PKT_MAX_HEADER_LEN
>       < writePacketSize) {
>     psize = (int) (blockSize - getStreamer().getBytesCurBlock()) + PacketHeader.PKT_MAX_HEADER_LEN;
>   } else {
>     psize = (int) Math.min(blockSize - getStreamer().getBytesCurBlock(), writePacketSize);
>   }
>   computePacketChunkSize(psize, bytesPerChecksum);
> }
> ```

Thank you very much for investing your time in fixing these bugs. The fix above does not take the checksum size into account, so it would be better to discuss this issue in the new PR. Please also check whether the failed tests are related to the changes in this PR. Thanks again.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
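[Editor's note] To illustrate why the checksum size matters here: `computePacketChunkSize` divides the packet body into units of `bytesPerChecksum` data bytes *plus* the per-chunk checksum bytes, so a size computation that ignores the checksum overestimates how many chunks fit. The following is a minimal standalone sketch of that arithmetic; the names mirror HDFS's `DFSOutputStream`, but the header-length constant (33 bytes) and the method itself are illustrative assumptions, not the real implementation.

```java
// Standalone sketch of HDFS-style packet/chunk sizing arithmetic.
// NOT the real DFSOutputStream code; PKT_MAX_HEADER_LEN = 33 is an assumption.
public class PacketSizeSketch {
    static final int PKT_MAX_HEADER_LEN = 33; // assumed header budget per packet

    // Each chunk carried in a packet occupies bytesPerChecksum data bytes
    // plus checksumSize checksum bytes, so both must be counted when
    // dividing the packet body into chunks.
    static int chunksPerPacket(int psize, int bytesPerChecksum, int checksumSize) {
        int bodySize = psize - PKT_MAX_HEADER_LEN;          // bytes left after header
        int chunkSize = bytesPerChecksum + checksumSize;    // checksum included
        return Math.max(bodySize / chunkSize, 1);
    }

    public static void main(String[] args) {
        // 64 KiB packet, 512-byte checksum chunks, 4-byte CRC32C checksums.
        // Including the checksum yields 126 chunks; ignoring it yields 127,
        // i.e. one chunk too many for the packet to actually hold.
        System.out.println(chunksPerPacket(64 * 1024, 512, 4)); // 126
        System.out.println(chunksPerPacket(64 * 1024, 512, 0)); // 127
    }
}
```

The one-chunk discrepancy is exactly the kind of off-by-one that shows up when a size computation omits the per-chunk checksum bytes.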