aswinshakil commented on code in PR #7943:
URL: https://github.com/apache/ozone/pull/7943#discussion_r1978417728
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/FilePerBlockStrategy.java:
##########
@@ -174,8 +175,25 @@ public void writeChunk(Container container, BlockID blockID, ChunkInfo info,
       ChunkUtils.validateChunkSize(channel, info, chunkFile.getName());
     }
-    ChunkUtils
-        .writeData(channel, chunkFile.getName(), data, offset, len, volume);
+    long fileLengthBeforeWrite;
+    try {
+      fileLengthBeforeWrite = channel.size();
+    } catch (IOException e) {
+      throw new StorageContainerException("IO error encountered while " +
+          "getting the file size for " + chunkFile.getName(),
+          CHUNK_FILE_INCONSISTENCY);
+    }
+
+    ChunkUtils.writeData(channel, chunkFile.getName(), data, offset, len,
+        volume);
+
+    // When overwriting, update the bytes used if the new length is greater than the old length.
+    // This is to ensure that the bytes used is updated correctly when overwriting a smaller chunk
+    // with a larger chunk.
Review Comment:
Right now we only validate the buffer size against the chunk length; there is no validation for the scenario above. And we won't be able to do that validation when the `putBlock` for the chunk is not there yet.
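
For illustration, here is a minimal sketch of the bytes-used accounting the diff above introduces. The class and method names are hypothetical (not the actual Ozone code): the idea is that on an overwrite, only growth past the old file length counts as newly used space.

```java
// Hypothetical helper sketching the accounting in the diff above:
// only the growth past the previous file length is newly used space.
public class BytesUsedSketch {

  /**
   * Returns how many additional bytes the container should account for
   * after a chunk write.
   *
   * @param fileLengthBeforeWrite the channel size before the write
   * @param offset                write offset within the chunk file
   * @param len                   number of bytes written
   */
  static long additionalBytesUsed(long fileLengthBeforeWrite, long offset,
      long len) {
    // The file only grows if the write extends past its previous end.
    long fileLengthAfterWrite = Math.max(fileLengthBeforeWrite, offset + len);
    return fileLengthAfterWrite - fileLengthBeforeWrite;
  }

  public static void main(String[] args) {
    // Fresh write: the whole chunk counts.
    System.out.println(additionalBytesUsed(0, 0, 4096));    // 4096
    // Overwriting with a smaller chunk: no growth, no extra bytes used.
    System.out.println(additionalBytesUsed(4096, 0, 1024)); // 0
    // Overwriting a smaller chunk with a larger one: only the delta counts.
    System.out.println(additionalBytesUsed(1024, 0, 4096)); // 3072
  }
}
```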
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]