errose28 commented on code in PR #7111:
URL: https://github.com/apache/ozone/pull/7111#discussion_r1761710432


##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -970,6 +972,53 @@ ContainerCommandResponseProto handleWriteChunk(
     return getWriteChunkResponseSuccess(request, blockDataProto);
   }
 
+  /**
+   * Handle Write Chunk operation for closed container. Calls ChunkManager to process the request.
+   *
+   */
+  private void writeChunkForClosedContainer(ChunkInfo chunkInfo, BlockID blockID,
+                                            ChunkBuffer data, KeyValueContainer kvContainer,
+                                            BlockData blockData) {
+
+    try {
+      Preconditions.checkNotNull(chunkInfo);
+      Preconditions.checkNotNull(data);
+      long writeChunkStartTime = Time.monotonicNowNanos();
+      checkContainerClose(kvContainer);
+
+      DispatcherContext dispatcherContext = DispatcherContext.getHandleWriteChunk();
+
+      // Set CHUNK_OVERWRITE to overwrite existing chunk.
+      chunkInfo.addMetadata(OzoneConsts.CHUNK_OVERWRITE, "true");

Review Comment:
   Let's allow the caller to specify whether this is an overwrite, with a
   boolean flag or something similar passed into this method. The
   reconciliation algorithm will know whether it is adding new data or
   replacing existing data when it calls this method. That avoids any
   surprises if we always assume overwrite even when that is not the intent.
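
   For example (just a sketch; the flag name, placement, and exception handling
   are illustrative, not the final shape):

   ```java
   // Sketch only: the reconciliation code states its intent instead of the
   // method always assuming an overwrite.
   private void writeChunkForClosedContainer(ChunkInfo chunkInfo, BlockID blockID,
       ChunkBuffer data, KeyValueContainer kvContainer, BlockData blockData,
       boolean overwrite) throws StorageContainerException {
     Preconditions.checkNotNull(chunkInfo);
     Preconditions.checkNotNull(data);
     checkContainerClose(kvContainer);
     DispatcherContext dispatcherContext = DispatcherContext.getHandleWriteChunk();
     if (overwrite) {
       // Only mark the chunk for overwrite when we are replacing existing data.
       chunkInfo.addMetadata(OzoneConsts.CHUNK_OVERWRITE, "true");
     }
     chunkManager.writeChunk(kvContainer, blockID, chunkInfo, data, dispatcherContext);
   }
   ```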



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -970,6 +972,53 @@ ContainerCommandResponseProto handleWriteChunk(
     return getWriteChunkResponseSuccess(request, blockDataProto);
   }
 
+  /**
+   * Handle Write Chunk operation for closed container. Calls ChunkManager to process the request.
+   *
+   */
+  private void writeChunkForClosedContainer(ChunkInfo chunkInfo, BlockID blockID,
+                                            ChunkBuffer data, KeyValueContainer kvContainer,
+                                            BlockData blockData) {
+
+    try {
+      Preconditions.checkNotNull(chunkInfo);
+      Preconditions.checkNotNull(data);
+      long writeChunkStartTime = Time.monotonicNowNanos();
+      checkContainerClose(kvContainer);
+
+      DispatcherContext dispatcherContext = DispatcherContext.getHandleWriteChunk();
+
+      // Set CHUNK_OVERWRITE to overwrite existing chunk.
+      chunkInfo.addMetadata(OzoneConsts.CHUNK_OVERWRITE, "true");
+      chunkManager.writeChunk(kvContainer, blockID, chunkInfo, data,
+          dispatcherContext);
+
+      // Increment write stats for WriteChunk after write.
+      metrics.incContainerBytesStats(Type.WriteChunk, chunkInfo.getLen());
+      metrics.incContainerOpsLatencies(Type.WriteChunk, Time.monotonicNowNanos() - writeChunkStartTime);
+
+      if (blockData != null) {

Review Comment:
   Should we separate the write chunk and put block methods?
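
   Roughly something like the following (a sketch; the helper names, signatures,
   and metrics calls are illustrative):

   ```java
   // Sketch: keep the chunk write and the block-metadata update in separate
   // helpers so the reconciliation code can call them independently.
   private void writeChunkForClosedContainer(ChunkInfo chunkInfo, BlockID blockID,
       ChunkBuffer data, KeyValueContainer kvContainer) throws StorageContainerException {
     checkContainerClose(kvContainer);
     DispatcherContext dispatcherContext = DispatcherContext.getHandleWriteChunk();
     chunkManager.writeChunk(kvContainer, blockID, chunkInfo, data, dispatcherContext);
     metrics.incContainerBytesStats(Type.WriteChunk, chunkInfo.getLen());
   }

   private void putBlockForClosedContainer(KeyValueContainer kvContainer, BlockData blockData)
       throws IOException {
     // Persist the updated chunk list and BCSID for the block in RocksDB.
     blockManager.putBlock(kvContainer, blockData);
     metrics.incContainerBytesStats(Type.PutBlock, blockData.getSize());
   }
   ```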



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -1146,6 +1195,38 @@ private void checkContainerOpen(KeyValueContainer kvContainer)
     throw new StorageContainerException(msg, result);
   }
 
+  /**
+   * Check if container is Closed. Throw exception otherwise.
+   * @param kvContainer
+   * @throws StorageContainerException
+   */
+  private void checkContainerClose(KeyValueContainer kvContainer)
+      throws StorageContainerException {
+
+    final State containerState = kvContainer.getContainerState();
+    if (containerState == State.QUASI_CLOSED || containerState == State.CLOSED
+        || containerState == State.UNHEALTHY) {

Review Comment:
   nit: The line length limit was increased to 120, so you might need to update
   your IDE settings to avoid overly aggressive wrapping.



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -970,6 +972,53 @@ ContainerCommandResponseProto handleWriteChunk(
     return getWriteChunkResponseSuccess(request, blockDataProto);
   }
 
+  /**
+   * Handle Write Chunk operation for closed container. Calls ChunkManager to process the request.
+   *
+   */
+  private void writeChunkForClosedContainer(ChunkInfo chunkInfo, BlockID blockID,
+                                            ChunkBuffer data, KeyValueContainer kvContainer,
+                                            BlockData blockData) {
+
+    try {
+      Preconditions.checkNotNull(chunkInfo);
+      Preconditions.checkNotNull(data);
+      long writeChunkStartTime = Time.monotonicNowNanos();
+      checkContainerClose(kvContainer);
+
+      DispatcherContext dispatcherContext = DispatcherContext.getHandleWriteChunk();
+
+      // Set CHUNK_OVERWRITE to overwrite existing chunk.
+      chunkInfo.addMetadata(OzoneConsts.CHUNK_OVERWRITE, "true");
+      chunkManager.writeChunk(kvContainer, blockID, chunkInfo, data,
+          dispatcherContext);
+
+      // Increment write stats for WriteChunk after write.
+      metrics.incContainerBytesStats(Type.WriteChunk, chunkInfo.getLen());
+      metrics.incContainerOpsLatencies(Type.WriteChunk, Time.monotonicNowNanos() - writeChunkStartTime);
+
+      if (blockData != null) {
+        long putBlockStartTime = Time.monotonicNowNanos();
+        // Add ChunkInfo to BlockData to be persisted in RocksDb
+        List<ContainerProtos.ChunkInfo> chunks = blockData.getChunks();
+        chunks.add(chunkInfo.getProtoBufMessage());
+        blockData.setChunks(chunks);

Review Comment:
   I think this list is implicitly sorted by offset at the time of writing.
   It's possible the chunk we are patching falls in the middle, so we will need
   to re-sort the list. This should also show up once tests are added that read
   data patched with this method.
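
   Something along these lines could keep the list ordered (a sketch; the
   defensive copy and sort key are assumptions, and it needs java.util.ArrayList
   and java.util.Comparator):

   ```java
   // Re-sort by offset, since the patched chunk may land in the middle of the
   // block and readers expect the chunk list to be ordered.
   List<ContainerProtos.ChunkInfo> chunks = new ArrayList<>(blockData.getChunks());
   chunks.add(chunkInfo.getProtoBufMessage());
   chunks.sort(Comparator.comparingLong(ContainerProtos.ChunkInfo::getOffset));
   blockData.setChunks(chunks);
   ```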



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -970,6 +972,53 @@ ContainerCommandResponseProto handleWriteChunk(
     return getWriteChunkResponseSuccess(request, blockDataProto);
   }
 
+  /**
+   * Handle Write Chunk operation for closed container. Calls ChunkManager to process the request.
+   *
+   */
+  private void writeChunkForClosedContainer(ChunkInfo chunkInfo, BlockID blockID,
+                                            ChunkBuffer data, KeyValueContainer kvContainer,
+                                            BlockData blockData) {
+
+    try {
+      Preconditions.checkNotNull(chunkInfo);
+      Preconditions.checkNotNull(data);
+      long writeChunkStartTime = Time.monotonicNowNanos();
+      checkContainerClose(kvContainer);
+
+      DispatcherContext dispatcherContext = DispatcherContext.getHandleWriteChunk();
+
+      // Set CHUNK_OVERWRITE to overwrite existing chunk.
+      chunkInfo.addMetadata(OzoneConsts.CHUNK_OVERWRITE, "true");
+      chunkManager.writeChunk(kvContainer, blockID, chunkInfo, data,
+          dispatcherContext);
+
+      // Increment write stats for WriteChunk after write.
+      metrics.incContainerBytesStats(Type.WriteChunk, chunkInfo.getLen());
+      metrics.incContainerOpsLatencies(Type.WriteChunk, Time.monotonicNowNanos() - writeChunkStartTime);
+
+      if (blockData != null) {
+        long putBlockStartTime = Time.monotonicNowNanos();
+        // Add ChunkInfo to BlockData to be persisted in RocksDb
+        List<ContainerProtos.ChunkInfo> chunks = blockData.getChunks();
+        chunks.add(chunkInfo.getProtoBufMessage());
+        blockData.setChunks(chunks);
+
+        // To be set from the Replica's BCSId
+        blockData.setBlockCommitSequenceId(dispatcherContext.getLogIndex());
+
+        blockManager.putBlock(kvContainer, blockData);

Review Comment:
   The `endOfBlock` boolean is implicitly set to true here. I think that's a
   quirk of the write path that we don't need to worry about in this case, and
   we can explicitly set it to false.
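
   If the `putBlock` overload that takes the flag is available, that could be as
   simple as:

   ```java
   // State the intent explicitly instead of relying on the two-argument
   // overload, which treats the write as end-of-block on the normal write path.
   blockManager.putBlock(kvContainer, blockData, false);
   ```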



##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -970,6 +972,53 @@ ContainerCommandResponseProto handleWriteChunk(
     return getWriteChunkResponseSuccess(request, blockDataProto);
   }
 
+  /**
+   * Handle Write Chunk operation for closed container. Calls ChunkManager to process the request.
+   *
+   */
+  private void writeChunkForClosedContainer(ChunkInfo chunkInfo, BlockID blockID,
+                                            ChunkBuffer data, KeyValueContainer kvContainer,
+                                            BlockData blockData) {
+
+    try {
+      Preconditions.checkNotNull(chunkInfo);
+      Preconditions.checkNotNull(data);
+      long writeChunkStartTime = Time.monotonicNowNanos();
+      checkContainerClose(kvContainer);
+
+      DispatcherContext dispatcherContext = DispatcherContext.getHandleWriteChunk();
+
+      // Set CHUNK_OVERWRITE to overwrite existing chunk.
+      chunkInfo.addMetadata(OzoneConsts.CHUNK_OVERWRITE, "true");
+      chunkManager.writeChunk(kvContainer, blockID, chunkInfo, data,
+          dispatcherContext);
+
+      // Increment write stats for WriteChunk after write.
+      metrics.incContainerBytesStats(Type.WriteChunk, chunkInfo.getLen());
+      metrics.incContainerOpsLatencies(Type.WriteChunk, Time.monotonicNowNanos() - writeChunkStartTime);
+
+      if (blockData != null) {
+        long putBlockStartTime = Time.monotonicNowNanos();
+        // Add ChunkInfo to BlockData to be persisted in RocksDb
+        List<ContainerProtos.ChunkInfo> chunks = blockData.getChunks();
+        chunks.add(chunkInfo.getProtoBufMessage());
+        blockData.setChunks(chunks);
+
+        // To be set from the Replica's BCSId
+        blockData.setBlockCommitSequenceId(dispatcherContext.getLogIndex());

Review Comment:
   This will always be 0 because it was never set in the `DispatcherContext`.
   In the big picture of how this is used, the client should get the block
   data's BCSID when it reads the chunk and pass it into this method, probably
   already as part of the BlockData parameter.
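
   Something like this, where `replicaBlockData` is a hypothetical stand-in for
   the block data read from the source replica:

   ```java
   // Carry the BCSID over from the replica the chunk was read from; the
   // dispatcher context's log index is never set on this path and stays 0.
   blockData.setBlockCommitSequenceId(replicaBlockData.getBlockCommitSequenceId());
   ```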


