hemantk-12 commented on code in PR #5301:
URL: https://github.com/apache/ozone/pull/5301#discussion_r1335088161


##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/snapshot/OMSnapshotSetPropertyResponse.java:
##########
@@ -0,0 +1,66 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.snapshot;
+
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+
+import javax.annotation.Nonnull;
+import java.io.IOException;
+import java.util.Map;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.SNAPSHOT_INFO_TABLE;
+
+/**
+ * Response for OMSnapshotSetPropertyRequest.
+ */
+@CleanupTableInfo(cleanupTables = {SNAPSHOT_INFO_TABLE})
+public class OMSnapshotSetPropertyResponse extends OMClientResponse {
+  private final Map<String, SnapshotInfo> updatedSnapInfos;
+
+  public OMSnapshotSetPropertyResponse(
+      @Nonnull OMResponse omResponse,
+      Map<String, SnapshotInfo> updatedSnapInfos) {

Review Comment:
   Can you please add the `@Nonnull` annotation for `updatedSnapInfos` as well? It should not be null; otherwise it will throw an exception at line 61.
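A rough sketch of the suggested contract (stand-in types, not the actual Ozone classes; the real constructor would also carry the `@Nonnull` annotation, matching `omResponse`):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Stand-in for OMSnapshotSetPropertyResponse, illustrating the non-null
// contract on updatedSnapInfos. Objects.requireNonNull fails fast with a
// clear message instead of a later NPE when the map is first dereferenced.
class SnapshotSetPropertyResponseSketch {
  private final Map<String, String> updatedSnapInfos;

  SnapshotSetPropertyResponseSketch(Map<String, String> updatedSnapInfos) {
    this.updatedSnapInfos = Objects.requireNonNull(
        updatedSnapInfos, "updatedSnapInfos must not be null");
  }

  Map<String, String> getUpdatedSnapInfos() {
    return Collections.unmodifiableMap(updatedSnapInfos);
  }
}
```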



##########
hadoop-ozone/interface-client/src/main/proto/OmClientProtocol.proto:
##########
@@ -1815,6 +1818,16 @@ message SnapshotPurgeRequest {
   repeated string updatedSnapshotDBKey = 2;
 }
 
+message SetSnapshotPropertyRequest {
+  repeated SnapshotProperty snapshotProperty = 1;

Review Comment:
   I'm not a fan of the batch API, for a couple of reasons.
   - Error handling: for a batch request, if one of the operations fails, the whole request fails and throws an error, which can leave things in an inconsistent state.
   - Performance: batch APIs don't give accurate numbers for server-side load handling, latency, and other performance metrics.
   
   Not sure how much it matters in this case.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/KeyDeletingService.java:
##########
@@ -88,6 +93,9 @@ public class KeyDeletingService extends AbstractKeyDeletingService {
   private final int keyLimitPerTask;
   private final AtomicLong deletedKeyCount;
   private final AtomicBoolean suspended;
+  private final Map<String, Long> exclusiveSizeList;
+  private final Map<String, Long> exclusiveReplicatedSizeList;
+  private final Set<String> completedExclusiveSizeList;

Review Comment:
   nit: I think this could be a local variable. I see you are clearing it after updating the snapshot info with the exclusive size, but someone might remove that line by mistake, and no test would catch it unless it explicitly checks `completedExclusiveSizeList` after the run.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/KeyDeletingService.java:
##########
@@ -348,14 +436,85 @@ private void processSnapshotDeepClean(int delCount)
               if (previousSnapshot != null) {
                 rcPrevOmSnapshot.close();
               }
+              if (previousToPrevSnapshot != null) {

Review Comment:
   nit: you can use [IOUtils#closeQuietly()](https://github.com/apache/ozone/blob/bd110a477e5fc339aa3117f8facae06a1d2c7223/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/IOUtils.java#L91) to avoid the null checks.
   e.g. 
   ```
   IOUtils.closeQuietly(rcPrevOmSnapshot, rcPrevToPrevOmSnapshot);
   ```



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/snapshot/OMSnapshotSetPropertyRequest.java:
##########
@@ -0,0 +1,103 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.request.snapshot;
+
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.snapshot.OMSnapshotSetPropertyResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.SnapshotProperty;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Updates the exclusive size of the snapshot.
+ */
+public class OMSnapshotSetPropertyRequest extends OMClientRequest {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMSnapshotSetPropertyRequest.class);
+
+  public OMSnapshotSetPropertyRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMClientResponse omClientResponse = null;
+    OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+        OmResponseUtil.getOMResponseBuilder(getOmRequest());
+    OzoneManagerProtocolProtos.SetSnapshotPropertyRequest
+        setSnapshotPropertyRequest = getOmRequest()
+        .getSetSnapshotPropertyRequest();
+
+    List<SnapshotProperty> snapshotPropertyList = setSnapshotPropertyRequest
+        .getSnapshotPropertyList();
+    Map<String, SnapshotInfo> updatedSnapInfos = new HashMap<>();
+
+    try {
+      for (SnapshotProperty snapshotProperty: snapshotPropertyList) {
+        String snapshotKey = snapshotProperty.getSnapshotKey();
+        long exclusiveSize = snapshotProperty.getExclusiveSize();
+        long exclusiveReplicatedSize = snapshotProperty
+            .getExclusiveReplicatedSize();
+        SnapshotInfo snapshotInfo = metadataManager
+            .getSnapshotInfoTable().get(snapshotKey);
+
+        if (snapshotInfo == null) {
+          LOG.error("SnapshotInfo for Snapshot: {} is not found", snapshotKey);
+          continue;

Review Comment:
   I understand why you are skipping when the key is not present, but I think it's incorrect from the API's perspective.
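For what it's worth, one alternative shape (hypothetical stand-in types, not the real Ozone API) that surfaces the missing key to the caller instead of silently skipping it:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: fail the whole request when a snapshot key is missing, rather
// than skipping it. In the real request this exception would translate
// into an error OMResponse for the client.
class SnapshotPropertyValidator {
  static Map<String, Long> applySizes(Map<String, Long> snapshotInfoTable,
                                      Map<String, Long> requestedSizes) {
    Map<String, Long> updated = new HashMap<>();
    for (Map.Entry<String, Long> e : requestedSizes.entrySet()) {
      Long current = snapshotInfoTable.get(e.getKey());
      if (current == null) {
        throw new IllegalArgumentException(
            "SnapshotInfo not found for key: " + e.getKey());
      }
      updated.put(e.getKey(), current + e.getValue());
    }
    return updated;
  }
}
```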



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/KeyDeletingService.java:
##########
@@ -348,14 +436,85 @@ private void processSnapshotDeepClean(int delCount)
               if (previousSnapshot != null) {
                 rcPrevOmSnapshot.close();
               }
+              if (previousToPrevSnapshot != null) {
+                rcPrevToPrevOmSnapshot.close();
+              }
             }
           }
 
         }
       }
+
+      updateSnapshotExclusiveSize();
       updateDeepCleanedSnapshots(deepCleanedSnapshots);
     }
 
+    private OmKeyInfo getPreviousSnapshotKeyName(
+        OmKeyInfo keyInfo, OmBucketInfo bucketInfo, long volumeId,
+        Table<String, String> snapRenamedTable,
+        Table<String, OmKeyInfo> previousKeyTable) throws IOException {
+
+      if (keyInfo == null) {
+        return null;
+      }
+
+      String dbKeyPrevSnap;
+      if (bucketInfo.getBucketLayout().isFileSystemOptimized()) {
+        dbKeyPrevSnap = getOzoneManager().getMetadataManager().getOzonePathKey(
+            volumeId,
+            bucketInfo.getObjectID(),
+            keyInfo.getParentObjectID(),
+            keyInfo.getFileName());
+      } else {
+        dbKeyPrevSnap = getOzoneManager().getMetadataManager().getOzoneKey(
+            keyInfo.getVolumeName(),
+            keyInfo.getBucketName(),
+            keyInfo.getKeyName());
+      }
+
+      String dbRenameKey = getOzoneManager().getMetadataManager().getRenameKey(
+          keyInfo.getVolumeName(),
+          keyInfo.getBucketName(),
+          keyInfo.getObjectID());
+
+      String renamedKey = snapRenamedTable.getIfExist(dbRenameKey);
+      dbKeyPrevSnap = renamedKey != null ? renamedKey : dbKeyPrevSnap;
+
+      return previousKeyTable.get(dbKeyPrevSnap);
+    }
+
+    private void updateSnapshotExclusiveSize() {
+
+      List<SnapshotProperty> snapshotPropertyList = new ArrayList<>();
+      for (String dbKey: completedExclusiveSizeList) {
+        SnapshotProperty snapshotProperty = SnapshotProperty.newBuilder()
+                .setSnapshotKey(dbKey)
+                .setExclusiveSize(exclusiveSizeList.get(dbKey))
+                .setExclusiveReplicatedSize(
+                    exclusiveReplicatedSizeList.get(dbKey))
+                .build();
+        snapshotPropertyList.add(snapshotProperty);
+        exclusiveSizeList.remove(dbKey);
+        exclusiveReplicatedSizeList.remove(dbKey);
+      }
+
+      if (!snapshotPropertyList.isEmpty()) {

Review Comment:
   nit: this could be the first check in `updateSnapshotExclusiveSize()`: if `completedExclusiveSizeList` is empty, return early; otherwise update the snapshot info.
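Roughly like this (stand-in types; the real code would build `SnapshotProperty` protos and submit a `SetSnapshotPropertyRequest`):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the suggested early return: bail out before any work when no
// snapshot finished its deep clean in this run. Returns whether an update
// would have been sent.
class ExclusiveSizeUpdater {
  static boolean updateSnapshotExclusiveSize(Set<String> completed,
                                             Map<String, Long> exclusiveSizes,
                                             List<String> sentProperties) {
    if (completed.isEmpty()) {
      return false;  // early return: nothing to update
    }
    for (String dbKey : completed) {
      sentProperties.add(dbKey + "=" + exclusiveSizes.get(dbKey));
    }
    return true;
  }
}
```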



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/KeyDeletingService.java:
##########
@@ -309,6 +348,47 @@ private void processSnapshotDeepClean(int delCount)
                 RepeatedOmKeyInfo newRepeatedOmKeyInfo =
                     new RepeatedOmKeyInfo();
                 for (OmKeyInfo keyInfo : repeatedOmKeyInfo.getOmKeyInfoList()) {
+                  // To calculate Exclusive Size for current snapshot, Check
+                  // the next snapshot deletedTable if the deleted key is
+                  // referenced in current snapshot and not referenced in the
+                  // previous snapshot then that key is exclusive to the current
+                  // snapshot. Here since we are only iterating through
+                  // deletedTable we can check the previous and previous to
+                  // previous snapshot to achieve the same.
+                  if (previousSnapshot != null) {
+                    String prevSnapKey = previousSnapshot.getTableKey();
+                    long exclusiveReplicatedSize =
+                        exclusiveReplicatedSizeList.getOrDefault(
+                            prevSnapKey, 0L) + keyInfo.getReplicatedSize();
+                    long exclusiveSize = exclusiveSizeList.getOrDefault(
+                        prevSnapKey, 0L) + keyInfo.getDataSize();
+
+                    // If there is no previous to previous snapshot, then
+                    // the previous snapshot is the first snapshot.
+                    if (previousToPrevSnapshot == null) {
+                      exclusiveSizeList.put(prevSnapKey, exclusiveSize);
+                      exclusiveReplicatedSizeList.put(prevSnapKey,
+                          exclusiveReplicatedSize);
+                    } else {
+                      OmKeyInfo keyInfoPrevSnapshot =
+                          getPreviousSnapshotKeyName(
+                              keyInfo, bucketInfo, volumeId,
+                              snapRenamedTable, previousKeyTable);
+                      OmKeyInfo keyInfoPrevToPrevSnapshot =
+                          getPreviousSnapshotKeyName(
+                              keyInfoPrevSnapshot, bucketInfo, volumeId,
+                              prevRenamedTable, previousToPrevKeyTable);
+                      // If the previous to previous snapshot doesn't
+                      // have the key, then it is exclusive size for the
+                      // previous snapshot.
+                      if (keyInfoPrevToPrevSnapshot == null) {
+                        exclusiveSizeList.put(prevSnapKey, exclusiveSize);
+                        exclusiveReplicatedSizeList.put(prevSnapKey,
+                            exclusiveReplicatedSize);
+                      }
+                    }
+                  }

Review Comment:
   It would be better if you extract out this code to a helper function.
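For example, the accounting could live in a small helper along these lines (stand-in signature; the real helper would take `OmKeyInfo` and the snapshot tables):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the extracted helper: per-key exclusive-size accounting.
// presentInPrevToPrev == false covers both cases in the original code:
// no previous-to-previous snapshot exists, or it does not contain the key.
class ExclusiveSizeCalculator {
  final Map<String, Long> exclusiveSizeMap = new HashMap<>();

  void addIfExclusive(String prevSnapKey, long dataSize,
                      boolean presentInPrevToPrev) {
    if (!presentInPrevToPrev) {
      // Key is exclusive to the previous snapshot: accumulate its size.
      exclusiveSizeMap.merge(prevSnapKey, dataSize, Long::sum);
    }
  }
}
```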



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/KeyDeletingService.java:
##########
@@ -269,7 +281,15 @@ private void processSnapshotDeepClean(int delCount)
             String snapshotBucketKey = dbBucketKey + OzoneConsts.OM_KEY_PREFIX;
             SnapshotInfo previousSnapshot = getPreviousActiveSnapshot(
                 currSnapInfo, snapChainManager, omSnapshotManager);
+            SnapshotInfo previousToPrevSnapshot = null;
+
+            if (previousSnapshot != null) {

Review Comment:
   nit: it would be cleaner to use `Optional` here; this is a perfect case for it.
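Something along these lines (the lookup is a stand-in for `getPreviousActiveSnapshot()`):

```java
import java.util.Optional;
import java.util.function.UnaryOperator;

// Sketch of the Optional-based shape: the previous-to-previous snapshot
// is only resolved when the previous one exists, with no explicit null
// checks. previousOf returns null at the head of the chain.
class SnapshotChainSketch {
  static Optional<String> previousToPrevious(String current,
                                             UnaryOperator<String> previousOf) {
    return Optional.ofNullable(previousOf.apply(current))
        .map(previousOf);  // map() skips the second hop when empty, and
                           // yields empty if previousOf returns null
  }
}
```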



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/KeyDeletingService.java:
##########
@@ -309,6 +348,47 @@ private void processSnapshotDeepClean(int delCount)
                 RepeatedOmKeyInfo newRepeatedOmKeyInfo =
                     new RepeatedOmKeyInfo();
                 for (OmKeyInfo keyInfo : repeatedOmKeyInfo.getOmKeyInfoList()) {
+                  // To calculate Exclusive Size for current snapshot, Check
+                  // the next snapshot deletedTable if the deleted key is
+                  // referenced in current snapshot and not referenced in the
+                  // previous snapshot then that key is exclusive to the current
+                  // snapshot. Here since we are only iterating through
+                  // deletedTable we can check the previous and previous to

Review Comment:
   nit: it would be great if you could make clear in the comment that `rcCurrOmSnapshot` is next, `rcPrevOmSnapshot` is current, and `rcPrevToPrevOmSnapshot` is previous, as per the [doc](file:///Users/iamgroot/Downloads/Snapshot%20Size%20Calculation.pdf).



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/KeyDeletingService.java:
##########
@@ -88,6 +93,9 @@ public class KeyDeletingService extends AbstractKeyDeletingService {
   private final int keyLimitPerTask;
   private final AtomicLong deletedKeyCount;
   private final AtomicBoolean suspended;
+  private final Map<String, Long> exclusiveSizeList;
+  private final Map<String, Long> exclusiveReplicatedSizeList;

Review Comment:
   Please include "map" in the variable names, e.g. `snapshotToExclusiveSizeMap`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

