hemantk-12 commented on code in PR #7200:
URL: https://github.com/apache/ozone/pull/7200#discussion_r1759837166


##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManager.java:
##########
@@ -216,6 +242,28 @@ OmMultipartUploadListParts listParts(String volumeName, String bucketName,
    */
   Table.KeyValue<String, OmKeyInfo> getPendingDeletionDir() throws IOException;
 
+  /**
+   * Returns an iterator for pending deleted directories.
+   * @throws IOException
+   */
+  TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>> getPendingDeletionDirs() throws IOException;

Review Comment:
   I don't see it being used anywhere. If that's the case, please remove it.



##########
hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java:
##########
@@ -116,6 +116,22 @@ public interface OMMetadataManager extends DBStoreHAManager {
    */
   String getBucketKey(String volume, String bucket);
 
+  /**
+   * Given a volume and bucket, return the corresponding DB key prefix.
+   *
+   * @param volume - User name
+   * @param bucket - Bucket name
+   */
+  String getBucketKeyPrefix(String volume, String bucket);
+
+  /**
+   * Given a volume and bucket, return the corresponding DB key prefix.

Review Comment:
   Can you please add how it differs from `getBucketKeyPrefix()` in the description? Maybe provide an example.
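   A hypothetical sketch of the kind of example the description could give. The exact key formats below are assumptions for illustration, not taken from the PR: the plain prefix is built from the volume/bucket names, while an FSO-style prefix (the sibling `getBucketKeyPrefixFSO`) would be built from object IDs.

   ```java
   // Illustrative only: key formats here are assumptions, not from the PR.
   public class BucketPrefixSketch {
     // Name-based prefix, e.g. "/vol1/bucket1/"
     static String bucketKeyPrefix(String volume, String bucket) {
       return "/" + volume + "/" + bucket + "/";
     }

     // FSO-style prefix built from object IDs instead of names.
     static String bucketKeyPrefixFSO(long volumeId, long bucketId) {
       return "/" + volumeId + "/" + bucketId + "/";
     }
   }
   ```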



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/SnapshotChainManager.java:
##########
@@ -382,6 +389,41 @@ public UUID getLatestGlobalSnapshotId() throws IOException {
     return latestGlobalSnapshotId;
   }
 
+  /**
+   * Get the oldest global snapshot in the snapshot chain.
+   */
+  public UUID getOldestGlobalSnapshotId() throws IOException {
+    validateSnapshotChain();
+    return oldestGlobalSnapshotId;
+  }
+
+  public Iterator<UUID> iterator(final boolean reverse) throws IOException {
+    validateSnapshotChain();
+    return new Iterator<UUID>() {
+      private UUID currentSnapshotId = reverse ? getLatestGlobalSnapshotId() : getOldestGlobalSnapshotId();
+      @Override
+      public boolean hasNext() {
+        try {
+          return reverse ? hasPreviousGlobalSnapshot(currentSnapshotId) : hasNextGlobalSnapshot(currentSnapshotId);
+        } catch (IOException e) {
+          return false;
+        }
+      }
+
+      @Override
+      public UUID next() {
+        try {
+          UUID prevSnapshotId = currentSnapshotId;
+          currentSnapshotId =
+              reverse ? previousGlobalSnapshot(currentSnapshotId) : nextGlobalSnapshot(currentSnapshotId);
+          return prevSnapshotId;
+        } catch (IOException e) {
+          throw new UncheckedIOException("Error while getting next snapshot for " + currentSnapshotId, e);

Review Comment:
   ```suggestion
            throw new NoSuchElementException("Error while getting next snapshot for " + currentSnapshotId, e);
   ```
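   The suggestion follows the standard `Iterator` contract: `next()` signals exhaustion with `NoSuchElementException` rather than an unrelated unchecked exception. A minimal sketch of that contract (note the two-argument `NoSuchElementException(String, Throwable)` constructor used in the suggestion only exists since Java 15):

   ```java
   import java.util.Iterator;
   import java.util.NoSuchElementException;

   // Minimal sketch of the Iterator contract the suggestion relies on.
   public class CountdownIterator implements Iterator<Integer> {
     private int remaining;

     public CountdownIterator(int start) {
       this.remaining = start;
     }

     @Override
     public boolean hasNext() {
       return remaining > 0;
     }

     @Override
     public Integer next() {
       if (!hasNext()) {
         // Callers that over-iterate get the standard signal.
         throw new NoSuchElementException("iteration exhausted");
       }
       return remaining--;
     }
   }
   ```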



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/KeyDeletingService.java:
##########
@@ -92,6 +92,7 @@ public class KeyDeletingService extends AbstractKeyDeletingService {
   private final Map<String, Long> exclusiveReplicatedSizeMap;
   private final Set<String> completedExclusiveSizeSet;
   private final Map<String, String> snapshotSeekMap;
+  private boolean isRunningOnAOS;

Review Comment:
   Same as above in `DirectoryDeletingService`.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/DirectoryDeletingService.java:
##########
@@ -144,6 +155,7 @@ public BackgroundTaskResult call() {
         if (LOG.isDebugEnabled()) {
           LOG.debug("Running DirectoryDeletingService");
         }
+        isRunningOnAOS = true;

Review Comment:
   nit: I don't think there is a check for whether it is running on AOS or not? To me, it is just checking whether `DirectoryDeletingService` is running or not. It would be better to use a `CountDownLatch` or something easier to understand.
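   A rough sketch of the latch-based alternative the comment suggests; the class and method names here are illustrative, not from the PR:

   ```java
   import java.util.concurrent.CountDownLatch;
   import java.util.concurrent.TimeUnit;

   // Illustrative sketch (hypothetical names): the background task signals
   // that a run completed by counting down, and a waiter blocks on the
   // latch instead of polling a boolean flag like isRunningOnAOS.
   public class ServiceRunSignal {
     private final CountDownLatch runCompleted = new CountDownLatch(1);

     // Called by the background task when one run finishes.
     public void markRunCompleted() {
       runCompleted.countDown();
     }

     // Blocks until a run has completed or the timeout elapses.
     public boolean awaitRun(long timeout, TimeUnit unit) throws InterruptedException {
       return runCompleted.await(timeout, unit);
     }
   }
   ```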



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java:
##########
@@ -662,6 +663,61 @@ public PendingKeysDeletion getPendingDeletionKeys(final int count)
         .getPendingDeletionKeys(count, ozoneManager.getOmSnapshotManager());
   }
 
+  @Override
+  public List<Table.KeyValue<String, String>> getRenamesKeyEntries(
+      String volume, String bucket, String startKey, int count) throws IOException {
+    // Bucket prefix would be empty if volume is empty i.e. either null or "".
+    Optional<String> bucketPrefix = Optional.ofNullable(volume).map(vol -> vol.isEmpty() ? null : vol)
+        .map(vol -> metadataManager.getBucketKeyPrefix(vol, bucket));
+    List<Table.KeyValue<String, String>> renamedEntries = new ArrayList<>();
+    try (TableIterator<String, ? extends Table.KeyValue<String, String>>
+             renamedKeyIter = metadataManager.getSnapshotRenamedTable().iterator(bucketPrefix.orElse(""))) {
+
+      /* Seeking to the start key if it not null. The next key picked up would be ensured to start with the bucket
+         prefix, {@link org.apache.hadoop.hdds.utils.db.Table#iterator(bucketPrefix)} would ensure this.
+       */
+      if (startKey != null) {
+        renamedKeyIter.seek(startKey);
+      }
+      int currentCount = 0;
+      while (renamedKeyIter.hasNext() && currentCount < count) {
+        Table.KeyValue<String, String> kv = renamedKeyIter.next();
+        if (kv != null) {

Review Comment:
   Can it be null? `hasNext()` should do this check?



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java:
##########
@@ -662,6 +663,61 @@ public PendingKeysDeletion getPendingDeletionKeys(final int count)
         .getPendingDeletionKeys(count, ozoneManager.getOmSnapshotManager());
   }
 
+  @Override
+  public List<Table.KeyValue<String, String>> getRenamesKeyEntries(
+      String volume, String bucket, String startKey, int count) throws IOException {
+    // Bucket prefix would be empty if volume is empty i.e. either null or "".
+    Optional<String> bucketPrefix = Optional.ofNullable(volume).map(vol -> vol.isEmpty() ? null : vol)

Review Comment:
   Can `volume` and `bucket` be empty? If not, we should do a null check rather than having a null prefix.
   ```
   Objects.requireNonNull(volume, "Volume name is null.");
   Objects.requireNonNull(bucket, "Bucket name is null.");
   ```
   I checked the usage of `getRenamesKeyEntries`, and it is called only on snapshots, which means `volume` and `bucket` can't be null.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java:
##########
@@ -1976,6 +2032,27 @@ public Table.KeyValue<String, OmKeyInfo> getPendingDeletionDir()
     return null;
   }
 
+  @Override
+  public TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>> getPendingDeletionDirs()
+      throws IOException {
+    return this.getPendingDeletionDirs(null, null);
+  }
+
+  @Override
+  public TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>> getPendingDeletionDirs(String volume,
+                                                                                                   String bucket)
+      throws IOException {
+
+    // Either both volume & bucket should be null or none of them should be null.
+    if (!StringUtils.isBlank(volume) && StringUtils.isBlank(bucket) ||

Review Comment:
   nit: `StringUtils.isEmpty()` should be enough.
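   Plain-Java equivalents of the two Commons Lang checks, to show what the nit changes: `isEmpty()` is true only for null or `""`, while `isBlank()` additionally treats whitespace-only strings as blank.

   ```java
   // Semantics mirror StringUtils.isEmpty / StringUtils.isBlank.
   public class StringChecks {
     // true only for null or the empty string.
     static boolean isEmpty(CharSequence cs) {
       return cs == null || cs.length() == 0;
     }

     // also true for strings made entirely of whitespace.
     static boolean isBlank(CharSequence cs) {
       if (cs == null) {
         return true;
       }
       for (int i = 0; i < cs.length(); i++) {
         if (!Character.isWhitespace(cs.charAt(i))) {
           return false;
         }
       }
       return true;
     }
   }
   ```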



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java:
##########
@@ -662,6 +663,61 @@ public PendingKeysDeletion getPendingDeletionKeys(final int count)
         .getPendingDeletionKeys(count, ozoneManager.getOmSnapshotManager());
   }
 
+  @Override
+  public List<Table.KeyValue<String, String>> getRenamesKeyEntries(
+      String volume, String bucket, String startKey, int count) throws IOException {
+    // Bucket prefix would be empty if volume is empty i.e. either null or "".
+    Optional<String> bucketPrefix = Optional.ofNullable(volume).map(vol -> vol.isEmpty() ? null : vol)
+        .map(vol -> metadataManager.getBucketKeyPrefix(vol, bucket));
+    List<Table.KeyValue<String, String>> renamedEntries = new ArrayList<>();
+    try (TableIterator<String, ? extends Table.KeyValue<String, String>>
+             renamedKeyIter = metadataManager.getSnapshotRenamedTable().iterator(bucketPrefix.orElse(""))) {
+
+      /* Seeking to the start key if it not null. The next key picked up would be ensured to start with the bucket
+         prefix, {@link org.apache.hadoop.hdds.utils.db.Table#iterator(bucketPrefix)} would ensure this.
+       */
+      if (startKey != null) {
+        renamedKeyIter.seek(startKey);
+      }
+      int currentCount = 0;
+      while (renamedKeyIter.hasNext() && currentCount < count) {
+        Table.KeyValue<String, String> kv = renamedKeyIter.next();
+        if (kv != null) {
+          renamedEntries.add(Table.newKeyValue(kv.getKey(), kv.getValue()));
+          currentCount++;
+        }
+      }
+    }
+    return renamedEntries;
+  }
+
+  @Override
+  public List<Table.KeyValue<String, List<OmKeyInfo>>> getDeletedKeyEntries(
+      String volume, String bucket, String startKey, int count) throws IOException {
+    // Bucket prefix would be empty if volume is empty i.e. either null or "".
+    Optional<String> bucketPrefix = Optional.ofNullable(volume).map(vol -> vol.isEmpty() ? null : vol)
+        .map(vol -> metadataManager.getBucketKeyPrefix(vol, bucket));

Review Comment:
   Same as previous.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManager.java:
##########
@@ -216,6 +242,28 @@ OmMultipartUploadListParts listParts(String volumeName, String bucketName,
    */
   Table.KeyValue<String, OmKeyInfo> getPendingDeletionDir() throws IOException;
 
+  /**
+   * Returns an iterator for pending deleted directories.
+   * @throws IOException
+   */
+  TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>> getPendingDeletionDirs() throws IOException;
+
+  TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>> getPendingDeletionDirs(
+      String volume, String bucket) throws IOException;
+
+  default List<Table.KeyValue<String, OmKeyInfo>> getDeletedDirEntries(String volume, String bucket, int count)
+      throws IOException {
+    List<Table.KeyValue<String, OmKeyInfo>> deletedDirEntries = new ArrayList<>(count);
+    try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>> iterator =
+             getPendingDeletionDirs(volume, bucket)) {

Review Comment:
   Do we really need to have `getPendingDeletionDirs()`? Won't it be easier to 
have an implementation of `getDeletedDirEntries()` in KeyManagerImpl.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/snapshot/OMSnapshotMoveDeletedKeysResponse.java:
##########
@@ -224,35 +224,35 @@ public static RepeatedOmKeyInfo createRepeatedOmKeyInfo(
     return result;
   }
 
-  private RepeatedOmKeyInfo createRepeatedOmKeyInfo(
-      SnapshotMoveKeyInfos snapshotMoveKeyInfos,
-      OMMetadataManager metadataManager) throws IOException {
+  public static RepeatedOmKeyInfo createMergedRepeatedOmKeyInfoFromDeletedTableEntry(
+      SnapshotMoveKeyInfos snapshotMoveKeyInfos, OMMetadataManager metadataManager) throws IOException {
     String dbKey = snapshotMoveKeyInfos.getKey();
-    List<KeyInfo> keyInfoList = snapshotMoveKeyInfos.getKeyInfosList();
+    List<OmKeyInfo> keyInfoList = new ArrayList<>();
+    for (KeyInfo info : snapshotMoveKeyInfos.getKeyInfosList()) {
+      OmKeyInfo fromProtobuf = OmKeyInfo.getFromProtobuf(info);
+      keyInfoList.add(fromProtobuf);
+    }
     // When older version of keys are moved to the next snapshot's deletedTable
     // The newer version might also be in the next snapshot's deletedTable and
     // it might overwrite. This is to avoid that and also avoid having
-    // orphans blocks.
+    // orphans blocks. Checking the last keyInfoList size omKeyInfo versions,
+    // this is to avoid redundant additions if the last n versions match.
     RepeatedOmKeyInfo result = metadataManager.getDeletedTable().get(dbKey);
-
-    for (KeyInfo keyInfo : keyInfoList) {
-      OmKeyInfo omKeyInfo = OmKeyInfo.getFromProtobuf(keyInfo);
-      if (result == null) {
-        result = new RepeatedOmKeyInfo(omKeyInfo);
-      } else if (!isSameAsLatestOmKeyInfo(omKeyInfo, result)) {
-        result.addOmKeyInfo(omKeyInfo);
-      }
+    if (result == null) {
+      result = new RepeatedOmKeyInfo(keyInfoList);
+    } else if (!isSameAsLatestOmKeyInfo(keyInfoList, result)) {
+      keyInfoList.forEach(result::addOmKeyInfo);
     }
-
     return result;
   }
 
-  private boolean isSameAsLatestOmKeyInfo(OmKeyInfo omKeyInfo,
-                                          RepeatedOmKeyInfo result) {
+  private static boolean isSameAsLatestOmKeyInfo(List<OmKeyInfo> omKeyInfos,

Review Comment:
   1. Why `static` function?
   2. Can we use `Collections.lastIndexOfSubList()`?
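   A small illustration of the second point: `Collections.lastIndexOfSubList` returns the start index of the last occurrence of the target list (-1 if absent), so "the last n versions already match" can be expressed as that index equaling the size difference. The helper name here is hypothetical.

   ```java
   import java.util.Collections;
   import java.util.List;

   // Detects whether `existing` already ends with `incoming`, using the
   // utility the reviewer suggests.
   public class TrailingMatch {
     static boolean endsWith(List<String> existing, List<String> incoming) {
       int idx = Collections.lastIndexOfSubList(existing, incoming);
       return idx >= 0 && idx == existing.size() - incoming.size();
     }
   }
   ```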



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java:
##########
@@ -838,6 +838,28 @@ public String getBucketKey(String volume, String bucket) {
     return builder.toString();
   }
 
+  /**
+   * Given a volume and bucket, return the corresponding DB key prefix.
+   *
+   * @param volume - User name

Review Comment:
   Same as in `OMMetadataManager`.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java:
##########
@@ -838,6 +838,28 @@ public String getBucketKey(String volume, String bucket) {
     return builder.toString();
   }
 
+  /**
+   * Given a volume and bucket, return the corresponding DB key prefix.
+   *
+   * @param volume - User name
+   * @param bucket - Bucket name
+   */
+  @Override
+  public String getBucketKeyPrefix(String volume, String bucket) {
+    return OzoneFSUtils.addTrailingSlashIfNeeded(getBucketKey(volume, bucket));
+  }
+
+  /**
+   * Given a volume and bucket, return the corresponding DB key prefix.
+   *
+   * @param volume - User name
+   * @param bucket - Bucket name
+   */
+  @Override
+  public String getBucketKeyPrefixFSO(String volume, String bucket) throws IOException {
+    return OzoneFSUtils.addTrailingSlashIfNeeded(getOzoneKeyFSO(volume, bucket, ""));

Review Comment:
   nit:
   ```suggestion
       return getOzoneKeyFSO(volume, bucket, OM_KEY_PREFIX);
   ```



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/SnapshotChainManager.java:
##########
@@ -382,6 +389,41 @@ public UUID getLatestGlobalSnapshotId() throws IOException {
     return latestGlobalSnapshotId;
   }
 
+  /**
+   * Get the oldest global snapshot in the snapshot chain.
+   */
+  public UUID getOldestGlobalSnapshotId() throws IOException {
+    validateSnapshotChain();
+    return oldestGlobalSnapshotId;
+  }
+
+  public Iterator<UUID> iterator(final boolean reverse) throws IOException {

Review Comment:
   nit: Since all the write methods of `SnapshotChainManager` are synchronized, do we still need to keep `snapshotChainByPath` and `latestSnapshotIdByPath` as `ConcurrentMap`? Maybe we change them to `LinkedHashMap` and then use `LinkedHashMap`'s iterator.
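   The property behind the nit, sketched with hypothetical names: `LinkedHashMap`'s iterator yields entries in insertion order, so a chain inserted oldest-first can be walked without chasing prev/next links. (This sketch deliberately ignores the thread-safety question raised in the same comment.)

   ```java
   import java.util.ArrayList;
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;
   import java.util.UUID;

   // Walks a LinkedHashMap-backed chain in insertion order.
   public class InsertionOrderWalk {
     static List<UUID> walk(Map<UUID, String> chain) {
       // keySet() iteration order is insertion order for LinkedHashMap.
       return new ArrayList<>(chain.keySet());
     }
   }
   ```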



##########
hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java:
##########
@@ -116,6 +116,22 @@ public interface OMMetadataManager extends DBStoreHAManager {
    */
   String getBucketKey(String volume, String bucket);
 
+  /**
+   * Given a volume and bucket, return the corresponding DB key prefix.
+   *
+   * @param volume - User name

Review Comment:
   `Volume name`, not `User name`.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java:
##########
@@ -662,6 +663,61 @@ public PendingKeysDeletion getPendingDeletionKeys(final int count)
         .getPendingDeletionKeys(count, ozoneManager.getOmSnapshotManager());
   }
 
+  @Override
+  public List<Table.KeyValue<String, String>> getRenamesKeyEntries(
+      String volume, String bucket, String startKey, int count) throws IOException {
+    // Bucket prefix would be empty if volume is empty i.e. either null or "".
+    Optional<String> bucketPrefix = Optional.ofNullable(volume).map(vol -> vol.isEmpty() ? null : vol)
+        .map(vol -> metadataManager.getBucketKeyPrefix(vol, bucket));
+    List<Table.KeyValue<String, String>> renamedEntries = new ArrayList<>();
+    try (TableIterator<String, ? extends Table.KeyValue<String, String>>
+             renamedKeyIter = metadataManager.getSnapshotRenamedTable().iterator(bucketPrefix.orElse(""))) {
+
+      /* Seeking to the start key if it not null. The next key picked up would be ensured to start with the bucket
+         prefix, {@link org.apache.hadoop.hdds.utils.db.Table#iterator(bucketPrefix)} would ensure this.
+       */
+      if (startKey != null) {
+        renamedKeyIter.seek(startKey);
+      }
+      int currentCount = 0;
+      while (renamedKeyIter.hasNext() && currentCount < count) {
+        Table.KeyValue<String, String> kv = renamedKeyIter.next();
+        if (kv != null) {
+          renamedEntries.add(Table.newKeyValue(kv.getKey(), kv.getValue()));
+          currentCount++;
+        }
+      }
+    }
+    return renamedEntries;
+  }
+
+  @Override
+  public List<Table.KeyValue<String, List<OmKeyInfo>>> getDeletedKeyEntries(
+      String volume, String bucket, String startKey, int count) throws IOException {
+    // Bucket prefix would be empty if volume is empty i.e. either null or "".
+    Optional<String> bucketPrefix = Optional.ofNullable(volume).map(vol -> vol.isEmpty() ? null : vol)
+        .map(vol -> metadataManager.getBucketKeyPrefix(vol, bucket));
+    List<Table.KeyValue<String, List<OmKeyInfo>>> deletedKeyEntries = new ArrayList<>(count);
+    try (TableIterator<String, ? extends Table.KeyValue<String, RepeatedOmKeyInfo>>
+             delKeyIter = metadataManager.getDeletedTable().iterator(bucketPrefix.orElse(""))) {
+
+      /* Seeking to the start key if it not null. The next key picked up would be ensured to start with the bucket
+         prefix, {@link org.apache.hadoop.hdds.utils.db.Table#iterator(bucketPrefix)} would ensure this.
+       */
+      if (startKey != null) {
+        delKeyIter.seek(startKey);
+      }
+      int currentCount = 0;
+      while (delKeyIter.hasNext() && currentCount < count) {
+        Table.KeyValue<String, RepeatedOmKeyInfo> kv = delKeyIter.next();
+        if (kv != null) {
+          deletedKeyEntries.add(Table.newKeyValue(kv.getKey(), kv.getValue().cloneOmKeyInfoList()));
+        }
+      }
+    }

Review Comment:
   This part of the code is the same as lines 679-690. It can be extracted to a helper function.
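   One possible shape for the extracted helper, sketched with plain `java.util` types standing in for `Table`/`TableIterator` so it is self-contained; the method name and generics are hypothetical, not from the PR. Both loops walk an iterator, transform each value, and stop after `count` entries, which is exactly what this factors out.

   ```java
   import java.util.AbstractMap.SimpleImmutableEntry;
   import java.util.ArrayList;
   import java.util.Iterator;
   import java.util.List;
   import java.util.Map;
   import java.util.function.Function;

   // Hypothetical extracted helper: collects up to `count` key/value
   // pairs from an iterator, applying `transform` to each value.
   public class TableScanHelper {
     static <V, R> List<Map.Entry<String, R>> collectEntries(
         Iterator<Map.Entry<String, V>> iter, int count, Function<V, R> transform) {
       List<Map.Entry<String, R>> out = new ArrayList<>(count);
       while (iter.hasNext() && out.size() < count) {
         Map.Entry<String, V> kv = iter.next();
         out.add(new SimpleImmutableEntry<>(kv.getKey(), transform.apply(kv.getValue())));
       }
       return out;
     }
   }
   ```

   The two call sites would then differ only in the table they iterate and the transform they pass (identity for the renamed table, `RepeatedOmKeyInfo::cloneOmKeyInfoList` for the deleted table).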



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

