hemantk-12 commented on code in PR #7345:
URL: https://github.com/apache/ozone/pull/7345#discussion_r1972817066


##########
hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java:
##########
@@ -604,6 +604,11 @@ default String getOpenFileName(long volumeId, long 
bucketId, long parentObjectId
    */
   String getRenameKey(String volume, String bucket, long objectID);
 
+  /**
+   * Given renameKey, return the volume, bucket and objectID from the key.

Review Comment:
   I don't think this is the correct place to keep this function, but I'll let you decide.
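   For context, if it does move, a small static helper next to wherever rename keys are built could work; this is just a hypothetical sketch, assuming the `/volumeName/bucketName/objectID` layout shown elsewhere in this PR and `OM_KEY_PREFIX` as the separator:
   ```java
   // Hypothetical helper, not part of this PR: reverse of getRenameKey.
   // Returns ["", volume, bucket, objectID] for a key like /vol/bucket/123.
   public static String[] splitRenameKey(String renameKey) {
     return renameKey.split(OM_KEY_PREFIX, 4);
   }
   ```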



##########
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/util/CheckedExceptionOperation.java:
##########
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.util;
+
+/**
+ *
+ * Represents a function that accepts one argument and produces a result.
+ * This is a functional interface whose functional method is apply(Object).
+ * Type parameters:
+ * <T> – the type of the input to the function <R> – the type of the result of 
the function
+ * <E> - the type of exception thrown.
+ */

Review Comment:
   ```suggestion
   /**
    * Represents a function that accepts one argument and produces a result.
    * This is a functional interface whose functional method is apply(Object).
    * Type parameters:
    * <T> – the type of the input to the function
    * <R> – the type of the result of the function
    * <E> - the type of exception thrown.
    */
   ```



##########
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/util/CheckedExceptionOperation.java:
##########
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.util;
+
+/**
+ *
+ * Represents a function that accepts one argument and produces a result.
+ * This is a functional interface whose functional method is apply(Object).
+ * Type parameters:
+ * <T> – the type of the input to the function <R> – the type of the result of 
the function
+ * <E> - the type of exception thrown.
+ */
+public interface CheckedExceptionOperation<T, R, E extends Exception> {
+  R apply(T t) throws E;
+
+  default <V> CheckedExceptionOperation<T, V, E> 
andThen(CheckedExceptionOperation<R, V, E> operation) {

Review Comment:
   It is not used anywhere. Do you have a plan to use it? If not, please remove it.
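   For reference, if you do plan to keep it, composition would look roughly like this (a minimal sketch, assuming `andThen` chains the two operations the same way `Function.andThen` does):
   ```java
   CheckedExceptionOperation<String, Long, IOException> parse = Long::parseLong;
   CheckedExceptionOperation<Long, String, IOException> format = id -> "id-" + id;
   // roundTrip.apply("42") would return "id-42"
   CheckedExceptionOperation<String, String, IOException> roundTrip = parse.andThen(format);
   ```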



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/lock/MultiLocks.java:
##########
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+import java.util.Collection;
+import java.util.LinkedList;
+import java.util.Queue;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+
+/**
+ * Class to take multiple locks on a resource.
+ */
+public class MultiLocks<T> {
+  private final Queue<T> objectLocks;
+  private final IOzoneManagerLock lock;
+  private final OzoneManagerLock.Resource resource;
+  private final boolean writeLock;
+
+  public MultiLocks(IOzoneManagerLock lock, OzoneManagerLock.Resource 
resource, boolean writeLock) {
+    this.writeLock = writeLock;

Review Comment:
   I don't think anyone is ever going to use `MultiLocks` for reads. I would prefer to add read support later, when it is needed.
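   In other words, something like this (just a sketch; the `writeLock` field and the read/write branches in acquire/release would then go away):
   ```java
   public MultiLocks(IOzoneManagerLock lock, OzoneManagerLock.Resource resource) {
     this.resource = resource;
     this.lock = lock;
     this.objectLocks = new LinkedList<>();
   }
   ```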



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/lock/MultiLocks.java:
##########
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+import java.util.Collection;
+import java.util.LinkedList;
+import java.util.Queue;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+
+/**
+ * Class to take multiple locks on a resource.
+ */
+public class MultiLocks<T> {
+  private final Queue<T> objectLocks;
+  private final IOzoneManagerLock lock;
+  private final OzoneManagerLock.Resource resource;
+  private final boolean writeLock;
+
+  public MultiLocks(IOzoneManagerLock lock, OzoneManagerLock.Resource 
resource, boolean writeLock) {
+    this.writeLock = writeLock;
+    this.resource = resource;
+    this.lock = lock;
+    this.objectLocks = new LinkedList<>();
+  }
+
+  public OMLockDetails acquireLock(Collection<T> objects) throws OMException {
+    if (!objectLocks.isEmpty()) {

Review Comment:
   Currently, I see that only one thread is calling it. It will cause a deadlock if multiple threads start acquiring locks on the same resource. It would be better if you call that out, or keep its scope to snapshots only.
   This check will not help if multiple instances of MultiLocks are created.
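   One possible wording for the call-out, purely as a sketch (adjust to whatever guarantee you actually intend to give):
   ```java
   /**
    * Not thread-safe: a MultiLocks instance is expected to be driven by a single
    * thread at a time, and callers must keep a consistent lock ordering across
    * instances; otherwise concurrent acquireLock() calls on overlapping objects
    * can deadlock.
    */
   ```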



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/lock/MultiLocks.java:
##########
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+import java.util.Collection;
+import java.util.LinkedList;
+import java.util.Queue;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+
+/**
+ * Class to take multiple locks on a resource.
+ */
+public class MultiLocks<T> {

Review Comment:
   There is no need to keep this generic over `T`. You can just make it `String`, similar to `IOzoneManagerLock`.
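   Roughly this shape (a sketch only; everything else stays the same, with `Collection<T>` becoming `Collection<String>`):
   ```java
   // Sketch: same class, keyed by String like IOzoneManagerLock.
   public class MultiLocks {
     private final Queue<String> objectLocks = new LinkedList<>();
     // remaining fields, constructor and method bodies unchanged.
   }
   ```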



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/lock/MultiLocks.java:
##########
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+import java.util.Collection;
+import java.util.LinkedList;
+import java.util.Queue;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+
+/**
+ * Class to take multiple locks on a resource.
+ */
+public class MultiLocks<T> {
+  private final Queue<T> objectLocks;
+  private final IOzoneManagerLock lock;
+  private final OzoneManagerLock.Resource resource;
+  private final boolean writeLock;
+
+  public MultiLocks(IOzoneManagerLock lock, OzoneManagerLock.Resource 
resource, boolean writeLock) {
+    this.writeLock = writeLock;
+    this.resource = resource;
+    this.lock = lock;
+    this.objectLocks = new LinkedList<>();
+  }
+
+  public OMLockDetails acquireLock(Collection<T> objects) throws OMException {
+    if (!objectLocks.isEmpty()) {
+      throw new OMException("More locks cannot be acquired when locks have 
been already acquired. Locks acquired : "
+          + objectLocks, OMException.ResultCodes.INTERNAL_ERROR);
+    }
+    OMLockDetails omLockDetails = OMLockDetails.EMPTY_DETAILS_LOCK_ACQUIRED;
+    for (T object : objects) {
+      if (object != null) {

Review Comment:
   This is an unnecessary null check, IMO.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableDirFilter.java:
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import java.io.IOException;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmDirectoryInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+
+/**
+ * Filter to return deleted directories which are reclaimable based on their 
presence in previous snapshot in
+ * the snapshot chain.
+ */
+public class ReclaimableDirFilter extends ReclaimableFilter<OmKeyInfo> {
+
+  private final OzoneManager ozoneManager;
+
+  /**
+   * Filter to return deleted directories which are reclaimable based on their 
presence in previous snapshot in
+   * the snapshot chain.
+   *
+   * @param omSnapshotManager
+   * @param snapshotChainManager
+   * @param currentSnapshotInfo  : If null the deleted keys in AOS needs to be 
processed, hence the latest snapshot
+   *                             in the snapshot chain corresponding to bucket 
key needs to be processed.
+   * @param metadataManager      : MetadataManager corresponding to snapshot 
or AOS.
+   * @param lock                 : Lock for Active OM.
+   */
+  public ReclaimableDirFilter(OzoneManager ozoneManager,
+                              OmSnapshotManager omSnapshotManager, 
SnapshotChainManager snapshotChainManager,
+                              SnapshotInfo currentSnapshotInfo, 
OMMetadataManager metadataManager,
+                              IOzoneManagerLock lock) {
+    super(ozoneManager, omSnapshotManager, snapshotChainManager, 
currentSnapshotInfo, metadataManager, lock, 1);
+    this.ozoneManager = ozoneManager;
+  }
+
+  @Override
+  protected String getVolumeName(Table.KeyValue<String, OmKeyInfo> keyValue) 
throws IOException {
+    return keyValue.getValue().getVolumeName();
+  }
+
+  @Override
+  protected String getBucketName(Table.KeyValue<String, OmKeyInfo> keyValue) 
throws IOException {
+    return keyValue.getValue().getBucketName();
+  }
+
+  @Override
+  protected Boolean isReclaimable(Table.KeyValue<String, OmKeyInfo> 
deletedDirInfo) throws IOException {

Review Comment:
   Is there a reason to return `Boolean` and not `boolean`?



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableDirFilter.java:
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import java.io.IOException;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmDirectoryInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+
+/**
+ * Filter to return deleted directories which are reclaimable based on their 
presence in previous snapshot in
+ * the snapshot chain.
+ */
+public class ReclaimableDirFilter extends ReclaimableFilter<OmKeyInfo> {
+
+  private final OzoneManager ozoneManager;
+
+  /**
+   * Filter to return deleted directories which are reclaimable based on their 
presence in previous snapshot in

Review Comment:
   1. This comment is the same as the class comment above.
   2. Either document all the parameters with proper descriptions, or none of them.



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableFilter.java:
##########
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import com.google.common.collect.Lists;
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.lock.MultiLocks;
+import org.apache.hadoop.ozone.om.lock.OzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+import org.apache.hadoop.ozone.om.snapshot.SnapshotUtils;
+import org.apache.hadoop.ozone.util.CheckedExceptionOperation;
+
+/**
+ * This class is responsible for opening last N snapshot given snapshot or AOS 
metadata manager by acquiring a lock.
+ */
+public abstract class ReclaimableFilter<V> implements 
CheckedExceptionOperation<Table.KeyValue<String, V>,
+    Boolean, IOException>, Closeable {

Review Comment:
   Why not `AutoCloseable`?



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableDirFilter.java:
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import java.io.IOException;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmDirectoryInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+
+/**
+ * Filter to return deleted directories which are reclaimable based on their 
presence in previous snapshot in
+ * the snapshot chain.
+ */
+public class ReclaimableDirFilter extends ReclaimableFilter<OmKeyInfo> {
+
+  private final OzoneManager ozoneManager;
+
+  /**
+   * Filter to return deleted directories which are reclaimable based on their 
presence in previous snapshot in
+   * the snapshot chain.
+   *
+   * @param omSnapshotManager
+   * @param snapshotChainManager
+   * @param currentSnapshotInfo  : If null the deleted keys in AOS needs to be 
processed, hence the latest snapshot
+   *                             in the snapshot chain corresponding to bucket 
key needs to be processed.
+   * @param metadataManager      : MetadataManager corresponding to snapshot 
or AOS.
+   * @param lock                 : Lock for Active OM.
+   */
+  public ReclaimableDirFilter(OzoneManager ozoneManager,
+                              OmSnapshotManager omSnapshotManager, 
SnapshotChainManager snapshotChainManager,
+                              SnapshotInfo currentSnapshotInfo, 
OMMetadataManager metadataManager,
+                              IOzoneManagerLock lock) {
+    super(ozoneManager, omSnapshotManager, snapshotChainManager, 
currentSnapshotInfo, metadataManager, lock, 1);
+    this.ozoneManager = ozoneManager;

Review Comment:
   There is no need to keep this field here. Make it protected in the super class or add a getter.
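   A minimal sketch of the getter option, assuming it lives in `ReclaimableFilter` so subclasses don't need their own `ozoneManager` field:
   ```java
   protected OzoneManager getOzoneManager() {
     return ozoneManager;
   }
   ```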



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableFilter.java:
##########
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import com.google.common.collect.Lists;
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.lock.MultiLocks;
+import org.apache.hadoop.ozone.om.lock.OzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+import org.apache.hadoop.ozone.om.snapshot.SnapshotUtils;
+import org.apache.hadoop.ozone.util.CheckedExceptionOperation;
+
+/**
+ * This class is responsible for opening last N snapshot given snapshot or AOS 
metadata manager by acquiring a lock.

Review Comment:
   ```suggestion
    * This class is responsible for opening the last N snapshots or the AOS metadata manager by acquiring a lock.
   ```



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableDirFilter.java:
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import java.io.IOException;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmDirectoryInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+
+/**
+ * Filter to return deleted directories which are reclaimable based on their 
presence in previous snapshot in
+ * the snapshot chain.
+ */
+public class ReclaimableDirFilter extends ReclaimableFilter<OmKeyInfo> {
+
+  private final OzoneManager ozoneManager;
+
+  /**
+   * Filter to return deleted directories which are reclaimable based on their 
presence in previous snapshot in
+   * the snapshot chain.
+   *
+   * @param omSnapshotManager
+   * @param snapshotChainManager
+   * @param currentSnapshotInfo  : If null the deleted keys in AOS needs to be 
processed, hence the latest snapshot
+   *                             in the snapshot chain corresponding to bucket 
key needs to be processed.
+   * @param metadataManager      : MetadataManager corresponding to snapshot 
or AOS.
+   * @param lock                 : Lock for Active OM.
+   */
+  public ReclaimableDirFilter(OzoneManager ozoneManager,
+                              OmSnapshotManager omSnapshotManager, 
SnapshotChainManager snapshotChainManager,
+                              SnapshotInfo currentSnapshotInfo, 
OMMetadataManager metadataManager,
+                              IOzoneManagerLock lock) {
+    super(ozoneManager, omSnapshotManager, snapshotChainManager, 
currentSnapshotInfo, metadataManager, lock, 1);
+    this.ozoneManager = ozoneManager;
+  }
+
+  @Override
+  protected String getVolumeName(Table.KeyValue<String, OmKeyInfo> keyValue) 
throws IOException {
+    return keyValue.getValue().getVolumeName();
+  }
+
+  @Override
+  protected String getBucketName(Table.KeyValue<String, OmKeyInfo> keyValue) 
throws IOException {
+    return keyValue.getValue().getBucketName();
+  }
+
+  @Override
+  protected Boolean isReclaimable(Table.KeyValue<String, OmKeyInfo> 
deletedDirInfo) throws IOException {
+    ReferenceCounted<OmSnapshot> previousSnapshot = getPreviousOmSnapshot(0);
+    Table<String, OmDirectoryInfo> prevDirTable = previousSnapshot == null ? 
null :
+        previousSnapshot.get().getMetadataManager().getDirectoryTable();
+    return isDirReclaimable(deletedDirInfo, prevDirTable,
+        getMetadataManager().getSnapshotRenamedTable());
+  }
+
+  private boolean isDirReclaimable(Table.KeyValue<String, OmKeyInfo> 
deletedDir,
+                                   Table<String, OmDirectoryInfo> 
previousDirTable,
+                                   Table<String, String> renamedTable) throws 
IOException {
+    if (previousDirTable == null) {
+      return true;
+    }
+
+    String deletedDirDbKey = deletedDir.getKey();
+    OmKeyInfo deletedDirInfo = deletedDir.getValue();
+    String dbRenameKey = ozoneManager.getMetadataManager().getRenameKey(
+        deletedDirInfo.getVolumeName(), deletedDirInfo.getBucketName(),
+        deletedDirInfo.getObjectID());
+
+      /*
+      snapshotRenamedTable: /volumeName/bucketName/objectID ->
+          /volumeId/bucketId/parentId/dirName
+       */

Review Comment:
   Alignment is off.
   ```suggestion
       // snapshotRenamedTable: /volumeName/bucketName/objectID -> /volumeId/bucketId/parentId/dirName
   ```



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableFilter.java:
##########
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import com.google.common.collect.Lists;
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.lock.MultiLocks;
+import org.apache.hadoop.ozone.om.lock.OzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+import org.apache.hadoop.ozone.om.snapshot.SnapshotUtils;
+import org.apache.hadoop.ozone.util.CheckedExceptionOperation;
+
+/**
+ * This class is responsible for opening last N snapshot given snapshot or AOS 
metadata manager by acquiring a lock.
+ */
+public abstract class ReclaimableFilter<V> implements 
CheckedExceptionOperation<Table.KeyValue<String, V>,
+    Boolean, IOException>, Closeable {
+
+  private final OzoneManager ozoneManager;
+  private final SnapshotInfo currentSnapshotInfo;
+  private final OmSnapshotManager omSnapshotManager;
+  private final SnapshotChainManager snapshotChainManager;
+
+  private final List<SnapshotInfo> previousSnapshotInfos;
+  private final List<ReferenceCounted<OmSnapshot>> previousOmSnapshots;
+  private final MultiLocks<UUID> snapshotIdLocks;
+  private Long volumeId;
+  private OmBucketInfo bucketInfo;
+  private final OMMetadataManager metadataManager;
+  private final int numberOfPreviousSnapshotsFromChain;
+
+  /**
+   * Filter to return deleted keys/directories which are reclaimable based on 
their presence in previous snapshot in
+   * the snapshot chain.
+   *
+   * @param omSnapshotManager
+   * @param snapshotChainManager
+   * @param currentSnapshotInfo  : If null the deleted keys in AOS needs to be 
processed, hence the latest snapshot
+   *                             in the snapshot chain corresponding to bucket 
key needs to be processed.
+   * @param metadataManager      : MetadataManager corresponding to snapshot 
or AOS.
+   * @param lock                 : Lock for Active OM.
+   */
+  public ReclaimableFilter(OzoneManager ozoneManager, OmSnapshotManager 
omSnapshotManager,
+                           SnapshotChainManager snapshotChainManager,
+                           SnapshotInfo currentSnapshotInfo, OMMetadataManager 
metadataManager,
+                           IOzoneManagerLock lock,
+                           int numberOfPreviousSnapshotsFromChain) {
+    this.ozoneManager = ozoneManager;
+    this.omSnapshotManager = omSnapshotManager;
+    this.currentSnapshotInfo = currentSnapshotInfo;
+    this.snapshotChainManager = snapshotChainManager;
+    this.snapshotIdLocks = new MultiLocks<>(lock, 
OzoneManagerLock.Resource.SNAPSHOT_GC_LOCK, false);
+    this.metadataManager = metadataManager;
+    this.numberOfPreviousSnapshotsFromChain = 
numberOfPreviousSnapshotsFromChain;
+    this.previousOmSnapshots = new 
ArrayList<>(numberOfPreviousSnapshotsFromChain);
+    this.previousSnapshotInfos = new 
ArrayList<>(numberOfPreviousSnapshotsFromChain);
+  }
+
+  private List<SnapshotInfo> getLastNSnapshotInChain(String volume, String 
bucket) throws IOException {
+    if (currentSnapshotInfo != null &&
+        (!currentSnapshotInfo.getVolumeName().equals(volume) || 
!currentSnapshotInfo.getBucketName().equals(bucket))) {
+      throw new IOException("Volume & Bucket name for snapshot : " + 
currentSnapshotInfo + " not matching for " +
+          "key in volume: " + volume + " bucket: " + bucket);
+    }
+    SnapshotInfo expectedPreviousSnapshotInfo = currentSnapshotInfo == null

Review Comment:
   Will ReclaimableFilter be called for AOS? If so, what happens when there is no snapshot in the system?



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableFilter.java:
##########
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import com.google.common.collect.Lists;
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.lock.MultiLocks;
+import org.apache.hadoop.ozone.om.lock.OzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+import org.apache.hadoop.ozone.om.snapshot.SnapshotUtils;
+import org.apache.hadoop.ozone.util.CheckedExceptionOperation;
+
+/**
+ * This class is responsible for opening last N snapshot given snapshot or AOS 
metadata manager by acquiring a lock.
+ */
+public abstract class ReclaimableFilter<V> implements 
CheckedExceptionOperation<Table.KeyValue<String, V>,
+    Boolean, IOException>, Closeable {
+
+  private final OzoneManager ozoneManager;
+  private final SnapshotInfo currentSnapshotInfo;
+  private final OmSnapshotManager omSnapshotManager;
+  private final SnapshotChainManager snapshotChainManager;
+
+  private final List<SnapshotInfo> previousSnapshotInfos;
+  private final List<ReferenceCounted<OmSnapshot>> previousOmSnapshots;
+  private final MultiLocks<UUID> snapshotIdLocks;
+  private Long volumeId;
+  private OmBucketInfo bucketInfo;
+  private final OMMetadataManager metadataManager;
+  private final int numberOfPreviousSnapshotsFromChain;
+
+  /**
+   * Filter to return deleted keys/directories which are reclaimable based on 
their presence in previous snapshot in
+   * the snapshot chain.
+   *
+   * @param omSnapshotManager
+   * @param snapshotChainManager
+   * @param currentSnapshotInfo  : If null the deleted keys in AOS needs to be 
processed, hence the latest snapshot
+   *                             in the snapshot chain corresponding to bucket 
key needs to be processed.
+   * @param metadataManager      : MetadataManager corresponding to snapshot 
or AOS.
+   * @param lock                 : Lock for Active OM.
+   */
+  public ReclaimableFilter(OzoneManager ozoneManager, OmSnapshotManager 
omSnapshotManager,
+                           SnapshotChainManager snapshotChainManager,
+                           SnapshotInfo currentSnapshotInfo, OMMetadataManager 
metadataManager,
+                           IOzoneManagerLock lock,
+                           int numberOfPreviousSnapshotsFromChain) {
+    this.ozoneManager = ozoneManager;
+    this.omSnapshotManager = omSnapshotManager;
+    this.currentSnapshotInfo = currentSnapshotInfo;
+    this.snapshotChainManager = snapshotChainManager;
+    this.snapshotIdLocks = new MultiLocks<>(lock, 
OzoneManagerLock.Resource.SNAPSHOT_GC_LOCK, false);
+    this.metadataManager = metadataManager;
+    this.numberOfPreviousSnapshotsFromChain = 
numberOfPreviousSnapshotsFromChain;
+    this.previousOmSnapshots = new 
ArrayList<>(numberOfPreviousSnapshotsFromChain);
+    this.previousSnapshotInfos = new 
ArrayList<>(numberOfPreviousSnapshotsFromChain);
+  }
+
+  private List<SnapshotInfo> getLastNSnapshotInChain(String volume, String 
bucket) throws IOException {
+    if (currentSnapshotInfo != null &&
+        (!currentSnapshotInfo.getVolumeName().equals(volume) || 
!currentSnapshotInfo.getBucketName().equals(bucket))) {
+      throw new IOException("Volume & Bucket name for snapshot : " + 
currentSnapshotInfo + " not matching for " +
+          "key in volume: " + volume + " bucket: " + bucket);
+    }
+    SnapshotInfo expectedPreviousSnapshotInfo = currentSnapshotInfo == null
+        ? SnapshotUtils.getLatestSnapshotInfo(volume, bucket, ozoneManager, 
snapshotChainManager)
+        : SnapshotUtils.getPreviousSnapshot(ozoneManager, 
snapshotChainManager, currentSnapshotInfo);
+    List<SnapshotInfo> snapshotInfos = 
Lists.newArrayList(expectedPreviousSnapshotInfo);
+    SnapshotInfo snapshotInfo = expectedPreviousSnapshotInfo;
+    while (snapshotInfos.size() < numberOfPreviousSnapshotsFromChain) {
+      snapshotInfo = snapshotInfo == null ? null
+          : SnapshotUtils.getPreviousSnapshot(ozoneManager, 
snapshotChainManager, snapshotInfo);
+      snapshotInfos.add(snapshotInfo);
+      // If changes made to the snapshot have not been flushed to disk, throw 
exception immediately, next run of
+      // garbage collection would process the snapshot.
+      if 
(!OmSnapshotManager.areSnapshotChangesFlushedToDB(ozoneManager.getMetadataManager(),
 snapshotInfo)) {
+        throw new IOException("Changes made to the snapshot " + snapshotInfo + 
" have not been flushed to the disk ");
+      }
+    }
+
+    // Reversing list to get the correct order in chain. To ensure locking 
order is as per the chain ordering.
+    Collections.reverse(snapshotInfos);
+    return snapshotInfos;
+  }
+
+  private boolean validateExistingLastNSnapshotsInChain(String volume, String 
bucket) throws IOException {
+    List<SnapshotInfo> expectedLastNSnapshotsInChain = 
getLastNSnapshotInChain(volume, bucket);
+    List<UUID> expectedSnapshotIds = expectedLastNSnapshotsInChain.stream()
+        .map(snapshotInfo -> snapshotInfo == null ? null : 
snapshotInfo.getSnapshotId())
+        .collect(Collectors.toList());
+    List<UUID> existingSnapshotIds = previousOmSnapshots.stream()
+        .map(omSnapshotReferenceCounted -> omSnapshotReferenceCounted == null 
? null :
+            
omSnapshotReferenceCounted.get().getSnapshotID()).collect(Collectors.toList());
+    return expectedSnapshotIds.equals(existingSnapshotIds);
+  }
+
+  // Initialize the last N snapshots in the chain by acquiring locks. Throw 
IOException if it fails.
+  private void initializePreviousSnapshotsFromChain(String volume, String 
bucket) throws IOException {
+    // If existing snapshotIds don't match then close all snapshots and reopen 
the previous N snapshots.
+    if (!validateExistingLastNSnapshotsInChain(volume, bucket)) {
+      close();
+      try {
+        // Acquire lock only on last N-1 snapshot & current snapshot(AOS if it 
is null).
+        List<SnapshotInfo> expectedLastNSnapshotsInChain = 
getLastNSnapshotInChain(volume, bucket);
+        List<UUID> expectedSnapshotIds = expectedLastNSnapshotsInChain.stream()
+            .map(snapshotInfo -> snapshotInfo == null ? null : 
snapshotInfo.getSnapshotId())
+            .collect(Collectors.toList());
+        List<UUID> lockIds = new ArrayList<>(expectedSnapshotIds.subList(1, 
expectedSnapshotIds.size()));
+        lockIds.add(currentSnapshotInfo == null ? null : 
currentSnapshotInfo.getSnapshotId());
+
+        if (snapshotIdLocks.acquireLock(lockIds).isLockAcquired()) {
+          for (SnapshotInfo snapshotInfo : expectedLastNSnapshotsInChain) {
+            if (snapshotInfo != null) {
+              // For AOS fail operation if any of the previous snapshots are 
not active. currentSnapshotInfo for
+              // AOS will be null.
+              previousOmSnapshots.add(currentSnapshotInfo == null
+                  ? 
omSnapshotManager.getActiveSnapshot(snapshotInfo.getVolumeName(), 
snapshotInfo.getBucketName(),
+                  snapshotInfo.getName())
+                  : 
omSnapshotManager.getSnapshot(snapshotInfo.getVolumeName(), 
snapshotInfo.getBucketName(),
+                  snapshotInfo.getName()));
+              previousSnapshotInfos.add(snapshotInfo);
+            } else {
+              previousOmSnapshots.add(null);
+              previousSnapshotInfos.add(null);
+            }
+
+            // TODO: Getting volumeId and bucket from active OM. This would be 
wrong on volume & bucket renames
+            //  support.
+            volumeId = ozoneManager.getMetadataManager().getVolumeId(volume);
+            String dbBucketKey = 
ozoneManager.getMetadataManager().getBucketKey(volume, bucket);
+            bucketInfo = 
ozoneManager.getMetadataManager().getBucketTable().get(dbBucketKey);
+          }
+        } else {
+          throw new IOException("Lock acquisition failed for last N snapshots 
: " +
+              expectedLastNSnapshotsInChain + " " + currentSnapshotInfo);
+        }
+      } catch (IOException e) {
+        this.close();
+        throw e;
+      }
+    }
+  }
+
+  @Override
+  public Boolean apply(Table.KeyValue<String, V> keyValue) throws IOException {
+    String volume = getVolumeName(keyValue);
+    String bucket = getBucketName(keyValue);
+    initializePreviousSnapshotsFromChain(volume, bucket);
+    boolean isReclaimable = isReclaimable(keyValue);
+    // This is to ensure the reclamation ran on the same previous snapshot and 
no change occurred in the chain
+    // while processing the entry.
+    return isReclaimable && validateExistingLastNSnapshotsInChain(volume, 
bucket);

Review Comment:
   Since there is no usage of `apply` as of now, I don't know how it will be called. But it seems odd to me that we have to do this `volume` and `bucket` check every time as a precaution.
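   Just to make the discussion concrete, I'd expect a call site to look something like this (a hypothetical sketch; the table/iterator usage here is an assumption, not part of this PR):
   ```java
   try (ReclaimableDirFilter filter = new ReclaimableDirFilter(ozoneManager,
            omSnapshotManager, snapshotChainManager, null, metadataManager, lock);
        TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>> iterator =
            metadataManager.getDeletedDirTable().iterator()) {
     while (iterator.hasNext()) {
       Table.KeyValue<String, OmKeyInfo> entry = iterator.next();
       if (filter.apply(entry)) {
         // entry is reclaimable; hand it off for actual deletion
       }
     }
   }
   ```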



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableFilter.java:
##########
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import com.google.common.collect.Lists;
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.lock.MultiLocks;
+import org.apache.hadoop.ozone.om.lock.OzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+import org.apache.hadoop.ozone.om.snapshot.SnapshotUtils;
+import org.apache.hadoop.ozone.util.CheckedExceptionOperation;
+
+/**
+ * This class is responsible for opening last N snapshot given snapshot or AOS 
metadata manager by acquiring a lock.
+ */
+public abstract class ReclaimableFilter<V> implements 
CheckedExceptionOperation<Table.KeyValue<String, V>,
+    Boolean, IOException>, Closeable {
+
+  private final OzoneManager ozoneManager;
+  private final SnapshotInfo currentSnapshotInfo;
+  private final OmSnapshotManager omSnapshotManager;
+  private final SnapshotChainManager snapshotChainManager;
+
+  private final List<SnapshotInfo> previousSnapshotInfos;
+  private final List<ReferenceCounted<OmSnapshot>> previousOmSnapshots;
+  private final MultiLocks<UUID> snapshotIdLocks;
+  private Long volumeId;
+  private OmBucketInfo bucketInfo;
+  private final OMMetadataManager metadataManager;
+  private final int numberOfPreviousSnapshotsFromChain;
+
+  /**
+   * Filter to return deleted keys/directories which are reclaimable based on 
their presence in previous snapshot in
+   * the snapshot chain.
+   *
+   * @param omSnapshotManager
+   * @param snapshotChainManager
+   * @param currentSnapshotInfo  : If null the deleted keys in AOS needs to be 
processed, hence the latest snapshot
+   *                             in the snapshot chain corresponding to bucket 
key needs to be processed.
+   * @param metadataManager      : MetadataManager corresponding to snapshot 
or AOS.
+   * @param lock                 : Lock for Active OM.
+   */
+  public ReclaimableFilter(OzoneManager ozoneManager, OmSnapshotManager 
omSnapshotManager,
+                           SnapshotChainManager snapshotChainManager,
+                           SnapshotInfo currentSnapshotInfo, OMMetadataManager 
metadataManager,
+                           IOzoneManagerLock lock,
+                           int numberOfPreviousSnapshotsFromChain) {
+    this.ozoneManager = ozoneManager;
+    this.omSnapshotManager = omSnapshotManager;
+    this.currentSnapshotInfo = currentSnapshotInfo;
+    this.snapshotChainManager = snapshotChainManager;
+    this.snapshotIdLocks = new MultiLocks<>(lock, 
OzoneManagerLock.Resource.SNAPSHOT_GC_LOCK, false);
+    this.metadataManager = metadataManager;
+    this.numberOfPreviousSnapshotsFromChain = 
numberOfPreviousSnapshotsFromChain;
+    this.previousOmSnapshots = new 
ArrayList<>(numberOfPreviousSnapshotsFromChain);
+    this.previousSnapshotInfos = new 
ArrayList<>(numberOfPreviousSnapshotsFromChain);
+  }
+
+  private List<SnapshotInfo> getLastNSnapshotInChain(String volume, String 
bucket) throws IOException {
+    if (currentSnapshotInfo != null &&
+        (!currentSnapshotInfo.getVolumeName().equals(volume) || 
!currentSnapshotInfo.getBucketName().equals(bucket))) {
+      throw new IOException("Volume & Bucket name for snapshot : " + 
currentSnapshotInfo + " not matching for " +
+          "key in volume: " + volume + " bucket: " + bucket);
+    }
+    SnapshotInfo expectedPreviousSnapshotInfo = currentSnapshotInfo == null
+        ? SnapshotUtils.getLatestSnapshotInfo(volume, bucket, ozoneManager, 
snapshotChainManager)
+        : SnapshotUtils.getPreviousSnapshot(ozoneManager, 
snapshotChainManager, currentSnapshotInfo);
+    List<SnapshotInfo> snapshotInfos = 
Lists.newArrayList(expectedPreviousSnapshotInfo);
+    SnapshotInfo snapshotInfo = expectedPreviousSnapshotInfo;
+    while (snapshotInfos.size() < numberOfPreviousSnapshotsFromChain) {
+      snapshotInfo = snapshotInfo == null ? null
+          : SnapshotUtils.getPreviousSnapshot(ozoneManager, 
snapshotChainManager, snapshotInfo);
+      snapshotInfos.add(snapshotInfo);
+      // If changes made to the snapshot have not been flushed to disk, throw 
exception immediately, next run of
+      // garbage collection would process the snapshot.
+      if 
(!OmSnapshotManager.areSnapshotChangesFlushedToDB(ozoneManager.getMetadataManager(),
 snapshotInfo)) {
+        throw new IOException("Changes made to the snapshot " + snapshotInfo + 
" have not been flushed to the disk ");
+      }
+    }
+
+    // Reversing list to get the correct order in chain. To ensure locking 
order is as per the chain ordering.
+    Collections.reverse(snapshotInfos);
+    return snapshotInfos;
+  }
+
+  private boolean validateExistingLastNSnapshotsInChain(String volume, String 
bucket) throws IOException {
+    List<SnapshotInfo> expectedLastNSnapshotsInChain = 
getLastNSnapshotInChain(volume, bucket);
+    List<UUID> expectedSnapshotIds = expectedLastNSnapshotsInChain.stream()
+        .map(snapshotInfo -> snapshotInfo == null ? null : 
snapshotInfo.getSnapshotId())
+        .collect(Collectors.toList());
+    List<UUID> existingSnapshotIds = previousOmSnapshots.stream()
+        .map(omSnapshotReferenceCounted -> omSnapshotReferenceCounted == null 
? null :
+            
omSnapshotReferenceCounted.get().getSnapshotID()).collect(Collectors.toList());
+    return expectedSnapshotIds.equals(existingSnapshotIds);
+  }
+
+  // Initialize the last N snapshots in the chain by acquiring locks. Throw 
IOException if it fails.
+  private void initializePreviousSnapshotsFromChain(String volume, String 
bucket) throws IOException {
+    // If existing snapshotIds don't match then close all snapshots and reopen 
the previous N snapshots.
+    if (!validateExistingLastNSnapshotsInChain(volume, bucket)) {

Review Comment:
   nit: this function has too much nesting. Maybe you can use early returns, e.g.:
   ```
     private void initializePreviousSnapshotsFromChain(String volume, String bucket) throws IOException {
       if (validateExistingLastNSnapshotsInChain(volume, bucket)) {
         return;
       }
   
       close();
       try {
         ...
         if (!snapshotIdLocks.acquireLock(lockIds).isLockAcquired()) {
           throw new IOException("Lock acquisition failed for last N snapshots 
: " +
               expectedLastNSnapshotsInChain + " " + currentSnapshotInfo);
         }
         for (SnapshotInfo snapshotInfo : expectedLastNSnapshotsInChain) {
           ...
         }
       } catch (IOException e) {
         this.close();
         throw e;
       }
     }
   ```



##########
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/snapshot/filter/TestReclaimableFilter.java:
##########
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import org.junit.jupiter.api.Test;
+
+/**
+ * Test class for ReclaimableFilter.
+ */
+public class TestReclaimableFilter {
+
+  @Test
+  public void testReclaimableFilter() {

Review Comment:
   Is this just a placeholder, or do you plan to add more tests in the next revisions?
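   If it is a placeholder for now, even a construction smoke test would make it non-empty; a rough sketch, assuming Mockito is available in this module:
   ```java
   @Test
   public void testFilterConstruction() {
     OzoneManager ozoneManager = mock(OzoneManager.class);
     ReclaimableDirFilter filter = new ReclaimableDirFilter(ozoneManager,
         mock(OmSnapshotManager.class), mock(SnapshotChainManager.class),
         null, mock(OMMetadataManager.class), mock(IOzoneManagerLock.class));
     assertNotNull(filter);
   }
   ```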



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableKeyFilter.java:
##########
@@ -0,0 +1,275 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OBJECT_ID_RECLAIM_BLOCKS;
+import static 
org.apache.hadoop.ozone.om.snapshot.SnapshotUtils.isBlockLocationInfoSame;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+
+/**
+ * Filter to return deleted keys which are reclaimable based on their presence 
in previous snapshot in
+ * the snapshot chain.
+ */
+public class ReclaimableKeyFilter extends ReclaimableFilter<OmKeyInfo> {
+  private final OzoneManager ozoneManager;
+  private final Map<String, Long> exclusiveSizeMap;
+  private final Map<String, Long> exclusiveReplicatedSizeMap;
+
+  /**
+   * @param omSnapshotManager
+   * @param snapshotChainManager
+   * @param currentSnapshotInfo  : If null the deleted keys in AOS needs to be processed, hence the latest snapshot
+   *                             in the snapshot chain corresponding to bucket key needs to be processed.
+   * @param metadataManager      : MetadataManager corresponding to snapshot or AOS.
+   * @param lock                 : Lock for Active OM.
+   */

Review Comment:
   Same as the other filters. Please keep them aligned or remove them.
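
   For example, something along these lines might work (the descriptions are guesses from the code, so please adjust):
   ```java
   /**
    * @param ozoneManager         : Active OzoneManager instance.
    * @param omSnapshotManager    : Snapshot manager used to load the previous snapshots in the chain.
    * @param snapshotChainManager : Chain manager used to resolve the previous snapshots.
    * @param currentSnapshotInfo  : If null the deleted keys in AOS needs to be processed, hence the latest snapshot
    *                               in the snapshot chain corresponding to bucket key needs to be processed.
    * @param metadataManager      : MetadataManager corresponding to snapshot or AOS.
    * @param lock                 : Lock for Active OM.
    */
   ```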



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableKeyFilter.java:
##########
@@ -0,0 +1,275 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OBJECT_ID_RECLAIM_BLOCKS;
+import static org.apache.hadoop.ozone.om.snapshot.SnapshotUtils.isBlockLocationInfoSame;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+
+/**
+ * Filter to return deleted keys which are reclaimable based on their presence in previous snapshot in
+ * the snapshot chain.
+ */
+public class ReclaimableKeyFilter extends ReclaimableFilter<OmKeyInfo> {
+  private final OzoneManager ozoneManager;
+  private final Map<String, Long> exclusiveSizeMap;
+  private final Map<String, Long> exclusiveReplicatedSizeMap;
+
+  /**
+   * @param omSnapshotManager
+   * @param snapshotChainManager
+   * @param currentSnapshotInfo  : If null the deleted keys in AOS needs to be processed, hence the latest snapshot
+   *                             in the snapshot chain corresponding to bucket key needs to be processed.
+   * @param metadataManager      : MetadataManager corresponding to snapshot or AOS.
+   * @param lock                 : Lock for Active OM.
+   */
+  public ReclaimableKeyFilter(OzoneManager ozoneManager,
+                              OmSnapshotManager omSnapshotManager, SnapshotChainManager snapshotChainManager,
+                              SnapshotInfo currentSnapshotInfo, OMMetadataManager metadataManager,
+                              IOzoneManagerLock lock) {
+    super(ozoneManager, omSnapshotManager, snapshotChainManager, currentSnapshotInfo, metadataManager, lock, 2);
+    this.ozoneManager = ozoneManager;
+    this.exclusiveSizeMap = new HashMap<>();
+    this.exclusiveReplicatedSizeMap = new HashMap<>();
+  }
+
+  @Override
+  protected String getVolumeName(Table.KeyValue<String, OmKeyInfo> keyValue) throws IOException {
+    return keyValue.getValue().getVolumeName();
+  }
+
+  @Override
+  protected String getBucketName(Table.KeyValue<String, OmKeyInfo> keyValue) throws IOException {
+    return keyValue.getValue().getBucketName();
+  }
+
+  @Override
+  protected Boolean isReclaimable(Table.KeyValue<String, OmKeyInfo> deletedKeyInfo) throws IOException {
+    ReferenceCounted<OmSnapshot> previousSnapshot = getPreviousOmSnapshot(1);
+    ReferenceCounted<OmSnapshot> previousToPreviousSnapshot = getPreviousOmSnapshot(0);
+
+    Table<String, OmKeyInfo> previousKeyTable = null;
+    Table<String, OmKeyInfo> previousPrevKeyTable = null;
+
+    Table<String, String> renamedTable = getMetadataManager().getSnapshotRenamedTable();
+    Table<String, String> prevRenamedTable = null;
+
+    SnapshotInfo previousSnapshotInfo = getPreviousSnapshotInfo(1);
+    SnapshotInfo prevPrevSnapshotInfo = getPreviousSnapshotInfo(0);
+
+    if (previousSnapshot != null) {
+      previousKeyTable = previousSnapshot.get().getMetadataManager().getKeyTable(getBucketInfo().getBucketLayout());
+      prevRenamedTable = previousSnapshot.get().getMetadataManager().getSnapshotRenamedTable();
+    }
+    if (previousToPreviousSnapshot != null) {
+      previousPrevKeyTable = previousToPreviousSnapshot.get().getMetadataManager()
+          .getKeyTable(getBucketInfo().getBucketLayout());
+    }
+    if (isKeyReclaimable(previousKeyTable, renamedTable, deletedKeyInfo.getValue(),
+        getBucketInfo(), getVolumeId(),
+        null)) {
+      return true;
+    }
+    calculateExclusiveSize(previousSnapshotInfo, prevPrevSnapshotInfo, deletedKeyInfo.getValue(), getBucketInfo(),
+        getVolumeId(), renamedTable, previousKeyTable, prevRenamedTable, previousPrevKeyTable, exclusiveSizeMap,
+        exclusiveReplicatedSizeMap);
+    return false;
+  }
+
+
+  public Map<String, Long> getExclusiveSizeMap() {
+    return exclusiveSizeMap;
+  }
+
+  public Map<String, Long> getExclusiveReplicatedSizeMap() {
+    return exclusiveReplicatedSizeMap;
+  }
+
+  private boolean isKeyReclaimable(
+      Table<String, OmKeyInfo> previousKeyTable,
+      Table<String, String> renamedTable,
+      OmKeyInfo deletedKeyInfo, OmBucketInfo bucketInfo,
+      long volumeId, HddsProtos.KeyValue.Builder renamedKeyBuilder)
+      throws IOException {
+
+    String dbKey;
+    // Handle case when the deleted snapshot is the first snapshot.
+    if (previousKeyTable == null) {
+      return true;
+    }
+
+    // These are uncommitted blocks wrapped into a pseudo KeyInfo
+    if (deletedKeyInfo.getObjectID() == OBJECT_ID_RECLAIM_BLOCKS) {
+      return true;
+    }
+
+    // Construct keyTable or fileTable DB key depending on the bucket type
+    if (bucketInfo.getBucketLayout().isFileSystemOptimized()) {
+      dbKey = ozoneManager.getMetadataManager().getOzonePathKey(
+          volumeId,
+          bucketInfo.getObjectID(),
+          deletedKeyInfo.getParentObjectID(),
+          deletedKeyInfo.getFileName());
+    } else {
+      dbKey = ozoneManager.getMetadataManager().getOzoneKey(
+          deletedKeyInfo.getVolumeName(),
+          deletedKeyInfo.getBucketName(),
+          deletedKeyInfo.getKeyName());
+    }
+
+    /*
+     snapshotRenamedTable:
+     1) /volumeName/bucketName/objectID ->
+                 /volumeId/bucketId/parentId/fileName (FSO)
+     2) /volumeName/bucketName/objectID ->
+                /volumeName/bucketName/keyName (non-FSO)
+    */
+    String dbRenameKey = ozoneManager.getMetadataManager().getRenameKey(
+        deletedKeyInfo.getVolumeName(), deletedKeyInfo.getBucketName(),
+        deletedKeyInfo.getObjectID());
+
+    // Condition: key should not exist in snapshotRenamedTable
+    // of the current snapshot and keyTable of the previous snapshot.
+    // Check key exists in renamedTable of the Snapshot
+    String renamedKey = renamedTable.getIfExist(dbRenameKey);
+
+    if (renamedKey != null && renamedKeyBuilder != null) {
+      renamedKeyBuilder.setKey(dbRenameKey).setValue(renamedKey);
+    }
+    // previousKeyTable is fileTable if the bucket is FSO,
+    // otherwise it is the keyTable.
+    OmKeyInfo prevKeyInfo = renamedKey != null ? previousKeyTable
+        .get(renamedKey) : previousKeyTable.get(dbKey);
+
+    if (prevKeyInfo == null ||
+        prevKeyInfo.getObjectID() != deletedKeyInfo.getObjectID()) {
+      return true;
+    }
+
+    // For key overwrite the objectID will remain the same. In this
+    // case we need to check if OmKeyLocationInfo is also same.
+    return !isBlockLocationInfoSame(prevKeyInfo, deletedKeyInfo);
+  }
+
+  /**
+   * To calculate Exclusive Size for current snapshot, Check
+   * the next snapshot deletedTable if the deleted key is
+   * referenced in current snapshot and not referenced in the
+   * previous snapshot then that key is exclusive to the current
+   * snapshot. Here since we are only iterating through
+   * deletedTable we can check the previous and previous to
+   * previous snapshot to achieve the same.
+   * previousSnapshot - Snapshot for which exclusive size is
+   *                    being calculated.
+   * currSnapshot - Snapshot's deletedTable is used to calculate
+   *                previousSnapshot snapshot's exclusive size.
+   * previousToPrevSnapshot - Snapshot which is used to check
+   *                 if key is exclusive to previousSnapshot.
+   */
+  @SuppressWarnings("checkstyle:ParameterNumber")
+  public void calculateExclusiveSize(
+      SnapshotInfo previousSnapshot,
+      SnapshotInfo previousToPrevSnapshot,
+      OmKeyInfo keyInfo,
+      OmBucketInfo bucketInfo, long volumeId,
+      Table<String, String> snapRenamedTable,
+      Table<String, OmKeyInfo> previousKeyTable,
+      Table<String, String> prevRenamedTable,
+      Table<String, OmKeyInfo> previousToPrevKeyTable,
+      Map<String, Long> exclusiveSizes,
+      Map<String, Long> exclusiveReplicatedSizes) throws IOException {
+    String prevSnapKey = previousSnapshot.getTableKey();
+    long exclusiveReplicatedSize = exclusiveReplicatedSizes.getOrDefault(
+            prevSnapKey, 0L) + keyInfo.getReplicatedSize();
+    long exclusiveSize = exclusiveSizes.getOrDefault(prevSnapKey, 0L) + keyInfo.getDataSize();
+
+    // If there is no previous to previous snapshot, then
+    // the previous snapshot is the first snapshot.
+    if (previousToPrevSnapshot == null) {
+      exclusiveSizes.put(prevSnapKey, exclusiveSize);
+      exclusiveReplicatedSizes.put(prevSnapKey,
+          exclusiveReplicatedSize);
+    } else {
+      OmKeyInfo keyInfoPrevSnapshot = getPreviousSnapshotKeyName(
+          keyInfo, bucketInfo, volumeId,
+          snapRenamedTable, previousKeyTable);
+      OmKeyInfo keyInfoPrevToPrevSnapshot = getPreviousSnapshotKeyName(
+          keyInfoPrevSnapshot, bucketInfo, volumeId,
+          prevRenamedTable, previousToPrevKeyTable);
+      // If the previous to previous snapshot doesn't
+      // have the key, then it is exclusive size for the
+      // previous snapshot.
+      if (keyInfoPrevToPrevSnapshot == null) {
+        exclusiveSizes.put(prevSnapKey, exclusiveSize);
+        exclusiveReplicatedSizes.put(prevSnapKey,
+            exclusiveReplicatedSize);
+      }
+    }
+  }
+
+  private OmKeyInfo getPreviousSnapshotKeyName(OmKeyInfo keyInfo, OmBucketInfo bucketInfo, long volumeId,
+      Table<String, String> snapRenamedTable, Table<String, OmKeyInfo> previousKeyTable) throws IOException {
+
+    if (keyInfo == null) {
+      return null;
+    }
+
+    String dbKeyPrevSnap;
+    if (bucketInfo.getBucketLayout().isFileSystemOptimized()) {
+      dbKeyPrevSnap = ozoneManager.getMetadataManager().getOzonePathKey(
+          volumeId,
+          bucketInfo.getObjectID(),
+          keyInfo.getParentObjectID(),
+          keyInfo.getFileName());
+    } else {
+      dbKeyPrevSnap = ozoneManager.getMetadataManager().getOzoneKey(
+          keyInfo.getVolumeName(),
+          keyInfo.getBucketName(),
+          keyInfo.getKeyName());
+    }
+
+    String dbRenameKey = ozoneManager.getMetadataManager().getRenameKey(
+        keyInfo.getVolumeName(),
+        keyInfo.getBucketName(),
+        keyInfo.getObjectID());
+
+    String renamedKey = snapRenamedTable.getIfExist(dbRenameKey);

Review Comment:
   This is the same as lines 137-164. Maybe create a helper function for it.
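
   A rough sketch of what such a helper could look like, lifted from the two duplicated blocks (the method name is just a suggestion):
   ```java
   /**
    * Resolves the key from the given key table of a previous snapshot,
    * following the snapshotRenamedTable entry if the key was renamed.
    */
   private OmKeyInfo lookupKeyInPreviousSnapshot(OmKeyInfo keyInfo, OmBucketInfo bucketInfo, long volumeId,
       Table<String, String> renamedTable, Table<String, OmKeyInfo> previousKeyTable) throws IOException {
     // Construct the keyTable/fileTable DB key depending on the bucket layout.
     String dbKey;
     if (bucketInfo.getBucketLayout().isFileSystemOptimized()) {
       dbKey = ozoneManager.getMetadataManager().getOzonePathKey(
           volumeId, bucketInfo.getObjectID(),
           keyInfo.getParentObjectID(), keyInfo.getFileName());
     } else {
       dbKey = ozoneManager.getMetadataManager().getOzoneKey(
           keyInfo.getVolumeName(), keyInfo.getBucketName(), keyInfo.getKeyName());
     }
     // If the key was renamed, look it up under the renamed key instead.
     String dbRenameKey = ozoneManager.getMetadataManager().getRenameKey(
         keyInfo.getVolumeName(), keyInfo.getBucketName(), keyInfo.getObjectID());
     String renamedKey = renamedTable.getIfExist(dbRenameKey);
     return renamedKey != null ? previousKeyTable.get(renamedKey) : previousKeyTable.get(dbKey);
   }
   ```
   isKeyReclaimable would still need the dbRenameKey/renamedKey pair for its renamedKeyBuilder parameter, so the helper might have to expose those as well (or the builder handling could stay at the call site).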



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/filter/ReclaimableKeyFilter.java:
##########
@@ -0,0 +1,275 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.snapshot.filter;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OBJECT_ID_RECLAIM_BLOCKS;
+import static org.apache.hadoop.ozone.om.snapshot.SnapshotUtils.isBlockLocationInfoSame;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmSnapshot;
+import org.apache.hadoop.ozone.om.OmSnapshotManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.SnapshotChainManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
+import org.apache.hadoop.ozone.om.lock.IOzoneManagerLock;
+import org.apache.hadoop.ozone.om.snapshot.ReferenceCounted;
+
+/**
+ * Filter to return deleted keys which are reclaimable based on their presence in previous snapshot in
+ * the snapshot chain.
+ */
+public class ReclaimableKeyFilter extends ReclaimableFilter<OmKeyInfo> {
+  private final OzoneManager ozoneManager;
+  private final Map<String, Long> exclusiveSizeMap;
+  private final Map<String, Long> exclusiveReplicatedSizeMap;
+
+  /**
+   * @param omSnapshotManager
+   * @param snapshotChainManager
+   * @param currentSnapshotInfo  : If null the deleted keys in AOS needs to be processed, hence the latest snapshot
+   *                             in the snapshot chain corresponding to bucket key needs to be processed.
+   * @param metadataManager      : MetadataManager corresponding to snapshot or AOS.
+   * @param lock                 : Lock for Active OM.
+   */
+  public ReclaimableKeyFilter(OzoneManager ozoneManager,
+                              OmSnapshotManager omSnapshotManager, SnapshotChainManager snapshotChainManager,
+                              SnapshotInfo currentSnapshotInfo, OMMetadataManager metadataManager,
+                              IOzoneManagerLock lock) {
+    super(ozoneManager, omSnapshotManager, snapshotChainManager, currentSnapshotInfo, metadataManager, lock, 2);
+    this.ozoneManager = ozoneManager;
+    this.exclusiveSizeMap = new HashMap<>();
+    this.exclusiveReplicatedSizeMap = new HashMap<>();
+  }
+
+  @Override
+  protected String getVolumeName(Table.KeyValue<String, OmKeyInfo> keyValue) throws IOException {
+    return keyValue.getValue().getVolumeName();
+  }
+
+  @Override
+  protected String getBucketName(Table.KeyValue<String, OmKeyInfo> keyValue) throws IOException {
+    return keyValue.getValue().getBucketName();
+  }
+
+  @Override
+  protected Boolean isReclaimable(Table.KeyValue<String, OmKeyInfo> deletedKeyInfo) throws IOException {
+    ReferenceCounted<OmSnapshot> previousSnapshot = getPreviousOmSnapshot(1);
+    ReferenceCounted<OmSnapshot> previousToPreviousSnapshot = getPreviousOmSnapshot(0);
+
+    Table<String, OmKeyInfo> previousKeyTable = null;
+    Table<String, OmKeyInfo> previousPrevKeyTable = null;
+
+    Table<String, String> renamedTable = getMetadataManager().getSnapshotRenamedTable();
+    Table<String, String> prevRenamedTable = null;
+
+    SnapshotInfo previousSnapshotInfo = getPreviousSnapshotInfo(1);
+    SnapshotInfo prevPrevSnapshotInfo = getPreviousSnapshotInfo(0);
+
+    if (previousSnapshot != null) {
+      previousKeyTable = previousSnapshot.get().getMetadataManager().getKeyTable(getBucketInfo().getBucketLayout());
+      prevRenamedTable = previousSnapshot.get().getMetadataManager().getSnapshotRenamedTable();
+    }
+    if (previousToPreviousSnapshot != null) {
+      previousPrevKeyTable = previousToPreviousSnapshot.get().getMetadataManager()
+          .getKeyTable(getBucketInfo().getBucketLayout());
+    }
+    if (isKeyReclaimable(previousKeyTable, renamedTable, deletedKeyInfo.getValue(),
+        getBucketInfo(), getVolumeId(),
+        null)) {
+      return true;
+    }
+    calculateExclusiveSize(previousSnapshotInfo, prevPrevSnapshotInfo, deletedKeyInfo.getValue(), getBucketInfo(),
+        getVolumeId(), renamedTable, previousKeyTable, prevRenamedTable, previousPrevKeyTable, exclusiveSizeMap,
+        exclusiveReplicatedSizeMap);
+    return false;
+  }
+
+
+  public Map<String, Long> getExclusiveSizeMap() {
+    return exclusiveSizeMap;
+  }
+
+  public Map<String, Long> getExclusiveReplicatedSizeMap() {
+    return exclusiveReplicatedSizeMap;
+  }
+
+  private boolean isKeyReclaimable(
+      Table<String, OmKeyInfo> previousKeyTable,
+      Table<String, String> renamedTable,
+      OmKeyInfo deletedKeyInfo, OmBucketInfo bucketInfo,
+      long volumeId, HddsProtos.KeyValue.Builder renamedKeyBuilder)
+      throws IOException {
+
+    String dbKey;
+    // Handle case when the deleted snapshot is the first snapshot.
+    if (previousKeyTable == null) {
+      return true;
+    }
+
+    // These are uncommitted blocks wrapped into a pseudo KeyInfo
+    if (deletedKeyInfo.getObjectID() == OBJECT_ID_RECLAIM_BLOCKS) {
+      return true;
+    }
+
+    // Construct keyTable or fileTable DB key depending on the bucket type
+    if (bucketInfo.getBucketLayout().isFileSystemOptimized()) {
+      dbKey = ozoneManager.getMetadataManager().getOzonePathKey(
+          volumeId,
+          bucketInfo.getObjectID(),
+          deletedKeyInfo.getParentObjectID(),
+          deletedKeyInfo.getFileName());
+    } else {
+      dbKey = ozoneManager.getMetadataManager().getOzoneKey(
+          deletedKeyInfo.getVolumeName(),
+          deletedKeyInfo.getBucketName(),
+          deletedKeyInfo.getKeyName());
+    }
+
+    /*
+     snapshotRenamedTable:
+     1) /volumeName/bucketName/objectID ->
+                 /volumeId/bucketId/parentId/fileName (FSO)
+     2) /volumeName/bucketName/objectID ->
+                /volumeName/bucketName/keyName (non-FSO)
+    */
+    String dbRenameKey = ozoneManager.getMetadataManager().getRenameKey(
+        deletedKeyInfo.getVolumeName(), deletedKeyInfo.getBucketName(),
+        deletedKeyInfo.getObjectID());
+
+    // Condition: key should not exist in snapshotRenamedTable
+    // of the current snapshot and keyTable of the previous snapshot.
+    // Check key exists in renamedTable of the Snapshot
+    String renamedKey = renamedTable.getIfExist(dbRenameKey);
+
+    if (renamedKey != null && renamedKeyBuilder != null) {
+      renamedKeyBuilder.setKey(dbRenameKey).setValue(renamedKey);
+    }
+    // previousKeyTable is fileTable if the bucket is FSO,
+    // otherwise it is the keyTable.
+    OmKeyInfo prevKeyInfo = renamedKey != null ? previousKeyTable
+        .get(renamedKey) : previousKeyTable.get(dbKey);
+
+    if (prevKeyInfo == null ||
+        prevKeyInfo.getObjectID() != deletedKeyInfo.getObjectID()) {
+      return true;
+    }
+
+    // For key overwrite the objectID will remain the same. In this
+    // case we need to check if OmKeyLocationInfo is also same.
+    return !isBlockLocationInfoSame(prevKeyInfo, deletedKeyInfo);
+  }
+
+  /**
+   * To calculate Exclusive Size for current snapshot, Check
+   * the next snapshot deletedTable if the deleted key is
+   * referenced in current snapshot and not referenced in the
+   * previous snapshot then that key is exclusive to the current
+   * snapshot. Here since we are only iterating through
+   * deletedTable we can check the previous and previous to
+   * previous snapshot to achieve the same.
+   * previousSnapshot - Snapshot for which exclusive size is
+   *                    being calculated.
+   * currSnapshot - Snapshot's deletedTable is used to calculate
+   *                previousSnapshot snapshot's exclusive size.
+   * previousToPrevSnapshot - Snapshot which is used to check
+   *                 if key is exclusive to previousSnapshot.
+   */
+  @SuppressWarnings("checkstyle:ParameterNumber")
+  public void calculateExclusiveSize(

Review Comment:
   Should it be part of the reclaimable filter?
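
   If it were pulled out, one option (names are hypothetical, just to illustrate the alternative) would be a small collaborator that owns the size accounting, with the filter delegating to it:
   ```java
   /** Accumulates per-snapshot exclusive sizes keyed by the previous snapshot's table key. */
   class ExclusiveSizeTracker {
     private final Map<String, Long> exclusiveSizeMap = new HashMap<>();
     private final Map<String, Long> exclusiveReplicatedSizeMap = new HashMap<>();

     /** Adds the key's data and replicated sizes to the totals tracked for the given snapshot. */
     void add(String prevSnapshotTableKey, OmKeyInfo keyInfo) {
       exclusiveSizeMap.merge(prevSnapshotTableKey, keyInfo.getDataSize(), Long::sum);
       exclusiveReplicatedSizeMap.merge(prevSnapshotTableKey, keyInfo.getReplicatedSize(), Long::sum);
     }

     Map<String, Long> getExclusiveSizeMap() {
       return exclusiveSizeMap;
     }

     Map<String, Long> getExclusiveReplicatedSizeMap() {
       return exclusiveReplicatedSizeMap;
     }
   }
   ```
   That would keep ReclaimableKeyFilter focused on the reclaimability decision, but the exclusive-size bookkeeping would then have to be wired through wherever the filter is used, so keeping it here may be simpler.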



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
