adoroszlai commented on code in PR #8683:
URL: https://github.com/apache/ozone/pull/8683#discussion_r2164440067


##########
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/om/PrefixParser.java:
##########
@@ -182,17 +182,13 @@ public BucketLayout getBucketLayout() {
     return BucketLayout.FILE_SYSTEM_OPTIMIZED;
   }
 
-  private void dumpTableInfo(Types type,
+  private <T extends WithParentObjectId> void dumpTableInfo(Types type,
       org.apache.hadoop.fs.Path effectivePath,
-      Table<String, ? extends WithParentObjectId> table,
+      Table<String, T> table,
       long volumeId, long bucketId, long lastObjectId)
       throws IOException {
-    MetadataKeyFilters.KeyPrefixFilter filter = getPrefixFilter(
-            volumeId, bucketId, lastObjectId);
-
-    List<? extends KeyValue
-        <String, ? extends WithParentObjectId>> infoList =
-        table.getRangeKVs(null, 1000, null, filter);
+    final KeyPrefixFilter filter = getPrefixFilter(volumeId, bucketId, lastObjectId);
+    final List<KeyValue<String, T>> infoList = table.getRangeKVs(null, 1000, null, filter, false);
 
     for (KeyValue<String, ? extends WithParentObjectId> info :infoList) {

Review Comment:
   nit: this can also be replaced with `T`.
   
   ```suggestion
       for (KeyValue<String, T> info :infoList) {
   ```
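
   To illustrate the nit in isolation: a minimal, self-contained sketch (with stand-in `KeyValue`/`WithParentObjectId` types, not the real Ozone classes) of why naming the type parameter `T` once and reusing it in the loop reads better than repeating the wildcard bound:

   ```java
   import java.util.List;

   // Hypothetical stand-ins for the Ozone types; only the shape matters here.
   public class GenericDumpSketch {
     interface WithParentObjectId { long getParentObjectId(); }

     record KeyValue<K, V>(K key, V value) { }

     // With a wildcard, every element's value type is anonymous and the
     // bound has to be restated wherever an element is used.
     static void dumpWildcard(List<? extends KeyValue<String, ? extends WithParentObjectId>> infoList) {
       for (KeyValue<String, ? extends WithParentObjectId> info : infoList) {
         System.out.println(info.value().getParentObjectId());
       }
     }

     // With a method-level T, the element type is named once and the loop
     // variable can simply be KeyValue<String, T>.
     static <T extends WithParentObjectId> void dumpTyped(List<KeyValue<String, T>> infoList) {
       for (KeyValue<String, T> info : infoList) {
         System.out.println(info.value().getParentObjectId());
       }
     }

     public static void main(String[] args) {
       WithParentObjectId obj = () -> 42L;
       dumpWildcard(List.of(new KeyValue<>("k", obj)));
       dumpTyped(List.of(new KeyValue<>("k", obj)));
     }
   }
   ```

   Both methods compile and behave identically; the `T` version just avoids restating the wildcard bound on the loop variable, which is the whole point of the suggestion.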



##########
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBTable.java:
##########
@@ -289,42 +284,26 @@ && get(startKey) == null) {
 
       while (it.hasNext() && result.size() < count) {
         final KeyValue<byte[], byte[]> currentEntry = it.next();
-        byte[] currentKey = currentEntry.getKey();
-
-        if (filters == null) {
+        if (filter == null || filter.filterKey(currentEntry.getKey())) {
           result.add(currentEntry);
-        } else {
-          // NOTE: the preKey and nextKey are never checked
-          // in all existing underlying filters, so they could
-          // be safely as null here.
-          if (Arrays.stream(filters)
-                  .allMatch(entry -> entry.filterKey(null,
-                          currentKey, null))) {
-            result.add(currentEntry);
-          } else {
-            if (!result.isEmpty() && sequential) {
-              // if the caller asks for a sequential range of results,
-              // and we met a dis-match, abort iteration from here.
-              // if result is empty, we continue to look for the first match.
-              break;
-            }
-          }
+        } else if (!result.isEmpty() && sequential) {
+          // if the caller asks for a sequential range of results,
+          // and we met a dis-match, abort iteration from here.
+          // if result is empty, we continue to look for the first match.
+          break;
         }
       }
     } finally {
       long end = Time.monotonicNow();
       long timeConsumed = end - start;
       if (LOG.isDebugEnabled()) {
-        if (filters != null) {
-          for (MetadataKeyFilters.MetadataKeyFilter filter : filters) {
-            int scanned = filter.getKeysScannedNum();
-            int hinted = filter.getKeysHintedNum();
-            if (scanned > 0 || hinted > 0) {
-              LOG.debug(
-                  "getRangeKVs ({}) numOfKeysScanned={}, numOfKeysHinted={}",
-                  filter.getClass().getSimpleName(), filter.getKeysScannedNum(),
-                  filter.getKeysHintedNum());
-            }
+        if (filter != null) {
+          int scanned = filter.getKeysScannedNum();
+          int hinted = filter.getKeysHintedNum();
+          if (scanned > 0 || hinted > 0) {
+            LOG.debug("getRangeKVs ({}) numOfKeysScanned={}, numOfKeysHinted={}",
+                filter.getClass().getSimpleName(), filter.getKeysScannedNum(),
+                filter.getKeysHintedNum());

Review Comment:
   nit: can use `scanned` and `hinted` in the message, too.
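   
   A small runnable sketch of the nit (with a stand-in `KeyPrefixFilter` and `printf` in place of SLF4J): the locals `scanned` and `hinted` are already computed for the guard, so they can be passed to the log call instead of invoking the getters a second time.
   
   ```java
   // Stand-in filter class; the real KeyPrefixFilter lives in Ozone.
   public class LogLocalsSketch {
     static class KeyPrefixFilter {
       int getKeysScannedNum() { return 10; }
       int getKeysHintedNum() { return 3; }
     }
   
     public static void main(String[] args) {
       KeyPrefixFilter filter = new KeyPrefixFilter();
       int scanned = filter.getKeysScannedNum();
       int hinted = filter.getKeysHintedNum();
       if (scanned > 0 || hinted > 0) {
         // Reuse the locals computed for the guard above, rather than
         // calling getKeysScannedNum()/getKeysHintedNum() again.
         System.out.printf("getRangeKVs (%s) numOfKeysScanned=%d, numOfKeysHinted=%d%n",
             filter.getClass().getSimpleName(), scanned, hinted);
       }
     }
   }
   ```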



##########
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBTable.java:
##########
@@ -266,8 +260,9 @@ public void loadFromFile(File externalFile) throws RocksDatabaseException {
     RDBSstFileLoader.load(db, family, externalFile);
   }
 
-  private List<KeyValue<byte[], byte[]>> getRangeKVs(byte[] startKey, int count, boolean sequential, byte[] prefix,
-      MetadataKeyFilters.MetadataKeyFilter... filters) throws RocksDatabaseException, CodecException {
+  private List<KeyValue<byte[], byte[]>> getRangeKVsImpl(
+      byte[] startKey, int count, byte[] prefix, KeyPrefixFilter filter, boolean sequential)

Review Comment:
   It looks like the signature is the same for `getRangeKVs` and `getRangeKVsImpl`, so do we need to keep the latter?
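   
   The point can be shown with a minimal sketch (simplified stand-in signatures, not the real `RDBTable` API): when a public method does nothing but forward to a same-signature private `*Impl`, the forwarding layer can be removed by inlining the body.
   
   ```java
   import java.util.List;
   
   public class InlineDelegateSketch {
     // Before: the public method only delegates to a private impl with an
     // identical parameter list, adding a layer with no behavior of its own.
     static List<String> getRangeKVsDelegating(String startKey, int count) {
       return getRangeKVsImpl(startKey, count);
     }
   
     private static List<String> getRangeKVsImpl(String startKey, int count) {
       return List.of(startKey + ":" + count);
     }
   
     // After: a single method carrying the body directly.
     static List<String> getRangeKVs(String startKey, int count) {
       return List.of(startKey + ":" + count);
     }
   
     public static void main(String[] args) {
       // Both shapes return the same result; only the indirection differs.
       System.out.println(getRangeKVs("a", 2).equals(getRangeKVsDelegating("a", 2)));
     }
   }
   ```
   
   Keeping a separate `*Impl` only pays off when the public wrapper adds something (argument validation, locking, metrics); if it does not, collapsing the two is the simpler shape.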



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

