codope commented on code in PR #9343:
URL: https://github.com/apache/hudi/pull/9343#discussion_r1283447863


##########
hudi-common/src/main/java/org/apache/hudi/metadata/FileSystemBackedTableMetadata.java:
##########
@@ -168,57 +167,66 @@ private List<String> getPartitionPathWithPathPrefixUsingFilterExpression(String
       // TODO: Get the parallelism from HoodieWriteConfig
       int listingParallelism = Math.min(DEFAULT_LISTING_PARALLELISM, pathsToList.size());
 
-      // List all directories in parallel:
-      // if current directory contains PartitionMetadata, add it to result
-      // if current directory does not contain PartitionMetadata, add its subdirectories to queue to be processed.
+      // List all directories in parallel
       engineContext.setJobStatus(this.getClass().getSimpleName(), "Listing all partitions with prefix " + relativePathPrefix);
-      // result below holds a list of pairs. first entry in the pair optionally holds a deduced partition path,
-      // and second entry optionally holds a directory path to be processed further.
-      List<Pair<Option<String>, Option<Path>>> result = engineContext.flatMap(pathsToList, path -> {
+      List<FileStatus> dirToFileListing = engineContext.flatMap(pathsToList, path -> {
         FileSystem fileSystem = path.getFileSystem(hadoopConf.get());
-        if (HoodiePartitionMetadata.hasPartitionMetadata(fileSystem, path)) {
-          return Stream.of(Pair.of(Option.of(FSUtils.getRelativePartitionPath(dataBasePath.get(), path)), Option.empty()));
-        }
-        return Arrays.stream(fileSystem.listStatus(path, p -> {
-          try {
-            return fileSystem.isDirectory(p) && !p.getName().equals(HoodieTableMetaClient.METAFOLDER_NAME);
-          } catch (IOException e) {
-            // noop
-          }
-          return false;
-        })).map(status -> Pair.of(Option.empty(), Option.of(status.getPath())));
+        return Arrays.stream(fileSystem.listStatus(path));
       }, listingParallelism);
       pathsToList.clear();
 
-      partitionPaths.addAll(result.stream().filter(entry -> entry.getKey().isPresent())
-          .map(entry -> entry.getKey().get())
-          .filter(relativePartitionPath -> fullBoundExpr instanceof Predicates.TrueExpression
-              || (Boolean) fullBoundExpr.eval(
-                      extractPartitionValues(partitionFields, relativePartitionPath, urlEncodePartitioningEnabled)))
-          .collect(Collectors.toList()));
-
-      Expression partialBoundExpr;
-      // If partitionPaths is nonEmpty, we're already at the last path level, and all paths
-      // are filtered already.
-      if (needPushDownExpressions && partitionPaths.isEmpty()) {
-        // Here we assume the path level matches the number of partition columns, so we'll rebuild
-        // a new schema based on the current path level.
-        // e.g. partition columns are <region, date, hh>; if we're listing the second level, then
-        // currentSchema would be <region, date>
-        // `PartialBindVisitor` will bind a reference if it can be found in `currentSchema`, otherwise
-        // it will change the expression to `alwaysTrue`. See `PartialBindVisitor` for details.
-        Types.RecordType currentSchema = Types.RecordType.get(partitionFields.fields().subList(0, ++currentPartitionLevel));
-        PartialBindVisitor partialBindVisitor = new PartialBindVisitor(currentSchema, caseSensitive);
-        partialBoundExpr = pushedExpr.accept(partialBindVisitor);
-      } else {
-        partialBoundExpr = Predicates.alwaysTrue();
-      }
+      // if current directory contains PartitionMetadata, add it to result
+      // if current directory does not contain PartitionMetadata, add it to queue to be processed.
+      int fileListingParallelism = Math.min(DEFAULT_LISTING_PARALLELISM, dirToFileListing.size());
+      if (!dirToFileListing.isEmpty()) {
+        // result below holds a list of pairs. first entry in the pair optionally holds a deduced partition path,
+        // and second entry optionally holds a directory path to be processed further.
+        engineContext.setJobStatus(this.getClass().getSimpleName(), "Processing listed partitions");
+        List<Pair<Option<String>, Option<Path>>> result = engineContext.map(dirToFileListing, fileStatus -> {
+          FileSystem fileSystem = fileStatus.getPath().getFileSystem(hadoopConf.get());
+          if (fileStatus.isDirectory()) {
+            if (HoodiePartitionMetadata.hasPartitionMetadata(fileSystem, fileStatus.getPath())) {

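For context, both the removed and the added code implement a breadth-first partition discovery loop: each round lists the queued directories in parallel; a directory that carries partition metadata is emitted as a partition path, and any other subdirectory (except the `.hoodie` metafolder) is queued for the next round. Below is a minimal, self-contained sketch of that control flow, with an in-memory map standing in for a real `FileSystem` and `parallelStream` standing in for `engineContext.flatMap`; the class and field names are illustrative only, not Hudi's API.

```java
import java.util.*;
import java.util.stream.*;

public class PartitionDiscoverySketch {
  // Hypothetical in-memory "filesystem": each directory maps to its child directories.
  static final Map<String, List<String>> CHILDREN = Map.of(
      "/tbl", List.of("/tbl/2023-01-01", "/tbl/2023-01-02", "/tbl/.hoodie"),
      "/tbl/2023-01-01", List.of(),
      "/tbl/2023-01-02", List.of());
  // Directories that contain a .hoodie_partition_metadata marker, i.e. leaf partitions.
  static final Set<String> HAS_PARTITION_METADATA =
      Set.of("/tbl/2023-01-01", "/tbl/2023-01-02");

  public static List<String> discoverPartitions(String basePath) {
    List<String> partitions = new ArrayList<>();
    Deque<String> pathsToList = new ArrayDeque<>();
    pathsToList.add(basePath);
    while (!pathsToList.isEmpty()) {
      // Drain the queue into one batch; in Hudi this batch is fanned out via
      // engineContext.flatMap with bounded parallelism.
      List<String> batch = new ArrayList<>(pathsToList);
      pathsToList.clear();
      List<Map.Entry<String, Boolean>> listed = batch.parallelStream()
          .filter(p -> !p.endsWith("/.hoodie"))  // skip the metafolder
          .map(p -> Map.entry(p, HAS_PARTITION_METADATA.contains(p)))
          .collect(Collectors.toList());
      for (Map.Entry<String, Boolean> entry : listed) {
        if (entry.getValue()) {
          partitions.add(entry.getKey());  // leaf partition found, emit it
        } else {
          // not a partition yet: queue its subdirectories for the next round
          pathsToList.addAll(CHILDREN.getOrDefault(entry.getKey(), List.of()));
        }
      }
    }
    Collections.sort(partitions);
    return partitions;
  }

  public static void main(String[] args) {
    System.out.println(discoverPartitions("/tbl"));
  }
}
```

Note the design difference the diff introduces: the pre-revert code ran the `hasPartitionMetadata` check inside the listing closure itself, while the new code first lists every `FileStatus` and then runs the check in a separate distributed `map` stage, which adds a second pass over all listed entries.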
Review Comment:
   @wecharyu I ran into this issue while running a query with no partition predicate (listing all 1823 partitions). As you can see from the screenshots, the "Listing all partitions with prefix" job took considerably longer, and after the revert the latency came back down. This was not a one-off: I ran with and without the revert again and observed the same behavior.
   Spark jobs before revert
   <img width="1769" alt="Screenshot 2023-08-03 at 9 51 19 PM" src="https://github.com/apache/hudi/assets/16440354/488ac169-5735-4865-8464-231946f4a165">
   Job after revert
   <img width="1767" alt="Screenshot 2023-08-03 at 9 50 58 PM" src="https://github.com/apache/hudi/assets/16440354/e0c2208d-4fa0-48cc-9dad-ec79ef8b165c">
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
