[ 
https://issues.apache.org/jira/browse/HADOOP-17428?focusedWorklogId=648988&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-648988
 ]

ASF GitHub Bot logged work on HADOOP-17428:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 10/Sep/21 02:25
            Start Date: 10/Sep/21 02:25
    Worklog Time Spent: 10m 
      Work Description: sumangala-patki commented on a change in pull request 
#2549:
URL: https://github.com/apache/hadoop/pull/2549#discussion_r705849850



##########
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ContentSummaryProcessor.java
##########
@@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.util.concurrent.CompletionService;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorCompletionService;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.fs.azurebfs.utils.TracingContext;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.ContentSummary;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+
+public class ContentSummaryProcessor {
+  private static final int CORE_POOL_SIZE = 1;
+  private static final int MAX_THREAD_COUNT = 16;
+  private static final int KEEP_ALIVE_TIME = 5;
+  private static final int POLL_TIMEOUT = 100;
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ContentSummaryProcessor.class);
+  private final AtomicLong fileCount = new AtomicLong(0L);
+  private final AtomicLong directoryCount = new AtomicLong(0L);
+  private final AtomicLong totalBytes = new AtomicLong(0L);
+  private final AtomicInteger numTasks = new AtomicInteger(0);
+  private final ListingSupport abfsStore;
+  private final ExecutorService executorService = new ThreadPoolExecutor(
+      CORE_POOL_SIZE, MAX_THREAD_COUNT, KEEP_ALIVE_TIME, TimeUnit.SECONDS,
+      new SynchronousQueue<>());
+  private final CompletionService<Void> completionService =
+      new ExecutorCompletionService<>(executorService);
+  private final LinkedBlockingQueue<FileStatus> queue =
+      new LinkedBlockingQueue<>();
+
+  /**
+   * Processes a given path to determine the count of subdirectories, files,
+   * and the total number of bytes under it.
+   * @param abfsStore instance of AzureBlobFileSystemStore, used to make
+   * listStatus calls to the server
+   */
+  public ContentSummaryProcessor(ListingSupport abfsStore) {
+    this.abfsStore = abfsStore;
+  }
+
+  public ContentSummary getContentSummary(Path path,
+      TracingContext tracingContext)
+          throws IOException, ExecutionException, InterruptedException {
+    try {
+      processDirectoryTree(path, tracingContext);
+      while (!queue.isEmpty() || numTasks.get() > 0) {
+        try {
+          completionService.take().get();
+        } finally {
+          numTasks.decrementAndGet();
+          LOG.debug("FileStatus queue size = {}, number of submitted unfinished tasks = {}, active thread count = {}",
+              queue.size(), numTasks, ((ThreadPoolExecutor) executorService).getActiveCount());
+        }
+      }
+    } finally {
+      executorService.shutdownNow();
+      LOG.debug("Executor shutdown");
+    }
+    LOG.debug("Processed content summary of subtree under given path");
+    ContentSummary.Builder builder = new ContentSummary.Builder()
+        .directoryCount(directoryCount.get()).fileCount(fileCount.get())
+        .length(totalBytes.get()).spaceConsumed(totalBytes.get());
+    return builder.build();
+  }
+
+  /**
+   * Calls listStatus on the given path and populates the fileStatus queue
+   * with subdirectories. Called by new tasks to process the complete subtree
+   * under a given path.
+   * @param path path to a file or directory
+   * @param tracingContext tracing context for the listStatus server calls
+   * @throws IOException listStatus error
+   * @throws InterruptedException error while inserting into queue
+   */
+  private void processDirectoryTree(Path path, TracingContext tracingContext)
+      throws IOException, InterruptedException {
+    FileStatus[] fileStatuses = abfsStore.listStatus(path, tracingContext);
+
+    for (FileStatus fileStatus : fileStatuses) {

Review comment:
   Trying to confirm the advantage of processing page-wise listStatus
results; would like to know your opinion. I analyzed the time taken by a
direct listStatus call vs. using a ListIterator (queueing subdirectories
while iterating), but the results are ambiguous.
   
   The tests involved creating a directory tree and calling
getContentSummary on the top folder, since the primary use of this API is
likely to be on the root of an account.
   
   Expt 1: Directory tree with 12 levels (tree height = 12), where each
level comprises one directory and 1-2 files.
   Expt 2: Same 12-level structure as Expt 1, with a branch (of 2
subdirectory levels) around the mid-level, i.e., two subdirectories at
level 5, each having a subdirectory. All directories in the tree have ~15
files.
   Expt 3: Same as Expt 2, but with each directory having more than 5000
files (so that listStatus results are fetched in multiple pages).
   
   The analysis was done for both lexicographical positions of the
directory with respect to the files at the same level, since that
determines whether the directory is fetched first. The time taken was
measured from the first ListStatus REST call to the DeleteFileSystem call
(after the last ListStatus), which eliminates differences in
file/directory creation time.
   ```
   Expt number  Dir after files         Dir before files
   1            LS (few ms)             LS
   2            LS (0.5s)               Itr (8.7s)
   3            LS (3s)                 Itr (4.5s)
   ```
   
   LS (t) -> the normal direct ListStatus call was faster by t
   Itr (t) -> the ListIterator approach was faster by t
   
   Using the iterator seems beneficial in some scenarios; should we go
ahead with it?
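   
   For reference, a minimal sketch (illustrative only, not the actual PR
code) of what the ListIterator variant could look like, assuming
FileSystem#listStatusIterator is used so that listing pages are consumed
lazily; the class and method names below are made up:
   ```
   import java.io.IOException;
   import java.util.concurrent.atomic.AtomicLong;

   import org.apache.hadoop.fs.FileStatus;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.fs.RemoteIterator;

   /** Illustrative sketch of iterator-based content summary aggregation. */
   class IteratorContentSummarySketch {
     private final AtomicLong fileCount = new AtomicLong();
     private final AtomicLong directoryCount = new AtomicLong();
     private final AtomicLong totalBytes = new AtomicLong();

     /**
      * Consumes listing results page by page: the RemoteIterator fetches
      * further pages lazily, so a subdirectory can be handed off (here it is
      * processed recursively; in the real code it would be queued for a
      * worker task) before the full listing of its parent has arrived.
      */
     void process(FileSystem fs, Path path) throws IOException {
       RemoteIterator<FileStatus> it = fs.listStatusIterator(path);
       while (it.hasNext()) {
         FileStatus status = it.next();
         if (status.isDirectory()) {
           directoryCount.incrementAndGet();
           process(fs, status.getPath());
         } else {
           fileCount.incrementAndGet();
           totalBytes.addAndGet(status.getLen());
         }
       }
     }
   }
   ```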






Issue Time Tracking
-------------------

    Worklog Id:     (was: 648988)
    Time Spent: 2h 10m  (was: 2h)

> ABFS: Implementation for getContentSummary
> ------------------------------------------
>
>                 Key: HADOOP-17428
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17428
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Sumangala Patki
>            Assignee: Sumangala Patki
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Adds implementation for HDFS method getContentSummary, which takes in a Path 
> argument and returns details such as file/directory count and space utilized 
> under that path.
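
A hedged usage sketch of the API described above, as a client might invoke it
once this implementation is available (the account, container, and path names
below are placeholders, not taken from the JIRA):
```
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GetContentSummaryExample {
  public static void main(String[] args) throws Exception {
    // Placeholder ABFS URI; substitute a real account and container.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(
        new URI("abfs://container@account.dfs.core.windows.net/"), conf);
    // Summarize everything under /data: directory/file counts and bytes used.
    ContentSummary summary = fs.getContentSummary(new Path("/data"));
    System.out.println("directories    = " + summary.getDirectoryCount());
    System.out.println("files          = " + summary.getFileCount());
    System.out.println("bytes          = " + summary.getLength());
    System.out.println("space consumed = " + summary.getSpaceConsumed());
  }
}
```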


