[ https://issues.apache.org/jira/browse/HADOOP-13704?focusedWorklogId=728937&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-728937 ]
ASF GitHub Bot logged work on HADOOP-13704:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 17/Feb/22 13:49
Start Date: 17/Feb/22 13:49
Worklog Time Spent: 10m
Work Description: ahmarsuhail commented on a change in pull request #3978:
URL: https://github.com/apache/hadoop/pull/3978#discussion_r809067257
##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/GetContentSummaryOperation.java
##########
@@ -133,34 +133,63 @@ public ContentSummary execute() throws IOException {
    * @throws IOException failure
    */
   public ContentSummary getDirSummary(Path dir) throws IOException {
+
     long totalLength = 0;
     long fileCount = 0;
     long dirCount = 1;
-    final RemoteIterator<S3AFileStatus> it
-        = callbacks.listStatusIterator(dir);
+
+    RemoteIterator<S3ALocatedFileStatus> it = callbacks.listFilesIterator(dir,
+        true);
+
+    Set<Path> dirSet = new HashSet<>();
+    Set<Path> pathsTraversed = new HashSet<>();
     while (it.hasNext()) {
-      final S3AFileStatus s = it.next();
-      if (s.isDirectory()) {
-        try {
-          ContentSummary c = getDirSummary(s.getPath());
-          totalLength += c.getLength();
-          fileCount += c.getFileCount();
-          dirCount += c.getDirectoryCount();
-        } catch (FileNotFoundException ignored) {
-          // path was deleted during the scan; exclude from
-          // summary.
-        }
-      } else {
-        totalLength += s.getLen();
+      S3ALocatedFileStatus fileStatus = it.next();
+      Path filePath = fileStatus.getPath();
+
+      if (fileStatus.isDirectory() && !filePath.equals(dir)) {
+        dirSet.add(filePath);
+        buildDirectorySet(dirSet, pathsTraversed, dir, filePath.getParent());
+      } else if (!fileStatus.isDirectory()) {
         fileCount += 1;
+        totalLength += fileStatus.getLen();
+        buildDirectorySet(dirSet, pathsTraversed, dir, filePath.getParent());
       }
+
     }
+
     // Add the list's IOStatistics
     iostatistics.aggregate(retrieveIOStatistics(it));
+
     return new ContentSummary.Builder().length(totalLength).
-        fileCount(fileCount).directoryCount(dirCount).
-        spaceConsumed(totalLength).build();
+        fileCount(fileCount).directoryCount(dirCount + dirSet.size()).
+        spaceConsumed(totalLength).build();
+  }
+
+  /**
+   * This method builds the set of all directories found under the base path.
+   * We need to do this because if the directory structure /a/b/c was created
+   * with a single mkdirs() call, it is stored as one object in S3 and the
+   * listFiles iterator will only return a single entry, /a/b/c.
+   *
+   * We keep track of paths traversed so far to prevent duplication of work.
+   * For example, if we had /a/b/c/file-1.txt and /a/b/c/file-2.txt, we will
+   * only recurse over the complete path once and won't have to do anything
+   * for file-2.txt.
+   *
+   * @param dirSet set of all directories found under the path
+   * @param pathsTraversed set of all paths traversed so far
+   * @param basePath path of the directory being scanned
+   * @param parentPath parent path of the current file/directory in the iterator
+   */
+  private void buildDirectorySet(Set<Path> dirSet, Set<Path> pathsTraversed,
+      Path basePath, Path parentPath) {
+
+    if (parentPath == null || pathsTraversed.contains(parentPath) ||
+        parentPath.equals(basePath)) {
Review comment:
In most cases (e.g. nested directories a/b/c.txt and a/b/d.txt) the contains
condition will be hit a lot more often than the parentPath null check, so I
think this order is probably the fastest.
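
To make the javadoc's description concrete, here is a minimal standalone
sketch of that parent-directory reconstruction. It is a sketch under
assumptions, not the PR's code: the body of buildDirectorySet is inferred
from the visible signature and condition above, and it uses
java.nio.file.Path rather than Hadoop's Path so it runs on its own.

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

public class DirectorySetSketch {

  // Walk from parentPath up towards basePath, recording every intermediate
  // directory. pathsTraversed short-circuits branches that an earlier entry
  // in the listing already walked.
  static void buildDirectorySet(Set<Path> dirSet, Set<Path> pathsTraversed,
      Path basePath, Path parentPath) {
    if (parentPath == null || pathsTraversed.contains(parentPath)
        || parentPath.equals(basePath)) {
      return;
    }
    dirSet.add(parentPath);
    pathsTraversed.add(parentPath);
    buildDirectorySet(dirSet, pathsTraversed, basePath, parentPath.getParent());
  }

  public static void main(String[] args) {
    Path base = Paths.get("/a");
    Set<Path> dirSet = new HashSet<>();
    Set<Path> pathsTraversed = new HashSet<>();
    // Simulate a flat recursive listing that returned only two file entries.
    for (Path file : new Path[] {
        Paths.get("/a/b/c/file-1.txt"), Paths.get("/a/b/c/file-2.txt") }) {
      buildDirectorySet(dirSet, pathsTraversed, base, file.getParent());
    }
    // Prints the two implied directories, /a/b and /a/b/c (set order varies).
    System.out.println(dirSet);
  }
}

Note how this matches the comment above: on a sibling file such as
/a/b/c/file-2.txt, the pathsTraversed.contains() check is the one that
terminates the walk, which is why it pays to evaluate it early.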
Issue Time Tracking
-------------------
Worklog Id: (was: 728937)
Time Spent: 1h (was: 50m)
> S3A getContentSummary() to move to listFiles(recursive) to count children;
> instrument use
> -----------------------------------------------------------------------------------------
>
> Key: HADOOP-13704
> URL: https://issues.apache.org/jira/browse/HADOOP-13704
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.8.0
> Reporter: Steve Loughran
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Hive and a bit of Spark use {{getContentSummary()}} to get some summary stats
> of a filesystem. This is very expensive on S3A (and any other object store),
> especially as the base implementation does the recursive tree walk.
> Because of HADOOP-13208, we have a full enumeration of files under a path
> without directory costs... S3A can/should switch to this to speed up those
> places where the operation is called (see the sketch below).
> Also
> * API call needs FS spec and contract tests
> * S3A could instrument invocation, so as to enable real-world popularity to
> be measured
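
The shape of that change, at the public FileSystem API level, is roughly the
loop below. This is a hedged sketch rather than the PR's implementation:
FileSystem.listFiles(path, true) is the real Hadoop API and yields only
files, so a real getContentSummary() also needs the parent-directory
reconstruction shown earlier to count directories; the class and method
names here are illustrative.

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public final class FlatSummarySketch {

  // Sum file count and total length with one recursive listing instead of a
  // per-directory tree walk; on S3A this avoids one LIST request per
  // directory under the path.
  static long[] countFilesAndBytes(FileSystem fs, Path path) throws IOException {
    long files = 0;
    long bytes = 0;
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(path, true);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      if (status.isFile()) {
        files++;
        bytes += status.getLen();
      }
    }
    return new long[] {files, bytes};
  }
}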