[ 
https://issues.apache.org/jira/browse/HADOOP-17531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17289884#comment-17289884
 ] 

Steve Loughran commented on HADOOP-17531:
-----------------------------------------

go for it. 

# The abfs listing speedup actually does >1 page of prefetch; we could consider 
that for S3A too. Maybe. The S3A ListRequestV2 object is recycled between 
requests, and you can't kick off request 2 until async request 1 has come in, 
so you'd still have only one thread fetching. We could just build up a buffer 
of more than one page of results when there's a mismatch between consumer and 
supplier (see the buffering sketch after this list).
# hdfs, webhdfs and now s3a and abfs are the only stores where 
listStatusIterator() does more than wrap listStatus(); for hdfs/webhdfs it's to 
keep the page size down (scale), for the cloud stores it's because listing is 
so slow that iterating can help swallow the cost.
# if listFiles(recursive) could be used for the scan, then we'd really see 
speedups on S3A.

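To make that consumer/supplier buffering concrete, here is a minimal sketch of 
the idea (purely illustrative: BufferedRemoteIterator, its constructor and the 
fixed-capacity queue are invented for this comment, not S3A code). One 
background thread drains the underlying page-fetching iterator into a bounded 
queue, so the consumer can lag by up to the queue capacity without the whole 
listing being held in memory:

{code:java}
import java.io.IOException;
import java.util.NoSuchElementException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.RemoteIterator;

/**
 * Illustrative wrapper: a background thread keeps fetching from the source
 * iterator into a bounded queue while the consumer works through it.
 */
public final class BufferedRemoteIterator implements RemoteIterator<FileStatus> {
  private static final FileStatus EOF = new FileStatus();  // end-of-stream sentinel

  private final BlockingQueue<FileStatus> buffer;
  private volatile IOException failure;
  private FileStatus next;

  public BufferedRemoteIterator(RemoteIterator<FileStatus> source, int capacity) {
    buffer = new ArrayBlockingQueue<>(capacity);
    ExecutorService producer = Executors.newSingleThreadExecutor();
    producer.submit(() -> {
      try {
        while (source.hasNext()) {
          buffer.put(source.next());   // blocks once the consumer lags by 'capacity'
        }
      } catch (IOException e) {
        failure = e;                   // surfaced to the consumer at end of stream
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      } finally {
        try {
          buffer.put(EOF);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }
    });
    producer.shutdown();
  }

  @Override
  public boolean hasNext() throws IOException {
    if (next == null) {
      try {
        next = buffer.take();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException("interrupted while waiting for the next listing entry", e);
      }
    }
    if (next == EOF) {
      if (failure != null) {
        throw failure;
      }
      return false;
    }
    return true;
  }

  @Override
  public FileStatus next() throws IOException {
    if (!hasNext()) {
      throw new NoSuchElementException();
    }
    FileStatus result = next;
    next = null;
    return result;
  }
}
{code}
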
Anyway, yes, distcp speedup where possible is good.

Note also that the s3a (and soon abfs) RemoteIterator objects do/will implement 
IOStatisticsSource, so you can collect stats on all their IO and performance. 
Log their toString() value at debug (see IOStatisticsLogging) and you can get 
summaries. 
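
For example, a minimal sketch of pulling that summary off a listing iterator 
after a scan (assumes a Hadoop 3.3.x+ client; ListingStatsDemo and logging at 
info rather than debug are just for illustration):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsSourceToString;

public class ListingStatsDemo {
  private static final Logger LOG = LoggerFactory.getLogger(ListingStatsDemo.class);

  public static void main(String[] args) throws Exception {
    Path dir = new Path(args[0]);                  // e.g. s3a://bucket/dataset/
    FileSystem fs = dir.getFileSystem(new Configuration());
    RemoteIterator<FileStatus> it = fs.listStatusIterator(dir);
    long count = 0;
    while (it.hasNext()) {
      it.next();
      count++;
    }
    // If the iterator implements IOStatisticsSource this prints its stats;
    // for any other iterator it just comes back empty.
    LOG.info("listed {} entries; statistics: {}", count, ioStatisticsSourceToString(it));
  }
}
{code}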

ps:

# {{hadoop fs -ls}} uses listStatusIterator. 
# PoC of a higher-performance copyFromLocal command for cloud storage; it uses 
listFiles(path, recursive=true), picks off the largest files first (so they 
don't become stragglers), then randomises the rest to reduce shard throttling 
(a sketch of that ordering idea follows below): 
https://github.com/steveloughran/cloudstore/blob/trunk/src/main/java/org/apache/hadoop/fs/tools/cloudup/Cloudup.java
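
A minimal sketch of just that ordering idea (not the Cloudup code itself; 
UploadOrdering and its parameters are invented here): sort descending by size, 
keep the largest few at the front, shuffle everything else:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;

public final class UploadOrdering {

  /**
   * Order files for upload: the 'largestFirst' biggest files lead so they do
   * not become stragglers; the rest are shuffled to spread load across store
   * shards/prefixes and reduce throttling.
   */
  public static List<FileStatus> order(List<FileStatus> files, int largestFirst) {
    List<FileStatus> sorted = new ArrayList<>(files);
    sorted.sort(Comparator.comparingLong(FileStatus::getLen).reversed());
    int head = Math.min(largestFirst, sorted.size());
    List<FileStatus> tail = new ArrayList<>(sorted.subList(head, sorted.size()));
    Collections.shuffle(tail);
    List<FileStatus> ordered = new ArrayList<>(sorted.subList(0, head));
    ordered.addAll(tail);
    return ordered;
  }
}
{code}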

> DistCp: Reduce memory usage on copying huge directories
> -------------------------------------------------------
>
>                 Key: HADOOP-17531
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17531
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Ayush Saxena
>            Priority: Critical
>         Attachments: MoveToStackIterator.patch, gc-NewD-512M-3.8ML.log
>
>
> Presently DistCp uses a producer-consumer setup while building the copy 
> listing; the input queue and output queue are both unbounded, so the 
> accumulated listing grows quite huge.
> Relevant code: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L635
> The traversal is breadth-first (it uses a queue instead of the earlier 
> stack), so if the files are at a lower depth it will open up the entire tree 
> before it starts processing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
