Justin Uang created HADOOP-16132:
------------------------------------
Summary: Support multipart download in S3AFileSystem
Key: HADOOP-16132
URL: https://issues.apache.org/jira/browse/HADOOP-16132
Project: Hadoop Common
Issue Type: Improvement
Reporter: Justin Uang
I noticed that I get 150MB/s when I use the AWS CLI:
{code:bash}
aws s3 cp s3://<bucket>/<key> - > /dev/null{code}
vs. only 50MB/s when I use the S3AFileSystem:
{code:bash}
hadoop fs -cat s3://<bucket>/<key> > /dev/null{code}
Looking into the AWS CLI code, the
[download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
logic is quite clever: it fetches the next several parts in parallel using
range requests, then buffers them in memory to reorder them and expose a
single contiguous stream. I translated the logic to Java and modified the
S3AFileSystem to do the same, and I am able to achieve 150MB/s download
speeds as well. It is mostly done, but I have some things to clean up first.
It would be great to get some other eyes on it to see what we need to do to
get it merged.
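To make the idea concrete, here is a minimal, self-contained sketch of the technique described above: split the object into fixed-size parts, fetch parts in parallel, and reassemble them in order into one contiguous stream. This is not the actual patch; the class name, the {{fetchRange}} helper (which slices a local byte array in place of a real S3 ranged GET), and the choice to submit all parts at once are illustrative assumptions. A real implementation would bound the number of in-flight parts to limit memory, as s3transfer does.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MultipartDownloadSketch {

    // Stand-in for an S3 ranged GET (Range: bytes=start..end-1).
    // Here we just slice a local array so the sketch is runnable offline.
    static byte[] fetchRange(byte[] object, long start, long end) {
        return Arrays.copyOfRange(object, (int) start,
                (int) Math.min(end, object.length));
    }

    static InputStream download(byte[] object, int partSize, int parallelism)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        try {
            int numParts = (object.length + partSize - 1) / partSize;
            List<Future<byte[]>> parts = new ArrayList<>();
            // Submit one ranged fetch per part. The Future list preserves
            // part order, so the reassembled stream is contiguous even when
            // fetches complete out of order.
            for (int i = 0; i < numParts; i++) {
                final long start = (long) i * partSize;
                final long end = start + partSize;
                parts.add(pool.submit(() -> fetchRange(object, start, end)));
            }
            // Block on each part in order and stitch the buffers together.
            List<InputStream> streams = new ArrayList<>();
            for (Future<byte[]> part : parts) {
                streams.add(new ByteArrayInputStream(part.get()));
            }
            return new SequenceInputStream(Collections.enumeration(streams));
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] object = new byte[1 << 20]; // 1 MiB "object"
        for (int i = 0; i < object.length; i++) {
            object[i] = (byte) i;
        }
        byte[] result = download(object, 128 * 1024, 4).readAllBytes();
        if (!Arrays.equals(object, result)) {
            throw new AssertionError("reassembled stream differs from object");
        }
        System.out.println("OK: " + result.length + " bytes, in order");
    }
}
{code}

The ordering guarantee comes for free from consuming the futures in submission order; the parallelism only affects how many range requests are outstanding at once, which is where the throughput win over a single sequential GET comes from.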
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)