[ https://issues.apache.org/jira/browse/HADOOP-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-11183:
------------------------------------
    Attachment: HADOOP-11183-008.patch

The patch didn't apply because the (spurious?) changes to the FS contract base test 
were incompatible; the 008 patch fixes that by removing that file from the diff, 
and adds some formatting and typo fixes to the documentation. I'll do a test run 
when I next have bandwidth; from the code review itself it looks good.

One issue is that I didn't fully understand the bit in the docs regarding "the 
maximum size of a memory buffer" and how the properties map to it:

bq. The maximum size of a memory buffer is `fs.s3a.multipart.threshold` / 
`fs.s3a.multipart.size` for an upload / partupload respectively. 

What does that mean? 
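If the intent is that these two properties bound how much data the stream holds in memory before an upload or a part upload is triggered, a short sketch of how a client would set them might make the docs clearer. The values below are illustrative assumptions, not the patch's defaults:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class S3AMultipartConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Threshold above which an upload switches to multipart (bytes).
    // 128 MB here is purely an example value.
    conf.setLong("fs.s3a.multipart.threshold", 128L * 1024 * 1024);

    // Size of each part in a multipart upload (bytes); 64 MB is an example.
    conf.setLong("fs.s3a.multipart.size", 64L * 1024 * 1024);

    System.out.println("threshold = "
        + conf.getLong("fs.s3a.multipart.threshold", -1));
    System.out.println("part size = "
        + conf.getLong("fs.s3a.multipart.size", -1));
  }
}
{code}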

> Memory-based S3AOutputstream
> ----------------------------
>
>                 Key: HADOOP-11183
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11183
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.6.0
>            Reporter: Thomas Demoor
>            Assignee: Thomas Demoor
>         Attachments: HADOOP-11183-004.patch, HADOOP-11183-005.patch, 
> HADOOP-11183-006.patch, HADOOP-11183-007.patch, HADOOP-11183-008.patch, 
> HADOOP-11183.001.patch, HADOOP-11183.002.patch, HADOOP-11183.003.patch, 
> design-comments.pdf
>
>
> Currently s3a buffers files on disk(s) before uploading. This JIRA 
> investigates adding a memory-based upload implementation.
> The motivation is evidently performance: this would be beneficial for users 
> with high network bandwidth to S3 (EC2?) or users that run Hadoop directly on 
> an S3-compatible object store (FYI: my contributions are made on behalf of 
> Amplidata). 



