Aaron Fabbri commented on HADOOP-15076:

It is very easy to run out of memory when buffering to disk; the option
+`fs.s3a.fast.upload.active.blocks` exists to tune how many active blocks
+a single output stream writing to S3 may have queued at a time.

/when buffering to disk/when buffering to memory/  right?
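
For context, the option being quoted is set like any other S3A property. A minimal `core-site.xml` sketch (the value `4` is only an illustration, and the memory pressure discussed above applies to the in-memory `array`/`bytebuffer` buffer modes rather than `disk`):

```xml
<!-- core-site.xml: cap how many blocks a single S3A output stream
     may have queued for upload at once -->
<property>
  <name>fs.s3a.fast.upload.buffer</name>
  <!-- one of: disk, array, bytebuffer -->
  <value>bytebuffer</value>
</property>
<property>
  <name>fs.s3a.fast.upload.active.blocks</name>
  <value>4</value>
</property>
```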

Other than that minor typo, +1 LGTM. (I'll leave running `mvn site` and
checking links and formatting to you--I need to run.)

> Enhance s3a troubleshooting docs, add perf section
> --------------------------------------------------
>                 Key: HADOOP-15076
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15076
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: documentation, fs/s3
>    Affects Versions: 2.8.2
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Blocker
>         Attachments: HADOOP-15076-001.patch, HADOOP-15076-002.patch, 
> HADOOP-15076-003.patch, HADOOP-15076-004.patch, HADOOP-15076-005.patch, 
> HADOOP-15076-006.patch
> A recurrent theme in s3a-related JIRAs, support calls etc is "tried upgrading 
> the AWS SDK JAR and then I got the error ...". We know here "don't do that", 
> but it's not something immediately obvious to lots of downstream users who 
> want to be able to drop in the new JAR to fix things/add new features
> We need to spell this out quite clearly: "you cannot safely expect to do 
> this. If you want to upgrade the SDK, you will need to rebuild the whole of 
> hadoop-aws with the maven POM updated to the latest version, ideally 
> rerunning all the tests to make sure something hasn't broken."
> Maybe near the top of the index.md file, along with "never share your AWS 
> credentials with anyone"
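
The rebuild workflow described in the issue might be sketched roughly as follows. This is an assumption-laden outline, not the documented procedure: the property name `aws-java-sdk.version` and the `hadoop-tools/hadoop-aws` module path are taken from the layout of recent Hadoop source trees and should be verified against the branch being built.

```shell
# 1. Edit hadoop-project/pom.xml and bump the SDK version property,
#    e.g. (property name assumed; check your branch's POM):
#        <aws-java-sdk.version>1.11.x</aws-java-sdk.version>
#
# 2. Rebuild hadoop-aws together with the modules it depends on:
mvn clean install -DskipTests -pl hadoop-tools/hadoop-aws -am

# 3. Rerun the S3A integration tests against a real bucket
#    (requires test credentials configured per the hadoop-aws
#    testing docs):
mvn verify -pl hadoop-tools/hadoop-aws
```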
