HADOOP-14019. Fix some typos in the s3a docs. Contributed by Steve Loughran
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bdad8b7b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bdad8b7b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bdad8b7b

Branch: refs/heads/HDFS-10285
Commit: bdad8b7b97d7f48119f016d68f32982d680c8796
Parents: f432999
Author: Mingliang Liu <lium...@apache.org>
Authored: Thu Feb 16 16:41:31 2017 -0800
Committer: Mingliang Liu <lium...@apache.org>
Committed: Thu Feb 16 16:41:31 2017 -0800

----------------------------------------------------------------------
 .../src/site/markdown/tools/hadoop-aws/index.md | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bdad8b7b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
index 2471a52..0ff314c 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
@@ -970,7 +970,7 @@ This is because the property values are kept in these files, and cannot be
 dynamically patched.
 
 Instead, callers need to create different configuration files for each
-bucket, setting the base secrets (`fs.s3a.bucket.nightly.access.key`, etc),
+bucket, setting the base secrets (`fs.s3a.access.key`, etc),
 then declare the path to the appropriate credential file in
 a bucket-specific version of the property
 `fs.s3a.security.credential.provider.path`.
@@ -1044,7 +1044,7 @@ declaration. For example:
 
 ### <a name="s3a_fast_upload"></a>Stabilizing: S3A Fast Upload
 
-**New in Hadoop 2.7; significantly enhanced in Hadoop 2.9**
+**New in Hadoop 2.7; significantly enhanced in Hadoop 2.8**
 
 
 Because of the nature of the S3 object store, data written to an S3A `OutputStream`
@@ -1204,8 +1204,18 @@ consumed, and so eliminates heap size as the limiting factor in queued uploads
   <value>disk</value>
 </property>
 
+<property>
+  <name>fs.s3a.buffer.dir</name>
+  <value></value>
+  <description>Comma separated list of temporary directories used for
+  storing blocks of data prior to their being uploaded to S3.
+  When unset, the Hadoop temporary directory hadoop.tmp.dir is used</description>
+</property>
+
 ```
 
+This is the default buffer mechanism. The amount of data which can
+be buffered is limited by the amount of available disk space.
 
 #### <a name="s3a_fast_upload_bytebuffer"></a>Fast Upload with ByteBuffers: `fs.s3a.fast.upload.buffer=bytebuffer`
@@ -1219,7 +1229,7 @@ The amount of data which can be buffered is limited by the Java runtime,
 the operating system, and, for YARN applications, the amount of memory
 requested for each container.
 
-The slower the write bandwidth to S3, the greater the risk of running out
+The slower the upload bandwidth to S3, the greater the risk of running out
 of memory, and so the more care is needed in
 [tuning the upload settings](#s3a_fast_upload_thread_tuning).
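
For context, the per-bucket pattern the first hunk documents could be sketched as a `core-site.xml` fragment along these lines; the bucket name `nightly`, the filesystem host, and the `jceks` path are illustrative examples, not taken from the patch:

```xml
<!-- Sketch only: bucket name "nightly" and the credential file path
     are hypothetical. Each bucket gets its own credential file; the
     secrets inside it are stored under the base property names
     (fs.s3a.access.key, fs.s3a.secret.key), which is why the patch
     corrects the doc text to use the base names rather than
     fs.s3a.bucket.nightly.access.key. -->
<property>
  <name>fs.s3a.bucket.nightly.security.credential.provider.path</name>
  <value>jceks://hdfs@namenode/secrets/nightly.jceks</value>
</property>
```

Only the provider path is declared per bucket; the credential file it points at supplies the actual secrets.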