[ https://issues.apache.org/jira/browse/HADOOP-12346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14721242#comment-14721242 ]

Hudson commented on HADOOP-12346:
---------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2269 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2269/])
HADOOP-12346. Increase some default timeouts / retries for S3a connector. (Sean 
Mackrory via Lei (Eddy) Xu) (lei: rev 6ab2d19f5c010ab1d318214916ba95daa91a4dbf)
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Increase some default timeouts / retries for S3a connector
> ----------------------------------------------------------
>
>                 Key: HADOOP-12346
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12346
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.7.1
>            Reporter: Sean Mackrory
>            Assignee: Sean Mackrory
>             Fix For: 2.8.0, 3.0.0
>
>         Attachments: 
> 0001-HADOOP-12346.-Increase-some-default-timeouts-retries.patch
>
>
> I've been seeing some flakiness in jobs running against S3a, both first hand 
> and in reports from other accounts, for which increasing fs.s3a.connection.timeout 
> and fs.s3a.attempts.maximum has been a reliable solution. I propose we increase 
> the defaults.
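
For reference, a minimal core-site.xml sketch overriding the two properties named
in the description above. The values shown are illustrative examples, not the
exact defaults introduced by this patch (those are recorded in core-default.xml
and Constants.java in the commit listed above):

    <!-- Illustrative S3a client overrides; example values only,
         not necessarily the defaults chosen by HADOOP-12346. -->
    <property>
      <name>fs.s3a.connection.timeout</name>
      <!-- Socket connection timeout, in milliseconds. -->
      <value>200000</value>
    </property>
    <property>
      <name>fs.s3a.attempts.maximum</name>
      <!-- How many times to retry a request on transient errors. -->
      <value>20</value>
    </property>
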



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
