[
https://issues.apache.org/jira/browse/HADOOP-12346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14720291#comment-14720291
]
Sean Mackrory commented on HADOOP-12346:
----------------------------------------
I did manual testing of this by running a bunch of distcp and teragen /
terasort jobs against S3 with various size datasets at different times of day.
I haven't had any failures with the new defaults applied - it's noticeably more
reliable. I haven't added tests because no functionality / code is changing -
just default configuration values. An automated test seems impractical since
this is intended to address occasional flakiness.
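For anyone who wants to try this ahead of a release, the same effect can be had by overriding the two properties in core-site.xml (or per-job with -D on the command line). The values below are illustrative only and are not necessarily the defaults proposed in the attached patch:

  <property>
    <name>fs.s3a.connection.timeout</name>
    <value>200000</value>
    <description>Socket connection timeout in milliseconds (illustrative value only).</description>
  </property>
  <property>
    <name>fs.s3a.attempts.maximum</name>
    <value>20</value>
    <description>How many times to retry a request on transient errors (illustrative value only).</description>
  </property>

Passing the same settings per-job (e.g. hadoop distcp -Dfs.s3a.attempts.maximum=20 ...) also works, but raising the shipped defaults saves every user from rediscovering the same tuning.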
> Increase some default timeouts / retries for S3a connector
> ----------------------------------------------------------
>
> Key: HADOOP-12346
> URL: https://issues.apache.org/jira/browse/HADOOP-12346
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Reporter: Sean Mackrory
> Attachments:
> 0001-HADOOP-12346.-Increase-some-default-timeouts-retries.patch
>
>
> I've been seeing some flakiness in jobs running against S3a, both firsthand
> and with other accounts, for which increasing fs.s3a.connection.timeout and
> fs.s3a.attempts.maximum has been a reliable solution. I propose we increase
> the defaults.