[
https://issues.apache.org/jira/browse/HADOOP-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790287#comment-17790287
]
ASF GitHub Bot commented on HADOOP-18915:
-----------------------------------------
steveloughran commented on PR #6180:
URL: https://github.com/apache/hadoop/pull/6180#issuecomment-1828731773
tested: s3 london with `-Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch`
It took a while to work out why my connection-failure test passed standalone
but failed in bulk runs: even after disabling fs instance reuse, the failure
still occurred.
It was actually prefetching, which does return connections to the pool!
Given how often "running out of connections by leaking open files" comes up
as an issue,
stabilizing prefetch and switching to it by default will be good.
But: we may need even more http connections!
@ahmarsuhail can you look at this?
The core change is straightforward:
* let users configure the v2 SDK client timeouts and options for the http and
async clients.
* allow time-unit suffixes (e.g. `24h`, `500ms`) in those values, so there is
no ambiguity about the unit.
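For illustration, such a configuration might look like the fragment below. The property names here are assumptions modelled on the existing fs.s3a.connection.* options, not values confirmed by this comment; the point is the suffixed durations.

```xml
<!-- Hypothetical core-site.xml fragment: property names are illustrative,
     following the fs.s3a.connection.* naming convention. A unit suffix on
     the value makes the intended duration unambiguous. -->
<property>
  <name>fs.s3a.connection.timeout</name>
  <value>200s</value>
</property>
<property>
  <name>fs.s3a.connection.acquisition.timeout</name>
  <value>60s</value>
</property>
```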
> Extend S3A http client connection timeouts
> ------------------------------------------
>
> Key: HADOOP-18915
> URL: https://issues.apache.org/jira/browse/HADOOP-18915
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Ahmar Suhail
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> * Add the ability to configure *all* timeouts, especially acquisition time
> * recognise ApiCallTimeout and map to a retryable exception
> * use getDuration so suffixes can be used, removing all ambiguity about the
> time unit
> * use units in core-default.xml so warnings aren't printed
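The suffix handling described above can be sketched as follows. This is a minimal, self-contained illustration of suffix-aware duration parsing in the spirit of Hadoop's getTimeDuration; the class and method names are hypothetical and this is not the actual Hadoop implementation.

```java
import java.util.concurrent.TimeUnit;

public class DurationParse {

    // Parse values like "30s", "500ms", "2m"; a bare number falls back to
    // the caller-supplied default unit, which is why suffixed values in
    // configuration files remove the ambiguity.
    static long toMillis(String value, TimeUnit defaultUnit) {
        String v = value.trim().toLowerCase();
        TimeUnit unit = defaultUnit;
        String digits = v;
        if (v.endsWith("ms")) {            // check "ms" before the bare "s"
            unit = TimeUnit.MILLISECONDS;
            digits = v.substring(0, v.length() - 2);
        } else if (v.endsWith("s")) {
            unit = TimeUnit.SECONDS;
            digits = v.substring(0, v.length() - 1);
        } else if (v.endsWith("m")) {
            unit = TimeUnit.MINUTES;
            digits = v.substring(0, v.length() - 1);
        } else if (v.endsWith("h")) {
            unit = TimeUnit.HOURS;
            digits = v.substring(0, v.length() - 1);
        }
        return unit.toMillis(Long.parseLong(digits.trim()));
    }

    public static void main(String[] args) {
        System.out.println(toMillis("30s", TimeUnit.MILLISECONDS));   // 30000
        System.out.println(toMillis("500ms", TimeUnit.MILLISECONDS)); // 500
        // bare number: interpreted in the default unit
        System.out.println(toMillis("60000", TimeUnit.MILLISECONDS)); // 60000
    }
}
```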
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]