[
https://issues.apache.org/jira/browse/HADOOP-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886570#comment-16886570
]
Hudson commented on HADOOP-15729:
---------------------------------
FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16932 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/16932/])
HADOOP-15729. [s3a] Allow core threads to time out. (#1075) (github: rev
5672efa5c7184970c8f9e430ff8c36121f3a836d)
* (edit)
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AConcurrentOps.java
* (edit)
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> [s3a] stop treating fs.s3a.max.threads as the long-term minimum
> ---------------------------------------------------------------
>
> Key: HADOOP-15729
> URL: https://issues.apache.org/jira/browse/HADOOP-15729
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Sean Mackrory
> Assignee: Sean Mackrory
> Priority: Major
> Attachments: HADOOP-15729.001.patch, HADOOP-15729.002.patch
>
>
> A while ago the s3a connector started experiencing deadlocks because the AWS
> SDK requires an unbounded threadpool. It places monitoring tasks on the work
> queue before the tasks they wait on, so it is possible (this has even
> happened with larger-than-default threadpools) for the executor to become
> permanently saturated and deadlock.
> So we started giving an unbounded threadpool executor to the SDK, and using a
> bounded, blocking threadpool service for everything else S3A needs (although
> currently that is only the S3ABlockOutputStream). fs.s3a.max.threads then
> only limits this threadpool; however, we also specified fs.s3a.max.threads as
> the number of core threads in the unbounded threadpool, which in hindsight is
> pretty terrible.
> Currently those core threads never time out, so the setting actually acts as
> a sort of minimum. Once that many tasks have been submitted, the pool holds
> at that size: it can burst above it, but it will never shrink below it. If
> fs.s3a.max.threads is set reasonably high and someone uses a bunch of S3
> buckets, they could easily have thousands of permanently idle threads.
> We should either stop using fs.s3a.max.threads for the core pool size and
> introduce a new configuration, or simply allow core threads to time out. I'm
> reading the OpenJDK source now to see what subtle differences there are
> between core threads and other threads when core threads can time out.
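For context, the behaviour the description discusses is controlled by ThreadPoolExecutor.allowCoreThreadTimeOut in java.util.concurrent. A minimal sketch of the before/after (the class and method names here are illustrative, not the actual S3AFileSystem code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: an executor with an unbounded work queue whose
// core size plays the role of fs.s3a.max.threads, and the one-line fix
// of letting core threads time out.
public class CoreThreadTimeoutSketch {

    static ThreadPoolExecutor unboundedPool(int coreThreads, long keepAliveSecs) {
        // With an unbounded queue the pool never grows past coreThreads,
        // and by default core threads are permanent once created -- this
        // is the "sort of minimum" described above.
        return new ThreadPoolExecutor(
                coreThreads, Integer.MAX_VALUE,
                keepAliveSecs, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>());
    }

    static ThreadPoolExecutor poolWithCoreTimeout(int coreThreads, long keepAliveSecs) {
        ThreadPoolExecutor pool = unboundedPool(coreThreads, keepAliveSecs);
        // The proposed fix: idle core threads terminate after
        // keepAliveSecs, so the pool can drain back toward zero threads
        // instead of pinning coreThreads idle threads per bucket.
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor before = unboundedPool(256, 60);
        System.out.println(before.allowsCoreThreadTimeOut()); // false: core threads never die

        ThreadPoolExecutor after = poolWithCoreTimeout(256, 60);
        System.out.println(after.allowsCoreThreadTimeOut()); // true: pool can shrink to zero

        before.shutdown();
        after.shutdown();
    }
}
```

Note that allowCoreThreadTimeOut(true) throws IllegalArgumentException when keepAliveTime is zero, so the keep-alive must be positive for this fix to apply.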
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]