[
https://issues.apache.org/jira/browse/HADOOP-14237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943071#comment-15943071
]
Steve Loughran commented on HADOOP-14237:
-----------------------------------------
I see: it's extracting the credentials, then saving them to the cluster FS, so
that clients don't need to hit the IAM infrastructure so often; if IAM
overloads, clients read the credentials back from HDFS.
If this is to go in, then as well as needing a per-user temp dir, all the
various tests, and maybe even expiry of credentials, this MUST be its own
credential provider. It needs to be optional, and now that we've added the
ability to declare your own providers, that's how people will use it.
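Declaring such a provider would go through the existing S3A provider setting;
a sketch (the provider class name here is hypothetical, only the
`fs.s3a.aws.credentials.provider` key is real):

```xml
<!-- fs.s3a.aws.credentials.provider is the existing S3A option for listing
     credential providers; the class named below is hypothetical. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.CachedInstanceProfileCredentialsProvider</value>
</property>
```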
Test plan:
* Split save/load from the rest of the provider and test each independently,
  including handling of read/write failure conditions.
* Verify that credentials are saved on successful auth.
* Maybe simulate IAM overload using mocking.
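The split in the first bullet could be sketched roughly like this, with local
files standing in for the cluster FS and a stub standing in for the IAM
metadata call; every class and method name here is hypothetical, not actual
S3A code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Sketch of splitting credential save/load out of the provider so each
 * piece is testable on its own. Local files stand in for the cluster FS;
 * fetchFromIam() is a stub for the real instance-profile metadata call.
 */
public class CredentialCache {

    /** Save freshly fetched credentials to the shared store. */
    public static void save(Path store, String credentials) throws IOException {
        Files.writeString(store, credentials);
    }

    /** Load previously saved credentials, or null if nothing was saved. */
    public static String load(Path store) throws IOException {
        return Files.exists(store) ? Files.readString(store) : null;
    }

    /** Fetch from IAM when it is up; fall back to the cached copy when not. */
    public static String resolve(Path store, boolean iamOverloaded) throws IOException {
        if (!iamOverloaded) {
            String fresh = fetchFromIam();
            save(store, fresh);       // cache on successful auth
            return fresh;
        }
        String cached = load(store);  // IAM returning 5xx/4xx: read back from store
        if (cached == null) {
            throw new IOException("IAM overloaded and no cached credentials");
        }
        return cached;
    }

    // Stub for the instance-profile metadata fetch.
    private static String fetchFromIam() {
        return "AKIDEXAMPLE:secretExample";
    }
}
```

With save/load isolated like this, the overload case is just a flag in the
test rather than a mocked HTTP endpoint.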
> S3A Support Shared Instance Profile Credentials Across All Hadoop Nodes
> -----------------------------------------------------------------------
>
> Key: HADOOP-14237
> URL: https://issues.apache.org/jira/browse/HADOOP-14237
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2, 2.8.1
> Environment: EC2, AWS
> Reporter: Kazuyuki Tanimura
>
> When I run a large Hadoop cluster on EC2 instances with an IAM Role, it fails
> to get the instance profile credentials, and eventually all jobs on the
> cluster fail. Since a large number of S3A clients (all mappers and reducers)
> request the credentials, the AWS credential endpoint starts returning 5xx and
> 4xx error codes.
> SharedInstanceProfileCredentialsProvider.java goes some way towards solving
> this, but it still does not share the credentials with other EC2 nodes / JVM
> processes.
> This issue prevents users from creating Hadoop clusters on EC2.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)