[
https://issues.apache.org/jira/browse/HADOOP-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18042535#comment-18042535
]
ASF GitHub Bot commented on HADOOP-19181:
-----------------------------------------
steveloughran commented on PR #8118:
URL: https://github.com/apache/hadoop/pull/8118#issuecomment-3607215388
Do plan to add a test which creates an instance of the class; it will either
return no credentials or (on EC2) actually work.
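A rough sketch of what that test could look like, assuming the class in
question is org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider and
that it signals the no-credentials case with NoAwsCredentialsException; the
test name and assertions below are illustrative, not necessarily what the PR
will add:
```
// Sketch only: class/exception names assumed from the S3A auth package;
// adjust to whatever the PR actually ships.
import org.assertj.core.api.Assertions;
import org.junit.Test;

import org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider;
import org.apache.hadoop.fs.s3a.auth.NoAwsCredentialsException;

import software.amazon.awssdk.auth.credentials.AwsCredentials;

public class ITestIAMInstanceCredentialsProvider {

  @Test
  public void testInstantiateAndResolve() throws Throwable {
    try (IAMInstanceCredentialsProvider provider =
             new IAMInstanceCredentialsProvider()) {
      try {
        // On EC2/ECS the instance/container metadata service hands back
        // real credentials.
        AwsCredentials credentials = provider.resolveCredentials();
        Assertions.assertThat(credentials.accessKeyId()).isNotEmpty();
      } catch (NoAwsCredentialsException expected) {
        // Everywhere else "no credentials" is the expected outcome;
        // the point is that instantiation and lookup don't blow up.
      }
    }
  }
}
```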
Tests run against S3 Express; one failure for @ahmarsuhail to worry about.
```
[ERROR]
ITestS3AAnalyticsAcceleratorStreamReading.testSequentialStreamsNoDuplicateGets:402
[Counter named action_http_get_request with expected value 1]
Expecting:
<2L>
to be equal to:
<1L>
```
and two failures in the assume-role tests with malformed role policies. Looks
like STS has changed its error text; the fix is to remove the probes for the
specific text (sketch at the end of this comment).
```
ITestAssumeRole.testAssumeRoleFSBadPolicy:251->expectFileSystemCreateFailure:164
Expected to find 'JSON' but got unexpected exception:
org.apache.hadoop.fs.s3a.AWSBadRequestException: Instantiate
org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on /:
software.amazon.awssdk.services.sts.model.MalformedPolicyDocumentException:
Unexpected IOException: Unexpected close marker '}': expected ']' (for ROOT
starting at [Source: java.io.StringReader@16a63c4d; line: 1, column: 0])
at [Source: java.io.StringReader@16a63c4d; line: 1, column: 2] (Service:
Sts, Status Code: 400, Request ID: 47f1da5d-c400-4269-866a-7324cb167a1a) (SDK
Attempt Count: 1):MalformedPolicyDocument: Unexpected IOException: Unexpected
close marker '}': expected ']' (for ROOT starting at [Source:
java.io.StringReader@16a63c4d; line: 1, column: 0])
at [Source: java.io.StringReader@16a63c4d; line: 1, column: 2] (Service:
Sts, Status Code: 400, Request ID: 47f1da5d-c400-4269-866a-7324cb167a1a) (SDK
Attempt Count: 1)
at
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:271)
at
org.apache.hadoop.fs.s3a.S3AUtils.getInstanceFromReflection(S3AUtils.java:705)
at
org.apache.hadoop.fs.s3a.auth.CredentialProviderListFactory.createAWSV2CredentialProvider(CredentialProviderListFactory.java:303)
at
org.apache.hadoop.fs.s3a.auth.CredentialProviderListFactory.buildAWSProviderList(CredentialProviderListFactory.java:249)
at
org.apache.hadoop.fs.s3a.auth.CredentialProviderListFactory.createAWSCredentialProviderList(CredentialProviderListFactory.java:142)
at
org.apache.hadoop.fs.s3a.S3AFileSystem.createClientManager(S3AFileSystem.java:1151)
at
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:729)
at
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3616)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:555)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:373)
at
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.lambda$expectFileSystemCreateFailure$0(ITestAssumeRole.java:166)
at
org.apache.hadoop.fs.s3a.S3ATestUtils.lambda$interceptClosing$0(S3ATestUtils.java:753)
```
And here
```
ITestAssumeRole.testAssumeRoleFSBadPolicy2:262->expectFileSystemCreateFailure:164
Expected to find 'Syntax errors in policy' but got unexpected exception:
org.apache.hadoop.fs.s3a.AWSBadRequestException: Instantiate
org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on /:
software.amazon.awssdk.services.sts.model.MalformedPolicyDocumentException:
Unexpected IOException: Unexpected character (''' (code 39)): was expecting
double-quote to start field name
at [Source: java.io.StringReader@21d2e81f; line: 1, column: 3] (Service:
Sts, Status Code: 400, Request ID: 7b22250d-1238-4609-92f2-280531aacf11) (SDK
Attempt Count: 1):MalformedPolicyDocument: Unexpected IOException: Unexpected
character (''' (code 39)): was expecting double-quote to start field name
at [Source: java.io.StringReader@21d2e81f; line: 1, column: 3] (Service:
Sts, Status Code: 400, Request ID: 7b22250d-1238-4609-92f2-280531aacf11) (SDK
Attempt Count: 1)
```
This is all really good for production use: callers are getting meaningful
errors back from the policy parser (notable that this change coincides with
re:Invent). But our tests fail...
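One possible shape for "remove the probes for specific text", assuming the
helper signature is expectFileSystemCreateFailure(Configuration, Class, String)
and that an empty match string is treated as "any message" (as
LambdaTestUtils.intercept effectively does); the real change in ITestAssumeRole
may differ slightly:
```
// Before: couples the test to STS's exact wording, which just changed.
expectFileSystemCreateFailure(conf,
    AWSBadRequestException.class,
    "JSON");

// After: only assert that the malformed policy is rejected as an
// AWSBadRequestException, and leave the message wording to the service.
expectFileSystemCreateFailure(conf,
    AWSBadRequestException.class,
    "");
```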
> S3A: IAMCredentialsProvider throttling results in AWS auth failures
> -------------------------------------------------------------------
>
> Key: HADOOP-19181
> URL: https://issues.apache.org/jira/browse/HADOOP-19181
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
> Labels: pull-request-available
>
> Tests report throttling errors in IAM being remapped to no-auth and failure.
> Again, Impala tests, but with multiple processes on the same host. This means
> that HADOOP-18945 isn't sufficient: even though it ensures a singleton instance
> within a process,
> * it doesn't help if there are many test buckets (fixable)
> * it doesn't work across processes (not fixable)
> We may be able to
> * use a singleton across all filesystem instances
> * once we know how throttling is reported, handle it through retries +
> error/stats collection (see the sketch below)
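A rough, purely illustrative sketch of the "retries + error/stats collection"
idea above, written against the v2 SDK AwsCredentialsProvider interface; the
wrapper class name and backoff policy are assumptions, not what the patch does:
```
// Illustrative only: wrap whichever provider talks to the instance metadata
// service and retry throttled/transient failures instead of surfacing them
// as authentication errors.
import software.amazon.awssdk.auth.credentials.AwsCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.core.exception.SdkException;

public final class ThrottleTolerantCredentialsProvider
    implements AwsCredentialsProvider {

  private final AwsCredentialsProvider inner;   // e.g. the shared IAM provider
  private final int attempts;
  private final long baseSleepMillis;

  public ThrottleTolerantCredentialsProvider(AwsCredentialsProvider inner,
      int attempts, long baseSleepMillis) {
    this.inner = inner;
    this.attempts = attempts;
    this.baseSleepMillis = baseSleepMillis;
  }

  @Override
  public AwsCredentials resolveCredentials() {
    SdkException last = null;
    for (int i = 0; i < attempts; i++) {
      try {
        return inner.resolveCredentials();
      } catch (SdkException e) {
        if (!e.retryable()) {
          throw e;                  // real auth failures propagate immediately
        }
        last = e;                   // throttling: back off and retry
        // (this is where error/stats collection would hook in)
        try {
          Thread.sleep(baseSleepMillis << i);   // simple exponential backoff
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw last;
        }
      }
    }
    throw last;                     // retries exhausted
  }
}
```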