[ https://issues.apache.org/jira/browse/HADOOP-16806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583075#comment-17583075 ]

ASF GitHub Bot commented on HADOOP-16806:
-----------------------------------------

jmahonin commented on PR #4753:
URL: https://github.com/apache/hadoop/pull/4753#issuecomment-1222609366

   ```
   [ERROR] testJobSubmissionCollectsTokens[0](org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob)  Time elapsed: 15.048 s  <<< ERROR!
   org.apache.hadoop.fs.s3a.AWSBadRequestException: getFileStatus on s3a://landsat-pds/scene_list.gz: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: D2F7W48K28SJYAQ8; S3 Extended Request ID: jFvfXqdgBlubfaH3svMYtDxgAqD9Ij9RZ/saXipGPoRXDeVUDT/ApOXfYINPOYfG/U+AjZyhfzc=; Proxy: null), S3 Extended Request ID: jFvfXqdgBlubfaH3svMYtDxgAqD9Ij9RZ/saXipGPoRXDeVUDT/ApOXfYINPOYfG/U+AjZyhfzc=:400 Bad Request: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: D2F7W48K28SJYAQ8; S3 Extended Request ID: jFvfXqdgBlubfaH3svMYtDxgAqD9Ij9RZ/saXipGPoRXDeVUDT/ApOXfYINPOYfG/U+AjZyhfzc=; Proxy: null)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:241)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:172)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3567)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3473)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getFileStatus$24(S3AFileSystem.java:3450)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2341)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2360)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3448)
        at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
        at org.apache.hadoop.fs.Globber.doGlob(Globber.java:349)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$globStatus$32(S3AFileSystem.java:4387)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2341)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2360)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:4380)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:310)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:278)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:432)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:310)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:327)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1919)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1674)
        at org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob.testJobSubmissionCollectsTokens(ITestDelegatedMRJob.java:281)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
   Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: D2F7W48K28SJYAQ8; S3 Extended Request ID: jFvfXqdgBlubfaH3svMYtDxgAqD9Ij9RZ/saXipGPoRXDeVUDT/ApOXfYINPOYfG/U+AjZyhfzc=; Proxy: null), S3 Extended Request ID: jFvfXqdgBlubfaH3svMYtDxgAqD9Ij9RZ/saXipGPoRXDeVUDT/ApOXfYINPOYfG/U+AjZyhfzc=
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1879)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1418)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1387)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1157)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:814)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:781)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:755)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:715)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:697)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:561)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:541)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5456)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5403)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1372)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$11(S3AFileSystem.java:2511)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:431)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2499)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2479)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3545)
        ... 46 more
   ```
   
   The other test's stack trace is basically the same.




> AWS AssumedRoleCredentialProvider needs ExternalId add
> ------------------------------------------------------
>
>                 Key: HADOOP-16806
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16806
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.1
>            Reporter: Jon Hartlaub
>            Priority: Minor
>              Labels: pull-request-available
>
> AWS has added a security feature to the assume-role function in the form of
> the "ExternalId" key in the AWS Java SDK
> {{STSAssumeRoleSessionCredentialsProvider.Builder}} class. To support this
> feature, the hadoop-aws {{AssumedRoleCredentialProvider}} needs a patch to
> read this value from the configuration, plus a new constant added to the
> {{org.apache.hadoop.fs.s3a.Constants}} file.
> The ExternalId is not a required security feature; it augments the current
> assume-role configuration.
> Proposed:
>  * Read the assume-role ExternalId token from the configuration key
> {{fs.s3a.assumed.role.externalid}}
>  * Pass the configured ExternalId value to the
> {{STSAssumeRoleSessionCredentialsProvider.Builder}}, e.g.
> {code:java}
> if (StringUtils.isNotEmpty(externalId)) {
>     builder.withExternalId(externalId); // include the token for cross-account assume role
> }
> {code}
> Tests:
>  * +Unit test+ which verifies that the ExternalId state of the
> {{AssumedRoleCredentialProvider}} is consistent with the configured value,
> whether empty or populated
>  * Question: I am not sure how to write the +integration test+ for this
> feature. We have an account configured for this use case that exercises it,
> but I don't have much context on the Hadoop project's AWS S3 integration
> tests; a pointer would help.
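For illustration, here is a minimal sketch of the change the description proposes. The class, constant, and helper names below are assumptions for this sketch, not taken from the actual patch in PR #4753: it reads the optional ExternalId from the configuration and applies it to the STS assume-role builder only when set.

```java
// Hypothetical sketch of the proposed change; names and structure are
// assumptions, not the actual patch in PR #4753.
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.conf.Configuration;

public final class AssumedRoleExternalIdSketch {

  /** Configuration key named in the issue description. */
  public static final String ASSUMED_ROLE_EXTERNAL_ID =
      "fs.s3a.assumed.role.externalid";

  private AssumedRoleExternalIdSketch() {
  }

  /** Resolve the optional ExternalId from the configuration ("" if unset). */
  public static String resolveExternalId(Configuration conf) {
    return conf.getTrimmed(ASSUMED_ROLE_EXTERNAL_ID, "");
  }

  /**
   * Apply the ExternalId to the STS assume-role builder when configured;
   * a no-op otherwise, so existing assume-role setups are unaffected.
   */
  public static void maybeSetExternalId(
      Configuration conf,
      STSAssumeRoleSessionCredentialsProvider.Builder builder) {
    String externalId = resolveExternalId(conf);
    if (StringUtils.isNotEmpty(externalId)) {
      // include the token for cross-account assume role
      builder.withExternalId(externalId);
    }
  }
}
```

Because `withExternalId` is only invoked when the key is non-empty, the feature stays opt-in, matching the description's point that ExternalId augments rather than replaces the current assume-role configuration.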

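And a hedged sketch of the kind of unit test the description asks for, written against the hypothetical helper above; it checks that the resolved ExternalId is consistent with the configured value, whether populated or empty.

```java
// Unit-test sketch for the hypothetical helper above (JUnit 4, as used
// by the hadoop-aws test suite).
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.conf.Configuration;
import org.junit.Test;

public class TestAssumedRoleExternalIdSketch {

  @Test
  public void testExternalIdPopulated() {
    Configuration conf = new Configuration(false);
    conf.set(AssumedRoleExternalIdSketch.ASSUMED_ROLE_EXTERNAL_ID,
        "my-external-id");
    // The configured value must come back unchanged.
    assertEquals("my-external-id",
        AssumedRoleExternalIdSketch.resolveExternalId(conf));
  }

  @Test
  public void testExternalIdEmptyByDefault() {
    Configuration conf = new Configuration(false);
    // With no key set, the resolved value is empty and the builder
    // would be left untouched.
    assertEquals("",
        AssumedRoleExternalIdSketch.resolveExternalId(conf));
  }
}
```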


