[ https://issues.apache.org/jira/browse/HADOOP-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17776745#comment-17776745 ]
ASF GitHub Bot commented on HADOOP-18945:
-----------------------------------------
steveloughran commented on PR #6202:
URL: https://github.com/apache/hadoop/pull/6202#issuecomment-1768758729
Tested: S3 London, with a VPN up to make things slower.
One failure in the new test from HADOOP-18939; created HADOOP-18946 to cover it:
```
[ERROR] testMultiObjectExceptionFilledIn(org.apache.hadoop.fs.s3a.impl.TestErrorTranslation)  Time elapsed: 0.026 s  <<< FAILURE!
java.lang.AssertionError: retry policy of MultiObjectException
	at org.junit.Assert.fail(Assert.java:89)
	at org.junit.Assert.assertTrue(Assert.java:42)
	at org.apache.hadoop.fs.s3a.impl.TestErrorTranslation.testMultiObjectExceptionFilledIn(TestErrorTranslation.java:151)
```
> S3A: IAMInstanceCredentialsProvider failing: Failed to load credentials from
> IMDS
> ---------------------------------------------------------------------------------
>
> Key: HADOOP-18945
> URL: https://issues.apache.org/jira/browse/HADOOP-18945
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 7.2.18.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
> Labels: pull-request-available
>
> Failures in Impala test VMs using IAM for auth
> {code}
> Failed to open file as a parquet file: java.net.SocketTimeoutException: re-open s3a://impala-test-uswest2-1/test-warehouse/test_pre_gregorian_date_parquet_2e80ae30.db/hive2_pre_gregorian.parquet at 84 on s3a://impala-test-uswest2-1/test-warehouse/test_pre_gregorian_date_parquet_2e80ae30.db/hive2_pre_gregorian.parquet: org.apache.hadoop.fs.s3a.auth.NoAwsCredentialsException: +: Failed to load credentials from IMDS
> {code}
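For context, `IAMInstanceCredentialsProvider` is the s3a credential provider that reads credentials from the EC2 instance metadata service (IMDS), which is what is failing above. A minimal sketch of how that provider is typically enabled in `core-site.xml` (the property name is the standard s3a one; this is illustrative configuration, not the fix in the PR):

```xml
<!-- Sketch: restrict the s3a credential chain to the IMDS-backed provider. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider</value>
</property>
```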
--
This message was sent by Atlassian Jira (v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]