[ https://issues.apache.org/jira/browse/HADOOP-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836358#comment-17836358 ]
ASF GitHub Bot commented on HADOOP-19146:
-----------------------------------------
hadoop-yetus commented on PR #6723:
URL: https://github.com/apache/hadoop/pull/6723#issuecomment-2050515997
:confetti_ball: **+1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 7m 36s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 48s | | trunk passed |
| +1 :green_heart: | compile | 0m 26s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 0m 20s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 0m 22s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 26s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 0m 44s | | trunk passed |
| +1 :green_heart: | shadedclient | 19m 53s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 17s | | the patch passed |
| +1 :green_heart: | compile | 0m 23s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 0m 23s | | the patch passed |
| +1 :green_heart: | compile | 0m 15s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 0m 15s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 13s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 21s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 0m 43s | | the patch passed |
| +1 :green_heart: | shadedclient | 19m 58s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 13s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 24s | | The patch does not generate ASF License warnings. |
| | | 93m 51s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6723/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6723 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux bd88072bceb4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / f3b15ae1853cd1fb3abb5207c444e4062c1d6a4e |
| Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6723/1/testReport/ |
| Max. process+thread count | 552 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6723/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> noaa-cors-pds bucket access with global endpoint fails
> ------------------------------------------------------
>
> Key: HADOOP-19146
> URL: https://issues.apache.org/jira/browse/HADOOP-19146
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3, test
> Affects Versions: 3.4.0
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> All tests accessing noaa-cors-pds use the us-east-1 region, as configured at
> the bucket level. If a global endpoint region is configured instead (e.g.
> us-west-2), they fail to access the bucket.
>
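> The direction of the fix can be illustrated with S3A's per-bucket
> configuration mechanism, where {{fs.s3a.bucket.<bucket>.<option>}} overrides
> the matching global {{fs.s3a.<option>}} for that bucket only. The snippet
> below is a minimal sketch of that idea, not the actual patch; the class name
> is made up for illustration, and credential setup is elided:
> {code:java}
> import java.net.URI;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class NoaaRegionOverrideSketch {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Global region, as in the failing setup described above.
>     conf.set("fs.s3a.endpoint.region", "us-west-2");
>     // Per-bucket override: noaa-cors-pds lives in us-east-1, so pin it
>     // there. The bucket-level setting takes precedence over the global
>     // one, avoiding the 301 permanent redirect on HEAD requests.
>     conf.set("fs.s3a.bucket.noaa-cors-pds.endpoint.region", "us-east-1");
>     FileSystem fs = FileSystem.get(URI.create("s3a://noaa-cors-pds/"), conf);
>     System.out.println(fs.getFileStatus(new Path("/")));
>   }
> }
> {code}
>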
> Sample error:
> {code:java}
> org.apache.hadoop.fs.s3a.AWSRedirectException: Received permanent redirect response to region [us-east-1]. This likely indicates that the S3 region configured in fs.s3a.endpoint.region does not match the AWS region containing the bucket.: null (Service: S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, Extended Request ID: 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:253)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4041)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3947)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getFileStatus$26(S3AFileSystem.java:3924)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3922)
> at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
> at org.apache.hadoop.fs.Globber.doGlob(Globber.java:349)
> at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$globStatus$35(S3AFileSystem.java:4956)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:4949)
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:313)
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:281)
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:445)
> at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:311)
> at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:328)
> at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:201)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
> {code}
> {code:java}
> Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null (Service: S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, Extended Request ID: 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
> at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
> at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
> at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85)
> at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:43)
> at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler$Crc32ValidationResponseHandler.handle(AwsSyncClientHandler.java:93)
> at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$successTransformationResponseHandler$7(BaseClientHandler.java:279)
> ...
> ...
> ...
> at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:53)
> at software.amazon.awssdk.services.s3.DefaultS3Client.headObject(DefaultS3Client.java:6319)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$10(S3AFileSystem.java:2901)
> at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
> at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:431)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2889)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2869)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4019)
> {code}