[ https://issues.apache.org/jira/browse/HADOOP-18310?focusedWorklogId=783645&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783645 ]
ASF GitHub Bot logged work on HADOOP-18310:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 22/Jun/22 00:42
            Start Date: 22/Jun/22 00:42
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on PR #4483:
URL: https://github.com/apache/hadoop/pull/4483#issuecomment-1162498247

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 70m 5s | | trunk passed |
| +1 :green_heart: | compile | 0m 53s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 0m 45s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 43s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 51s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 0m 40s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 26s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 36s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 47s | | the patch passed |
| +1 :green_heart: | compile | 0m 41s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 0m 41s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 26s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 38s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 12s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 39s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 40s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. |
| | | | 133m 43s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4483 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux edee9c8b9866 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / f9292dffe2c3cdc8d351a9f87010fea36c003074 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/2/testReport/ |
| Max. process+thread count | 601 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4483/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
Issue Time Tracking
-------------------

    Worklog Id:     (was: 783645)
    Time Spent:     0.5h  (was: 20m)

> Add option and make 400 bad request retryable
> ---------------------------------------------
>
>                 Key: HADOOP-18310
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18310
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.3.4
>            Reporter: Tak-Lon (Stephen) Wu
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When using a customized credential provider via
> fs.s3a.aws.credentials.provider, e.g.
> org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider, an expired
> credential supplied by that pluggable provider causes S3 to return a
> 400 Bad Request error.
> Here, the current S3ARetryPolicy fails immediately and does not retry at
> the S3A level.
> A recent HBase use case showed that this exception can cause a Region
> Server to be abandoned immediately, without retry, when the file system
> is opening a file or S3AInputStream is reopening one.
> For the S3AInputStream cases in particular, we cannot find a good way to
> retry outside the file system semantics (if an ongoing stream fails, it
> is considered to be in an irreparable state), so we propose an optional
> flag to enable this retry within S3A.
> {code}
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The provided
> token has expired.
> (Service: Amazon S3; Status Code: 400; Error Code: ExpiredToken;
> Request ID: XYZ; S3 Extended Request ID: ABC; Proxy: null),
> S3 Extended Request ID: 123
> 	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1862)
> 	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1415)
> 	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1384)
> 	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1154)
> 	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:811)
> 	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779)
> 	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753)
> 	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713)
> 	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695)
> 	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559)
> 	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539)
> 	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5453)
> 	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5400)
> 	at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1524)
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem$InputStreamCallbacksImpl.getObject(S3AFileSystem.java:1506)
> 	at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$reopen$0(S3AInputStream.java:217)
> 	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:117)
> 	... 35 more
> {code}

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
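The retry-flag proposal above can be sketched as a small, dependency-free decision function. This is a hypothetical illustration, not the actual S3ARetryPolicy code: the class name `Retry400Sketch`, the method `evaluate`, and the boolean flag are all invented for the example; the real change would be wired through `S3ARetryPolicy` and a `fs.s3a.*` configuration key.

```java
// Hypothetical sketch of making HTTP 400 Bad Request retryable behind an
// optional flag. All names here are invented for illustration.
public class Retry400Sketch {

    public enum RetryAction { RETRY, FAIL }

    /**
     * Decide whether a failed S3 request should be retried.
     *
     * @param statusCode      HTTP status code returned by S3
     * @param retryBadRequest the proposed optional flag: when true,
     *                        400 Bad Request (e.g. ExpiredToken from a
     *                        pluggable credential provider) is retried
     */
    public static RetryAction evaluate(int statusCode, boolean retryBadRequest) {
        if (statusCode == 400) {
            // Today the policy fails fast on 400; the proposal makes
            // this branch configurable so the provider can refresh
            // expired credentials between attempts.
            return retryBadRequest ? RetryAction.RETRY : RetryAction.FAIL;
        }
        // Throttling and server-side errors are already treated as retryable.
        if (statusCode == 429 || statusCode >= 500) {
            return RetryAction.RETRY;
        }
        // Other client errors (403, 404, ...) remain fail-fast.
        return RetryAction.FAIL;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(400, true));   // RETRY with the flag on
        System.out.println(evaluate(400, false));  // FAIL (current behaviour)
        System.out.println(evaluate(503, false));  // RETRY (already retryable)
    }
}
```

Keeping the flag off by default preserves today's fail-fast behaviour for genuinely malformed requests, while deployments using short-lived credentials can opt in.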