[ https://issues.apache.org/jira/browse/HADOOP-18465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17613602#comment-17613602 ]
ASF GitHub Bot commented on HADOOP-18465:
-----------------------------------------
hadoop-yetus commented on PR #4977:
URL: https://github.com/apache/hadoop/pull/4977#issuecomment-1270248216
:confetti_ball: **+1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 10m 51s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ branch-3.3 Compile Tests _ |
| +1 :green_heart: | mvninstall | 39m 36s | | branch-3.3 passed |
| +1 :green_heart: | compile | 0m 34s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 0m 31s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 0m 44s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 0m 35s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 1m 15s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 25m 48s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 37s | | the patch passed |
| +1 :green_heart: | compile | 0m 29s | | the patch passed |
| +1 :green_heart: | javac | 0m 29s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 17s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 33s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 22s | | the patch passed |
| +1 :green_heart: | spotbugs | 1m 8s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 30s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 7s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | 111m 44s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4977/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4977 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 7ff1fca53413 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / 0c1fb26165a86ef1862e4560170e21fca770601c |
| Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4977/1/testReport/ |
| Max. process+thread count | 532 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4977/1/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> S3A server-side encryption tests fail before checking encryption tests should skip
> ----------------------------------------------------------------------------------
>
> Key: HADOOP-18465
> URL: https://issues.apache.org/jira/browse/HADOOP-18465
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Daniel Carl Jones
> Assignee: Daniel Carl Jones
> Priority: Minor
> Labels: pull-request-available
>
> When setting {{test.fs.s3a.encryption.enabled}} to {{false}}, this is not respected by ITestS3AEncryptionSSEKMSDefaultKey. See failure below.
>
> {code:java}
> ------------------------------------------------------------------------------
> Test set: org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSEKMSDefaultKey
> -------------------------------------------------------------------------------
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 6.053 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSEKMSDefaultKey
> testEncryptionOverRename(org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSEKMSDefaultKey)  Time elapsed: 3.063 s <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSBadRequestException: PUT 0-byte object on fork-0002/test: com.amazonaws.services.s3.model.AmazonS3Exception: SSE unavailable (Service: Amazon S3; Status Code: 400; Proxy: null)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:242)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
> at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:4394)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:4379)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.access$1800(S3AFileSystem.java:268)
> at org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.createFakeDirectory(S3AFileSystem.java:3469)
> at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:159)
> at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:57)
> at org.apache.hadoop.fs.s3a.impl.ExecutingStoreOperation.apply(ExecutingStoreOperation.java:76)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2441)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2460)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:3435)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2456)
> at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:363)
> at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:205)
> at org.apache.hadoop.fs.s3a.AbstractS3ATestBase.setup(AbstractS3ATestBase.java:111)
> at org.apache.hadoop.fs.s3a.AbstractTestS3AEncryption.setup(AbstractTestS3AEncryption.java:94)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1879)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1418)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1387)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1157)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:814)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:781)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:755)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:715)
> at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:697)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:561)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:541)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5456)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5403)
> at com.amazonaws.services.s3.AmazonS3Client.access$300(AmazonS3Client.java:421)
> at com.amazonaws.services.s3.AmazonS3Client$PutObjectStrategy.invokeServiceCall(AmazonS3Client.java:6531)
> at com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1861)
> at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1821)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$putObjectDirect$18(S3AFileSystem.java:2937)
> at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfSupplier(IOStatisticsBinding.java:651)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:2934)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$31(S3AFileSystem.java:4396)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
> ... 37 more
> {code}
> What I believe is happening: the test runs the superclass setup method, which asserts that it can create a directory. If the S3-compatible endpoint does not support encryption, that assertion fails, so the test errors out before it ever reaches the check that would skip it.
>
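The ordering problem described above can be sketched without any Hadoop dependencies. This is an illustrative reduction, not the actual patch in PR #4977: the class and method names (ContractTestBase, EncryptionTest, fixedSetup) are hypothetical, standing in for AbstractFSContractTestBase, AbstractTestS3AEncryption, and their setup methods. The point is only the ordering: the skip condition must be evaluated before the superclass setup touches the store.

```java
// Dependency-free sketch of the setup-ordering bug. Illustrative names only.

class SseUnavailableException extends RuntimeException {}  // stands in for AWSBadRequestException
class SkipTestException extends RuntimeException {}        // stands in for a JUnit skip

class ContractTestBase {
    /** Superclass setup: creates a test directory in the store. Against an
     *  S3-compatible endpoint without SSE support, the PUT is rejected. */
    void setup(boolean storeSupportsSse) {
        if (!storeSupportsSse) {
            throw new SseUnavailableException();
        }
    }
}

class EncryptionTest extends ContractTestBase {
    final boolean encryptionTestsEnabled; // mirrors test.fs.s3a.encryption.enabled

    EncryptionTest(boolean enabled) { this.encryptionTestsEnabled = enabled; }

    /** Buggy order: the store is touched first, so the store error surfaces
     *  before the skip check can run — the failure reported in this issue. */
    void buggySetup(boolean storeSupportsSse) {
        super.setup(storeSupportsSse);
        if (!encryptionTestsEnabled) throw new SkipTestException();
    }

    /** Fixed order: evaluate the skip condition first; when encryption tests
     *  are disabled, the store is never contacted. */
    void fixedSetup(boolean storeSupportsSse) {
        if (!encryptionTestsEnabled) throw new SkipTestException();
        super.setup(storeSupportsSse);
    }
}
```

With encryption tests disabled against a store lacking SSE, buggySetup surfaces the store error while fixedSetup skips cleanly.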
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]