[
https://issues.apache.org/jira/browse/HADOOP-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17898623#comment-17898623
]
Syed Shameerur Rahman commented on HADOOP-19336:
------------------------------------------------
*1. testSizeOfEncryptedObjectFromHeaderWithV1Compatibility*
This fails only when there are bucket overrides and works fine otherwise. It
looks like I had removed the bucket override configs before running the test,
which is why I missed it. My bad.
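If the test needs to be insulated from per-bucket settings, the usual
hadoop-aws pattern is to strip them in createConfiguration(). A rough sketch
below (not the actual patch), assuming the S3ATestUtils helpers and the
standard encryption option names:
{code}
// Rough sketch only: clear base and per-bucket encryption overrides in test
// setup so the test runs against one consistent configuration.
// Assumes S3ATestUtils#removeBaseAndBucketOverrides / #getTestBucketName;
// this is a fragment of the test class, not a complete file.
import org.apache.hadoop.conf.Configuration;

import static org.apache.hadoop.fs.s3a.Constants.S3_ENCRYPTION_ALGORITHM;
import static org.apache.hadoop.fs.s3a.Constants.S3_ENCRYPTION_KEY;
import static org.apache.hadoop.fs.s3a.S3ATestUtils.getTestBucketName;
import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;

@Override
protected Configuration createConfiguration() {
  Configuration conf = super.createConfiguration();
  removeBaseAndBucketOverrides(getTestBucketName(conf), conf,
      S3_ENCRYPTION_ALGORITHM,
      S3_ENCRYPTION_KEY);
  return conf;
}
{code}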
*2. ITestUploadRecovery*
Not sure how I missed this.
{code}
Caused by: java.lang.IllegalStateException: Must use either different key or iv for GCM encryption
    at com.sun.crypto.provider.CipherCore.checkReinit(CipherCore.java:1088)
    at com.sun.crypto.provider.CipherCore.update(CipherCore.java:662)
    at com.sun.crypto.provider.AESCipher.engineUpdate(AESCipher.java:380)
    at javax.crypto.Cipher.update(Cipher.java:1835)
    at software.amazon.encryption.s3.internal.CipherSubscriber.onNext(CipherSubscriber.java:52)
    at software.amazon.encryption.s3.internal.CipherSubscriber.onNext(CipherSubscriber.java:16)
    at software.amazon.awssdk.utils.async.SimplePublisher.doProcessQueue(SimplePublisher.java:267)
    at software.amazon.awssdk.utils.async.SimplePublisher.processEventQueue(SimplePublisher.java:224)
    ... 10 more
{code}
This happens during multipart upload: the last part has to be marked as the
last one. I thought we already did that, but it is happening in the
recovery/retry flow. [[email protected]] - any code pointers for this, e.g. how an
MPU part upload is retried?
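For reference, the reinit check in the stack trace is easy to reproduce
outside the SDK: once a GCM encryption has completed, SunJCE refuses to keep
encrypting with the same key/IV, which is exactly what a retry would hit if it
streams the part body through the same cipher again. A standalone sketch of
the JDK behaviour (not the SDK code path):
{code}
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class GcmReinitDemo {
  public static void main(String[] args) throws Exception {
    SecretKey key = KeyGenerator.getInstance("AES").generateKey();
    byte[] iv = new byte[12];
    new SecureRandom().nextBytes(iv);
    byte[] part = "part body".getBytes(StandardCharsets.UTF_8);

    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
    cipher.doFinal(part);   // first attempt at the part succeeds

    // "Retrying" the same bytes through the same cipher, without re-initialising
    // it with a fresh IV, throws:
    //   IllegalStateException: Must use either different key or iv for GCM encryption
    cipher.update(part);
  }
}
{code}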
*3. ITestAwsSdkWorkarounds*
Again, not sure how I missed this. Will look into this as well. Looks like the
S3EncryptionClient SDK doesn't log it.
*4. 400 error without region set (KMS providing nothing helpful). Proposed:
move troubleshooting into encryption.md, cover 400 and this as a possible cause*
This was already covered, but in that case it was throwing NotFoundException.
Anyhow, now that we have seen the 400 as well, I will cover it too.
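For the encryption.md entry, the fix on the user side is to set the region
explicitly alongside the CSE-KMS options. A hedged sketch of the kind of
configuration the troubleshooting section would point at; the key ARN and
region values are placeholders, not from the patch:
{code}
// Hedged sketch: CSE-KMS settings plus an explicit region, so the KMS client
// is not left to work the region out and fail with a bare 400.
import org.apache.hadoop.conf.Configuration;

public class CseKmsRegionExample {
  public static Configuration cseKmsConf() {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.encryption.algorithm", "CSE-KMS");
    conf.set("fs.s3a.encryption.key",
        "arn:aws:kms:eu-west-2:111122223333:key/EXAMPLE-KEY-ID"); // placeholder ARN
    conf.set("fs.s3a.endpoint.region", "eu-west-2");              // region set explicitly
    return conf;
  }
}
{code}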
> S3A: Test failures after CSE support added
> ------------------------------------------
>
> Key: HADOOP-19336
> URL: https://issues.apache.org/jira/browse/HADOOP-19336
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.5.0
> Reporter: Steve Loughran
> Assignee: Syed Shameerur Rahman
> Priority: Major
>
> Testing hadoop trunk with CSE-KMS configured I get
> * 400 error without region set (KMS providing nothing helpful). Proposed:
> move troubleshooting into encryption.md, cover 400 and this as a possible cause
> * test failures
> {code}
> [ERROR]
> ITestS3AClientSideEncryptionKms>ITestS3AClientSideEncryption.testSizeOfEncryptedObjectFromHeaderWithV1Compatibility:345->ITestS3AClientSideEncryption.assertFileLength:447
> [Length of
> s3a://stevel-london/job-00-fork-0009/test/testSizeOfEncryptedObjectFromHeaderWithV1Compatibility/file
> status:
> S3AFileStatus{path=s3a://stevel-london/job-00-fork-0009/test/testSizeOfEncryptedObjectFromHeaderWithV1Compatibility/file;
> isDirectory=false; length=1024; replication=1; blocksize=33554432;
> modification_time=1731674289000; access_time=0; owner=stevel; group=stevel;
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=true;
> isErasureCoded=false} isEmptyDirectory=FALSE
> eTag="0f343b0931126a20f133d67c2b018a3b"
> versionId=JyA1I_OW8osQTS3zWdn_Z0qlQYqBZ_7.] expected:<10[]L> but was:<10[24]L>
> [ERROR] ITestAwsSdkWorkarounds.testNoisyLogging:99 [LOG output does not
> contain the forbidden text. Has the SDK been fixed?]
> Expecting:
> <"">
> to contain:
> <"The provided S3AsyncClient is an instance of MultipartS3AsyncClient">
> [ERROR] Errors:
> [ERROR] ITestUploadRecovery.testCommitOperations:234 » AWSClientIO upload
> part #1 uplo...
> [ERROR] ITestUploadRecovery.testMagicWriteRecovery[array-commit-true] »
> AWSClientIO up...
> [ERROR] ITestUploadRecovery.testMagicWriteRecovery[bytebuffer-commit-false]
> » AWSClientIO
> [ERROR] ITestUploadRecovery.testMagicWriteRecovery[disk-commit-false] »
> AWSClientIO up...
> {code}