[ https://issues.apache.org/jira/browse/HADOOP-19512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17938188#comment-17938188 ]

Steve Loughran commented on HADOOP-19512:
-----------------------------------------

S3 Express object timeouts. Is the path valid?
{code}

org.apache.hadoop.fs.s3a.AWSBadRequestException: Writing Object on test/testObjectUploadTimeouts/__magic_job-0001/__base/file2: software.amazon.awssdk.services.s3.model.InvalidRequestException: Invalid Request (Service: S3, Status Code: 400, Request ID: 01a21dc11b000195cd2f8e8a0509e562a4865656, Extended Request ID: SiWO9oWVxql):InvalidRequest: Invalid Request (Service: S3, Status Code: 400, Request ID: 01a21dc11b000195cd2f8e8a0509e562a4865656, Extended Request ID: SiWO9oWVxql)

        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:265)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
        at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
        at org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:207)
        at org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:521)
        at org.apache.hadoop.fs.s3a.commit.magic.S3MagicCommitTracker.lambda$upload$0(S3MagicCommitTracker.java:120)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.measureDurationOfInvocation(IOStatisticsBinding.java:494)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:465)
        at org.apache.hadoop.fs.s3a.commit.magic.S3MagicCommitTracker.upload(S3MagicCommitTracker.java:119)
        at org.apache.hadoop.fs.s3a.commit.magic.S3MagicCommitTracker.aboutToComplete(S3MagicCommitTracker.java:83)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:523)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:77)
        at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
        at org.apache.hadoop.fs.contract.ContractTestUtils.file(ContractTestUtils.java:729)
        at org.apache.hadoop.fs.contract.ContractTestUtils.createFile(ContractTestUtils.java:708)
        at org.apache.hadoop.fs.s3a.impl.ITestConnectionTimeouts.testObjectUploadTimeouts(ITestConnectionTimeouts.java:265)
{code}

        

> S3A: Test failures testing with unusual bucket configurations
> -------------------------------------------------------------
>
>                 Key: HADOOP-19512
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19512
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3, test
>    Affects Versions: 3.5.0, 3.4.2
>            Reporter: Steve Loughran
>            Assignee: Ahmar Suhail
>            Priority: Minor
>         Attachments: test-failures.txt
>
>
> 1. The logic to skip vector IO contract tests doesn't work if the analytics 
> stream is set on a per-bucket basis for the test bucket.
> 2. Tests with SSE-C are failing. The test bucket is normally set up to use 
> SSE-KMS, FWIW:
> {code}
>   <property>
>     <name>fs.s3a.bucket.stevel-london.encryption.algorithm</name>
>     <value>SSE-KMS</value>
>   </property>
> {code}
> This only happens when the analytics stream is set for the test bucket with 
> fs.s3a.bucket.stevel-london.input.stream.type=analytics; if it is set 
> globally, all is good.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
