[ https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879003#comment-17879003 ]
ASF GitHub Bot commented on HADOOP-19221:
-----------------------------------------
steveloughran commented on PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#issuecomment-2327356752
test-wise, something transient:
```
[ERROR] testDirProbes[keep-markers](org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost)  Time elapsed: 0.856 s <<< ERROR!
java.nio.file.AccessDeniedException: s3a://stevel-london/job-00-fork-0005/test/testDirProbes[keep-markers]: org.apache.hadoop.fs.s3a.audit.AuditFailureException: efb58954-16ca-40d2-8f6a-2aef61bba339-00000038 unaudited operation executing a request outside an audit span {action_http_head_request 'job-00-fork-0005/test/testDirProbes[keep-markers]' size=0, mutating=false}
	at org.apache.hadoop.fs.s3a.audit.AuditIntegration.translateAuditException(AuditIntegration.java:161)
	at org.apache.hadoop.fs.s3a.audit.AuditIntegration.maybeTranslateAuditException(AuditIntegration.java:175)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:200)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:157)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4102)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:4005)
	at org.apache.hadoop.fs.s3a.S3ATestUtils.innerGetFileStatus(S3ATestUtils.java:1641)
	at org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest.lambda$interceptGetFileStatusFNFE$5(AbstractS3ACostTest.java:459)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:500)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:386)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:455)
	at org.apache.hadoop.fs.s3a.performance.OperationCostValidator.lambda$intercepting$1(OperationCostValidator.java:221)
	at org.apache.hadoop.fs.s3a.performance.OperationCostValidator.exec(OperationCostValidator.java:167)
	at org.apache.hadoop.fs.s3a.performance.OperationCostValidator.intercepting(OperationCostValidator.java:220)
	at org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest.verifyMetricsIntercepting(AbstractS3ACostTest.java:342)
	at org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest.interceptOperation(AbstractS3ACostTest.java:361)
	at org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest.interceptGetFileStatusFNFE(AbstractS3ACostTest.java:457)
	at org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testDirProbes(ITestS3AFileOperationCost.java:339)
```
This says the call wasn't in a span, but it should be in one unless the span
source is null; the span source is set to the FS in setup(), and the FS sets
its audit span in initialize() to a real or stub audit manager.
So I have *no* idea how this can be reached. I've seen something like this
before, so I consider it completely unrelated.
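For context, a minimal sketch of the invariant described above, using the
`AuditSpanSource`/`AuditSpan` interfaces from
`org.apache.hadoop.fs.store.audit`; the wiring is illustrative, not the actual
test or filesystem code:
```java
import java.io.IOException;

import org.apache.hadoop.fs.store.audit.AuditSpan;
import org.apache.hadoop.fs.store.audit.AuditSpanSource;

final class SpanInvariantSketch {
  /**
   * Every S3 request issued by the test should run inside a span like this;
   * after setup() the span source is the filesystem itself.
   */
  static void headInsideSpan(AuditSpanSource<AuditSpan> source, String key)
      throws IOException {
    try (AuditSpan span = source.createSpan("op_get_file_status", key, null)) {
      span.activate(); // becomes the active span for the current thread
      // a HEAD request issued here is attributed to this span, so the
      // "unaudited operation" failure should be unreachable
    }
  }
}
```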
> S3A: Unable to recover from failure of multipart block upload attempt "Status
> Code: 400; Error Code: RequestTimeout"
> --------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-19221
> URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. a network error), then
> all subsequent retry attempts fail with a 400 response and error code
> RequestTimeout.
> {code}
> Your socket connection to the server was not read from or written to within
> the timeout period. Idle connections will be closed. (Service: Amazon S3;
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial
> failure was a 500); all retries failed to upload properly from the source
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: mark/reset doesn't work for these input streams. On the v1 SDK
> we would build a multipart block upload request by passing in (file, offset,
> length); the way we now do this doesn't recover.
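> A minimal illustration of the suspected failure mode, assuming the AWS SDK
> v2 sync {{RequestBody}} API; {{blockFile}} and the class name are
> placeholders, not Hadoop code:
> {code}
> import java.io.File;
> import java.io.FileInputStream;
> import java.io.IOException;
> import java.io.InputStream;
>
> import software.amazon.awssdk.core.sync.RequestBody;
>
> final class OneShotUploadSketch {
>   static RequestBody bodyFor(File blockFile) throws IOException {
>     // a plain FileInputStream does not support mark/reset
>     InputStream fileStream = new FileInputStream(blockFile);
>     // the first attempt drains fileStream; a retried PUT re-reads the same
>     // exhausted stream, S3 waits for bytes that never arrive, and the
>     // request fails with 400 RequestTimeout
>     return RequestBody.fromInputStream(fileStream, blockFile.length());
>   }
> }
> {code}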
> Probably fixable by providing our own {{ContentStreamProvider}}
> implementations for:
> # file + offset + length
> # bytebuffer
> # byte array
> The SDK does have explicit support for the in-memory ones, but those copy
> the data blocks first; we don't want that, as it would double the memory
> requirements of active blocks.
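> A minimal sketch of option 1 (file + offset + length), assuming the SDK's
> {{software.amazon.awssdk.http.ContentStreamProvider}} interface and
> commons-io {{BoundedInputStream}}; the class name is illustrative:
> {code}
> import java.io.FileInputStream;
> import java.io.IOException;
> import java.io.InputStream;
> import java.io.UncheckedIOException;
>
> import org.apache.commons.io.input.BoundedInputStream;
> import software.amazon.awssdk.http.ContentStreamProvider;
>
> final class FileSliceContentStreamProvider implements ContentStreamProvider {
>   private final String path;
>   private final long offset;
>   private final long length;
>
>   FileSliceContentStreamProvider(String path, long offset, long length) {
>     this.path = path;
>     this.offset = offset;
>     this.length = length;
>   }
>
>   @Override
>   public InputStream newStream() {
>     // open a fresh stream on every call: each retry re-reads the block's
>     // bytes from the file instead of an already-drained stream
>     try {
>       FileInputStream in = new FileInputStream(path);
>       long toSkip = offset;
>       while (toSkip > 0) {
>         toSkip -= in.skip(toSkip); // skip() may skip fewer bytes than asked
>       }
>       return new BoundedInputStream(in, length); // cap reads at `length`
>     } catch (IOException e) {
>       // newStream() does not declare IOException
>       throw new UncheckedIOException(e);
>     }
>   }
> }
> {code}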