[ https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879569#comment-17879569 ]
ASF GitHub Bot commented on HADOOP-19221:
-----------------------------------------
steveloughran commented on PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#issuecomment-2331725271
```
[ERROR] Failures:
[ERROR] org.apache.hadoop.fs.s3a.TestInvoker.test500isMappedTooAWSStatus500Exception(org.apache.hadoop.fs.s3a.TestInvoker)
[ERROR] Run 1: TestInvoker.test500isMappedTooAWSStatus500Exception:191 [should retry org.apache.hadoop.fs.s3a.AWSStatus500Exception: test on /: software.amazon.awssdk.services.s3.model.S3Exception: We encountered an internal error. Please try again: We encountered an internal error. Please try again] expected:<[FAIL]> but was:<[RETRY]>
[ERROR] Run 2: TestInvoker.test500isMappedTooAWSStatus500Exception:191 [should retry org.apache.hadoop.fs.s3a.AWSStatus500Exception: test on /: software.amazon.awssdk.services.s3.model.S3Exception: We encountered an internal error. Please try again: We encountered an internal error. Please try again] expected:<[FAIL]> but was:<[RETRY]>
[ERROR] Run 3: TestInvoker.test500isMappedTooAWSStatus500Exception:191 [should retry org.apache.hadoop.fs.s3a.AWSStatus500Exception: test on /: software.amazon.awssdk.services.s3.model.S3Exception: We encountered an internal error. Please try again: We encountered an internal error. Please try again] expected:<[FAIL]> but was:<[RETRY]>
[INFO]
[ERROR] org.apache.hadoop.fs.s3a.TestInvoker.test5xxRetriesDisabled(org.apache.hadoop.fs.s3a.TestInvoker)
[ERROR] Run 1: TestInvoker.test5xxRetriesDisabled:240->assertRetryAction:367 500 Expected action RetryAction(action=FAIL, delayMillis=0, reason=null) from shouldRetry(software.amazon.awssdk.services.s3.model.S3Exception: We encountered an internal error. Please try again, 1, true), but got RETRY
[ERROR] Run 2: TestInvoker.test5xxRetriesDisabled:240->assertRetryAction:367 500 Expected action RetryAction(action=FAIL, delayMillis=0, reason=null) from shouldRetry(software.amazon.awssdk.services.s3.model.S3Exception: We encountered an internal error. Please try again, 1, true), but got RETRY
[ERROR] Run 3: TestInvoker.test5xxRetriesDisabled:240->assertRetryAction:367 500 Expected action RetryAction(action=FAIL, delayMillis=0, reason=null) from shouldRetry(software.amazon.awssdk.services.s3.model.S3Exception: We encountered an internal error. Please try again, 1, true), but got RETRY
[INFO]
```
> S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"
> ---------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-19221
> URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. network error) then
> all subsequent retry attempts fail with a 400 response and error code
> RequestTimeout.
> {code}
> Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (Service: Amazon S3; Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial
> failure was a 500); all retries failed to upload properly from the source
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: mark/reset doesn't work for input streams. On the v1 SDK we
> would build a multipart block upload request by passing in (file, offset,
> length); the way we now do this doesn't recover.
> Probably fixable by providing our own {{ContentStreamProvider}}
> implementations for:
> # file + offset + length
> # bytebuffer
> # byte array
> The SDK does have explicit support for the in-memory ones, but those copy
> the data blocks first. We don't want that, as it would double the memory
> requirements of active blocks.
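
For context: the v2 SDK's {{software.amazon.awssdk.http.ContentStreamProvider}} is a single-method interface ({{newStream()}}) which {{RequestBody.fromContentProvider()}} calls to obtain a fresh stream for each attempt. Below is a minimal sketch of the file + offset + length case, not the actual patch: the class name {{FileBlockContentStreamProvider}} is an illustrative assumption, and it uses Hadoop's {{org.apache.hadoop.util.LimitInputStream}} to bound the read to the block.

{code}
// Illustrative sketch only: names are assumptions, not the HADOOP-19221 patch.
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

import org.apache.hadoop.util.LimitInputStream;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.http.ContentStreamProvider;

/**
 * Supplies a fresh, bounded stream over [offset, offset + length) of a file
 * on every call, so the SDK can replay the part body after a failure.
 */
public final class FileBlockContentStreamProvider implements ContentStreamProvider {

  private final File file;
  private final long offset;
  private final long length;

  public FileBlockContentStreamProvider(File file, long offset, long length) {
    this.file = file;
    this.offset = offset;
    this.length = length;
  }

  @Override
  public InputStream newStream() {
    try {
      FileInputStream in = new FileInputStream(file);
      try {
        // seek to the start of the block; skip() may advance in steps
        long remaining = offset;
        while (remaining > 0) {
          long skipped = in.skip(remaining);
          if (skipped <= 0) {
            throw new IOException("Could not seek to offset " + offset + " in " + file);
          }
          remaining -= skipped;
        }
      } catch (IOException e) {
        in.close();
        throw e;
      }
      // cap the stream at the block length so the SDK never reads past it
      return new LimitInputStream(new BufferedInputStream(in), length);
    } catch (IOException e) {
      // ContentStreamProvider.newStream() does not declare IOException
      throw new UncheckedIOException(e);
    }
  }

  /** A replayable request body for this block. */
  public RequestBody asRequestBody() {
    return RequestBody.fromContentProvider(this, length, "application/octet-stream");
  }
}
{code}

Because {{newStream()}} reopens the file and seeks on every call, a retried part upload re-reads the same bytes from disk instead of draining an already-consumed stream, and nothing is buffered in memory.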