[
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran updated HADOOP-19221:
------------------------------------
Description:
If a multipart PUT request fails for some reason (e.g. a network error), all
subsequent retry attempts fail with a 400 response and ErrorCode
RequestTimeout.
{code}
Your socket connection to the server was not read from or written to within the
timeout period. Idle connections will be closed. (Service: Amazon S3; Status
Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended Request ID:
{code}
The list of suppressed exceptions contains the root cause (the initial failure
was a 500); every retry then failed to upload from the source input stream
passed in via {{RequestBody.fromInputStream(fileStream, size)}}.
Hypothesis: mark/reset does not work for these input streams, so the SDK cannot
rewind the request body for a retry. With the v1 SDK we built the multipart
block upload request by passing in (file, offset, length); the way we do it now
does not recover.
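For illustration only, a minimal sketch of that v1-style pattern, where the
part request carries (file, offset, length) so a retry can always re-read the
block from local disk; the class and parameter names here are illustrative,
not the actual S3A code:
{code}
// Illustrative sketch (not the S3A implementation): v1 SDK multipart part
// upload built from (file, offset, length), so retries re-read from disk.
import java.io.File;

import com.amazonaws.services.s3.model.UploadPartRequest;

class V1PartUploadSketch {
  static UploadPartRequest partFromFile(String bucket, String key,
      String uploadId, int partNumber, File blockFile, long offset,
      long length) {
    return new UploadPartRequest()
        .withBucketName(bucket)
        .withKey(key)
        .withUploadId(uploadId)
        .withPartNumber(partNumber)
        .withFile(blockFile)      // data stays on disk
        .withFileOffset(offset)   // start of this block within the file
        .withPartSize(length);    // bytes to send for this part
  }
}
{code}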
Probably fixable by providing our own {{ContentStreamProvider}} implementations
for:
# file + offset + length
# bytebuffer
# byte array
The SDK does have explicit support for the in-memory ones, but those copy the
data blocks first; we don't want that, as it would double the memory
requirements of active blocks.
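A minimal sketch of what the file-backed provider could look like, assuming we
open a fresh bounded stream on every {{newStream()}} call so an SDK retry never
depends on mark/reset; class and method names are hypothetical, not existing
S3A code:
{code}
// Hypothetical sketch, not the actual S3A implementation: a
// ContentStreamProvider backed by (file, offset, length) which re-opens the
// block on every call, so each retry gets the full data again.
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.commons.io.input.BoundedInputStream;
import org.apache.hadoop.io.IOUtils;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.http.ContentStreamProvider;

class FileBlockContentStreamProvider implements ContentStreamProvider {
  private final Path file;
  private final long offset;
  private final long length;

  FileBlockContentStreamProvider(Path file, long offset, long length) {
    this.file = file;
    this.offset = offset;
    this.length = length;
  }

  @Override
  public InputStream newStream() {
    try {
      // A fresh stream per call: skip to the block start and bound the read
      // to the block length, so no mark/reset support is required.
      InputStream in = Files.newInputStream(file);
      IOUtils.skipFully(in, offset);
      return new BoundedInputStream(in, length);
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  RequestBody asRequestBody() {
    // Content length is known up front, so the SDK can set Content-Length
    // without buffering the block in memory.
    return RequestBody.fromContentProvider(this, length,
        "application/octet-stream");
  }
}
{code}
The bytebuffer and byte array variants would follow the same shape, wrapping
the existing buffer in a new stream per call rather than copying it.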
was:
If a slow block upload takes too long, the connection is broken on the S3 side
and surfaces as a 400 response with this error message:
{code}
Your socket connection to the server was not read from or written to within the
timeout period. Idle connections will be closed. (Service: Amazon S3; Status
Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended Request ID:
{code}
This is recoverable and should be treated as such, either using the normal
exception policy or maybe even the throttlePolicy.
> S3A: AWS 400 Response +ErrorCode RequestTimeout on multipart PUT
> -----------------------------------------------------------------
>
> Key: HADOOP-19221
> URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Priority: Minor
>
> If a multipart PUT request fails for some reason (e.g. a network error), all
> subsequent retry attempts fail with a 400 response and ErrorCode
> RequestTimeout.
> {code}
> Your socket connection to the server was not read from or written to within
> the timeout period. Idle connections will be closed. (Service: Amazon S3;
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial
> failure was a 500); every retry then failed to upload from the source input
> stream passed in via {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: mark/reset does not work for these input streams, so the SDK
> cannot rewind the request body for a retry. With the v1 SDK we built the
> multipart block upload request by passing in (file, offset, length); the way
> we do it now does not recover.
> Probably fixable by providing our own {{ContentStreamProvider}}
> implementations for:
> # file + offset + length
> # bytebuffer
> # byte array
> The SDK does have explicit support for the in-memory ones, but those copy
> the data blocks first; we don't want that, as it would double the memory
> requirements of active blocks.