[ https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869405#comment-17869405 ]
ASF GitHub Bot commented on HADOOP-19221:
-----------------------------------------
steveloughran commented on code in PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#discussion_r1695514603
##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java:
##########
@@ -927,28 +931,32 @@ private void uploadBlockAsync(final S3ADataBlocks.DataBlock block,
throw e;
} finally {
// close the stream and block
- cleanupWithLogger(LOG, uploadData, block);
+ LOG.debug("closing block");
+ cleanupWithLogger(LOG, uploadData);
+ cleanupWithLogger(LOG, block);
}
});
partETagsFutures.add(partETagFuture);
}
/**
* Block awaiting all outstanding uploads to complete.
- * @return list of results
+ * @return list of results.
* @throws IOException IO Problems
*/
private List<CompletedPart> waitForAllPartUploads() throws IOException {
LOG.debug("Waiting for {} uploads to complete", partETagsFutures.size());
try {
return Futures.allAsList(partETagsFutures).get();
} catch (InterruptedException ie) {
- LOG.warn("Interrupted partUpload", ie);
- Thread.currentThread().interrupt();
- return null;
+ // interruptions are raised if a task is aborted by Spark.
+ LOG.warn("Interrupted while waiting for uploads to {} to complete",
+ key, ie);
+ // abort the upload
+ abort();
+ // then regenerate a new InterruptedIOException
+ throw (IOException) new InterruptedIOException(ie.toString()).initCause(ie);
} catch (ExecutionException ee) {
//there is no way of recovering so abort
- //cancel all partUploads
Review Comment:
looking at this. may need some more review to be confident we are doing
abort here properly
> S3A: Unable to recover from failure of multipart block upload attempt "Status
> Code: 400; Error Code: RequestTimeout"
> --------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-19221
> URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. a network error), then
> all subsequent retry attempts fail with a 400 response and error code
> RequestTimeout.
> {code}
> Your socket connection to the server was not read from or written to within
> the timeout period. Idle connections will be closed. (Service: Amazon S3;
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial
> failure was a 500); all retries failed to upload properly from the source
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
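> A minimal sketch of the failure mode, assuming the v2 SDK sync client
> (the method and variable names here are hypothetical): the first attempt
> drains the wrapped stream, so every retry replays an empty body until the
> server times the socket out.
> {code}
> import java.io.File;
> import java.io.IOException;
> import java.io.InputStream;
> import java.nio.file.Files;
> import software.amazon.awssdk.core.sync.RequestBody;
> import software.amazon.awssdk.services.s3.S3Client;
> import software.amazon.awssdk.services.s3.model.UploadPartRequest;
> import software.amazon.awssdk.services.s3.model.UploadPartResponse;
>
> UploadPartResponse uploadOnce(S3Client s3, String bucket, String key,
>     String uploadId, int partNumber, File blockFile, long size)
>     throws IOException {
>   InputStream fileStream = Files.newInputStream(blockFile.toPath());
>   UploadPartRequest request = UploadPartRequest.builder()
>       .bucket(bucket).key(key).uploadId(uploadId).partNumber(partNumber)
>       .build();
>   // the first attempt reads fileStream to EOF; an SDK retry replays the
>   // same RequestBody, but the stream cannot be reset, so no data is sent
>   // and the server eventually fails the request with 400 RequestTimeout
>   return s3.uploadPart(request, RequestBody.fromInputStream(fileStream, size));
> }
> {code}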
> Hypothesis: the mark/reset mechanism doesn't work for input streams. On the v1
> SDK we would build a multipart block upload request by passing in (file,
> offset, length); the way we do this now doesn't recover.
> Probably fixable by providing our own {{ContentStreamProvider}}
> implementations (see the sketch after this list) for:
> # file + offset + length
> # bytebuffer
> # byte array
> The SDK does have explicit support for the memory ones, but they copy the
> data blocks first. We don't want that, as it would double the memory
> requirements of active blocks.
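> A minimal sketch of the file + offset + length variant (class name
> hypothetical; assumes commons-io's {{BoundedInputStream}} is available on
> the classpath). Every {{newStream()}} call opens a fresh stream positioned
> at the block, so a retried part upload re-reads the same bytes:
> {code}
> import java.io.File;
> import java.io.FileInputStream;
> import java.io.IOException;
> import java.io.InputStream;
> import java.io.UncheckedIOException;
> import org.apache.commons.io.input.BoundedInputStream;
> import software.amazon.awssdk.http.ContentStreamProvider;
>
> /** Hypothetical provider: re-opens the file at (offset, length) per attempt. */
> final class FileRegionContentStreamProvider implements ContentStreamProvider {
>   private final File file;
>   private final long offset;
>   private final long length;
>
>   FileRegionContentStreamProvider(File file, long offset, long length) {
>     this.file = file;
>     this.offset = offset;
>     this.length = length;
>   }
>
>   @Override
>   public InputStream newStream() {
>     try {
>       FileInputStream in = new FileInputStream(file);
>       // position at the start of this block
>       if (in.skip(offset) != offset) {
>         in.close();
>         throw new IOException("unable to skip to offset " + offset);
>       }
>       // cap the stream so only this block's bytes are uploaded
>       return new BoundedInputStream(in, length);
>     } catch (IOException e) {
>       throw new UncheckedIOException(e);
>     }
>   }
> }
> {code}
> The bytebuffer and byte array variants could do the same by returning a new
> wrapping stream per call (e.g. a {{ByteArrayInputStream}} over the block)
> without copying the data.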