[
https://issues.apache.org/jira/browse/HADOOP-17312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17216628#comment-17216628
]
Steve Loughran commented on HADOOP-17312:
-----------------------------------------
Proposed:
* abort() maps AWSExceptions to IOEs
* S3AUtils.signifiesConnectionBroken() recognises the (shaded) exception type
and converts it to an EOFException
* S3A catches abort exceptions of any kind as "don't care"
* close() does the same for all read IO, since layers above clearly don't.
We don't care what is going wrong with the HTTP channel when we are about to
discard it.
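The "don't care" behaviour above can be sketched as follows. This is an
illustrative sketch, not the actual S3AInputStream code: the class and
method names are hypothetical, and a local runtime exception stands in for
the shaded AWS SDK client exception.

```java
// Sketch of "abort failures don't matter in close()": when the stream is
// being discarded, any exception raised while aborting the HTTP channel is
// swallowed rather than propagated. Names here are illustrative only.
public class AbortQuietly {

    /** Stand-in for the (shaded) AWS SDK client exception. */
    static class SdkClientException extends RuntimeException {
        SdkClientException(String message) { super(message); }
    }

    /**
     * Run an abort action, swallowing any runtime exception: the HTTP
     * channel is about to be discarded, so its state no longer matters.
     * @return true if the abort completed without error
     */
    static boolean abortQuietly(Runnable abortAction) {
        try {
            abortAction.run();
            return true;
        } catch (RuntimeException e) {
            // don't care what went wrong with the channel we are discarding
            return false;
        }
    }

    public static void main(String[] args) {
        // an abort that fails mid-way must not propagate out of close()
        boolean ok = abortQuietly(() -> {
            throw new SdkClientException("Premature end of Content-Length");
        });
        System.out.println("abort swallowed: " + !ok);
    }
}
```

The point of returning a boolean rather than rethrowing is that callers in
close() can log the failure for diagnostics while still guaranteeing the
stream is torn down.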
> S3AInputStream to be resilient to failures in abort(); translate AWS Exceptions
> ------------------------------------------------------------------------------
>
> Key: HADOOP-17312
> URL: https://issues.apache.org/jira/browse/HADOOP-17312
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.0, 3.2.1
> Reporter: Steve Loughran
> Priority: Major
>
> Stack overflow issue complaining about ConnectionClosedException during
> S3AInputStream close(), seems triggered by an EOF exception in abort. That
> is: we are trying to close the stream and it is failing because the stream is
> closed. oops.
> https://stackoverflow.com/questions/64412010/pyspark-org-apache-http-connectionclosedexception-premature-end-of-content-leng
> Looking @ the stack, we aren't translating AWS exceptions in abort() to IOEs,
> which may be a factor.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)